WO2012014627A1 - Vehicle periphery monitoring device - Google Patents
Vehicle periphery monitoring device
- Publication number
- WO2012014627A1 (PCT/JP2011/065092; priority application JP2011065092W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- edge
- edge image
- outer shape
- vehicle
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/165—Anti-collision systems for passive traffic, e.g. including static obstacles, trees
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
Definitions
- the present invention relates to a vehicle periphery monitoring device that uses an in-vehicle camera to detect an object existing outside the vehicle.
- Patent Document 1 discloses a technique in which the three-dimensional position of an object existing in front of a vehicle is calculated from a stereo image of the area ahead of the vehicle acquired by two in-vehicle cameras, and, based on this three-dimensional position information, the white lines of the road on which the vehicle is traveling and the positions of side wall objects are detected.
- Patent Document 2 describes a technique for detecting raised lane marks such as Botts' dots and cat's eyes based on the shape of an object in a captured image acquired by an in-vehicle camera.
- When detecting a travel line such as a white line from the captured image, the captured image may contain a pylon arranged to indicate a road construction area or the like, and it may then be necessary to identify whether the contained object is a pylon.
- In Patent Document 1, the white line on the road surface is distinguished from a side wall object based on the height of the object. However, the technique of Patent Document 1 does not specify what kind of structure the side wall object is.
- A luminance edge indicating the outline of an object can be extracted only where there is a significant difference in luminance between the object and the surrounding background in the captured image. For this reason, depending on how the light strikes the object and on the colors of the object and its background, there are many cases where a luminance edge can be extracted for only part of the outline of the object.
- the present invention has been made in view of such a background, and an object of the present invention is to provide a vehicle periphery monitoring device that can increase the accuracy of identifying the type of an object in a captured image acquired by an in-vehicle camera.
- The vehicle periphery monitoring device of the present invention detects an object existing outside the vehicle based on a captured image of the vehicle periphery acquired by an in-vehicle camera. It comprises: first edge image generating means for extracting, based on the luminance component of the captured image, a first edge as a portion where the value of the luminance component changes discontinuously, and generating a first edge image composed of the extracted first edges; second edge image generating means for extracting, based on the hue component or saturation component of the captured image, a second edge as a portion where the value of the hue component or saturation component changes discontinuously, and generating a second edge image composed of the extracted second edges; composite edge image generating means for generating a composite edge image by combining the generated first edge image and second edge image; and object type specifying means for determining whether the outer shape of the object indicated by an object outline image, which is the image constituting the outline of the object included in the captured image within the generated composite edge image, matches the outer shape of a predetermined type of structure, and for specifying, based at least on that determination result, whether the object is the predetermined type of structure.
- According to this invention, the first edge extracted from the captured image by the first edge image generating means is a portion where the value of the luminance component of the captured image changes discontinuously (i.e., relatively abruptly). The first edge is therefore extracted depending on the distribution of the luminance component of the captured image.
- Similarly, the second edge extracted from the captured image by the second edge image generating means is a portion where the value of the hue component or saturation component of the captured image changes discontinuously (i.e., relatively abruptly). The second edge is therefore extracted depending on the distribution of the hue component or saturation component of the captured image.
- The distribution of the hue component or saturation component of a captured image generally differs from the distribution of its luminance component. For this reason, even a portion of the object's outline that is not extracted as a first edge has a high probability of being extracted as a second edge. Conversely, a portion that is not extracted as a second edge may be extracted as a first edge.
- Consequently, in the composite edge image obtained by combining the first edge image and the second edge image in the composite edge image generating means, all or most of the outline of the object included in the captured image is present as either a first edge or a second edge.
- The object type specifying means determines whether the outer shape of the object, indicated by the object outline image constituting the outline of the object in the generated composite edge image, matches the outer shape of the predetermined type of structure. It then specifies, based at least on that determination result, whether the object is the predetermined type of structure.
- Because the composite edge image captures the outline well, the reliability of the outer shape indicated by the object outline image is high, and so is the reliability of the determination of whether that shape matches the outer shape of the predetermined type of structure. According to the present invention, the accuracy of identifying the type of an object in a captured image acquired by an in-vehicle camera can therefore be improved.
- The predetermined type of structure may be one whose outer surface is divided into a plurality of colored regions by a plurality of boundary lines extending parallel to each other in the horizontal or oblique direction on that surface (for example, a pylon).
- In that case, the object type specifying means determines both whether the outer shape of the object indicated by the object outline image matches the outer shape of the predetermined type of structure and whether edges extending parallel to each other exist inside the object outline image.
- FIG. 1 is a block diagram showing the main configuration of the vehicle periphery monitoring device in an embodiment of the present invention.
- FIG. 3 shows an example of the first edge image obtained from the captured image of FIG. 2. FIG. 4 shows an example of the second edge image obtained from the captured image of FIG. 2.
- a vehicle periphery monitoring device 1 is mounted on a vehicle (not shown) and includes an in-vehicle camera 2 and an image processing unit 3.
- the in-vehicle camera 2 is mounted on the front part of the vehicle.
- the in-vehicle camera 2 captures an image in front of the vehicle.
- This in-vehicle camera 2 is constituted by a CCD camera or the like.
- the in-vehicle camera 2 is a camera that captures an image in front of the vehicle as a color image. Therefore, the in-vehicle camera 2 generates and outputs a color video image signal.
- the in-vehicle camera 2 may be a camera that captures an image of the rear or side of the vehicle.
- the image processing unit 3 is an electronic circuit unit including a CPU, a RAM, a ROM, an interface circuit, etc. (not shown).
- the image processing unit 3 receives a color video image signal generated by the in-vehicle camera 2.
- This image processing unit 3 includes, as functions realized by executing an installed program, an image acquisition unit 4, a first edge image generation unit 5, a second edge image generation unit 6, an image composition unit 7, and an object type specifying unit 8.
- the color video image signal generated by the in-vehicle camera 2 is taken into the image acquisition unit 4 of the image processing unit 3 at a predetermined calculation processing cycle.
- The image acquisition unit 4 converts the image signal (the image signal for each pixel), which is an analog signal input from the in-vehicle camera 2, into digital data and stores it in an image memory (not shown).
- More specifically, the image acquisition unit 4 stores in the image memory a luminance component image obtained by converting the luminance component of the input color video image signal into digital data, and a hue component image obtained by converting the hue component of the image signal into digital data. Accordingly, a luminance component image (an image in which the value of each pixel is the value of the luminance component) and a hue component image (an image in which the value of each pixel is the value of the hue component) of the captured image acquired by the in-vehicle camera 2 are stored in the image memory.
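The decomposition into a luminance component image and a hue component image can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name `split_components`, the Rec. 601 luminance weights, and the HSV-style hue formula are assumptions chosen for concreteness.

```python
import numpy as np

def split_components(rgb):
    """Split an RGB frame (H x W x 3, floats in [0, 1]) into a luminance
    component image and a hue component image (hue angle in degrees)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Luminance component image (Rec. 601 weighting, one common convention).
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    # Hue component image, computed with the usual HSV conversion.
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    delta = cmax - cmin
    hue = np.zeros_like(cmax)
    mask = delta > 0                      # achromatic pixels keep hue 0
    rm = mask & (cmax == r)
    gm = mask & (cmax == g) & ~rm
    bm = mask & ~rm & ~gm
    hue[rm] = 60.0 * (((g - b)[rm] / delta[rm]) % 6)
    hue[gm] = 60.0 * ((b - r)[gm] / delta[gm] + 2)
    hue[bm] = 60.0 * ((r - g)[bm] / delta[bm] + 4)
    return luminance, hue
```

A camera delivering YUV or HSV directly would make this step trivial; the point is only that two component images with independent distributions are retained.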
- the luminance component image and the hue component image are given to the first edge image generation unit 5 and the second edge image generation unit 6, respectively. Then, the processes of the first edge image generation unit 5 and the second edge image generation unit 6 are executed next.
- the first edge image generation unit 5 and the second edge image generation unit 6 have functions as a first edge image generation unit and a second edge image generation unit in the present invention, respectively.
- the first edge image generation unit 5 to which the luminance component image is given performs processing of a known edge extraction filter such as a differential filter on the luminance component image. Accordingly, the first edge image generation unit 5 extracts the first edge as a discontinuous change portion (a portion where the luminance component value changes relatively abruptly) in the captured image. Further, the first edge image generation unit 5 generates a first edge image that is an image (binarized image) constituted by the first edges. Then, the first edge image generation unit 5 stores and holds the first edge image in an image memory (not shown).
- the second edge image generation unit 6 to which the hue component image is given performs the same processing as the edge extraction filter on the hue component image. Thereby, the second edge image generation unit 6 extracts the second edge as a discontinuous change portion of the hue component value in the captured image (a portion where the value of the hue component changes relatively abruptly). Further, the second edge image generation unit 6 generates a second edge image that is an image (binarized image) constituted by the second edges. Then, the second edge image generation unit 6 stores and holds the second edge image in an image memory (not shown).
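The edge extraction applied to each component image can be sketched as below, under the assumption that simple first differences stand in for the unspecified differential filter; the function name and threshold are illustrative.

```python
import numpy as np

def extract_edges(component, threshold):
    """Apply a simple differential (gradient) filter to a component image
    and binarize it, yielding a binary edge image."""
    gx = np.zeros_like(component, dtype=float)
    gy = np.zeros_like(component, dtype=float)
    gx[:, 1:] = np.diff(component, axis=1)   # horizontal first difference
    gy[1:, :] = np.diff(component, axis=0)   # vertical first difference
    magnitude = np.hypot(gx, gy)
    # Pixels where the component value changes abruptly become edge pixels.
    return magnitude > threshold
```

Applying this to the luminance component image gives a first edge image; applying the same filter to the hue component image gives a second edge image, mirroring units 5 and 6.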
- Next, an example of a captured image acquired by the in-vehicle camera 2, together with examples of the first edge image and the second edge image obtained from that captured image, is shown in FIGS. 2 to 4.
- FIG. 2 shows an example of a captured image acquired by the in-vehicle camera 2.
- images of the pylon 52 installed on the road 51 and its surroundings are taken.
- the pylon 52 in the illustrated example is a structure including a base 53 and a conical main body 54 (cone part) projecting on the base 53.
- Reference marks 51a and 51b indicate the white lines of the road 51.
- The outer surface of the main body 54 of the pylon 52 is colored with two colors, red and white. More specifically, the outer surface of the main body 54 is divided into a plurality of (three in the illustrated example) colored regions 56a, 56b, and 56c by a plurality of (two in the illustrated example) boundary lines 55a and 55b that extend in the horizontal direction in parallel with each other.
- The boundary lines 55a and 55b are annular lines coaxial with the main body 54.
- The uppermost colored region 56a and the lowermost colored region 56c, shown with stippling, are colored red, and the middle colored region 56b located between them is colored white. Therefore, the colored regions adjacent to each other across the boundary lines 55a and 55b, namely (56a, 56b) and (56b, 56c), are colored with different colors.
- the color of the base 53 of the pylon 52 is black or a color close thereto.
- FIG. 3 shows an example of a first edge image generated by the first edge image generation unit 5 from the captured image shown in FIG. 2.
- The white portions in FIG. 3 indicate the extracted first edges.
- a part of the outline of the main body 54 of the pylon 52 in the captured image of FIG. 2 is extracted as the first edge indicated by reference numerals 57a, 57b, 57c, and 57d.
- boundary lines 55a and 55b of the main body 54 of the pylon 52 are extracted as first edges indicated by reference numerals 57e and 57f.
- the brightness difference between the white colored area 56b (particularly the side where the light strikes) of the main body 54 of the pylon 52 is relatively prominent with respect to the road 51 and the red colored areas 56a and 56c. Therefore, many portions of the outline of the main body 54 and the boundary lines 55a and 55b in the white colored region 56b are extracted as the first edges 57a to 57f.
- On the other hand, the red colored regions 56a and 56c of the main body 54 and the base 53 of the pylon 52 show little luminance difference from the road 51. Therefore, the outline of the main body 54 in the red colored regions 56a and 56c and the outline of the base 53 are difficult to extract as first edges.
- The first edges indicated by reference numerals 57g and 57h correspond to the white lines 51a and 51b of the road 51.
- The first edge indicated by reference numeral 57i is extracted from an image such as grass on the side of the road 51.
- FIG. 4 shows an example of a second edge image generated by the second edge image generation unit 6 from the captured image shown in FIG. 2.
- The white portions in FIG. 4 indicate the extracted second edges.
- a part of the outline of the main body 54 of the pylon 52 in the captured image of FIG. 2 is extracted as the second edge indicated by reference numerals 58a, 58b, 58c, 58d, 58e, and 58f.
- boundary lines 55a and 55b of the main body 54 of the pylon 52 are extracted as second edges indicated by reference numerals 58g and 58h.
- The hue difference between the red colored regions 56a and 56c of the main body 54 of the pylon 52 and both the road 51 and the white colored region 56b is relatively prominent. Therefore, many portions of the outline of the main body 54 in the red colored regions 56a and 56c, as well as the boundary lines 55a and 55b, are extracted as second edges.
- The second edge indicated by reference numeral 58i corresponds to the outline of the base 53 of the pylon 52. Further, the second edge indicated by reference numeral 58j is extracted from an image of grass on the side of the road 51 or the like.
- The first edge image and the second edge image generated and stored as described above are given to the image composition unit 7. Then, the process of the image composition unit 7 is executed next.
- the image composition unit 7 has a function as a composite edge image generation means in the present invention.
- The image composition unit 7 generates a composite edge image (a binarized image) by combining the first edge image and the second edge image (more specifically, by combining the values of mutually corresponding pixels in both edge images). The image composition unit 7 then stores the composite edge image in an image memory (not shown).
- The image composition unit 7 sets the value of each pixel of the composite edge image (a value indicating whether an edge exists) according to the values of the corresponding pixels of the first edge image and the second edge image. That is, for each pixel, if the corresponding pixel of the first edge image indicates the presence of a first edge, or the corresponding pixel of the second edge image indicates the presence of a second edge, the pixel of the composite edge image is set to a value indicating that an edge exists at that position.
- Conversely, if the corresponding pixel of the first edge image indicates that no first edge exists and the corresponding pixel of the second edge image indicates that no second edge exists, the pixel of the composite edge image is set to a value indicating that no edge exists at that position.
- a combined edge image is generated by combining the first edge image and the second edge image. Therefore, in the composite edge image, the portion that becomes an edge is a portion in which at least one of the first edge and the second edge is extracted. Further, the portion that does not become an edge is a portion where both the first edge and the second edge are not extracted.
- an edge in the composite edge image is referred to as a composite edge.
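The pixel-wise combination rule described for the image composition unit 7 amounts to a logical OR of the two binary edge images; a minimal sketch (function name assumed):

```python
import numpy as np

def combine_edge_images(first_edges, second_edges):
    """Composite edge image: a pixel is an edge wherever either the
    first edge or the second edge is present; it is a non-edge only
    where both inputs are non-edges."""
    return np.logical_or(first_edges, second_edges)
```

Applied to binary edge images like those of FIGS. 3 and 4, this would yield a composite edge image in the manner of FIG. 5, with each input complementing outline portions missed by the other.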
- FIG. 5 shows an example of a composite edge image generated as described above.
- This illustrated example is a combined edge image obtained by combining the first edge image and the second edge image shown in FIGS. 3 and 4 respectively.
- a part of the outline of the main body 54 of the pylon 52 in the captured image of FIG. 2 is represented by a composite edge indicated by reference numerals 59a, 59b, 59c, 59d, 59e, and 59f.
- the boundary lines 55a and 55b of the main body 54 of the pylon 52 are represented by synthetic edges indicated by reference numerals 59g and 59h.
- The composite edge indicated by reference numeral 59i corresponds to the outline of the base 53 of the pylon 52.
- The composite edges indicated by reference numerals 59j and 59k correspond to the white lines 51a and 51b of the road 51.
- The composite edge indicated by reference numeral 59m corresponds to an image such as grass on the side of the road 51.
- the composite edge image generated / stored / held as described above is given to the object type identification unit 8. Then, the processing of the object type identification unit 8 is executed next.
- the object type specifying unit 8 has a function as object type specifying means in the present invention.
- the object type specifying unit 8 specifies whether or not the object included in the captured image acquired by the in-vehicle camera 2 is a predetermined type of structure.
- the pylon 52 having the structure shown in FIG. 2 is one of the predetermined types of structures.
- That is, the object type specifying unit 8 has a function of identifying whether or not an object included in the captured image acquired by the in-vehicle camera 2 is the same type of object as the pylon 52 having the structure shown in FIG. 2.
- The processing of the object type specifying unit 8 is performed as follows. First, the object type specifying unit 8 extracts, from the given composite edge image, the composite edges that are components of the same object (hereinafter referred to as object-constituting composite edges).
- For example, if the distance to an object in the imaging region of the in-vehicle camera 2 can be detected based on a stereo image, radar, or the like, composite edges lying at approximately the same distance are extracted from the composite edge image. This makes it possible to extract the composite edges (object-constituting composite edges) that are components of the same object.
- In the example of FIG. 5, the composite edges 59a to 59i are extracted as object-constituting composite edges.
- Next, the object type specifying unit 8 determines whether the outer shape (two-dimensional outer shape) of the object, indicated by those object-constituting composite edges that correspond to the outline of the object, matches the outer shape of the pylon 52 (hereinafter referred to as shape determination).
- Specifically, the object type specifying unit 8 connects the object-constituting composite edges corresponding to the outline of the object (these correspond to the object outline image in the present invention), thereby creating an image indicating the outer shape of the object. The object type specifying unit 8 then compares the shape of this image with a shape pattern set in advance to represent the outer shape of the pylon 52, and determines whether the shape of the image matches the outer shape of the pylon 52.
- For example, if the image showing the outer shape of the object approximates an isosceles triangle having an apex angle within a certain angle range, it is determined that the shape of the image matches the outer shape of the pylon 52.
- In the example of FIG. 5, the composite edges 59a to 59f among the object-constituting composite edges 59a to 59i, together with the outer periphery of the composite edge 59i, are the object outline composite edges. The shape of the image formed by connecting these object outline composite edges is compared with the shape pattern set in advance to represent the outer shape of the pylon 52. In this case, the shape of the image is determined to match the outer shape of the pylon 52, so the result of the shape determination is affirmative.
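One way the shape determination could be realized is sketched below. The isosceles-triangle test, the apex-angle range, and the symmetry tolerance are all illustrative assumptions; the patent states only that the outline is compared with a preset shape pattern.

```python
import math

def matches_pylon_shape(outline_pts, angle_range=(20.0, 60.0), sym_tol=0.2):
    """Crude shape determination: does the outline approximate an isosceles
    triangle whose apex angle lies within angle_range (degrees)?
    outline_pts are (x, y) pixels with y increasing downward; all
    thresholds are illustrative, not taken from the patent."""
    apex = min(outline_pts, key=lambda p: p[1])            # topmost point
    bottom = max(outline_pts, key=lambda p: p[1])[1]
    base = [p for p in outline_pts if p[1] >= bottom - 1]  # lowest row
    left = min(base, key=lambda p: p[0])
    right = max(base, key=lambda p: p[0])
    # The two slant sides should be about equal (isosceles check).
    s1 = math.dist(apex, left)
    s2 = math.dist(apex, right)
    if abs(s1 - s2) > sym_tol * max(s1, s2):
        return False
    # Apex angle from the law of cosines.
    base_len = math.dist(left, right)
    cos_a = (s1 * s1 + s2 * s2 - base_len * base_len) / (2 * s1 * s2)
    apex_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return angle_range[0] <= apex_deg <= angle_range[1]
```

A production system would more likely use template or contour matching over the preset shape pattern; this sketch only makes the "apex angle in a certain angle range" criterion concrete.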
- Next, the object type specifying unit 8 determines whether, among the object-constituting composite edges, there are a plurality of composite edges extending parallel to each other in the horizontal direction within the region surrounded by the object outline composite edges (hereinafter referred to as in-object horizontal composite edges). This determination is hereinafter referred to as pattern determination.
- the in-object horizontal composite edge does not need to extend strictly in the horizontal direction.
- the in-object horizontal composite edge may extend in an inclined direction within a predetermined angle range near zero with respect to the horizontal direction.
- In the example of FIG. 5, the composite edges 59g and 59h among the object-constituting composite edges 59a to 59i are extracted as in-object horizontal composite edges.
- the pattern determination may be omitted.
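The pattern determination could be sketched as follows, assuming the edges inside the object outline are available as endpoint-pair segments; the tilt tolerance and minimum count are illustrative parameters, not values from the patent.

```python
import math

def pattern_determination(segments, max_tilt_deg=10.0, min_count=2):
    """Count edge segments that run roughly horizontally (within
    max_tilt_deg of horizontal, matching the 'near zero' angle range
    in the text). segments are ((x1, y1), (x2, y2)) pairs."""
    horizontal = 0
    for (x1, y1), (x2, y2) in segments:
        tilt = abs(math.degrees(math.atan2(y2 - y1, x2 - x1)))
        tilt = min(tilt, 180.0 - tilt)   # direction-independent tilt
        if tilt <= max_tilt_deg:
            horizontal += 1
    return horizontal >= min_count
```

For the pylon 52, the boundary lines 55a and 55b would contribute two such segments, so the determination would be affirmative; an object lacking parallel near-horizontal internal edges would fail it.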
- The object type specifying unit 8 then specifies, based on these determination results, whether or not the type of the object in the captured image is the same as that of the pylon 52.
- the object type specifying unit 8 specifies that the type of the object in the captured image is the same as that of the pylon 52 when the determination results of both the shape determination and the pattern determination are affirmative. Further, the object type identification unit 8 identifies that the type of the object in the captured image is different from the pylon 52 when the determination result of either the shape determination or the pattern determination is negative.
- the determination results of both the shape determination and the pattern determination are positive as described above.
- the type of the object in the captured image acquired by the in-vehicle camera 2 is specified to be the same as that of the pylon 52.
- As described above, the first edge image generated based on the luminance component of the captured image acquired by the in-vehicle camera 2 and the second edge image generated based on the hue component of the captured image are combined, and using the resulting composite edge image, it is specified whether or not the type of the object in the captured image is the same as that of the pylon 52.
- The luminance component and the hue component of a captured image generally have different distribution patterns. Therefore, an edge that is not included in the first edge image may be included in the second edge image as a second edge, and an edge that is not included in the second edge image may be included in the first edge image as a first edge. By combining the first edge image and the second edge image, edges missed in the processes of the first edge image generation unit 5 and the second edge image generation unit 6 complement each other, and a composite edge image can be generated that includes, as composite edges, many portions of the outline of the object and of the boundary lines of its colored regions.
- the reliability of these determination results can be increased by performing the above-described shape determination and pattern determination using the composite edge image. As a result, the type of the object in the captured image can be specified with high reliability.
- When the object in the captured image is specified as being the same type as the pylon 52, the image of the pylon 52 is prevented from being confused with other objects or with white lines on the road. As a result, objects other than the pylon 52, and white lines on the road, can be detected from the captured image separately from the pylon 52.
- In the above embodiment, the second edge image generation unit 6 generates the second edge image from the hue component image of the captured image acquired by the in-vehicle camera 2.
- However, the second edge image may instead be generated from a saturation component image, i.e., from the saturation component of the captured image.
- The saturation component of a captured image also generally has a distribution pattern different from that of its luminance component. For this reason, even if the second edge image is generated from the saturation component image, the same effect as in the above embodiment can be obtained.
- In the above embodiment, it is specified whether the object in the captured image is the same type of structure as the pylon 52 having the structure shown in FIG. 2.
- Alternatively, it may be specified whether the type of the object is the same as another type of pylon having a structure different from that of the pylon 52.
- the type of pylon to be specified may be the pylon 71 having the structure shown in FIG. 6 or the pylon 81 having the structure shown in FIG.
- a pylon 71 illustrated in FIG. 6 is a structure including a base 72 and a columnar main body 73 projecting from the base 72.
- The outer surface of the main body 73 of the pylon 71 is colored with two colors, red and white. More specifically, the outer surface of the main body 73 is divided into a plurality of (two in the illustrated example) colored regions 75a and 75b by a plurality of (two in the illustrated example) boundary lines 74a and 74b extending in an oblique direction (spirally) in parallel with each other on the outer surface.
- In FIG. 6, the colored region 75a, shown with stippling, is colored red, and the colored region 75b adjacent to it is colored white.
- When specifying whether or not the object in the captured image acquired by the in-vehicle camera 2 is the same type as the pylon 71 shown in FIG. 6, the image processing unit 3 executes the processes of the image acquisition unit 4, the first edge image generation unit 5, the second edge image generation unit 6, and the image composition unit 7 in the same manner as in the above embodiment.
- In the shape determination by the object type specifying unit 8, the image formed by connecting the object outline composite edges among the object-constituting composite edges in the composite edge image generated by the image composition unit 7 (the image showing the outer shape of the object) is compared with a shape pattern set in advance to represent the outer shape of the pylon 71.
- The object type specifying unit 8 thereby determines whether or not the shape of the image matches the outer shape of the pylon 71.
- the pylon 81 illustrated in FIG. 7 is a structure including a base 82 and a barrel-shaped main body 83 protruding from the base 82.
- the outer surface of the main body 83 of the pylon 81 is colored with two colors of red and white. More specifically, the outer surface of the main body portion 83 is defined by a plurality of (four in the illustrated example) boundary lines (annular lines) 84a, 84b, 84c, and 84d that extend in the horizontal direction in parallel with each other on the outer surface. It is divided into a plurality (five in the illustrated example) of colored areas 85a, 85b, 85c, 85d, and 85e.
- the colored regions 85a, 85c, and 85e, which are the colored regions marked with stippling in FIG. 7, are colored red. Further, the colored region 85b sandwiched between the red colored regions 85a and 85c and the colored region 85d sandwiched between the red colored regions 85c and 85e are colored white.
- when specifying whether or not an object in the captured image acquired by the in-vehicle camera 2 is of the same type as the pylon 81 shown in FIG. 7, the image processing unit 3 executes the processing of the image acquisition unit 4, the first edge image generation unit 5, the second edge image generation unit 6, and the image synthesis unit 7 in the same manner as in the above embodiment.
- an image (an image showing the outer shape of the object) is then obtained by connecting the object-outline composite edges among the object-constituting composite edges in the composite edge image generated by the image synthesis unit 7.
- this image is compared with a shape pattern set in advance to represent the outer shape of the pylon 81.
- the object type identification unit 8 determines whether or not the shape of the image matches the outer shape of the pylon 81.
- in this way, shape determination and pattern determination are performed.
- it can thereby be identified whether the object in the captured image is of the same type as the pylon 52 (or the pylon 71 or the pylon 81).
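The "pattern determination" step, which checks for several parallel first or second edges inside the matched outline (corresponding to the colored-band boundaries of the pylon), could be sketched roughly as follows. The segment representation, the angle tolerance, and the minimum count are illustrative assumptions, not values from the patent.

```python
import math

# Sketch of the pattern-determination step: after the outline matches the
# preset shape, check that several edge segments found inside the outline
# share the same orientation (within a tolerance), as the horizontal or
# spiral band boundaries of a pylon would. All thresholds are assumptions.

def parallel_band_edges(segments, angle_tol_deg=10.0, min_count=2):
    """segments: list of ((x1, y1), (x2, y2)) edge segments inside the
    outline. Returns True if at least `min_count` segments share (within
    `angle_tol_deg`) the same orientation, treating directions modulo 180."""
    angles = [math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
              for (x1, y1), (x2, y2) in segments]
    for a in angles:
        # Count segments (including this one) with a nearly equal angle.
        close = sum(1 for b in angles
                    if min(abs(a - b), 180.0 - abs(a - b)) < angle_tol_deg)
        if close >= min_count:
            return True
    return False
```

Three horizontal segments would satisfy the check, while two perpendicular segments would not, which matches the intuition that band boundaries run parallel to one another.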
- the present invention is useful in that it can accurately identify the type of an object existing outside a vehicle from an image captured by a vehicle-mounted camera.
- SYMBOLS: 1... Vehicle periphery monitoring apparatus, 2... In-vehicle camera, 5... First edge image generation unit (first edge image generating means), 6... Second edge image generation unit (second edge image generating means), 7... Image synthesis unit (composite edge image generating means), 8... Object type identification unit (object type specifying means), 52, 71, 81... Pylon (a predetermined type of structure).
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Closed-Circuit Television Systems (AREA)
- Traffic Control Systems (AREA)
Abstract
Description
Claims (2)
- In a vehicle periphery monitoring device that detects an object existing outside a vehicle on the basis of a captured image of the vehicle's surroundings acquired by an in-vehicle camera, the device comprising:
first edge image generating means for extracting from the captured image, on the basis of its luminance component, a first edge as a portion where the value of the luminance component changes discontinuously, and generating a first edge image, which is an image composed of the extracted first edges;
second edge image generating means for extracting, on the basis of the hue component or the saturation component of the captured image, a second edge as a portion where the value of the hue or saturation component changes discontinuously, and generating a second edge image, which is an image composed of the extracted second edges;
composite edge image generating means for generating a composite edge image obtained by combining the generated first edge image and second edge image; and
object type specifying means for determining whether or not the outer shape of an object indicated by an object outline image, which is the part of the generated composite edge image constituting the outline of an object included in the captured image, matches the outer shape of a predetermined type of structure, and for specifying, at least on the basis of the result of that determination, whether or not the object is the predetermined type of structure. - In the vehicle periphery monitoring device according to claim 1,
the predetermined type of structure is a structure whose outer surface is divided into a plurality of colored regions by a plurality of boundary lines extending horizontally or diagonally in parallel with one another on the outer surface, with the colored regions adjacent to each other across each boundary line colored in mutually different colors; and
the object type specifying means determines that the object is the predetermined type of structure when the outer shape of the object indicated by the object outline image matches the outer shape of the predetermined type of structure and, in addition, a plurality of the first edges or the second edges extending horizontally or diagonally in parallel with one another are present inside the object outline image.
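As a rough illustration of the pipeline in claim 1, the sketch below extracts a first edge from discontinuities in a luminance signal and a second edge from discontinuities in a saturation signal, then takes their union as the composite edge image. It operates on a single 1-D scanline with hand-picked thresholds; a real implementation would use 2-D gradient operators on the full image, so the signal values and thresholds here are illustrative assumptions only.

```python
# 1-D sketch of claim 1: a "first edge" where the luminance (Y) value
# changes discontinuously, a "second edge" where the saturation (S) value
# does, and a composite edge image formed as their union. Values and
# thresholds are made up for illustration.

def edges(values, threshold):
    """Mark positions where adjacent values differ by more than threshold."""
    return [abs(b - a) > threshold for a, b in zip(values, values[1:])]

def composite(first, second):
    """Union of the two edge images (the composite edge image of claim 1)."""
    return [f or s for f, s in zip(first, second)]

luminance  = [40, 42, 41, 200, 198, 199]  # dark road, then a bright object
saturation = [10, 11, 12, 90, 91, 12]     # red band starts, then a white band

first_edge  = edges(luminance, 50)
second_edge = edges(saturation, 50)
combined    = composite(first_edge, second_edge)
```

Note that the last transition (a red band meeting a white band of similar brightness) produces no luminance edge but does produce a saturation edge, which is why the composite image captures boundaries that a luminance-only edge image would miss.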
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/702,610 US8867788B2 (en) | 2010-07-29 | 2011-06-30 | Vehicle periphery monitoring device |
JP2012526390A JP5718920B2 (ja) | 2010-07-29 | 2011-06-30 | 車両周辺監視装置 |
EP11812220.9A EP2557540B1 (en) | 2010-07-29 | 2011-06-30 | Vehicle periphery monitoring device |
CN201180030094.1A CN102985947B (zh) | 2010-07-29 | 2011-06-30 | 车辆周围监测装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010170045 | 2010-07-29 | ||
JP2010-170045 | 2010-07-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012014627A1 true WO2012014627A1 (ja) | 2012-02-02 |
Family
ID=45529846
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/065092 WO2012014627A1 (ja) | 2010-07-29 | 2011-06-30 | 車両周辺監視装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US8867788B2 (ja) |
EP (1) | EP2557540B1 (ja) |
JP (1) | JP5718920B2 (ja) |
CN (1) | CN102985947B (ja) |
WO (1) | WO2012014627A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019525346A (ja) * | 2016-08-31 | 2019-09-05 | ヴィオニア スウェーデン エービー | 自動車用視覚システム及び方法 |
JP2020027468A (ja) * | 2018-08-13 | 2020-02-20 | Kyb株式会社 | 画像処理装置、画像処理方法及び画像処理システム |
JP2020140633A (ja) * | 2019-03-01 | 2020-09-03 | クラリオン株式会社 | 画像処理装置 |
JP2023170536A (ja) * | 2022-05-19 | 2023-12-01 | キヤノン株式会社 | 画像処理装置、画像処理方法、移動体、及びコンピュータプログラム |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101251793B1 (ko) * | 2010-11-26 | 2013-04-08 | 현대자동차주식회사 | 차량내 운전자 실제 얼굴 인증 방법 |
CA2810540C (en) * | 2012-03-28 | 2020-06-16 | Schlumberger Canada Limited | Seismic attribute color model transform |
CN107924625B (zh) * | 2015-08-19 | 2021-11-12 | 三菱电机株式会社 | 车道识别装置以及车道识别方法 |
WO2017032335A1 (en) * | 2015-08-26 | 2017-03-02 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for traffic monitoring |
KR102275310B1 (ko) * | 2017-04-20 | 2021-07-12 | 현대자동차주식회사 | 자동차 주변의 장애물 검출 방법 |
DE102017215718B4 (de) | 2017-09-07 | 2019-06-13 | Audi Ag | Verfahren zum Auswerten eines optischen Erscheinungsbildes in einer Fahrzeugumgebung und Fahrzeug |
JP7095738B2 (ja) | 2018-06-13 | 2022-07-05 | 富士通株式会社 | 取得方法、生成方法、取得プログラム、生成プログラムおよび情報処理装置 |
CN109993046B (zh) * | 2018-06-29 | 2021-04-09 | 长城汽车股份有限公司 | 基于视觉摄像机的自阴影物体边缘识别方法、装置及车辆 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001145092A (ja) * | 1999-11-18 | 2001-05-25 | Meidensha Corp | 映像監視システムおよび映像監視方法 |
JP3324821B2 (ja) | 1993-03-12 | 2002-09-17 | 富士重工業株式会社 | 車輌用車外監視装置 |
JP2009151602A (ja) * | 2007-12-21 | 2009-07-09 | Shima Seiki Mfg Ltd | 輪郭抽出装置と輪郭抽出方法及び輪郭抽出プログラム |
JP4358147B2 (ja) | 2005-04-28 | 2009-11-04 | 本田技研工業株式会社 | 車両及びレーンマーク認識装置 |
JP2010109451A (ja) * | 2008-10-28 | 2010-05-13 | Panasonic Corp | 車両周囲監視装置及び車両周囲監視方法 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0683962A (ja) | 1992-05-21 | 1994-03-25 | Sanyo Electric Co Ltd | 画像認識方法 |
US5493392A (en) * | 1992-12-15 | 1996-02-20 | Mcdonnell Douglas Corporation | Digital image system for determining relative position and motion of in-flight vehicles |
US5638116A (en) * | 1993-09-08 | 1997-06-10 | Sumitomo Electric Industries, Ltd. | Object recognition apparatus and method |
JP2986439B2 (ja) * | 1998-01-12 | 1999-12-06 | 松下電器産業株式会社 | 車両用画像処理装置 |
JP3990375B2 (ja) | 2004-03-30 | 2007-10-10 | 東芝ソリューション株式会社 | 画像処理装置および画像処理方法 |
JP4365350B2 (ja) * | 2005-06-27 | 2009-11-18 | 本田技研工業株式会社 | 車両及び車線認識装置 |
JP4365352B2 (ja) * | 2005-07-06 | 2009-11-18 | 本田技研工業株式会社 | 車両及びレーンマーク認識装置 |
JP4632987B2 (ja) * | 2006-03-28 | 2011-02-16 | 株式会社パスコ | 道路画像解析装置及び道路画像解析方法 |
JP5083658B2 (ja) * | 2008-03-26 | 2012-11-28 | 本田技研工業株式会社 | 車両用車線認識装置、車両、及び車両用車線認識プログラム |
-
2011
- 2011-06-30 WO PCT/JP2011/065092 patent/WO2012014627A1/ja active Application Filing
- 2011-06-30 US US13/702,610 patent/US8867788B2/en active Active
- 2011-06-30 CN CN201180030094.1A patent/CN102985947B/zh not_active Expired - Fee Related
- 2011-06-30 EP EP11812220.9A patent/EP2557540B1/en not_active Not-in-force
- 2011-06-30 JP JP2012526390A patent/JP5718920B2/ja active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3324821B2 (ja) | 1993-03-12 | 2002-09-17 | 富士重工業株式会社 | 車輌用車外監視装置 |
JP2001145092A (ja) * | 1999-11-18 | 2001-05-25 | Meidensha Corp | 映像監視システムおよび映像監視方法 |
JP4358147B2 (ja) | 2005-04-28 | 2009-11-04 | 本田技研工業株式会社 | 車両及びレーンマーク認識装置 |
JP2009151602A (ja) * | 2007-12-21 | 2009-07-09 | Shima Seiki Mfg Ltd | 輪郭抽出装置と輪郭抽出方法及び輪郭抽出プログラム |
JP2010109451A (ja) * | 2008-10-28 | 2010-05-13 | Panasonic Corp | 車両周囲監視装置及び車両周囲監視方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2557540A4 |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019525346A (ja) * | 2016-08-31 | 2019-09-05 | ヴィオニア スウェーデン エービー | 自動車用視覚システム及び方法 |
JP2020027468A (ja) * | 2018-08-13 | 2020-02-20 | Kyb株式会社 | 画像処理装置、画像処理方法及び画像処理システム |
JP7129270B2 (ja) | 2018-08-13 | 2022-09-01 | Kyb株式会社 | 画像処理装置、画像処理方法及び画像処理システム |
JP2020140633A (ja) * | 2019-03-01 | 2020-09-03 | クラリオン株式会社 | 画像処理装置 |
JP7244301B2 (ja) | 2019-03-01 | 2023-03-22 | フォルシアクラリオン・エレクトロニクス株式会社 | 画像処理装置 |
JP2023170536A (ja) * | 2022-05-19 | 2023-12-01 | キヤノン株式会社 | 画像処理装置、画像処理方法、移動体、及びコンピュータプログラム |
JP7483790B2 (ja) | 2022-05-19 | 2024-05-15 | キヤノン株式会社 | 画像処理装置、画像処理方法、移動体、及びコンピュータプログラム |
Also Published As
Publication number | Publication date |
---|---|
JPWO2012014627A1 (ja) | 2013-09-12 |
EP2557540A4 (en) | 2013-12-11 |
US8867788B2 (en) | 2014-10-21 |
EP2557540B1 (en) | 2014-11-12 |
JP5718920B2 (ja) | 2015-05-13 |
CN102985947A (zh) | 2013-03-20 |
US20130083968A1 (en) | 2013-04-04 |
CN102985947B (zh) | 2015-06-10 |
EP2557540A1 (en) | 2013-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5718920B2 (ja) | 車両周辺監視装置 | |
CN104952254B (zh) | 车辆识别方法、装置和车辆 | |
JP5083658B2 (ja) | 車両用車線認識装置、車両、及び車両用車線認識プログラム | |
US9197860B2 (en) | Color detector for vehicle | |
US8319854B2 (en) | Shadow removal in an image captured by a vehicle based camera using a non-linear illumination-invariant kernel | |
US20160180180A1 (en) | Vehicle vision system with adaptive lane marker detection | |
JP2010224925A (ja) | 環境認識装置 | |
JP2006338556A (ja) | 車両及び路面標示認識装置 | |
JP2011081736A (ja) | 歩行者検出システム | |
WO2009130827A1 (ja) | 車両周辺監視装置 | |
JP6753134B2 (ja) | 画像処理装置、撮像装置、移動体機器制御システム、画像処理方法、及び画像処理プログラム | |
JP2011076214A (ja) | 障害物検出装置 | |
WO2010007718A1 (ja) | 車両周辺監視装置 | |
WO2019013252A1 (ja) | 車両周囲認識装置 | |
CN114173066A (zh) | 成像***和方法 | |
JP6375911B2 (ja) | カーブミラー検出装置 | |
JP2006259885A (ja) | カラー画像を用いた看板および標識の認識方法 | |
JP2009230530A (ja) | 車両認識装置、車両、及び車両認識用プログラム | |
JP4336755B2 (ja) | カラー画像を用いた看板の認識方法 | |
JP4821399B2 (ja) | 物体識別装置 | |
JP2005284377A (ja) | 標識認識装置及び標識認識方法 | |
KR20140062334A (ko) | 장애물 검출 장치 및 방법 | |
JP6173962B2 (ja) | レーンマーク認識装置 | |
JP2011221613A (ja) | 物体認識装置 | |
US20210279478A1 (en) | Vision system and method for a motor vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180030094.1 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11812220 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012526390 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011812220 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13702610 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |