WO2013018673A1 - Three-dimensional object detection device and three-dimensional object detection method - Google Patents
- Publication number
- WO2013018673A1 (PCT/JP2012/069094)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- lane
- dimensional object
- detection
- width direction
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
- G08G1/167—Driving aids for lane monitoring, lane changing, e.g. blind spot detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/08—Detecting or categorising vehicles
Definitions
- the present invention relates to a three-dimensional object detection device and a three-dimensional object detection method.
- Conventionally, a vehicle periphery monitoring device is known in which a radar determines whether a three-dimensional object exists in a detection area behind the vehicle and notifies the driver.
- In this vehicle periphery monitoring device, at least a portion of the blind spot of the side mirror is included in the detection area, and when the angle of the side mirror changes, the position of the detection area is changed accordingly (see Patent Document 1).
- However, the detection area is fixed as long as the angle of the side mirror does not change. In that state, for example, when the host vehicle is biased toward the left side of its lane and another vehicle in the right adjacent lane is traveling close to the right side of that lane, the other vehicle does not enter the detection area and can no longer be detected.
- The present invention has been made to solve this conventional problem, and its object is to provide a three-dimensional object detection device and a three-dimensional object detection method capable of improving the detection accuracy of a three-dimensional object.
- The three-dimensional object detection device of the present invention captures an image of a predetermined area including a division line and an adjacent lane, and determines whether a three-dimensional object exists in the predetermined area. The three-dimensional object detection device further detects, from the captured image, the vehicle-width-direction distance between the vehicle position in the traveling lane of the host vehicle and the division line, and enlarges, outward in the vehicle width direction, the predetermined area located on the side where the division line exists as the vehicle-width-direction distance increases.
- FIGS. 4(a) and 4(b) are diagrams for explaining an outline of the processing of the alignment unit shown in FIG. 3; FIG. 4(a) shows the moving state of the vehicle V, and FIG. 4(b) is a schematic diagram showing an outline of the generation of the difference image.
- An example in which the area setting unit has enlarged a detection area.
- In the second embodiment, a graph showing the relationship between the vehicle-width-direction distance from the division line and the size of the detection area.
- FIG. 5 is a schematic diagram showing a state of processing by a ground line detection unit 37.
- FIG. 22 is a graph showing the area increase rates of a plurality of differential waveforms DW t1 to DW t4. Also, a graph showing the relationship between the vehicle-width-direction distance from the division line and the size of the detection area in the fourth embodiment.
- FIG. 1 is a schematic configuration diagram of a three-dimensional object detection device 1 according to the present embodiment, and shows an example in which the three-dimensional object detection device 1 is mounted on a vehicle V.
- The three-dimensional object detection device 1 shown in FIG. 1 detects a three-dimensional object (for example, another vehicle or a two-wheeled vehicle) traveling in an adjacent lane next to the travel lane in which the host vehicle V travels, provides various information to the driver of the host vehicle V, and includes a camera (imaging means) 10, a vehicle speed sensor 20, and a computer 30.
- The travel lane is the lane in which the host vehicle V travels when it does not change lanes, and is the area excluding the division lines.
- The adjacent lane is the lane next to the travel lane across the division line, likewise excluding the division line.
- a division line is a line such as a white line that serves as a boundary between a traveling lane and an adjacent lane.
- The camera 10 shown in FIG. 1 is attached at a height h at the rear of the host vehicle V so that its optical axis points downward from the horizontal at an angle θ.
- the camera 10 images the detection area from this position.
- The vehicle speed sensor 20 detects the traveling speed of the host vehicle V; for example, a sensor that detects the rotational speed of the wheels is used.
- the computer 30 detects a three-dimensional object (for example, another vehicle, a two-wheeled vehicle, etc.) existing behind the host vehicle V based on an image captured by the camera 10.
- The three-dimensional object detection device 1 also includes an alarm device (not shown), which warns the driver of the host vehicle V when a three-dimensional object detected by the computer 30 may come into contact with the host vehicle V.
- FIG. 2 is a top view showing a traveling state of the vehicle shown in FIG.
- the camera 10 can capture an image of the rear side of the host vehicle V, specifically, an area including a lane marking and an adjacent lane.
- Detection areas (predetermined areas) A1 and A2 for detecting three-dimensional objects such as other vehicles are set in the adjacent lanes next to the travel lane in which the host vehicle V travels, and the computer 30 detects whether a three-dimensional object exists in the detection areas A1 and A2.
- These detection areas A1 and A2 are set based on their position relative to the host vehicle V.
- FIG. 3 is a block diagram showing details of the computer 30 shown in FIG. In FIG. 3, the camera 10 and the vehicle speed sensor 20 are also illustrated in order to clarify the connection relationship.
- The computer 30 includes a viewpoint conversion unit 31, an alignment unit (alignment means) 32, and a three-dimensional object detection unit (three-dimensional object detection means) 33.
- The viewpoint conversion unit 31 receives captured image data including the detection areas A1 and A2 obtained by imaging with the camera 10, and converts the input captured image data into bird's-eye image data in a bird's-eye view state.
- The bird's-eye view state is the state seen from the viewpoint of a virtual camera looking down from above, for example vertically downward.
- This viewpoint conversion is executed as described in, for example, Japanese Patent Application Laid-Open No. 2008-219063.
- The alignment unit 32 sequentially receives the bird's-eye image data obtained by the viewpoint conversion of the viewpoint conversion unit 31, and aligns the positions of the input bird's-eye image data captured at different times.
- FIG. 4 is a top view showing an outline of the processing of the alignment unit 32 shown in FIG. 3, (a) shows the moving state of the vehicle V, and (b) shows an outline of the alignment.
- The host vehicle V at the current time is located at V1, and the host vehicle V one time point before was located at V2.
- The other vehicle V is located behind the host vehicle V and is traveling parallel to it; the other vehicle V at the current time is located at V3, and the other vehicle V one time point before was located at V4.
- The host vehicle V has moved a distance d in one time interval.
- "One time point before" may be a time earlier than the current time by a predetermined interval (for example, one control cycle), or by an arbitrary interval.
- The bird's-eye image PB t at the current time is as shown in FIG. 4(b).
- In the bird's-eye image PB t, the white line drawn on the road surface appears rectangular and is reproduced relatively accurately in the overhead view.
- However, the other vehicle at position V3 appears collapsed (skewed by the viewpoint conversion).
- Likewise, in the bird's-eye image PB t-1 of one time point before, the white line drawn on the road surface appears rectangular and is reproduced relatively accurately, but collapsing of the other vehicle at position V4 occurs.
- The alignment unit 32 performs the alignment of the bird's-eye images PB t and PB t-1 described above on the data. In doing so, the alignment unit 32 offsets the bird's-eye image PB t-1 of one time point before so that its position matches the bird's-eye image PB t at the current time.
- The offset amount d' corresponds to the moving distance d shown in FIG. 4(a), and is determined based on the signal from the vehicle speed sensor 20 and the time elapsed from one time point before to the current time.
- The alignment unit 32 then takes the difference between the bird's-eye images PB t and PB t-1 and generates data of the difference image PD t.
- The pixel value of the difference image PD t may be the absolute value of the difference between the pixel values of the bird's-eye images PB t and PB t-1, or, to cope with changes in the illuminance environment, it may be binarized to "1" when that absolute value exceeds a predetermined threshold and "0" otherwise.
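The alignment-and-difference step above can be sketched in Python. This is a minimal illustration with plain lists: the function name, the pixel-row form of the offset d', the threshold value, and the toy images are assumptions for the example, not the patent's implementation.

```python
def difference_image(pb_t, pb_t1, offset_px, threshold=None):
    """Generate the difference image PD_t from two bird's-eye images.

    pb_t / pb_t1 are the current and previous bird's-eye images PB_t and
    PB_t-1 as lists of rows. pb_t1 is first offset by `offset_px` rows
    (the offset amount d', derived elsewhere from the vehicle speed and
    the elapsed time) so that the two images are aligned. If `threshold`
    is given, the absolute difference is binarized to 0/1 to cope with
    changes in the illuminance environment.
    """
    h, w = len(pb_t1), len(pb_t1[0])
    # Offset PB_t-1 along the travel direction (rows), padding with zeros.
    shifted = [[0] * w for _ in range(h)]
    for r in range(h - offset_px):
        shifted[r + offset_px] = list(pb_t1[r])
    pd = []
    for r in range(h):
        row = []
        for c in range(w):
            d = abs(pb_t[r][c] - shifted[r][c])
            if threshold is not None:
                d = 1 if d > threshold else 0  # binarize against illuminance changes
            row.append(d)
        pd.append(row)
    return pd

# Toy example: a stationary road marking shifts down by one row between
# frames because the host vehicle moved; after alignment it cancels out.
prev = [[0, 9, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]
curr = [[0, 0, 0], [0, 9, 0], [0, 0, 0], [0, 0, 0]]
pd_t = difference_image(curr, prev, offset_px=1, threshold=5)
```

After alignment every pixel of `pd_t` is zero, so stationary road features vanish from PD t, while a three-dimensional object moving relative to the road surface would leave non-zero difference pixels DP.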
- the computer 30 includes a lateral position detection unit (lateral position detection means) 34.
- The lateral position detection unit 34 detects, based on the captured image data from the camera 10, the vehicle-width-direction distance between the position of the host vehicle V in its traveling lane (specifically, the side surface of the host vehicle V) and the division line that divides the lane.
- The lateral position detection unit 34 can thereby detect whether the host vehicle V is traveling in the center of the travel lane or is biased to the left or right.
- The three-dimensional object detection unit 33 detects a three-dimensional object based on the data of the difference image PD t.
- The three-dimensional object detection unit 33 includes a differential waveform generation unit (differential waveform generation means) 33a and an area setting unit (area setting means) 33b.
- FIG. 5 is a schematic diagram showing how a differential waveform is generated by the differential waveform generator 33a shown in FIG.
- the differential waveform generation unit 33a generates a differential waveform DW t from a portion corresponding to the detection areas A1 and A2 in the differential image PD t .
- The differential waveform generation unit 33a generates the differential waveform DW t along the direction in which the three-dimensional object collapses due to the viewpoint conversion.
- In the following, only the detection area A1 is described for convenience.
- The differential waveform generation unit 33a first defines a line La in the direction in which the three-dimensional object collapses on the data of the difference image PD t. The differential waveform generation unit 33a then counts the number of difference pixels DP indicating a predetermined difference on the line La.
- A difference pixel DP indicating a predetermined difference is a pixel whose value exceeds a predetermined threshold when the pixel values of the difference image PD t are the absolute values of the differences between the pixel values of the bird's-eye images PB t and PB t-1, or a pixel whose value is "1" when the pixel values of the difference image PD t are expressed as "0" and "1".
- After counting the number of difference pixels DP, the differential waveform generation unit 33a obtains the intersection CP of the line La and the ground line L1. The differential waveform generation unit 33a then associates the intersection CP with the count, determines the horizontal-axis position (the position on the up-down axis in FIG. 5) based on the position of the intersection CP, and determines the vertical-axis position (the position on the left-right axis in FIG. 5) from the count.
- Similarly, the differential waveform generation unit 33a defines successive lines in the direction in which the three-dimensional object collapses, counts the number of difference pixels DP on each line, determines the horizontal-axis position based on the position of each intersection CP, and determines the vertical-axis position from the count (the number of difference pixels DP).
- The three-dimensional object detection unit 33 generates the differential waveform DW t by repeating the above in order and forming a frequency distribution.
- The lines La and Lb in the direction in which the three-dimensional object collapses overlap the detection area A1 over different distances. Therefore, assuming the detection area A1 were filled with difference pixels DP, there would be more difference pixels DP on the line La than on the line Lb. For this reason, when determining the vertical-axis position from the count of difference pixels DP, the differential waveform generation unit 33a normalizes the count based on the distance over which the lines La and Lb overlap the detection area A1. As a specific example, in FIG. 5 there are six difference pixels DP on the line La and five difference pixels DP on the line Lb.
- the differential waveform generation unit 33a normalizes the count number by dividing it by the overlap distance.
- As a result, the values of the differential waveform DW t corresponding to the lines La and Lb in the direction in which the three-dimensional object collapses become substantially the same.
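The counting-and-normalization procedure can be sketched as follows. This is a simplified illustration: representing each collapse-direction line as an explicit list of pixel coordinates, and the sample values, are assumptions for the example.

```python
def differential_waveform(pd, lines):
    """Build the differential waveform DW_t from a binary difference
    image PD_t (0/1 values).

    `lines` maps each horizontal-axis position (the intersection CP of a
    collapse-direction line with the ground line) to the (row, col)
    pixels that the line covers inside the detection area. The count of
    difference pixels DP on each line is divided by the overlap length,
    normalizing lines that cross the detection area over different
    distances.
    """
    waveform = {}
    for cp, pixels in lines.items():
        count = sum(pd[r][c] for r, c in pixels)
        waveform[cp] = count / len(pixels)  # normalize by overlap distance
    return waveform

# Line La overlaps the area over 3 pixels (all of them difference
# pixels); line Lb overlaps over 2 pixels (one difference pixel).
pd = [[1, 0], [1, 1], [1, 0]]
lines = {0: [(0, 0), (1, 0), (2, 0)], 1: [(1, 1), (2, 1)]}
dw_t = differential_waveform(pd, lines)
```

Without the normalization, line La would score 3 and line Lb 1 purely because of their different overlap lengths; dividing by the length makes the waveform values comparable across lines.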
- the three-dimensional object detection unit 33 detects a three-dimensional object based on the data of the difference waveform DW t .
- The three-dimensional object detection unit 33 first calculates the estimated speed of the three-dimensional object by associating the differential waveform DW t-1 of one time point before with the current differential waveform DW t. For example, when the three-dimensional object is another vehicle V, difference pixels DP are easily obtained at the tire portions of the other vehicle V, so the differential waveform DW t tends to have two maxima.
- That is, the relative speed of the other vehicle V with respect to the host vehicle V can be obtained from the shift between the maxima of the differential waveform DW t-1 of one time point before and those of the current differential waveform DW t.
- The three-dimensional object detection unit 33 calculates the estimated speed of the three-dimensional object in this way.
- The three-dimensional object detection unit 33 then determines whether the detected object is a three-dimensional object to be detected by determining whether its estimated speed is appropriate for such a three-dimensional object.
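The peak-association idea can be sketched as below. This sketch matches only the single global maximum rather than both tire peaks, and `meters_per_bin` and `dt` are assumed calibration constants not given in the text.

```python
def relative_speed(dw_prev, dw_curr, meters_per_bin, dt):
    """Estimate the relative speed of the three-dimensional object by
    associating the maximum of the previous differential waveform
    DW_t-1 with that of the current DW_t and converting the shift of
    the peak position into metres per second.
    """
    peak_prev = max(range(len(dw_prev)), key=lambda i: dw_prev[i])
    peak_curr = max(range(len(dw_curr)), key=lambda i: dw_curr[i])
    return (peak_curr - peak_prev) * meters_per_bin / dt

# The peak moved one bin (0.5 m) in 0.1 s: 5 m/s relative speed.
v_rel = relative_speed([0, 1, 5, 1, 0, 0], [0, 0, 1, 5, 1, 0],
                       meters_per_bin=0.5, dt=0.1)
```

A fuller implementation would associate both maxima of the two-peak tire signature and check that the estimated speed is plausible for a vehicle or two-wheeled vehicle before accepting the detection.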
- The area setting unit 33b sets the sizes of the detection areas A1 and A2.
- Specifically, the area setting unit 33b enlarges the detection area A1 or A2 located on the side where the lane marking exists, increasing its size as the vehicle-width-direction distance from the lane marking detected by the lateral position detection unit 34 increases.
- FIG. 6 is a top view showing the traveling state of the vehicle shown in FIG. 1, and shows an example in which the host vehicle V is traveling on a traveling lane. As shown in FIG. 6, it is assumed that the host vehicle V is traveling in a biased manner on the travel lane, and is traveling close to the lane marking on the left side of the vehicle (left side as viewed from the driver).
- In this state, the area setting unit 33b enlarges the detection area A1 to prevent a situation in which detection omission occurs.
- FIG. 7 is a top view showing the traveling state of the vehicle shown in FIG. 1, and shows an example in which the region setting unit 33b enlarges the detection region A1. As shown in FIG. 7, the detection area A1 is enlarged by the area setting unit 33b. Thereby, the other vehicle V will be located in detection area A1, and the detection omission of the other vehicle V can be prevented.
- FIG. 8 is a graph showing the relationship between the vehicle-width-direction distance Δy from the lane marking and the size of the detection area A1 (enlargement amount Δy0fs).
- When the vehicle-width-direction distance Δy is less than y1, the enlargement amount of the detection area A1 is zero. When the vehicle-width-direction distance Δy is between y1 and y2, the enlargement amount of the detection area A1 increases with the vehicle-width-direction distance Δy. When the vehicle-width-direction distance Δy exceeds y2, the enlargement amount of the detection area A1 is fixed at y0fs'. The enlargement amount is fixed at the specific value y0fs' because, if the detection area A1 were enlarged without limit, it could extend not only over the adjacent lane but also over the next adjacent lane.
- The enlargement amount of the detection area A1 increases proportionally in the section where the vehicle-width-direction distance Δy runs from y1 to y2, but it is not limited to a proportional increase and may, for example, increase exponentially.
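The relationship described above can be written as a small piecewise function. This is a sketch: the proportional middle section follows the text, while the concrete values of y1, y2, and y0fs' used in the example are invented.

```python
def enlargement_amount(dy, y1, y2, y0fs_max):
    """Enlargement amount of the detection area as a function of the
    vehicle-width-direction distance Δy: zero below y1, proportional
    between y1 and y2, and fixed at the maximum y0fs' above y2 so the
    area cannot spill into the next adjacent lane.
    """
    if dy <= y1:
        return 0.0
    if dy >= y2:
        return y0fs_max
    return y0fs_max * (dy - y1) / (y2 - y1)  # linear interpolation

# Example with assumed breakpoints y1 = 0.2 m, y2 = 0.6 m, y0fs' = 0.5 m.
no_growth = enlargement_amount(0.1, 0.2, 0.6, 0.5)   # below y1
midway = enlargement_amount(0.4, 0.2, 0.6, 0.5)      # proportional section
clamped = enlargement_amount(0.9, 0.2, 0.6, 0.5)     # capped at y0fs'
```

An exponentially increasing variant, as the text permits, would only change the middle branch of the function.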
- When the vehicle-width-direction distance Δy decreases, the detection area A1, once enlarged, is reduced.
- In the above description, the detection area A1 is enlarged based on the vehicle-width-direction distance Δy from the right side surface of the vehicle (the right side as viewed from the driver) to the right lane marking. Needless to say, the detection area A2 is determined based on the vehicle-width-direction distance Δy from the left side surface of the vehicle (the left side as viewed from the driver) to the left lane marking.
- Furthermore, the area setting unit 33b is configured not to change the detection areas A1 and A2 abruptly. This is because, if the detection areas A1 and A2 were changed abruptly, detection of the three-dimensional object would become unstable and the possibility of a detection failure would increase.
- Specifically, the area setting unit 33b prevents the amount of change applied to the detection areas A1 and A2 from exceeding a limit value (enlargement specified value or reduction specified value). More specifically, the area setting unit 33b obtains target values for the sizes of the detection areas A1 and A2 based on the graph shown in FIG. 8, and then brings the sizes of the detection areas A1 and A2 toward the target values step by step within a range that does not exceed the limit value.
- The reduction limit value (reduction specified value), which is the limit when the detection areas A1 and A2 are reduced, is set smaller than the enlargement limit value (enlargement specified value), which is the limit when the detection areas A1 and A2 are enlarged. As a result, the detection areas A1 and A2 are not reduced suddenly, and a situation in which the other vehicle V leaves the detection areas A1 and A2 because of a rapid reduction, causing detection omission, can be prevented.
- Furthermore, the area setting unit 33b makes the limit value smaller during detection of a three-dimensional object than during non-detection. This prevents a situation in which the detection areas A1 and A2 are reduced suddenly while the other vehicle V is being detected, causing it to leave the detection areas A1 and A2 and resulting in detection omission.
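The rate-limited update toward the target size can be sketched as follows. The halving factor applied during detection is an assumption; the text says only that the limit is made smaller while a three-dimensional object is being detected.

```python
def step_toward_target(size, target, grow_limit, shrink_limit, detecting):
    """Move the detection-area size one control cycle toward `target`,
    capping the change at `grow_limit` when enlarging and at the smaller
    `shrink_limit` when reducing; both caps are tightened while a
    three-dimensional object is being detected.
    """
    if detecting:
        grow_limit *= 0.5    # assumed tightening factor
        shrink_limit *= 0.5
    delta = target - size
    if delta >= 0:
        return size + min(delta, grow_limit)
    return size - min(-delta, shrink_limit)

# Enlarging is capped at 0.3 per cycle, reducing at only 0.1.
grown = step_toward_target(1.0, 2.0, 0.3, 0.1, detecting=False)
shrunk = step_toward_target(1.0, 0.0, 0.3, 0.1, detecting=False)
shrunk_detecting = step_toward_target(1.0, 0.0, 0.3, 0.1, detecting=True)
```

Because the shrink cap is smaller than the grow cap, the area expands quickly when needed but contracts gradually, so a vehicle already inside the area is not dropped by a sudden reduction.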
- FIG. 9 is a flowchart showing the three-dimensional object detection method according to the present embodiment.
- First, the lateral position detection unit 34 detects the vehicle-width-direction distance Δy between the side surface of the host vehicle V and the division line (S1). At this time, the lateral position detection unit 34 detects the vehicle-width-direction distance Δy based on the image data captured by the camera 10. In the present embodiment, since the detection areas A1 and A2 are set to the left and right rear of the host vehicle V, the lateral position detection unit 34 detects the vehicle-width-direction distance Δy between each of the left and right side surfaces of the host vehicle V and the corresponding left and right division lines. In the following, for convenience of explanation, only the detection area A1 is described, but the same applies to the detection area A2.
- Next, the area setting unit 33b sets a target value for the detection area A1 (S2). At this time, the area setting unit 33b sets the target value based on the graph described with reference to FIG. 8. Next, the three-dimensional object detection unit 33 determines whether a three-dimensional object is currently being detected (S3).
- When a three-dimensional object is being detected (S3: YES), the area setting unit 33b sets the limit value, which is the upper limit of the amount of change of the detection area A1, to the first limit value (S4). The process then proceeds to step S6.
- When no three-dimensional object is being detected (S3: NO), the area setting unit 33b sets the limit value, which is the upper limit of the amount of change of the detection area A1, to the second limit value (S5). The process then proceeds to step S6.
- Here, the first limit value is smaller than the second limit value. A sudden change of the detection area A1 is therefore further suppressed while a three-dimensional object is being detected.
- In step S6, the three-dimensional object detection unit 33 determines whether to reduce the detection area A1 based on the target value obtained in step S2 (S6).
- When the detection area A1 is to be reduced (S6: YES), the area setting unit 33b decreases the limit value set in step S4 or S5 (S7). A sudden change of the detection area A1 during reduction can thereby be further suppressed. The process then proceeds to step S8.
- When the detection area A1 is not to be reduced (S6: NO), the area setting unit 33b proceeds to step S8 without decreasing the limit value set in step S4 or S5.
- In step S8, the area setting unit 33b changes the size of the detection area A1 (S8).
- the region setting unit 33b enlarges or reduces the size of the detection region A1 within a range that does not exceed the limit value obtained through the above processing.
- the computer 30 detects the vehicle speed based on the signal from the vehicle speed sensor 20 (S9).
- the alignment unit 32 detects the difference (S10).
- That is, the alignment unit 32 generates the data of the difference image PD t as described with reference to FIG. 4.
- Next, the differential waveform generation unit 33a generates the differential waveform DW t, as described with reference to FIG. 5, based on the difference image PD t generated in step S10 (S11). Then, the three-dimensional object detection unit 33 calculates the estimated speed of the three-dimensional object by associating the differential waveform DW t-1 of one time point before with the current differential waveform DW t (S12).
- the three-dimensional object detection unit 33 determines whether or not the estimated speed calculated in step S12 is a detection target (S13).
- The three-dimensional object detection device 1 is intended to detect other vehicles, two-wheeled vehicles, and the like with which contact may occur when changing lanes. For this reason, in step S13 the three-dimensional object detection unit 33 determines whether the estimated speed is appropriate for another vehicle or a two-wheeled vehicle.
- When the estimated speed is that of a detection target (S13: YES), the three-dimensional object detection unit 33 determines that the three-dimensional object indicated by the differential waveform DW t is a detection target (another vehicle or a two-wheeled vehicle) (S14). The process shown in FIG. 9 then ends.
- When the estimated speed is not that of a detection target (S13: NO), the three-dimensional object detection unit 33 determines that the three-dimensional object indicated by the differential waveform DW t is not a three-dimensional object to be detected, and the process shown in FIG. 9 ends.
- In this way, according to the present embodiment, the detection areas A1 and A2 are appropriately set with respect to the adjacent lane, and a situation in which a three-dimensional object such as another vehicle falls outside the detection areas A1 and A2 and escapes detection can be prevented. The detection accuracy of the three-dimensional object can therefore be improved.
- In addition, the limit value is made smaller during detection of a three-dimensional object than during non-detection, and the reduction limit value is smaller than the enlargement limit value. This prevents a situation in which the detection areas A1 and A2 are rapidly and excessively reduced during detection of a three-dimensional object, causing detection omission.
- Second Embodiment: Next, a second embodiment of the present invention will be described.
- The three-dimensional object detection device and the three-dimensional object detection method according to the second embodiment are similar to those of the first embodiment, but differ in part of the configuration and processing.
- differences from the first embodiment will be described.
- FIG. 10 is a block diagram showing details of the computer 30 according to the second embodiment.
- the camera 10 and the vehicle speed sensor 20 are also illustrated in order to clarify the connection relationship.
- the computer 30 includes a lane width detection unit (width detection means) 35.
- the lane width detection unit 35 detects the lane width of the traveling lane.
- the lane width detection unit 35 detects the lane width of the traveling road based on the captured image data captured by the camera 10.
- Alternatively, the lane width detection unit 35 may detect the lane width of the adjacent lane and use this as the lane width of the traveling lane, because lane widths are basically uniform on a road.
- FIG. 11 is a top view showing a running state of the vehicle when the lane width is narrow, and shows an example in which the area setting unit 33b enlarges the detection area A1.
- If the detection area A1 is enlarged as in the first embodiment when the lane width is narrow, another vehicle V in the next adjacent lane may enter the detection area A1, as shown in FIG. 11.
- If three-dimensional object detection is performed based on such a detection area A1, the accuracy of three-dimensional object detection decreases. The same applies to the detection area A2.
- Therefore, the area setting unit 33b according to the second embodiment decreases the enlargement amount used when the detection areas A1 and A2 are enlarged outward in the vehicle width direction as the lane width detected by the lane width detection unit 35 decreases.
- FIG. 12 is a graph showing the relationship between the vehicle-width-direction distance Δy from the lane marking and the size of the detection area A1 (enlargement amount Δy0fs) in the second embodiment.
- In the second embodiment as well, the enlargement amount of the detection area A1 increases with the vehicle-width-direction distance Δy, but the rate of increase is smaller than in the example shown in FIG. 8. That is, the area setting unit 33b according to the second embodiment reduces the enlargement amount used when the detection areas A1 and A2 are enlarged, thereby preventing the detection area A1 from being enlarged excessively. As a result, the detection area A1 is prevented from being set over the next adjacent lane, and a decrease in the accuracy of three-dimensional object detection is avoided.
- The maximum value y0fs' is also smaller than in the example shown in FIG. 8. This further prevents the detection area A1 from being set over the next adjacent lane.
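One way to realize the second embodiment's behaviour is to scale the first embodiment's curve by the detected lane width. The linear scaling and the 3.5 m nominal width are assumptions; the text states only that a narrower lane yields a smaller enlargement amount and a smaller maximum y0fs'.

```python
def scaled_enlargement(dy, lane_width, y1, y2, y0fs_max, nominal_width=3.5):
    """Enlargement amount for the second embodiment: the narrower the
    traveling lane, the smaller both the slope of the curve and its
    maximum value, so an enlarged detection area cannot reach the next
    adjacent lane.
    """
    scale = min(1.0, lane_width / nominal_width)
    y_max = y0fs_max * scale        # reduced maximum y0fs'
    if dy <= y1:
        return 0.0
    if dy >= y2:
        return y_max
    return y_max * (dy - y1) / (y2 - y1)

# At the nominal lane width the maximum enlargement is 0.5 m; at half
# that lane width it shrinks to 0.25 m.
wide = scaled_enlargement(1.0, 3.5, 0.2, 0.6, 0.5)
narrow = scaled_enlargement(1.0, 1.75, 0.2, 0.6, 0.5)
```

Scaling both the slope and the cap keeps the shape of the FIG. 8 relationship while bounding the enlarged area within the adjacent lane on narrow roads.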
- 13 and 14 are flowcharts showing the three-dimensional object detection method according to the second embodiment.
- First, the lane width detection unit 35 detects the lane width of the traveling lane based on the image data captured by the camera 10 (S21).
- Next, the area setting unit 33b sets the enlargement amount (S22). That is, as shown in FIG. 12, the area setting unit 33b decreases the enlargement amount relative to the vehicle-width-direction distance Δy as the lane width decreases. In this process, it is desirable that the area setting unit 33b also reduce the maximum value y0fs'.
- In steps S23 to S36, processing similar to that in steps S1 to S14 shown in FIG. 9 is executed.
- In this way, according to the second embodiment, the detection accuracy of the three-dimensional object can be improved as in the first embodiment, and a situation in which the three-dimensional object escapes detection can be prevented. It is also possible to prevent a situation in which the detection areas A1 and A2 are reduced excessively, causing detection omission.
- Furthermore, the smaller the lane width of the traveling lane, the smaller the enlargement amount used when the detection areas A1 and A2 are enlarged. Thus, when the lane width is small, the detection areas A1 and A2 can be prevented from being set over the next adjacent lane instead of the adjacent lane.
- FIG. 15 is a block diagram showing details of the computer 30 according to the third embodiment.
- the camera 10 and the vehicle speed sensor 20 are also illustrated in order to clarify the connection relationship.
- the computer 30 includes a lane change detection unit (lane change detection means) 36.
- the lane change detection unit 36 detects a lane change of the host vehicle V.
- Specifically, the lane change detection unit 36 calculates the degree of approach to the lane marking based on the image data obtained by the camera 10, and determines whether the host vehicle V is changing lanes.
- The method of the lane change detection unit 36 is not limited to the above; the lane change may be determined from the steering amount or by other methods.
- For example, the lane change detection unit 36 detects that the host vehicle V is changing lanes when a side surface of the host vehicle V comes within a predetermined distance (for example, 10 cm) of the lane marking. The lane change detection unit 36 may also determine that no lane change occurs when the side of the host vehicle V enters within the predetermined distance of the lane marking but then moves away again by the predetermined distance or more. Further, the lane change detection unit 36 may determine that the lane change is complete when the host vehicle V has crossed the lane marking and moved a predetermined distance or more beyond it.
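The entering/leaving logic around the predetermined distance can be sketched as a small state machine. The signed-distance convention, where a negative value means the side of the vehicle has crossed the marking, is an assumption for this illustration.

```python
class LaneChangeDetector:
    """Track whether the host vehicle is changing lanes based on the
    distance from its side surface to the lane marking.

    A lane change starts when the distance falls within `near` (e.g.
    10 cm); it is treated as cancelled if the vehicle moves back out
    beyond `near`, and as complete once the vehicle has crossed the
    marking and moved more than `done` beyond it.
    """

    def __init__(self, near=0.10, done=0.10):
        self.near = near
        self.done = done
        self.changing = False

    def update(self, dist):
        if not self.changing:
            if 0.0 <= dist <= self.near:
                self.changing = True       # side entered the 10 cm band
        else:
            if dist > self.near:
                self.changing = False      # moved away again: no lane change
            elif dist < -self.done:
                self.changing = False      # crossed and cleared: change complete
        return self.changing

d = LaneChangeDetector()
states = [d.update(x) for x in (0.50, 0.05, -0.05, -0.20)]
```

In the sample sequence the vehicle approaches, crosses, and clears the marking, so the detector reports a lane change only during the crossing.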
- FIG. 16 is a top view showing a running state of the vehicle at the time of lane change.
- In the example of FIG. 16, the host vehicle V is initially located at the center of the lane (see reference sign Va), then changes lanes and reaches the position Vb.
- During the lane change, the vehicle-width-direction distance Δy between the side surface of the host vehicle V and the lane marking temporarily increases.
- As a result, the detection area A1 is enlarged, and the other vehicle V in the next adjacent lane may enter the detection area A1. In such a case, the accuracy of three-dimensional object detection decreases.
- Therefore, when the lane change detection unit 36 detects a lane change of the host vehicle V, the area setting unit 33b according to the third embodiment reduces, for a certain period of time, the enlargement amount Δy0fs of the detection areas A1 and A2 relative to the vehicle-width-direction distance Δy between the side surface of the host vehicle V and the lane marking shown in FIG. 16, as illustrated in FIG. 17.
- FIG. 17 is a graph showing the relationship between the vehicle width direction distance and the size of the detection area (enlargement amount Δy0fs) in the third embodiment.
- FIGS. 18 and 19 are flowcharts showing a three-dimensional object detection method according to the third embodiment.
- first, the lane change detection unit 36 calculates the degree of approach to the lane marking based on the image data captured by the camera 10 and determines whether or not the host vehicle V is changing lanes (S41). When it is determined that the host vehicle V is not changing lanes (S41: NO), the process proceeds to step S43. On the other hand, when it is determined that the host vehicle V is changing lanes (S41: YES), the area setting unit 33b sets the enlargement amount (S42). That is, the area setting unit 33b reduces the enlargement amount according to the vehicle width direction distance Δy as shown in FIG. 17. In this process, it is desirable that the area setting unit 33b also reduce the maximum value y0fs'.
- in steps S43 to S56, processing similar to that in steps S1 to S14 shown in FIG. 9 is executed.
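The enlargement-amount setting performed in step S42 can be illustrated as scaling the Δy-dependent enlargement curve and its saturation value y0fs'. The concrete curve shape, scale factor, and function name below are illustrative assumptions, not values from the patent:

```python
def enlargement_amount(dy, y0fs_max=0.5, lane_changing=False, scale=0.5):
    """Enlargement amount Δy0fs of detection areas A1/A2 as a function of
    the vehicle-width-direction distance Δy [m]. Sketch: linear growth
    saturating at y0fs_max; during a lane change both the slope and the
    maximum value y0fs' are reduced by `scale` (illustrative factor)."""
    amount = min(dy, y0fs_max)
    if lane_changing:
        # step S42: smaller enlargement amount and smaller maximum value
        amount = min(dy * scale, y0fs_max * scale)
    return amount
```

The point of the design is that the same Δy produces a smaller enlargement while the lane change flag is active, so a temporarily large Δy cannot push the detection areas into the lane beyond the adjacent one.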
- in the third embodiment as well, as in the first embodiment, the detection accuracy of the three-dimensional object can be improved, and a situation in which the three-dimensional object fails to be detected can be prevented. In addition, a situation in which the detection areas A1 and A2 are reduced excessively and cause missed detection can be prevented.
- moreover, in the third embodiment, when a lane change of the host vehicle V is detected, the amount of enlargement applied when the sizes of the detection areas A1 and A2 are enlarged is reduced. For this reason, it is possible to prevent a situation in which, while the host vehicle V temporarily straddles the lane marking during the lane change, the detection areas A1 and A2 are set not in the adjacent lane but in the lane beyond it.
- FIG. 20 is a block diagram showing details of the computer 30 according to the fourth embodiment.
- the camera 10 and the vehicle speed sensor 20 are also illustrated in order to clarify the connection relationship.
- the computer 30 includes a ground line detection unit 37.
- the ground line detection unit 37 detects, as a ground line, the position (position in the vehicle width direction) at which a tire of the other vehicle V traveling in the adjacent lane contacts the ground. Details will be described below with reference to FIGS. 21 and 22.
- FIGS. 21 and 22 are diagrams for explaining the method of detecting a ground line by the ground line detection unit 37.
- first, the ground line detection unit 37 sets a plurality of lines L1 to Ln substantially parallel to the traveling direction of the host vehicle V at different positions within the detection areas A1 and A2. For example, in the example illustrated in FIG. 21, the ground line detection unit 37 sets four substantially parallel lines.
- in the following, the four substantially parallel lines L1 to L4 are described as an example; however, the present invention is not limited to this, and the number of parallel lines may be two, three, or five or more.
- next, the ground line detection unit 37 causes the differential waveform generation unit 33a to generate a differential waveform DWt for each of the set lines L1 to L4. That is, the ground line detection unit 37 causes the differential waveform generation unit 33a to count the number of difference pixels DP on the data of the difference image PDt, to obtain the intersection points CP between the lines La, which run in the direction in which the three-dimensional object falls, and the lines L1 to L4, and to associate each intersection point CP with the count number, thereby generating a differential waveform DWt for each of the lines L1 to L4. The ground line detection unit 37 can thereby obtain a plurality of differential waveforms as shown in FIG. 21(b).
- the differential waveform DWt1 is based on the substantially parallel line L1,
- the differential waveform DWt2 is based on the substantially parallel line L2,
- the differential waveform DWt3 is based on the substantially parallel line L3, and
- the differential waveform DWt4 is based on the substantially parallel line L4.
- next, the ground line detection unit 37 determines the ground line Lt of the other vehicle V from the change in shape of the plurality of differential waveforms DWt1 to DWt4 generated as described above. In the example shown in FIG. 21(b), the ground line detection unit 37 determines that the substantially parallel line L3 is the ground line Lt. Specifically, the ground line Lt is determined from the area increase rate shown in FIG. 22. FIG. 22 is a graph showing the increase rate of the areas of the plurality of differential waveforms DWt1 to DWt4 shown in FIG. 21. As shown in FIG. 22, the ground line detection unit 37 examines the rate of increase of the calculated areas from the farthest parallel line to the nearest parallel line.
- in this example, the area of the differential waveform DWt2 shows a constant rate of increase relative to the area of the differential waveform DWt1,
- the area of the differential waveform DWt3 shows a constant rate of increase relative to the area of the differential waveform DWt2, and
- the area of the differential waveform DWt4 is the same as the area of the differential waveform DWt3, so the rate of increase is equal to or less than a predetermined value.
- accordingly, the ground line detection unit 37 detects the substantially parallel line L3 as the ground line Lt of the other vehicle V.
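The selection rule above — take the nearest parallel line at which the waveform area stops increasing by more than a predetermined rate — can be sketched as follows. The numeric threshold and function name are assumptions; the patent gives only the qualitative rule:

```python
def detect_ground_line(areas, min_increase=0.05):
    """areas: areas of the differential waveforms DWt1..DWtn, ordered from
    the parallel line farthest from the host vehicle to the nearest one.
    Returns the index of the line taken as the ground line Lt: the last
    line whose area still grew by more than `min_increase` (relative)
    compared with the previous line. Sketch only; the patent does not
    state a numeric threshold."""
    ground = 0
    for i in range(1, len(areas)):
        rate = (areas[i] - areas[i - 1]) / areas[i - 1]
        if rate > min_increase:
            ground = i
        else:
            break  # area no longer increases: the previous line touches ground
    return ground
```

With areas such as `[10, 12, 14.5, 14.5]` for DWt1..DWt4 (DWt4 equal to DWt3, matching the example in the text), the function returns index 2, i.e. line L3.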
- in the fourth embodiment, the area setting unit 33b enlarges the sizes of the detection areas A1 and A2 based on the vehicle width direction distance Δy between the side surface of the host vehicle V and the lane marking, as in the first embodiment, and further changes the enlargement amount Δy0fs applied when enlarging the detection areas A1 and A2 outward in the vehicle width direction, based on the ground line Lt of the other vehicle V detected by the ground line detection unit 37. Specifically, the area setting unit 33b reduces the enlargement amount Δy0fs as the distance from the side surface of the host vehicle V to the ground line Lt becomes shorter.
- in the fourth embodiment as well, as in the first embodiment, the detection accuracy of the three-dimensional object can be improved, and a situation in which the three-dimensional object fails to be detected can be prevented. In addition, a situation in which the detection areas A1 and A2 are reduced excessively and cause missed detection can be prevented.
- in particular, in the fourth embodiment, the ground line Lt of the other vehicle V traveling in the adjacent lane is detected, and the change amount Δy0fs applied when enlarging the sizes of the detection areas A1 and A2 is made smaller as the distance from the side surface of the host vehicle V to the ground line Lt becomes shorter.
- by suppressing the change amount Δy0fs applied when the detection areas A1 and A2 are enlarged in this way, it is possible to prevent the detection areas A1 and A2 from being set over the next adjacent lane or off the road.
- FIG. 24 is a block diagram showing details of the computer 30 according to the fifth embodiment.
- the camera 10 and the vehicle speed sensor 20 are also illustrated in order to clarify the connection relationship.
- the computer 30 includes a turning state detection unit (turning state detection means) 38.
- the turning state detection unit 38 determines whether or not the host vehicle V is turning based on the host vehicle speed detected by the vehicle speed sensor 20 and the steering amount detected by a steering angle sensor (not shown). When the host vehicle V is turning, it also detects the turning radius of the host vehicle V.
- the method of detecting the turning state by the turning state detection unit 38 is not particularly limited. For example, the turning state of the host vehicle V may be detected based on the detection result of a lateral acceleration sensor, or by estimating the road shape from the captured image captured by the camera 10. Alternatively, the turning state of the host vehicle V may be detected by identifying the road on which the host vehicle V is traveling from map information, such as that of a navigation system, and the current position information of the host vehicle V.
- the area setting unit 33b enlarges the sizes of the detection areas A1 and A2 based on the vehicle width direction distance ⁇ y between the side surface of the host vehicle V and the lane marking as in the first embodiment.
- further, in the fifth embodiment, when the turning state detection unit 38 determines that the host vehicle V is turning, the enlargement amount Δy0fs applied when the sizes of the detection areas A1 and A2 are enlarged is changed according to the turning radius of the host vehicle V. Specifically, the region setting unit 33b decreases the enlargement amount Δy0fs of the detection regions A1 and A2 as the turning radius of the host vehicle V becomes smaller.
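The turning-radius-dependent reduction just described can be sketched as a simple interpolation. The radii, amounts, linear mapping, and function name below are illustrative assumptions; the patent states only that a smaller turning radius yields a smaller enlargement amount:

```python
def enlargement_for_turn(dy, turn_radius, y0fs_max=0.5, r_min=30.0, r_max=300.0):
    """Reduce the enlargement amount Δy0fs as the turning radius gets
    smaller, per the fifth embodiment. Linear interpolation between an
    assumed tight-curve radius r_min and an effectively straight r_max."""
    base = min(dy, y0fs_max)       # Δy-based amount, as in the first embodiment
    if turn_radius >= r_max:       # effectively straight: no reduction
        return base
    r = max(turn_radius, r_min)
    factor = (r - r_min) / (r_max - r_min)  # 0 on tight curves, 1 when straight
    return base * factor
```

On a straight road the Δy-based enlargement is applied unchanged; on a tight curve it shrinks toward zero, keeping the areas within the adjacent lane.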
- FIG. 25 is a top view showing the traveling state of the vehicle when the vehicle is turning, and shows an example in which the region setting unit 33b enlarges the detection region A1.
- when the host vehicle V is turning, if the detection area A1 is enlarged outward in the vehicle width direction as in the first embodiment, another vehicle V in the next adjacent lane may enter the detection area A1 and be erroneously detected as an adjacent vehicle.
- therefore, the region setting unit 33b reduces the enlargement amount Δy0fs of the detection areas A1 and A2 with respect to the vehicle width direction distance Δy as the turning radius of the host vehicle V becomes smaller.
- in addition to the above configuration, the area setting unit 33b may also adopt a configuration in which the enlargement amount Δy0fs of the detection areas A1 and A2 is reduced based on the lane width of the traveling lane of the host vehicle V, as described in the second embodiment.
- in this case, the smaller of the two enlargement amounts Δy0fs can be selected when enlarging the sizes of the detection areas A1 and A2 outward in the vehicle width direction.
- in the fifth embodiment as well, as in the first embodiment, the detection accuracy of the three-dimensional object can be improved, and a situation in which the three-dimensional object fails to be detected can be prevented. In addition, a situation in which the detection areas A1 and A2 are reduced excessively and cause missed detection can be prevented.
- further, in the fifth embodiment, when the turning state detection unit 38 determines that the host vehicle V is turning, the enlargement amount Δy0fs of the detection areas A1 and A2 is reduced as the turning radius becomes smaller. As a result, the detection areas A1 and A2 remain within the adjacent lane even when the host vehicle V is turning along a curve, so that erroneous detection of another vehicle V in the next adjacent lane as an adjacent vehicle can be effectively prevented.
- further, in the fifth embodiment, the enlargement amount Δy0fs of the detection areas A1 and A2 can be suppressed based on the turning state of the host vehicle V. For example, when it is detected that the host vehicle V is turning while traveling on a straight road, it can be judged that the smaller the turning radius, the higher the possibility that the host vehicle V is changing lanes.
- FIG. 27 is a top view showing the traveling state of the vehicle in the sixth embodiment, and shows an example in which the region setting unit 33b enlarges the detection region A1.
- in the sixth embodiment, the region setting unit 33b first shifts the detection regions A1 and A2 outward in the vehicle width direction based on the vehicle width direction distance Δy between the side surface of the host vehicle V and the lane marking, and then, based on the same vehicle width direction distance Δy, enlarges the detection areas A1 and A2 outward in the vehicle width direction. Details will be described below.
- FIG. 28(A) is a graph showing the relationship between the vehicle width direction distance Δy from the lane marking and the movement amount (shift amount Δy0fs1) when the detection areas A1 and A2 are shifted outward in the vehicle width direction.
- FIG. 28(B) is a graph showing the relationship between the vehicle width direction distance Δy from the lane marking and the amount of enlargement (enlargement amount Δy0fs2) when the detection areas A1 and A2 are enlarged outward in the vehicle width direction.
- as shown in FIG. 28(A), when the vehicle width direction distance Δy between the side surface of the host vehicle V and the lane marking is less than y3, the area setting unit 33b does not change the detection areas A1 and A2; when the vehicle width direction distance Δy is y3 or more and less than y4, it shifts the detection areas A1 and A2 outward in the vehicle width direction according to the vehicle width direction distance Δy.
- as shown in FIG. 28(B), when the vehicle width direction distance Δy between the side surface of the host vehicle V and the lane marking is y3 or more and less than y4, the detection areas A1 and A2 are not enlarged outward in the vehicle width direction.
- when the vehicle width direction distance Δy is y4 or more, as shown in FIG. 28(A), the area setting unit 33b shifts the detection areas A1 and A2 outward in the vehicle width direction by the predetermined shift amount Δy0fs1' and, in addition, enlarges the detection areas A1 and A2 outward in the vehicle width direction. More specifically, when the vehicle width direction distance Δy is y4 or more and less than y5, as shown in FIG. 28(B), the detection areas A1 and A2 are enlarged outward in the vehicle width direction according to the vehicle width direction distance Δy; when the vehicle width direction distance Δy is y5 or more, the detection areas A1 and A2 are enlarged outward in the vehicle width direction by the predetermined enlargement amount Δy0fs2'.
- that is, when the vehicle width direction distance Δy between the side surface of the host vehicle V and the lane marking becomes y3 or more, the detection areas A1 and A2 are shifted outward in the vehicle width direction according to the vehicle width direction distance Δy; then, when the vehicle width direction distance Δy becomes y4 or more, instead of shifting the detection areas A1 and A2 outward in the vehicle width direction, the detection areas A1 and A2 are enlarged outward in the vehicle width direction.
- in addition, the area setting unit 33b enlarges the detection areas A1 and A2 outward in the vehicle width direction according to the vehicle width direction distance Δy until it reaches y5; once the vehicle width direction distance Δy reaches y5, the outward enlargement of the detection areas A1 and A2 is stopped.
- conversely, when the vehicle width direction distance Δy falls below y5, the area setting unit 33b narrows the enlarged detection areas A1 and A2 inward in the vehicle width direction; then, when the vehicle width direction distance Δy falls below y4, the inward reduction of the detection areas A1 and A2 is stopped and, instead, the detection areas A1 and A2 are shifted inward in the vehicle width direction according to the vehicle width direction distance Δy. When the vehicle width direction distance Δy falls below y3, the inward shifting of the detection areas A1 and A2 is stopped.
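The threshold behavior described with y3, y4, and y5 can be summarized as a piecewise function returning the shift and expansion of the detection areas. All numeric values and names below are illustrative assumptions; only the ordering y3 < y4 < y5 and the branch structure come from the text:

```python
Y3, Y4, Y5 = 0.3, 0.6, 0.9   # [m] assumed thresholds y3 < y4 < y5
SHIFT_MAX = 0.3              # Δy0fs1': shift amount reached at Δy = y4
EXPAND_MAX = 0.3             # Δy0fs2': expansion amount reached at Δy = y5

def area_offsets(dy):
    """Return (shift, expansion) of detection areas A1/A2 outward in the
    vehicle width direction for a given Δy, per the sixth embodiment:
    below y3 nothing changes; between y3 and y4 the areas shift; between
    y4 and y5 the shift is held at Δy0fs1' and the areas expand; above y5
    the expansion is held at Δy0fs2'."""
    if dy < Y3:
        return 0.0, 0.0
    if dy < Y4:
        return SHIFT_MAX * (dy - Y3) / (Y4 - Y3), 0.0
    if dy < Y5:
        return SHIFT_MAX, EXPAND_MAX * (dy - Y4) / (Y5 - Y4)
    return SHIFT_MAX, EXPAND_MAX
```

Because the function is monotone in Δy, the same mapping also describes the reverse (narrow, then shift back) behavior as Δy decreases.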
- further, the region setting unit 33b has, for each of the shift amount Δy0fs1 and the enlargement amount Δy0fs2 of the detection regions A1 and A2, a limit value (specified value) on the amount of change for each situation: normal operation (when the host vehicle V travels straight and no three-dimensional object is detected), when the host vehicle V changes lanes, and when a three-dimensional object is detected. The region setting unit 33b gradually shifts the detection areas A1 and A2 outward in the vehicle width direction within a range not exceeding the limit value of Δy0fs1 corresponding to the current situation, and gradually enlarges the sizes of the detection areas A1 and A2 outward in the vehicle width direction within a range not exceeding the limit value of Δy0fs2. The limit value for each situation described above is applied regardless of the turning state of the host vehicle and the lane width of the traveling lane.
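The per-situation limit on how fast the areas may change can be sketched as a simple rate limiter. The limit values, situation keys, and function name are assumptions for illustration; the text only names the three situations and says each has its own limit:

```python
# Assumed per-cycle change limits for the shift/expansion amounts,
# one per situation named in the text.
LIMITS = {
    "normal": 0.05,          # straight travel, no three-dimensional object
    "lane_change": 0.02,
    "object_detected": 0.01,
}

def step_toward(current, target, situation):
    """Move the current shift (or expansion) amount toward its target
    without exceeding the limit value for the current situation."""
    limit = LIMITS[situation]
    delta = max(-limit, min(limit, target - current))
    return current + delta
```

Called once per processing cycle, this makes the areas drift gradually toward the values from the Δy mapping, more cautiously while a three-dimensional object is being detected.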
- in the sixth embodiment as well, as in the first embodiment, the detection accuracy of the three-dimensional object can be improved, and a situation in which the three-dimensional object fails to be detected can be prevented. In addition, a situation in which the detection areas A1 and A2 are reduced excessively and cause missed detection can be prevented.
- in addition, in the sixth embodiment, the detection areas A1 and A2 are first shifted outward in the vehicle width direction based on the vehicle width direction distance Δy between the side surface of the host vehicle V and the lane marking; then, when the vehicle width direction distance Δy exceeds the predetermined value and the shift amount of the detection areas A1 and A2 reaches a predetermined amount, the detection areas A1 and A2 are enlarged outward in the vehicle width direction instead of being shifted. The following effects can thereby be achieved.
- when a three-dimensional object such as another vehicle traveling in the adjacent lane is detected in the detection areas A1 and A2, the moving speed of the three-dimensional object tends to be calculated as faster than its actual moving speed in the region of the detection areas A1 and A2 located on the outer side in the vehicle width direction, and as slower than its actual moving speed in the region located on the inner side in the vehicle width direction. Therefore, if the detection areas A1 and A2 are excessively expanded outward in the vehicle width direction based on the vehicle width direction distance Δy, the calculated moving speed of the detected three-dimensional object varies according to its detection position within the detection areas A1 and A2.
- in contrast, in the sixth embodiment, by first shifting the detection areas A1 and A2 outward in the vehicle width direction based on the vehicle width direction distance Δy between the side surface of the host vehicle V and the lane marking, such variation in the detected moving speed of the three-dimensional object can be suppressed, and the three-dimensional object (adjacent vehicle) can be detected appropriately.
- on the other hand, if the detection areas A1 and A2 are shifted too far outward in the vehicle width direction, a two-wheeled vehicle or the like traveling close to the host vehicle V in the vehicle width direction may not enter the detection areas A1 and A2 and may therefore fail to be detected.
- therefore, when the shift amount for shifting the detection areas A1 and A2 outward in the vehicle width direction reaches or exceeds a predetermined amount, the detection areas A1 and A2 are enlarged outward in the vehicle width direction instead of being shifted, so that such a nearby two-wheeled vehicle can still be detected.
- FIG. 29 is a block diagram showing details of the computer 30 according to the seventh embodiment.
- the camera 10 and the vehicle speed sensor 20 are also illustrated in order to clarify the connection relationship.
- the computer 30 includes a foreign matter detection unit (foreign matter detection means) 39.
- the foreign matter detection unit 39 detects foreign matter, such as raindrops and scale, adhering to the lens based on the captured image captured by the camera 10.
- for example, the foreign matter detection unit 39 irradiates infrared light toward the lens and detects the amount of raindrops attached to the lens from the amount by which the irradiated infrared light is attenuated by the raindrops. Alternatively, the amount of raindrops attached to the lens may be detected from the operating intensity of the wipers. The detected amount of raindrops is output to the region setting unit 33b.
- the foreign matter detection unit 39 is not limited to one that detects raindrops; it may be one that detects, for example, scale or mud adhering to the lens.
- for example, the foreign matter detection unit 39 may extract edges of the subject from the captured image and determine the sharpness of the image from the characteristics of the extracted edges, thereby detecting the amount of foreign matter adhering to the lens by determining the degree to which the lens is clouded (a thin white film is formed on the lens surface). Alternatively, when an edge of the same intensity is detected in the same region of the captured image over a certain period of time, the foreign matter detection unit 39 may determine that foreign matter has adhered to that region and thereby detect the foreign matter adhering to the lens.
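The edge-persistence criterion (an edge of the same intensity staying in the same region over time suggests something stuck to the lens) can be sketched as below. All parameters, the grid representation, and the function name are assumptions for illustration:

```python
def detect_stuck_regions(edge_frames, tolerance=2, min_frames=5):
    """edge_frames: list of per-frame edge-intensity grids (lists of lists).
    A cell is flagged as foreign matter if its edge intensity stays within
    `tolerance` of its first value for at least `min_frames` consecutive
    frames. A sketch of the idea only, not the patent's implementation."""
    if len(edge_frames) < min_frames:
        return set()
    rows, cols = len(edge_frames[0]), len(edge_frames[0][0])
    stuck = set()
    for r in range(rows):
        for c in range(cols):
            first = edge_frames[0][r][c]
            # an edge (non-zero intensity) that never changes is suspicious:
            # scene edges move as the vehicle moves, lens debris does not
            if first > 0 and all(
                abs(f[r][c] - first) <= tolerance for f in edge_frames[:min_frames]
            ):
                stuck.add((r, c))
    return stuck
```

The design exploits the fact that, from a moving vehicle, genuine scene edges shift between frames while debris on the lens produces a stationary edge.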
- in the seventh embodiment, the area setting unit 33b enlarges the detection areas A1 and A2 outward in the vehicle width direction based on the vehicle width direction distance Δy between the side surface of the host vehicle V and the lane marking, as in the first embodiment, and further changes the enlargement amount Δy0fs used when enlarging the detection areas A1 and A2, based on the amount of foreign matter detected by the foreign matter detection unit 39.
- FIG. 30 is a diagram illustrating the relationship between the amount of foreign matter attached to the lens and the size of the detection areas A1 and A2 (enlargement amount ⁇ y 0 fs).
- as shown in FIG. 30, the area setting unit 33b decreases the enlargement amount Δy0fs used when enlarging the detection areas A1 and A2 as the amount of foreign matter attached to the lens increases.
- specifically, when the amount of foreign matter adhering to the lens is less than q1, the area setting unit 33b leaves the enlargement amount Δy0fs of the detection areas A1 and A2 at the initial enlargement amount based on the vehicle width direction distance Δy; when the amount of foreign matter adhering to the lens is q1 or more and less than q2, the enlargement amount Δy0fs of the detection areas A1 and A2 is reduced as the amount of foreign matter increases; and when the amount of foreign matter adhering to the lens exceeds q2, the enlargement amount of the detection areas A1 and A2 is set to the smallest predetermined value Δy0fs''.
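The q1/q2 behavior just described can be written as a piecewise function. The thresholds, the minimum value, the linear interpolation between them, and the function name are illustrative assumptions:

```python
Q1, Q2 = 10.0, 50.0   # assumed foreign matter amount thresholds q1, q2
Y0FS_MIN = 0.05       # assumed smallest predetermined value Δy0fs''

def enlargement_with_fouling(base_amount, foreign_amount):
    """Reduce the enlargement amount Δy0fs of A1/A2 according to the
    amount of foreign matter on the lens, per the seventh embodiment:
    unchanged below q1, reduced between q1 and q2 as fouling grows,
    and clamped to the minimum value Δy0fs'' above q2."""
    if foreign_amount < Q1:
        return base_amount
    if foreign_amount <= Q2:
        t = (foreign_amount - Q1) / (Q2 - Q1)
        return base_amount + t * (Y0FS_MIN - base_amount)
    return Y0FS_MIN
```

The clamp at Δy0fs'' keeps some enlargement even with a badly fouled lens, so the areas are never reduced to the point of guaranteed missed detection.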
- in the seventh embodiment as well, as in the first embodiment, the detection accuracy of the three-dimensional object can be improved, and a situation in which the three-dimensional object fails to be detected can be prevented. In addition, a situation in which the detection areas A1 and A2 are reduced excessively and cause missed detection can be prevented.
- further, in the seventh embodiment, the amount of foreign matter attached to the lens of the camera 10 is detected, and the enlargement amount Δy0fs of the detection areas A1 and A2 is made smaller as the amount of foreign matter becomes larger.
- when the amount of foreign matter attached to the lens increases, part of the light from the subject is blocked or diffusely reflected by the foreign matter; as a result, the image of the lane marking captured by the camera 10 may be distorted or blurred, and the detection accuracy of the lane marking may decrease.
- even in such a case, by reducing the enlargement amount Δy0fs of the detection areas A1 and A2, it is possible to effectively prevent the detection areas A1 and A2 from being enlarged too much due to detection errors in the vehicle width direction distance Δy between the side surface of the host vehicle V and the lane marking, and thus to prevent other vehicles traveling in the next adjacent lane or grass outside the road from being erroneously detected as adjacent vehicles.
- FIG. 31 is a block diagram showing details of the computer 30 according to the eighth embodiment.
- the camera 10 and the vehicle speed sensor 20 are also illustrated in order to clarify the connection relationship.
- the computer 30 includes a lane marking type specifying unit (lane marking type specifying means) 40.
- the lane marking type identification unit 40 identifies the lane marking type based on the captured image captured by the camera 10.
- the method for specifying the type of the lane marking is not particularly limited. For example, the lane marking type specifying unit 40 can identify the type of the lane marking by performing pattern matching on the lane marking captured by the camera 10. Alternatively, the lane marking type specifying unit 40 can identify the type of the lane marking by identifying the road (lane) on which the host vehicle V travels from map information, such as that of a navigation system, and the current position information of the host vehicle V.
- in the eighth embodiment, the area setting unit 33b enlarges the detection areas A1 and A2 outward in the vehicle width direction based on the vehicle width direction distance Δy between the side surface of the host vehicle V and the lane marking, as in the first embodiment, and further changes the enlargement amount Δy0fs used when enlarging the detection areas A1 and A2, based on the type of lane marking specified by the lane marking type specifying unit 40.
- FIG. 32 is a diagram showing the relationship between the type of the dividing line and the sizes of the detection areas A1 and A2.
- as shown in FIG. 32, the area setting unit 33b makes the enlargement amount Δy0fs of the detection areas A1 and A2 smaller as the possibility that the specified lane marking is one that separates the traveling lane of the host vehicle V from an adjacent lane becomes lower.
- in the present embodiment, the area setting unit 33b can specify four types of lane markings: a dashed white line, a solid white line, a yellow line, and a multiple line.
- when the lane marking is a dashed white line, the area setting unit 33b determines that, among the four types of lane markings, the possibility that an adjacent lane exists next to the traveling lane of the host vehicle V is the highest, and sets the enlargement amount Δy0fs of the detection areas A1 and A2 to the largest value among the four types.
- when the lane marking is a solid white line, the area setting unit 33b determines that there is a possibility that an adjacent lane exists next to the traveling lane of the host vehicle V, and sets an enlargement amount Δy0fs of the detection areas A1 and A2 that is smaller than that for the dashed white line but larger than those for the other lane markings.
- when the lane marking is a yellow line, the area setting unit 33b determines that the possibility that an adjacent lane exists next to the traveling lane of the host vehicle V is low, and makes the enlargement amount Δy0fs of the detection areas A1 and A2 smaller than those for the white lines. When the lane marking is a multiple line, it determines that this possibility is the lowest and sets the enlargement amount Δy0fs of the detection areas A1 and A2 to the smallest value among the four lane marking types.
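The ordering just described can be captured as a simple lookup table. The numeric amounts and names are illustrative assumptions; only the relative ordering (dashed white > solid white > yellow > multiple) comes from the text:

```python
# Assumed enlargement amounts Δy0fs per lane marking type, largest where
# an adjacent lane is most likely. Values are illustrative only.
ENLARGEMENT_BY_MARKING = {
    "dashed_white": 0.50,
    "solid_white": 0.35,
    "yellow": 0.15,
    "multiple": 0.05,
}

def enlargement_for_marking(marking_type):
    """Return the enlargement amount for the specified lane marking type."""
    return ENLARGEMENT_BY_MARKING[marking_type]
```

A table keeps the policy explicit and trivially extensible if further marking types (e.g. botts' dots) were added.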
- in the eighth embodiment as well, as in the first embodiment, the detection accuracy of the three-dimensional object can be improved, and a situation in which the three-dimensional object fails to be detected can be prevented. In addition, a situation in which the detection areas A1 and A2 are reduced excessively and cause missed detection can be prevented.
- in particular, in the eighth embodiment, the type of the lane marking is specified, and the enlargement amount Δy0fs of the detection areas A1 and A2 is varied based on the specified lane marking type.
- when the type of the lane marking is a yellow line or a multiple line, there is a high possibility that no adjacent lane exists; if the detection areas A1 and A2 are enlarged in such a case, the possibility of detecting grass or noise outside the road increases, and the detection accuracy of the three-dimensional object (such as another vehicle) decreases. Therefore, when the lane marking type is a yellow line or a multiple line, the detection of grass and noise outside the road is effectively suppressed by suppressing the enlargement of the detection areas A1 and A2.
- on the other hand, when the type of the lane marking is a dashed white line or a solid white line, an adjacent lane is likely to exist, and by enlarging the detection areas A1 and A2, the three-dimensional object (another vehicle V) in the adjacent lane can be detected appropriately.
- in the above-described third embodiment, a configuration was exemplified in which the enlargement amount applied when the sizes of the detection areas A1 and A2 are enlarged is reduced for a certain period of time when a lane change of the host vehicle V is detected. In addition to this configuration, the moving speed of the host vehicle V in the vehicle width direction may be calculated when a lane change of the host vehicle V is detected, and the enlargement amount Δy0fs applied when the detection areas A1 and A2 are enlarged may be made smaller as the moving speed of the host vehicle V in the vehicle width direction becomes higher.
- the calculation method of the moving speed of the own vehicle V in the vehicle width direction is not particularly limited.
- for example, the moving speed of the host vehicle V in the vehicle width direction can be calculated based on the time change of the vehicle width direction distance Δy from the side surface of the host vehicle V to the lane marking detected by the lateral position detection unit 34, or by using a lateral acceleration sensor (not shown).
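The first option (time change of Δy) amounts to a finite difference over recent samples. The sampling scheme, sign convention, and function name below are assumptions for illustration:

```python
def lateral_speed(dy_samples, dt):
    """Estimate the host vehicle's vehicle-width-direction moving speed
    from the time change of the distance Δy to the lane marking, sampled
    every `dt` seconds. A plain finite difference over the window;
    smoothing is omitted for brevity. Negative means moving toward the
    marking (Δy shrinking)."""
    if len(dy_samples) < 2:
        return 0.0
    return (dy_samples[-1] - dy_samples[0]) / ((len(dy_samples) - 1) * dt)
```

In practice the samples would be the Δy values reported by the lateral position detection unit on successive frames; a few samples are averaged implicitly by differencing over the whole window rather than adjacent frames.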
- further, in the above-described sixth embodiment, a configuration was exemplified in which the detection areas A1 and A2 are enlarged outward in the vehicle width direction after being shifted outward in the vehicle width direction based on the vehicle width direction distance Δy from the side surface of the host vehicle V to the lane marking; however, the present invention is not limited to this configuration. For example, the detection areas A1 and A2 may be shifted and enlarged outward in the vehicle width direction at the same time.
- in the above-described embodiments, the vehicle speed of the host vehicle V is determined based on a signal from the vehicle speed sensor 20; however, the present invention is not limited to this, and the speed may be estimated from a plurality of images captured at different times. In this case, the vehicle speed sensor becomes unnecessary and the configuration can be simplified. Further, the vehicle behavior may be determined from the images alone.
- in the above-described embodiments, the captured image at the current time and the image one time step earlier are converted into bird's-eye views, the converted bird's-eye views are aligned, the difference image PDt is generated, and the generated difference image PDt is evaluated along the falling direction (the direction in which the three-dimensional object falls when the captured image is converted into a bird's-eye view) to generate the differential waveform DWt; however, the present invention is not limited to this. For example, the differential waveform DWt may be generated by evaluating the difference image along the direction corresponding to the falling direction (that is, the direction obtained by converting the falling direction into a direction on the captured image). In other words, as long as the difference image PDt is generated from the difference between the two aligned images and can be evaluated along the falling direction of the three-dimensional object, a bird's-eye view need not necessarily be generated explicitly.
- in the above-described embodiments, the three-dimensional object detection devices 1 to 3 detect a three-dimensional object based on the differential waveform DWt; however, the present invention is not limited to this. For example, a three-dimensional object may be detected using an optical flow or an image template, or the three-dimensional object may be detected using the difference image PDt without generating the differential waveform DWt.
- further, in the above-described embodiments, the sizes of the detection areas A1 and A2 themselves are changed when the detection areas are enlarged; however, the present invention is not limited to this, and an enlarged area separate from the detection areas A1 and A2 may be set instead.
- in the above-described embodiments, the alignment unit 32 aligns the positions of bird's-eye view images captured at different times in the bird's-eye view coordinate system. This "alignment" processing can be performed with an accuracy according to the type of detection target and the required detection accuracy. For example, exact alignment based on the same time and the same position may be performed, or looser alignment sufficient to grasp the coordinates of each bird's-eye view image may be performed.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Traffic Control Systems (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Hereinafter, preferred embodiments of the present invention will be described with reference to the drawings. FIG. 1 is a schematic configuration diagram of a three-dimensional object detection device 1 according to the present embodiment, showing an example in which the three-dimensional object detection device 1 is mounted on a vehicle V. The three-dimensional object detection device 1 shown in FIG. 1 detects a three-dimensional object (for example, another vehicle, a two-wheeled vehicle, or the like) traveling in an adjacent lane that adjoins, across a division line serving as a boundary, the traveling lane in which the host vehicle V travels, and provides various information to the driver of the host vehicle V. It includes a camera (imaging means) 10, a vehicle speed sensor 20, and a computer 30. In the following, the traveling lane is the travel band in which the host vehicle V can travel when no lane change is made, excluding the division lines. Similarly, the adjacent lane is the travel band adjacent to the traveling lane across a division line, excluding the division lines. A division line is a line, such as a white line, that forms the boundary between the traveling lane and the adjacent lane.
Next, a second embodiment of the present invention will be described. The three-dimensional object detection device and three-dimensional object detection method according to the second embodiment are the same as those of the first embodiment, except that part of the configuration and processing differs. The differences from the first embodiment are described below.
Next, a third embodiment of the present invention will be described. The three-dimensional object detection device and three-dimensional object detection method according to the third embodiment are the same as those of the first embodiment, except that part of the configuration and processing differs. The differences from the first embodiment are described below.
Next, a fourth embodiment of the present invention will be described. The three-dimensional object detection device and three-dimensional object detection method according to the fourth embodiment are the same as those of the first embodiment, except that part of the configuration and processing differs. The differences from the first embodiment are described below.
Next, a fifth embodiment of the present invention will be described. The three-dimensional object detection device and three-dimensional object detection method according to the fifth embodiment are the same as those of the first embodiment, except that part of the configuration and processing differs. The differences from the first embodiment are described below.
Next, a sixth embodiment of the present invention will be described. The three-dimensional object detection device and three-dimensional object detection method according to the sixth embodiment are the same as those of the first embodiment, except that part of the processing differs. The differences from the first embodiment are described below.
Next, a seventh embodiment of the present invention will be described. The three-dimensional object detection device and three-dimensional object detection method according to the seventh embodiment are the same as those of the first embodiment, except that part of the configuration and processing differs. The differences from the first embodiment are described below.
Next, an eighth embodiment of the present invention will be described. The three-dimensional object detection device and three-dimensional object detection method according to the eighth embodiment are the same as those of the first embodiment, except that part of the configuration and processing differs. The differences from the first embodiment are described below.
The present invention is not limited to the embodiments described above; modifications may be made without departing from the spirit of the invention, and the embodiments may be combined.
10 … Camera (imaging means)
20 … Vehicle speed sensor
30 … Computer
31 … Viewpoint conversion unit
32 … Alignment unit
33 … Three-dimensional object detection unit (three-dimensional object detection means)
33a … Differential waveform generation unit
33b … Area setting unit (area setting means)
34 … Lateral position detection unit (lateral position detection means)
35 … Lane width detection unit (width detection means)
36 … Lane change detection unit (lane change detection means)
37 … Ground contact line detection unit (ground contact line detection means)
38 … Turning state detection unit (turning state detection means)
39 … Foreign matter detection unit (foreign matter detection means)
40 … Division line type identification unit (division line type identification means)
a … Angle of view
A1, A2 … Detection areas
CP … Intersection
DP … Difference pixel
DWt, DWt' … Differential waveform
La, Lb … Lines in the direction in which a three-dimensional object collapses
PB … Bird's-eye image
PD … Difference image
V … Host vehicle, other vehicle
Δy … Distance in the vehicle width direction
Claims (12)
- 1. A three-dimensional object detection device that detects a three-dimensional object traveling in an adjacent lane adjoining, across a division line serving as a boundary, the traveling lane in which a host vehicle travels, the device comprising:
imaging means mounted on the host vehicle for capturing an image including the division line and a predetermined area of the adjacent lane;
three-dimensional object judgment means for judging whether a three-dimensional object is present in the predetermined area captured by the imaging means;
lateral position detection means for detecting, from the image captured by the imaging means, the vehicle-width-direction distance between the position of the host vehicle in its traveling lane and the division line;
area setting means for enlarging, outward in the vehicle width direction, the size of the predetermined area located on the side where the division line exists as the vehicle-width-direction distance to the division line detected by the lateral position detection means becomes longer; and
lane change detection means for detecting a lane change of the host vehicle,
wherein the area setting means reduces the enlargement amount applied when enlarging the predetermined area outward in the vehicle width direction when a lane change of the host vehicle is detected by the lane change detection means.
- 2. The three-dimensional object detection device according to claim 1, wherein, when a lane change of the host vehicle is detected by the lane change detection means, the area setting means calculates the moving speed of the host vehicle in the vehicle width direction and reduces the enlargement amount as the moving speed in the vehicle width direction is higher.
- 3. The three-dimensional object detection device according to claim 1 or 2, further comprising width detection means for detecting the lane width of the traveling lane of the host vehicle or of the adjacent lane, wherein the area setting means reduces the enlargement amount as the lane width detected by the width detection means becomes smaller.
- 4. The three-dimensional object detection device according to any one of claims 1 to 3, wherein, when enlarging the predetermined area, the area setting means enlarges it by a predetermined enlargement increment over a plurality of processing cycles, and, when returning the enlarged predetermined area to its original size, shrinks it inward in the vehicle width direction by an increment smaller than the enlargement increment over a plurality of processing cycles.
- 5. The three-dimensional object detection device according to any one of claims 1 to 4, wherein the area setting means makes the increment smaller while a three-dimensional object is being detected than while no three-dimensional object is being detected.
- 6. The three-dimensional object detection device according to any one of claims 1 to 5, further comprising ground contact line detection means for detecting the ground contact line of a three-dimensional object traveling in the adjacent lane, wherein the area setting means reduces the enlargement amount as the vehicle-width-direction distance from the host vehicle to the ground contact line is shorter.
- 7. The three-dimensional object detection device according to any one of claims 1 to 6, further comprising turning state detection means for detecting the turning state of the host vehicle, wherein the area setting means reduces the enlargement amount as the turning radius of the host vehicle detected by the turning state detection means is smaller.
- 8. The three-dimensional object detection device according to any one of claims 1 to 7, wherein, when the vehicle-width-direction distance is equal to or greater than a predetermined value, the area setting means moves the predetermined area outward in the vehicle width direction and enlarges the size of the predetermined area outward in the vehicle width direction.
- 9. The three-dimensional object detection device according to claim 8, wherein the area setting means enlarges the size of the predetermined area outward in the vehicle width direction after moving the predetermined area outward in the vehicle width direction.
- 10. The three-dimensional object detection device according to any one of claims 1 to 9, further comprising foreign matter detection means for detecting foreign matter adhering to the lens of the imaging means, wherein the area setting means reduces the enlargement amount as the amount of foreign matter detected by the foreign matter detection means is larger.
- 11. The three-dimensional object detection device according to any one of claims 1 to 10, further comprising division line type identification means for identifying the type of the division line, wherein the area setting means varies the enlargement amount based on the type of the division line identified by the division line type identification means.
- 12. A three-dimensional object detection method for detecting a three-dimensional object traveling in an adjacent lane adjoining, across a division line serving as a boundary, the traveling lane in which a host vehicle travels, the method comprising:
an imaging step of capturing, from the host vehicle, an image including the division line and a predetermined area of the adjacent lane;
a three-dimensional object judgment step of judging whether a three-dimensional object is present in the predetermined area captured in the imaging step;
a lateral position detection step of detecting, from the image obtained in the imaging step, the vehicle-width-direction distance between the position of the host vehicle in its traveling lane and the division line;
a lane change detection step of detecting a lane change of the host vehicle; and
an area setting step of enlarging, outward in the vehicle width direction, the size of the predetermined area located on the side where the division line exists as the vehicle-width-direction distance to the division line detected in the lateral position detection step becomes longer, and of reducing the enlargement amount applied when enlarging the predetermined area outward in the vehicle width direction when a lane change of the host vehicle is detected.
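The area-setting behavior recited above can be sketched as follows. All numeric parameters (gain, lane-change factor, step sizes) are illustrative assumptions, not values from the patent; the sketch only shows the two recited tendencies: the detection area widens outward as the vehicle-width-direction distance to the division line grows, the enlargement is reduced during a lane change, and the area widens in larger increments than it shrinks.

```python
def detection_area_width(base_width, lat_dist_m, lane_changing,
                         gain=0.5, lane_change_factor=0.3):
    """Target width of the detection area (hypothetical parameters).

    The area is widened outward in the vehicle width direction in
    proportion to the host vehicle's distance to the division line;
    during a lane change the enlargement amount is reduced so the area
    does not reach into the next-but-one lane.
    """
    enlargement = gain * lat_dist_m
    if lane_changing:
        enlargement *= lane_change_factor  # smaller enlargement amount
    return base_width + enlargement

def step_toward(current, target, expand_step=0.2, shrink_step=0.05):
    """Per-cycle update in the style of claim 4: the area is enlarged by
    a larger increment than the one used to shrink it back (step sizes
    are illustrative)."""
    if current < target:
        return min(current + expand_step, target)
    return max(current - shrink_step, target)
```

Calling `step_toward` once per processing cycle moves the current width toward the target over several cycles, expanding quickly and contracting slowly, which matches the asymmetry recited in claim 4.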
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP12819626.8A EP2741271B8 (en) | 2011-08-02 | 2012-07-27 | Object detector and object detection method |
RU2014107926/11A RU2563534C1 (ru) | 2011-08-02 | 2012-07-27 | Устройство обнаружения сплошных объектов и способ обнаружения сплошных объектов |
CN201280037841.9A CN103718224B (zh) | 2011-08-02 | 2012-07-27 | 三维物体检测装置和三维物体检测方法 |
BR112014001824-3A BR112014001824B1 (pt) | 2011-08-02 | 2012-07-27 | dispositivo e método de detecção de objeto sólido |
MX2014000652A MX2014000652A (es) | 2011-08-02 | 2012-07-27 | Dispositivo de deteccion de objeto solido y metodo de deteccion de objeto solido. |
US14/233,404 US9092676B2 (en) | 2011-08-02 | 2012-07-27 | Object detector and object detection method |
JP2013526868A JP5761349B2 (ja) | 2011-08-02 | 2012-07-27 | 立体物検出装置及び立体物検出方法 |
MYPI2013004517A MY183995A (en) | 2011-08-02 | 2012-07-27 | Object detector device and object detection method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011168904 | 2011-08-02 | ||
JP2011-168904 | 2011-08-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013018673A1 true WO2013018673A1 (ja) | 2013-02-07 |
Family
ID=47629197
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/069094 WO2013018673A1 (ja) | 2011-08-02 | 2012-07-27 | 立体物検出装置及び立体物検出方法 |
Country Status (9)
Country | Link |
---|---|
US (1) | US9092676B2 (ja) |
EP (1) | EP2741271B8 (ja) |
JP (1) | JP5761349B2 (ja) |
CN (1) | CN103718224B (ja) |
BR (1) | BR112014001824B1 (ja) |
MX (1) | MX2014000652A (ja) |
MY (1) | MY183995A (ja) |
RU (1) | RU2563534C1 (ja) |
WO (1) | WO2013018673A1 (ja) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014017434A1 (ja) * | 2012-07-27 | 2014-01-30 | クラリオン株式会社 | 画像処理装置 |
CN103942960A (zh) * | 2014-04-22 | 2014-07-23 | 深圳市宏电技术股份有限公司 | 一种车辆变道检测方法及装置 |
WO2015008566A1 (ja) * | 2013-07-18 | 2015-01-22 | クラリオン株式会社 | 車載装置 |
EP2927710A3 (en) * | 2014-03-13 | 2015-12-30 | Ricoh Company, Ltd. | Ranging system, information processing apparatus, information processing method and program thereof |
US11077853B2 (en) * | 2017-09-29 | 2021-08-03 | Mando Corporation | Apparatus and method for controlling lane-keeping |
WO2022176245A1 (ja) * | 2021-02-22 | 2022-08-25 | 日立Astemo株式会社 | 後方監視装置 |
WO2023157721A1 (ja) * | 2022-02-17 | 2023-08-24 | 株式会社デンソー | 車両制御装置、車両制御方法 |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5916444B2 (ja) * | 2012-03-08 | 2016-05-11 | 日立建機株式会社 | 鉱山用車両 |
JP6132359B2 (ja) * | 2014-10-20 | 2017-05-24 | 株式会社Soken | 走行区画線認識装置 |
JP6532229B2 (ja) * | 2014-12-18 | 2019-06-19 | 株式会社デンソーテン | 物体検出装置、物体検出システム、物体検出方法及びプログラム |
JP6581379B2 (ja) * | 2015-03-31 | 2019-09-25 | 株式会社デンソー | 車両制御装置、及び車両制御方法 |
US20170248958A1 (en) * | 2016-02-25 | 2017-08-31 | Delphi Technologies, Inc. | Adjacent lane verification for an automated vehicle |
MX2018011509A (es) * | 2016-03-24 | 2019-01-10 | Nissan Motor | Metodo de deteccion de carril de circulacion y dispositivo de deteccion de carril de circulacion. |
US9910440B2 (en) * | 2016-05-13 | 2018-03-06 | Delphi Technologies, Inc. | Escape-path-planning system for an automated vehicle |
CN109804421A (zh) * | 2016-10-07 | 2019-05-24 | 日产自动车株式会社 | 车辆判断方法、行驶路径修正方法、车辆判断装置及行驶路径修正装置 |
CN110023712A (zh) * | 2017-02-28 | 2019-07-16 | 松下知识产权经营株式会社 | 位移计测装置以及位移计测方法 |
JP6530782B2 (ja) * | 2017-06-09 | 2019-06-12 | 株式会社Subaru | 車両制御装置 |
US10576894B2 (en) * | 2018-06-04 | 2020-03-03 | Fca Us Llc | Systems and methods for controlling vehicle side mirrors and for displaying simulated driver field of view |
JP7156225B2 (ja) * | 2019-09-20 | 2022-10-19 | 株式会社デンソーテン | 付着物検出装置および付着物検出方法 |
CN114078246A (zh) * | 2020-08-11 | 2022-02-22 | 华为技术有限公司 | 确定检测对象的三维信息的方法及装置 |
CN118050054A (zh) * | 2024-04-16 | 2024-05-17 | 中国民用航空飞行学院 | 基于智慧数据分析的飞行安全实时监控***及方法 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000149197A (ja) | 1998-11-17 | 2000-05-30 | Toyota Motor Corp | 車両周辺監視装置 |
JP2003276542A (ja) * | 2002-03-20 | 2003-10-02 | Nissan Motor Co Ltd | 車両用後方監視装置 |
JP2004331023A (ja) * | 2003-05-12 | 2004-11-25 | Nissan Motor Co Ltd | 車両用運転操作補助装置およびその装置を備えた車両 |
JP2008100554A (ja) * | 2006-10-17 | 2008-05-01 | Yamaha Motor Co Ltd | 後方視認装置 |
JP2008219063A (ja) | 2007-02-28 | 2008-09-18 | Sanyo Electric Co Ltd | 車両周辺監視装置及び方法 |
JP2009116723A (ja) * | 2007-11-08 | 2009-05-28 | Denso Corp | 車線変更支援装置 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000161915A (ja) * | 1998-11-26 | 2000-06-16 | Matsushita Electric Ind Co Ltd | 車両用単カメラ立体視システム |
DE10218010A1 (de) * | 2002-04-23 | 2003-11-06 | Bosch Gmbh Robert | Verfahren und Vorrichtung zur Querführungsunterstützung bei Kraftfahrzeugen |
US8130269B2 (en) * | 2005-03-23 | 2012-03-06 | Aisin Aw Co., Ltd. | Visual recognition apparatus, methods, and programs for vehicles |
JP4855158B2 (ja) * | 2006-07-05 | 2012-01-18 | 本田技研工業株式会社 | 運転支援装置 |
JP4420011B2 (ja) * | 2006-11-16 | 2010-02-24 | 株式会社日立製作所 | 物体検知装置 |
DE102007044535B4 (de) * | 2007-09-18 | 2022-07-14 | Bayerische Motoren Werke Aktiengesellschaft | Verfahren zur Fahrerinformation in einem Kraftfahrzeug |
JP5359085B2 (ja) * | 2008-03-04 | 2013-12-04 | 日産自動車株式会社 | 車線維持支援装置及び車線維持支援方法 |
KR101356201B1 (ko) * | 2008-09-19 | 2014-01-24 | 현대자동차주식회사 | 차량용 후측방 감지시스템 |
-
2012
- 2012-07-27 EP EP12819626.8A patent/EP2741271B8/en active Active
- 2012-07-27 CN CN201280037841.9A patent/CN103718224B/zh active Active
- 2012-07-27 MY MYPI2013004517A patent/MY183995A/en unknown
- 2012-07-27 JP JP2013526868A patent/JP5761349B2/ja active Active
- 2012-07-27 RU RU2014107926/11A patent/RU2563534C1/ru active
- 2012-07-27 WO PCT/JP2012/069094 patent/WO2013018673A1/ja active Application Filing
- 2012-07-27 MX MX2014000652A patent/MX2014000652A/es active IP Right Grant
- 2012-07-27 US US14/233,404 patent/US9092676B2/en active Active
- 2012-07-27 BR BR112014001824-3A patent/BR112014001824B1/pt active IP Right Grant
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000149197A (ja) | 1998-11-17 | 2000-05-30 | Toyota Motor Corp | 車両周辺監視装置 |
JP2003276542A (ja) * | 2002-03-20 | 2003-10-02 | Nissan Motor Co Ltd | 車両用後方監視装置 |
JP2004331023A (ja) * | 2003-05-12 | 2004-11-25 | Nissan Motor Co Ltd | 車両用運転操作補助装置およびその装置を備えた車両 |
JP2008100554A (ja) * | 2006-10-17 | 2008-05-01 | Yamaha Motor Co Ltd | 後方視認装置 |
JP2008219063A (ja) | 2007-02-28 | 2008-09-18 | Sanyo Electric Co Ltd | 車両周辺監視装置及び方法 |
JP2009116723A (ja) * | 2007-11-08 | 2009-05-28 | Denso Corp | 車線変更支援装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2741271A4 |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014017434A1 (ja) * | 2012-07-27 | 2014-01-30 | クラリオン株式会社 | 画像処理装置 |
JPWO2014017434A1 (ja) * | 2012-07-27 | 2016-07-11 | クラリオン株式会社 | 画像処理装置 |
US9721169B2 (en) | 2012-07-27 | 2017-08-01 | Clarion Co., Ltd. | Image processing device for detecting vehicle in consideration of sun position |
JPWO2015008566A1 (ja) * | 2013-07-18 | 2017-03-02 | クラリオン株式会社 | 車載装置 |
US10095934B2 (en) | 2013-07-18 | 2018-10-09 | Clarion Co., Ltd. | In-vehicle device |
WO2015008566A1 (ja) * | 2013-07-18 | 2015-01-22 | クラリオン株式会社 | 車載装置 |
CN105393293A (zh) * | 2013-07-18 | 2016-03-09 | 歌乐株式会社 | 车载装置 |
EP2927710A3 (en) * | 2014-03-13 | 2015-12-30 | Ricoh Company, Ltd. | Ranging system, information processing apparatus, information processing method and program thereof |
CN103942960B (zh) * | 2014-04-22 | 2016-09-21 | 深圳市宏电技术股份有限公司 | 一种车辆变道检测方法及装置 |
CN103942960A (zh) * | 2014-04-22 | 2014-07-23 | 深圳市宏电技术股份有限公司 | 一种车辆变道检测方法及装置 |
US11077853B2 (en) * | 2017-09-29 | 2021-08-03 | Mando Corporation | Apparatus and method for controlling lane-keeping |
WO2022176245A1 (ja) * | 2021-02-22 | 2022-08-25 | 日立Astemo株式会社 | 後方監視装置 |
JP7486657B2 (ja) | 2021-02-22 | 2024-05-17 | 日立Astemo株式会社 | 後方監視装置 |
WO2023157721A1 (ja) * | 2022-02-17 | 2023-08-24 | 株式会社デンソー | 車両制御装置、車両制御方法 |
Also Published As
Publication number | Publication date |
---|---|
EP2741271B1 (en) | 2019-04-03 |
CN103718224A (zh) | 2014-04-09 |
BR112014001824B1 (pt) | 2021-04-20 |
JP5761349B2 (ja) | 2015-08-12 |
EP2741271A4 (en) | 2015-08-19 |
RU2563534C1 (ru) | 2015-09-20 |
CN103718224B (zh) | 2016-01-13 |
US9092676B2 (en) | 2015-07-28 |
JPWO2013018673A1 (ja) | 2015-03-05 |
EP2741271A1 (en) | 2014-06-11 |
MX2014000652A (es) | 2014-04-30 |
EP2741271B8 (en) | 2019-09-11 |
BR112014001824A2 (pt) | 2017-02-21 |
MY183995A (en) | 2021-03-17 |
US20140147007A1 (en) | 2014-05-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5761349B2 (ja) | 立体物検出装置及び立体物検出方法 | |
JP5664787B2 (ja) | 移動体検出装置及び移動体検出方法 | |
EP2767927B1 (en) | Road surface information detection apparatus, vehicle device control system employing road surface information detection apparatus, and carrier medium of road surface information detection program | |
JP5997276B2 (ja) | 立体物検出装置及び異物検出装置 | |
EP2803944A2 (en) | Image Processing Apparatus, Distance Measurement Apparatus, Vehicle-Device Control System, Vehicle, and Image Processing Program | |
JP5787024B2 (ja) | 立体物検出装置 | |
JP5776795B2 (ja) | 立体物検出装置 | |
WO2014192137A1 (ja) | 移動軌跡予測装置及び移動軌跡予測方法 | |
JP2006018751A (ja) | 車両用画像処理装置 | |
JP6936098B2 (ja) | 対象物推定装置 | |
Suzuki et al. | Sensor fusion-based pedestrian collision warning system with crosswalk detection | |
JP2019003606A (ja) | 地図変化点検出装置 | |
JP6115429B2 (ja) | 自車位置認識装置 | |
JP3925285B2 (ja) | 走行路環境検出装置 | |
JP2012252501A (ja) | 走行路認識装置及び走行路認識用プログラム | |
JP5794378B2 (ja) | 立体物検出装置及び立体物検出方法 | |
CN112078580A (zh) | 用于确定对象与行车带的重合度的方法,装置和存储介质 | |
KR102003387B1 (ko) | 조감도 이미지를 이용한 교통 장애물의 검출 및 거리 측정 방법, 교통 장애물을 검출하고 거리를 측정하는 프로그램을 저장한 컴퓨터 판독가능 기록매체 | |
JP5938940B2 (ja) | 立体物検出装置 | |
JP5999183B2 (ja) | 立体物検出装置および立体物検出方法 | |
KR102681321B1 (ko) | 듀얼 카메라를 이용하여 거리를 계산하는 고속도로 주행지원 시스템의 성능 평가 장치와 그 방법 | |
JP5732890B2 (ja) | 並走体検出装置及び並走体検出方法 | |
JP5724570B2 (ja) | 走行支援装置及び走行支援方法 | |
JP5842466B2 (ja) | 移動体検出装置及び移動体検出方法 | |
JP5817913B2 (ja) | 立体物検出装置及び立体物検出方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12819626 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013526868 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2014/000652 Country of ref document: MX |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14233404 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2014107926 Country of ref document: RU Kind code of ref document: A |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112014001824 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112014001824 Country of ref document: BR Kind code of ref document: A2 Effective date: 20140124 |