CN111353453B - Obstacle detection method and device for vehicle - Google Patents


Info

Publication number
CN111353453B
CN111353453B
Authority
CN
China
Prior art keywords
vehicle
dimensional coordinate
coordinate information
determining
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010152469.2A
Other languages
Chinese (zh)
Other versions
CN111353453A (en)
Inventor
刘雪晴
王睿索
张俊
张娴婧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010152469.2A priority Critical patent/CN111353453B/en
Publication of CN111353453A publication Critical patent/CN111353453A/en
Application granted granted Critical
Publication of CN111353453B publication Critical patent/CN111353453B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S11/00: Systems for determining distance or velocity not using reflection or reradiation
    • G01S11/12: Systems for determining distance or velocity not using reflection or reradiation, using electromagnetic waves other than radio waves
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

An embodiment of the application discloses an obstacle detection method and device for a vehicle, relating to the technical field of automatic driving and in particular to autonomous parking. The vehicle carries an image acquisition device. One embodiment of the method comprises: acquiring calibration data of the image acquisition device; acquiring a labeling frame for an obstacle in an image acquired by the image acquisition device; determining an estimated distance between the obstacle and the vehicle according to the calibration data and the labeling frame; determining a plurality of candidate points around the vehicle according to the position information of the vehicle and the estimated distance; and determining an optimized distance between the obstacle and the vehicle according to the plurality of candidate points and the labeling frame. This embodiment can improve the accuracy of obstacle distance detection.

Description

Obstacle detection method and device for vehicle
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for detecting obstacles of a vehicle.
Background
To accommodate dynamic scenarios, autonomous vehicles must be able to detect and identify the surrounding environment and make corresponding decisions and path plans based on it. The static environment includes buildings, curbs, trees, and the like, while the dynamic environment includes moving obstacles such as vehicles and people. Identification of an obstacle comprises discriminating its type and size, while detection comprises estimating its position and posture. In practice, estimation of the obstacle position is often simplified to estimation of the distance between the obstacle and the vehicle, so the obstacle ranging scheme is a key problem and difficulty of automatic driving.
Different ranging optimization schemes exist for different sensors. In automatic driving schemes based on lidar or millimeter-wave radar, the radar can measure distance directly, so ranging accuracy is high, but the volume of raw data is large; optimization of the ranging scheme is therefore biased towards computation time and speed, and towards the rejection of gross errors. In autopilot schemes based on binocular (stereo) cameras, ranging is mainly affected by calibration and stereo matching, and erroneous matching often results in erroneous depth calculation; optimization of the ranging scheme is therefore biased towards matching accuracy and towards online and offline calibration. Sensor costs are high for both the lidar and the binocular solutions.
With the development of computer vision, monocular sensor solutions are becoming more and more widely adopted. However, acquiring depth from a monocular camera is a difficult problem in computer vision, because a single image lacks sufficient geometric constraints.
Disclosure of Invention
The embodiment of the application provides an obstacle detection method and device for a vehicle.
In a first aspect, an embodiment of the present application provides an obstacle detection method for a vehicle, the vehicle being mounted with an image acquisition device, the method including: acquiring calibration data of the image acquisition device; acquiring a labeling frame for an obstacle in an image acquired by the image acquisition device; determining an estimated distance between the obstacle and the vehicle according to the calibration data and the labeling frame; determining a plurality of candidate points around the vehicle based on the position information of the vehicle and the estimated distance; and determining an optimized distance between the obstacle and the vehicle according to the plurality of candidate points and the labeling frame.
In some embodiments, the labeling frame includes at least two grounding points of the obstacle; and determining the estimated distance between the obstacle and the vehicle based on the calibration data and the labeling frame includes: determining a distance between each grounding point and the vehicle according to the calibration data and the at least two grounding points; and determining the estimated distance based on the distance between each grounding point and the vehicle.
In some embodiments, the determining a plurality of candidate points around the vehicle according to the position information of the vehicle and the estimated distance includes: determining a candidate point selection range according to the position information of the vehicle and the estimated distance; selecting a plurality of points as candidate points in the candidate point selection range; and determining three-dimensional coordinate information of the candidate points according to the high-precision map.
In some embodiments, the determining the optimized distance between the obstacle and the vehicle according to the plurality of candidate points and the labeling frame includes: projecting the three-dimensional coordinate information of the candidate points to an image coordinate system to obtain two-dimensional coordinate information of the candidate points in the image coordinate system; determining three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and the two-dimensional coordinate information of the candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system; and determining the optimized distance according to the three-dimensional coordinate information.
In some embodiments, the determining the three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and the two-dimensional coordinate information of the candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system includes: determining the distance between each candidate point and the labeling frame according to the two-dimensional coordinate information of the candidate point; sorting the distances in ascending order and taking the first target number of candidate points as target candidate points; and determining the three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and the two-dimensional coordinate information of the target candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system.
In some embodiments, the vehicle is provided with a first image acquisition device and a second image acquisition device which have different imaging principles; and the method further includes: determining a target image acquisition device from the first image acquisition device and the second image acquisition device according to the estimated distance and the imaging principles of the first image acquisition device and the second image acquisition device.
In a second aspect, an embodiment of the present application provides an obstacle detection apparatus for a vehicle mounted with an image acquisition device, the apparatus including: a first acquisition unit configured to acquire calibration data of the image acquisition device; a second acquisition unit configured to acquire a labeling frame for an obstacle in the image acquired by the image acquisition device; a distance estimation unit configured to determine an estimated distance between the obstacle and the vehicle based on the calibration data and the labeling frame; a candidate point determination unit configured to determine a plurality of candidate points around the vehicle based on the position information of the vehicle and the estimated distance; and a distance optimization unit configured to determine an optimized distance between the obstacle and the vehicle based on the plurality of candidate points and the labeling frame.
In some embodiments, the labeling frame includes at least two grounding points of the obstacle; and the distance estimation unit is further configured to: determine a distance between each grounding point and the vehicle according to the calibration data and the at least two grounding points; and determine the estimated distance based on the distance between each grounding point and the vehicle.
In some embodiments, the candidate point determining unit is further configured to: determining a candidate point selection range according to the position information of the vehicle and the estimated distance; selecting a plurality of points as candidate points in the candidate point selection range; and determining three-dimensional coordinate information of the candidate points according to the high-precision map.
In some embodiments, the distance optimizing unit is further configured to: projecting the three-dimensional coordinate information of the candidate points to an image coordinate system to obtain two-dimensional coordinate information of the candidate points in the image coordinate system; determining three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information, the two-dimensional coordinate information of the candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system; and determining the optimized distance according to the three-dimensional coordinate information.
In some embodiments, the distance optimization unit is further configured to: determine the distance between each candidate point and the labeling frame according to the two-dimensional coordinate information of the candidate point; sort the distances in ascending order and take the first target number of candidate points as target candidate points; and determine the three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and the two-dimensional coordinate information of the target candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system.
In some embodiments, the vehicle is provided with a first image acquisition device and a second image acquisition device which have different imaging principles; and the apparatus further comprises a device determination unit configured to: determine a target image acquisition device from the first image acquisition device and the second image acquisition device according to the estimated distance and the imaging principles of the first image acquisition device and the second image acquisition device.
In a third aspect, an embodiment of the present application provides an electronic device, including: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors cause the one or more processors to implement the method as described in any of the embodiments of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the embodiments of the first aspect.
The above embodiments of the application provide an obstacle detection method and device for a vehicle on which an image acquisition device is mounted. The method and device of these embodiments first acquire the calibration data of the image acquisition device and a labeling frame for an obstacle in the image acquired by the image acquisition device. Then, according to the calibration data and the labeling frame, the estimated distance between the obstacle and the vehicle is determined. Next, a plurality of candidate points around the vehicle are determined according to the position of the vehicle and the estimated distance. Finally, the optimized distance between the obstacle and the vehicle is determined according to the plurality of candidate points and the labeling frame. This can improve the accuracy of obstacle distance detection.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method for obstacle detection for a vehicle according to the present application;
fig. 3 is a schematic view of an application scenario of the obstacle detection method for a vehicle according to the present application;
fig. 4 is a flowchart of another embodiment of an obstacle detection method for a vehicle according to the present application;
fig. 5 is a schematic structural view of an embodiment of an obstacle detecting apparatus for a vehicle according to the present application;
fig. 6 is a schematic diagram of a computer system suitable for use in implementing an embodiment of the application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
Fig. 1 shows an exemplary system architecture 100 to which an embodiment of an obstacle detection method for a vehicle or an obstacle detection device for a vehicle of the application may be applied.
As shown in fig. 1, a system architecture 100 may include a vehicle 101, a network 102, and a server 103. Network 102 is the medium used to provide a communication link between vehicle 101 and server 103. Network 102 may include various wireless connection types.
The vehicle 101 may interact with the server 103 during travel to receive or send messages, etc. The vehicle 101 may have various sensors mounted thereon, such as a monocular camera, a speed sensor, and the like.
The vehicle 101 may be hardware or software. When the vehicle 101 is hardware, it may be any of various vehicles capable of traveling, including autonomous vehicles, semi-autonomous vehicles, manually driven vehicles, and the like. When the vehicle 101 is software, it may be installed in any of the vehicles listed above, and may be implemented as multiple software programs or software modules (e.g., to provide distributed services) or as a single software program or module. The present application is not particularly limited herein.
The server 103 may be a server that provides various services, such as a background server that processes information collected during travel of the vehicle 101. The background server may perform processing such as analysis on the received data and feed back the processing result (e.g., obstacle distance) to the vehicle 101.
The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 103 is software, it may be implemented as a plurality of software or software modules (for example, to provide distributed services), or may be implemented as a single software or software module. The present application is not particularly limited herein.
It should be noted that, the method for detecting an obstacle for a vehicle according to the embodiment of the present application may be executed by the vehicle 101 or may be executed by the server 103. Accordingly, the obstacle detecting device for the vehicle may be provided in the vehicle 101 or in the server 103.
It should be understood that the numbers of vehicles, networks, and servers in fig. 1 are merely illustrative. There may be any number of vehicles, networks, and servers, as required by the implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method of obstacle detection for a vehicle in accordance with the present application is shown. In this embodiment, the vehicle is mounted with an image pickup device, and the image pickup device may include various monocular cameras, such as a monocular wide-angle camera, a monocular fisheye camera. The obstacle detection method for a vehicle of the present embodiment may include the steps of:
step 201, calibration data of an image acquisition device is acquired.
In the present embodiment, the execution subject of the obstacle detection method for a vehicle (e.g., the vehicle 101 or the server 103 shown in fig. 1) may acquire calibration data of the image acquisition device in various ways. Here, the calibration data may include internal parameters and external parameters. The internal parameters are related to the characteristics of the image acquisition device itself, such as the focal length of the camera, the pixel size, and the like. The external parameters are parameters in the world coordinate system, such as the position and rotation of the camera. The calibration data can be obtained by calibrating the image acquisition device.
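As an illustration, calibration data of this kind is commonly represented by an intrinsic matrix together with a rigid-body extrinsic transform. The sketch below uses hypothetical values throughout (focal length, principal point, and mounting height are illustrative, not taken from the application):

```python
import numpy as np

# Hypothetical calibration data for a monocular camera; all values are
# illustrative, not taken from the application.
# Internal parameters: focal lengths (fx, fy) and principal point (cx, cy),
# collected into the intrinsic matrix K (units: pixels).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# External parameters: rotation R and translation t mapping world points
# into the camera frame, X_cam = R @ X_world + t.
R = np.eye(3)
t = np.array([0.0, 1.2, 0.0])  # e.g. a camera mounted 1.2 m above the ground

def world_to_camera(point_world):
    """Transform a 3-D world point into the camera coordinate system."""
    return R @ np.asarray(point_world) + t
```

Any concrete calibration would replace these values with the results of an actual calibration procedure.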
Step 202, obtaining a labeling frame for an obstacle in an image acquired by an image acquisition device.
The execution subject may also acquire a labeling frame for an obstacle in the image acquired by the image acquisition device. Specifically, a trained obstacle recognition model can be arranged in the execution subject, and the obstacle recognition model can recognize and label obstacles in the image. The labeling can be performed with labeling frames, and different types of obstacles can be represented by labeling frames of different colors. The image acquired by the image acquisition device can be input into the obstacle recognition model to obtain the labeling frame of the obstacle in the image. It is understood that the obstacle recognition model may also be installed in another electronic device; in that case, the execution subject may send the image acquired by the image acquisition device to that electronic device and receive the labeling frame of the obstacle returned by it.
Step 203, determining the estimated distance between the obstacle and the vehicle according to the calibration data and the labeling frame.
In this embodiment, after acquiring the calibration data and the labeling frame, the execution subject may determine an estimated distance between the obstacle and the vehicle. Specifically, the execution subject may select two points from the labeling frame and, based on inverse perspective projection, project them onto the plane z = 1 to obtain the coordinates (x1, y1) and (x2, y2). Combining the height of the vehicle with geometric principles, the distances S1 and S2 from the two points to the vehicle can be calculated. The average of S1 and S2 is then taken as the estimated distance between the obstacle and the vehicle.
In some optional implementations of this embodiment, the labeling frame includes at least two grounding points of the obstacle. In step 203, the execution subject may project the at least two grounding points onto the plane z = 1 to obtain the distance between each grounding point and the vehicle, and then determine the estimated distance based on these distances.
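The inverse perspective projection described above can be sketched as follows. This is a minimal illustration assuming a flat ground plane and a camera whose optical axis is parallel to the ground; `K`, `camera_height`, and the pixel values are hypothetical, not the patent's exact geometry:

```python
import numpy as np

def ground_point_distance(u, v, K, camera_height):
    """Distance from the vehicle to a ground point imaged at pixel (u, v).

    Minimal sketch of inverse perspective projection: it assumes a flat
    ground plane and a camera whose optical axis is parallel to the
    ground. K and camera_height are hypothetical calibration values.
    """
    # Back-project the pixel onto the normalized plane z = 1.
    x, y, _ = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # By similar triangles the ray meets the ground at depth
    # z = camera_height / y (the point must lie below the optical axis).
    z = camera_height / y
    return float(np.hypot(x * z, z))

def estimate_distance(ground_pixels, K, camera_height):
    """Average the per-grounding-point distances S1, S2, ..."""
    return sum(ground_point_distance(u, v, K, camera_height)
               for u, v in ground_pixels) / len(ground_pixels)
```

For example, with a 1000-pixel focal length, a principal point at (640, 360), and a camera 1.2 m above the ground, a ground point imaged at pixel (640, 480) back-projects to a distance of 10 m.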
Step 204, determining a plurality of candidate points around the vehicle according to the position information of the vehicle and the estimated distance.
After the estimated distance is calculated, the execution subject may also acquire position information of the vehicle, and determine a plurality of candidate points around the vehicle based on the position information and the estimated distance. Specifically, the execution subject may take the position of the vehicle as the center of a circle and the estimated distance as the radius, and select a plurality of points from the resulting circle as candidate points. The points can be selected randomly or uniformly. The candidate points may be marked points on the road surface, such as points on lane lines or stop lines. It is understood that each candidate point may carry three-dimensional coordinate information, which can be obtained from a high-precision map. The high-precision map can comprise lanes in different directions, lane lines, lane boundaries, stop lines, crosswalks, deceleration strips, traffic lights, traffic signs, warning signs, and the like, together with the position information of these objects.
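As one possible reading of the circle-based selection, candidate points can be sampled uniformly on a circle centred on the vehicle with the estimated distance as the radius. The sampling count `n` is an assumption; the text also permits random selection:

```python
import math

def candidate_points_on_circle(vehicle_xy, estimated_distance, n=16):
    """Uniformly sample n candidate points on the circle centred on the
    vehicle with the estimated distance as its radius. The count n is an
    assumption; the text also allows random selection."""
    cx, cy = vehicle_xy
    return [(cx + estimated_distance * math.cos(2.0 * math.pi * k / n),
             cy + estimated_distance * math.sin(2.0 * math.pi * k / n))
            for k in range(n)]
```

In a real system the sampled positions would then be snapped to road-surface features from the high-precision map, which supplies their three-dimensional coordinates.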
Step 205, determining an optimized distance between the obstacle and the vehicle according to the plurality of candidate points and the labeling frame.
In this embodiment, after determining the plurality of candidate points, the execution subject may determine the optimized distance between the obstacle and the vehicle in combination with the labeling frame. Specifically, the execution subject may calculate the two-dimensional coordinates of the plurality of candidate points in the image coordinate system, and then determine the relative positional relationship between the candidate points and the labeling frame in combination with the two-dimensional coordinates of the labeling frame in the image coordinate system. The three-dimensional coordinates of the labeling frame can then be determined from the three-dimensional coordinates of the candidate points and this relative positional relationship, so that the optimized distance between the obstacle and the vehicle can be determined.
With continued reference to fig. 3, fig. 3 is a schematic diagram of one application scenario of the obstacle detection method for a vehicle according to the present embodiment. In the application scenario of fig. 3, a monocular camera is mounted on an autonomous vehicle 301 and acquires images during travel. After the autonomous vehicle 301 has traveled near a parking space, it needs to acquire images of the surroundings of the parking space; the distances between the preceding vehicle, the following vehicle, and the autonomous vehicle are obtained through the processing of steps 201 to 205, guiding the autonomous vehicle so that it stops accurately in the parking space.
According to the obstacle detection method for a vehicle provided by the above embodiment of the application, the calibration data of the image acquisition device is first acquired, together with a labeling frame for an obstacle in the image acquired by the image acquisition device. Then, according to the calibration data and the labeling frame, the estimated distance between the obstacle and the vehicle is determined. Next, a plurality of candidate points around the vehicle are determined according to the position of the vehicle and the estimated distance. Finally, the optimized distance between the obstacle and the vehicle is determined according to the plurality of candidate points and the labeling frame. The method of this embodiment can improve the accuracy of obstacle distance detection.
With continued reference to fig. 4, a flow 400 of another embodiment of an obstacle detection method for a vehicle in accordance with the present application is shown. In this embodiment, the vehicle may be mounted with a first image acquisition device and a second image acquisition device having different imaging principles. The first image acquisition device may be a monocular wide-angle camera and the second image acquisition device may be a monocular fisheye camera. As shown in fig. 4, the obstacle detection method for a vehicle of the present embodiment may include the steps of:
step 401, acquiring calibration data of an image acquisition device.
When two image acquisition devices are installed on a vehicle, calibration data of the two image acquisition devices can be acquired, namely, calibration data of the first image acquisition device and calibration data of the second image acquisition device are acquired respectively.
Step 402, obtaining a labeling frame for an obstacle in an image acquired by an image acquisition device.
In this embodiment, the labeling frame for the obstacle in the first image acquired by the first image acquisition device and the labeling frame for the obstacle in the second image acquired by the second image acquisition device may be acquired respectively.
Step 403, determining the estimated distance between the obstacle and the vehicle according to the calibration data and the labeling frame.
The principle of this step is similar to that of step 203, and will not be described here again.
Step 404, determining a target image acquisition device from the first image acquisition device and the second image acquisition device according to the estimated distance and the imaging principles of the first image acquisition device and the second image acquisition device.
Due to their different imaging principles, different image sensors have different accuracies at different ranging distances. When a monocular wide-angle camera and a monocular fisheye camera are both mounted on a vehicle, an appropriate image sensor should be selected according to the characteristics of the image sensors and the obstacle distribution range. For example, the monocular fisheye camera has higher ranging accuracy at 2-6 meters, while the monocular wide-angle camera has better ranging accuracy at 6-20 meters. When an obstacle appears at a range of about 14 meters from the vehicle, the monocular wide-angle camera should be employed as the main image sensor for ranging, i.e., as the target image acquisition device.
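A minimal sketch of this sensor selection might look like the following. The accuracy intervals are the example figures given above (fisheye: 2-6 m, wide angle: 6-20 m); falling back to the wide-angle camera outside both intervals is an assumed policy, not something the text specifies:

```python
def select_target_camera(estimated_distance):
    """Pick the camera whose accurate ranging interval covers the
    estimated distance, using the example figures from the text
    (fisheye: 2-6 m, wide angle: 6-20 m). The fallback to the
    wide-angle camera outside both intervals is an assumed policy."""
    if 2.0 <= estimated_distance < 6.0:
        return "fisheye"
    return "wide_angle"
```

For the 14-meter obstacle of the example, this returns the wide-angle camera, matching the text.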
Step 405, determining a candidate point selection range according to the position information of the vehicle and the estimated distance.
In this embodiment, the execution subject may further acquire position information of the vehicle, and then determine the candidate point selection range by combining it with the estimated distance. Specifically, the execution subject may take the position of the vehicle as the center of a circle and 2 or 3 times the estimated distance as the radius, and take the resulting circle as the candidate point selection range.
Step 406, selecting a plurality of points in the candidate point selection range as candidate points.
After determining the candidate point selection range, the execution subject may select a plurality of points as candidate points. Specifically, the execution subject may uniformly select a plurality of points within the candidate point selection range as candidate points, for example, at 20 cm intervals.
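The uniform selection at 20 cm intervals could be sketched as a grid sample inside the circular range; the `step` parameter and the vehicle-centred frame are assumptions for illustration:

```python
def grid_candidates(vehicle_xy, radius, step=0.2):
    """Sample candidate points on a regular grid at step-metre spacing
    (0.2 m, i.e. the 20 cm of the example) inside the circular candidate
    point selection range around the vehicle."""
    cx, cy = vehicle_xy
    n = int(radius / step)
    return [(cx + i * step, cy + j * step)
            for i in range(-n, n + 1)
            for j in range(-n, n + 1)
            if (i * step) ** 2 + (j * step) ** 2 <= radius ** 2]
```

The optional implementation below would additionally restrict these samples to three-dimensional ground objects such as lane lines.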
In some optional implementations of this embodiment, after determining the candidate point selection range, the execution subject may further determine, according to the high-precision map, the three-dimensional ground objects within the candidate point selection range. The three-dimensional ground objects may include lane lines. Candidate points are then selected from the three-dimensional ground objects within the candidate point selection range.
Step 407, determining three-dimensional coordinate information of the candidate points according to the high-precision map.
After the candidate points are selected, the execution subject can determine the three-dimensional coordinate information of the candidate points in combination with the high-precision map.
Step 408, projecting the three-dimensional coordinate information of the candidate points to the image coordinate system to obtain two-dimensional coordinate information of the candidate points in the image coordinate system.
In the present embodiment, the three-dimensional coordinate information of the candidate points is in the geodetic coordinate system. The execution subject may first convert the three-dimensional coordinate information from the geodetic coordinate system into the vehicle coordinate system, and then project the three-dimensional coordinate information of the candidate points in the vehicle coordinate system to the image coordinate system. Since the image coordinate system is two-dimensional, the two-dimensional coordinate information of the candidate points in the image coordinate system is thus obtained.
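The projection from the vehicle coordinate system into the image coordinate system is a standard pinhole projection. A hedged sketch, with hypothetical intrinsics `K` and vehicle-to-camera extrinsics `R`, `t`:

```python
import numpy as np

def project_to_image(point_vehicle, K, R, t):
    """Project a 3-D point given in the vehicle coordinate system into
    the image coordinate system via a standard pinhole model. R and t
    are hypothetical vehicle-to-camera extrinsics and K the intrinsic
    matrix; the names and values are illustrative."""
    p_cam = R @ np.asarray(point_vehicle) + t   # vehicle -> camera frame
    u, v, w = K @ p_cam                         # camera frame -> homogeneous pixels
    return u / w, v / w                         # perspective division
```

A point on the optical axis 10 m ahead, for instance, projects to the principal point.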
Step 409, determining the three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and two-dimensional coordinate information of the candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system.
Then, the executing body may determine the three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and two-dimensional coordinate information of the candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system. Specifically, the coordinates of each point of the labeling frame in the image coordinate system are two-dimensional, and the executing body can determine the proportional relationship between these two-dimensional coordinates and those of the candidate points. Combining this with the three-dimensional coordinate information of the candidate points yields the three-dimensional coordinate information of the labeling frame. It will be appreciated that the three-dimensional coordinate information of the labeling frame here is in the vehicle coordinate system.
In some alternative implementations of the present embodiment, the executing body may further implement step 409 by the following steps, not shown in fig. 2: determining the distance between each candidate point and the labeling frame according to the two-dimensional coordinate information of the candidate points; sorting the distances in ascending order and taking the first target number of candidate points as target candidate points; and determining the three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and two-dimensional coordinate information of the target candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system.
In this implementation, after obtaining the two-dimensional coordinate information of each candidate point, the executing body may calculate the distance between each candidate point and the labeling frame. Then, the N candidate points closest to the labeling frame are taken as target candidate points. In some application scenarios, N is 4. Finally, the three-dimensional coordinate information of the labeling frame is determined according to the three-dimensional coordinate information and two-dimensional coordinate information of the target candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system. In this way, the computational workload can be effectively reduced.
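The nearest-N selection and the recovery of a 3-D point from known candidates can be sketched as follows. The patent only specifies using the proportional relation between the 2-D coordinates; the inverse-distance weighting used here, and all names, are illustrative assumptions.

```python
import math

def estimate_box_point_3d(box_uv, candidates, n=4):
    """Estimate the 3-D position of a labeling-frame point from the N
    candidate points whose image projections lie closest to it.

    candidates: list of ((u, v), (x, y, z)) pairs -- the 2-D projection
    and known 3-D coordinates of each candidate point.
    """
    dist = lambda uv: math.hypot(uv[0] - box_uv[0], uv[1] - box_uv[1])
    nearest = sorted(candidates, key=lambda c: dist(c[0]))[:n]
    # A candidate that coincides with the frame point is returned directly.
    for uv, xyz in nearest:
        if dist(uv) == 0.0:
            return xyz
    # Inverse-distance weighted average of the nearest candidates' 3-D coords.
    w = [1.0 / dist(uv) for uv, _ in nearest]
    total = sum(w)
    return tuple(sum(wi * xyz[k] for wi, (_, xyz) in zip(w, nearest)) / total
                 for k in range(3))
```

Restricting the interpolation to the N closest candidates (N = 4 in the application scenario above) keeps the per-frame computation small regardless of how densely the range was sampled.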
In step 410, an optimized distance is determined based on the three-dimensional coordinate information.
After the three-dimensional coordinate information of the labeling frame is obtained, the executing body can calculate the optimized distance between the vehicle and the obstacle by combining it with the position of the vehicle.
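Since the labeling frame's 3-D coordinates are in the vehicle coordinate system, where the vehicle sits at the origin, the final step reduces to a norm. Taking the minimum over the frame's ground points is an illustrative choice; the patent text does not fix which point of the frame defines the distance.

```python
import math

def optimized_distance(box_points_3d):
    """Optimized obstacle distance from the labeling frame's 3-D points,
    expressed in the vehicle coordinate system (vehicle at the origin).
    Uses the planar distance to the nearest point of the frame."""
    return min(math.hypot(x, y) for x, y, _ in box_points_3d)
```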
According to the obstacle detection method for a vehicle provided by the embodiment of the present application, the three-dimensional coordinate information of the labeling frame can be determined using candidate points whose three-dimensional coordinate information is known, so that the estimated distance can be optimized and a more accurate distance obtained.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of an obstacle detection device for a vehicle, which corresponds to the method embodiment shown in fig. 2 and is particularly applicable to various electronic apparatuses. In this embodiment, the vehicle is mounted with an image acquisition device.
As shown in fig. 5, the obstacle detecting apparatus 500 for a vehicle of the present embodiment includes: a first acquisition unit 501, a second acquisition unit 502, a distance estimation unit 503, a candidate point determination unit 504, and a distance optimization unit 505.
The first acquisition unit 501 is configured to acquire calibration data of the image acquisition device.
The second acquisition unit 502 is configured to acquire a labeling frame for an obstacle in an image acquired by the image acquisition device.
The distance estimation unit 503 is configured to determine an estimated distance between the obstacle and the vehicle based on the calibration data and the label frame.
The candidate point determination unit 504 is configured to determine a plurality of candidate points around the vehicle based on the position information of the vehicle and the estimated distance.
The distance optimizing unit 505 is configured to determine an optimized distance between the obstacle and the vehicle according to the plurality of candidate points and the labeling frame.
In some alternative implementations of the present embodiment, the labeling frame includes at least two ground points of the obstacle. The distance estimation unit 503 may be further configured to: determine the distance between each ground point and the vehicle according to the calibration data and the at least two ground points; and determine the estimated distance based on the distance between each ground point and the vehicle.
In some optional implementations of the present embodiment, the candidate point determination unit 504 may be further configured to: determining a candidate point selection range according to the position information of the vehicle and the estimated distance; selecting a plurality of points as candidate points in the candidate point selection range; and determining three-dimensional coordinate information of the candidate points according to the high-precision map.
In some optional implementations of the present embodiment, the distance optimizing unit 505 may be further configured to: projecting the three-dimensional coordinate information of the candidate points to an image coordinate system to obtain two-dimensional coordinate information of the candidate points in the image coordinate system; determining the three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and the two-dimensional coordinate information of the candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system; and determining an optimized distance according to the three-dimensional coordinate information.
In some optional implementations of the present embodiment, the distance optimizing unit 505 may be further configured to: determine the distance between each candidate point and the labeling frame according to the two-dimensional coordinate information of the candidate points; sort the distances in ascending order and take the first target number of candidate points as target candidate points; and determine the three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and two-dimensional coordinate information of the target candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system.
In some alternative implementations of the present embodiment, the vehicle is mounted with a first image acquisition device and a second image acquisition device that differ in imaging principle. The apparatus 500 may further comprise a device determining unit, not shown in fig. 5, configured to: determine the target image acquisition device from the first image acquisition device and the second image acquisition device according to the estimated distance and the imaging principles of the first image acquisition device and the second image acquisition device.
It should be understood that the units 501 to 505 described in the obstacle detection device 500 for a vehicle correspond to the respective steps in the method described with reference to fig. 2, respectively. Thus, the operations and features described above with respect to the obstacle detection method for a vehicle are equally applicable to the apparatus 500 and the units contained therein, and are not described in detail herein.
Referring now to fig. 6, a schematic diagram of a configuration of an electronic device (e.g., a server in fig. 1 or a terminal device installed in a vehicle) 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 6 is merely an example and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 6 may represent one device or a plurality of devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing means 601. It should be noted that, the computer readable medium according to the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
Whereas in embodiments of the present disclosure, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device, or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire calibration data of an image acquisition device; acquire a labeling frame for an obstacle in an image acquired by the image acquisition device; determine an estimated distance between the obstacle and the vehicle according to the calibration data and the labeling frame; determine a plurality of candidate points around the vehicle according to the position information of the vehicle and the estimated distance; and determine an optimized distance between the obstacle and the vehicle according to the plurality of candidate points and the labeling frame.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, including object oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments described in the present disclosure may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor, for example, described as: a processor includes a first acquisition unit, a second acquisition unit, a distance estimation unit, a candidate point determination unit, and a distance optimization unit. The names of these units do not in any way constitute a limitation of the unit itself, for example the first acquisition unit may also be described as "unit acquiring calibration data of the image acquisition device".
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the application in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, but also encompasses other technical solutions formed by any combination of the above technical features or their equivalents without departing from the spirit of the application, for example, technical solutions in which the above features are interchanged with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. An obstacle detection method for a vehicle, the vehicle being mounted with an image acquisition device, the method comprising:
acquiring calibration data of the image acquisition device;
acquiring a labeling frame aiming at an obstacle in an image acquired by the image acquisition device;
according to the calibration data and the labeling frame, determining the estimated distance between the obstacle and the vehicle, comprising: determining the distance between each ground point and the vehicle according to the calibration data and at least two ground points; and determining the estimated distance based on the distance between each ground point and the vehicle;
determining a plurality of candidate points around the vehicle based on the position information of the vehicle and the estimated distance, including: determining a candidate point selection range according to the position information of the vehicle and the estimated distance; selecting a plurality of points as candidate points in the candidate point selection range; determining three-dimensional coordinate information of the candidate points according to the high-precision map;
and determining an optimized distance between the obstacle and the vehicle according to the plurality of candidate points and the labeling frame.
2. The method of claim 1, wherein the determining an optimized distance between the obstacle and the vehicle from the plurality of candidate points and the labeling frame comprises:
projecting the three-dimensional coordinate information of the candidate points to an image coordinate system to obtain two-dimensional coordinate information of the candidate points in the image coordinate system;
determining the three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and the two-dimensional coordinate information of the candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system;
and determining the optimized distance according to the three-dimensional coordinate information.
3. The method of claim 2, wherein the determining the three-dimensional coordinate information of the labeling frame from the three-dimensional coordinate information and two-dimensional coordinate information of the candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system comprises:
determining the distance between the candidate point and the labeling frame according to the two-dimensional coordinate information of the candidate point;
sorting the distances in ascending order and taking the first target number of candidate points as target candidate points;
and determining the three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and the two-dimensional coordinate information of the target candidate point and the two-dimensional coordinate information of the labeling frame in the image coordinate system.
4. The method of claim 1, wherein the vehicle is mounted with a first image acquisition device and a second image acquisition device that differ in imaging principle; and
the method further comprises the steps of:
and determining a target image acquisition device from the first image acquisition device and the second image acquisition device according to the estimated distance and the imaging principles of the first image acquisition device and the second image acquisition device.
5. An obstacle detection device for a vehicle, the vehicle mounted with an image acquisition device, the device comprising:
a first acquisition unit configured to acquire calibration data of the image acquisition device;
a second acquisition unit configured to acquire a labeling frame for an obstacle in an image acquired by the image acquisition device;
a distance estimation unit configured to determine an estimated distance between the obstacle and the vehicle based on the calibration data and the labeling frame, and further configured to: determine the distance between each ground point and the vehicle according to the calibration data and at least two ground points; and determine the estimated distance based on the distance between each ground point and the vehicle;
a candidate point determination unit configured to determine a plurality of candidate points around the vehicle based on the position information of the vehicle and the estimated distance, further configured to: determining a candidate point selection range according to the position information of the vehicle and the estimated distance; selecting a plurality of points as candidate points in the candidate point selection range; determining three-dimensional coordinate information of the candidate points according to the high-precision map;
and a distance optimizing unit configured to determine an optimized distance between the obstacle and the vehicle according to the plurality of candidate points and the labeling frame.
6. The apparatus of claim 5, wherein the distance optimization unit is further configured to:
projecting the three-dimensional coordinate information of the candidate points to an image coordinate system to obtain two-dimensional coordinate information of the candidate points in the image coordinate system;
determining the three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and the two-dimensional coordinate information of the candidate points and the two-dimensional coordinate information of the labeling frame in the image coordinate system;
and determining the optimized distance according to the three-dimensional coordinate information.
7. The apparatus of claim 6, wherein the distance optimization unit is further configured to:
determining the distance between the candidate point and the labeling frame according to the two-dimensional coordinate information of the candidate point;
sorting the distances in ascending order and taking the first target number of candidate points as target candidate points;
and determining the three-dimensional coordinate information of the labeling frame according to the three-dimensional coordinate information and the two-dimensional coordinate information of the target candidate point and the two-dimensional coordinate information of the labeling frame in the image coordinate system.
8. The apparatus of claim 5, wherein the vehicle is mounted with a first image acquisition device and a second image acquisition device that differ in imaging principle; and
the apparatus further comprises an apparatus determining unit configured to:
and determining a target image acquisition device from the first image acquisition device and the second image acquisition device according to the estimated distance and the imaging principles of the first image acquisition device and the second image acquisition device.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-4.
10. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-4.
CN202010152469.2A 2020-03-06 2020-03-06 Obstacle detection method and device for vehicle Active CN111353453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010152469.2A CN111353453B (en) 2020-03-06 2020-03-06 Obstacle detection method and device for vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010152469.2A CN111353453B (en) 2020-03-06 2020-03-06 Obstacle detection method and device for vehicle

Publications (2)

Publication Number Publication Date
CN111353453A CN111353453A (en) 2020-06-30
CN111353453B true CN111353453B (en) 2023-08-25

Family

ID=71197485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010152469.2A Active CN111353453B (en) 2020-03-06 2020-03-06 Obstacle detection method and device for vehicle

Country Status (1)

Country Link
CN (1) CN111353453B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883909A (en) * 2021-03-16 2021-06-01 东软睿驰汽车技术(沈阳)有限公司 Surrounding box-based obstacle position detection method and device and electronic equipment
CN113126120B (en) * 2021-04-25 2023-08-25 北京百度网讯科技有限公司 Data labeling method, device, equipment, storage medium and computer program product
CN114802261B (en) * 2022-04-21 2024-04-19 合众新能源汽车股份有限公司 Parking control method, obstacle recognition model training method and device
CN117058210A (en) * 2023-10-11 2023-11-14 比亚迪股份有限公司 Distance calculation method and device based on vehicle-mounted sensor, storage medium and vehicle

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1255624A (en) * 1998-11-26 2000-06-07 松下电器产业株式会社 Stereo observing system of automotive single pickup camera
CN1690659A (en) * 2004-04-23 2005-11-02 株式会社自动网络技术研究所 Vehicle periphery viewing apparatus
CN101549683A (en) * 2009-04-23 2009-10-07 上海交通大学 Vehicle intelligent method for automatically identifying road pit or obstruction
CN102508246A (en) * 2011-10-13 2012-06-20 吉林大学 Method for detecting and tracking obstacles in front of vehicle
CN102774341A (en) * 2011-05-12 2012-11-14 富士重工业株式会社 Environment recognition device and environment recognition method
CN107146247A (en) * 2017-05-31 2017-09-08 西安科技大学 Automobile assistant driving system and method based on binocular camera
CN107179768A (en) * 2017-05-15 2017-09-19 上海木爷机器人技术有限公司 A kind of obstacle recognition method and device
CN107886043A (en) * 2017-07-20 2018-04-06 吉林大学 The vehicle front-viewing vehicle and pedestrian anti-collision early warning system and method for visually-perceptible
CN108027971A (en) * 2015-07-28 2018-05-11 法雷奥开关和传感器有限责任公司 For identifying method, driver assistance system and the motor vehicles of the object in the region of motor vehicles
CN109116374A (en) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 Determine the method, apparatus, equipment and storage medium of obstacle distance
DE102018116040A1 (en) * 2017-07-04 2019-01-10 Denso Ten Limited PERIPHERAL VIEW CONTROL DEVICE
CN109472251A (en) * 2018-12-16 2019-03-15 华为技术有限公司 A kind of object collision prediction method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8027029B2 (en) * 2007-11-07 2011-09-27 Magna Electronics Inc. Object detection and tracking system

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1255624A (en) * 1998-11-26 2000-06-07 松下电器产业株式会社 Stereo observing system of automotive single pickup camera
US6172601B1 (en) * 1998-11-26 2001-01-09 Matsushita Electric Industrial Co., Ltd. Three-dimensional scope system with a single camera for vehicles
CN1690659A (en) * 2004-04-23 2005-11-02 株式会社自动网络技术研究所 Vehicle periphery viewing apparatus
CN101549683A (en) * 2009-04-23 2009-10-07 上海交通大学 Vehicle intelligent method for automatically identifying road pit or obstruction
CN102774341A (en) * 2011-05-12 2012-11-14 富士重工业株式会社 Environment recognition device and environment recognition method
CN102508246A (en) * 2011-10-13 2012-06-20 吉林大学 Method for detecting and tracking obstacles in front of vehicle
CN108027971A (en) * 2015-07-28 2018-05-11 法雷奥开关和传感器有限责任公司 For identifying method, driver assistance system and the motor vehicles of the object in the region of motor vehicles
CN107179768A (en) * 2017-05-15 2017-09-19 上海木爷机器人技术有限公司 A kind of obstacle recognition method and device
CN107146247A (en) * 2017-05-31 2017-09-08 西安科技大学 Automobile assistant driving system and method based on binocular camera
CN109116374A (en) * 2017-06-23 2019-01-01 百度在线网络技术(北京)有限公司 Determine the method, apparatus, equipment and storage medium of obstacle distance
DE102018116040A1 (en) * 2017-07-04 2019-01-10 Denso Ten Limited PERIPHERAL VIEW CONTROL DEVICE
CN109204136A (en) * 2017-07-04 2019-01-15 丰田自动车株式会社 Side images display control unit
CN107886043A (en) * 2017-07-20 2018-04-06 吉林大学 The vehicle front-viewing vehicle and pedestrian anti-collision early warning system and method for visually-perceptible
CN109472251A (en) * 2018-12-16 2019-03-15 华为技术有限公司 A kind of object collision prediction method and device

Also Published As

Publication number Publication date
CN111353453A (en) 2020-06-30

Similar Documents

Publication Publication Date Title
US10897575B2 (en) Lidar to camera calibration for generating high definition maps
CN111108342B (en) Visual range method and pair alignment for high definition map creation
EP3759562B1 (en) Camera based localization for autonomous vehicles
CN111353453B (en) Obstacle detection method and device for vehicle
CN110869700B (en) System and method for determining vehicle position
US10240934B2 (en) Method and system for determining a position relative to a digital map
JP7082545B2 (en) Information processing methods, information processing equipment and programs
CN111461981B (en) Error estimation method and device for point cloud stitching algorithm
JP2018533721A (en) Method and system for generating and using localization reference data
US11227395B2 (en) Method and apparatus for determining motion vector field, device, storage medium and vehicle
US11204610B2 (en) Information processing apparatus, vehicle, and information processing method using correlation between attributes
CN111339876B (en) Method and device for identifying types of areas in scene
JP2018077162A (en) Vehicle position detection device, vehicle position detection method and computer program for vehicle position detection
CN112258519A (en) Automatic extraction method and device for way-giving line of road in high-precision map making
US20220316909A1 (en) Method and Communication System for Supporting at Least Partially Automatic Vehicle Control
KR102195040B1 (en) Method for collecting road signs information using MMS and mono camera
US20230098314A1 (en) Localizing and updating a map using interpolated lane edge data
CN116295508A (en) Road side sensor calibration method, device and system based on high-precision map
WO2022133986A1 (en) Accuracy estimation method and system
CN111383337B (en) Method and device for identifying objects
US11294385B2 (en) System and method for generating a representation of an environment
CN111461982B (en) Method and apparatus for splice point cloud
JP2019132701A (en) Map information creation method
JP7241582B2 (en) MOBILE POSITION DETECTION METHOD AND MOBILE POSITION DETECTION SYSTEM
Shami et al. Geo-locating Road Objects using Inverse Haversine Formula with NVIDIA Driveworks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant