CN115729245A - Obstacle fusion detection method, chip and terminal for mine ramp - Google Patents

Obstacle fusion detection method, chip and terminal for mine ramp

Info

Publication number
CN115729245A
CN115729245A (application CN202211489480.3A)
Authority
CN
China
Prior art keywords: obstacle, information, detection, frame, fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211489480.3A
Other languages
Chinese (zh)
Inventor
宋瑞琦
杨雪
郭翔宇
游昌斌
任凤至
Current Assignee
Qingdao Vehicle Intelligence Pioneers Inc
Original Assignee
Qingdao Vehicle Intelligence Pioneers Inc
Priority date
Filing date
Publication date
Application filed by Qingdao Vehicle Intelligence Pioneers Inc
Priority to CN202211489480.3A
Publication of CN115729245A
Legal status: Pending

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The application provides an obstacle fusion detection method, chip and terminal for a mine ramp. The method includes: in response to detecting that a mine vehicle has driven onto a ramp, acquiring 3D point cloud data from a laser radar, a shot image from a camera and detection data from a millimeter wave radar, so as to determine first obstacle detection information, second obstacle detection information and third obstacle detection information respectively; projecting the 3D point cloud data onto the shot image of the camera; determining the 3D position information, on the shot image, of the obstacle reflected by the 3D point cloud data based on the relative position relationship between the obtained 2D projection points and the 2D detection frame of the shot image; extracting first fusion information from the first obstacle detection information and the second obstacle detection information; and determining second fusion information based on the first fusion information and the third obstacle detection information. The technical solution of the application helps improve the driving safety of mine vehicles on mine ramps.

Description

Obstacle fusion detection method, chip and terminal for mining area ramp
[ technical field ]
The application relates to the technical field of artificial intelligence, and in particular to an obstacle fusion detection method, chip and terminal for a mine ramp.
[ background of the invention ]
Existing sensors of different types have their respective advantages and disadvantages in perceiving obstacles in the unmanned driving field. For example, the laser radar offers high positioning precision and accurate ranging, but its measured point cloud is sparse, which makes target classification and detection difficult. Compared with the laser radar, the camera can effectively identify targets, but its positioning accuracy is relatively low and its ranging capability is limited. The millimeter wave radar measures speed with high precision, but its measured point cloud is sparse and lacks height information. In an actual unmanned driving scene, road conditions are complex, and the traditional obstacle ranging methods designed for flat roads cannot obtain accurate measurement results, which affects the safe operation of unmanned driving.
Therefore, how to accurately detect the position information of obstacles during unmanned driving under complex road conditions has become an urgent technical problem to be solved.
[ summary of the invention ]
The embodiments of the application provide an obstacle fusion detection method, chip and terminal for a mine ramp, aiming to solve the technical problem in the related art that it is difficult to accurately detect the position information of obstacles when driving unmanned on complex road conditions.
In a first aspect, an embodiment of the present application provides an obstacle fusion detection method for a mine ramp, including: in response to detecting that a mine vehicle has driven onto a ramp, acquiring 3D point cloud data from a laser radar, a shot image from a camera, and detection data from a millimeter wave radar; determining first obstacle detection information, second obstacle detection information and third obstacle detection information based on the 3D point cloud data, the shot image and the detection data, respectively; projecting the 3D point cloud data in the first obstacle detection information onto the shot image of the camera to obtain 2D projection points corresponding to the 3D point cloud data; determining the 3D position information, on the shot image, of the obstacle reflected by the 3D point cloud data based on the relative position relationship between the 2D projection points and a 2D detection frame of the shot image; extracting first fusion information from the first obstacle detection information and the second obstacle detection information according to the degree of coincidence between the 2D projection frame corresponding to the 3D position information and the 2D detection frame; and determining second fusion information of the effective obstacle based on the first fusion information and the third obstacle detection information.
In a second aspect, an embodiment of the present application provides an obstacle fusion detection apparatus for a mine ramp, including: the detection data acquisition unit is used for responding to the detection that vehicles in the mining area drive into a ramp, and acquiring 3D point cloud data from a laser radar, shot images from a camera and detection data from a millimeter wave radar; an obstacle detection information determination unit configured to determine first obstacle detection information, second obstacle detection information, and third obstacle detection information, respectively, based on the 3D point cloud data, the captured image, and the detection data; the projection processing unit is used for projecting the 3D point cloud data in the first obstacle detection information onto a shot image of the camera to obtain a 2D projection point corresponding to the 3D point cloud data; a 3D position information acquisition unit, configured to determine, based on a relative position relationship between the 2D projection point and a 2D detection frame of the captured image, 3D position information of an obstacle on the captured image, which is reflected by the 3D point cloud data; a first fusion information generating unit, configured to extract first fusion information from the first obstacle detection information and the second obstacle detection information according to a degree of coincidence between a 2D projection frame corresponding to the 3D position information and the 2D detection frame; a second fusion information generating unit configured to determine second fusion information of the effective obstacle based on the first fusion information and the third obstacle detection information.
In a third aspect, an embodiment of the present application provides a computer device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being arranged to perform the method of any one of the first aspects above.
In a fourth aspect, an embodiment of the present application provides a storage medium storing computer-executable instructions for performing the method flow of any one of the above first aspects.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes at least one processor and a communication interface, where the communication interface is coupled to the at least one processor, and the at least one processor is configured to execute a computer program or instructions to implement the method of any one of the above first aspects.
In a sixth aspect, the present application provides a terminal including the obstacle fusion detection apparatus for a mine slope according to the second aspect.
The invention has the following beneficial effects. To address the technical problem in the related art that it is difficult to accurately detect the position information of obstacles when driving unmanned on complex road conditions: first, in response to detecting that a mine vehicle has driven onto a ramp, 3D point cloud data, a shot image and detection data for the ramp ahead of the mine vehicle are acquired through the laser radar, the camera and the millimeter wave radar respectively;
then, first obstacle detection information, second obstacle detection information and third obstacle detection information are determined based on the 3D point cloud data, the shot image and the detection data respectively, the three pieces of information reflecting the obstacles detected by the laser radar, the camera and the millimeter wave radar;
next, the 3D point cloud data in the first obstacle detection information is projected onto the shot image of the camera to obtain 2D projection points corresponding to the 3D point cloud data, each 2D projection point reflecting the projection position, on the shot image, of the obstacle corresponding to the 3D point cloud data;
the 2D detection frame of the shot image reflects the 2D position information of the obstacle detected in the shot image, while the 2D projection points reflect the projection position, on the shot image, of the obstacle corresponding to the 3D point cloud data. The difference between the two positions reflects, to some extent, the difference between the laser radar's 3D detection result and the camera's 2D detection result for the obstacle; based on this difference, the two can be fused and corrected to obtain the 3D position information of the obstacle within the 2D shot image;
furthermore, the 2D detection frame identifies the position occupied by the obstacle in the shot image, and the 2D projection frame identifies the projection position, in the shot image, of the obstacle detected by the laser radar. The degree of coincidence between the two frames therefore reflects how well the obstacle in the shot image represented by the 2D detection frame matches the laser-radar-detected obstacle identified by the 2D projection frame, and this matching degree determines how the corresponding first and second obstacle detection information are fused;
the 3D point cloud data and the obstacle recognition result of the shot image can thus be fused according to the degree of coincidence between the 2D projection frame corresponding to the 3D position information and the 2D detection frame. The laser radar offers high positioning precision and accurate ranging, while the camera's shot image makes targets easy to recognize; in other words, the first obstacle detection information carries accurate obstacle position data and the second carries accurate obstacle type data. The first fusion information obtained by fusing them can accurately reflect both the position and the type of the obstacle, achieves higher detection accuracy than the individual detection results of the laser radar and the camera, and helps improve the safety of unmanned driving;
moreover, the millimeter wave radar measures speed with high precision; in other words, the third obstacle detection information contains effective obstacle speed information. The second fusion information obtained by fusing the first fusion information with the third obstacle detection information can accurately reflect the position, type and speed of the obstacle; compared with the individual detection results of the laser radar, the camera and the millimeter wave radar, it is more comprehensive and accurate, improving the reliability and safety of unmanned driving.
In summary, for the scene in which a mine vehicle drives onto a ramp, the laser radar's 3D point cloud data can be projected onto the shot image to obtain the 3D position information of the obstacle within that 2D image; the detection results of the laser radar and the camera are fused into first fusion information, and the effective obstacle speed information detected by the millimeter wave radar is then fused with the first fusion information into second fusion information. The second fusion information thus combines the laser radar's high positioning precision and accurate ranging, the camera's accurate target recognition, and the millimeter wave radar's accurate speed measurement. In other words, the strengths of the three sensors' detection results are fused, the fusion result can accurately reflect multi-dimensional information about the obstacle, including its position, type and speed, and it is more comprehensive and accurate than any individual detection result, improving the reliability and safety of unmanned driving.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. Obviously, the drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a flow chart of an obstacle fusion detection method for a mine ramp according to one embodiment of the present application;
FIG. 2 shows a schematic monocular ranging according to one embodiment of the present application;
FIG. 3 shows a map model range diagram according to an embodiment of the present application;
FIG. 4 illustrates a schematic view of obstacle height correction according to an embodiment of the present application;
FIG. 5 shows a schematic view of an obstacle fusion detection arrangement for a mine ramp according to one embodiment of the present application;
fig. 6 shows a schematic view of an obstacle fusion detection arrangement for a mine ramp according to another embodiment of the present application;
FIG. 7 is a schematic structural diagram of a storage medium according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a chip according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
[ detailed description ]
In order to better understand the technical solution of the present application, the following detailed description is made with reference to the accompanying drawings.
Fig. 1 shows a flow chart of an obstacle fusion detection method for a mine ramp according to one embodiment of the present application.
As shown in fig. 1, the obstacle fusion detection method for a mine ramp according to one embodiment of the present application includes:
and 102, responding to the fact that the vehicles in the mining area are detected to drive into a ramp, and acquiring 3D point cloud data from a laser radar, shot images from a camera and detection data from a millimeter wave radar.
There are various ways to detect whether a mine vehicle has driven onto a ramp. In one possible design, the position information of the mine vehicle is acquired periodically, and a known map model is used to judge whether that position lies on a ramp; when it does, the mine vehicle is determined to have driven onto the ramp, and the obstacle fusion detection scheme is triggered. In another possible design, the heading information of the mine vehicle is acquired periodically and the angle between the heading and the horizontal plane is calculated; when this angle exceeds the range corresponding to driving on flat ground, the mine vehicle is determined to have driven onto the ramp, and the obstacle fusion detection scheme is triggered. Of course, the manner of detecting whether the mine vehicle has driven onto a ramp is not limited to the above examples, and may be any other manner that meets the actual obstacle detection requirement.
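As a hedged sketch of the second design above, the heading-angle check might look like the following; the threshold value and the heading-vector representation are illustrative assumptions, not values specified by the application.

```python
import math

# Assumed tuning value: pitch angles above this are treated as a ramp.
PITCH_THRESHOLD_DEG = 5.0

def is_on_ramp(heading_vector):
    """Return True when the vehicle heading deviates from the horizontal
    plane by more than the threshold, i.e. the vehicle is on a slope.

    heading_vector: (x, y, z) direction of travel (need not be unit length).
    """
    x, y, z = heading_vector
    horizontal = math.hypot(x, y)
    pitch_deg = math.degrees(math.atan2(abs(z), horizontal))
    return pitch_deg > PITCH_THRESHOLD_DEG
```

Calling `is_on_ramp((1.0, 0.0, 0.2))` (about an 11 degree pitch) would trigger the fusion scheme, while a level heading would not.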
Further, when the mine vehicle enters the slope, the information on the slope in front of the mine vehicle can be acquired through the laser radar, the camera and the millimeter wave radar. In one possible design, the heading angle of the mine vehicle can be obtained by combining inertial navigation, and the direction corresponding to the heading angle is used as the detection direction of the laser radar, the camera and the millimeter wave radar.
Step 104: determine first obstacle detection information, second obstacle detection information and third obstacle detection information based on the 3D point cloud data, the shot image and the detection data, respectively.
After the 3D point cloud data, the shot image and the detection data are acquired, a preset target recognition algorithm is applied to each of them to identify obstacles, yielding the corresponding first, second and third obstacle detection information. These reflect the obstacles detected by the laser radar, the camera and the millimeter wave radar, respectively. The preset target recognition algorithm includes, but is not limited to, a deep learning algorithm, a neural network model, or any other approach capable of target recognition.
Step 106: project the 3D point cloud data in the first obstacle detection information onto the shot image of the camera to obtain 2D projection points corresponding to the 3D point cloud data.
The 2D projection points corresponding to the 3D point cloud data reflect the projection position, on the shot image, of the obstacle corresponding to that point cloud data.
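A minimal sketch of such a projection, assuming a standard pinhole camera model; the intrinsic matrix `K` and the laser-radar-to-camera extrinsic `T_cam_lidar` are hypothetical calibration values, since the application does not give any.

```python
import numpy as np

# Assumed intrinsics for a 1280x720 camera (focal length 800 px).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_points(points_lidar, T_cam_lidar):
    """Project Nx3 laser-radar points into pixel coordinates.

    T_cam_lidar: 4x4 extrinsic transform from the radar frame to the
    camera frame. Returns an Mx2 array of 2D projection points; points
    behind the camera are dropped.
    """
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    cam = cam[cam[:, 2] > 0]           # keep points in front of the camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]      # perspective divide
```

With an identity extrinsic, a point 10 m straight ahead lands at the principal point (640, 360).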
Step 108: determine the 3D position information, on the shot image, of the obstacle reflected by the 3D point cloud data, based on the relative position relationship between the 2D projection points and the 2D detection frame of the shot image.
The 2D detection frame of the shot image reflects the 2D position information of the obstacle detected in the shot image, while the 2D projection points corresponding to the 3D point cloud data reflect the projection position, on the shot image, of the obstacle corresponding to that point cloud. The difference between the two positions reflects, to some extent, the difference between the laser radar's 3D detection result and the camera's 2D detection result for the obstacle. Based on this difference, the two can be fused and corrected to obtain the 3D position information of the obstacle within the 2D shot image.
Step 110: extract first fusion information from the first obstacle detection information and the second obstacle detection information according to the degree of coincidence between the 2D projection frame corresponding to the 3D position information and the 2D detection frame.
The 2D detection frame identifies the position occupied by the obstacle in the shot image, and the 2D projection frame identifies the projection position, in the shot image, of the obstacle detected by the laser radar. The degree of coincidence between the two frames therefore reflects how well the obstacle in the shot image represented by the 2D detection frame matches the laser-radar-detected obstacle identified by the 2D projection frame, and this matching degree determines how the corresponding first and second obstacle detection information are fused.
When the degree of coincidence between the 2D detection frame and the 2D projection frame is large enough, the obstacle in the shot image represented by the 2D detection frame can be considered to match the laser-radar-detected obstacle identified by the 2D projection frame; that is, the two are the same effective obstacle.
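The degree of coincidence between two boxes is commonly measured as intersection-over-union (IoU). The application does not name a specific metric or threshold, so both below are assumptions used only to illustrate the matching step.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match(projection_box, detection_box, threshold=0.5):
    """Treat the two boxes as the same effective obstacle when their
    coincidence is large enough. The 0.5 threshold is an assumed value."""
    return iou(projection_box, detection_box) >= threshold
```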
The 3D point cloud data and the obstacle recognition result of the shot image can therefore be fused according to the degree of coincidence between the 2D projection frame corresponding to the 3D position information and the 2D detection frame. The laser radar offers high positioning precision and accurate ranging, while the camera's shot image makes targets easy to recognize; in other words, the first obstacle detection information carries accurate obstacle position data and the second carries accurate obstacle type data. The first fusion information obtained by fusing them can accurately reflect both the position and the type of the obstacle, achieves higher detection accuracy than the individual detection results of the laser radar and the camera, and helps improve the safety of unmanned driving.
Step 112: determine second fusion information of the effective obstacle based on the first fusion information and the third obstacle detection information.
The millimeter wave radar measures speed with high precision; in other words, the third obstacle detection information contains effective obstacle speed information. The second fusion information obtained by fusing the first fusion information with the third obstacle detection information can accurately reflect the position, type and speed of the obstacle. Compared with the individual detection results of the laser radar, the camera and the millimeter wave radar, this result is more comprehensive and accurate, improving the reliability and safety of unmanned driving.
With the above technical solution, for the scene in which a mine vehicle drives onto a ramp, the laser radar's 3D point cloud data can be projected onto the shot image to obtain the 3D position information of the obstacle within that 2D image; the detection results of the laser radar and the camera are fused into first fusion information, and the effective obstacle speed information detected by the millimeter wave radar is then fused with the first fusion information into second fusion information. The second fusion information thus combines the laser radar's high positioning precision and accurate ranging, the camera's accurate target recognition, and the millimeter wave radar's accurate speed measurement. In other words, the strengths of the three sensors' detection results are fused, the fusion result can accurately reflect multi-dimensional information about the obstacle, including its position, type and speed, and it is more comprehensive and accurate than any individual detection result, improving the reliability and safety of unmanned driving.
The timestamps of the 3D point cloud data, the shot image, the real-time pose data of the mine vehicle, and the first, second and third obstacle detection information may be aligned with the timestamp of the millimeter wave radar's detection data. Synchronizing the timelines of these pieces of information ensures that, during fusion, each fused group of first, second and third obstacle detection information consists of data from the same moment, so the fusion result more accurately reflects the state of the obstacle at that moment, improving obstacle detection accuracy.
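One way to sketch this timestamp alignment, assuming each stream carries timestamps in seconds; the nearest-neighbour policy and the 50 ms tolerance are assumptions, not values from the application.

```python
import bisect

def align_to_reference(ref_stamps, other_stamps, tolerance=0.05):
    """For each reference timestamp (e.g. a millimeter wave radar frame),
    return the index of the closest sample in another sorted stream, or
    None when nothing lies within the tolerance (in seconds)."""
    matches = []
    for t in ref_stamps:
        i = bisect.bisect_left(other_stamps, t)
        # The closest sample is either just before or just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(other_stamps)]
        best = min(candidates, key=lambda j: abs(other_stamps[j] - t),
                   default=None)
        if best is not None and abs(other_stamps[best] - t) <= tolerance:
            matches.append(best)
        else:
            matches.append(None)
    return matches
```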
In one possible design, determining the 3D position information of the obstacle reflected by the 3D point cloud data on the shot image, based on the relative position relationship between the 2D projection points and the 2D detection frame of the shot image, proceeds as follows: for the 2D detection frame in which each obstacle is located on the shot image of the camera, if a plurality of 2D projection points fall inside the 2D detection frame, the center point of those 2D projection points is determined as the 3D position information of the obstacle on the shot image.
When the camera shoots an image, its target detection function identifies each obstacle on the shot image with a 2D detection frame. The 2D projection points are the mapping, onto the shot image, of an obstacle in the laser radar's 3D point cloud data, and they reflect that obstacle's position in the point cloud. If a plurality of 2D projection points fall inside the 2D detection frame, the obstacle identified by the 2D detection frame and the obstacle identified by the 2D projection points overlap and are likely the same obstacle. The center point of the 2D projection points can therefore be taken as the obstacle's position in the 3D point cloud data and, accordingly, as the 3D position information of the obstacle on the shot image.
If no 2D projection points fall inside the 2D detection frame, it is detected whether a plurality of 2D projection points exist within a specified distance range outside the 2D detection frame. When they do, the center point of those 2D projection points is determined as the 3D position information of the obstacle on the shot image; otherwise, the 3D position information of the obstacle on the shot image is obtained through a preset measurement mode.
When no 2D projection points fall inside the 2D detection frame, it can still be checked whether a plurality of 2D projection points exist near the frame. If they do, there remains some possibility that the obstacle identified by the 2D detection frame and the obstacle identified by the 2D projection points overlap and are the same obstacle, so the center point of the 2D projection points can be determined as the 3D position information of the obstacle on the shot image.
Wherein the specified distance range is positively correlated with the size of the 2D detection frame. The specified distance range outside the 2D detection frame can be considered to belong to the vicinity of the 2D detection frame, and the specified distance range can be adaptively set based on the size of the 2D detection frame, and the larger the 2D detection frame is, the larger the corresponding specified distance range is.
In one possible design, the specified distance range is 1/3 of the length of the bottom side of the 2D detection frame.
In another possible design, the specified distance range is 1/2 of the length of the diagonal of the 2D detection box.
Of course, the specified distance range may be any size that is positively correlated with the size of the 2D detection frame and meets the actual obstacle detection requirement, and is not limited to the examples given in this application.
If no 2D projection points exist near the 2D detection frame either, the obstacle identified by the 2D detection frame cannot overlap the obstacles identified by the 2D projection points, and in this case the 3D position information of the obstacle on the shot image is obtained through the preset measurement mode.
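The three-branch rule above (points inside the frame, points near the frame, fallback measurement) can be sketched as follows. The 1/3-bottom-edge margin is one of the designs mentioned in the application; the helper names and the fallback callable are hypothetical.

```python
def locate_obstacle(box, projection_points, fallback):
    """Return the obstacle position per the three-branch rule.

    box: 2D detection frame (x1, y1, x2, y2).
    projection_points: list of (u, v) 2D projection points.
    fallback: zero-argument callable implementing the preset measurement mode.
    """
    x1, y1, x2, y2 = box

    def centroid(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))

    # Branch 1: several projection points inside the detection frame.
    inside = [p for p in projection_points
              if x1 <= p[0] <= x2 and y1 <= p[1] <= y2]
    if len(inside) > 1:
        return centroid(inside)

    # Branch 2: several points within a margin positively correlated
    # with the frame size (here 1/3 of the bottom-edge length).
    margin = (x2 - x1) / 3.0
    near = [p for p in projection_points
            if x1 - margin <= p[0] <= x2 + margin
            and y1 - margin <= p[1] <= y2 + margin]
    if len(near) > 1:
        return centroid(near)

    # Branch 3: fall back to the preset measurement mode.
    return fallback()
```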
Through the above technical solution, the 3D position information obtained by projecting the laser radar's 3D point cloud data into the 2D shot image can be acquired, providing reliable position information for the subsequent fusion of the laser radar's and the camera's detection results and improving the accuracy of obstacle fusion detection.
In some embodiments of the present invention, obtaining the 3D position information according to the preset measurement mode includes: obtaining first relative position information between the mine vehicle and the obstacle through monocular ranging; determining an obstacle height based on the category of the obstacle; acquiring the intersection point of a specified ray and a specified section of a preset map road model as the bottom center point of the obstacle, wherein the specified ray takes the camera optical center as its origin and lies along the line connecting the camera optical center with the midpoint of the bottom edge of the obstacle's 2D detection frame, and the specified section is the section of the preset map road model on the vertical plane containing the mine vehicle's current driving direction; determining the position of the bottom center point after raising it by 1/2 of the obstacle height as second relative position information between the mine vehicle and the obstacle; and performing weighted summation on the first relative position information and the second relative position information to obtain the 3D position information of the obstacle on the shot image.
The principle of monocular ranging is shown in fig. 2. A target direction vector v is determined from the camera optical center and the center of the 2D detection frame; the real distance D from the obstacle to the camera is then determined by the similar-triangle principle from the pixel height h of the obstacle in the image, the real height H of the obstacle, and the focal length f of the camera. Finally, the target direction vector v corrects the real distance D: v x D is taken as the position of the obstacle relative to the camera, which serves as the first relative position information.
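Under the similar-triangle relation described above, the range estimate is D = f * H / h, and the relative position is the unit direction vector scaled by D. A minimal sketch (the parameter values in the test are illustrative only):

```python
import math

def monocular_distance(pixel_height, real_height_m, focal_length_px):
    """Similar-triangle range estimate: D = f * H / h."""
    return focal_length_px * real_height_m / pixel_height

def relative_position(direction_vector, distance):
    """Scale the normalized target direction vector v by the range D."""
    norm = math.sqrt(sum(c * c for c in direction_vector))
    return tuple(c / norm * distance for c in direction_vector)
```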
As shown in fig. 3, the second relative position information may be obtained as follows. A ray x is constructed with the camera optical center and the midpoint of the bottom edge of the obstacle's 2D detection frame as two points on the ray; ray x represents the light path of the obstacle's bottom during imaging. The intersection point of ray x with the preset map road model is then obtained; this intersection is the actual bottom center point of the obstacle. Next, based on the obstacle category obtained from the camera's target recognition, the actual height of the obstacle is looked up in a preset table of obstacle categories and corresponding heights. Finally, the position of the actual bottom center point raised by 1/2 of the actual height is taken as the height-corrected actual position of the obstacle, which serves as the second relative position information.
Then, the first relative position information and the second relative position information are subjected to weighted summation. Optionally, the weights of the first and second relative position information are each set to 0.5: the coordinates of the first relative position information are multiplied by the weight 0.5 to obtain a first product, the coordinates of the second relative position information are multiplied by the weight 0.5 to obtain a second product, and the sum of the two products is taken as the final 3D position information of the obstacle on the captured image.
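The weighted summation itself is a one-liner; a sketch with the optional 0.5/0.5 weights (function name illustrative):

```python
import numpy as np

def fuse_positions(pos_mono, pos_road, w1=0.5, w2=0.5):
    """Weighted sum of the first (monocular) and second (road-model)
    relative position estimates; 0.5/0.5 matches the optional design."""
    return w1 * np.asarray(pos_mono, float) + w2 * np.asarray(pos_road, float)

fused = fuse_positions([10.0, 0.0, 0.0], [12.0, 0.4, 0.0])
```

Other weightings (e.g. trusting the road model more at long range) would slot into the same formula.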
According to the above technical solution, the bottom center point of the obstacle can be determined through the preset map road model, and its position can be corrected using the height corresponding to the obstacle category, so that the position information of the obstacle is corrected and the accuracy of the obstacle detection result is improved. Meanwhile, the corrected position information can be further refined by combining it with the monocular distance measurement result, yielding more accurate obstacle position information and further improving the accuracy of the detection result.
In some embodiments of the present invention, after the 3D position information of the obstacle on the captured image is obtained and before it is used for fusion, that is, before the first fusion information is extracted from the first obstacle detection information and the second obstacle detection information, the method further includes: determining a vehicle coordinate system based on the real-time pose information of the mine vehicle; and converting the 3D position information into converted position information in the vehicle coordinate system, taking the calibration information of the 3D point cloud data and the calibration information of the captured image as reference information, so that the first fusion information can be extracted according to the coincidence degree of the converted position information and the 2D detection frame.
The real-time pose information of the mine vehicle includes its position and heading angle. The calibration information of the 3D point cloud data and of the captured image is obtained from a preset calibration file, which contains sample calibration information for collected sample 3D point cloud data and sample captured images. By converting the 3D position information into converted position information in the vehicle coordinate system, with the calibration information of the 3D point cloud data and of the captured image as reference, subsequent data processing can be performed entirely in the vehicle coordinate system, which facilitates computation during unmanned driving and improves computational efficiency.
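A schematic of the conversion into the vehicle coordinate system, under the assumption that the preset calibration file yields a rotation R and translation t from the sensor (camera) frame to the vehicle frame; the function name and mounting offset are illustrative:

```python
import numpy as np

def to_vehicle_frame(points_cam, R_cam2veh, t_cam2veh):
    """Apply the extrinsic calibration: p_veh = R @ p_cam + t.

    points_cam: one point (3,) or several (N, 3) in the camera frame.
    """
    pts = np.atleast_2d(np.asarray(points_cam, float))
    return (np.asarray(R_cam2veh, float) @ pts.T).T + np.asarray(t_cam2veh, float)

# Hypothetical example: camera mounted 2 m forward of the vehicle origin,
# axes already aligned with the vehicle frame
p_veh = to_vehicle_frame([0.0, 0.0, 10.0], np.eye(3), [2.0, 0.0, 0.0])
```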
In some embodiments of the present invention, the specific step of extracting the first fusion information from the first obstacle detection information and the second obstacle detection information includes: determining whether the obstacle is an effective obstacle based on the 3D position information and a predetermined fusible obstacle screening rule; and if the obstacle is an effective obstacle, determining, when the coincidence degree of the 2D detection frame and the 2D projection frame corresponding to the 3D position information is greater than or equal to a specified coincidence threshold, that the effective obstacle matches the obstacle within the 2D detection frame, and taking the obstacle position information in the first obstacle detection information and the obstacle category information in the second obstacle detection information as the first fusion information of the effective obstacle.
By applying the predetermined fusible obstacle screening rule, the effective obstacles whose detection information actually needs to be fused can be screened out from all detected obstacles, providing a reliable basis for the accuracy of unmanned driving.
Specifically, the predetermined fusible obstacle screening rule is: if the position indicated by the 3D position information is within the field angle range of the camera and the obstacle is a foreground target of the camera, the obstacle is determined to be an effective obstacle.
Because the field angle of the laser radar is larger than that of the camera, obstacles detected by both the laser radar and the camera can only lie in the intersection of their fields of view, that is, within the field angle range of the camera. Meanwhile, the camera can recognize a large number of obstacles, and camera imaging distinguishes foreground from background: the background may contain obstacles far ahead on the road, i.e. background targets, whereas the obstacles that must be detected and responded to during unmanned driving are usually those close to the camera or the mine vehicle, i.e. foreground targets. Therefore, the obstacles that need to be detected and responded to during unmanned driving can be determined as the foreground targets of the camera.
Based on the above, the obstacles detected by both the laser radar and the camera can be screened by requiring two conditions to hold simultaneously: the obstacle lies within the field angle range of the camera, and it belongs to the foreground targets of the camera. In this way, the common obstacles detected by both sensors can be effectively identified, facilitating the subsequent fusion of obstacle detection information.
In particular, a specified coincidence threshold may be set to represent the lowest coincidence degree at which an effective obstacle in the captured image, represented by the 2D detection frame, matches an effective obstacle detected by the laser radar, represented by the 2D projection frame. Accordingly, when the coincidence degree of the 2D detection frame and the 2D projection frame is greater than or equal to the specified coincidence threshold, the effective obstacle is determined to match the obstacle within the 2D detection frame.
At this time, since the laser radar has the advantages of high positioning accuracy and accurate distance measurement, while the image captured by the camera has the advantage of easy target recognition, the two advantages can be combined: the obstacle position information in the first obstacle detection information from the laser radar and the obstacle category information in the second obstacle detection information from the camera are taken as the first fusion information of the effective obstacle.
It should be added that, depending on actual detection requirements, the coincidence degree of the 2D detection frame and the 2D projection frame may be defined as: a first ratio of the intersection area of the two frames to the area of the 2D detection frame; or a second ratio of the intersection area to the area of the 2D projection frame; or a third ratio of the intersection area to the area of the smaller of the two frames; or a fourth ratio of the intersection area to the area of the larger of the two frames; or the maximum or the average of any number of the first, second, third, and fourth ratios. The present application does not further restrict how the coincidence degree of the 2D detection frame and the 2D projection frame corresponding to the 3D position information is defined; any way of defining the coincidence degree falls within the protection scope of the present application.
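The four ratios enumerated above all derive from the intersection area of two axis-aligned frames; a minimal sketch follows, assuming a (x1, y1, x2, y2) box format (the format and function names are illustrative):

```python
def intersection_area(a, b):
    """Intersection area of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def coincidence_ratios(det, proj):
    """The four candidate coincidence degrees for a 2D detection frame
    and a 2D projection frame."""
    inter = intersection_area(det, proj)
    r1 = inter / area(det)                    # vs the 2D detection frame
    r2 = inter / area(proj)                   # vs the 2D projection frame
    r3 = inter / min(area(det), area(proj))   # vs the smaller frame
    r4 = inter / max(area(det), area(proj))   # vs the larger frame
    return r1, r2, r3, r4

det, proj = (0.0, 0.0, 10.0, 10.0), (5.0, 5.0, 20.0, 20.0)
r1, r2, r3, r4 = coincidence_ratios(det, proj)
```

The maximum or average of any subset of `(r1, r2, r3, r4)` can then be compared against the specified coincidence threshold.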
The above scheme mainly fuses the common obstacles that are detected by the laser radar and belong to the foreground targets of the camera, for which the screening measures are taken. On the other hand, the background targets of the camera may include obstacles far ahead on the road, which also need to be handled.
Specifically, when the coincidence degree of the 2D detection frame and the 2D projection frame is smaller than the specified coincidence threshold, the obstacle in the 2D detection frame is set as an effective obstacle, and its obstacle category information and obstacle position information are set as the first fusion information of that effective obstacle. That is to say, since the camera detects targets accurately, the remaining obstacles in the captured image that are not matched with any obstacle detected by the laser radar under the above screening scheme may still be set as effective obstacles. These remaining obstacles include at least all background targets in the captured image of the camera.
The above technical solution improves the comprehensiveness of obstacle detection when a mine vehicle travels on a ramp, which helps improve driving safety.
In one possible design, the specific manner of determining the second fusion information of the effective obstacle based on the first fusion information and the third obstacle detection information includes: determining, based on the height-corrected position information of the obstacle in the third obstacle detection information and the position information of the effective obstacle closest to it, whether the relative distance between the obstacle in the third obstacle detection information and its closest effective obstacle is smaller than a specified distance threshold; if the relative distance is smaller than the specified distance threshold, determining that the obstacle in the third obstacle detection information matches its closest effective obstacle, wherein the specified distance threshold is a specified percentage of the length of that closest effective obstacle; and determining the speed information of the obstacle in the third obstacle detection information, together with the first fusion information of its closest effective obstacle, as the second fusion information of the effective obstacle.
That is, for each effective obstacle determined in the foregoing solution, the corresponding obstacle is matched in the third obstacle detection information of the millimeter wave radar, and in the case where the matching is successful, the speed information of the obstacle matched in the third obstacle detection information is given to the first fusion information, so that the second fusion information having accurate speed information is obtained.
Therefore, by virtue of the millimeter wave radar's high accuracy in measuring obstacle speed, the speed information it measures can be combined with the obstacle position information measured by the laser radar and the obstacle category information measured by the camera, so that the resulting second fusion information is more comprehensive and accurate, providing an accurate and reliable data basis for driving control in unmanned driving.
Specifically, if the relative distance between an obstacle in the third obstacle detection information and the effective obstacle closest to it is small enough, the two can be determined to match, i.e., to be the same obstacle. In other words, when the relative distance between the obstacle in the third obstacle detection information and its closest effective obstacle is smaller than the specified distance threshold, it is determined that the obstacle in the third obstacle detection information matches its closest effective obstacle.
Wherein the specified distance threshold is a specified percentage of the length of the effective obstacle closest to the obstacle in the third obstacle detection information. In one possible design, the specified distance threshold is 1/2 of the length of the effective obstacle closest to the obstacle in the third obstacle detection information. The specified percentage is not further limited, and any percentage which meets the actual application scenario is within the protection scope of the present application.
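Under these rules, the matching step can be sketched as follows; the dict layout and the 1/2-length default merely illustrate the possible design above:

```python
import numpy as np

def match_radar_obstacle(radar_pos, effective_obstacles, percent=0.5):
    """Match a (height-corrected) millimeter wave radar obstacle to the
    nearest effective obstacle, using a distance threshold equal to a
    specified percentage (default 1/2) of that obstacle's length.

    effective_obstacles: list of dicts with 'pos' (2D/3D array) and 'length' (m).
    Returns the matched effective obstacle, or None if no match.
    """
    if not effective_obstacles:
        return None
    nearest = min(effective_obstacles,
                  key=lambda o: np.linalg.norm(np.asarray(o["pos"]) - radar_pos))
    dist = np.linalg.norm(np.asarray(nearest["pos"]) - radar_pos)
    return nearest if dist < percent * nearest["length"] else None

effective = [{"pos": [0.0, 0.0], "length": 5.0},
             {"pos": [10.0, 0.0], "length": 4.0}]
m = match_radar_obstacle(np.array([1.0, 0.0]), effective)      # 1 m < 2.5 m
miss = match_radar_obstacle(np.array([10.0, 3.0]), effective)  # 3 m >= 2 m
```

On a successful match, the radar obstacle's speed information would then be attached to the effective obstacle's first fusion information to form the second fusion information.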
In addition, when the millimeter wave radar measures the obstacle distance, it can only measure the planar distance and cannot account for factors such as road slope. As a result, the height information of the obstacle in the third obstacle detection information deviates from the actual height of the obstacle; in other words, because of long distances and uneven road slopes, the obstacle position output by the millimeter wave radar may lie below the actual road surface. Therefore, to improve the accuracy of matching the obstacles detected by the millimeter wave radar against the effective obstacles, the height information of the obstacles in the third obstacle detection information can be corrected.
Specifically, as shown in fig. 4, an arc may be generated with the position of the millimeter wave radar as an origin and the effective detection distance of the millimeter wave radar as a radius r; and correcting the height information in the position information of the obstacle in the third obstacle detection information according to the intersection point of the circular arc and the road in the preset map model and the individual height of the obstacle in the third obstacle detection information.
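A 2D sketch of the arc-and-road intersection in the vertical section, reusing the polyline road-profile assumption from earlier. How exactly the intersection point and the individual height combine into the corrected height is not fully specified in the text, so the half-height offset below is an assumption, as are all names:

```python
import numpy as np

def arc_road_intersection(radar_pos, r, road_polyline):
    """Point on the road profile at range r from the radar, i.e. the
    intersection of the detection arc with the road in the vertical plane."""
    radar = np.asarray(radar_pos, float)
    for p0, p1 in zip(road_polyline[:-1], road_polyline[1:]):
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        d = p1 - p0
        f = p0 - radar
        # Solve |p0 + t*d - radar|^2 = r^2 for t in [0, 1]
        a, b, c = d @ d, 2 * (f @ d), (f @ f) - r * r
        disc = b * b - 4 * a * c
        if disc < 0:
            continue
        for t in ((-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)):
            if 0 <= t <= 1:
                return p0 + t * d
    return None

def corrected_height(road_point, individual_height):
    """Assumed correction: road surface height at the arc intersection
    plus half the obstacle's individual (category-based) height."""
    return road_point[1] + individual_height / 2

road = [(0.0, 0.0), (100.0, 0.0)]  # flat profile; a ramp would slope upward
pt = arc_road_intersection((0.0, 1.0), r=5.0, road_polyline=road)
z = corrected_height(pt, individual_height=1.6)
```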
According to the technical scheme, the height information of the obstacle detected by the millimeter wave radar is corrected, namely the position information of the obstacle detected by the millimeter wave radar is corrected, so that the position information can be matched with the effective obstacle based on the corrected position information, and the matching accuracy of the detection result of the millimeter wave radar and the effective obstacle is improved. On the basis, if the obstacle in the third obstacle detection information is determined to be matched with the effective obstacle based on the corrected position information of the millimeter wave radar, the obstacle speed information detected by the millimeter wave radar and the first fusion information of the effective obstacle can be fused to obtain second fusion information. Therefore, the height information of the obstacle detected by the millimeter wave radar is corrected, and the accuracy of the second fusion information, namely the final obstacle fusion detection result, is indirectly improved.
The individual height of an obstacle is obtained by looking up its actual height in the preset table of obstacle categories and corresponding heights, based on the obstacle category obtained by the camera through target recognition.
Fig. 5 shows a schematic view of an obstacle fusion detection arrangement for a mine ramp according to an embodiment of the present application.
In combination with the above technical solutions, the obstacle fusion detection apparatus for a mine ramp may be mounted externally on the vehicle or integrated into the vehicle as part of it. As shown in fig. 5, the obstacle fusion detection apparatus for a mine ramp according to an embodiment of the present application includes at least an input module, a detection module, and a fusion module. The input module may collect 3D point cloud data through the laser radar, collect 2D image data (i.e., captured images) through the camera, collect vehicle pose data through a GPS (Global Positioning System)/IMU (Inertial Measurement Unit), and obtain detection data through the millimeter wave radar. Meanwhile, the input module may also obtain calibration data from the sensor calibration parameters; this calibration data is used to convert the data in the above technical solutions into vehicle coordinates for calculation, reducing computational complexity. In addition, the input module may obtain the road model from a high-precision map provided by a third party.
The detection module comprises three sub-modules of 3D point cloud detection, 2D image detection and 2.5D detection, the 3D point cloud detection is used for identifying obstacles in 3D point cloud data of the laser radar, the 2D image detection is used for identifying obstacles in a shooting image of the camera, and the 2.5D detection is used for identifying obstacles in a detection result of the millimeter wave radar.
At this point, the input module transmits all the acquired data to the fusion module, and each submodule of the detection module also transmits its recognition result to the fusion module.
After receiving the data, the fusion module first fuses the detection results of the laser radar and the camera, obtaining the first fusion information through processes such as point cloud projection, image target ranging, coordinate conversion, data screening, and detection fusion. Then, the fusion module fuses the first fusion information with the detection results of the millimeter wave radar, obtaining the second fusion information through height correction, target matching, and speed assignment, which forms the final fusion detection result. The specific fusion process is shown in the foregoing embodiments and is not repeated here.
Aiming at the problem in the related art that unmanned mine vehicles cannot accurately detect obstacles because of ramps on mine roads, the above technical solution determines the 3D positions of obstacles in the 2D image detection results by fusing the detection results of the laser radar and the camera, and corrects the height information in the detection results of the millimeter wave radar during their fusion. This improves the accuracy and reliability of the detection results and benefits the safety of unmanned driving.
Fig. 6 shows a schematic view of an obstacle fusion detection arrangement for a mine ramp according to another embodiment of the present application.
As shown in fig. 6, an obstacle fusion detecting apparatus 600 for a mine slope according to another embodiment of the present application includes: a detection data acquisition unit 602, configured to acquire, in response to detection that a mine vehicle enters a ramp, 3D point cloud data from a laser radar, a captured image from a camera, and detection data from a millimeter wave radar; an obstacle detection information determination unit 604 for determining first obstacle detection information, second obstacle detection information, and third obstacle detection information, respectively, based on the 3D point cloud data, the captured image, and the detection data; a projection processing unit 606, configured to project the 3D point cloud data in the first obstacle detection information onto a captured image of the camera, so as to obtain a 2D projection point corresponding to the 3D point cloud data; a 3D position information obtaining unit 608, configured to determine, based on a relative position relationship between the 2D projection point and the 2D detection frame of the captured image, 3D position information of an obstacle reflected by the 3D point cloud data on the captured image; a first fusion information generating unit 610, configured to extract first fusion information from the first obstacle detection information and the second obstacle detection information according to a coincidence degree of a 2D projection frame corresponding to the 3D position information and the 2D detection frame; a second fusion information generating unit 612, configured to determine second fusion information of the effective obstacle based on the first fusion information and the third obstacle detection information.
In an embodiment of the present invention, optionally, the obstacle fusion detecting apparatus 600 for a mine slope further includes: a time alignment unit for aligning respective timestamps of the 3D point cloud data, the shot image, the real-time pose data of the mine vehicle, the first obstacle detection information, the second obstacle detection information, and the third obstacle detection information with a timestamp of the detection data of the millimeter wave radar.
In an embodiment of the present invention, optionally, the 3D position information obtaining unit 608 is configured to: for a 2D detection frame where each obstacle is located on a shot image of the camera, if a plurality of 2D projection points exist in the 2D detection frame, determining the central points of the 2D projection points as 3D position information of the obstacle on the shot image; if the 2D projection points do not exist in the 2D detection frame, detecting whether a plurality of 2D projection points exist in a specified distance range outside the 2D detection frame or not; when a plurality of 2D projection points exist in the specified distance range, determining the central points of the 2D projection points as the 3D position information of the obstacle on the shot image, wherein the specified distance range is positively correlated with the size of the 2D detection frame, otherwise, obtaining the 3D position information of the obstacle on the shot image according to a preset measurement mode.
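The logic of unit 608 described above can be sketched as follows. This is a minimal, non-authoritative illustration: the margin that grows with the detection frame size stands in for the "specified distance range" positively correlated with frame size (the 20% factor is an assumption), and averaging the 3D points whose projections fall in the frame is one plausible reading of "center point":

```python
import numpy as np

def obstacle_3d_position(box, proj_points, points_3d, margin_scale=0.2):
    """box: (x1, y1, x2, y2) 2D detection frame; proj_points: (N, 2) 2D
    projection points; points_3d: (N, 3) corresponding lidar points.
    Returns the center of the 3D points whose projections fall inside the
    frame, or inside a frame enlarged by a size-proportional margin;
    returns None if neither holds (fall back to the predetermined
    measurement mode)."""
    pts = np.asarray(proj_points, float)
    inside = ((pts[:, 0] >= box[0]) & (pts[:, 0] <= box[2]) &
              (pts[:, 1] >= box[1]) & (pts[:, 1] <= box[3]))
    if not inside.any():
        mx = margin_scale * (box[2] - box[0])  # margin grows with frame size
        my = margin_scale * (box[3] - box[1])
        inside = ((pts[:, 0] >= box[0] - mx) & (pts[:, 0] <= box[2] + mx) &
                  (pts[:, 1] >= box[1] - my) & (pts[:, 1] <= box[3] + my))
    if inside.any():
        return np.asarray(points_3d, float)[inside].mean(axis=0)
    return None

box = (0.0, 0.0, 10.0, 10.0)
pts2d = [[5.0, 5.0], [6.0, 6.0], [100.0, 100.0]]
pts3d = [[1.0, 0.0, 0.0], [3.0, 0.0, 0.0], [99.0, 0.0, 0.0]]
center = obstacle_3d_position(box, pts2d, pts3d)
```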
In an embodiment of the present invention, optionally, the 3D position information obtaining unit 608 is configured to: obtain first relative position information of the mine vehicle and the obstacle in a monocular distance measurement mode; determine an obstacle height based on the category of the obstacle; acquire an intersection point of a designated ray and a designated section of a preset map road model as the bottom center point of the obstacle, wherein the designated ray takes the camera optical center as its origin and lies along the line connecting the camera optical center and the midpoint of the bottom edge of the 2D detection frame of the obstacle, and the designated section is the section of the preset map road model on the vertical plane in which the current driving direction of the mine vehicle lies; determine the position of the bottom center point after it is raised by 1/2 of the obstacle height as second relative position information of the mine vehicle and the obstacle; and perform weighted summation processing on the first relative position information and the second relative position information to obtain the 3D position information of the obstacle on the captured image.
In an embodiment of the present invention, optionally, the apparatus further includes: a coordinate system conversion unit, configured to determine a vehicle coordinate system based on the real-time pose information of the mine vehicle before the first fusion information is extracted by the first fusion information generating unit 610; and to convert the 3D position information into converted position information in the vehicle coordinate system, taking the calibration information of the 3D point cloud data and the calibration information of the captured image as reference information, so that the first fusion information can be extracted according to the coincidence degree of the converted position information and the 2D detection frame.
In an embodiment of the present invention, optionally, the first fusion information generating unit 610 is configured to: determine whether the obstacle is an effective obstacle based on the 3D position information and a predetermined fusible obstacle screening rule, wherein the rule is: if the position indicated by the 3D position information is within the field angle range of the camera and the obstacle is a foreground target of the camera, the obstacle is determined to be an effective obstacle; and if the obstacle is an effective obstacle, determine, when the coincidence degree of the 2D detection frame and the 2D projection frame corresponding to the 3D position information is greater than or equal to the specified coincidence threshold, that the effective obstacle matches the obstacle within the 2D detection frame, and take the obstacle position information in the first obstacle detection information and the obstacle category information in the second obstacle detection information as the first fusion information of the effective obstacle.
In an embodiment of the present invention, optionally, the obstacle fusion detecting apparatus 600 for a mine slope further includes: the obstacle supplementing unit is used for setting the obstacles in the 2D detection frame as the effective obstacles when the coincidence degree of the 2D detection frame and the 2D projection frame is smaller than a specified coincidence degree threshold value; the first fusion information generating unit 610 is further configured to set the obstacle category information and the obstacle position information of the obstacle in the 2D detection frame as the first fusion information of the effective obstacle.
In an embodiment of the present invention, optionally, a coincidence ratio of the 2D detection frame and the 2D projection frame is: a first ratio of an intersection area of the 2D detection frame and the 2D projection frame to the 2D detection frame; or a second ratio of the intersection area of the 2D detection frame and the 2D projection frame to the 2D projection frame; or a third ratio of the intersection area of the 2D detection frame and the 2D projection frame to a frame with a smaller area in the 2D detection frame and the 2D projection frame; or a fourth ratio of the intersection area of the 2D detection frame and the 2D projection frame to a frame with a larger area in the 2D detection frame and the 2D projection frame; or the maximum value or the average of any number of the first ratio, the second ratio, the third ratio, and the fourth ratio.
In an embodiment of the present invention, optionally, the second fusion information generating unit 612 is configured to: generating an arc by taking the position of the millimeter wave radar as an origin and the effective detection distance of the millimeter wave radar as a radius; and correcting the height information in the position information of the obstacle in the third obstacle detection information according to the intersection point of the circular arc and the road in the preset map model and the individual height of the obstacle in the third obstacle detection information.
In an embodiment of the present invention, optionally, the second fusion information generating unit 612 is configured to: determining whether the relative distance between the obstacle in the third obstacle detection information and the effective obstacle closest to the obstacle in the third obstacle detection information is smaller than a specified distance threshold value or not based on the position information of the obstacle after the height information is corrected in the third obstacle detection information and the position information of the effective obstacle closest to the obstacle in the third obstacle detection information; if the relative distance is smaller than the specified distance threshold value, determining that the obstacle in the third obstacle detection information is matched with the effective obstacle closest to the third obstacle detection information, wherein the specified distance threshold value is a specified percentage of the length of the effective obstacle closest to the obstacle in the third obstacle detection information; and determining the speed information of the obstacle in the third obstacle detection information and the first fusion information of the effective obstacle closest to the third obstacle detection information as the second fusion information of the effective obstacle.
The obstacle fusion detection device 600 for the mine slope uses the solution described in any of the above embodiments, and therefore has all the technical effects described above, and is not described herein again.
Fig. 7 is a schematic structural diagram of a storage medium provided in an embodiment of the present application, and as shown in fig. 7, a computer-readable storage medium 700 stores a computer program 710, where the computer program 710 is used, when executed by a processor, to implement the method for detecting obstacle fusion for a mine slope according to any one of the foregoing embodiments. The method for detecting the fusion of obstacles on the mine ramp has been described in detail in the foregoing, and is not described in detail herein.
The methods described in the above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. Storage medium 700 may include computer storage media and communication media and may include any medium that can communicate a computer program from one place to another. A storage medium may be any target medium that can be accessed by a computer.
As one possible design, the storage medium 700 may include a compact disc read-only memory (CD-ROM), RAM, ROM, EEPROM, or other optical disk storage; the computer-readable medium may include disk memory or other disk storage devices. Also, any connecting line may properly be termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
Fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application, and as shown in fig. 8, the computer device 800 includes a memory 820, a processor 810 and a computer program stored on the memory 820 and executable by the processor, where the processor 810 executes the computer program 840 to perform the steps of the method of the present application, so as to implement obstacle fusion detection when a vehicle travels on a slope of a mine. Note that the computer program 840 in this embodiment is the same as the computer program 710 described above.
The memory 820 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. The memory 820 has a storage space 830 for storing a computer program 840 for performing any of the method steps of the above-described method. Computer program 840 can be read from and written to one or more computer program products. These computer program products comprise a program code carrier such as a hard disk, a Compact Disc (CD), a memory card or a floppy disk. Such a computer program product is typically a computer readable storage medium such as described in fig. 7. The computer device may include a plurality of processors, each of which may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores that process data (e.g., computer program instructions).
Fig. 9 is a schematic structural diagram of a chip according to an embodiment of the present disclosure. As shown in Fig. 9, the chip 900 includes one or more processors 910 and a communication interface 930. The communication interface 930 is coupled to the at least one processor 910, and the at least one processor 910 is configured to execute a computer program or instructions to perform the steps of the method of the present application, so as to implement obstacle fusion detection when a vehicle travels on a mine ramp.
Preferably, the memory 940 stores the following elements: executable modules or data structures, or a subset or an extended set thereof.
In the illustrated embodiment, memory 940 may include both read-only memory and random-access memory, and provides instructions and data to processor 910. A portion of memory 940 may also include non-volatile random access memory (NVRAM).
In the illustrated embodiment, the processor 910, the communication interface 930, and the memory 940 are coupled together via a bus system 920. The bus system 920 may include a power bus, a control bus, a status signal bus, and the like, in addition to the data bus. For ease of description, the various buses are identified in Fig. 9 as the bus system 920.
The method described in the embodiments of the present application may be applied to the processor 910, or implemented by the processor 910. The processor 910 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware or by instructions in software in the processor 910. The processor 910 may be a general-purpose processor (e.g., a microprocessor or a conventional processor), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an FPGA (field-programmable gate array) or other programmable logic device, discrete gate, transistor logic device or discrete hardware component, and the processor 910 may implement or execute the methods, steps and logic blocks disclosed in the embodiments of the present invention.
Fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present invention, and as shown in fig. 10, the terminal 1000 includes an obstacle fusion detection apparatus 600 for a mine ramp according to the above-mentioned technical solution of the present application.
The terminal 1000 can execute the steps of the method in the application through the obstacle fusion detection device 600 for the mine slope, and can realize the obstacle fusion detection of the vehicle running on the mine slope. It can be understood that the implementation manner of the terminal 1000 for controlling the obstacle fusion detection apparatus 600 for a mine slope may be set according to an actual application scenario, and the embodiment of the present application is not particularly limited.
The terminal 1000 includes, but is not limited to: a vehicle, which can implement the method provided by the present application through a vehicle-mounted terminal, a vehicle-mounted controller, a vehicle-mounted module, a vehicle-mounted component, a vehicle-mounted chip, a vehicle-mounted unit, a vehicle-mounted radar, or a camera. The vehicles in this application include passenger cars and commercial vehicles. Common commercial vehicle types include, but are not limited to: pickup trucks, mini trucks, mini passenger vehicles, dump trucks, tractors, trailers, special-purpose vehicles, mining vehicles, and the like. Mining vehicles include, but are not limited to, mine trucks, wide-body trucks, articulated trucks, excavators, power shovels, bulldozers, and the like. The vehicle type is not further limited in this application; any vehicle type falls within its protection scope.
The technical solution of the present application has been described in detail with reference to the accompanying drawings. In a scenario where a mine vehicle enters a ramp, 3D point cloud data from the lidar is projected onto the captured image to obtain the 3D position information of an obstacle within the 2D image; the detection results of the lidar and the camera are fused to obtain first fusion information; and the effective obstacle speed information detected by the millimeter-wave radar is then fused with the first fusion information to obtain second fusion information. The second fusion information thus combines the high positioning precision and accurate ranging of the lidar, the accurate target recognition of the camera, and the accurate speed measurement of the millimeter-wave radar. In other words, the strengths of the three sensors' detection results are fused, so the obtained result accurately reflects multi-dimensional information such as the position, type, and speed of the obstacle, is more comprehensive and accurate than the independent detection results of the lidar, the camera, or the millimeter-wave radar alone, and improves the reliability and safety of unmanned driving.
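As an illustration of the projection step described above, the following is a minimal Python sketch of projecting lidar 3D points onto a camera image using a calibrated lidar-to-camera extrinsic matrix and a camera intrinsic matrix. All names and values are assumptions for illustration; the patent does not specify a concrete implementation:

```python
import numpy as np

def project_points(points_3d, extrinsic, intrinsic):
    """Project Nx3 lidar points into pixel coordinates.

    points_3d: (N, 3) array in the lidar frame.
    extrinsic: (4, 4) lidar-to-camera transform (assumed calibrated).
    intrinsic: (3, 3) camera matrix K.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    n = points_3d.shape[0]
    homo = np.hstack([points_3d, np.ones((n, 1))])   # homogeneous coordinates (N, 4)
    cam = (extrinsic @ homo.T).T[:, :3]              # points in the camera frame
    in_front = cam[:, 2] > 0                         # keep points ahead of the lens
    pix = (intrinsic @ cam.T).T                      # (N, 3) before perspective divide
    pix = pix[:, :2] / pix[:, 2:3]                   # divide by depth to get pixels
    return pix, in_front
```

For example, with an identity extrinsic and an intrinsic matrix with focal length 100 and principal point (64, 64), the lidar point (0, 0, 10) projects to the pixel (64, 64).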
It should be understood that although the terms first, second, etc. may be employed to describe the obstacle detection information in the embodiments of the present application, the obstacle detection information should not be limited to these terms. These terms are used only to distinguish the obstacle detection information from each other. For example, the first obstacle detection information may also be referred to as second obstacle detection information, and similarly, the second obstacle detection information may also be referred to as first obstacle detection information, without departing from the scope of the embodiments of the present application.
The word "if," as used herein, may be interpreted as "when" or "upon" or "in response to a determination" or "in response to a detection," depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to detecting (a stated condition or event)," depending on the context.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the examples of this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The above description is only a preferred embodiment of the present application and should not be taken as limiting the present application, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. An obstacle fusion detection method for a mine ramp, comprising:
in response to the fact that the vehicle in the mining area drives into the ramp, acquiring 3D point cloud data from a laser radar, a shot image from a camera and detection data from a millimeter wave radar;
respectively determining first obstacle detection information, second obstacle detection information and third obstacle detection information based on the 3D point cloud data, the shot image and the detection data;
projecting the 3D point cloud data in the first obstacle detection information onto a shot image of the camera to obtain a 2D projection point corresponding to the 3D point cloud data;
determining 3D position information of an obstacle reflected by the 3D point cloud data on the shot image based on the relative position relation between the 2D projection point and a 2D detection frame of the shot image;
extracting first fusion information from the first obstacle detection information and the second obstacle detection information according to the degree of coincidence between the 2D projection frame corresponding to the 3D position information and the 2D detection frame;
determining second fusion information of the effective obstacle based on the first fusion information and the third obstacle detection information.
2. The method according to claim 1, wherein the determining 3D position information of the obstacle on the captured image reflected by the 3D point cloud data based on the relative position relationship between the 2D projection point and the 2D detection frame of the captured image comprises:
for the 2D detection frame where each obstacle is located on the captured image of the camera,
if a plurality of 2D projection points exist in the 2D detection frame, determining the central point of the plurality of 2D projection points as the 3D position information of the obstacle on the shot image;
if the 2D projection points do not exist in the 2D detection frame, detecting whether a plurality of 2D projection points exist in a specified distance range outside the 2D detection frame;
when a plurality of 2D projection points exist within the specified distance range, determining the central point of the plurality of 2D projection points as the 3D position information of the obstacle on the shot image, wherein the specified distance range is positively correlated with the size of the 2D detection frame; otherwise, obtaining the 3D position information of the obstacle on the shot image according to a predetermined measurement mode.
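The decision logic of claim 2 can be sketched as follows in Python. The margin factor tying the search range to the box size is a hypothetical choice — the claim only requires that the range be positively correlated with the size of the 2D detection frame:

```python
import numpy as np

def locate_obstacle(box, proj_points, margin_scale=0.1):
    """Assign a 3D position to the obstacle in `box`, per the claim-2 logic.

    box: (x_min, y_min, x_max, y_max) 2D detection frame.
    proj_points: list of (u, v, point3d) tuples pairing each 2D projection
                 with its source 3D point.
    margin_scale: hypothetical factor tying the search margin to the box size.
    Returns the mean of the matched 3D points, or None to signal the fallback
    to the predetermined measurement mode (claim 3).
    """
    x0, y0, x1, y1 = box
    inside = [p for (u, v, p) in proj_points if x0 <= u <= x1 and y0 <= v <= y1]
    if inside:
        return np.mean(inside, axis=0)          # central point of in-box projections
    # No points inside: widen the box by a margin proportional to its size.
    mx, my = margin_scale * (x1 - x0), margin_scale * (y1 - y0)
    near = [p for (u, v, p) in proj_points
            if x0 - mx <= u <= x1 + mx and y0 - my <= v <= y1 + my]
    if near:
        return np.mean(near, axis=0)
    return None  # fall back to the predetermined measurement mode
```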
3. The method according to claim 2, wherein the obtaining 3D position information of the obstacle on the captured image according to the predetermined measurement mode comprises:
obtaining first relative position information of the mine vehicle and the barrier in a monocular distance measuring mode;
determining an obstacle height based on the category of the obstacle;
acquiring an intersection point of a designated ray and a designated section of a preset map road model as a bottom central point of the barrier, wherein the designated ray is a ray which takes a camera optical center as an original point and is located by a connecting line of the camera optical center and a bottom edge midpoint of the 2D detection frame of the barrier, and the designated section is a section of the preset map road model on a vertical plane where the current driving direction of the mine vehicle is located;
determining the position of the bottom center point after the bottom center point is raised by 1/2 of the height of the obstacle as second relative position information of the mining area vehicle and the obstacle;
and carrying out weighted summation processing on the first relative position information and the second relative position information to obtain the 3D position information of the obstacle on the shot image.
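A minimal sketch of the combination step in claim 3: the bottom center point obtained from the map road model is raised by half the obstacle height, and the result is blended with the monocular-ranging estimate by a weighted sum. The weights are assumptions; the claim does not fix their values:

```python
def second_relative_position(bottom_center, obstacle_height):
    """Raise the map-derived bottom center point by half the obstacle height."""
    x, y, z = bottom_center
    return (x, y, z + obstacle_height / 2.0)

def fuse_positions(mono_pos, map_pos, w_mono=0.5):
    """Weighted sum of the monocular-ranging and map-ray position estimates.

    The weight w_mono is an illustrative assumption; the claim only states
    that the two estimates are combined by weighted summation.
    """
    return tuple(w_mono * a + (1.0 - w_mono) * b for a, b in zip(mono_pos, map_pos))
```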
4. The method of claim 1, wherein determining second fused information for the valid obstacle based on the first fused information and the third obstacle detection information comprises:
generating an arc by taking the position of the millimeter wave radar as an origin and the effective detection distance of the millimeter wave radar as a radius;
and correcting the height information in the position information of the obstacle in the third obstacle detection information according to the intersection point of the circular arc and the road in the preset map model and the individual height of the obstacle in the third obstacle detection information.
5. The method of claim 4, wherein the determining second fused information of the valid obstacle based on the first fused information and the third obstacle detection information comprises:
determining, based on the position information of the obstacle in the third obstacle detection information after the height information is corrected and the position information of the effective obstacle closest to it, whether the relative distance between the obstacle in the third obstacle detection information and that closest effective obstacle is smaller than a specified distance threshold;
if the relative distance is smaller than the specified distance threshold, determining that the obstacle in the third obstacle detection information matches the closest effective obstacle,
wherein the specified distance threshold is a specified percentage of the length of the effective obstacle closest to the obstacle in the third obstacle detection information;
and determining the speed information of the obstacle in the third obstacle detection information together with the first fusion information of the closest effective obstacle as the second fusion information of the effective obstacle.
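The matching rule of claim 5 — associating a millimeter-wave radar obstacle with the nearest effective obstacle when their relative distance is below a specified percentage of that obstacle's length — can be sketched as follows (the data layout and the 50% threshold are illustrative assumptions):

```python
import math

def match_radar_obstacle(radar_pos, effective_obstacles, pct=0.5):
    """Match a radar-detected obstacle to the nearest effective obstacle.

    radar_pos: (x, y) position of the radar obstacle after height correction.
    effective_obstacles: list of dicts with 'pos' (x, y) and 'length' keys.
    pct: hypothetical specified percentage of the obstacle length.
    Returns the matched effective obstacle, or None if no match.
    """
    if not effective_obstacles:
        return None
    nearest = min(effective_obstacles,
                  key=lambda o: math.dist(radar_pos, o["pos"]))
    threshold = pct * nearest["length"]  # threshold scales with obstacle length
    if math.dist(radar_pos, nearest["pos"]) < threshold:
        return nearest
    return None
```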
6. The method according to claim 2, wherein before extracting first fusion information from the first obstacle detection information and the second obstacle detection information based on the degree of coincidence between the 2D projection frame corresponding to the 3D position information and the 2D detection frame, the method further comprises:
determining a vehicle coordinate system based on the real-time pose information of the mine vehicle;
and converting the 3D position information into converted position information in the vehicle coordinate system by taking the calibration information in the 3D point cloud data and the calibration information in the shot image as reference information, so as to extract the first fusion information according to the degree of coincidence between the converted position information and the 2D detection frame.
7. The method according to claim 2, wherein the extracting first fusion information from the first obstacle detection information and the second obstacle detection information according to the degree of coincidence between the 2D projection frame and the 2D detection frame corresponding to the 3D position information includes:
determining whether the obstacle is a valid obstacle based on the 3D position information and a predetermined fusible obstacle screening rule, wherein the predetermined fusible obstacle screening rule is: if the position indicated by the 3D position information is within the field angle range of the camera and the obstacle is a foreground target of the camera, determining that the obstacle is an effective obstacle;
if the obstacle is the effective obstacle, when the coincidence degree of a 2D detection frame and a 2D projection frame corresponding to the 3D position information is larger than or equal to a specified coincidence degree threshold value, the effective obstacle is determined to be matched with the obstacle in the 2D detection frame, and the obstacle position information in the first obstacle detection information and the obstacle type information in the second obstacle detection information are used as first fusion information of the effective obstacle.
8. The method according to claim 7, wherein the coincidence degree of the 2D detection frame and the 2D projection frame corresponding to the 3D position information is:
a first ratio of an intersection area of the 2D detection frame and the 2D projection frame to the 2D detection frame; or
A second ratio of an intersection area of the 2D detection frame and the 2D projection frame to the 2D projection frame; or
A third ratio of the intersection area of the 2D detection frame and the 2D projection frame to a frame with a smaller area in the 2D detection frame and the 2D projection frame; or
A fourth ratio of the intersection area of the 2D detection frame and the 2D projection frame to a frame with a larger area in the 2D detection frame and the 2D projection frame; or
A maximum value or an average of any number of the first ratio, the second ratio, the third ratio, and the fourth ratio.
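The four coincidence-degree ratios enumerated in claim 8 can be computed from the two boxes' intersection area, for example as follows (axis-aligned boxes are assumed):

```python
def overlap_ratios(det, proj):
    """Compute the four coincidence-degree ratios of claim 8.

    det, proj: (x_min, y_min, x_max, y_max) boxes for the 2D detection frame
    and the 2D projection frame.
    Returns (first, second, third, fourth) ratios: intersection over the
    detection frame, the projection frame, the smaller frame, the larger frame.
    """
    ix0, iy0 = max(det[0], proj[0]), max(det[1], proj[1])
    ix1, iy1 = min(det[2], proj[2]), min(det[3], proj[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)  # intersection area

    def area(b):
        return (b[2] - b[0]) * (b[3] - b[1])

    a_det, a_proj = area(det), area(proj)
    return (inter / a_det,
            inter / a_proj,
            inter / min(a_det, a_proj),
            inter / max(a_det, a_proj))
```

The coincidence degree used for matching could then be any one of these ratios, or the maximum or average of several of them, as the claim states.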
9. A chip, characterized in that the chip comprises at least one processor and a communication interface, the communication interface being coupled with the at least one processor, the at least one processor being configured to run a computer program or instructions to implement the method of obstacle fusion detection for a mine ramp according to any of claims 1-8.
10. A terminal, characterized in that the terminal comprises an obstacle fusion detection apparatus for a mine ramp configured to perform the obstacle fusion detection method for a mine ramp according to any of claims 1-8.
CN202211489480.3A 2022-11-25 2022-11-25 Obstacle fusion detection method, chip and terminal for mine ramp Pending CN115729245A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211489480.3A CN115729245A (en) 2022-11-25 2022-11-25 Obstacle fusion detection method, chip and terminal for mine ramp


Publications (1)

Publication Number Publication Date
CN115729245A true CN115729245A (en) 2023-03-03

Family

ID=85298303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211489480.3A Pending CN115729245A (en) 2022-11-25 2022-11-25 Obstacle fusion detection method, chip and terminal for mine ramp

Country Status (1)

Country Link
CN (1) CN115729245A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115908768A (en) * 2023-03-10 2023-04-04 超音速人工智能科技股份有限公司 Battery defect identification method, system and platform based on combined type lighting shooting
CN116721093A (en) * 2023-08-03 2023-09-08 克伦斯(天津)轨道交通技术有限公司 Subway rail obstacle detection method and system based on neural network
CN116721093B (en) * 2023-08-03 2023-10-31 克伦斯(天津)轨道交通技术有限公司 Subway rail obstacle detection method and system based on neural network

Similar Documents

Publication Publication Date Title
CN111712731B (en) Target detection method, target detection system and movable platform
CN115729245A (en) Obstacle fusion detection method, chip and terminal for mine ramp
CN111192295B (en) Target detection and tracking method, apparatus, and computer-readable storage medium
EP2889641B1 (en) Image processing apparatus, image processing method, program and image processing system
US10366295B2 (en) Object recognition apparatus
CN109583416B (en) Pseudo lane line identification method and system
US20210394761A1 (en) Method and Processing Unit for Determining Information With Respect to an Object in an Environment of a Vehicle
CN112149460A (en) Obstacle detection method and device
CN113743171A (en) Target detection method and device
CN114550142A (en) Parking space detection method based on fusion of 4D millimeter wave radar and image recognition
CN111079782B (en) Vehicle transaction method and system, storage medium and electronic device
CN115755000A (en) Safety verification method and device for environment sensing equipment of obstacle detection system
CN115205803A (en) Automatic driving environment sensing method, medium and vehicle
CN110940974B (en) Object detection device
CN114118253B (en) Vehicle detection method and device based on multi-source data fusion
CN114730004A (en) Object recognition device and object recognition method
CN112639811A (en) Method for evaluating sensor data with extended object recognition
CN114084129A (en) Fusion-based vehicle automatic driving control method and system
US11861914B2 (en) Object recognition method and object recognition device
CN110539748A (en) congestion car following system and terminal based on look around
CN115359332A (en) Data fusion method and device based on vehicle-road cooperation, electronic equipment and system
Wang et al. A system of automated training sample generation for visual-based car detection
Sadik et al. Vehicles detection and tracking in advanced & automated driving systems: Limitations and challenges
CN115661366B (en) Method and image processing device for constructing three-dimensional scene model
WO2023248341A1 (en) Object detection device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination