CN114529789A - Target detection method, target detection device, computer equipment and storage medium


Publication number: CN114529789A
Authority: CN (China)
Prior art keywords: detection result, point cloud, camera, detection, target
Legal status: Pending
Application number: CN202011189874.8A
Other languages: Chinese (zh)
Inventors: 王邓江, 刘建超, 关喜嘉, 邓永强
Assignee (current and original): Beijing Wanji Technology Co Ltd
Application filed by Beijing Wanji Technology Co Ltd; priority to CN202011189874.8A; publication of CN114529789A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques


Abstract

The application discloses a target detection method and apparatus, a computer device, and a storage medium, relating to the technical field of road detection. The target detection method comprises: acquiring a millimeter wave point cloud and a camera image captured at the same moment; acquiring a point cloud detection result according to the millimeter wave point cloud; performing target detection on the camera image to obtain a camera detection result; matching the point cloud detection result with the camera detection result, and fusing the successfully matched point cloud and camera detection results to obtain a fusion result; and obtaining a target detection result according to the unmatched point cloud detection result, the unmatched camera detection result, and the fusion result. In the embodiments of the application, the target detection result is determined based on the fusion result together with the unmatched point cloud and camera detection results, which improves the accuracy of the target detection result.

Description

Target detection method, target detection device, computer equipment and storage medium
Technical Field
The embodiments of the present disclosure relate to the technical field of road detection, and in particular to a target detection method, a target detection apparatus, a computer device, and a storage medium.
Background
With more and more vehicles in cities, supervising the driving process of vehicles becomes increasingly difficult, and in practical applications it is often necessary to detect targets on the road.
In the prior art, a method for detecting a target on a road generally comprises: installing a camera on the road, continuously photographing targets on the road with the camera to obtain a plurality of images, performing target recognition on each image to determine the targets in it, and then determining a target detection result according to the positions of each target across the consecutive images.
However, the accuracy of the target detection result obtained by the above method is low.
Disclosure of Invention
In view of the problems with the above method, it is necessary to provide a target detection method, a target detection apparatus, a computer device, and a storage medium that can improve the accuracy of the target detection result.
A method of target detection, the method comprising:
acquiring a millimeter wave point cloud and a camera image at the same moment;
acquiring a point cloud detection result according to the millimeter wave point cloud;
carrying out target detection on the camera image to obtain a camera detection result;
matching the point cloud detection result with the camera detection result, and fusing the successfully matched point cloud detection result with the camera detection result to obtain a fusion result;
and obtaining a target detection result according to the unmatched point cloud detection result, the unmatched camera detection result, and the fusion result.
In one embodiment, acquiring a point cloud detection result according to a millimeter wave point cloud includes:
mapping the millimeter wave point cloud to a pixel coordinate system of the camera image by using the calibration parameters to obtain a millimeter wave point cloud to be processed;
and carrying out target detection on the millimeter wave point cloud to be processed to obtain a point cloud detection result.
In one embodiment, acquiring a point cloud detection result according to a millimeter wave point cloud includes:
performing target detection on the millimeter wave point cloud to obtain an initial point cloud detection result;
and converting the initial point cloud detection result into a pixel coordinate system of the camera image by using the calibration parameters to obtain a point cloud detection result.
In one embodiment, the target detection of the millimeter wave point cloud to obtain an initial point cloud detection result includes:
and processing the millimeter wave point cloud by using a digital beam forming algorithm of the phased array radar to obtain an initial point cloud detection result, wherein the initial point cloud detection result comprises a course angle, a speed and a position of the target.
In one embodiment, the performing target detection on the camera image to obtain a camera detection result includes:
and performing target detection on the camera image by using an image target detection algorithm, and outputting a camera detection result, wherein the camera detection result includes the position, category, and category confidence of the target.
In one embodiment, matching the point cloud detection result with the camera detection result includes:
acquiring a point cloud positioning frame included in a point cloud detection result;
acquiring a camera positioning frame included in a camera detection result;
calculating the intersection-over-union (IoU) of each point cloud positioning frame with each camera positioning frame; and if the IoU of a point cloud positioning frame and a camera positioning frame is greater than an IoU threshold, determining that the point cloud positioning frame and the camera positioning frame are successfully matched, where a successful match indicates that the targets corresponding to the point cloud positioning frame and the camera positioning frame are the same target.
In one embodiment, performing fusion processing on the successfully matched point cloud detection result and camera detection result to obtain a fusion result includes:
acquiring motion characteristics of the corresponding target based on the point cloud detection result, wherein the motion characteristics comprise a course angle, a speed and a position;
and acquiring category characteristics of the corresponding target based on the camera detection result, wherein the category characteristics comprise categories, category confidence degrees and color information.
In one embodiment, obtaining a target detection result according to the unmatched point cloud detection result, the unmatched camera detection result, and the fusion result includes:
taking the point cloud detection result which is not successfully matched as a detection result of a first area, wherein the first area is an area which is in a detection area of the millimeter wave radar and is not in the detection area of the camera, and the detection area of the millimeter wave radar and the detection area of the camera have an overlapping area;
taking the fusion result as the detection result of the overlapping area;
taking the camera detection result which is not successfully matched as the detection result of a second area, wherein the second area is an area which is not in the detection area of the millimeter wave radar and is in the detection area of the camera;
and splicing the detection result of the first area, the detection result of the overlapping area and the detection result of the second area, and outputting a target detection result.
A target detection apparatus, the apparatus comprising:
the first acquisition module is used for acquiring the millimeter wave point cloud and the camera image at the same moment;
the second acquisition module is used for acquiring a point cloud detection result according to the millimeter wave point cloud;
the first detection module is used for carrying out target detection on the camera image to obtain a camera detection result;
the fusion module is used for matching the point cloud detection result with the camera detection result and fusing the successfully matched point cloud detection result with the camera detection result to obtain a fusion result;
and the detection result determining module is used for obtaining a target detection result according to the unmatched point cloud detection result, the unmatched camera detection result, and the fusion result.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
acquiring a millimeter wave point cloud and a camera image at the same moment;
acquiring a point cloud detection result according to the millimeter wave point cloud;
carrying out target detection on the camera image to obtain a camera detection result;
matching the point cloud detection result with the camera detection result, and fusing the successfully matched point cloud detection result with the camera detection result to obtain a fusion result;
and obtaining a target detection result according to the unmatched point cloud detection result, the unmatched camera detection result, and the fusion result.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a millimeter wave point cloud and a camera image at the same moment;
acquiring a point cloud detection result according to the millimeter wave point cloud;
carrying out target detection on the camera image to obtain a camera detection result;
matching the point cloud detection result with the camera detection result, and fusing the successfully matched point cloud detection result with the camera detection result to obtain a fusion result;
and obtaining a target detection result according to the unmatched point cloud detection result, the unmatched camera detection result, and the fusion result.
The beneficial effects of the technical solutions provided by the embodiments of the application include at least the following:
The target detection method and apparatus, computer device, and storage medium can improve the accuracy of the target detection result. The target detection method comprises: acquiring a millimeter wave point cloud and a camera image captured at the same moment; acquiring a point cloud detection result according to the millimeter wave point cloud; performing target detection on the camera image to obtain a camera detection result; matching the point cloud detection result with the camera detection result, and fusing the successfully matched point cloud and camera detection results to obtain a fusion result; and obtaining a target detection result according to the unmatched point cloud detection result, the unmatched camera detection result, and the fusion result. In the embodiments of the application, the target detection result is determined based on the fusion result together with the unmatched point cloud and camera detection results, which improves the accuracy of the target detection result.
Drawings
Fig. 1 is a schematic diagram of an implementation environment of a target detection method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of distortion provided by an embodiment of the present application;
fig. 3 is a flowchart of a target detection method according to an embodiment of the present application;
fig. 4 is a flowchart of a method for obtaining a point cloud detection result according to a millimeter wave point cloud according to an embodiment of the present application;
fig. 5 is a flowchart of another method for obtaining a point cloud detection result according to a millimeter wave point cloud according to the embodiment of the present application;
fig. 6 is a flowchart of a method for matching a point cloud detection result with a camera detection result according to an embodiment of the present application;
fig. 7 is a flowchart of a method for determining a target detection result according to an embodiment of the present application;
fig. 8 is a block diagram of a target detection apparatus according to an embodiment of the present application;
fig. 9 is an internal structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
With more and more vehicles in cities, it is increasingly difficult to supervise the driving process of vehicles, and in practical applications it is often necessary to identify and track the vehicles on the road. The increasingly complex road traffic environment is pushing the requirements on intelligent traffic management systems in China toward greater intelligence, comprehensiveness, accuracy, and real-time performance, which must rely on reliable, real-time, and accurate detection data. Currently, detection means such as big data, geomagnetism, video, and millimeter wave radar provide multi-modal data.
In the prior art, a method for identifying and tracking the position of a vehicle on a road generally comprises: installing a camera on the road, continuously photographing the vehicles on the road with the camera to obtain a plurality of monitoring images, performing target recognition on each monitoring image to determine the target objects in it, then determining the running track of each target object according to its positions across the consecutive monitoring images, and calculating the speed of the target object based on the running track.
However, on the one hand, the speed of a target object determined by the above method is not accurate. On the other hand, the camera has high requirements on lighting; in a night environment the dim light lowers the definition of the captured monitoring images, so a target object determined through target recognition may be inaccurate. The target detection result obtained by the above method therefore has low accuracy.
In view of the above problems, an embodiment of the present application provides a target detection method, comprising: acquiring a millimeter wave point cloud and a camera image captured at the same moment; acquiring a point cloud detection result according to the millimeter wave point cloud; performing target detection on the camera image to obtain a camera detection result; matching the point cloud detection result with the camera detection result, and fusing the successfully matched point cloud and camera detection results to obtain a fusion result; and obtaining a target detection result according to the unmatched point cloud detection result, the unmatched camera detection result, and the fusion result. In the embodiments of the application, the target detection result is determined based on the fusion result together with the unmatched point cloud and camera detection results, which improves the accuracy of the target detection result while making full use of the acquired data and without adding a large amount of computation.
In the following, a brief description will be given of an implementation environment related to the target detection method provided in the embodiments of the present application.
Fig. 1 shows a roadside sensing system to which the above target detection method is applied. As shown in fig. 1, the roadside sensing system may include a millimeter wave radar 101, a camera 102, and a roadside unit (RSU) 103. The dotted lines represent lane lines. Generally, the millimeter wave radar 101 and the camera 102 are mounted 4-6 meters above the ground, and the greater height enlarges their coverage. The detection areas of the millimeter wave radar 101 and the camera 102 overlap, and the overlapping range is referred to as the overlap area. The millimeter wave radar has the advantages of long-range distance measurement and accurate speed measurement, but its ability to classify targets is weak, and it sometimes even splits one target into two; the video stream collected by the camera can be used to identify the category of a target, but its ability to measure the distance and speed of a target is weak.
Generally, the detection range of the millimeter wave radar is about 30-300 meters from its position, and the detection range of the camera is about 4-120 meters from its position.
Alternatively, the camera may be a bullet camera, a dome camera, or a speed dome camera, and the millimeter wave radar may be a 77 GHz or 24 GHz millimeter wave radar.
The millimeter wave radar 101 and the camera 102 may communicate with the roadside unit 103 in a wired or wireless manner, and the roadside unit 103 may be one server or a server cluster composed of multiple servers.
The roadside sensing system can perform fusion processing on the acquired millimeter wave point cloud data and the camera images or video stream. The fusion processing requires the following operations:
1. System installation and initialization: after the millimeter wave radar and the camera are installed, their sampling frequencies need to be adjusted so that they are the same or approximately the same; for example, the difference between the sampling frequencies of the millimeter wave radar and the camera may be adjusted to be less than a frequency threshold. This makes time synchronization of the data collected by the millimeter wave radar and the camera possible.
2. Data time synchronization: after the sampling frequencies of the millimeter wave radar and the camera are adjusted, time synchronization between them proceeds as follows: acquire, in real time, the millisecond-accurate timestamps t1 and t2 of the millimeter wave radar and the camera (if a timestamp cannot be acquired directly, it can be converted onto a common time axis: with reference time axis t and time-axis offset Δt, the converted time is t' = t - Δt); then calculate the absolute timestamp difference |t1 - t2| and compare it with a set fixed value δ, for example δ = 10 ms; if the absolute value is smaller than δ, the two frames of data are considered to have been acquired at the same moment; if it is larger than δ, the next frame is searched at the given frame rate for time matching. It should be noted that this embodiment merely provides one scheme for time synchronization; other manners may also be used in actual data processing, which is not limited in this application.
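For illustration, the timestamp matching described above can be sketched in code as follows. This is a minimal sketch rather than the patent's implementation: the Frame container, the field names, and the forward search over frames are assumptions; only the |t1 - t2| < δ test with δ = 10 ms comes from the text.

```python
from dataclasses import dataclass
from typing import List, Tuple

DELTA_MS = 10.0  # the fixed threshold delta from the example above (10 ms)

@dataclass
class Frame:
    timestamp_ms: float  # millisecond-accurate timestamp on the reference time axis
    data: object         # point cloud or image payload

def to_reference_axis(t_ms: float, delta_t_ms: float) -> float:
    """t' = t - delta_t: convert a sensor timestamp to the reference time axis."""
    return t_ms - delta_t_ms

def match_frames(radar: List[Frame], camera: List[Frame]) -> List[Tuple[Frame, Frame]]:
    """Pair radar and camera frames whose timestamps differ by less than delta."""
    pairs = []
    j = 0
    for r in radar:
        # Skip camera frames that are too old for this radar frame.
        while j < len(camera) and camera[j].timestamp_ms < r.timestamp_ms - DELTA_MS:
            j += 1
        if j < len(camera) and abs(camera[j].timestamp_ms - r.timestamp_ms) < DELTA_MS:
            pairs.append((r, camera[j]))  # considered acquired at the same moment
    return pairs
```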
3. System calibration: in one embodiment, a system calibration method is provided to obtain the calibration parameters of the roadside sensing system. First, the internal parameters of the camera are acquired. The camera intrinsics define how a three-dimensional point in the camera coordinate system is projected onto the imaging plane, and mainly comprise an intrinsic matrix and distortion coefficients.
In the standard pinhole model, the intrinsic matrix can be expressed as:

$$K = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

Each value in the intrinsic matrix depends only on the internal parameters of the camera and does not change with the position of the object. Here f denotes the focal length, dx and dy denote the physical size of a single pixel in millimeters in the horizontal and vertical directions, and u0 and v0 denote the horizontal and vertical pixel offsets between the center of the image coordinate system and the pixel coordinate of the image origin.
The distortion coefficients are used to eliminate the distortion introduced by the camera's convex lens and mainly cover radial distortion and tangential distortion. Radial distortion is caused by the lens shape in the manufacturing process and includes barrel distortion and pincushion distortion, as shown in fig. 2.
The radial distortion correction can be expressed as:

$$x_{\mathrm{corr}} = x\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right), \qquad y_{\mathrm{corr}} = y\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\right)$$

The tangential distortion correction can be expressed as:

$$x_{\mathrm{corr}} = x + \left[2 p_1 x y + p_2 \left(r^2 + 2 x^2\right)\right], \qquad y_{\mathrm{corr}} = y + \left[p_1 \left(r^2 + 2 y^2\right) + 2 p_2 x y\right]$$

where k1, k2, k3, p1, and p2 are the distortion parameters, (x, y) are normalized image coordinates, and r^2 = x^2 + y^2.
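For illustration, the following minimal sketch applies the radial and tangential corrections above to a single point; it assumes the input coordinates are already normalized, and the function name is ours.

```python
def correct_distortion(x: float, y: float,
                       k1: float, k2: float, k3: float,
                       p1: float, p2: float) -> tuple:
    """Apply the radial + tangential distortion model to normalized
    image coordinates (x, y), with r^2 = x^2 + y^2."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_corr = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_corr = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_corr, y_corr
```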
According to the intrinsic matrix of the camera and the initial joint calibration parameters of the millimeter wave radar relative to the camera, the point cloud corresponding to the millimeter wave point cloud data is mapped onto the image to obtain a mapped point cloud. The overlap between the point cloud target frame of a calibration object on the image and the image-recognition target frame is then calculated, where the calibration object is at least one target in the overlapping detection area of the camera and the millimeter wave radar, the point cloud target frame is a target frame drawn on the image based on the calibration object in the mapped point cloud, and the image-recognition target frame is the target frame of the calibration object obtained by performing image recognition on the image. The initial joint calibration parameters are adjusted based on the overlap corresponding to each calibration object until the overlap meets a preset threshold, and the adjusted joint calibration parameters are output as the target joint calibration parameters of the millimeter wave radar relative to the camera. The target joint calibration parameters are used to spatially synchronize the millimeter wave point cloud data with the camera data.
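A minimal sketch of the spatial synchronization these calibration parameters enable is given below: radar points are transformed into the camera frame using an assumed rotation R and translation t (standing in for the joint calibration parameters) and then projected with the intrinsic matrix K. Array shapes and names are illustrative assumptions, and lens distortion is ignored.

```python
import numpy as np

def project_radar_points(points_radar: np.ndarray, R: np.ndarray,
                         t: np.ndarray, K: np.ndarray) -> np.ndarray:
    """points_radar: (N, 3) XYZ in the radar frame; R: (3, 3); t: (3,);
    K: (3, 3) intrinsic matrix. Returns (N, 2) pixel coordinates."""
    pts_cam = points_radar @ R.T + t          # radar frame -> camera frame
    pts_img = pts_cam @ K.T                   # pinhole projection (no distortion)
    return pts_img[:, :2] / pts_img[:, 2:3]   # perspective divide -> (u, v)
```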
Referring to fig. 3, a flowchart of an object detection method provided in an embodiment of the present application is shown, where the object detection method may be applied to the roadside unit shown in fig. 1, and the object detection method includes:
step 301, the road side unit acquires a millimeter wave point cloud and a camera image at the same time.
In this embodiment of the application, the millimeter wave radar in the roadside sensing system scans its detection area to obtain a millimeter wave point cloud and sends the millimeter wave point cloud to the roadside unit. The millimeter wave point cloud is point cloud data that contains data for a plurality of points on the targets in the millimeter wave radar detection area, and may include the three-dimensional coordinates of each point, color information or reflection intensity information, and the like.
The camera detects the camera detection area to obtain a camera image, and the camera image is sent to the road side unit.
And step 302, the road side unit acquires a point cloud detection result according to the millimeter wave point cloud.
In the millimeter wave point cloud, the targets in the millimeter wave radar detection area are represented by a plurality of points. In this embodiment of the application, the roadside unit can classify the points included in the millimeter wave point cloud to determine a plurality of point sets, where each point set corresponds to one target in the millimeter wave radar detection area and the points it includes are the points constituting that target. The roadside unit can determine the target information of the target corresponding to each point set according to the positions, course angles, and speeds of the points in the set, thereby obtaining the point cloud detection result. The point cloud detection result includes the target information of the targets in the millimeter wave radar detection area, where the target information includes the position, course angle, and speed of each target.
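One concrete way to realize this grouping is sketched below: the points are clustered spatially, and each cluster's kinematics are averaged into one target. The choice of DBSCAN, its parameters, and the per-point field layout are illustrative assumptions; the patent does not prescribe a particular classification algorithm.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def point_cloud_detections(points: np.ndarray) -> list:
    """points: (N, 5) array of [x, y, z, course_angle, speed] per point.
    Returns one detection dict per point set (cluster)."""
    labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(points[:, :3])
    detections = []
    for label in set(labels) - {-1}:              # label -1 marks noise points
        cluster = points[labels == label]
        detections.append({
            "position": cluster[:, :3].mean(axis=0),   # one target per point set
            "course_angle": float(cluster[:, 3].mean()),
            "speed": float(cluster[:, 4].mean()),
        })
    return detections
```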
And step 303, the road side unit performs target detection on the camera image to obtain a camera detection result.
In this embodiment of the application, the roadside unit may process the camera image with an image detection algorithm and an image tracking algorithm. Optionally, the roadside unit may perform target detection on the camera image using an image target detection algorithm, for example YOLOv3, to obtain the camera detection result. The camera detection result includes the position, category, and category confidence of each target.
And 304, the road side unit matches the point cloud detection result with the camera detection result, and performs fusion processing on the successfully matched point cloud detection result and the camera detection result to obtain a fusion result.
In this embodiment, the road side unit matching the point cloud detection result with the camera detection result means matching a plurality of targets included in the point cloud detection result with a plurality of targets included in the camera detection result.
And 305, the road side unit obtains a target detection result according to the point cloud detection result which is not successfully matched, the camera detection result and the fusion result.
In this embodiment of the application, there are cases in the matching process where a target included in the point cloud detection result matches no target included in the camera detection result, which indicates that the target was detected by the millimeter wave radar but not by the camera. For such a target, its target information is obtained from the point cloud detection result.
Correspondingly, there are cases where a target included in the camera detection result matches no target included in the point cloud detection result, which indicates that the target was detected by the camera but not by the millimeter wave radar. For such a target, its target information is obtained from the camera detection result.
In this embodiment of the application, the roadside unit can combine the unmatched point cloud detection results, the unmatched camera detection results, and the fusion result into the target detection result. Optionally, the target detection result may be output according to the spatial positions of the unmatched point cloud detection results, the unmatched camera detection results, and the fusion result.
The target detection method provided by the embodiments of the application acquires a millimeter wave point cloud and a camera image captured at the same moment; acquires a point cloud detection result according to the millimeter wave point cloud; performs target detection on the camera image to obtain a camera detection result; matches the point cloud detection result with the camera detection result and fuses the successfully matched point cloud and camera detection results to obtain a fusion result; and obtains a target detection result according to the unmatched point cloud detection result, the unmatched camera detection result, and the fusion result. Because the target detection result is determined based on the fusion result together with the unmatched point cloud and camera detection results, the acquired sensing data is fully utilized and a detection result covering a larger detection area is obtained while preserving the accuracy advantage of fusing multi-sensor data.
In an optional implementation manner, as shown in fig. 4, the technical process of the road side unit obtaining the point cloud detection result according to the millimeter wave point cloud includes the following steps:
step 401, the road side unit maps the millimeter wave point cloud to a pixel coordinate system of the camera image by using the calibration parameters to obtain a millimeter wave point cloud to be processed.
In the embodiment of the application, after the road side unit obtains the millimeter wave point cloud, the three-dimensional coordinates of each point included in the millimeter wave point cloud can be mapped to the pixel coordinate system of the camera image by using the predetermined calibration parameters, so as to obtain the two-dimensional coordinates of each point included in the millimeter wave point cloud, and the two-dimensional coordinates of each point form the millimeter wave point cloud to be processed.
And step 402, the road side unit performs target detection on the millimeter wave point cloud to be processed to obtain a point cloud detection result.
In the embodiment of the application, the road side unit can classify all the points in the millimeter wave point cloud to be processed according to the two-dimensional coordinates of all the points to obtain a plurality of point sets, and each point set corresponds to a target in the millimeter wave radar detection area. The road side unit can determine the two-dimensional coordinates of the target corresponding to each point set according to the two-dimensional coordinates of the points included in each point set. For example, the two-dimensional coordinates of the target are determined from an average of the two-dimensional coordinates of the plurality of points included in the point set, or the two-dimensional coordinates of the target are determined from the two-dimensional coordinates of the center point of the plurality of points included in the point set.
Meanwhile, the road side unit can determine the target information of the target according to the two-dimensional coordinates, the course angle and the speed of the target corresponding to the point set, so that a point cloud detection result is obtained.
In an optional implementation manner, as shown in fig. 5, the technical process of the roadside unit obtaining the point cloud detection result according to the millimeter wave point cloud includes the following steps:
step 501, the road side unit performs target detection on the millimeter wave point cloud to obtain an initial point cloud detection result.
In the embodiment of the application, the road side unit can classify each point according to the three-dimensional coordinates of each point included in the millimeter wave point cloud to obtain a plurality of point sets, each point set corresponds to a target in the millimeter wave radar detection area, and the road side unit can determine the three-dimensional coordinates of the target corresponding to each point set according to the three-dimensional coordinates of the point included in each point set. For example, the three-dimensional coordinates of the target are determined from an average of the three-dimensional coordinates of the plurality of points included in the point set, or the three-dimensional coordinates of the target are determined from the three-dimensional coordinates of the center point of the plurality of points included in the point set.
The road side unit can determine the course angle and the speed of the target corresponding to the point set according to the course angle and the speed of the points included in the point set.
In the embodiment of the application, the road side unit can obtain the target information of the target according to the three-dimensional coordinates of each target and the course angle and the speed of the target. In the embodiment of the application, the target information of a plurality of targets forms the initial point cloud detection result.
Optionally, in this embodiment of the application, the roadside unit processes the millimeter wave point cloud using a digital beam forming algorithm of the phased array radar to obtain the initial point cloud detection result, where the initial point cloud detection result includes the course angle, speed, and position of each target. The millimeter wave radar transmits electromagnetic waves that scan the entire millimeter wave radar detection area and receives the echo signals reflected by all targets in the detection area. After receiving the echo signals, the millimeter wave radar first performs high-speed signal acquisition and analog-to-digital conversion to obtain the millimeter wave point cloud. On receiving the millimeter wave point cloud, the roadside unit can perform direction-of-arrival estimation on each target using the digital beam forming algorithm of the phased array radar to measure the target's azimuth angle; after the azimuth angle is determined, beam synthesis is performed according to the azimuth angle to obtain a useful signal, and digital signal processing of the useful signal yields the course angle and speed of the target.
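As a simplified illustration of such processing, the sketch below performs conventional digital beam forming for a uniform linear array: it scans candidate azimuths and takes the angle of maximum beamformed output power as the direction-of-arrival estimate. The array geometry, half-wavelength element spacing, and scan grid are assumptions not specified in the patent.

```python
import numpy as np

def steering_vector(theta: float, n_elements: int, d_over_lambda: float = 0.5) -> np.ndarray:
    """Steering vector of a uniform linear array for azimuth theta (radians)."""
    n = np.arange(n_elements)
    return np.exp(-2j * np.pi * d_over_lambda * n * np.sin(theta))

def estimate_azimuth(snapshots: np.ndarray) -> float:
    """snapshots: (n_elements, n_samples) complex echo data after A/D conversion.
    Returns the azimuth (degrees) whose beamformed output power is maximal."""
    n_elements = snapshots.shape[0]
    grid = np.deg2rad(np.linspace(-60.0, 60.0, 241))   # candidate azimuths
    powers = [np.mean(np.abs((steering_vector(th, n_elements).conj() / n_elements)
                             @ snapshots) ** 2)         # beam synthesis per angle
              for th in grid]
    return float(np.rad2deg(grid[int(np.argmax(powers))]))
```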
Step 502, the road side unit converts the initial point cloud detection result to a pixel coordinate system of the camera image by using the calibration parameters to obtain a point cloud detection result.
In the embodiment of the application, the road side unit may map the three-dimensional coordinates of each target included in the initial point cloud detection result to a pixel coordinate system of the camera image by using a predetermined calibration parameter, so as to obtain the two-dimensional coordinates of each target, and obtain the point cloud detection result.
Optionally, as shown in fig. 6, the process of the road side unit matching the point cloud detection result with the camera detection result may include the following steps:
step 601, the road side unit obtains a point cloud positioning frame included in a point cloud detection result.
In the embodiment of the present application, the point cloud positioning box refers to a minimum bounding box of each target included in the point cloud detection result.
For each target, the road side unit may determine a bounding box of the point set according to the three-dimensional coordinates of the points included in the point set corresponding to the target in the point cloud detection result, and determine the bounding box of the point set as the point cloud positioning frame of the target.
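A minimal sketch of deriving such a positioning frame as the axis-aligned minimum bounding box of a target's projected point set is given below; the (u, v) point layout and the (u_min, v_min, u_max, v_max) box format are illustrative assumptions.

```python
import numpy as np

def point_cloud_box(points_uv: np.ndarray) -> tuple:
    """points_uv: (N, 2) pixel coordinates of one target's point set.
    Returns the minimum bounding box (u_min, v_min, u_max, v_max)."""
    u_min, v_min = points_uv.min(axis=0)
    u_max, v_max = points_uv.max(axis=0)
    return float(u_min), float(v_min), float(u_max), float(v_max)
```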
Step 602, the roadside unit acquires a camera positioning frame included in the camera detection result.
The camera positioning frame refers to a minimum bounding box of each target in the camera image.
In the embodiment of the application, the road side unit may determine the camera positioning frames of the targets included in the camera detection result through an image detection algorithm.
Step 603, the roadside unit calculates the intersection-over-union (IoU) of each point cloud positioning frame with each camera positioning frame, and if the IoU of a point cloud positioning frame and a camera positioning frame is greater than the IoU threshold, it determines that the point cloud positioning frame and the camera positioning frame are successfully matched.
A successful match indicates that the targets corresponding to the point cloud positioning frame and the camera positioning frame are the same target.
In this embodiment of the application, the roadside unit can calculate the IoU of the areas of every camera positioning frame and every point cloud positioning frame to obtain a plurality of IoU results. Each IoU result is then compared with the IoU threshold; if an IoU result is greater than the threshold, the point cloud positioning frame and the camera positioning frame corresponding to that result are successfully matched, which indicates that the targets corresponding to the two frames are the same target.
If an IoU result is smaller than the IoU threshold, the point cloud positioning frame and the camera positioning frame corresponding to that result fail to match.
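The IoU computation and thresholded matching can be sketched as follows, reusing the box format above. The threshold value of 0.5 is an illustrative assumption; the patent leaves the IoU threshold open.

```python
def iou(a: tuple, b: tuple) -> float:
    """Intersection-over-union of two boxes (u_min, v_min, u_max, v_max)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0.0 else 0.0

def match_boxes(cloud_boxes: list, camera_boxes: list, threshold: float = 0.5) -> list:
    """Return index pairs (i, j) whose IoU exceeds the threshold, i.e. point
    cloud and camera detections judged to be the same target."""
    return [(i, j)
            for i, cb in enumerate(cloud_boxes)
            for j, kb in enumerate(camera_boxes)
            if iou(cb, kb) > threshold]
```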
In an optional implementation manner, after matching, the successfully matched point cloud detection results and camera detection results are fused as follows:
The roadside unit obtains the target information of the corresponding target from the point cloud detection result, namely its course angle, speed, and position, and obtains the target information of the corresponding target from the camera detection result, namely its position, category, and category confidence. The two sets of target information are combined to obtain the fusion result. The fusion result includes the target information of each successfully matched target; this target information consists of motion features and category features, where the motion features include the course angle, speed, and position, and the category features include the category, category confidence, and color information.
In an embodiment of the present application, as shown in fig. 7, a technical solution is further provided for obtaining the target detection result according to the unmatched point cloud detection results, the unmatched camera detection results, and the fusion result, comprising the following steps:
and step 701, the road side unit takes the point cloud detection result which is not successfully matched as the detection result of the first area.
The first area is the area that is within the detection area of the millimeter wave radar but not within the detection area of the camera. A target located in the first area can be detected by the millimeter wave radar but not by the camera, and for such a target the course angle, speed, and position can be obtained from the point cloud detection result. The course angles, speeds, and positions of the targets located in the first area are taken as the detection result of the first area.
In step 702, the road side unit uses the fusion result as the detection result of the overlap region.
The overlapping area is the overlapping portion of the millimeter wave radar detection area and the camera detection area, and a target located in the overlapping area can be detected by both the millimeter wave radar and the camera.
In step 703, the roadside unit uses the camera detection result that is not successfully matched as the detection result of the second region.
The second area is the area that is not within the detection area of the millimeter wave radar but is within the detection area of the camera. A target in the second area can be detected by the camera but not by the millimeter wave radar, and for such a target the position, category, and category confidence can be obtained from the camera detection result. The positions, categories, and category confidences of the targets located in the second area are taken as the detection result of the second area.
Step 704, the roadside unit concatenates the detection result of the first region, the detection result of the overlapping region, and the detection result of the second region, and outputs a target detection result.
In this embodiment of the application, the roadside unit may splice the detection result of the first region, the detection result of the overlapping region, and the detection result of the second region according to the positions of the first region, the overlapping region, and the second region, to obtain the target detection result.
In this embodiment of the application, the detection results of the different areas are determined separately and spliced together to form the target detection result, which improves the accuracy of the target detection result.
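For illustration, the splicing of the three region results can be sketched as a simple concatenation, with each detection tagged by its region; the tagging scheme and containers are assumptions, while the three-region split follows steps 701 to 704.

```python
def assemble_target_detection_result(first_region: list,
                                     overlap_region: list,
                                     second_region: list) -> list:
    """Splice per-region detections into the overall target detection result:
    radar-only (first region), fused (overlap), camera-only (second region)."""
    result = []
    for region, detections in (("first", first_region),
                               ("overlap", overlap_region),
                               ("second", second_region)):
        for det in detections:
            result.append({**det, "region": region})
    return result
```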
Referring to fig. 8, a block diagram of a target detection apparatus provided in an embodiment of the present application is shown. The target detection apparatus may be configured in a roadside unit in the implementation environment shown in fig. 1, and as shown in fig. 8, may include an obtaining module 801, a first detection module 802, a second detection module 803, a fusion module 804, and a detection result determining module 805, where:
an obtaining module 801, configured to obtain a millimeter wave point cloud and a camera image at the same time;
the first detection module 802 is configured to obtain a point cloud detection result according to the millimeter wave point cloud;
a second detection module 803, configured to perform target detection on the camera image to obtain a camera detection result;
the fusion module 804 is used for matching the point cloud detection result with the camera detection result, and fusing the successfully matched point cloud detection result with the camera detection result to obtain a fusion result;
and a detection result determining module 805, configured to obtain a target detection result according to the unmatched point cloud detection result, the unmatched camera detection result, and the fusion result.
In one embodiment of the present application, the first detection module 802 is further configured to:
mapping the millimeter wave point cloud to a pixel coordinate system of the camera image by using the calibration parameters to obtain a millimeter wave point cloud to be processed;
and carrying out target detection on the millimeter wave point cloud to be processed to obtain a point cloud detection result.
In one embodiment of the present application, the first detection module 802 is further configured to:
performing target detection on the millimeter wave point cloud to obtain an initial point cloud detection result;
and converting the initial point cloud detection result into a pixel coordinate system of the camera image by using the calibration parameters to obtain a point cloud detection result.
In one embodiment of the present application, the first detection module 802 is further configured to:
and processing the millimeter wave point cloud by using a digital beam forming algorithm of the phased array radar to obtain an initial point cloud detection result, wherein the initial point cloud detection result comprises a course angle, a speed and a position of the target.
In an embodiment of the present application, the second detection module 803 is further configured to:
and carrying out target detection on the camera image by using YOLOv3, and outputting a camera detection result, wherein the camera detection result comprises the position, the category and the category confidence of the target.
In an embodiment of the present application, the fusion module 804 is further configured to:
acquiring a point cloud positioning frame included in a point cloud detection result;
acquiring a camera positioning frame included in a camera detection result;
calculating the intersection-over-union (IoU) of each point cloud positioning frame with each camera positioning frame; and if the IoU of a point cloud positioning frame and a camera positioning frame is greater than an IoU threshold, determining that the point cloud positioning frame and the camera positioning frame are successfully matched, where a successful match indicates that the targets corresponding to the point cloud positioning frame and the camera positioning frame are the same target.
In an embodiment of the present application, the fusion module 804 is further configured to:
acquiring motion characteristics of the corresponding target based on the point cloud detection result, wherein the motion characteristics comprise a course angle, a speed and a position;
and acquiring category characteristics of the corresponding target based on the camera detection result, wherein the category characteristics comprise categories, category confidence degrees and color information.
In an embodiment of the present application, the detection result determining module 805 is further configured to:
taking the point cloud detection result which is not successfully matched as a detection result of a first area, wherein the first area is an area which is in a detection area of the millimeter wave radar and is not in the detection area of the camera, and the detection area of the millimeter wave radar and the detection area of the camera have an overlapping area;
taking the fusion result as the detection result of the overlapping area;
taking the camera detection result which is not successfully matched as the detection result of a second area, wherein the second area is an area which is not in the detection area of the millimeter wave radar and is in the detection area of the camera;
and splicing the detection result of the first area, the detection result of the overlapping area and the detection result of the second area, and outputting a target detection result.
For specific limitations of the target detection apparatus, reference may be made to the above limitations of the target detection method, which are not repeated here. Each module in the target detection apparatus may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment of the present application, a computer device is provided, and the computer device may be a server, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The computer program is executed by a processor to implement a method of object detection.
Those skilled in the art will appreciate that the structure shown in fig. 9 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment of the present application, there is provided a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring a millimeter wave point cloud and a camera image at the same moment;
acquiring a point cloud detection result according to the millimeter wave point cloud;
carrying out target detection on the camera image to obtain a camera detection result;
matching the point cloud detection result with the camera detection result, and fusing the successfully matched point cloud detection result with the camera detection result to obtain a fusion result;
and obtaining a target detection result according to the unmatched point cloud detection result, the unmatched camera detection result, and the fusion result.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
mapping the millimeter wave point cloud to a pixel coordinate system of the camera image by using the calibration parameters to obtain a millimeter wave point cloud to be processed;
and carrying out target detection on the millimeter wave point cloud to be processed to obtain a point cloud detection result.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
performing target detection on the millimeter wave point cloud to obtain an initial point cloud detection result;
and converting the initial point cloud detection result into a pixel coordinate system of the camera image by using the calibration parameters to obtain a point cloud detection result.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
and processing the millimeter wave point cloud by using a digital beam forming algorithm of the phased array radar to obtain an initial point cloud detection result, wherein the initial point cloud detection result comprises a course angle, a speed and a position of the target.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
and carrying out target detection on the camera image by using YOLOv3, and outputting a camera detection result, wherein the camera detection result comprises the position, the category and the category confidence of the target.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
acquiring a point cloud positioning frame included in a point cloud detection result;
acquiring a camera positioning frame included in a camera detection result;
calculating the intersection-over-union (IoU) of each point cloud positioning frame with each camera positioning frame; and if the IoU of a point cloud positioning frame and a camera positioning frame is greater than an IoU threshold, determining that the point cloud positioning frame and the camera positioning frame are successfully matched, where a successful match indicates that the targets corresponding to the point cloud positioning frame and the camera positioning frame are the same target.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
acquiring motion characteristics of the corresponding target based on the point cloud detection result, wherein the motion characteristics comprise a course angle, a speed and a position;
and acquiring category characteristics of the corresponding target based on the camera detection result, wherein the category characteristics comprise categories, category confidence degrees and color information.
In one embodiment of the application, the processor when executing the computer program further performs the steps of:
taking the point cloud detection result which is not successfully matched as a detection result of a first area, wherein the first area is an area which is in a detection area of the millimeter wave radar and is not in the detection area of the camera, and the detection area of the millimeter wave radar and the detection area of the camera have an overlapping area;
taking the fusion result as the detection result of the overlapping area;
taking the camera detection result which is not successfully matched as the detection result of a second area, wherein the second area is an area which is not in the detection area of the millimeter wave radar and is in the detection area of the camera;
and splicing the detection result of the first area, the detection result of the overlapping area and the detection result of the second area, and outputting a target detection result.
The implementation principle and technical effect of the computer device provided in the embodiment of the present application are similar to those of the method embodiment described above, and are not described herein again.
In an embodiment of the application, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of:
acquiring a millimeter wave point cloud and a camera image at the same moment;
acquiring a point cloud detection result according to the millimeter wave point cloud;
carrying out target detection on the camera image to obtain a camera detection result;
matching the point cloud detection result with the camera detection result, and fusing the successfully matched point cloud detection result with the camera detection result to obtain a fusion result;
and obtaining a target detection result according to the unmatched point cloud detection result, the unmatched camera detection result, and the fusion result.
In one embodiment of the application, the computer program, when executed by the processor, may further implement the steps of:
mapping the millimeter wave point cloud to a pixel coordinate system of the camera image by using the calibration parameters to obtain a millimeter wave point cloud to be processed;
and carrying out target detection on the millimeter wave point cloud to be processed to obtain a point cloud detection result.
In one embodiment of the application, the computer program, when executed by the processor, may further implement the steps of:
performing target detection on the millimeter wave point cloud to obtain an initial point cloud detection result;
and converting the initial point cloud detection result into a pixel coordinate system of the camera image by using the calibration parameters to obtain a point cloud detection result.
In one embodiment of the application, the computer program, when executed by the processor, may further implement the steps of:
and processing the millimeter wave point cloud by using a digital beam forming algorithm of the phased array radar to obtain an initial point cloud detection result, wherein the initial point cloud detection result comprises a course angle, a speed and a position of the target.
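The application names a digital beam forming algorithm without spelling it out; the sketch below is a minimal delay-and-sum beamformer for a uniform linear array, with half-wavelength element spacing and complex snapshots assumed. A real radar pipeline would pair this angular spectrum with range and Doppler processing to recover the course angle, speed and position listed above.

```python
import numpy as np

# Minimal delay-and-sum digital beamforming sketch for a uniform linear
# array; geometry, spacing and snapshot format are assumptions.

def dbf_power_spectrum(snapshots, n_elements, d_over_lambda=0.5,
                       angles_deg=np.arange(-60, 61)):
    """snapshots: (n_elements, n_samples) complex element outputs.
    Returns average beamformed power per candidate steering angle."""
    angles = np.deg2rad(np.asarray(angles_deg, dtype=float))
    n = np.arange(n_elements)[:, None]
    # steering matrix: one column of phase shifts per candidate direction
    steer = np.exp(-2j * np.pi * d_over_lambda * n * np.sin(angles)[None, :])
    beams = steer.conj().T @ snapshots        # (n_angles, n_samples)
    return (np.abs(beams) ** 2).mean(axis=1)  # power versus angle
```

Peaks of the returned spectrum give candidate target bearings; thresholding those peaks yields the detected targets.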
In one embodiment of the application, the computer program, when executed by the processor, may further implement the steps of:
and carrying out target detection on the camera image by using YOLOv3, and outputting a camera detection result, wherein the camera detection result comprises the position, the category and the category confidence of the target.
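YOLOv3 is a published detector, so the camera side can be sketched concretely; the version below assumes an OpenCV build with the dnn module and standard Darknet yolov3.cfg / yolov3.weights files (the file paths are placeholders), and parses the usual Darknet output layout of normalized (center-x, center-y, width, height, objectness, per-class scores).

```python
import cv2
import numpy as np

# Hedged YOLOv3 camera-side detection sketch via OpenCV's dnn module;
# config/weight paths are placeholders, thresholds are illustrative.

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

def detect_camera_image(image, conf_threshold=0.5):
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    detections = []
    for out in outputs:
        for row in out:
            scores = row[5:]
            cls = int(np.argmax(scores))
            conf = float(scores[cls])
            if conf > conf_threshold:
                cx, cy, bw, bh = row[:4] * np.array([w, h, w, h])
                detections.append({
                    "box": (cx - bw / 2, cy - bh / 2,
                            cx + bw / 2, cy + bh / 2),
                    "category": cls,                 # class index
                    "category_confidence": conf,
                })
    return detections
```

A production system would typically add non-maximum suppression (for example via cv2.dnn.NMSBoxes) before reporting the camera detection result.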
In one embodiment of the application, the computer program, when executed by the processor, may further implement the steps of:
acquiring a point cloud positioning frame included in a point cloud detection result;
acquiring a camera positioning frame included in a camera detection result;
calculating the intersection ratio of each point cloud positioning frame and each camera positioning frame; and if the intersection ratio of a point cloud positioning frame and a camera positioning frame is greater than the intersection ratio threshold, determining that the point cloud positioning frame and the camera positioning frame are successfully matched, wherein a successful match indicates that the point cloud positioning frame and the camera positioning frame correspond to the same target.
In one embodiment of the application, the computer program, when executed by the processor, may further implement the steps of:
acquiring motion characteristics of the corresponding target based on the point cloud detection result, wherein the motion characteristics comprise a course angle, a speed and a position;
and acquiring category characteristics of the corresponding target based on the camera detection result, wherein the category characteristics comprise a category, a category confidence and color information.
In one embodiment of the application, the computer program, when executed by the processor, may further implement the steps of:
taking the point cloud detection result which is not successfully matched as a detection result of a first area, wherein the first area is an area which is in a detection area of the millimeter wave radar and is not in the detection area of the camera, and the detection area of the millimeter wave radar and the detection area of the camera have an overlapping area;
taking the fusion result as the detection result of the overlapping area;
taking the camera detection result which is not successfully matched as the detection result of a second area, wherein the second area is an area which is not in the detection area of the millimeter wave radar and is in the detection area of the camera;
and splicing the detection result of the first area, the detection result of the overlapping area and the detection result of the second area, and outputting a target detection result.
The implementation principle and technical effect of the computer-readable storage medium provided in the embodiment of the present application are similar to those of the method embodiment described above, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program instructing relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the embodiments described above may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any combination of them that contains no contradiction should be considered within the scope of this specification.
The embodiments described above express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the claims. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. A method of object detection, the method comprising:
acquiring a millimeter wave point cloud and a camera image at the same moment;
acquiring a point cloud detection result according to the millimeter wave point cloud;
carrying out target detection on the camera image to obtain a camera detection result;
matching the point cloud detection result with the camera detection result, and fusing the successfully matched point cloud detection result with the camera detection result to obtain a fusion result;
and obtaining a target detection result according to the point cloud detection result which is not successfully matched, the camera detection result and the fusion result.
2. The method of claim 1, wherein obtaining point cloud detection results from the millimeter wave point cloud comprises:
mapping the millimeter wave point cloud to a pixel coordinate system of the camera image by using calibration parameters to obtain the millimeter wave point cloud to be processed;
and carrying out target detection on the millimeter wave point cloud to be processed to obtain a point cloud detection result.
3. The method of claim 1, wherein obtaining point cloud detection results from the millimeter wave point cloud comprises:
carrying out target detection on the millimeter wave point cloud to obtain an initial point cloud detection result;
and converting the initial point cloud detection result to a pixel coordinate system of a camera image by using the calibration parameters to obtain the point cloud detection result.
4. The method of claim 3, wherein performing target detection on the millimeter wave point cloud to obtain an initial point cloud detection result comprises:
and processing the millimeter wave point cloud by using a digital beam forming algorithm of the phased array radar to obtain an initial point cloud detection result, wherein the initial point cloud detection result comprises a course angle, a speed and a position of a target.
5. The method of claim 1, wherein performing object detection on the camera image to obtain a camera detection result comprises:
and carrying out target detection on the camera image by using an image target detection algorithm, and outputting a camera detection result, wherein the camera detection result comprises the position, the category and the category confidence of a target.
6. The method of claim 1, wherein matching the point cloud detection results with the camera detection results comprises:
acquiring a point cloud positioning frame included in the point cloud detection result;
acquiring a camera positioning frame included in the camera detection result;
calculating the intersection ratio of each point cloud positioning frame and each camera positioning frame; and if the intersection ratio of a point cloud positioning frame and a camera positioning frame is greater than an intersection ratio threshold, determining that the point cloud positioning frame and the camera positioning frame are successfully matched, wherein the successful matching indicates that the point cloud positioning frame and the camera positioning frame correspond to the same target.
7. The method of claim 1, wherein the fusing the successfully matched point cloud detection result with the camera detection result to obtain a fused result, comprises:
acquiring motion characteristics of a corresponding target based on the point cloud detection result, wherein the motion characteristics comprise a course angle, a speed and a position;
and acquiring category characteristics of the corresponding target based on the camera detection result, wherein the category characteristics comprise a category, a category confidence and color information.
8. The method of claim 1, wherein obtaining a target detection result according to the point cloud detection result and the camera detection result that are not successfully matched and the fusion result comprises:
taking the point cloud detection result which is not successfully matched as a detection result of a first area, wherein the first area is an area which is in a detection area of the millimeter wave radar and is not in the detection area of the camera, and the detection area of the millimeter wave radar and the detection area of the camera have an overlapping area;
taking the fusion result as the detection result of the overlapping area;
taking the camera detection result which is not successfully matched as the detection result of a second area, wherein the second area is an area which is not in the detection area of the millimeter wave radar and is in the detection area of the camera;
and splicing the detection result of the first area, the detection result of the overlapping area and the detection result of the second area, and outputting the target detection result.
9. An object detection apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring the millimeter wave point cloud and the camera image at the same moment;
the second acquisition module is used for acquiring a point cloud detection result according to the millimeter wave point cloud;
the detection module is used for carrying out target detection on the camera image to obtain a camera detection result;
the fusion module is used for matching the point cloud detection result with the camera detection result and fusing the successfully matched point cloud detection result with the camera detection result to obtain a fusion result;
and the detection result determining module is used for obtaining a target detection result according to the point cloud detection result which is not successfully matched, the camera detection result and the fusion result.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN202011189874.8A 2020-10-30 2020-10-30 Target detection method, target detection device, computer equipment and storage medium Pending CN114529789A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011189874.8A CN114529789A (en) 2020-10-30 2020-10-30 Target detection method, target detection device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011189874.8A CN114529789A (en) 2020-10-30 2020-10-30 Target detection method, target detection device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114529789A (en) 2022-05-24

Family

ID=81619682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011189874.8A Pending CN114529789A (en) 2020-10-30 2020-10-30 Target detection method, target detection device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114529789A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116757981A (en) * 2023-06-19 2023-09-15 北京拙河科技有限公司 Multi-terminal image fusion method and device

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination