CN115855079A - Time asynchronous perception sensor fusion method - Google Patents

Time asynchronous perception sensor fusion method

Info

Publication number
CN115855079A
Authority
CN
China
Prior art keywords
target
time
laser radar
detection
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211644316.5A
Other languages
Chinese (zh)
Inventor
王文军
孙兆聪
张军贤
戴国琛
张世泽
夏世平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
CRRC Nanjing Puzhen Co Ltd
Original Assignee
Tsinghua University
CRRC Nanjing Puzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University, CRRC Nanjing Puzhen Co Ltd
Priority to CN202211644316.5A
Publication of CN115855079A
Legal status: Pending (current)

Landscapes

  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a time asynchronous perception sensor fusion method. The method performs spatial calibration and synchronization of a binocular camera, a laser radar and a millimeter wave radar and provides active time service to each sensor; acquires target information based on binocular vision detection and tracks targets based on Kalman filtering; establishes a space-time trajectory of target motion based on visual detection from the target's position information at each timestamp and the running speed of the train; acquires the target tracking result of the millimeter wave radar and establishes a target motion space-time trajectory based on millimeter wave detection; predicts the position of each target at the moment the laser radar acquires data, assists the laser radar in narrowing its detection range, and establishes a candidate target output queue; and finally outputs the candidate targets. The method enables time-asynchronous data fusion of the binocular camera, the laser radar and the millimeter wave radar, overcomes the deviation caused by multi-sensor time asynchrony and communication delay, and realizes cooperative work and complementary advantages among different sensors.

Description

Time asynchronous perception sensor fusion method
Technical Field
The invention relates to the technical field of multi-sensor fusion, in particular to a time asynchronous perception sensor fusion method.
Background
With the widespread commercial, military and industrial application of multi-sensor fusion technology, research in this field continues to advance. Multi-sensor fusion can fully exploit the information resources of each sensor and leverage their individual strengths through the complementarity of the data they acquire. Most existing research addresses the synchronous case of multi-sensor fusion: the theory of synchronous fusion assumes that each sensor measures the target at the same time and transmits its data to the fusion center synchronously. In practice, however, the sampling frequencies and inherent communication delays of the sensors differ, which makes the fusion process asynchronous. A sound solution to the asynchrony problem therefore better matches the requirements of practical engineering.
There are already a number of results on asynchronous sensor fusion. One study, on a multi-sensor asynchronous fusion algorithm for an AUV docking navigation system, proposed a multi-scale unscented Kalman asynchronous fusion filtering algorithm that partitions the target's multi-scale information according to sampling rate, establishes a system error model, and fuses multi-sensor data asynchronously across the different scales. Zhao Haifei, addressing the asynchrony of multi-sensor data in the implementation of a multi-sensor asynchronous multi-source integrated navigation algorithm, proposed an asynchronous unequal-interval filtering algorithm that uses the data available from the sensors at the current moment to update the mean square error and the optimal estimate, finally achieving optimal fusion of time-asynchronous sensors. Patent CN114627442A proposes a vehicle-road cooperative target detection method based on the fusion of vehicle-side and roadside sensors: it acquires the vehicle positions detected at the vehicle side and the road side, predicts the position under the vehicle-side sensor's timestamp from the positions detected under the roadside sensor's timestamp, and finally converts the vehicle positions detected at both ends into the same coordinate system to fuse the sensor information. Patent CN109544638A proposes an asynchronous calibration method for sensor fusion, which unifies the data measured by multiple sensors into the same coordinate system through the calibrated transformations between the sensor coordinate systems, ensuring that the information from the sensors can be fused asynchronously. These methods fuse time-asynchronous sensor data well when external changes are small, but their implementation is complex and they adapt poorly to vehicles running at high speed.
To address these problems, the invention provides a time asynchronous perception sensor fusion method that algorithmically resolves the time asynchrony in data acquisition among a binocular camera, a laser radar and a millimeter wave radar, promotes data fusion among the three sensors, and ultimately realizes target detection.
Disclosure of Invention
To address the shortcoming of asynchronous fusion timing in the prior art, the invention provides a time asynchronous perception sensor fusion method. The method solves the problem that the binocular camera, the laser radar and the millimeter wave radar do not share a common data-acquisition time base, enables time-asynchronous data fusion among the three sensors, effectively overcomes the deviation caused by multi-sensor time asynchrony and communication delay, and can be used for target detection of an electronically guided rubber-tyred vehicle during operation.
The technical scheme adopted by the invention is as follows:
a time asynchronous perception sensor fusion method comprises the following steps:
step 1, carrying out spatial calibration and synchronization of the binocular camera, the laser radar and the millimeter wave radar installed on the train, and providing active time service to each sensor;
step 2, acquiring target information of the binocular camera based on binocular vision detection and tracking a target based on Kalman filtering;
step 3, establishing a space-time trajectory of the visually detected target's motion according to the target's position information at each timestamp and the running speed of the train;
step 4, acquiring the target tracking result of the millimeter wave radar, and establishing a space-time trajectory of the target's motion based on millimeter wave detection from the position information at each target timestamp and the running speed of the train;
step 5, predicting the position information of the target at the moment when the laser radar acquires data, assisting the laser radar in narrowing its detection range and establishing a candidate target output queue;
and 6, outputting the candidate target.
Further, in step 1, the binocular camera, the laser radar and the millimeter wave radar are calibrated in spatial position by extracting feature sets, the data acquired in their respective coordinate systems are converted into the train coordinate system, and the internal clocks of the binocular camera, the laser radar and the millimeter wave radar are unified by providing active time service to each sensor.
Further, the method also comprises the step of performing motion compensation on the three-dimensional point cloud data acquired by the laser radar.
Further, the motion compensation specifically comprises the following steps:
according to the timestamp data of the IMU and the laser radar, obtaining the vehicle angular velocity data and vehicle acceleration data from those IMU samples whose time difference from the three-dimensional point cloud data collected by the laser radar at a given moment is smaller than a set value;
acquiring attitude and running information of the vehicle according to the IMU data, and calculating a compensation transformation matrix of the three-dimensional point cloud data at any moment relative to the scanning moment according to the attitude and running information;
the position of each laser spot is corrected using the compensation transformation matrix.
Further, in step 2, a YOLOv7-based target detection network detects the left color image acquired by the binocular camera to obtain the visual target information;
a stereo matching algorithm performs pixel matching on the left and right images acquired by the binocular camera to obtain a disparity map, and the target's depth information and its position in three-dimensional space are obtained by applying the two-dimensional-to-three-dimensional projection matrix to the disparity map.
Further, in step 2, a target tracking algorithm and target detection are cascaded based on a Kalman filter bank, and multi-target tracking is realized through an interframe data association algorithm.
Further, in step 3, according to the position information of the target in the two-dimensional plane and the three-dimensional space and the train speed information of the train at different acquisition moments, which are obtained in step 2, a time-space motion track of the target history and a time-space motion track of the predicted target are constructed by using a linear interpolation method.
Further, in step 4, according to the target positions detected by the millimeter wave radar under different timestamps of the target, a historical space-time motion trajectory of the target and a predicted space-time motion trajectory of the target are constructed.
Further, in step 5, after the timestamp of the laser radar collected data is obtained, searching the position information of the target at the current moment based on the target motion space-time trajectory information constructed based on the binocular camera vision and the target motion space-time trajectory information constructed based on the millimeter wave radar;
if the current moment is in the known historical time sequence, extracting the motion position of the target in the historical track corresponding to the current moment as a range base point for laser radar detection, and selecting a detection range in a self-adaptive manner by combining the relative motion state of the target;
if the current time is not in the known historical time sequence, extracting a motion position in a target prediction track based on the current time as a range base point for laser radar detection, and adaptively selecting a larger detection range by combining the relative motion state of the target;
and in the selected detection range, the laser radar acquires target point clouds in a clustering mode and establishes a candidate target output queue.
A time asynchronous perception sensor fusion method comprises the following steps:
S201, synchronizing the binocular camera, the laser radar and the millimeter wave radar in time and space, and obtaining the time distribution of the raw data by parsing the timestamp information in the data acquired by the sensors; meanwhile, performing motion compensation and segmentation on the raw point cloud collected by the laser radar to obtain a preprocessed three-dimensional point cloud;
S202, performing target detection and target depth estimation on the images acquired by the binocular camera to obtain the visual target detection results, and then applying Kalman filtering to the visual targets to obtain the filtered visual target results; calculating the planar and spatial predicted position information of the visual targets, and establishing the targets' historical and predicted space-time trajectories;
S203, obtaining the target detection results from the data collected by the millimeter wave radar, calculating the spatial predicted position information of the radar targets, and establishing the targets' historical and predicted space-time motion trajectories;
S204, extracting the plane information of the visual targets at the laser radar acquisition moment, guiding the laser radar to delimit a detection range for clustering, generating a candidate target sequence by combining the three-dimensional position information of the visual targets and the radar targets, and finally outputting the target detection result according to the operating scenario.
The invention has the following beneficial effects:
the invention can be applied to a general binocular camera with a low frame rate, and a high frame rate camera which does not need to support hardware line control to trigger photographing is not needed. The binocular camera and the lidar may transmit data at a fixed frequency. Because the transmission delay of the laser radar is generally larger than that of the camera, the time of the processing equipment for acquiring data by the laser radar lags behind the camera in time difference, and the data of the laser radar and the data of the camera have a precedence order in a time sequence, so that the data fusion cannot be directly and synchronously performed.
The method can be applied to a millimeter wave radar that outputs target information at a fixed frequency. Considering the processing time of the target algorithm, the target detection result output by the millimeter wave radar also has a definite order in the time sequence relative to the vision and laser data, so the data cannot be fused directly and synchronously.
The invention provides a fusion method for time-asynchronous sensor data based on binocular vision, laser radar and millimeter wave radar perception sensors. The method mainly acquires target category and candidate position information through the binocular camera, establishes and predicts the space-time trajectory of target motion, and fuses the target information acquired by the radars at the current moment. It solves the problem that the binocular camera, the laser radar and the millimeter wave radar do not share a common data-acquisition time base, enables the three sensors to fuse their data despite time asynchrony, realizes the target detection function in practical engineering applications, effectively overcomes the deviation caused by multi-sensor time asynchrony and communication delay, and achieves cooperative work and complementary advantages among different sensors. It can be used for target detection of an electronically guided rubber-tyred vehicle during operation and improves the intelligence of the target detection system of an electronically guided rubber-tyred train.
Drawings
FIG. 1 is a flow chart of a fusion method;
FIG. 2 is a diagram of the steps of a fusion method;
FIG. 3 is a schematic view of binocular camera data, lidar data, and millimeter wave radar data acquisition;
FIG. 4 is a flowchart of an embodiment.
Detailed Description
The technical solutions of the present invention will be described in detail below with reference to the accompanying drawings; the described embodiments are obviously only some embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
The invention provides a time asynchronous perception sensor fusion method which is used for solving the problem of time asynchrony of a binocular camera, a laser radar and a millimeter wave radar in a data fusion process. After the data time synchronization of the binocular camera, the laser radar and the millimeter wave radar is guaranteed, the system assists the laser radar in target detection by combining the data collected by the binocular camera and the millimeter wave radar. The flow chart of the fusion method of the invention is shown in figure 1, and the step chart of the fusion method is shown in figure 2. The method comprises the following steps:
step 1, carrying out spatial calibration and synchronization of the installed binocular camera, laser radar and millimeter wave radar, and providing active time service to each sensor;
step 2, acquiring target information of a binocular camera based on binocular vision detection and tracking a target based on Kalman filtering;
step 3, establishing a space-time trajectory based on the movement of the visual detection target according to the position information under the target timestamp and the running speed of the train;
step 4, acquiring the target tracking result of the millimeter wave radar, and establishing a target motion space-time trajectory based on millimeter wave detection from the position information at each target timestamp and the running speed of the train;
step 5, predicting the position information of the target at the moment when the laser radar acquires data according to the established target motion space-time trajectories based on visual detection and on millimeter wave detection, assisting the laser radar in narrowing its detection range and establishing a candidate target output queue;
and 6, outputting the candidate target by combining the operation scene.
In the above technical solution, step 1 specifically comprises: in the fusion method provided by the invention, the binocular camera, the laser radar and the millimeter wave radar are first calibrated in spatial position, the data acquired in their respective coordinate systems are converted into the train coordinate system, and active time service is provided to each sensor so that the internal clocks of the binocular camera, the laser radar and the millimeter wave radar are unified. The invention completes spatial calibration and synchronization mainly by extracting feature sets: feature sets of elements such as points and lines are first extracted from the calibration plate image collected by the binocular camera, these feature sets are then matched across the calibration plate image, the three-dimensional point cloud data collected by the laser radar and the detection result of the millimeter wave radar, and the registration parameters of the three sensors are solved.
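Purely as an illustration (not part of the disclosure), the following Python sketch shows how measurements taken in a sensor's own coordinate system can be mapped into the train coordinate system once the registration parameters have been solved; the rotation, translation and point values are placeholder assumptions.

import numpy as np

# Illustrative sketch only: map sensor-frame measurements into the train frame using
# solved registration parameters (rotation R, translation t). Numeric values are placeholders.
def make_transform(R, t):
    T = np.eye(4)                      # 4x4 homogeneous transform
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_train_frame(points_sensor, T_train_from_sensor):
    homo = np.hstack([points_sensor, np.ones((points_sensor.shape[0], 1))])
    return (T_train_from_sensor @ homo.T).T[:, :3]

# Example: placeholder laser radar extrinsics (2 m ahead of and 1.5 m above the train origin).
T_train_from_lidar = make_transform(np.eye(3), np.array([2.0, 0.0, 1.5]))
lidar_points = np.array([[10.0, -0.5, 0.2], [12.3, 1.1, 0.4]])
points_in_train = to_train_frame(lidar_points, T_train_from_lidar)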
Considering the high-speed movement of the vehicle, the invention needs to apply motion compensation to the three-dimensional point cloud data collected by the laser radar. Many motion compensation methods for laser radar point clouds already exist, and the invention does not restrict which one is used.
Optionally, the motion compensation of the laser radar three-dimensional point cloud data is performed using the vehicle angular velocity and acceleration data acquired by an IMU. First, according to the timestamp data of the IMU and the laser radar, the system obtains the vehicle angular velocity and acceleration data from the IMU samples whose time difference from the three-dimensional point cloud collected by the laser radar at a given moment is smaller than a set value. The system then derives the vehicle's attitude and running information from the IMU data, and from this calculates a compensation transformation matrix of the three-dimensional point cloud data at any moment relative to the scanning moment. Finally, the system corrects the position of each laser point using the compensation transformation matrix.
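For illustration only, a minimal Python sketch of such IMU-based compensation is given below; it assumes the IMU angular velocity and acceleration are constant over one sweep, and the variable names and sign conventions are assumptions rather than the exact procedure of the invention.

import numpy as np
from scipy.spatial.transform import Rotation

# Minimal sketch: shift every laser point to where it would have been observed at the
# sweep reference time, assuming constant angular velocity and acceleration over the sweep.
def motion_compensate(points, point_times, t_ref, omega, accel, v_ref):
    """
    points      : (N, 3) laser points in the vehicle frame
    point_times : (N,) per-point timestamps [s]
    t_ref       : reference (scan) time the cloud is corrected to [s]
    omega       : (3,) vehicle angular velocity from the IMU [rad/s]
    accel       : (3,) vehicle acceleration from the IMU [m/s^2]
    v_ref       : (3,) vehicle velocity at t_ref [m/s]
    """
    dt = (t_ref - point_times)[:, None]              # time from each point to the reference
    rot = Rotation.from_rotvec(omega[None, :] * dt)  # one small rotation per point
    trans = v_ref * dt + 0.5 * accel * dt ** 2       # vehicle displacement over dt
    return rot.apply(points) + trans                 # compensated point positions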
Further, step 2 specifically comprises: obtaining the target information detected by the binocular camera through binocular vision, and tracking the targets based on Kalman filtering. A YOLOv7-based target detection network detects the left color image acquired by the binocular camera to obtain the visual target's plane information. The YOLO series of detection networks detect well, are easy to port, and are widely used in industry.
The invention designs a visual target tracking algorithm based on the classic Kalman filter and realizes multi-visual-target tracking with a Kalman filter bank. The tracking algorithm is cascaded with target detection, and multi-target tracking is realized through an inter-frame data association algorithm.
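A minimal sketch of one such constant-velocity Kalman filter is shown below (the filter bank keeps one instance per target); the state layout and noise magnitudes are illustrative assumptions, not the parameters used by the invention.

import numpy as np

# One constant-velocity Kalman filter over the image-plane state [x, y, vx, vy];
# a Kalman filter bank holds one such filter per tracked visual target.
class ConstantVelocityKalman:
    def __init__(self, x0, y0):
        self.x = np.array([x0, y0, 0.0, 0.0])                     # initial state
        self.P = np.eye(4) * 10.0                                 # state covariance
        self.H = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])   # only position is observed
        self.R = np.eye(2)                                        # measurement noise (assumed)
        self.Q = np.eye(4) * 0.01                                 # process noise (assumed)

    def predict(self, dt):
        F = np.array([[1., 0., dt, 0.],
                      [0., 1., 0., dt],
                      [0., 0., 1., 0.],
                      [0., 0., 0., 1.]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        return self.x[:2]                                         # predicted position after dt

    def update(self, z):
        y = z - self.H @ self.x                                   # innovation from an associated detection
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P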
Optionally, the invention selects a target tracking network based on the TraDes model. TraDes is an online detection-and-tracking model that tightly couples the detector and the tracker: detection is guided by the tracking information in an end-to-end network, and the detection result is fed back to the tracker. Compared with cascading separate detection and tracking networks, the TraDes-based tracking network improves both accuracy and efficiency.
The invention uses a stereo matching algorithm to perform pixel matching on the left and right images acquired by the binocular camera and obtain a disparity map. The target's depth information and its position in three-dimensional space are then obtained by applying the two-dimensional-to-three-dimensional projection matrix to the disparity map.
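For illustration, the sketch below applies the usual stereo relations (depth Z = f·B/d, then back-projection with the intrinsics) to recover a target's camera-frame position from its pixel location and disparity; the intrinsic and baseline values are placeholders, not this system's calibration.

import numpy as np

# Back-project one pixel (u, v) with its disparity into camera-frame 3D coordinates.
def pixel_to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    Z = fx * baseline / disparity          # depth from disparity
    X = (u - cx) * Z / fx                  # lateral offset
    Y = (v - cy) * Z / fy                  # vertical offset
    return np.array([X, Y, Z])

# Example: centre pixel of a detected bounding box with a disparity of 20 px (placeholder values).
target_xyz = pixel_to_3d(u=640, v=360, disparity=20.0, fx=1000.0, fy=1000.0,
                         cx=640.0, cy=360.0, baseline=0.12)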
Further, step 3 specifically includes: establishing the space-time trajectory of the visually detected target's motion according to the target's position information at each timestamp and the running speed of the train.
Using the target's position information in the two-dimensional plane and in three-dimensional space and the train's speed, acquired in step 2 at the different acquisition moments, the target's historical space-time motion trajectory can be constructed by linear interpolation and its future space-time motion trajectory can be predicted.
The linear interpolation relation is:

P_x(t_i) = P_x(t_j) + v̄_(t_j→t_i) · (t_i − t_j)

where P_x(t_i) denotes the position information at time t_i of the object numbered x, P_x(t_j) denotes its position information at time t_j, and v̄_(t_j→t_i) is the average speed of train movement from time t_j to time t_i.
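A small Python sketch of this trajectory query is given below, purely as an illustration; it assumes positions are linearly interpolated inside the known history and extrapolated with the last known speed beyond it, and the array layout is an assumption.

import numpy as np

# Query a target's space-time trajectory at an arbitrary time, e.g. the laser radar
# acquisition moment: interpolate inside the history, extrapolate beyond it.
def position_at(times, positions, speeds, t_query):
    """
    times     : (N,) increasing timestamps with known target positions
    positions : (N, 3) array of target positions in the train frame
    speeds    : (N, 3) array of train/target speed samples at those timestamps
    t_query   : time at which a position estimate is needed
    """
    times = np.asarray(times)
    if t_query <= times[-1]:                       # inside the known history: linear interpolation
        return np.array([np.interp(t_query, times, positions[:, k]) for k in range(3)])
    dt = t_query - times[-1]                       # beyond the history: extrapolate with last speed
    return positions[-1] + speeds[-1] * dt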
Further, step 4 specifically includes: acquiring the target tracking result of the millimeter wave radar, and establishing the space-time trajectory of target motion based on millimeter wave detection from the target's position information at each timestamp and the running speed of the train. The invention selects a millimeter wave radar that performs clustering-based target detection and tracking and can output target position information and target occurrence time information.
Optionally, point cloud information is collected by a 4D millimeter wave radar and target detection is performed with a deep-learning-based 3D detection algorithm; the number and precision of the point clouds collected by a 4D millimeter wave radar are markedly higher, so more accurate target position information can be obtained.
According to the target positions detected by the millimeter wave radar under different timestamps, the historical space-time motion trajectory of the target and the predicted space-time motion trajectory of the target are then constructed.
Further, step 5 specifically includes: predicting the position information of the target at the moment the laser radar acquires data, assisting the laser radar in narrowing its detection range, and establishing a candidate target output queue. After the timestamp of the laser radar data is obtained, the current moment is looked up in the known historical time sequence of the target motion space-time trajectories constructed from binocular vision and from the millimeter wave radar. If the current moment lies within the known historical time sequence, the target's motion position in the historical trajectory corresponding to the current moment is extracted as the range base point for laser radar detection, and a detection range is selected adaptively in combination with the target's relative motion state. If the current moment is not within the known historical time sequence, the target's motion position predicted for the current moment is used as the range base point for laser radar detection, and a larger detection range is selected adaptively in combination with the target's relative motion state. Within the effective detection range, the laser radar acquires target point clouds by clustering and constructs the candidate target output queue.
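Purely as an illustration of this step, the sketch below picks the range base point from the trajectory estimate, widens the search box when only a predicted position is available or the relative speed is large, and clusters the cropped point cloud into a candidate queue; the margin rule and DBSCAN parameters are assumptions, not values from the invention.

import numpy as np
from sklearn.cluster import DBSCAN

# Adaptive detection range around a trajectory-based position estimate, followed by
# clustering of the cropped laser radar points into candidate targets.
def candidate_targets(cloud, base_point, relative_speed, extrapolated):
    margin = 2.0 + 0.5 * abs(relative_speed)       # half-width of the search box [m], assumed rule
    if extrapolated:
        margin *= 1.5                              # larger range when only a predicted position exists
    roi = cloud[np.all(np.abs(cloud - base_point) < margin, axis=1)]
    if len(roi) == 0:
        return []
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(roi)
    return [roi[labels == k].mean(axis=0)          # one centroid per cluster -> candidate queue
            for k in set(labels) if k != -1]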
Further, step 6 specifically includes: outputting the candidate targets according to the operating scenario of the electronically guided train.
The invention can be applied with an ordinary low-frame-rate binocular camera; a high-frame-rate camera supporting hardware line-triggered capture is not required. The binocular camera, the laser radar and the millimeter wave radar may transmit data at fixed frequencies. As can be seen from fig. 3, there is a time interval between the reception times of the camera image, the laser radar point cloud and the millimeter wave radar data: the camera image is captured instantaneously, while the laser radar point cloud takes longer to acquire. Because the transmission delay of the radars is generally larger than that of the camera, the moment at which the processing equipment receives the radar data lags behind that of the camera, so the radar and camera data have a definite order in the time sequence and cannot be fused directly and synchronously. The invention therefore actively provides time service to the binocular camera, the laser radar and the millimeter wave radar, unifies the internal clocks of the sensors and the processing equipment, and obtains the time distribution of the raw data by parsing the time information in the images captured by the camera, the point clouds collected by the laser radar and the millimeter wave radar data.
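As a simple illustration of how the unified timestamps are used, the sketch below merges independently buffered camera, laser radar and millimeter wave radar messages into one time-ordered stream instead of assuming synchronous arrival; the message structure and times are made-up examples.

import heapq

# Merge independently buffered sensor messages into acquisition-time order using the
# unified timestamps produced by active time service. Times below are made-up examples.
camera_msgs = [{"t": 0.000, "sensor": "camera"}, {"t": 0.050, "sensor": "camera"}]
radar_msgs  = [{"t": 0.020, "sensor": "mmwave"}, {"t": 0.070, "sensor": "mmwave"}]
lidar_msgs  = [{"t": 0.035, "sensor": "lidar"},  {"t": 0.135, "sensor": "lidar"}]

for msg in heapq.merge(camera_msgs, radar_msgs, lidar_msgs, key=lambda m: m["t"]):
    print(f'{msg["t"]:.3f} s  {msg["sensor"]}')    # processed in true acquisition order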
The method processes the left and right images acquired by the binocular camera at the same moment, performs target detection on the color image acquired by the left camera with a deep-learning-based detection algorithm, and obtains the target's two-dimensional plane information for further processing. Note that because the time asynchronous fusion method provided by the invention relies on the visual detection result to help the laser radar delimit its search range, the choice of deep learning detection algorithm for the binocular camera must take real-time performance into account.
For example, the YOLO series and SSD networks perform detection with an end-to-end, regression-based neural network and meet the system's real-time requirement better than R-CNN series networks built on region proposals. The YOLOv7 network offers both excellent real-time performance and high detection accuracy, and is well suited as the visual detection algorithm in the time asynchronous fusion method.
After a target is detected, its position relative to the camera is calculated from the difference between its imaging positions in the left and right images, giving the target's category, two-dimensional plane extent and three-dimensional position at the current moment. The target's position can then be tracked across consecutive images, a motion model of the target established, and its two-dimensional position at the laser radar's data acquisition moment predicted by Kalman filtering.
Target tracking establishes the positional relation of the tracked target across a continuous data sequence to obtain its complete motion trajectory: typically, given the target's location features in the previous frame, its location and bounding box size are predicted in the next frame.
The method projects the target's two-dimensional plane information in the image into a three-dimensional spatial range based on the relation between the camera's internal parameters and the radar's external parameters, and thereby guides the laser radar to demarcate a candidate detection area for target detection and obtain a candidate target sequence. Valid targets are screened from the laser radar point cloud in combination with the target's three-dimensional position calculated from the left and right images, and the targets' category information and three-dimensional spatial positions are output.
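As an illustration of delimiting a candidate detection area from a visual detection, the sketch below back-projects the 2D bounding box at assumed near and far depth bounds into a frustum and maps it through placeholder camera-to-laser-radar extrinsics; K and the transform are assumptions, not this system's calibration.

import numpy as np

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0,    0.0,   1.0]])      # placeholder camera intrinsics
T_lidar_from_cam = np.eye(4)              # placeholder camera-to-laser-radar extrinsic transform

def bbox_frustum(bbox, z_near, z_far):
    """bbox = (u_min, v_min, u_max, v_max); returns the 8 frustum corners in the laser radar frame."""
    u_min, v_min, u_max, v_max = bbox
    corners = []
    for u, v in [(u_min, v_min), (u_min, v_max), (u_max, v_min), (u_max, v_max)]:
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])    # viewing ray in the camera frame
        for z in (z_near, z_far):
            p_cam = ray * z                               # corner at the near/far depth bound
            p_hom = T_lidar_from_cam @ np.append(p_cam, 1.0)
            corners.append(p_hom[:3])
    return np.array(corners)      # the laser radar only clusters points inside this region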
In order to make the implementation steps of the present invention clearer, the method will be described below with reference to an embodiment flowchart, which is shown in fig. 4.
S201, time and space synchronization is performed on the binocular camera, the laser radar and the millimeter wave radar, and the time distribution of the raw data is obtained by parsing the timestamp information in the data acquired by the sensors; meanwhile, motion compensation and segmentation are performed on the raw point cloud collected by the laser radar to obtain a preprocessed three-dimensional point cloud;
S202, target detection and target depth estimation are performed on the images acquired by the binocular camera to obtain the visual target detection results, and Kalman filtering is then applied to the visual targets to obtain the filtered visual target results; the planar and spatial predicted position information of the visual targets is calculated, and the targets' historical and predicted space-time trajectories are established;
S203, the target detection results are obtained from the data collected by the millimeter wave radar, the spatial predicted position information of the radar targets is calculated, and the targets' historical and predicted space-time motion trajectories are established;
S204, the plane information of the visual targets at the laser radar acquisition moment is extracted, the laser radar is guided to delimit a detection range for clustering, a candidate target sequence is generated by combining the three-dimensional position information of the visual targets and the radar targets, and the target detection result is finally output according to the operating scenario.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A time asynchronous perception sensor fusion method, characterized by comprising the following steps:
step 1, carrying out spatial calibration and synchronization of the binocular camera, the laser radar and the millimeter wave radar installed on the train, and providing active time service to each sensor;
step 2, acquiring target information of the binocular camera based on binocular vision detection and tracking a target based on Kalman filtering;
step 3, establishing a space-time trajectory of the target motion based on visual detection according to the position information under the target timestamp and the running speed of the train;
step 4, acquiring a target tracking result of the millimeter wave radar, and establishing a target motion space-time trajectory based on millimeter wave detection based on the position information under the target timestamp and the running speed of the train;
step 5, predicting the position information of the target at the moment when the laser radar collects data, assisting the laser radar to reduce the detection range and establishing a candidate target output queue;
and 6, outputting the candidate target.
2. The time asynchronous perception sensor fusion method as claimed in claim 1, wherein in step 1, the binocular camera, the laser radar and the millimeter wave radar are calibrated in spatial position by means of feature set extraction, the data collected in their respective coordinate systems are converted into the train coordinate system, and the internal clocks of the binocular camera, the laser radar and the millimeter wave radar are unified by providing active time service to each sensor.
3. The method as claimed in claim 1, wherein the method comprises the step of performing motion compensation on the three-dimensional point cloud data collected by the lidar.
4. The method for temporal asynchronous perceptual sensor fusion as defined in claim 3, wherein the motion compensation step comprises:
according to the time stamp data of the IMU and the laser radar, vehicle angular velocity data and vehicle acceleration data in the IMU are obtained, wherein the time difference between the IMU and three-dimensional point cloud data collected by the laser radar at a certain moment is smaller than a set value;
acquiring attitude and running information of the vehicle according to the IMU data, and calculating a compensation transformation matrix of the three-dimensional point cloud data at any moment relative to the scanning moment according to the attitude and running information;
the position of each laser spot is corrected using the compensation transformation matrix.
5. The method for fusing the time-asynchronous perception sensors as claimed in claim 3, wherein in step 2, a target detection network based on YOLOv7 detects a left color image collected by a binocular camera to obtain visual target information;
and performing pixel matching on left and right images visually acquired by the binocular camera by using a stereo matching algorithm to obtain a disparity map, and acquiring target depth information and position information of a target in a three-dimensional space by using a projection matrix from a two-dimensional plane to the three-dimensional space on the disparity map.
6. The time-asynchronous perceptual sensor fusion method of claim 1 or 5, wherein in the step 2, a target tracking algorithm and a target detection are cascaded based on a Kalman filter bank, and multi-target tracking is realized through an interframe data association algorithm.
7. The time-asynchronous perception sensor fusion method as claimed in claim 5, wherein in step 3, a linear interpolation method is used to construct a historical space-time motion track of the target and a predicted space-time motion track of the target according to the position information of the target in the two-dimensional plane and the three-dimensional space and the train speed information of the train at different collection times, which are acquired in step 2.
8. The method as claimed in claim 1, wherein in step 4, the temporal-spatial motion trajectory of the target history and the temporal-spatial motion trajectory of the predicted target are constructed according to the positions of the targets detected by the millimeter wave radar under different timestamps of the target.
9. The method for fusing the time-asynchronous perception sensor as claimed in claim 1, wherein in step 5, after a timestamp of data collected by the laser radar is obtained, position information of a target at the current time is searched based on target motion space-time trajectory information constructed based on binocular camera vision and target motion space-time trajectory information constructed based on the millimeter wave radar;
if the current moment is in the known historical time sequence, extracting the motion position of the target in the historical track corresponding to the current moment as a range base point for laser radar detection, and selecting a detection range in a self-adaptive manner by combining the relative motion state of the target;
if the current moment is not in the known historical time sequence, extracting a motion position in a target prediction track based on the current time as a range base point for laser radar detection, and adaptively selecting a larger detection range by combining the relative motion state of the target;
and in the selected detection range, the laser radar acquires target point clouds in a clustering mode and constructs a candidate target output queue.
10. A time asynchronous perception sensor fusion method is characterized by comprising the following steps:
s201, synchronizing time and space of a binocular camera, a laser radar and a millimeter wave radar, and acquiring time distribution information of original data by analyzing timestamp information in sensor acquisition information; meanwhile, motion compensation and segmentation are carried out on the original point cloud collected by the laser radar, and a preprocessed three-dimensional point cloud is obtained;
s202, carrying out target detection and target depth information acquisition on an image acquired by a binocular camera to obtain a visual target detection result, and then carrying out Kalman filtering processing on the visual target to obtain a detection result of the visual target; calculating plane and spatial prediction position information of the visual target, and establishing target history and predicted space-time trajectory;
s203, obtaining target detection results of data collected by the millimeter wave radar, calculating spatial prediction position information of the radar target, and establishing target history and predicted space-time motion tracks;
and S204, extracting plane information of the visual target during laser radar acquisition, guiding the laser radar to define a detection range for clustering, generating a candidate target sequence by combining the visual target and the stereo position information of the radar target, and finally outputting a target detection result according to an operation scene.
CN202211644316.5A 2022-12-21 2022-12-21 Time asynchronous perception sensor fusion method Pending CN115855079A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211644316.5A CN115855079A (en) 2022-12-21 2022-12-21 Time asynchronous perception sensor fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211644316.5A CN115855079A (en) 2022-12-21 2022-12-21 Time asynchronous perception sensor fusion method

Publications (1)

Publication Number Publication Date
CN115855079A true CN115855079A (en) 2023-03-28

Family

ID=85674691

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211644316.5A Pending CN115855079A (en) 2022-12-21 2022-12-21 Time asynchronous perception sensor fusion method

Country Status (1)

Country Link
CN (1) CN115855079A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116402092A (en) * 2023-05-17 2023-07-07 湖南奥通智能科技有限公司 Oil cylinder motion optimization method based on asynchronous time domain visual sensor
CN116402092B (en) * 2023-05-17 2023-09-12 湖南奥通智能科技有限公司 Oil cylinder motion optimization method based on asynchronous time domain visual sensor
CN117953459A (en) * 2024-03-25 2024-04-30 安徽蔚来智驾科技有限公司 Perception fusion result acquisition method, readable storage medium and intelligent device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination