CN112817301B - Fusion method, device and system of multi-sensor data


Info

Publication number
CN112817301B
Authority
CN
China
Prior art keywords
sensor data
current
time
storage space
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911041828.0A
Other languages
Chinese (zh)
Other versions
CN112817301A
Inventor
管守奎
李元
胡佳兴
段睿
韩永根
穆北鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Momenta Technology Co Ltd
Original Assignee
Beijing Momenta Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Momenta Technology Co Ltd
Priority to CN201911041828.0A
Publication of CN112817301A
Application granted
Publication of CN112817301B
Legal status: Active

Links

Images

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)
  • Arrangements For Transmission Of Measured Signals (AREA)

Abstract

The embodiment of the invention discloses a method, a device and a system for fusing multi-sensor data, wherein the method comprises the following steps: after determining that current specified sensor data collected by a specified sensor has been obtained, a processor obtains a first moment corresponding to the current specified sensor data; obtains, from a preset storage space, target sensor data whose corresponding acquisition moments fall before the first moment and after a second moment; performs filtering on the target sensor data with a current filter, in a preset data-processing order, to obtain a filtering fusion result corresponding to the current specified sensor data; and determines the current pose information of the target vehicle corresponding to the current specified sensor data by using a current pose predictor, the filtering fusion result, the current specified sensor data, and the specified sensor data between the current acquisition moment and the first moment, so as to ensure that the vehicle positioning result obtained during real-vehicle positioning is consistent with the vehicle positioning result obtained in off-line platform testing.

Description

Fusion method, device and system of multi-sensor data
Technical Field
The invention relates to the technical field of intelligent driving, and in particular to a method, a device and a system for fusing multi-sensor data.
Background
Among autonomous driving technologies, vehicle positioning is of paramount importance. In the related art, when a vehicle is positioned, data collected by multiple sensors disposed in the target vehicle, such as an image acquisition unit, an IMU (Inertial Measurement Unit), a wheel speed sensor and an inertial navigation unit, are generally fused to obtain a vehicle positioning result for the target vehicle.
During real-vehicle positioning, problems in the algorithms underlying the positioning technology inevitably arise and cause deviations in the vehicle positioning result. To ensure the feasibility of those algorithms and the safety of vehicles and drivers, such algorithm problems must be reproducible on an off-line platform, and correspondingly the deviations of the vehicle positioning result that occurred during real-vehicle positioning must be reproducible there as well. Moreover, because the computing power of the real-vehicle platform differs from that of the off-line platform, the speed at which the multi-sensor data are fused may differ, so the vehicle positioning results may fail to agree. How to fuse multi-sensor data in a way that guarantees consistency between the vehicle positioning result obtained during real-vehicle positioning and the one obtained in off-line platform testing is therefore a problem to be solved.
Disclosure of Invention
The invention provides a method, a device and a system for fusing multi-sensor data, aimed at ensuring that the vehicle positioning result obtained during real-vehicle positioning is consistent with the vehicle positioning result obtained in off-line platform testing. The specific technical scheme is as follows:
In a first aspect, an embodiment of the present invention provides a method for fusing multi-sensor data, applied to a processor of a multi-sensor data fusion system, where the system further includes at least two types of sensors and a preset storage space; the sensors, all disposed in the same target vehicle, each collect corresponding sensor data, and the preset storage space stores the sensor data collected by the at least two types of sensors. The method comprises:
after determining that current specified sensor data collected by a specified sensor has been obtained, obtaining a first moment corresponding to the current specified sensor data, wherein the difference between the current acquisition moment corresponding to the current specified sensor data and the first moment is a preset time difference;
obtaining, from the preset storage space, target sensor data whose corresponding acquisition moments are before the first moment and after a second moment, wherein the difference between the acquisition moment corresponding to the specified sensor data preceding the current specified sensor data and the second moment is the preset time difference;
filtering the target sensor data with a current filter in a preset data-processing order to obtain a filtering fusion result corresponding to the current specified sensor data;
and determining the current pose information of the target vehicle corresponding to the current specified sensor data by using a current pose predictor, the filtering fusion result corresponding to the current specified sensor data, the current specified sensor data, and the specified sensor data between the current acquisition moment and the first moment.
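Read procedurally, the four steps above amount to a small event-driven loop. The following Python sketch is only an illustrative reading: the class and method names, the integer-millisecond timestamps, the strict window bounds, and the running-mean stand-in for the filter (whose type the patent does not fix) are all assumptions, not part of the claims.

```python
class FusionSketch:
    """Illustrative walk-through of the four claimed steps (hypothetical names)."""

    def __init__(self, preset_time_diff_ms):
        self.dt = preset_time_diff_ms   # preset time difference
        self.buffer = []                # preset storage space: (acq_time_ms, value)
        self.prev_acq_time = None       # acquisition moment of previous specified data

    def store(self, acq_time_ms, value):
        # Any sensor's sample lands in the preset storage space.
        self.buffer.append((acq_time_ms, value))

    def on_specified_data(self, acq_time_ms):
        # Step 1: first moment = current acquisition moment - preset time difference.
        t_first = acq_time_ms - self.dt
        # Second moment = previous specified acquisition moment - preset time difference.
        t_second = (self.prev_acq_time - self.dt
                    if self.prev_acq_time is not None else float("-inf"))
        # Step 2: target data acquired after the second moment, before the first.
        target = [(t, v) for (t, v) in self.buffer if t_second < t < t_first]
        # Step 3: process in the preset order (ascending acquisition moment);
        # a running mean stands in for the unspecified filter.
        fusion = None
        for i, (_, v) in enumerate(sorted(target)):
            fusion = v if fusion is None else fusion + (v - fusion) / (i + 1)
        self.prev_acq_time = acq_time_ms
        # Step 4 (pose predictor combining this result with the specified data
        # between t_first and acq_time_ms) is outside this sketch.
        return t_first, t_second, fusion
```

The pose-predictor step is deliberately left out; the sketch only shows how the first and second moments delimit the data each fusion round consumes, independently of how fast the platform runs.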
Optionally, the step of obtaining, from the preset storage space, the target sensor data whose corresponding acquisition moments are before the first moment and after the second moment includes:
determining whether the preset storage space stores sensor data whose corresponding acquisition moments are before the first moment;
and if so, obtaining from the preset storage space the target sensor data whose corresponding acquisition moments are before the first moment and after the second moment.
Optionally, the processor is a processor disposed in a vehicle platform of the target vehicle;
After the step of obtaining the first time corresponding to the current specified sensor data, the method further includes:
and storing the first moment in the preset storage space in correspondence with the current specified sensor data, as the fusion moment of the filtering fusion result corresponding to the current specified sensor data.
Optionally, the processor is a processor disposed on an off-board device;
the step of obtaining the first moment corresponding to the current specified sensor data comprises the following steps:
and obtaining a first moment corresponding to the current specified sensor data from the preset storage space.
Optionally, the processor is a processor disposed in a vehicle platform of the target vehicle;
the step of obtaining the first moment corresponding to the current specified sensor data comprises the following steps:
obtaining the preset time difference;
and taking the moment obtained by subtracting the preset time difference from the current acquisition moment corresponding to the current specified sensor data as the first moment corresponding to the current specified sensor data.
In a second aspect, an embodiment of the present invention provides a multi-sensor data fusion device, applied to a processor of a multi-sensor data fusion system, where the system further includes at least two types of sensors and a preset storage space; the sensors, all disposed in the same target vehicle, each collect corresponding sensor data, and the preset storage space stores the sensor data collected by the at least two types of sensors. The device comprises:
a first obtaining module, configured to obtain, after determining that the current specified sensor data collected by the specified sensor has been obtained, a first moment corresponding to the current specified sensor data, wherein the difference between the current acquisition moment corresponding to the current specified sensor data and the first moment is a preset time difference;
a second obtaining module, configured to obtain, from the preset storage space, target sensor data whose corresponding acquisition moments are before the first moment and after a second moment, wherein the difference between the acquisition moment corresponding to the specified sensor data preceding the current specified sensor data and the second moment is the preset time difference;
a filtering module, configured to perform filtering on the target sensor data with a current filter in a preset data-processing order to obtain a filtering fusion result corresponding to the current specified sensor data;
a determining module, configured to determine the current pose information of the target vehicle corresponding to the current specified sensor data by using a current pose predictor, the filtering fusion result corresponding to the current specified sensor data, the current specified sensor data, and the specified sensor data between the current acquisition moment and the first moment.
Optionally, the second obtaining module is specifically configured to determine whether the preset storage space stores sensor data whose corresponding acquisition moments are before the first moment;
and if so, to obtain from the preset storage space the target sensor data whose corresponding acquisition moments are before the first moment and after the second moment.
Optionally, the processor is a processor disposed in a vehicle platform of the target vehicle;
the apparatus further comprises:
a storage module, configured to store, after the current filter has performed filtering on the target sensor data to obtain the filtering fusion result, the first moment in the preset storage space in correspondence with the current specified sensor data, as the fusion moment of the filtering fusion result corresponding to the current specified sensor data.
Optionally, the processor is a processor disposed on an off-board device;
the first obtaining module is specifically configured to obtain a first moment corresponding to the current specified sensor data from the preset storage space.
Optionally, the processor is a processor disposed in a vehicle platform of the target vehicle;
the first obtaining module is specifically configured to obtain the preset time difference;
and to take the moment obtained by subtracting the preset time difference from the current acquisition moment corresponding to the current specified sensor data as the first moment corresponding to the current specified sensor data.
In a third aspect, an embodiment of the present invention provides a system for fusing multi-sensor data, the system including a processor, at least two types of sensors and a preset storage space; the sensors, all disposed in the same target vehicle, each collect corresponding sensor data, and the preset storage space stores the sensor data collected by the at least two types of sensors. The processor is configured to obtain, after determining that the current specified sensor data collected by the specified sensor has been obtained, a first moment corresponding to the current specified sensor data, wherein the difference between the current acquisition moment corresponding to the current specified sensor data and the first moment is a preset time difference;
obtain, from the preset storage space, target sensor data whose corresponding acquisition moments are before the first moment and after a second moment, wherein the difference between the acquisition moment corresponding to the specified sensor data preceding the current specified sensor data and the second moment is the preset time difference;
perform filtering on the target sensor data with a current filter in a preset data-processing order to obtain a filtering fusion result corresponding to the current specified sensor data;
and determine the current pose information of the target vehicle corresponding to the current specified sensor data by using a current pose predictor, the filtering fusion result corresponding to the current specified sensor data, the current specified sensor data, and the specified sensor data between the current acquisition moment and the first moment.
Optionally, the processor is specifically configured to determine whether the preset storage space stores sensor data whose corresponding acquisition moments are before the first moment;
and if so, to obtain from the preset storage space the target sensor data whose corresponding acquisition moments are before the first moment and after the second moment.
Optionally, the processor is a processor disposed in a vehicle platform of the target vehicle;
the processor is further configured to store, after the first moment corresponding to the current specified sensor data is obtained, that first moment in the preset storage space in correspondence with the current specified sensor data, as the fusion moment of the filtering fusion result corresponding to the current specified sensor data.
Optionally, the processor is a processor disposed on an off-board device;
the processor is specifically configured to obtain a first time corresponding to the current specified sensor data from the preset storage space.
Optionally, the processor is a processor disposed in a vehicle platform of the target vehicle;
the processor is specifically configured to obtain the preset time difference;
and to take the moment obtained by subtracting the preset time difference from the current acquisition moment corresponding to the current specified sensor data as the first moment corresponding to the current specified sensor data.
As can be seen from the above, the method, device and system for fusing multi-sensor data provided by the embodiments of the present invention are applied to a processor of a multi-sensor data fusion system, where the system further includes at least two types of sensors and a preset storage space; the sensors, all disposed in the same target vehicle, each collect corresponding sensor data, and the preset storage space stores the sensor data collected by the at least two types of sensors. After determining that the current specified sensor data collected by the specified sensor has been obtained, the processor can obtain a first moment corresponding to the current specified sensor data, where the difference between the current acquisition moment corresponding to the current specified sensor data and the first moment is a preset time difference; obtain, from the preset storage space, target sensor data whose corresponding acquisition moments are before the first moment and after a second moment, where the difference between the acquisition moment corresponding to the preceding specified sensor data and the second moment is the preset time difference; perform filtering on the target sensor data with a current filter in a preset data-processing order to obtain a filtering fusion result corresponding to the current specified sensor data; and determine the current pose information of the target vehicle corresponding to the current specified sensor data by using a current pose predictor, the filtering fusion result, the current specified sensor data, and the specified sensor data between the current acquisition moment and the first moment.
By applying the embodiment of the invention, the trigger condition of the multi-sensor data fusion process can be fixed: the fusion process is triggered only after the current specified sensor data collected by the specified sensor is obtained. When the current filter performs filtering on the sensor data, the data are processed in the preset data-processing order, which guarantees both the ordering of data processing and a fixed moment for the positioning result of the target vehicle in the filtering fusion result corresponding to the current specified sensor data; by likewise fixing the moment at which the filter outputs that filtering fusion result, a fusion process independent of platform efficiency is achieved. Before the current filter performs filtering, the first moment corresponding to the current specified sensor data is determined, the target sensor data whose acquisition moments are before the first moment and after the second moment are obtained from the preset storage space, and the current filter then filters those target sensor data. This ensures that, on platforms with different computing power, the filter outputs identical filtering fusion results for the same specified sensor data, i.e. the input to the current pose predictor is identical, which in turn guarantees consistency between the vehicle positioning result in real-vehicle positioning and the one in off-line platform testing.
Algorithm problems that occur while the real vehicle is running, and that previously could not be reproduced on an off-line (i.e. off-vehicle) platform, can thus be reproduced, greatly improving the efficiency of reproducing and solving such problems. Of course, it is not necessary for any one product or method embodying the invention to achieve all of the above advantages at the same time.
The innovation points of the embodiment of the invention include:
1. The trigger condition of the multi-sensor data fusion process can be fixed: the fusion process is triggered only after the current specified sensor data collected by the specified sensor is obtained. When the current filter performs filtering on the sensor data, the data are processed in the preset data-processing order, ensuring both the ordering of data processing and a fixed moment for the positioning result of the target vehicle in the filtering fusion result corresponding to the current specified sensor data; by likewise fixing the moment at which the filter outputs that filtering fusion result, a fusion process independent of platform efficiency is realized. Before the current filter performs filtering, the first moment corresponding to the current specified sensor data is determined, the target sensor data whose acquisition moments are before the first moment and after the second moment are obtained from the preset storage space, and the current filter then filters those target sensor data. This ensures that, on platforms with different computing power, the filter outputs identical filtering fusion results for the same specified sensor data, i.e. the input to the current pose predictor is identical, which in turn guarantees consistency between the vehicle positioning result in real-vehicle positioning and the one in off-line platform testing.
Algorithm problems that occur while the real vehicle is running, and that previously could not be reproduced on an off-line (i.e. off-vehicle) platform, can thus be reproduced, greatly improving the efficiency of reproducing and solving such problems.
2. When the processor is disposed in the on-board platform of the target vehicle, after the first moment corresponding to the current specified sensor data is obtained, the first moment is stored in the preset storage space in correspondence with the current specified sensor data. This ensures that, when the multi-sensor data fusion process is executed on the off-line platform for the same specified sensor data, the same sensor data are determined from the preset storage space, providing a basis for the consistency of the vehicle positioning result in real-vehicle positioning with the one in off-line platform testing.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is apparent that the drawings in the following description are only some embodiments of the invention. Other figures may be derived from these figures without inventive effort for a person of ordinary skill in the art.
Fig. 1 is a schematic flow chart of a method for fusing multi-sensor data according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a multi-sensor data fusion device according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a multi-sensor data fusion system according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without any inventive effort, are intended to be within the scope of the invention.
It should be noted that the terms "comprising" and "having" and any variations thereof in the embodiments of the present invention and the accompanying drawings are intended to cover non-exclusive inclusions. A process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed but may alternatively include other steps or elements not listed or inherent to such process, method, article, or apparatus.
The invention provides a method, a device and a system for fusing multi-sensor data, aimed at ensuring that the vehicle positioning result obtained during real-vehicle positioning is consistent with the vehicle positioning result obtained in off-line platform testing. Embodiments of the invention are described in detail below.
Fig. 1 is a schematic flow chart of a method for fusing multi-sensor data according to an embodiment of the present invention. The method is applied to a processor of a multi-sensor data fusion system; the system may further comprise at least two types of sensors and a preset storage space; the sensors, all disposed in the same target vehicle, each collect corresponding sensor data, and the preset storage space stores the sensor data collected by the at least two types of sensors. The method may comprise the following steps:
s101: after determining to obtain the current specified sensor data acquired by the specified sensor, obtaining a first moment corresponding to the current specified sensor data.
The difference value between the current acquisition time corresponding to the current appointed sensor data and the first time is a preset time difference.
In one implementation, the processor may be a processor disposed in the on-board platform of the target vehicle, or a processor disposed in an off-board platform; the off-board platform may be an electronic device such as a desktop computer, a notebook computer or an all-in-one machine. The processor may be in data communication with the at least two types of sensors disposed in the target vehicle and can obtain the data they collect.
In one implementation, the at least two types of sensors may include, but are not limited to, at least two of an IMU (Inertial Measurement Unit), a wheel speed sensor, an inertial navigation unit and an image acquisition unit. The inertial navigation unit may be a GNSS (Global Navigation Satellite System) positioning unit or a GPS (Global Positioning System) positioning unit. The image acquisition unit may be a camera, etc.
In the embodiment of the invention, one sensor may be designated in advance, automatically or manually, from the at least two types of sensors as the specified sensor. After determining that sensor data collected by the specified sensor has been obtained, the processor immediately triggers the multi-sensor data fusion flow. Sensor data collected by the specified sensor is referred to as specified sensor data, and the current specified sensor data may be any specified sensor data currently to be processed.
In this step, after determining that the current specified sensor data collected by the specified sensor has been obtained, the processor may first obtain the first moment corresponding to the current specified sensor data and then execute the subsequent fusion flow. The difference between the current acquisition moment corresponding to the current specified sensor data and the first moment is a preset time difference. The preset time difference is determined by the combination of the at least two types of sensors in the fusion system, following this principle: counting from the moment a frame of data is generated, the data of the slowest-transmitting sensor must have reached the preset storage space once the preset time difference has elapsed. Accordingly, the preset time difference may be greater than or equal to the transmission delay of a target sensor, where the target sensor is the sensor, among the at least two types, that takes the longest to transmit its collected data to the preset storage space.
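The stated principle, that the preset time difference must let even the slowest sensor's frame reach the storage space, suggests deriving it from per-sensor worst-case delays. A minimal sketch with entirely hypothetical delay figures (the sensor names and the optional safety margin are assumptions, not values from the patent):

```python
# Hypothetical worst-case delays (seconds) from data generation to arrival
# in the preset storage space; the camera pipeline is assumed slowest.
worst_case_delay = {
    "imu": 0.005,
    "wheel_speed": 0.010,
    "gnss": 0.050,
    "camera": 0.080,
}

def choose_preset_time_diff(delays, margin=0.0):
    # The preset time difference must be >= the slowest sensor's delay,
    # so every frame has reached the storage space once it elapses.
    return max(delays.values()) + margin
```

With these figures the preset time difference would be at least 0.08 s; a margin can absorb jitter in the worst-case estimate.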
The preset storage space may be provided by a buffer.
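Such a buffer is naturally modelled as a time-indexed store that answers "acquired after the second moment and before the first moment" queries. A minimal Python sketch; the class name, the exclusive bounds, and the list-based storage are assumptions rather than anything the patent prescribes:

```python
import bisect

class SensorBuffer:
    """Sketch of the preset storage space as a time-indexed buffer."""

    def __init__(self):
        self._times = []  # acquisition moments, kept sorted
        self._data = []   # samples, parallel to _times

    def store(self, acq_time, sample):
        # Insert in timestamp order so window queries stay binary-searchable.
        i = bisect.bisect_right(self._times, acq_time)
        self._times.insert(i, acq_time)
        self._data.insert(i, sample)

    def window(self, t_second, t_first):
        # Samples acquired after t_second and before t_first (exclusive bounds).
        lo = bisect.bisect_right(self._times, t_second)
        hi = bisect.bisect_left(self._times, t_first)
        return self._data[lo:hi]
```

Keeping the buffer sorted by acquisition moment also yields the "preset data processing sequence" for free: a window query returns the samples already in processing order.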
The specified sensor may be any sensor in the multi-sensor data fusion system. In one implementation, given the short transmission delay of IMU data, the specified sensor may be the IMU, which improves to some extent the real-time performance of determining the pose information of the target vehicle.
The manner of obtaining the first moment differs depending on the platform in which the processor is disposed. In one embodiment of the present invention, the processor is a processor disposed on an off-board device; S101 may include:
and obtaining a first moment corresponding to the current specified sensor data from the preset storage space.
It can be understood that, when the processor is disposed on the off-board device, the preset storage space already stores the data collected by the at least two types of sensors during the target drive of the target vehicle, each datum stored together with its acquisition moment. For each item of specified sensor data, the fusion moment of its corresponding filtering fusion result is also stored in the preset storage space; for the current specified sensor data, that stored fusion moment is the first moment. The fusion moment corresponding to each item of specified sensor data is the moment, recorded when the processor in the on-board platform handled that specified sensor data during the target drive, characterizing the latest filtering fusion result produced by the filter and fed, together with the specified sensor data, into the pose predictor.
In another embodiment of the present invention, the processor is a processor disposed within an on-board platform of the target vehicle; the S101 may include:
obtaining the preset time difference;
and taking the moment obtained by subtracting the preset time difference from the current acquisition moment corresponding to the current specified sensor data as the first moment corresponding to the current specified sensor data.
Under the condition that the processor is arranged in a vehicle-mounted platform of the target vehicle, the processor needs to calculate in real time to obtain a first moment corresponding to the current specified sensor data according to the current acquisition moment and the preset time difference of the current specified sensor data, and then execute a subsequent fusion process.
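The on-board computation above reduces to a single subtraction; a minimal sketch (function name and sample values are hypothetical):

```python
def first_time(current_acquisition_time: float, preset_time_delta: float) -> float:
    """First time = current acquisition time minus the preset time difference."""
    return current_acquisition_time - preset_time_delta

# e.g. a sample acquired at t = 12.34 s with a 0.5 s preset time difference
t1 = first_time(12.34, 0.5)
```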
S102: obtaining, from the preset storage space, the target sensor data whose corresponding acquisition time is before the first time and after the second time.
The difference between the acquisition time corresponding to the specified sensor data immediately preceding the current specified sensor data and the second time is the preset time difference.
In this step, after obtaining the first time corresponding to the current specified sensor data, the processor may continue to traverse the preset storage space and obtain, from the preset storage space, the sensor data whose corresponding acquisition time is before the first time and after the second time as the target sensor data.
In one case, among the sensor data stored in the preset storage space there may be sensor data with a long transmission delay, for example, sensor data whose transmission delay exceeds the preset time difference. In order to ensure the accuracy of the determined pose information of the target vehicle, the processor may filter out, from the sensor data whose corresponding acquisition time is before the first time and after the second time, the sensor data whose transmission delay exceeds the preset time difference, and use the remaining sensor data in that window as the target sensor data. The transmission delay of each piece of sensor data is the difference between the time at which the data is transmitted to the preset storage space and its corresponding acquisition time.
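The window selection and delay filtering described above can be sketched as follows (record layout and names are hypothetical, not taken from the patent):

```python
from dataclasses import dataclass

@dataclass
class SensorRecord:
    acquisition_time: float  # when the sensor sampled the data
    arrival_time: float      # when the data reached the preset storage space
    payload: object = None

def select_target_data(records, second_time, first_time, preset_time_delta):
    """Keep records whose acquisition time lies in (second_time, first_time)
    and whose transmission delay (arrival minus acquisition) does not exceed
    the preset time difference."""
    target = []
    for r in records:
        in_window = second_time < r.acquisition_time < first_time
        delay = r.arrival_time - r.acquisition_time
        if in_window and delay <= preset_time_delta:
            target.append(r)
    return target
```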
S103: and filtering the target sensor data according to a preset data processing sequence by using the current filter to obtain a filtering fusion result corresponding to the current specified sensor data.
In this step, after obtaining the target sensor data corresponding to the current specified sensor data, the processor may input the target sensor data into the current filter, and perform filtering processing on each type of target sensor data according to a preset data processing sequence by using the current filter, to obtain a filtering fusion result corresponding to the current specified sensor data, and output the filtering fusion result.
In one case, the filter may be a kalman filter, in which a preset positioning fusion algorithm may be preset, and the target sensor data may be fused according to a preset data processing sequence by using the preset positioning fusion algorithm set in the kalman filter, so as to obtain a filtering fusion result. The filtering fusion result may include a vehicle positioning result, i.e., pose information, of the target vehicle at the first moment. The preset positioning fusion algorithm may be any positioning fusion algorithm in the related vehicle positioning, and the embodiment of the invention does not limit the specific type of the preset positioning fusion algorithm.
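The patent leaves the concrete fusion algorithm open; the sketch below only illustrates the deterministic-ordering idea with scalar Kalman-style updates (the type ordering, field names, and noise model are all assumptions):

```python
# Hypothetical fixed processing order for the sensor types
PROCESS_ORDER = {"imu": 0, "wheel_speed": 1, "gnss": 2}

def fuse(target_data, state, variance):
    """Sequential scalar Kalman-style measurement updates applied in a fixed
    preset order, so the fused result is identical on any platform regardless
    of the order in which the data happened to arrive."""
    ordered = sorted(target_data,
                     key=lambda d: (PROCESS_ORDER[d["type"]], d["t"]))
    for d in ordered:
        gain = variance / (variance + d["noise"])    # Kalman gain
        state = state + gain * (d["value"] - state)  # measurement update
        variance = (1.0 - gain) * variance
    return state, variance
```

Because the input is sorted by (type, time) before the updates run, two platforms holding the same buffered data always produce the same fused state.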
By setting the data processing sequence, when processors arranged on different platforms process the target sensor data corresponding to the current specified sensor data, the processing order and the processing procedure are the same, which ensures that the filters arranged in the different platforms produce consistent processing results for the current specified sensor data and realizes a multi-sensor data fusion flow that is independent of platform computing power. In addition, considering that the pose predictor uses the latest filtering fusion result output by the filter each time it performs a prediction, in the embodiment of the present invention the output time of the filter is fixed in advance, that is, the filtering fusion result corresponding to the current specified sensor data is obtained and output at a fixed point, so that when processors arranged on different platforms perform pose prediction for the current specified sensor data, the filtering fusion result input into the pose predictor is guaranteed to be consistent.
It can be understood that, in order to ensure that processors arranged on different platforms can run the fusion flow on the sensor data collected by the at least two types of sensors during the target driving process of the target vehicle and thereby reproduce the positioning result of the target vehicle, that is, in order to ensure that the real-time vehicle positioning result (the real-time pose information) determined by the processor in the vehicle-mounted platform during the target driving process is consistent with the pose information subsequently reproduced for that driving process by a processor arranged in a non-vehicle-mounted platform, the filters corresponding to the processors arranged on the different platforms are the same, and the corresponding pose predictors are the same.
S104: and determining the current pose information of the target vehicle corresponding to the current specified sensor data by using the current pose predictor, the filtering fusion result corresponding to the current specified sensor data, the current specified sensor data and the specified sensor data between the current acquisition time and the first time.
After obtaining the filtering fusion result corresponding to the current specified sensor data, the processor inputs that filtering fusion result, the current specified sensor data, and the specified sensor data between the current acquisition time and the first time into the current pose predictor, and uses the current pose predictor to determine the current pose information of the target vehicle corresponding to the current specified sensor data. The current pose information may include the position information and attitude information of the target vehicle at the acquisition time of the current specified sensor data, that is, at the current acquisition time.
The pose predictor can be preset with a preset pose prediction algorithm, and the current pose information of the target vehicle corresponding to the current specified sensor data can be determined by using the preset pose prediction algorithm set by the pose predictor, the filtering fusion result corresponding to the current specified sensor data, the current specified sensor data and the specified sensor data between the current acquisition time and the first time. The preset pose prediction algorithm may be any pose prediction algorithm in the related vehicle positioning, and the embodiment of the invention does not limit the specific type of the preset pose prediction algorithm. The determining of the current pose information of the target vehicle corresponding to the current specified sensor data by using the preset pose prediction algorithm in the current pose predictor may refer to the related art, and will not be described herein.
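The patent does not fix a prediction algorithm; as one common choice, the prediction step can be sketched as planar dead reckoning from the fused pose at the first time forward to the current acquisition time, using the buffered samples between the two times (the 2-D state and the sample fields are assumptions for illustration):

```python
import math

def predict_pose(fused_pose, fusion_time, imu_samples, current_time):
    """Dead-reckon a planar pose (x, y, heading) from the filter's fused pose
    at the first time up to the current acquisition time, integrating the
    buffered IMU samples that lie between the two times."""
    x, y, heading = fused_pose
    t_prev = fusion_time
    for s in sorted(imu_samples, key=lambda s: s["t"]):
        if not (fusion_time < s["t"] <= current_time):
            continue  # only samples between the first time and now contribute
        dt = s["t"] - t_prev
        heading += s["yaw_rate"] * dt
        x += s["speed"] * math.cos(heading) * dt
        y += s["speed"] * math.sin(heading) * dt
        t_prev = s["t"]
    return (x, y, heading)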
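The patent does not fix a prediction algorithm; as one common choice, the prediction step can be sketched as planar dead reckoning from the fused pose at the first time forward to the current acquisition time, using the buffered samples between the two times (the 2-D state and the sample fields are assumptions for illustration):

```python
import math

def predict_pose(fused_pose, fusion_time, imu_samples, current_time):
    """Dead-reckon a planar pose (x, y, heading) from the filter's fused pose
    at the first time up to the current acquisition time, integrating the
    buffered IMU samples that lie between the two times."""
    x, y, heading = fused_pose
    t_prev = fusion_time
    for s in sorted(imu_samples, key=lambda s: s["t"]):
        if not (fusion_time < s["t"] <= current_time):
            continue  # only samples between the first time and now contribute
        dt = s["t"] - t_prev
        heading += s["yaw_rate"] * dt
        x += s["speed"] * math.cos(heading) * dt
        y += s["speed"] * math.sin(heading) * dt
        t_prev = s["t"]
    return (x, y, heading)
```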
In one implementation, after determining the current pose information of the target vehicle, the processor may output the current pose information to a corresponding pose-using application.
By applying the embodiment of the present invention, the triggering condition of the multi-sensor data fusion flow can be constrained, that is, the fusion flow is triggered after the current specified sensor data collected by the specified sensor is obtained. When the current filter performs filtering on the sensor data, the data are processed according to the preset data processing sequence, which ensures the ordering of the data processing and fixes the time corresponding to the positioning result of the target vehicle in the filtering fusion result corresponding to the current specified sensor data. By constraining the output time of the filter's filtering fusion result, namely the moment at which the filtering fusion result corresponding to the current specified sensor data is output, the consistency of the filter's fusion result can be guaranteed when the multi-sensor data fusion flow of this embodiment runs on platforms with different computing power. Before the current filter performs filtering on the sensor data, the first time corresponding to the current specified sensor data is determined, the target sensor data whose corresponding acquisition time is before the first time and after the second time is obtained from the preset storage space, and the current filter performs filtering on that target sensor data. This guarantees that, on platforms with different computing power, the filtering fusion results output by the filters for the current specified sensor data are identical, that is, for the same specified sensor data the inputs to the current pose predictor are identical, and thus guarantees the consistency between the vehicle positioning result obtained during real vehicle positioning and the vehicle positioning result obtained in off-line platform testing.
This solves the problem that algorithm issues occurring during real vehicle driving cannot be reproduced on an off-line platform, i.e., an off-board platform, and greatly improves the efficiency of reproducing and resolving such issues.
In another embodiment of the present invention, the S102 may include:
judging whether the preset storage space stores sensor data whose corresponding acquisition time is before the first time;
and if it is judged that the preset storage space stores sensor data whose corresponding acquisition time is before the first time, obtaining, from the preset storage space, the target sensor data whose corresponding acquisition time is before the first time and after the second time.
In this embodiment, in the case that the processor is a processor disposed in the vehicle-mounted platform of the target vehicle, after obtaining the first time corresponding to the current specified sensor data, the processor may first judge whether the preset storage space stores sensor data whose corresponding acquisition time is before the first time; if so, it obtains, from the preset storage space, the target sensor data whose corresponding acquisition time is before the first time and after the second time. In another case, if it is judged that the preset storage space does not store sensor data whose corresponding acquisition time is before the first time, the fusion process for the current specified sensor data may be ended, and the processor may subsequently continue to monitor whether new current specified sensor data is obtained.
In another embodiment, if the processor is a processor disposed in the off-board platform, the processor may first determine whether the preset storage space stores sensor data before the first time corresponding to the collection time, and if so, obtain target sensor data before the first time and after the second time corresponding to the collection time from the preset storage space; otherwise, the fusion process for the currently specified sensor data may be ended; the subsequent processor may continue to monitor whether new current designated sensor data is obtained.
In another embodiment of the present invention, the processor is a processor disposed within the vehicle-mounted platform of the target vehicle; after S101, the method may further include:
and storing the first time corresponding to the current specified sensor data in the preset storage space as the fusion time of the filtering fusion result corresponding to the current specified sensor data.
In the case that the processor is arranged in the vehicle-mounted platform of the target vehicle, after calculating the first time corresponding to the current specified sensor data, the processor may store that first time in the preset storage space as the fusion time of the filtering fusion result corresponding to the current specified sensor data. In this way, when the vehicle positioning result of the target vehicle during the target driving process is later reproduced off-line, the target sensor data corresponding to the current specified sensor data can be determined based on this first time and the subsequent fusion process can be carried out, which ensures the success of the off-line reproduction, that is, ensures that the vehicle positioning result (the pose information of the target vehicle) obtained through fusion on the vehicle-mounted platform is consistent with the vehicle positioning result obtained through fusion on the non-vehicle-mounted platform.
In another embodiment of the invention, the designated sensor is an IMU (inertial measurement unit); the current specified sensor data is current IMU data. In the case where the processor is a processor provided in the vehicle-mounted platform of the target vehicle, prior to S101, the method may further include:
a process of obtaining current IMU data, wherein the process may include:
acquiring initial IMU data acquired by an IMU;
converting the initial IMU data into data in a first appointed format to obtain intermediate IMU data corresponding to the initial IMU data;
determining the current IMU data corresponding to the whole (round) time by using the intermediate IMU data corresponding to the previous IMU data collected by the IMU and the intermediate IMU data corresponding to the initial IMU data;
storing the current IMU data and the corresponding acquisition time in a preset storage space;
after S104, the method may further include:
determining a map area corresponding to the current pose information from a target map based on the current pose information as a map area corresponding to the current specified sensor data, wherein the target map comprises map data;
and storing the map area corresponding to the current specified sensor data and its corresponding acquisition time in the preset storage space.
The IMU may include: a gyroscope for acquiring an angular velocity of the target vehicle, an acceleration sensor for acquiring an acceleration of the target vehicle, and the like.
In this embodiment, the designated sensor is an IMU, and correspondingly, the current specified sensor data collected by the designated sensor is the current IMU data. The formats of the IMU data collected by IMUs of different models differ; to facilitate the subsequent multi-sensor data fusion process, in the embodiment of the present invention, after acquiring the IMU data collected by the IMU, the processor first converts it into IMU data in a format that is uniform within the multi-sensor data fusion system, and then executes the subsequent flow. Correspondingly, the processor acquires in real time the initial IMU data collected by the IMU and processes it to obtain IMU data in a format convenient for the subsequent flow, namely the current IMU data. After obtaining the initial IMU data, the processor may convert it into data in the first specified format to obtain the intermediate IMU data corresponding to the initial IMU data. Further, to facilitate subsequent fusion, whole-time alignment is performed on that intermediate IMU data, that is, the current IMU data corresponding to the whole (round) time is determined by using the intermediate IMU data corresponding to the previous IMU data collected by the IMU and the intermediate IMU data corresponding to the initial IMU data, and the current IMU data and its corresponding acquisition time are stored in the preset storage space.
The process of determining the current IMU data corresponding to the whole time by using the intermediate IMU data corresponding to the previous IMU data collected by the IMU and the intermediate IMU data corresponding to the initial IMU data may be: determining the current IMU data corresponding to the whole time from the two pieces of intermediate IMU data by means of an interpolation algorithm. For example, taking a 100 Hz IMU, the time interval between every two frames of IMU data is 10 ms, but the actual times at which the IMU samples may be 1.1234 seconds and 1.1334 seconds. For the convenience of subsequent processing, the IMU data need to be aligned to whole times, that is, the IMU data corresponding to 1.120 seconds and 1.130 seconds are calculated. Correspondingly, the intermediate IMU data corresponding to the initial IMU data collected at 1.1234 seconds and the intermediate IMU data corresponding to the initial IMU data collected at 1.1334 seconds can be used, with an interpolation algorithm, to calculate the IMU data corresponding to 1.130 seconds.
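The whole-time alignment above amounts to linear interpolation between two consecutive raw samples; a minimal sketch (function name and scalar-value simplification are assumptions):

```python
import math

def interpolate_to_whole_time(t0, v0, t1, v1, period=0.010):
    """Linearly interpolate two consecutive raw IMU samples (t0, v0) and
    (t1, v1) onto the whole (grid) time lying between them, e.g. samples at
    1.1234 s and 1.1334 s onto the grid time 1.130 s for a 100 Hz IMU."""
    grid_t = math.ceil(t0 / period) * period  # first grid time at or after t0
    if grid_t > t1:
        return None  # no grid time falls between the two samples
    alpha = (grid_t - t0) / (t1 - t0)
    return grid_t, v0 + alpha * (v1 - v0)
```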
The first specified format may be any format in the related art that facilitates the subsequent fusion process, and the embodiment of the present invention does not limit the specific type of the first specified format. For example: the first specified format may include an estimated speed and an estimated pose of the target vehicle at the current acquisition time calculated from the initial IMU data and speed and pose information of the target vehicle at a time previous to the current acquisition time; or may include a speed variation amount and a pose information variation amount of the target vehicle between the current acquisition time and a time immediately before the current acquisition time. Wherein the current IMU data of the first specified format may be represented as an ImuFrame data frame.
Subsequently, in the embodiment of the present invention, the multi-sensor data fusion system further includes a target map, where the target map is a map corresponding to the scene in which the target vehicle travels and includes map data. After the processor determines the current pose information of the target vehicle corresponding to the current specified sensor data, a map area corresponding to the current pose information can be determined from the target map based on the current pose information and used as the map area corresponding to the current specified sensor data; the map area corresponding to the current specified sensor data is then converted into a second specified format, and the map area in the second specified format and its corresponding acquisition time are stored in the preset storage space. The acquisition time corresponding to the map area may be the acquisition time of the current specified sensor data. In one case, the target map may be a high-precision map. The map area in the second specified format may be represented as a HdMapGeometryFrame data frame. The second specified format may be any map area format in the related art that facilitates the subsequent fusion process, and the embodiment of the present invention is not limited thereto.
The map region corresponding to the current pose information can be a region in a preset range with the current pose information as a center in the target map.
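A region "within a preset range centered on the current pose" can be sketched as a simple radius query over planar map elements (the landmark dict layout is a hypothetical stand-in for the map data):

```python
def map_region(landmarks, pose_xy, radius):
    """Return the map elements whose planar position lies within `radius`
    of the current pose -- the preset range centered on the pose."""
    px, py = pose_xy
    return [lm for lm in landmarks
            if (lm["x"] - px) ** 2 + (lm["y"] - py) ** 2 <= radius ** 2]
```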
In this embodiment, the designated sensor is set as the IMU, and due to the characteristic that the data delay of the IMU is low, the real-time performance of the current pose information of the target vehicle is improved to a certain extent. After the initial IMU data is obtained, the current IMU data corresponding to the whole point moment is determined by utilizing the intermediate IMU data corresponding to the first appointed format and the intermediate IMU data corresponding to the first appointed format, which are acquired by the IMU, so that when the positioning result precision evaluation is carried out with other high-precision integrated navigation equipment, the additional interpolation work is avoided.
In another embodiment of the present invention, if the at least two types of sensors include a wheel speed sensor, the sensor data collected by the at least two types of sensors includes spare wheel speed data collected by the wheel speed sensor. In the case where the processor is a processor disposed within the vehicle-mounted platform of the target vehicle, the method may further include:
a process of obtaining backup wheel speed data collected by a wheel speed sensor, wherein the process may include:
Acquiring initial wheel speed data acquired by a wheel speed sensor;
converting the initial wheel speed data into data in a third appointed format to obtain spare wheel speed data;
and storing the spare wheel speed data and the corresponding acquisition time to a preset storage space.
In this embodiment, the at least two types of sensors may include wheel speed sensors, which may collect the measured wheel speeds of the four wheels of the target vehicle. The formats of the data collected by wheel speed sensors of different models differ; for example, the collected data may be the angular velocity of the wheel, or the linear velocity of the wheel. In order to facilitate the subsequent fusion process, the obtained initial wheel speed data collected by the wheel speed sensor is converted into data in the third specified format to obtain the spare wheel speed data, and the spare wheel speed data and its corresponding acquisition time are stored in the preset storage space. The spare wheel speed data in the third specified format may be represented as an OdoFrame data frame. The third specified format may be any wheel speed data format in the related art that facilitates the subsequent fusion process, and the embodiment of the present invention is not limited thereto.
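The angular-to-linear conversion mentioned above is a single multiplication by the wheel radius; a sketch of producing a unified frame (the dict layout is a hypothetical stand-in for the OdoFrame):

```python
def to_odo_frame(t, angular_velocities_rad_s, wheel_radius_m):
    """Convert per-wheel angular velocities (rad/s) into linear wheel speeds
    (m/s) in a unified format, so sensors of different models look alike to
    the fusion flow."""
    return {"t": t,
            "wheel_speeds_mps": [w * wheel_radius_m
                                 for w in angular_velocities_rad_s]}
```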
In another embodiment of the present invention, if the at least two types of sensors include an inertial navigation unit, the sensor data collected by the at least two types of sensors includes standby inertial navigation data collected by the inertial navigation unit. In the case where the processor is a processor disposed within the vehicle-mounted platform of the target vehicle, the method may further include:
A process of obtaining backup inertial navigation data acquired by an inertial navigation unit, wherein the process may include:
initial inertial navigation data acquired by an inertial navigation unit are obtained;
converting the initial inertial navigation data into data in a fourth appointed format to obtain standby inertial navigation data;
and storing the standby inertial navigation data and the corresponding acquisition time to a preset storage space.
The formats of the inertial navigation data collected by different inertial navigation units differ; therefore, after obtaining the initial inertial navigation data collected by the inertial navigation unit, the processor first converts it into data in the fourth specified format to obtain the standby inertial navigation data, and then stores the standby inertial navigation data and its corresponding acquisition time in the preset storage space. For example, in one case, the inertial navigation unit is a GNSS unit, and the initial inertial navigation data it collects may include position information and velocity information, usually in the form of NMEA sentences or binary sentences with a higher compression rate. To facilitate the subsequent fusion process, the position information and velocity information may be extracted from the initial inertial navigation data into the fourth specified format and stored in the preset storage space. The standby inertial navigation data in the fourth specified format may be represented as a GnssFrame data frame.
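As an illustration of extracting position and velocity from an NMEA sentence, the sketch below reads the standard fields of an RMC sentence into a unified dict; it is a simplified example (no checksum or validity handling), and the output layout is a hypothetical stand-in for the GnssFrame:

```python
KNOTS_TO_MPS = 0.514444  # 1 knot in metres per second

def parse_rmc(sentence):
    """Extract latitude/longitude (decimal degrees) and speed (m/s) from a
    standard NMEA RMC sentence."""
    f = sentence.split(",")

    def dm_to_deg(v):  # ddmm.mmmm -> decimal degrees
        v = float(v)
        deg = int(v // 100)
        return deg + (v - deg * 100) / 60.0

    lat = dm_to_deg(f[3]) * (1 if f[4] == "N" else -1)
    lon = dm_to_deg(f[5]) * (1 if f[6] == "E" else -1)
    speed_mps = float(f[7]) * KNOTS_TO_MPS
    return {"lat": lat, "lon": lon, "speed_mps": speed_mps}
```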
In another embodiment of the present invention, if the at least two types of sensors include an image acquisition unit, the sensor data collected by the at least two types of sensors includes standby image data collected by the image acquisition unit. In the case where the processor is a processor disposed within the vehicle-mounted platform of the target vehicle, the method may further include:
a process of obtaining standby image data acquired by an image acquisition unit, wherein the process may include:
obtaining an image acquired by an image acquisition unit;
detecting the image by utilizing a pre-trained target detection model to obtain perception data corresponding to the image;
converting the perception data corresponding to the image into data in a fifth appointed format to obtain middle perception data;
storing the middle perception data and the corresponding acquisition time to a preset storage space;
determining map data matched with the intermediate perception data from a target map based on the intermediate perception data and pose information of a target vehicle corresponding to the image, wherein the target map comprises the map data;
converting the map data matched with the intermediate perception data into a sixth specified format and storing it in the preset storage space;
extracting characteristic points of the image, and determining characteristic point information in the image;
Encoding the characteristic point information in the image to obtain an image containing the characteristic point information and an encoding result;
and converting the image containing the feature point information and the encoding result into a seventh specified format and storing it, together with the corresponding acquisition time, in the preset storage space.
In this embodiment, the at least two types of sensors may include an image acquisition unit, and correspondingly, sensor data acquired by the at least two types of sensors includes: and the image acquisition unit acquires standby image data. After the processor obtains the image acquired by the image acquisition unit, on one hand, the image can be converted into a preset image format so as to be input into a pre-trained target detection model, and the image is detected by utilizing the pre-trained target detection model to obtain perception data corresponding to the image; the pre-trained target detection model is a neural network model obtained by training based on a sample image marked with a target and marking information thereof, wherein the target can comprise traffic markers such as lane lines, parking spaces, lamp poles, traffic signs and the like, and the marking information can comprise position information of the target in the corresponding sample image. The specific training process may refer to the training process of the model in the related art, and will not be described herein.
The perception data corresponding to the image may include the positions and types of the targets contained in the image, such as traffic markers (lane lines, parking spaces, lamp posts, and/or traffic signs) and their positions. The perception data corresponding to the image is converted into data in the fifth specified format to obtain the intermediate perception data, and the intermediate perception data in the fifth specified format and its corresponding acquisition time are stored in the preset storage space. The acquisition time corresponding to the intermediate perception data in the fifth specified format is the acquisition time of the corresponding image. The intermediate perception data in the fifth specified format may be represented as a PerceptionFrame data frame.
Determining map data matched with each middle perception data from a target map based on the middle perception data and pose information of the target vehicle corresponding to the image; and converting map data matched with each middle perception data in the image into a sixth appointed format, and storing the map data and the corresponding acquisition time thereof into a preset storage space. And the acquisition time corresponding to the map data matched with each middle perception data in the image in the sixth appointed format is the acquisition time of the image. The map data matched with each intermediate sensing data in the image in the sixth specified format may be represented as a SemanticMatchFrame data frame, where the SemanticMatchFrame data frame includes intermediate sensing data and matched map data thereof in a corresponding relationship.
In the case where the IMU is the specified sensor, the collection frequency of IMU data is higher than that of images; by the time a certain frame of image has been collected, the pose information of the target vehicle corresponding to that image has already been stored in the preset storage space, and the processor can directly read, from the preset storage space, the pose information of the target vehicle whose corresponding collection time is closest to the collection time of the image, and use it as the pose information of the target vehicle corresponding to the image.
On the other hand, feature points of the image are extracted by using a preset feature point extraction algorithm, and the feature point information in the image is determined, where the feature points may include corner points in the image and, correspondingly, the feature point information may include the position information of the corner points in the image. The feature point information in the image is then encoded to obtain the image containing the feature point information and the encoding result; during encoding, it is ensured that feature point information with the same code on different images corresponds to the same object in the actual scene. The image containing the feature point information and the encoding result is converted into the seventh specified format and stored, together with the corresponding acquisition time, in the preset storage space. The acquisition time corresponding to the image containing the feature point information and the encoding result in the seventh specified format is the acquisition time of the image. The image containing the feature point information and the encoding result in the seventh specified format may be represented as a featurefile data frame.
In one implementation, when the processor stores the sensor data collected by the at least two types of sensors into the preset storage space, it may store the sensor data in order of the acquisition time corresponding to each piece of sensor data, from earliest to latest or from latest to earliest.
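Keeping the storage ordered by acquisition time can be sketched with a sorted insert (the class is a toy stand-in for the preset storage space, not the patent's implementation):

```python
import bisect

class TimeOrderedStore:
    """Toy preset storage space that keeps records ordered by acquisition
    time, so window queries like (second_time, first_time) are cheap."""
    def __init__(self):
        self._times = []
        self._records = []

    def store(self, acquisition_time, record):
        i = bisect.bisect(self._times, acquisition_time)  # insertion point
        self._times.insert(i, acquisition_time)
        self._records.insert(i, record)

    def times(self):
        return list(self._times)
```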
Corresponding to the above method embodiment, an embodiment of the present invention provides a multi-sensor data fusion apparatus, which is applied to a processor of a multi-sensor data fusion system, where the system further includes at least two types of sensors and a preset storage space; each sensor is disposed in the same target vehicle and configured to collect corresponding sensor data; the preset storage space is configured to store the sensor data collected by the at least two types of sensors. As shown in fig. 2, the apparatus includes:
a first obtaining module 210, configured to obtain, after determining that current specified sensor data acquired by a specified sensor is obtained, a first time corresponding to the current specified sensor data, where a difference between the current acquisition time corresponding to the current specified sensor data and the first time is a preset time difference;
a second obtaining module 220, configured to obtain, from the preset storage space, target sensor data whose corresponding acquisition time is before the first time and after a second time, where a difference between the acquisition time corresponding to the specified sensor data preceding the current specified sensor data and the second time is the preset time difference;
The filtering module 230 is configured to perform filtering processing on the target sensor data according to a preset data processing sequence by using a current filter to obtain a filtering fusion result corresponding to the current specified sensor data;
the determining module 240 is configured to determine current pose information of the target vehicle corresponding to the current specified sensor data by using the current pose predictor, the filtering fusion result corresponding to the current specified sensor data, and the specified sensor data between the current acquisition time and the first time.
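The interaction of the four modules above can be sketched as a single trigger function. All callables, their signatures, and the half-open window convention are assumptions for illustration only, not the patent's interfaces:

```python
def fuse_on_trigger(spec_time, preset_dt, prev_spec_time, fetch_window,
                    filter_fn, pose_predictor, fetch_spec_since):
    """One trigger of the fusion flow, fired when specified sensor data arrives."""
    first_time = spec_time - preset_dt        # first obtaining module
    second_time = prev_spec_time - preset_dt  # window start from previous trigger
    # second obtaining module: target data with second_time < t <= first_time,
    # sorted so the filter consumes it in the preset data processing sequence
    target = sorted(fetch_window(second_time, first_time))
    fusion = filter_fn(target)                # filtering module: fusion result
    # determining module: the pose predictor propagates the fused state using
    # the specified-sensor samples acquired between first_time and spec_time
    return pose_predictor(fusion, fetch_spec_since(first_time, spec_time))
```

Because `first_time` depends only on timestamps and the fixed `preset_dt`, the same inputs yield the same window and the same fusion result regardless of how fast the platform runs, which is the platform-independence property argued below.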
By applying this embodiment of the invention, the triggering condition of the multi-sensor data fusion process can be constrained: the fusion process is triggered only after the current specified sensor data acquired by the specified sensor is obtained. When the current filter performs filtering processing on the sensor data, the data are processed according to a preset data processing sequence, which guarantees both the ordering of the data processing and that the time corresponding to the positioning result information of the target vehicle in the filtering fusion result corresponding to the current specified sensor data is fixed. By constraining the output time of the filter's filtering fusion result, namely outputting the filtering fusion result corresponding to the current specified sensor data, a multi-sensor data fusion process independent of platform performance can be realized. Before the current filter performs filtering processing on the sensor data, the first time corresponding to the current specified sensor data is determined, the target sensor data whose corresponding acquisition time is before the first time and after the second time is obtained from the preset storage space, and the current filter then performs filtering processing on that target sensor data. This guarantees that the filtering fusion results output by the filter for the current specified sensor data are identical on platforms with different computing power; that is, for the same specified sensor data, the input to the current pose predictor is identical on platforms with different computing power, which in turn guarantees consistency between the vehicle positioning results obtained during real-vehicle positioning and those obtained in off-line platform testing.
The method thereby solves the problem that algorithm issues occurring during real-vehicle operation cannot be reproduced on an off-line platform, namely an off-vehicle platform, and greatly improves the efficiency of reproducing and resolving such issues.
In another embodiment of the present invention, the second obtaining module 220 is specifically configured to judge whether the preset storage space stores sensor data whose corresponding acquisition time is before the first time;
and if it is judged that the preset storage space stores sensor data whose corresponding acquisition time is before the first time, obtain, from the preset storage space, the target sensor data whose corresponding acquisition time is before the first time and after the second time.
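The availability check in this embodiment can be sketched as follows; the function name is assumed, and returning `None` to signal "wait for more sensor data" is an illustrative choice:

```python
def fetch_target_data(times, data, first_time, second_time):
    """Return the fusion window, or None if no data precedes first_time yet."""
    if not any(t < first_time for t in times):   # nothing old enough: defer fusion
        return None
    # target sensor data: acquisition times after second_time, up to first_time
    return [d for t, d in zip(times, data) if second_time < t <= first_time]
```

Deferring when no sufficiently old data exists prevents the filter from running on an incomplete window before slow sensors have delivered their frames.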
In another embodiment of the present invention, the processor is a processor disposed within a vehicle-mounted platform of the target vehicle;
the apparatus further comprises: a storage module configured to store, after the current filter performs filtering processing on the target sensor data to obtain the filtering fusion result, the first time in the preset storage space in correspondence with the current specified sensor data, as the fusion time of the filtering fusion result corresponding to the current specified sensor data.
In another embodiment of the present invention, the processor is a processor disposed on an off-board device;
the first obtaining module 210 is specifically configured to obtain, from the preset storage space, a first time corresponding to the current specified sensor data.
In another embodiment of the present invention, the processor is a processor disposed within a vehicle-mounted platform of the target vehicle;
the first obtaining module 210 is specifically configured to obtain a preset time difference;
and calculating the time corresponding to the difference between the current acquisition time corresponding to the current specified sensor data and the preset time difference, as the first time corresponding to the current specified sensor data.
Corresponding to the above method embodiment, an embodiment of the present invention provides a multi-sensor data fusion system. As shown in fig. 3, the system includes a processor 310, at least two types of sensors 320, and a preset storage space 330; each sensor 320 is disposed in the same target vehicle and configured to collect corresponding sensor data; the preset storage space 330 is configured to store the sensor data acquired by the at least two types of sensors. The processor 310 is configured to obtain, after determining that current specified sensor data acquired by a specified sensor is obtained, a first time corresponding to the current specified sensor data, where a difference between the current acquisition time corresponding to the current specified sensor data and the first time is a preset time difference;
obtaining, from the preset storage space, target sensor data whose corresponding acquisition time is before the first time and after a second time, wherein a difference between the acquisition time corresponding to the specified sensor data preceding the current specified sensor data and the second time is the preset time difference;
filtering the target sensor data according to a preset data processing sequence by using a current filter to obtain a filtering fusion result corresponding to the current specified sensor data;
and determining the current pose information of the target vehicle corresponding to the current specified sensor data by using the current pose predictor, the filtering fusion result corresponding to the current specified sensor data, the current specified sensor data, and the specified sensor data between the current acquisition time and the first time.
By applying this embodiment of the invention, the triggering condition of the multi-sensor data fusion process can be constrained: the fusion process is triggered only after the current specified sensor data acquired by the specified sensor is obtained. When the current filter performs filtering processing on the sensor data, the data are processed according to a preset data processing sequence, which guarantees both the ordering of the data processing and that the time corresponding to the positioning result information of the target vehicle in the filtering fusion result corresponding to the current specified sensor data is fixed. By constraining the output time of the filter's filtering fusion result, namely outputting the filtering fusion result corresponding to the current specified sensor data, a multi-sensor data fusion process independent of platform performance can be realized. Before the current filter performs filtering processing on the sensor data, the first time corresponding to the current specified sensor data is determined, the target sensor data whose corresponding acquisition time is before the first time and after the second time is obtained from the preset storage space, and the current filter then performs filtering processing on that target sensor data. This guarantees that the filtering fusion results output by the filter for the current specified sensor data are identical on platforms with different computing power; that is, for the same specified sensor data, the input to the current pose predictor is identical on platforms with different computing power, which in turn guarantees consistency between the vehicle positioning results obtained during real-vehicle positioning and those obtained in off-line platform testing.
The system thereby solves the problem that algorithm issues occurring during real-vehicle operation cannot be reproduced on an off-line platform, namely an off-vehicle platform, and greatly improves the efficiency of reproducing and resolving such issues.
In another embodiment of the present invention, the processor 310 is specifically configured to judge whether the preset storage space stores sensor data whose corresponding acquisition time is before the first time; and if it is judged that the preset storage space stores sensor data whose corresponding acquisition time is before the first time, obtain, from the preset storage space, the target sensor data whose corresponding acquisition time is before the first time and after the second time.
In another embodiment of the present invention, the processor 310 is a processor disposed within an on-board platform of the target vehicle; the processor 310 is further configured to store, after obtaining the first time corresponding to the current specified sensor data, the first time in the preset storage space in correspondence with the current specified sensor data, as the fusion time of the filtering fusion result corresponding to the current specified sensor data.
In another embodiment of the present invention, the processor 310 is a processor disposed on an off-board device;
the processor 310 is specifically configured to obtain, from the preset storage space, a first time corresponding to the current specified sensor data.
In another embodiment of the present invention, the processor 310 is a processor disposed within an on-board platform of the target vehicle; the processor 310 is specifically configured to obtain a preset time difference;
and calculating the time corresponding to the difference between the current acquisition time corresponding to the current specified sensor data and the preset time difference, as the first time corresponding to the current specified sensor data.
The device and system embodiments correspond to the method embodiments and have the same technical effects as the method embodiments; for specific details, reference is made to the description of the method embodiments, which is not repeated here.
Those of ordinary skill in the art will appreciate that: the drawing is a schematic diagram of one embodiment and the modules or flows in the drawing are not necessarily required to practice the invention.
Those of ordinary skill in the art will appreciate that: the modules in the apparatus of the embodiments may be distributed in the apparatus of the embodiments according to the description of the embodiments, or, with corresponding changes, may be located in one or more apparatuses different from those of the present embodiments. The modules of the above embodiments may be combined into one module, or may be further split into a plurality of sub-modules.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A fusion method of multi-sensor data, characterized by being applied to a processor of a fusion system of multi-sensor data, wherein the system further comprises at least two types of sensors and a preset storage space; each sensor is disposed in the same target vehicle and configured to collect corresponding sensor data; the preset storage space is configured to store sensor data acquired by the at least two types of sensors, and the method comprises:
after determining that current specified sensor data acquired by a specified sensor is obtained, obtaining a first time corresponding to the current specified sensor data, wherein a difference between the current acquisition time corresponding to the current specified sensor data and the first time is a preset time difference, and the preset time difference is greater than or equal to a transmission delay of a target sensor among the at least two types of sensors included in the multi-sensor data fusion system, wherein the target sensor is: the sensor, among the at least two types of sensors, that requires the longest time to transmit its acquired data to the preset storage space;
obtaining, from the preset storage space, target sensor data whose corresponding acquisition time is before the first time and after a second time, wherein a difference between the acquisition time corresponding to the specified sensor data preceding the current specified sensor data and the second time is the preset time difference;
filtering the target sensor data according to a preset data processing sequence by using a current filter to obtain a filtering fusion result corresponding to the current specified sensor data;
and determining the current pose information of the target vehicle corresponding to the current specified sensor data by using the current pose predictor, the filtering fusion result corresponding to the current specified sensor data, the current specified sensor data, and the specified sensor data between the current acquisition time and the first time.
2. The method of claim 1, wherein the step of obtaining target sensor data from the preset storage space for the corresponding acquisition time before the first time and after the second time comprises:
judging whether the preset storage space stores sensor data whose corresponding acquisition time is before the first time;
and if it is judged that the preset storage space stores sensor data whose corresponding acquisition time is before the first time, obtaining, from the preset storage space, the target sensor data whose corresponding acquisition time is before the first time and after the second time.
3. The method of claim 1, wherein the processor is a processor disposed within an on-board platform of the target vehicle;
after the step of obtaining the first time corresponding to the current specified sensor data, the method further includes:
and storing the first time in the preset storage space in correspondence with the current specified sensor data, as the fusion time of the filtering fusion result corresponding to the current specified sensor data.
4. The method of claim 1, wherein the processor is a processor disposed on an off-board device;
the step of obtaining the first moment corresponding to the current specified sensor data comprises the following steps:
and obtaining a first moment corresponding to the current specified sensor data from the preset storage space.
5. The method of claim 1, wherein the processor is a processor disposed within an on-board platform of the target vehicle;
The step of obtaining the first moment corresponding to the current specified sensor data comprises the following steps:
obtaining a preset time difference;
and calculating the time corresponding to the difference between the current acquisition time corresponding to the current specified sensor data and the preset time difference, as the first time corresponding to the current specified sensor data.
6. A multi-sensor data fusion device, characterized by being applied to a processor of a multi-sensor data fusion system, wherein the system further comprises at least two types of sensors and a preset storage space; each sensor is disposed in the same target vehicle and configured to collect corresponding sensor data; the preset storage space is configured to store sensor data acquired by the at least two types of sensors, and the device comprises:
a first obtaining module, configured to obtain, after determining that current specified sensor data acquired by a specified sensor is obtained, a first time corresponding to the current specified sensor data, wherein a difference between the current acquisition time corresponding to the current specified sensor data and the first time is a preset time difference, and the preset time difference is greater than or equal to a transmission delay of a target sensor among the at least two types of sensors included in the multi-sensor data fusion system, wherein the target sensor is: the sensor, among the at least two types of sensors, that requires the longest time to transmit its acquired data to the preset storage space;
a second obtaining module, configured to obtain, from the preset storage space, target sensor data whose corresponding acquisition time is before the first time and after a second time, wherein a difference between the acquisition time corresponding to the specified sensor data preceding the current specified sensor data and the second time is the preset time difference;
the filtering module is configured to perform filtering processing on the target sensor data according to a preset data processing sequence by using a current filter to obtain a filtering fusion result corresponding to the current specified sensor data;
the determining module is configured to determine current pose information of the target vehicle corresponding to the current specified sensor data by using the current pose predictor, a filtering fusion result corresponding to the current specified sensor data, the current specified sensor data and the specified sensor data between the current acquisition time and the first time.
7. The apparatus of claim 6, wherein the second obtaining module is specifically configured to judge whether the preset storage space stores sensor data whose corresponding acquisition time is before the first time;
and if it is judged that the preset storage space stores sensor data whose corresponding acquisition time is before the first time, obtain, from the preset storage space, the target sensor data whose corresponding acquisition time is before the first time and after the second time.
8. The apparatus of claim 6, wherein the processor is a processor disposed within an on-board platform of the target vehicle;
the apparatus further comprises:
a storage module configured to store, after the current filter performs filtering processing on the target sensor data to obtain the filtering fusion result, the first time in the preset storage space in correspondence with the current specified sensor data, as the fusion time of the filtering fusion result corresponding to the current specified sensor data.
9. The apparatus of claim 6, wherein the processor is a processor disposed on an off-board device;
the first obtaining module is specifically configured to obtain a first moment corresponding to the current specified sensor data from the preset storage space.
10. A multi-sensor data fusion system, characterized by comprising a processor, at least two types of sensors, and a preset storage space; each sensor is disposed in the same target vehicle and configured to collect corresponding sensor data; the preset storage space is configured to store sensor data acquired by the at least two types of sensors, and the processor is configured to obtain, after determining that current specified sensor data acquired by a specified sensor is obtained, a first time corresponding to the current specified sensor data, wherein a difference between the current acquisition time corresponding to the current specified sensor data and the first time is a preset time difference;
obtaining, from the preset storage space, target sensor data whose corresponding acquisition time is before the first time and after a second time, wherein a difference between the acquisition time corresponding to the specified sensor data preceding the current specified sensor data and the second time is the preset time difference, and the preset time difference is greater than or equal to a transmission delay of a target sensor among the at least two types of sensors included in the multi-sensor data fusion system, wherein the target sensor is: the sensor, among the at least two types of sensors, that requires the longest time to transmit its acquired data to the preset storage space;
filtering the target sensor data according to a preset data processing sequence by using a current filter to obtain a filtering fusion result corresponding to the current specified sensor data;
and determining the current pose information of the target vehicle corresponding to the current specified sensor data by using the current pose predictor, the filtering fusion result corresponding to the current specified sensor data, the current specified sensor data, and the specified sensor data between the current acquisition time and the first time.
CN201911041828.0A 2019-10-30 2019-10-30 Fusion method, device and system of multi-sensor data Active CN112817301B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911041828.0A CN112817301B (en) 2019-10-30 2019-10-30 Fusion method, device and system of multi-sensor data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911041828.0A CN112817301B (en) 2019-10-30 2019-10-30 Fusion method, device and system of multi-sensor data

Publications (2)

Publication Number Publication Date
CN112817301A CN112817301A (en) 2021-05-18
CN112817301B true CN112817301B (en) 2023-05-16

Family

ID=75851371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911041828.0A Active CN112817301B (en) 2019-10-30 2019-10-30 Fusion method, device and system of multi-sensor data

Country Status (1)

Country Link
CN (1) CN112817301B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112859659B (en) * 2019-11-28 2022-05-13 魔门塔(苏州)科技有限公司 Method, device and system for acquiring multi-sensor data
CN113327344B (en) * 2021-05-27 2023-03-21 北京百度网讯科技有限公司 Fusion positioning method, device, equipment, storage medium and program product

Citations (1)

Publication number Priority date Publication date Assignee Title
WO2018077176A1 (en) * 2016-10-26 2018-05-03 北京小鸟看看科技有限公司 Wearable device and method for determining user displacement in wearable device

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
WO2013037850A1 (en) * 2011-09-12 2013-03-21 Continental Teves Ag & Co. Ohg Time-corrected sensor system
DE102014211176A1 (en) * 2014-06-11 2015-12-17 Continental Teves Ag & Co. Ohg Method and system for correcting measurement data and / or navigation data of a sensor-based system
CN105682222B (en) * 2016-03-01 2019-02-19 西安电子科技大学 A kind of vehicle location positioning information fusion method based on vehicle self-organizing network
EP3236210B1 (en) * 2016-04-20 2020-08-05 Honda Research Institute Europe GmbH Navigation system and method for error correction
CN106840179B (en) * 2017-03-07 2019-12-10 中国科学院合肥物质科学研究院 Intelligent vehicle positioning method based on multi-sensor information fusion
CN108573271B (en) * 2017-12-15 2022-06-28 上海蔚来汽车有限公司 Optimization method and device for multi-sensor target information fusion, computer equipment and recording medium
CN108535755B (en) * 2018-01-17 2021-11-19 南昌大学 GNSS/IMU vehicle-mounted real-time integrated navigation method based on MEMS
CN110231028B (en) * 2018-03-05 2021-11-30 北京京东乾石科技有限公司 Aircraft navigation method, device and system
CN109059927A (en) * 2018-08-21 2018-12-21 南京邮电大学 The mobile robot slam of multisensor builds drawing method and system under complex environment
CN109947103B (en) * 2019-03-18 2022-06-28 深圳一清创新科技有限公司 Unmanned control method, device and system and bearing equipment

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
WO2018077176A1 (en) * 2016-10-26 2018-05-03 北京小鸟看看科技有限公司 Wearable device and method for determining user displacement in wearable device

Also Published As

Publication number Publication date
CN112817301A (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN108571974B (en) Vehicle positioning using a camera
US10077054B2 (en) Tracking objects within a dynamic environment for improved localization
CN112116654B (en) Vehicle pose determining method and device and electronic equipment
JP6252252B2 (en) Automatic driving device
KR101704405B1 (en) System and method for lane recognition
JP2005098853A (en) Map data updating method and map data updating apparatus
CN111812698A (en) Positioning method, device, medium and equipment
CN112817301B (en) Fusion method, device and system of multi-sensor data
US11002553B2 (en) Method and device for executing at least one measure for increasing the safety of a vehicle
US20200271453A1 (en) Lane marking localization and fusion
CN112747754A (en) Fusion method, device and system of multi-sensor data
CN111401255B (en) Method and device for identifying bifurcation junctions
CN112197780B (en) Path planning method and device and electronic equipment
CN110658542B (en) Method, device, equipment and storage medium for positioning and identifying automatic driving automobile
CN111832376A (en) Vehicle reverse running detection method and device, electronic equipment and storage medium
CN111521192A (en) Positioning method, navigation information display method, positioning system and electronic equipment
CN113191030A (en) Automatic driving test scene construction method and device
CN110702135A (en) Navigation method and device for vehicle, automobile and storage medium
CN113743312B (en) Image correction method and device based on vehicle-mounted terminal
CN112824835A (en) Vehicle positioning method, device and computer readable storage medium
KR20210029323A (en) Apparatus and method for improving cognitive performance of sensor fusion using precise map
CN114056337B (en) Method, device and computer program product for predicting vehicle running behavior
CN105956519B (en) The method, apparatus and terminal device of associated storage visual prompts auxiliary information
CN114166234A (en) System, method, device, processor and computer storage medium for selecting navigation route and road damage identification early warning based on road damage measurement
CN113345251A (en) Vehicle reverse running detection method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220308

Address after: 100083 unit 501, block AB, Dongsheng building, No. 8, Zhongguancun East Road, Haidian District, Beijing

Applicant after: BEIJING MOMENTA TECHNOLOGY Co.,Ltd.

Address before: 100083 room 28, 4 / F, block a, Dongsheng building, 8 Zhongguancun East Road, Haidian District, Beijing

Applicant before: BEIJING CHUSUDU TECHNOLOGY Co.,Ltd.

GR01 Patent grant