CN116363173A - Method, device, equipment and storage medium for processing vehicle driving data - Google Patents

Method, device, equipment and storage medium for processing vehicle driving data

Info

Publication number
CN116363173A
CN116363173A (application CN202310344136.3A)
Authority
CN
China
Prior art keywords
vehicle driving
information
vehicle
pose information
driving data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310344136.3A
Other languages
Chinese (zh)
Inventor
杜鹃
韩旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Weride Technology Co Ltd
Original Assignee
Guangzhou Weride Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Weride Technology Co Ltd filed Critical Guangzhou Weride Technology Co Ltd
Priority to CN202310344136.3A priority Critical patent/CN116363173A/en
Publication of CN116363173A publication Critical patent/CN116363173A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/11Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Optimization (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Operations Research (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of data processing for automatic driving vehicles, and discloses a vehicle driving data processing method, apparatus, device and storage medium for suppressing pose drift of an unmanned vehicle without reducing the fusion output frequency of the unmanned vehicle positioning system. The vehicle driving data processing method comprises the following steps: acquiring a vehicle driving data set and an image to be processed of a target vehicle, and extracting information from the vehicle driving data set to obtain vehicle driving motion information; optimizing the image to be processed using the vehicle driving motion information and extracting information to obtain a candidate camera pose information set; and optimizing the vehicle driving pose information in the vehicle driving data set using the candidate camera pose information set to obtain target pose information.

Description

Method, device, equipment and storage medium for processing vehicle driving data
Technical Field
The present invention relates to the field of automatic driving vehicle data processing technologies, and in particular, to a method, an apparatus, a device, and a storage medium for processing vehicle driving data.
Background
At present, the main technical approaches to high-precision vehicle positioning are the Global Navigation Satellite System (GNSS), the Inertial Navigation System (INS), vehicle models built from on-board sensors, high-definition maps (HD Map), millimeter-wave radar, laser radar (LiDAR), and the like.
In existing unmanned vehicles, the position information and attitude information in the world coordinate system are obtained by fusing GPS data and radar data with the measurements of an Inertial Measurement Unit (IMU) in an Inertial Navigation System (INS).
However, in some scenarios, such as a traffic jam inside a tunnel, the GPS data and the radar data may fail briefly. The position information and attitude information of the unmanned vehicle then have to be obtained from the Inertial Measurement Unit (IMU) alone, which causes the vehicle pose to drift. As a result, the pose drift of the unmanned vehicle cannot be suppressed without reducing the fusion output frequency of the unmanned vehicle positioning system.
Disclosure of Invention
The invention provides a method, an apparatus, a device and a storage medium for processing vehicle driving data, which suppress pose drift of an unmanned vehicle without reducing the fusion output frequency of the unmanned vehicle positioning system.
The first aspect of the present invention provides a method for processing vehicle driving data, including: acquiring a vehicle driving data set and an image to be processed of a target vehicle, and extracting information from the vehicle driving data set to obtain vehicle driving motion information; optimizing the image to be processed through the vehicle driving motion information and extracting information to obtain a candidate camera pose information set; and optimizing the vehicle driving pose information in the vehicle driving data set through the candidate camera pose information set to obtain target pose information.
In a possible implementation manner, the optimizing processing and information extracting are performed on the image to be processed through the driving motion information of the vehicle to obtain a candidate camera pose information set, which includes: calculating the driving motion information of the vehicle according to the pose information of the image to be processed to obtain an initial camera pose; and calculating the initial camera pose through a first preset constraint equation to obtain a candidate camera pose information set, wherein the candidate camera pose information set comprises first candidate camera pose information of a preset vehicle-mounted camera in a current frame and second candidate camera pose information of the preset vehicle-mounted camera in a previous frame.
In a possible implementation manner, the calculating the driving motion information of the vehicle according to the pose information of the image to be processed to obtain an initial camera pose includes: acquiring historical camera pose information of a previous frame corresponding to the vehicle driving motion information in the image to be processed and a target time difference value, wherein the target time difference value is a time difference value between a current frame corresponding to the vehicle driving motion information in the image to be processed and the previous frame; and performing integral operation based on the historical camera pose information, the target time difference value and the vehicle driving motion information to obtain an initial camera pose.
In a possible implementation manner, the optimizing the vehicle driving pose information in the vehicle driving data set through the candidate camera pose information set to obtain target pose information includes: and optimizing the vehicle driving pose information in the vehicle driving data set through the first candidate camera pose information of the vehicle-mounted camera in the current frame and the second candidate camera pose information of the previous frame preset in the candidate camera pose information set, so as to obtain target pose information.
In a possible implementation manner, the optimizing the vehicle driving pose information in the vehicle driving data set through the first candidate camera pose information of the preset vehicle-mounted camera in the current frame and the second candidate camera pose information in the previous frame in the candidate camera pose information set to obtain target pose information includes: operating on the first candidate camera pose information of the preset vehicle-mounted camera in the current frame and the second candidate camera pose information in the previous frame from the candidate camera pose information set to obtain relative displacement information and relative rotation information between the current frame and the previous frame; determining the relative displacement information and the relative rotation information as constraint item information; and calculating based on the constraint item information, a second preset constraint equation and the vehicle driving pose information in the vehicle driving data set to obtain target pose information.
In a possible implementation manner, the acquiring a vehicle driving data set and a to-be-processed image of the target vehicle, and extracting information from the vehicle driving data set to obtain vehicle driving motion information, includes: acquiring a sensor data set of a target vehicle, and performing graph optimization processing on the sensor data set to obtain a vehicle driving data set, wherein the graph optimization frequency corresponding to the vehicle driving data set is a first preset frequency; acquiring an image to be processed shot by a preset vehicle-mounted camera and a shooting time stamp corresponding to the image to be processed, wherein the updating frequency corresponding to the image to be processed is a second preset frequency, and the first preset frequency is larger than the second preset frequency; and extracting information from the vehicle driving data set based on the shooting time stamp to obtain vehicle driving motion information.
In a possible implementation manner, the extracting information from the vehicle driving data set based on the capturing timestamp to obtain vehicle driving motion information includes: obtaining a graph optimization time stamp of the vehicle driving data set; if the shooting time stamp is the same as the graph optimization time stamp, extracting information from the sensor data set in the vehicle driving data set corresponding to the graph optimization time stamp to obtain vehicle driving motion information, wherein the vehicle driving motion information comprises speed information and angular speed information; and if the shooting time stamp is different from the graph optimization time stamp, predicting the driving motion data through the sensor data set in the vehicle driving data set corresponding to the graph optimization time stamp to obtain the vehicle driving motion information.
A second aspect of the present invention provides a processing apparatus for vehicle driving data, comprising: the acquisition and extraction module is used for acquiring a vehicle driving data set and an image to be processed of a target vehicle, and extracting information from the vehicle driving data set to obtain vehicle driving motion information; the first processing module is used for carrying out optimization processing on the image to be processed through the vehicle driving motion information and extracting information to obtain a candidate camera pose information set; and the second processing module is used for optimizing the vehicle driving pose information in the vehicle driving data set through the candidate camera pose information set to obtain target pose information.
In a possible embodiment, the first processing module includes: the first operation unit is used for operating the driving motion information of the vehicle through the pose information of the image to be processed to obtain an initial camera pose; the second operation unit is used for operating the initial camera pose through a first preset constraint equation to obtain a candidate camera pose information set, wherein the candidate camera pose information set comprises first candidate camera pose information of a preset vehicle-mounted camera in a current frame and second candidate camera pose information of the preset vehicle-mounted camera in a previous frame.
In a possible embodiment, the first arithmetic unit is specifically configured to: acquiring historical camera pose information of a previous frame corresponding to the vehicle driving motion information in the image to be processed and a target time difference value, wherein the target time difference value is a time difference value between a current frame corresponding to the vehicle driving motion information in the image to be processed and the previous frame; and performing integral operation based on the historical camera pose information, the target time difference value and the vehicle driving motion information to obtain an initial camera pose.
In a possible embodiment, the second processing module includes: and the optimization processing unit is used for optimizing the vehicle driving pose information in the vehicle driving data set through the first candidate camera pose information of the vehicle-mounted camera in the current frame and the second candidate camera pose information of the previous frame preset in the candidate camera pose information set, so as to obtain target pose information.
In a possible embodiment, the optimization processing unit is specifically configured to: operate on the first candidate camera pose information of the preset vehicle-mounted camera in the current frame and the second candidate camera pose information in the previous frame from the candidate camera pose information set to obtain relative displacement information and relative rotation information between the current frame and the previous frame; determine the relative displacement information and the relative rotation information as constraint item information; and calculate based on the constraint item information, a second preset constraint equation and the vehicle driving pose information in the vehicle driving data set to obtain target pose information.
In a possible embodiment, the obtaining and extracting module includes: the first acquisition unit is used for acquiring a sensor data set of a target vehicle, carrying out graph optimization processing on the sensor data set to obtain a vehicle driving data set, wherein the graph optimization frequency corresponding to the vehicle driving data set is a first preset frequency; the second acquisition unit is used for acquiring a to-be-processed image shot by a preset vehicle-mounted camera and a shooting time stamp corresponding to the to-be-processed image, wherein the update frequency corresponding to the to-be-processed image is a second preset frequency, and the first preset frequency is larger than the second preset frequency; and the information extraction unit is used for extracting information from the vehicle driving data set based on the shooting time stamp to obtain vehicle driving motion information.
In a possible embodiment, the information extraction unit is specifically configured to: obtain a graph optimization time stamp of the vehicle driving data set; if the shooting time stamp is the same as the graph optimization time stamp, extract information from the sensor data set in the vehicle driving data set corresponding to the graph optimization time stamp to obtain vehicle driving motion information, wherein the vehicle driving motion information comprises speed information and angular speed information; and if the shooting time stamp is different from the graph optimization time stamp, predict the driving motion data through the sensor data set in the vehicle driving data set corresponding to the graph optimization time stamp to obtain the vehicle driving motion information.
A third aspect of the present invention provides a processing apparatus of vehicle driving data, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the vehicle driving data processing device to perform the vehicle driving data processing method described above.
A fourth aspect of the present invention provides a computer-readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the above-described vehicle driving data processing method.
In the technical scheme provided by the invention, a vehicle driving data set and an image to be processed of a target vehicle are acquired, and information is extracted from the vehicle driving data set to obtain vehicle driving motion information; the image to be processed is optimized and information is extracted using the vehicle driving motion information to obtain a candidate camera pose information set; and the vehicle driving pose information in the vehicle driving data set is optimized using the candidate camera pose information set to obtain target pose information. In the embodiment of the invention, information is extracted from the vehicle driving data set, the image to be processed is optimized and information is extracted using the extracted vehicle driving motion information, and the vehicle driving pose information in the vehicle driving data set is optimized using the extracted candidate camera pose information set to obtain the target pose information. With the image to be processed added, the unmanned vehicle positioning system can suppress the drift of the unmanned vehicle pose without reducing its fusion output frequency when the GPS data and radar data fail briefly, thereby meeting the real-time requirement of unmanned vehicle positioning.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a method for processing vehicle driving data according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another embodiment of a method for processing vehicle driving data according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of a device for processing vehicle driving data according to an embodiment of the present invention;
FIG. 4 is a schematic view of another embodiment of a processing device for vehicle driving data according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an embodiment of a processing apparatus for vehicle driving data in an embodiment of the present invention.
Detailed Description
The invention provides a method, an apparatus, a device and a storage medium for processing vehicle driving data, which suppress pose drift of an unmanned vehicle without reducing the fusion output frequency of the unmanned vehicle positioning system.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present invention will be described below with reference to fig. 1, and an embodiment of a method for processing vehicle driving data in an embodiment of the present invention includes:
101. acquiring a vehicle driving data set and an image to be processed of a target vehicle, and extracting information from the vehicle driving data set to obtain vehicle driving motion information;
it is to be understood that the execution subject of the present invention may be a processing apparatus of vehicle driving data, and may also be a terminal, which is not limited herein. The embodiment of the present invention is described by taking the terminal as the execution subject as an example.
In this embodiment, a plurality of sensors are provided in the target vehicle, namely: an inertial measurement device (e.g., an inertial sensor, i.e., an Inertial Measurement Unit (IMU)), a GPS sensor, and an optical radar (Light Detection And Ranging, LiDAR). The inertial measurement device (inertial sensor) is used for acquiring inertial data of the target vehicle, where the inertial data comprises angular velocity information and acceleration information of the target vehicle in the world coordinate system; the GPS sensor is used for acquiring GPS data of the target vehicle, where the GPS data comprises vehicle position information and velocity information of the target vehicle in the world coordinate system; and the optical radar is used for acquiring radar data of the target vehicle, where the radar data comprises vehicle position information and vehicle attitude information of the target vehicle in the world coordinate system. The vehicle position information is the position of the target vehicle along the X axis, Y axis and Z axis of the world coordinate system, and the vehicle attitude information comprises the pitch angle, heading angle and roll angle of the target vehicle.
In this embodiment, a preset vehicle-mounted camera is set on the target vehicle, so that the terminal captures and acquires an external image around the target vehicle, i.e., an image to be processed, through the preset vehicle-mounted camera.
In this embodiment, a terminal acquires a sensor data set and an image to be processed of a target vehicle, performs optimization processing on the sensor data set to obtain a vehicle driving data set, and performs information extraction on the vehicle driving data set based on the image to be processed to obtain vehicle driving motion information, wherein the sensor data set includes inertial data, GPS data and radar data.
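As a point of reference for the later steps, the sketch below gives illustrative Python data structures for the sensor data set and the graph-optimized vehicle driving data set described above. All class and field names are assumptions introduced for illustration; they are not defined by the patent.

```python
# Illustrative data structures only; names and fields are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class ImuReading:            # inertial data in the world coordinate system
    timestamp: float
    angular_velocity: tuple  # (wx, wy, wz), rad/s
    acceleration: tuple      # (ax, ay, az), m/s^2

@dataclass
class GpsReading:            # GPS data in the world coordinate system
    timestamp: float
    position: tuple          # (x, y, z)
    velocity: tuple          # (vx, vy, vz)

@dataclass
class LidarReading:          # radar (LiDAR) data in the world coordinate system
    timestamp: float
    position: tuple          # (x, y, z)
    attitude: tuple          # (pitch, heading, roll), rad

@dataclass
class SensorDataSet:
    imu: List[ImuReading]
    gps: List[GpsReading]
    lidar: List[LidarReading]

@dataclass
class VehicleDrivingData:    # one graph-optimized state of the target vehicle
    timestamp: float         # graph optimization time stamp
    velocity: tuple
    angular_velocity: tuple
    position: tuple
    attitude: tuple          # (pitch, heading, roll)
```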
102. Optimizing the image to be processed through the driving motion information of the vehicle and extracting information to obtain a candidate camera pose information set;
in this embodiment, the terminal obtains the historical camera pose information of the previous frame corresponding to the vehicle driving motion information in the image to be processed, together with a target time difference value. The terminal then performs an operation based on the historical camera pose information, the target time difference value and the vehicle driving motion information to obtain an initial camera pose, and performs constraint processing on the initial camera pose through a first preset constraint equation to obtain a candidate camera pose information set.
The target time difference value is a time difference value between a current frame and a previous frame corresponding to vehicle driving motion information in the image to be processed.
It should be noted that the first preset constraint equation is an existing constraint equation, and by way of example and not limitation, the first preset constraint equation may be a reprojection error constraint equation, which is not limited herein.
It should be noted that the candidate camera pose information set includes first candidate camera pose information of the preset vehicle-mounted camera in the current frame and second candidate camera pose information in the previous frame, where the first candidate camera pose information includes first candidate camera position information and first candidate camera attitude information, and the second candidate camera pose information includes second candidate camera position information and second candidate camera attitude information.
It can be understood that the camera position information is position information of the preset vehicle-mounted camera in the X-axis, the Y-axis and the Z-axis in the world coordinate system, and the camera posture information includes a pitch angle, a heading angle and a roll angle of the preset vehicle-mounted camera.
103. And optimizing the vehicle driving pose information in the vehicle driving data set through the candidate camera pose information set to obtain target pose information.
It is understood that the vehicle driving pose information includes vehicle position information and vehicle pose information of the target vehicle.
In this embodiment, the terminal operates on the first candidate camera pose information of the preset vehicle-mounted camera in the current frame and the second candidate camera pose information in the previous frame from the candidate camera pose information set to obtain relative displacement information and relative rotation information between the current frame and the previous frame, and then performs constraint processing on the vehicle driving pose information in the vehicle driving data set through the relative displacement information and the relative rotation information to obtain target pose information.
In a possible implementation manner, the terminal performs constraint processing on the vehicle driving pose information in the vehicle driving data set through a second preset constraint equation, relative displacement information and relative rotation information to obtain target pose information.
In the embodiment of the invention, information is extracted from the vehicle driving data set, the image to be processed is optimized and information is extracted using the extracted vehicle driving motion information, and the vehicle driving pose information in the vehicle driving data set is optimized using the extracted candidate camera pose information set to obtain the target pose information. With the image to be processed added, the unmanned vehicle positioning system can suppress the drift of the unmanned vehicle pose without reducing its fusion output frequency when the GPS data and radar data fail briefly, thereby meeting the real-time requirement of unmanned vehicle positioning.
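For orientation, the skeleton below maps the three steps of this embodiment (101-103) onto a single processing class. The class and method names are illustrative assumptions, not the patent's API; each method stands for the processing detailed in the following embodiment.

```python
# Illustrative skeleton only; names are assumptions, not the patent's API.
class VehicleDrivingDataProcessor:
    def extract_motion_info(self, driving_data_set, shooting_timestamp):
        """Step 101: information extraction from the graph-optimized data set."""
        raise NotImplementedError

    def extract_candidate_camera_poses(self, image_to_process, motion_info):
        """Step 102: optimize the image and extract candidate camera poses."""
        raise NotImplementedError

    def optimize_vehicle_pose(self, driving_data_set, candidate_poses):
        """Step 103: optimize vehicle driving pose with the candidate camera poses."""
        raise NotImplementedError

    def process(self, driving_data_set, image_to_process):
        motion_info = self.extract_motion_info(driving_data_set,
                                               image_to_process.timestamp)
        candidate_poses = self.extract_candidate_camera_poses(image_to_process,
                                                              motion_info)
        return self.optimize_vehicle_pose(driving_data_set, candidate_poses)
```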
Referring to fig. 2, another embodiment of a method for processing vehicle driving data according to an embodiment of the present invention includes:
201. acquiring a sensor data set of a target vehicle, and performing graph optimization processing on the sensor data set to obtain a vehicle driving data set, wherein the graph optimization frequency corresponding to the vehicle driving data set is a first preset frequency;
it is understood that the sensor data set includes inertial data including angular velocity information and acceleration information of the target vehicle in the world coordinate system, GPS data including vehicle position information and velocity information of the target vehicle in the world coordinate system, and radar data including vehicle position information and vehicle attitude information of the target vehicle in the world coordinate system.
It should be noted that the terminal acquires the sensor data set of the target vehicle in real time. Graph optimization is an existing manner of optimizing sensor data; in this embodiment, graph optimization is used to fuse the sensor data set into a vehicle driving data set, where the vehicle driving data set includes the optimized angular velocity information, acceleration information, velocity information, vehicle position information and vehicle attitude information of the target vehicle.
The graph optimization frequency represents how often graph optimization is performed on the sensor data set. The specific first preset frequency may be set according to the actual application scenario; by way of example and not limitation, it may be 100 Hz (graph optimization of the sensor data set every 10 ms) or 1000 Hz (every 1 ms), and other values are also possible. In principle, the higher the frequency the better, since a higher frequency reduces the delay of the vehicle position.
In this embodiment, the terminal performs the graph optimization processing on the sensor data set, so that errors of angular velocity information, acceleration information, speed information, vehicle position information and vehicle posture information of the target vehicle in the sensor data set are reduced, and accuracy of the angular velocity information, the acceleration information, the speed information, the vehicle position information and the vehicle posture information is improved.
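The patent treats graph optimization as an existing technique and does not fix a particular back end. As a rough, hedged illustration only, the sketch below fuses a GPS position and a LiDAR position measurement by weighted least squares; a real implementation would run a full pose-graph optimizer over all sensor states. The function name, noise weights and the scipy-based solver are assumptions.

```python
# Simplified stand-in for the unspecified graph-optimization back end.
import numpy as np
from scipy.optimize import least_squares

def fuse_position(gps_pos, lidar_pos, gps_sigma=1.0, lidar_sigma=0.1):
    """Fuse two 3-D position measurements into one optimized position."""
    gps_pos = np.asarray(gps_pos, dtype=float)
    lidar_pos = np.asarray(lidar_pos, dtype=float)

    def residuals(p):
        # each measurement contributes a residual weighted by its assumed noise
        return np.concatenate([(p - gps_pos) / gps_sigma,
                               (p - lidar_pos) / lidar_sigma])

    result = least_squares(residuals, x0=lidar_pos)
    return result.x

# Example: GPS and LiDAR disagree slightly; the fused estimate leans on LiDAR.
fused = fuse_position((10.0, 5.0, 0.3), (10.2, 5.1, 0.0))
```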
202. Acquiring an image to be processed shot by a preset vehicle-mounted camera and a shooting time stamp corresponding to the image to be processed, wherein the update frequency corresponding to the image to be processed is a second preset frequency, and the first preset frequency is larger than the second preset frequency;
In this embodiment, a preset vehicle-mounted camera is set on the target vehicle, so that the terminal captures and acquires an image around the target vehicle, that is, an image to be processed, through the preset vehicle-mounted camera.
It will be appreciated that the shooting time stamp indicates the moment at which the preset vehicle-mounted camera captured the image to be processed. The update frequency indicates the shooting frequency of the preset vehicle-mounted camera, and the specific second preset frequency can be set according to the actual application scenario. By way of example and not limitation, the second preset frequency may be 10 Hz (the preset vehicle-mounted camera captures an image to be processed every 100 ms) or 100 Hz (every 10 ms), and other values are also possible, with the first preset frequency being greater than the second preset frequency. In principle, the higher the frequency the better, since a higher frequency reduces the delay of the vehicle position.
In this embodiment, the terminal obtains the image to be processed and the capturing timestamp of the image to be processed, so as to meet the requirement of real-time output of the positioning of the target vehicle.
203. Obtaining a graph optimization time stamp of a vehicle driving data set;
It will be appreciated that the graph optimization time stamp is used to indicate the time at which the vehicle driving data set was obtained.
The time corresponding to the graph optimization time stamp is earlier than or equal to the time corresponding to the photographing time stamp.
In this embodiment, the graph optimization time stamp is used to record the time when the vehicle driving data set is obtained, so that information extraction can be performed according to the time stamp.
204. If the shooting time stamp is the same as the graph optimization time stamp, extracting information from the sensor data set in the vehicle driving data set corresponding to the graph optimization time stamp to obtain vehicle driving motion information, wherein the vehicle driving motion information comprises speed information and angular speed information;
it is understood that since the preset in-vehicle camera is provided to the target vehicle, the movement of the preset in-vehicle camera and the movement of the target vehicle have consistency.
Since the image to be processed captured by the preset vehicle-mounted camera only reflects the position information of the target vehicle, the corresponding motion information has to be taken from data at the same time stamp, that is, when the shooting time stamp is the same as the graph optimization time stamp.
In this embodiment, when the shooting time stamp is the same as the graph optimization time stamp, information can be extracted from the sensor data set in the vehicle driving data set corresponding to the graph optimization time stamp, so as to obtain the speed information and angular velocity information corresponding to the shooting time stamp, thereby improving the accuracy of the extracted speed information and angular velocity information.
205. If the shooting time stamp is different from the graph optimization time stamp, predicting the driving motion data through the sensor data set in the vehicle driving data set corresponding to the graph optimization time stamp to obtain vehicle driving motion information;
in one possible implementation: (1) if the shooting time stamp is different from the graph optimization time stamp, the terminal acquires the candidate speed information and candidate angular velocity information corresponding to the graph optimization time stamp in the vehicle driving data set; (2) the terminal performs an integral operation using the shooting time stamp, the graph optimization time stamp, the candidate speed information and a first preset integral formula to obtain the speed information; and (3) the terminal performs an integral operation using the shooting time stamp, the graph optimization time stamp, the candidate angular velocity information and a second preset integral formula to obtain the angular velocity information.
In this embodiment, the first preset integral formula is given in the original publication as an image (Figure BDA0004159082440000101); in it, v represents the speed information, a second speed symbol represents the candidate speed information, t1 represents the graph optimization time stamp, and t2 represents the shooting time stamp. The second preset integral formula is likewise given as an image (Figure BDA0004159082440000102); in it, w represents the angular velocity information, a second angular velocity symbol represents the candidate angular velocity information, t1 represents the graph optimization time stamp, and t2 represents the shooting time stamp.
In this embodiment, when the shooting time stamp is different from the graph optimization time stamp, the terminal predicts the driving motion data from the candidate speed information and candidate angular velocity information corresponding to the graph optimization time stamp. This avoids having to wait for a graph optimization result with an identical time stamp, which reduces the time consumed, avoids delay in obtaining the vehicle driving motion information, and meets the requirement of real-time output.
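A minimal sketch of the extract-or-predict logic of steps 203-205, reusing the illustrative data structures sketched earlier. The exact prediction formulas are only given as images in the publication; the constant-acceleration and constant-angular-rate prediction over the small time gap used here is therefore an assumption.

```python
# Illustrative only; the real prediction formulas are defined by the patent's figures.
def extract_motion_info(driving_data_set, sensor_data_set, shooting_ts, tol=1e-6):
    # most recent graph-optimized state at or before the shooting time stamp
    state = max((s for s in driving_data_set if s.timestamp <= shooting_ts + tol),
                key=lambda s: s.timestamp)
    if abs(state.timestamp - shooting_ts) <= tol:
        # time stamps are the same: extract speed and angular speed directly
        return state.velocity, state.angular_velocity
    # time stamps differ: predict forward over dt using the nearest IMU reading
    dt = shooting_ts - state.timestamp
    imu = min(sensor_data_set.imu, key=lambda m: abs(m.timestamp - state.timestamp))
    velocity = tuple(v + a * dt for v, a in zip(state.velocity, imu.acceleration))
    angular_velocity = imu.angular_velocity   # assumed constant over the small gap
    return velocity, angular_velocity
```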
206. Acquiring historical camera pose information of a previous frame corresponding to vehicle driving motion information in an image to be processed and a target time difference value, wherein the target time difference value is a time difference value between a current frame and the previous frame corresponding to the vehicle driving motion information in the image to be processed;
it is understood that the historical camera pose information of the previous frame is real camera pose information, that is, pose information actually obtained from the real sensors.
207. Performing integral operation based on the historical camera pose information, the target time difference value and the vehicle driving motion information to obtain an initial camera pose;
it is understood that the initial camera pose is an estimated camera pose.
Specifically, the terminal performs integral operation based on the historical camera pose information, the target time difference value, the vehicle driving motion information and a third preset integral formula to obtain an initial camera pose.
The third preset integral formula is: p = v_i · dt + p0, where p represents the initial camera pose, v_i represents the speed information or angular velocity information in the vehicle driving motion information, dt represents the target time difference value, and p0 represents the historical camera pose information.
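A minimal sketch of steps 206-207 based on the third preset integral formula p = v_i · dt + p0. Applying the formula separately to the position (with the speed information) and to the attitude (with the angular velocity information, as additive Euler-angle updates) is an assumption made for brevity.

```python
# Illustrative only; the additive Euler-angle update is an assumption.
import numpy as np

def predict_initial_camera_pose(prev_position, prev_attitude,
                                velocity, angular_velocity, dt):
    prev_position = np.asarray(prev_position, dtype=float)
    prev_attitude = np.asarray(prev_attitude, dtype=float)   # (pitch, heading, roll)
    position = prev_position + np.asarray(velocity, dtype=float) * dt
    attitude = prev_attitude + np.asarray(angular_velocity, dtype=float) * dt
    return position, attitude
```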
208. Calculating the initial camera pose through a first preset constraint equation to obtain a candidate camera pose information set, wherein the candidate camera pose information set comprises first candidate camera pose information of a preset vehicle-mounted camera in a current frame and second candidate camera pose information of a previous frame;
it should be noted that the first preset constraint equation is an existing constraint equation, and by way of example and not limitation, the first preset constraint equation may be a reprojection error constraint equation, which is not limited herein.
In a possible implementation manner, the terminal optimizes the initial camera pose through a first preset constraint equation to obtain first candidate camera pose information of a preset vehicle-mounted camera in a current frame, and determines historical camera pose information of a previous frame as second candidate camera pose information of the previous frame to obtain a candidate camera pose information set.
In this embodiment, the terminal performs an integral operation using the historical camera pose information, the target time difference value and the vehicle driving motion information to obtain the initial camera pose, and then operates on the initial camera pose through the first preset constraint equation to obtain accurate first candidate camera pose information together with the second candidate camera pose information of the previous frame. It can be understood that the accurate first candidate camera pose information matches the real camera pose information of the current frame that would be obtained from the real sensors.
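The first preset constraint equation is exemplified above only as a reprojection error constraint. The sketch below, which is not taken from the patent, refines the predicted (initial) camera pose by minimizing reprojection error; the pinhole camera model, the availability of matched 3-D landmarks with 2-D observations, and the translation-plus-rotation-vector parametrization are assumptions.

```python
# Illustrative reprojection-error refinement; inputs are numpy arrays.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_camera_pose(initial_t, initial_rotvec, points_3d, points_2d, K):
    """points_3d: (N,3) world points; points_2d: (N,2) pixel observations; K: 3x3."""
    x0 = np.concatenate([initial_t, initial_rotvec])

    def residuals(x):
        t, rotvec = x[:3], x[3:]
        cam_pts = Rotation.from_rotvec(rotvec).apply(points_3d) + t  # world -> camera
        proj = (K @ cam_pts.T).T
        proj = proj[:, :2] / proj[:, 2:3]                            # perspective divide
        return (proj - points_2d).ravel()

    result = least_squares(residuals, x0)
    return result.x[:3], result.x[3:]   # refined translation, rotation vector
```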
209. Operating on the first candidate camera pose information of the preset vehicle-mounted camera in the current frame and the second candidate camera pose information in the previous frame from the candidate camera pose information set to obtain relative displacement information and relative rotation information between the current frame and the previous frame;
it is understood that the first candidate camera pose information includes first candidate camera position information and first candidate camera attitude information, and the second candidate camera pose information includes second candidate camera position information and second candidate camera attitude information.
In one possible implementation: (1) the terminal operates on the first candidate camera position information in the first candidate camera pose information of the current frame and the second candidate camera position information in the second candidate camera pose information of the previous frame to obtain the relative displacement information between the current frame and the previous frame; and (2) the terminal operates on the first candidate camera attitude information in the first candidate camera pose information and the second candidate camera attitude information in the second candidate camera pose information of the previous frame to obtain the relative rotation information between the current frame and the previous frame.
It can be understood that the camera position information is position information of the preset vehicle-mounted camera in the X-axis, the Y-axis and the Z-axis in the world coordinate system, and the camera posture information includes a pitch angle, a heading angle and a roll angle of the preset vehicle-mounted camera.
Specifically, from the candidate camera pose information set, the terminal performs a displacement operation on the first coordinate information of the preset vehicle-mounted camera in the current frame and the second coordinate information in the previous frame to obtain the relative displacement information between the current frame and the previous frame; and the terminal performs an angle operation on the first pitch angle, first heading angle and first roll angle of the preset vehicle-mounted camera in the current frame and the second pitch angle, second heading angle and second roll angle in the previous frame, respectively, to obtain the relative rotation information between the current frame and the previous frame.
In this embodiment, the terminal computes the relative displacement information and relative rotation information of the preset vehicle-mounted camera between two adjacent frames of images, rather than between two frames separated by a long interval. Since the preset vehicle-mounted camera moves with the vehicle, this avoids large pose changes between the selected frames and improves the accuracy of the relative displacement information and the relative rotation information.
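A minimal sketch of step 209: computing the relative rotation and relative displacement of the preset vehicle-mounted camera between the previous and current frames from the two candidate camera poses. Representing each pose as a world-frame rotation matrix R and translation t is an assumption.

```python
# Illustrative only; pose representation (R, t in the world frame) is an assumption.
import numpy as np

def relative_motion(R_prev, t_prev, R_curr, t_curr):
    # relative rotation of the camera from the previous to the current frame
    R_rel = R_prev.T @ R_curr
    # relative displacement expressed in the previous frame's camera coordinates
    t_rel = R_prev.T @ (np.asarray(t_curr) - np.asarray(t_prev))
    return R_rel, t_rel
```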
210. Determining the relative displacement information and the relative rotation information as constraint item information;
in this embodiment, the terminal constructs constraint term information in the second preset constraint equation by using the relative displacement information and the relative rotation information as constraint term information.
211. And calculating based on the constraint item information, a second preset constraint equation and vehicle driving pose information in the vehicle driving dataset to obtain target pose information.
In this embodiment, the second preset constraint equation is: f = T2⁻¹ · T1 · ΔT, where f represents the second preset constraint equation, T2⁻¹ represents the inverse matrix of the current-frame vehicle driving pose information corresponding to the target vehicle in the vehicle driving data set, T1 represents the previous-frame vehicle driving pose information corresponding to the target vehicle in the vehicle driving data set, and ΔT represents the constraint item information, i.e., the relative displacement information and relative rotation information of the preset vehicle-mounted camera between the current frame and the previous frame.
In this embodiment, the terminal optimizes the vehicle driving pose information in the vehicle driving data set through the second preset constraint equation, so that the drift of the unmanned vehicle pose is suppressed even when the GPS data and radar data fail briefly, and accurate vehicle driving pose information is obtained. It can be understood that the accurate vehicle driving pose information matches the real vehicle driving pose information that would be obtained when the GPS data and radar data are in a normal state.
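A minimal sketch of the second preset constraint equation f = T2⁻¹ · T1 · ΔT, with each pose written as a 4x4 homogeneous transform. When the current-frame vehicle pose T2 equals the previous-frame pose T1 composed with the camera's relative motion ΔT, f is the identity; an optimizer (not shown) would adjust the vehicle driving pose to drive the residual below toward zero. Flattening the deviation from identity into a residual vector in this way is an assumption.

```python
# Illustrative constraint residual; the residual flattening is an assumption.
import numpy as np

def make_transform(R, t):
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def pose_constraint_residual(T1, T2, delta_T):
    f = np.linalg.inv(T2) @ T1 @ delta_T
    return (f - np.eye(4))[:3, :].ravel()   # zero when T2 == T1 @ delta_T
```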
In the embodiment of the invention, graph optimization is performed on the sensor data set of the target vehicle to obtain the vehicle driving data set, and the image to be processed captured by the preset vehicle-mounted camera together with its shooting time stamp is obtained. If the shooting time stamp is the same as the graph optimization time stamp, information is extracted from the sensor data set in the vehicle driving data set corresponding to the graph optimization time stamp; if the shooting time stamp is different from the graph optimization time stamp, the driving motion data is predicted from that sensor data set, so that the vehicle driving motion information is obtained. An integral operation is then performed based on the historical camera pose information, the target time difference value and the vehicle driving motion information to obtain the initial camera pose, and the initial camera pose is operated on through the first preset constraint equation to obtain the first candidate camera pose information of the preset vehicle-mounted camera in the current frame and the second candidate camera pose information in the previous frame. The relative displacement information and relative rotation information between the current frame and the previous frame are determined as constraint item information, and the target pose information is calculated based on the constraint item information, the second preset constraint equation and the vehicle driving pose information in the vehicle driving data set. In this way, even when the GPS data and radar data fail briefly, the unmanned vehicle positioning system does not need to reduce its fusion output frequency while the drift of the unmanned vehicle pose is suppressed, thereby meeting the real-time requirement of unmanned vehicle positioning.
The method for processing vehicle driving data in the embodiment of the present invention is described above, and the apparatus for processing vehicle driving data in the embodiment of the present invention is described below, referring to fig. 3, where an embodiment of the apparatus for processing vehicle driving data in the embodiment of the present invention includes:
the acquiring and extracting module 301 is configured to acquire a vehicle driving dataset of a target vehicle and an image to be processed, and extract information from the vehicle driving dataset to obtain vehicle driving motion information;
the first processing module 302 is configured to perform optimization processing on an image to be processed through vehicle driving motion information and perform information extraction to obtain a candidate camera pose information set;
the second processing module 303 is configured to perform optimization processing on the vehicle driving pose information in the vehicle driving data set through the candidate camera pose information set, so as to obtain target pose information.
In the embodiment of the invention, information is extracted from the vehicle driving data set, the image to be processed is optimized and information is extracted using the extracted vehicle driving motion information, and the vehicle driving pose information in the vehicle driving data set is optimized using the extracted candidate camera pose information set to obtain the target pose information. With the image to be processed added, the unmanned vehicle positioning system can suppress the drift of the unmanned vehicle pose without reducing its fusion output frequency when the GPS data and radar data fail briefly, thereby meeting the real-time requirement of unmanned vehicle positioning.
Referring to fig. 4, another embodiment of a processing device for vehicle driving data in an embodiment of the present invention includes:
the acquiring and extracting module 301 is configured to acquire a vehicle driving dataset of a target vehicle and an image to be processed, and extract information from the vehicle driving dataset to obtain vehicle driving motion information;
the first processing module 302 is configured to perform optimization processing on an image to be processed through vehicle driving motion information and perform information extraction to obtain a candidate camera pose information set;
the second processing module 303 is configured to perform optimization processing on the vehicle driving pose information in the vehicle driving data set through the candidate camera pose information set, so as to obtain target pose information.
Optionally, the first processing module 302 includes:
a first operation unit 3021 for calculating vehicle driving motion information according to pose information of an image to be processed to obtain an initial camera pose;
the second operation unit 3022 is configured to operate on the initial camera pose according to a first preset constraint equation to obtain a candidate camera pose information set, where the candidate camera pose information set includes first candidate camera pose information of the preset vehicle-mounted camera in the current frame and second candidate camera pose information of the previous frame.
Optionally, the first operation unit 3021 is specifically configured to:
acquiring historical camera pose information of a previous frame corresponding to vehicle driving motion information in an image to be processed and a target time difference value, wherein the target time difference value is a time difference value between a current frame and the previous frame corresponding to the vehicle driving motion information in the image to be processed;
and performing integral operation based on the historical camera pose information, the target time difference value and the vehicle driving motion information to obtain the initial camera pose.
Optionally, the second processing module 303 includes:
the optimization processing unit 3031 is configured to optimize the vehicle driving pose information in the vehicle driving data set through the first candidate camera pose information of the preset vehicle-mounted camera in the current frame and the second candidate camera pose information in the previous frame in the candidate camera pose information set, so as to obtain target pose information.
Optionally, the optimization processing unit 3031 is specifically configured to:
the method comprises the steps of carrying out operation on first candidate camera pose information of a vehicle-mounted camera in a current frame and second candidate camera pose information of a previous frame which are preset in a candidate camera pose information set, and obtaining relative displacement information and relative rotation information between the current frame and the previous frame;
Determining the relative displacement information and the relative rotation information as constraint item information;
and calculating based on the constraint item information, a second preset constraint equation and vehicle driving pose information in the vehicle driving dataset to obtain target pose information.
Optionally, the acquiring and extracting module 301 includes:
a first obtaining unit 3011, configured to obtain a sensor dataset of a target vehicle, and perform graph optimization processing on the sensor dataset to obtain a vehicle driving dataset, where a graph optimization frequency corresponding to the vehicle driving dataset is a first preset frequency;
the second obtaining unit 3012 is configured to obtain an image to be processed and a capturing timestamp corresponding to the image to be processed, which are captured by the preset vehicle-mounted camera, where an update frequency corresponding to the image to be processed is a second preset frequency, and the first preset frequency is greater than the second preset frequency;
the information extraction unit 3013 is configured to extract information from the vehicle driving data set based on the capturing timestamp, so as to obtain vehicle driving motion information.
Optionally, the information extraction unit 3013 is specifically configured to:
obtain a graph optimization time stamp of the vehicle driving data set;
if the shooting time stamp is the same as the graph optimization time stamp, extract information from the sensor data set in the vehicle driving data set corresponding to the graph optimization time stamp to obtain vehicle driving motion information, wherein the vehicle driving motion information comprises speed information and angular speed information; and
if the shooting time stamp is different from the graph optimization time stamp, predict the driving motion data through the sensor data set in the vehicle driving data set corresponding to the graph optimization time stamp to obtain the vehicle driving motion information.
In the embodiment of the invention, information is extracted from the vehicle driving data set, the image to be processed is optimized and information is extracted using the extracted vehicle driving motion information, and the vehicle driving pose information in the vehicle driving data set is optimized using the extracted candidate camera pose information set to obtain the target pose information. With the image to be processed added, the unmanned vehicle positioning system can suppress the drift of the unmanned vehicle pose without reducing its fusion output frequency when the GPS data and radar data fail briefly, thereby meeting the real-time requirement of unmanned vehicle positioning.
The processing device for vehicle driving data in the embodiment of the present invention is described in detail above in fig. 3 and 4 from the point of view of modularized functional entities, and the processing device for vehicle driving data in the embodiment of the present invention is described in detail below from the point of view of hardware processing.
Fig. 5 is a schematic structural diagram of a processing device for vehicle driving data provided in an embodiment of the present invention, where the processing device 500 for vehicle driving data may have a relatively large difference due to different configurations or performances, and may include one or more processors (central processing units, CPU) 510 (e.g., one or more processors) and a memory 520, and one or more storage media 530 (e.g., one or more mass storage devices) storing application programs 533 or data 532. Wherein memory 520 and storage medium 530 may be transitory or persistent storage. The program stored in the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations in the processing device 500 for vehicle driving data. Still further, the processor 510 may be configured to communicate with the storage medium 530 to execute a series of instruction operations in the storage medium 530 on the processing device 500 of the vehicle driving data.
The vehicle driving data processing device 500 may also include one or more power sources 540, one or more wired or wireless network interfaces 550, one or more input/output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. It will be appreciated by those skilled in the art that the configuration of the vehicle driving data processing device shown in fig. 5 does not constitute a limitation on the device, which may include more or fewer components than those illustrated, may combine certain components, or may use a different arrangement of components.
The present invention also provides a processing apparatus for vehicle driving data, including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the method for processing vehicle driving data in the above embodiments.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, or may be a volatile computer readable storage medium, having stored therein instructions that, when executed on a computer, cause the computer to perform the steps of the method of processing vehicle driving data.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and such modifications and replacements do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of processing vehicle driving data, characterized in that the method of processing vehicle driving data includes:
acquiring a vehicle driving data set and an image to be processed of a target vehicle, and extracting information from the vehicle driving data set to obtain vehicle driving motion information;
optimizing the image to be processed through the vehicle driving motion information and extracting information to obtain a candidate camera pose information set;
and optimizing the vehicle driving pose information in the vehicle driving data set through the candidate camera pose information set to obtain target pose information.
2. The method for processing vehicle driving data according to claim 1, wherein the optimizing the image to be processed by the vehicle driving motion information and extracting information to obtain a candidate camera pose information set includes:
calculating the vehicle driving motion information according to the pose information of the image to be processed to obtain an initial camera pose;
and calculating the initial camera pose through a first preset constraint equation to obtain a candidate camera pose information set, wherein the candidate camera pose information set comprises first candidate camera pose information of a preset vehicle-mounted camera in a current frame and second candidate camera pose information of the preset vehicle-mounted camera in a previous frame.
3. The method for processing vehicle driving data according to claim 2, wherein the calculating the vehicle driving motion information according to the pose information of the image to be processed to obtain an initial camera pose comprises:
acquiring historical camera pose information of a previous frame corresponding to the vehicle driving motion information in the image to be processed and a target time difference value, wherein the target time difference value is a time difference value between a current frame corresponding to the vehicle driving motion information in the image to be processed and the previous frame;
and performing integral operation based on the historical camera pose information, the target time difference value and the vehicle driving motion information to obtain an initial camera pose.
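By way of a non-limiting illustration of the integral operation recited in this claim, the sketch below propagates the historical camera pose of the previous frame over the target time difference using the velocity and angular velocity in the vehicle driving motion information. It assumes the motion information is expressed in the camera/body frame; the claim does not fix a particular integration scheme, and all identifiers are hypothetical.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def propagate_camera_pose(prev_position, prev_rotation, velocity, angular_velocity, dt):
    """One-step integration of the previous-frame camera pose.

    prev_position : 3-vector, historical camera position (world frame).
    prev_rotation : scipy Rotation mapping body-frame vectors to the world frame.
    velocity, angular_velocity : 3-vectors from the vehicle driving motion information.
    dt : target time difference between the previous frame and the current frame.
    """
    # Rotate the body-frame velocity into the world frame and integrate the position.
    position = prev_position + prev_rotation.apply(velocity) * dt
    # Integrate the orientation with an axis-angle increment of angular_velocity * dt.
    rotation = prev_rotation * R.from_rotvec(angular_velocity * dt)
    return position, rotation
```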
4. The method for processing vehicle driving data according to claim 1, wherein the optimizing the vehicle driving pose information in the vehicle driving data set by the candidate camera pose information set to obtain target pose information includes:
optimizing the vehicle driving pose information in the vehicle driving data set through the first candidate camera pose information of the preset vehicle-mounted camera in the current frame and the second candidate camera pose information of the preset vehicle-mounted camera in the previous frame in the candidate camera pose information set, to obtain target pose information.
5. The method according to claim 4, wherein the optimizing the vehicle driving pose information in the vehicle driving data set through the first candidate camera pose information of the preset vehicle-mounted camera in the current frame and the second candidate camera pose information in the previous frame in the candidate camera pose information set to obtain the target pose information includes:
calculating, according to the first candidate camera pose information of the preset vehicle-mounted camera in the current frame and the second candidate camera pose information in the previous frame in the candidate camera pose information set, relative displacement information and relative rotation information between the current frame and the previous frame;
determining the relative displacement information and the relative rotation information as constraint item information;
and calculating based on the constraint item information, a second preset constraint equation and the vehicle driving pose information in the vehicle driving dataset to obtain target pose information.
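The second preset constraint equation is not spelled out in the claims; as one plausible, purely illustrative reading, the sketch below derives the relative displacement and relative rotation from the two candidate camera poses and uses them as a constraint term with which the optimized vehicle driving pose information should agree. All names are hypothetical and do not reflect the actual form of the constraint equation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def relative_constraint(t_prev, R_prev, t_curr, R_curr):
    """Relative displacement and rotation between the previous-frame and
    current-frame candidate camera poses, expressed in the previous frame."""
    rel_R = R_prev.inv() * R_curr
    rel_t = R_prev.inv().apply(t_curr - t_prev)
    return rel_t, rel_R

def constraint_residual(vehicle_pose_prev, vehicle_pose_curr, rel_t, rel_R):
    """Difference between the camera-derived relative motion (the constraint
    item information) and the relative motion implied by the vehicle driving
    pose information being optimized; a solver would drive this toward zero."""
    t_p, R_p = vehicle_pose_prev
    t_c, R_c = vehicle_pose_curr
    pred_R = R_p.inv() * R_c
    pred_t = R_p.inv().apply(t_c - t_p)
    return np.concatenate([pred_t - rel_t, (rel_R.inv() * pred_R).as_rotvec()])
```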
6. The method for processing vehicle driving data according to any one of claims 1 to 5, wherein the acquiring the vehicle driving data set and the image to be processed of the target vehicle, extracting information from the vehicle driving data set, and obtaining vehicle driving motion information, includes:
acquiring a sensor data set of a target vehicle, and performing graph optimization processing on the sensor data set to obtain a vehicle driving data set, wherein the graph optimization frequency corresponding to the vehicle driving data set is a first preset frequency;
acquiring an image to be processed shot by a preset vehicle-mounted camera and a shooting time stamp corresponding to the image to be processed, wherein the updating frequency corresponding to the image to be processed is a second preset frequency, and the first preset frequency is larger than the second preset frequency;
and extracting information from the vehicle driving data set based on the shooting time stamp to obtain vehicle driving motion information.
7. The method for processing vehicle driving data according to claim 6, wherein the extracting information from the vehicle driving data set based on the capturing timestamp to obtain vehicle driving motion information includes:
obtaining a graph optimization time stamp of the vehicle driving dataset;
if the shooting time stamp is the same as the graph optimization time stamp, extracting information from a sensor data set in the vehicle driving data set corresponding to the graph optimization time stamp to obtain vehicle driving motion information, wherein the vehicle driving motion information comprises speed information and angular speed information;
and if the shooting time stamp is different from the graph optimization time stamp, predicting the driving motion data from the sensor data set in the vehicle driving data set corresponding to the graph optimization time stamp, to obtain the vehicle driving motion information.
8. A processing apparatus of vehicle driving data, characterized in that the processing apparatus of vehicle driving data includes:
the acquisition and extraction module is used for acquiring a vehicle driving data set and an image to be processed of a target vehicle, and extracting information from the vehicle driving data set to obtain vehicle driving motion information;
the first processing module is used for carrying out optimization processing on the image to be processed through the vehicle driving motion information and extracting information to obtain a candidate camera pose information set;
and the second processing module is used for optimizing the vehicle driving pose information in the vehicle driving data set through the candidate camera pose information set to obtain target pose information.
9. A processing apparatus of vehicle driving data, characterized in that the processing apparatus of vehicle driving data includes: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the processing device of vehicle driving data to perform the method of processing vehicle driving data as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium having instructions stored thereon, which when executed by a processor, implement the method of processing vehicle driving data according to any one of claims 1-7.
CN202310344136.3A 2023-03-31 2023-03-31 Method, device, equipment and storage medium for processing vehicle driving data Pending CN116363173A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310344136.3A CN116363173A (en) 2023-03-31 2023-03-31 Method, device, equipment and storage medium for processing vehicle driving data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310344136.3A CN116363173A (en) 2023-03-31 2023-03-31 Method, device, equipment and storage medium for processing vehicle driving data

Publications (1)

Publication Number Publication Date
CN116363173A true CN116363173A (en) 2023-06-30

Family

ID=86931227

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310344136.3A Pending CN116363173A (en) 2023-03-31 2023-03-31 Method, device, equipment and storage medium for processing vehicle driving data

Country Status (1)

Country Link
CN (1) CN116363173A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination