CN113155121A - Vehicle positioning method and device and electronic equipment - Google Patents

Vehicle positioning method and device and electronic equipment

Info

Publication number
CN113155121A
CN113155121A (application CN202110305408.XA)
Authority
CN
China
Prior art keywords
sensor data
positioning
data
obtaining
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110305408.XA
Other languages
Chinese (zh)
Other versions
CN113155121B (en)
Inventor
朱呈炜
李志恒
张凯
樊平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Research Institute Tsinghua University
Original Assignee
Shenzhen Research Institute Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Research Institute Tsinghua University filed Critical Shenzhen Research Institute Tsinghua University
Priority to CN202110305408.XA priority Critical patent/CN113155121B/en
Publication of CN113155121A publication Critical patent/CN113155121A/en
Application granted granted Critical
Publication of CN113155121B publication Critical patent/CN113155121B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34: Route searching; Route guidance
    • G01C21/3407: Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415: Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Navigation (AREA)

Abstract

The invention provides a vehicle positioning method, a vehicle positioning device and electronic equipment, wherein the vehicle positioning method comprises the following steps: acquiring first sensor data; obtaining first positioning uncertainty data according to the first sensor data; when the first positioning uncertainty data does not meet a first preset condition, acquiring second sensor data and obtaining second positioning uncertainty data according to the second sensor data and the first sensor data; and when the second positioning uncertainty data meets a second preset condition, positioning the vehicle according to the first sensor data and the second sensor data. By adding sensor information step by step, the invention reduces the computational load on the vehicle; because each additional sensor is brought in only when the positioning uncertainty fails to meet the corresponding condition, this stepwise approach further reduces the vehicle's computational load while still guaranteeing positioning accuracy.

Description

Vehicle positioning method and device and electronic equipment
Technical Field
The invention relates to the field of automatic driving, in particular to a vehicle positioning method and device and electronic equipment.
Background
An automatic driving vehicle generally carries sensor modules such as a GPS, cameras, radar and an IMU for positioning. Different sensors provide different positioning information, and the vehicle executes the corresponding automatic driving operations according to that information. Existing single-sensor positioning for autonomous vehicles includes GPS positioning, visual positioning, IMU positioning, radar positioning, and the like. GPS positioning is the most traditional and the simplest, most direct approach: a satellite system resolves the vehicle positioning signal through a base station and feeds the positioning information back to the vehicle. Visual positioning is currently a popular approach: information near the vehicle is captured by a camera, and the vehicle position is then obtained through scene reconstruction and semantic understanding. Compared with GPS, its accuracy is markedly better, and because the on-board camera system can effectively exploit the scene information around the vehicle, the positioning uncertainty can be reduced to a large extent. However, visual positioning depends on the accuracy and speed of the processing algorithm, and because the position information it provides depends on the scene, accuracy drops sharply in visually similar scenes. IMU positioning is commonly used as an auxiliary method: since the gyroscope and accelerometer readings must be integrated to obtain position information, it is usually fused with visual and GPS positioning. Its advantage is immunity to weather and scene factors; its drawback is an error that accumulates over time. Radar positioning is commonly used for short-range, special-scene positioning; because it is accurate, has strong penetrating power and is not easily affected by environmental factors, it is widely used on mid- and high-end vehicles. In environments such as underground garages, where visual and GPS positioning are severely degraded, radar positioning plays the major role, but it cannot be used over long distances and is costly, so other sensors generally serve only in an auxiliary positioning role.
To overcome the drawbacks of single-sensor positioning and improve positioning accuracy, the related art proposes multi-sensor fusion, in which the pose information of the vehicle (including position and orientation) is obtained from sensors of different sources such as a GPS, a camera and an IMU. However, fusing data from multiple sensors at the same time places a heavy computational burden on the vehicle.
Disclosure of Invention
In view of this, embodiments of the present invention provide a vehicle positioning method, a vehicle positioning device, and an electronic device, so as to overcome the high computational cost of vehicle positioning in the prior art.
According to a first aspect, an embodiment of the present invention provides a vehicle positioning method, including the steps of: acquiring first sensor data; obtaining first positioning uncertainty data according to the first sensor data; when the first positioning uncertainty data does not meet a first preset condition, acquiring second sensor data, and acquiring second positioning uncertainty data according to the second sensor data and the first sensor data; and when the second positioning uncertainty data meets a second preset condition, positioning the vehicle according to the first sensor data and the second sensor data.
Optionally, the method further comprises: when the second positioning uncertainty data does not meet a second preset condition, acquiring third sensor data; and obtaining third positioning uncertainty data according to the third sensor data, the second sensor data and the first sensor data, and positioning the vehicle.
Optionally, the method further comprises: and when the first positioning uncertainty data meet a first preset condition, positioning the vehicle according to the first sensor data.
Optionally, the first sensor data is data from a visual sensor, and obtaining first positioning uncertainty data according to the first sensor data includes: inputting the image data obtained by the visual sensor into a target neural network to obtain the first positioning uncertainty data.
Optionally, the second sensor data is data from a gyroscope, and obtaining second positioning uncertainty data according to the second sensor data and the first sensor data includes: obtaining a first motion relation during travel of the vehicle according to the second sensor data and the first sensor data; obtaining a first motion residual according to the first motion relation; and obtaining second positioning uncertainty data according to the first motion residual.
Optionally, the third sensor data is data from an accelerometer, and obtaining third positioning uncertainty data according to the third sensor data, the second sensor data, and the first sensor data includes: obtaining a second motion relation during travel of the vehicle according to the second sensor data, the third sensor data and the first sensor data; obtaining a second motion residual according to the second motion relation; and obtaining third positioning uncertainty data according to the second motion residual.
Optionally, the obtaining second positioning uncertainty data according to the first motion residual includes: obtaining a first deviation covariance according to the first motion residual; and obtaining second positioning uncertainty data according to the first deviation covariance. The first deviation covariance is obtained from the first motion residual by formula (1), which is given as an image in the original document, wherein E1 is the first deviation covariance, ρ is the scaling coefficient for the two residual types, the directional residual from the i-th frame image to the j-th frame image and its transpose appear, Σ_R,i,j is the covariance matrix obtained after adding the gyroscope, Σ_i,j and Σ_i are the covariance matrices of the pre-integration and reprojection errors respectively, the covariance matrix of the marginalized reprojection error in the i-th frame image is built from J_k, the Jacobian matrix of the reprojection error, and the residual vector of the marginalized reprojection error in the i-th frame image, together with its transpose, also appears.
Optionally, the obtaining third positioning uncertainty data according to the second motion residual includes: obtaining a second deviation covariance according to the second motion residual; and obtaining third positioning uncertainty data according to the second deviation covariance. The second deviation covariance is obtained from the second motion residual by formula (2), which is given as an image in the original document, wherein E2 is the second deviation covariance, the residuals of direction, velocity and position from the i-th frame image to the j-th frame image and the residual obtained from reprojection appear, ρ denotes the scaling coefficient applied to the two residual types, the covariance matrix obtained by pre-integrating the gyroscope data and the acceleration data appears, and the residual vector of the marginalized reprojection error in the i-th frame image, together with its transpose, also appears.
Optionally, the target neural network is constructed according to a PoseNet model, and the target neural network training process includes: inputting the sample into a pre-trained neural network to obtain posterior probability distribution of each network weight; determining the relative entropy between the approximate value of each layer network and the posterior probability distribution according to the posterior probability distribution; and finishing the training of the pre-trained neural network by taking the relative entropy between the minimized approximate value and the posterior probability distribution as a target to obtain the target neural network.
According to a second aspect, an embodiment of the present invention provides a vehicle positioning apparatus, including: the first sensor data acquisition module is used for acquiring first sensor data; the first positioning uncertainty data determining module is used for obtaining first positioning uncertainty data according to the first sensor data; the second positioning uncertainty data determining module is used for acquiring second sensor data when the first positioning uncertainty data does not meet a first preset condition, and acquiring second positioning uncertainty data according to the second sensor data and the first sensor data; and the first positioning module is used for positioning the vehicle according to the first sensor data and the second sensor data when the second positioning uncertainty data meets a second preset condition.
According to a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the vehicle positioning method according to the first aspect or any one of the embodiments of the first aspect when executing the program.
According to a fourth aspect, an embodiment of the present invention provides a storage medium having stored thereon computer instructions that, when executed by a processor, perform the steps of the vehicle localization method of the first aspect or any of the embodiments of the first aspect.
The technical scheme of the invention has the following advantages:
According to the vehicle positioning method/device provided by this embodiment, the positioning uncertainty is evaluated from the first sensor data, and the second sensor data is added only when the uncertainty does not meet the condition. Adding sensor information step by step reduces the computational load on the vehicle, and because each addition of sensor data is triggered only by the positioning uncertainty failing to meet the condition, this stepwise approach further reduces the computational load while guaranteeing positioning accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is a flowchart of one specific example of a vehicle positioning method in the embodiment of the invention;
FIG. 2 is a schematic block diagram of a specific example of a vehicle locating device in accordance with an embodiment of the present invention;
fig. 3 is a schematic block diagram of a specific example of an electronic device in the embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted", "connected", and "connection" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection or an indirect connection through an intermediate medium; as internal communication between two elements; or as a wireless or wired connection. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The embodiment provides a vehicle positioning method, as shown in fig. 1, including the following steps:
s101, acquiring first sensor data; the first sensor may be a visual sensor, such as a camera. The first sensor data may be image information captured by a camera during the travel of the vehicle.
S102, obtaining first positioning uncertainty data according to the first sensor data;
in the positioning uncertainty characterization positioning process, uncertainty of deviation degree between the vehicle position coordinates and the vehicle actual position coordinates obtained by multi-source information fusion prediction is used, namely the uncertainty degree of the predicted position and the actual position within an acceptable precision range after the vehicle position is predicted by multi-source information fusion.
The first positioning uncertainty data may be obtained from the first sensor data by inputting the image information captured by the camera into a target neural network, which may be built on a PoseNet model. The posterior probability distribution of the current network node weights is obtained from a sample set and the corresponding sample labels, this distribution is approximated by variational inference, and the positioning error is obtained. After training on a large amount of data from the Cambridge Landmarks data set, the positioning error and the uncertainty are found to be strongly correlated: the higher the positioning uncertainty, the larger the positioning error, and the relationship between the two is strongly linear, so the first positioning uncertainty data can be obtained from the positioning error.
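For illustration, the following minimal sketch estimates positioning uncertainty with Monte Carlo dropout on a small PoseNet-style regressor; the network architecture, layer sizes, dropout rate and sample count are illustrative assumptions rather than the exact model used here.

```python
# Minimal sketch: positioning uncertainty via Monte Carlo dropout on a
# PoseNet-style pose regressor. The architecture and all hyper-parameters
# below are illustrative assumptions, not the patent's exact network.
import torch
import torch.nn as nn

class TinyPoseNet(nn.Module):
    def __init__(self, p_drop: float = 0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.dropout = nn.Dropout(p_drop)   # kept active at inference time
        self.fc_xyz = nn.Linear(32, 3)      # position x
        self.fc_q = nn.Linear(32, 4)        # heading quaternion q

    def forward(self, img):
        h = self.dropout(self.features(img))
        return self.fc_xyz(h), self.fc_q(h)

@torch.no_grad()
def mc_dropout_uncertainty(model, img, n_samples: int = 20):
    """Sample the network with dropout enabled; return the mean position and
    a scalar uncertainty (trace of the sample covariance of the position)."""
    model.train()                                    # keep dropout stochastic
    xs = torch.stack([model(img)[0] for _ in range(n_samples)])
    mean = xs.mean(dim=0)
    uncertainty = xs.var(dim=0, unbiased=False).sum(dim=-1)
    return mean, uncertainty

if __name__ == "__main__":
    net = TinyPoseNet()
    frame = torch.rand(1, 3, 224, 224)               # stand-in camera frame
    pos, unc = mc_dropout_uncertainty(net, frame)
    print("predicted position:", pos, "uncertainty:", unc)
```

The scalar returned here plays the role of the first positioning uncertainty data that is compared against the first preset condition.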
S103, when the first positioning uncertainty data does not meet a first preset condition, acquiring second sensor data, and acquiring second positioning uncertainty data according to the second sensor data and the first sensor data;
Illustratively, the first preset condition may be that the first positioning uncertainty data is less than or equal to 0.12 (corresponding to a positioning error of 8 meters). The second sensor may be a gyroscope or an accelerometer; since the gyroscope is relatively accurate over short periods but drifts over long periods, while the accelerometer behaves in the opposite way, this embodiment takes the gyroscope as the second sensor. The gyroscope data may be acquired from the in-vehicle IMU.
The second positioning uncertainty data may be obtained according to the second sensor data and the first sensor data by:
firstly, data preprocessing is carried out, and a coordinate system corresponding to the IMU is converted into a camera coordinate system, specifically:
in order to estimate the pose, sparse key point information is needed, and the key points are obtained by preprocessing the image, and specifically include the following steps:
The image preprocessing mainly computes the reprojection error of the key points. Let the 3D position of the k-th landmark be X_k (this position information may come from GPS), and let its coordinates in the two-dimensional coordinate system of the i-th frame image be x_k^i. Minimizing the reprojection error

    {R_C^i*, p_C^i*} = argmin_{R,p} Σ_k || x_k^i − π(R·X_k + p) ||²

yields the optimal camera orientation pose parameters R_C^i* and p_C^i*, where π(·) is the operator that projects a 3D point onto the image.
The position and direction of the IMU at the i-th frame image are then obtained from:

    R_B^i = R_C^i · R_CB,    p_B^i = R_C^i · p_CB + s · p_C^i

where R_C^i and R_B^i respectively represent the direction pose parameters of the camera and the IMU in the i-th frame image, R_CB is the rotation matrix converting from the IMU coordinate system to the camera coordinate system, p_C^i and p_B^i are the positions of the camera and the IMU at the time of the i-th frame image, p_CB is the position of the IMU in the camera coordinate system, and s is a scale factor converting the camera position coordinates into metric units.
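A minimal sketch of this coordinate conversion, assuming the relation reconstructed above; the argument names are placeholders for the calibration quantities R_CB, p_CB and the scale factor s.

```python
# Sketch of the camera-to-IMU pose conversion: R_B = R_C @ R_CB and
# p_B = R_C @ p_CB + s * p_C (an assumption based on the definitions above).
import numpy as np

def camera_to_imu_pose(R_C, p_C, R_CB, p_CB, s):
    """R_C: 3x3 camera orientation; p_C: camera position (up to scale);
    R_CB: rotation from the IMU frame to the camera frame;
    p_CB: IMU position expressed in the camera frame;
    s: scale factor converting camera positions to metric units."""
    R_B = R_C @ R_CB
    p_B = R_C @ p_CB + s * p_C
    return R_B, p_B
```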
In the second step, a first motion relation during vehicle travel, including the direction of motion, is obtained from the second sensor data and the first sensor data:

    R_B^{i+1} = R_B^i · ΔR_{i,i+1} · Exp(J_ΔR^g · b^g)

where R_B^{i+1} is the direction of the IMU in frame i+1, R_B^i is the direction of the IMU in frame i, ΔR_{i,i+1} is the pre-integrated measurement of the IMU orientation, b^g is the gyroscope bias, J_ΔR^g is a Jacobian matrix that can be obtained from the pre-integration, and Exp(·) is the exponential map of the Lie group.
In the third step, a first motion residual is obtained from the first motion relation:

    r_ΔR,i,i+1 = Log( (ΔR_{i,i+1} · Exp(J_ΔR^g · b^g))^T · (R_B^i)^T · R_B^{i+1} )

where r_ΔR,i,i+1 is the directional residual (the first motion residual) from the i-th frame image to the (i+1)-th frame image, ΔR_{i,i+1} is the pre-integrated measurement of the IMU orientation, J_ΔR^g is a Jacobian matrix that can be obtained from the pre-integration, b^g is the gyroscope bias, R_B^{i+1} is the direction of the IMU in frame i+1, R_B^i is the direction of the IMU in frame i, (·)^T denotes the transpose, Exp(·) is the exponential map of the Lie group, and Log(·) is the corresponding logarithm map.
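The sketch below shows how such a rotation residual can be evaluated with the standard SO(3) exponential and logarithm maps; since the formula is given only as an image in the original, the residual form implemented here is an assumption based on the definitions above.

```python
# Sketch: gyroscope pre-integration rotation residual using SO(3) Exp/Log.
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def so3_exp(w):
    """Exponential map: rotation vector -> rotation matrix (Rodrigues)."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def so3_log(R):
    """Logarithm map: rotation matrix -> rotation vector."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-9:
        return np.zeros(3)
    w_hat = (R - R.T) * theta / (2.0 * np.sin(theta))
    return np.array([w_hat[2, 1], w_hat[0, 2], w_hat[1, 0]])

def rotation_residual(R_i, R_ip1, delta_R, J_g, b_g):
    """r = Log((ΔR · Exp(J_g·b_g))^T · R_i^T · R_{i+1})."""
    corrected = delta_R @ so3_exp(J_g @ b_g)
    return so3_log(corrected.T @ R_i.T @ R_ip1)
```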
Then, second positioning uncertainty data is obtained according to the first motion residual, which includes: step 1, obtaining a first deviation covariance (a deviation covariance of the visual information and the gyroscope) according to the first motion residual; and step 2, obtaining second positioning uncertainty data according to the first deviation covariance.
The first deviation covariance (the deviation covariance of the visual information and the gyroscope) in step 1 is solved by formula (1), which is given as an image in the original document. In that formula, E1 is the deviation covariance of the visual information and the gyroscope (the first deviation covariance); the directional residual r_ΔR,i,j from the i-th frame image to the j-th frame image and its transpose appear, weighted by Σ_R,i,j, the covariance matrix obtained after adding the gyroscope; ρ is the scaling coefficient applied to the two residual types and can be expressed by a Huber function; the covariance matrix of the marginalized reprojection error in the i-th frame image is built from J_k, the Jacobian matrix of the reprojection error; Σ_i,j and Σ_i are the covariance matrices of the pre-integration and reprojection errors, respectively; and the residual vector of the marginalized reprojection error in the i-th frame image, together with its transpose, also appears. The auxiliary quantities appearing in formula (1) are the direction and position of the linearization points, the position p_C^i of the camera at the time of the i-th frame image, and the optimal camera pose parameters R_C^i* and p_C^i*.
In step 2, in order to quantify the uncertainty, the trace of the deviation covariance of the visual information of the last frame image and the gyroscope (the first deviation covariance) may be calculated, and the mean square error (MSE) with respect to the expected value may be used as the second positioning uncertainty data.
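A small sketch of this quantification step, assuming the covariance matrix for the last frame is already available from the optimisation:

```python
# Sketch: scalar uncertainty as the trace of a (deviation) covariance matrix.
import numpy as np

def uncertainty_from_covariance(cov):
    """Return the trace of the covariance matrix as a scalar uncertainty."""
    return float(np.trace(cov))

# Example: a 3x3 covariance with variances 0.01, 0.02 and 0.03 gives 0.06.
print(uncertainty_from_covariance(np.diag([0.01, 0.02, 0.03])))
```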
And S104, when the second positioning uncertainty data meet a second preset condition, positioning the vehicle according to the first sensor data and the second sensor data.
Illustratively, the second preset condition may be that the second positioning uncertainty data is less than or equal to 0.1 (corresponding to a positioning error of 5 meters). The second preset condition is not limited in this embodiment and can be chosen by a person skilled in the art as needed. Positioning the vehicle according to the first sensor data and the second sensor data may be performed by minimizing formula (1), thereby obtaining the position information of the vehicle.
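Putting the stages together, the following sketch illustrates the staged decision logic of steps S101 to S104 plus the optional third stage; the sensor readers and solvers are hypothetical stubs, and the thresholds 0.12 and 0.1 are the example values mentioned above.

```python
# Sketch of the staged positioning flow. All functions below are stubs that
# stand in for the real sensor readers, uncertainty estimators and solvers.
def read_camera():                       return "image"   # first sensor data
def read_gyroscope():                    return "gyro"    # second sensor data
def read_accelerometer():                return "accel"   # third sensor data
def uncertainty_vision(img):             return 0.20      # stub uncertainty
def uncertainty_vision_gyro(img, g):     return 0.08      # stub uncertainty
def solve_vision(img):                   return (0.0, 0.0)
def solve_vision_gyro(img, g):           return (0.1, 0.1)
def solve_vision_gyro_accel(img, g, a):  return (0.2, 0.2)

def locate_vehicle(first_threshold=0.12, second_threshold=0.1):
    image = read_camera()                                 # S101
    if uncertainty_vision(image) <= first_threshold:      # S102: first condition
        return solve_vision(image)
    gyro = read_gyroscope()                               # S103: add gyroscope
    if uncertainty_vision_gyro(image, gyro) <= second_threshold:
        return solve_vision_gyro(image, gyro)             # S104
    accel = read_accelerometer()                          # optional third stage
    return solve_vision_gyro_accel(image, gyro, accel)

# With the stub values above, the second stage is selected.
print(locate_vehicle())
```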
According to the vehicle positioning method provided by this embodiment, the positioning uncertainty is evaluated from the first sensor data, and the second sensor data is added only when the uncertainty does not meet the condition. Adding sensor information step by step reduces the computational load on the vehicle, and because each addition of sensor data is triggered only by the positioning uncertainty failing to meet the condition, this stepwise approach further reduces the computational load while guaranteeing positioning accuracy.
As an optional implementation manner of this embodiment, the vehicle positioning method further includes:
firstly, when second positioning uncertainty data does not meet a second preset condition, acquiring third sensor data;
the third sensor may be a gyroscope or an accelerometer, the present embodiment is described by taking the third sensor as an accelerometer, and the manner of acquiring the accelerometer data may be to acquire the accelerometer data in the in-vehicle IMU.
And secondly, obtaining third positioning uncertainty data according to the third sensor data, the second sensor data and the first sensor data and positioning the vehicle.
Illustratively, the manner of deriving the third positioning uncertainty data from the third sensor data, the second sensor data, and the first sensor data may comprise the steps of:
the first step is as follows: and obtaining a second motion relation in the running process of the vehicle according to the second sensor data, the third sensor data and the first sensor data, wherein the motion relation comprises direction, speed and position.
Figure BDA0002987451880000101
Figure BDA0002987451880000105
Figure BDA0002987451880000102
Wherein the content of the first and second substances,
Figure BDA0002987451880000103
the direction of the IMU in frame i +1,
Figure BDA0002987451880000104
is the direction of the IMU in the ith frame, Δ Ri,i+1As a result of the pre-measurement of the orientation of the IMU,
Figure BDA0002987451880000111
the deviation of the gyroscope is represented by a deviation,
Figure BDA0002987451880000112
representing a jacobian matrix, which can be obtained by pre-fusion, Exp (-) is an exponential mapping of lie groups,
Figure BDA0002987451880000113
the velocity of the IMU in the (i + 1) th frame,
Figure BDA0002987451880000114
for the velocity of the IMU in frame i, gw represents the gravity vector, Δ t represents the time difference between frame i and frame i +1, Δ vi,i+1Representing the result of a pre-measurement of the velocity of the IMU,
Figure BDA0002987451880000115
which is a jacobian matrix, can be obtained by pre-fusion,
Figure BDA0002987451880000116
the deviation of the accelerometer is indicated and,
Figure BDA0002987451880000117
indicating the position of the IMU in the i +1 th frame,
Figure BDA0002987451880000118
is the position of the IMU in the ith frame, Δ pi,i+1The result of the prediction of the position of the IMU is shown.
Secondly, a second motion residual, comprising the residuals of direction, velocity and position, is obtained from the second motion relation:

    r_ΔR,i,i+1 = Log( (ΔR_{i,i+1} · Exp(J_ΔR^g · b^g))^T · (R_B^i)^T · R_B^{i+1} )
    r_Δv,i,i+1 = (R_B^i)^T · (v_B^{i+1} − v_B^i − g_w · Δt) − (Δv_{i,i+1} + J_Δv^a · b^a)
    r_Δp,i,i+1 = (R_B^i)^T · (p_B^{i+1} − p_B^i − v_B^i · Δt − (1/2) · g_w · Δt²) − (Δp_{i,i+1} + J_Δp^a · b^a)

where r_ΔR,i,i+1 is the directional residual from the i-th frame image to the (i+1)-th frame image, r_Δv,i,i+1 is the velocity residual from the i-th frame image to the (i+1)-th frame image, and r_Δp,i,i+1 is the position residual from the i-th frame image to the (i+1)-th frame image.
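For illustration, the velocity and position residuals can be evaluated as in the sketch below; because the exact formulas are given only as images, the bias-correction terms follow the standard pre-integration form and are an assumption.

```python
# Sketch: velocity and position residuals of the second motion relation.
import numpy as np

def velocity_residual(R_i, v_i, v_ip1, g_w, dt, delta_v, J_a_v, b_a):
    """r_v = R_i^T (v_{i+1} - v_i - g_w*dt) - (Δv + J_a_v·b_a)."""
    return R_i.T @ (v_ip1 - v_i - g_w * dt) - (delta_v + J_a_v @ b_a)

def position_residual(R_i, p_i, p_ip1, v_i, g_w, dt, delta_p, J_a_p, b_a):
    """r_p = R_i^T (p_{i+1} - p_i - v_i*dt - 0.5*g_w*dt^2) - (Δp + J_a_p·b_a)."""
    return R_i.T @ (p_ip1 - p_i - v_i * dt - 0.5 * g_w * dt**2) \
           - (delta_p + J_a_p @ b_a)
```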
Thirdly, third positioning uncertainty data is obtained according to the second motion residual, which includes: step 1, obtaining a second deviation covariance (the deviation covariance of the visual information, the gyroscope and the accelerometer) according to the second motion residual; and step 2, obtaining third positioning uncertainty data according to the second deviation covariance.
The second deviation covariance (the deviation covariance of the visual information, the gyroscope and the accelerometer) in step 1 is solved by formula (2), which is given as an image in the original document. In that formula, E2 is the deviation covariance of the visual information, the gyroscope and the accelerometer (the second deviation covariance); the residuals of direction, velocity and position from the i-th frame image to the j-th frame image and the residual obtained from reprojection appear, with ρ denoting the scaling coefficient applied to the two residual types; the covariance matrix obtained by pre-integrating the gyroscope data and the acceleration data appears; and the residual vector of the marginalized reprojection error in the i-th frame image, together with its transpose, also appears.
In step 2, in order to quantify the uncertainty, the trace of the deviation covariance of the visual information of the last frame image, the gyroscope and the accelerometer (the second deviation covariance) may be calculated, and the mean square error (MSE) with respect to the expected value may be used as the third positioning uncertainty data.
Positioning the vehicle according to the third sensor data, the second sensor data and the first sensor data may be performed by minimizing formula (2), thereby obtaining the position information. According to the vehicle positioning method provided by this embodiment, the positioning accuracy is improved by gradually adding data from multiple kinds of sensors.
As an optional implementation manner of this embodiment, the vehicle positioning method further includes: and when the first positioning uncertainty data meets a first preset condition, positioning the vehicle according to the first sensor data. The vehicle location based on the first sensor data may be performed by inputting the first sensor data into a target neural network to obtain vehicle location information.
As an optional implementation manner of this embodiment, the target neural network is constructed according to a PoseNet model, and the target neural network training process includes: inputting the sample into a pre-trained neural network to obtain posterior probability distribution of each network weight; determining the relative entropy between the approximate value of each layer network and the posterior probability distribution according to the posterior probability distribution; and finishing the training of the pre-trained neural network by taking the relative entropy between the minimized approximate value and the posterior probability distribution as a target to obtain the target neural network.
Illustratively, the vehicle-mounted camera collects image information, which is input into the constructed PoseNet neural network for training. The model defines the motion state of the vehicle by x (the vehicle position) and q (the heading direction); the loss function of the network is given as an image in the original document, where theta denotes a parameter used to optimize the position information and the direction information simultaneously during training. The model is trained by stochastic gradient descent, so that good results can be obtained even with relatively small samples.
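Since the loss itself is given only as an image, the following sketch uses a PoseNet-style loss in which theta is assumed to be the weight that balances the position and orientation terms.

```python
# Sketch of a PoseNet-style loss: L2 position error plus a weighted
# quaternion orientation error. Treating `theta` as the balance weight is
# an assumption; the original shows the loss only as an image.
import torch

def pose_loss(x_pred, q_pred, x_true, q_true, theta: float = 100.0):
    pos_err = torch.norm(x_pred - x_true, dim=-1)
    q_unit = q_true / torch.norm(q_true, dim=-1, keepdim=True)
    ori_err = torch.norm(q_pred - q_unit, dim=-1)
    return (pos_err + theta * ori_err).mean()
```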
The posterior probability distribution of the current network node weights is solved from the data set obtained in training and the corresponding labels, dropout sampling is used during training, and the uncertainty of the current vehicle position is obtained with a Bayesian model.
First, the posterior probability distribution of the current network weights W is obtained from the training data set X and the labels Y, namely:

p(W|X,Y);

variational inference is then applied to minimize the relative entropy between the approximation q(W) and the posterior probability distribution:

KL(q(W)||p(W|X,Y));

wherein the approximation for each layer satisfies:

b_i,j ~ Bernoulli(p_i), j = 1, 2, ..., n-1
W_i = M_i·diag(b_i);

wherein M_i is the variational coefficient, and the approximate distribution of each layer satisfies a Bernoulli distribution.
Finally, the objective loss function, which is exactly the relative entropy between the approximation and the posterior probability distribution, is minimized.
The detailed pseudocode is provided as images in the original document.
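In place of that image, the sketch below outlines a training loop consistent with the description above: stochastic gradient descent with dropout kept active, so that the learned weights define the Bernoulli variational approximation q(W). It reuses the TinyPoseNet and pose_loss sketches from earlier, and the optimiser settings are illustrative assumptions.

```python
# Sketch: training with the Bernoulli-dropout variational approximation of
# p(W|X,Y) (MC-dropout interpretation). Data, optimiser and epoch count are
# illustrative assumptions.
import torch

def train_bayesian_posenet(model, loader, epochs: int = 10, lr: float = 1e-4):
    opt = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=1e-4)
    model.train()                       # dropout active: samples from q(W)
    for _ in range(epochs):
        for img, x_true, q_true in loader:
            x_pred, q_pred = model(img)
            loss = pose_loss(x_pred, q_pred, x_true, q_true)  # sketched above
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```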
the present embodiment provides a vehicle positioning apparatus, as shown in fig. 2, including:
a first sensor data acquisition module 201, configured to acquire first sensor data; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
A first positioning uncertainty data determining module 202, configured to obtain first positioning uncertainty data according to the first sensor data; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
A second positioning uncertainty data determining module 203, configured to obtain second sensor data when the first positioning uncertainty data does not meet a first preset condition, and obtain second positioning uncertainty data according to the second sensor data and the first sensor data; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
The first positioning module 204 is configured to, when the second positioning uncertainty data meets a second preset condition, perform vehicle positioning according to the first sensor data and the second sensor data. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the vehicle positioning apparatus further includes:
the third sensor data determining module is used for acquiring third sensor data when the second positioning uncertainty data does not meet a second preset condition; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
And the second positioning module is used for obtaining third positioning uncertainty data according to the third sensor data, the second sensor data and the first sensor data and positioning the vehicle. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the method further includes: and the third positioning module is used for positioning the vehicle according to the first sensor data when the first positioning uncertainty data meets a first preset condition. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the first sensor data is data from a vision sensor, and the first positioning uncertainty data determining module 202 includes: a first positioning uncertainty data determining submodule, configured to input the image data obtained by the vision sensor into a target neural network to obtain the first positioning uncertainty data. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the second sensor data is data from a gyroscope, and the second positioning uncertainty data determining module 203 includes:
the first motion relation determining module is used for obtaining a first motion relation in the running process of the vehicle according to the second sensor data and the first sensor data; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
The first motion residual determining module is used for obtaining a first motion residual according to the first motion relation; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
And the second positioning uncertainty data submodule is used for obtaining second positioning uncertainty data according to the first motion residual. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the second positioning module includes:
the second motion relation determining module is used for obtaining a second motion relation in the running process of the vehicle according to the second sensor data, the third sensor data and the first sensor data; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
The second motion residual determining module is used for obtaining a second motion residual according to the second motion relation; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
And the second positioning submodule is used for obtaining third positioning uncertainty data according to the second motion residual. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the second positioning uncertainty data submodule includes:
the first deviation covariance determination module is used for obtaining a first deviation covariance according to the first motion residual; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
The second positioning uncertainty data calculation module is used for obtaining second positioning uncertainty data according to the first deviation covariance; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
A first deviation covariance determination module that performs the operation of formula (1), which is given as an image in the original document; in that formula, E1 is the first deviation covariance, the directional residual from the i-th frame image to the j-th frame image and its transpose appear, Σ_R,i,j is the covariance matrix obtained after adding the gyroscope, and the residual vector of the marginalized reprojection error in the i-th frame image, together with its transpose, also appears. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the second positioning sub-module includes:
the second deviation covariance determination module is used for obtaining a second deviation covariance according to the second motion residual; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
A third positioning uncertainty data calculation module, configured to obtain third positioning uncertainty data according to the second deviation covariance; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
A second deviation covariance determination module that performs the operation of formula (2), which is given as an image in the original document; in that formula, E2 is the second deviation covariance, the residuals of direction, velocity and position from the i-th frame image to the j-th frame image and the residual obtained from reprojection appear, ρ denotes the scaling coefficient applied to the two residual types, the covariance matrix obtained by pre-integrating the gyroscope data and the acceleration data appears, and the residual vector of the marginalized reprojection error in the i-th frame image, together with its transpose, also appears. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the target neural network is constructed according to a PoseNet model, and the first positioning uncertainty data determining sub-module includes:
the posterior probability distribution determining module is used for inputting the sample to a pre-trained neural network to obtain the posterior probability distribution of each network weight; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
The relative entropy determining module is used for determining the relative entropy between the approximate value of each layer of the network and the posterior probability distribution according to the posterior probability distribution; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
And the training module is used for finishing the training of the pre-trained neural network by taking the relative entropy between the minimized approximate value and the posterior probability distribution as a target to obtain the target neural network. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
The embodiment of the present application also provides an electronic device, as shown in fig. 3, including a processor 310 and a memory 320, where the processor 310 and the memory 320 may be connected by a bus or in other manners.
Processor 310 may be a Central Processing Unit (CPU). The Processor 310 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, or any combination thereof.
The memory 320, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the vehicle positioning method in embodiments of the present invention. The processor executes various functional applications and data processing of the processor by executing non-transitory software programs, instructions, and modules stored in the memory.
The memory 320 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor, and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 320 may optionally include memory located remotely from the processor, which may be connected to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 320 and, when executed by the processor 310, perform a vehicle localization method as in the embodiment shown in FIG. 1.
The details of the electronic device may be understood with reference to the corresponding related description and effects in the embodiment shown in fig. 1, and are not described herein again.
The present embodiment also provides a computer storage medium storing computer-executable instructions that can execute the vehicle positioning method in any of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid state drive (SSD), or the like; the storage medium may also comprise a combination of the above kinds of memories.
It should be understood that the above examples are given only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications in different forms will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to list all embodiments exhaustively here, and obvious variations or modifications derived therefrom remain within the protection scope of the invention.

Claims (12)

1. A vehicle positioning method, comprising the steps of:
acquiring first sensor data;
obtaining first positioning uncertainty data according to the first sensor data;
when the first positioning uncertainty data does not meet a first preset condition, acquiring second sensor data, and acquiring second positioning uncertainty data according to the second sensor data and the first sensor data;
and when the second positioning uncertainty data meets a second preset condition, positioning the vehicle according to the first sensor data and the second sensor data.
2. The method of claim 1, further comprising:
when the second positioning uncertainty data does not meet a second preset condition, acquiring third sensor data;
and obtaining third positioning uncertainty data according to the third sensor data, the second sensor data and the first sensor data, and positioning the vehicle.
3. The method of claim 1, further comprising: and when the first positioning uncertainty data meet a first preset condition, positioning the vehicle according to the first sensor data.
4. The method of claim 1, wherein the first sensor data is data from a vision sensor, and wherein deriving the first positioning uncertainty data from the first sensor data comprises: inputting the image data obtained by the vision sensor into a target neural network to obtain the first positioning uncertainty data.
5. The method of claim 4, wherein the second sensor data is data from a gyroscope, and wherein deriving second positioning uncertainty data from the second sensor data and the first sensor data comprises:
obtaining a first motion relation in the running process of the vehicle according to the second sensor data and the first sensor data;
obtaining a first motion residual according to the first motion relation;
and obtaining second positioning uncertainty data according to the first motion residual.
6. The method of claim 2, wherein the third sensor data is data from an accelerometer, and wherein deriving third positioning uncertainty data from the third sensor data, the second sensor data, and the first sensor data comprises:
obtaining a second motion relation in the running process of the vehicle according to the second sensor data, the third sensor data and the first sensor data;
obtaining a second motion residual according to the second motion relation;
and obtaining third positioning uncertainty data according to the second motion residual.
7. The method of claim 5, wherein deriving second positioning uncertainty data from the first motion residuals comprises:
obtaining a first deviation covariance according to the first motion residual;
obtaining second positioning uncertainty data according to the first deviation covariance;
obtaining a first deviation covariance according to the first motion residual by formula (1), which is given as an image in the original document, wherein E1 is the first deviation covariance, ρ is the scaling coefficient for the two residual types, the directional residual from the i-th frame image to the j-th frame image and its transpose appear, Σ_R,i,j is the covariance matrix obtained after adding the gyroscope, Σ_i,j and Σ_i are the covariance matrices of the pre-integration and reprojection errors respectively, the covariance matrix of the marginalized reprojection error in the i-th frame image is built from J_k, the Jacobian matrix of the reprojection error, and the residual vector of the marginalized reprojection error in the i-th frame image, together with its transpose, also appears.
8. The method of claim 6, wherein said deriving third positioning uncertainty data from said second motion residuals comprises:
obtaining a second deviation covariance according to the second motion residual;
obtaining third positioning uncertainty data according to the second deviation covariance;
obtaining a second deviation covariance according to the second motion residual by formula (2), which is given as an image in the original document, wherein E2 is the second deviation covariance, the residuals of direction, velocity and position from the i-th frame image to the j-th frame image and the residual obtained from reprojection appear, ρ denotes the scaling coefficient applied to the two residual types, the covariance matrix obtained by pre-integrating the gyroscope data and the acceleration data appears, and the residual vector of the marginalized reprojection error in the i-th frame image, together with its transpose, also appears.
9. The method of claim 4, wherein the target neural network is constructed according to a PoseNet model, and wherein the target neural network training process comprises:
inputting the sample into a pre-trained neural network to obtain posterior probability distribution of each network weight;
determining the relative entropy between the approximate value of each layer network and the posterior probability distribution according to the posterior probability distribution;
and finishing the training of the pre-trained neural network by taking the relative entropy between the minimized approximate value and the posterior probability distribution as a target to obtain the target neural network.
10. A vehicle positioning device, comprising:
the first sensor data acquisition module is used for acquiring first sensor data;
the first positioning uncertainty data determining module is used for obtaining first positioning uncertainty data according to the first sensor data;
the second positioning uncertainty data determining module is used for acquiring second sensor data when the first positioning uncertainty data does not meet a first preset condition, and acquiring second positioning uncertainty data according to the second sensor data and the first sensor data;
and the first positioning module is used for positioning the vehicle according to the first sensor data and the second sensor data when the second positioning uncertainty data meets a second preset condition.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the vehicle positioning method according to any one of claims 1-9.
12. A storage medium having computer instructions stored thereon, wherein the instructions, when executed by a processor, perform the steps of the vehicle positioning method according to any one of claims 1-9.
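
The deviation covariances E_1 (claim 7) and E_2 (claim 8) are defined by formula images that are not reproduced in the text above. The sketch below therefore only illustrates a structure commonly used for such quantities in visual-inertial odometry, namely a Mahalanobis-weighted IMU pre-integration residual plus a re-projection residual scaled by ρ. The function name, all numerical values, and the exact weighting are assumptions, not the patent's formulas.

import numpy as np

# Illustrative only: assumes each deviation covariance is the sum of a
# Mahalanobis-weighted IMU pre-integration residual and a rho-scaled
# re-projection residual; the claims' exact expressions are given as images.

def deviation_covariance(r_imu, sigma_imu, r_reproj, sigma_reproj, rho=1.0):
    """Return r_imu^T Sigma_imu^-1 r_imu + rho * r_reproj^T Sigma_reproj^-1 r_reproj."""
    e_imu = r_imu @ np.linalg.inv(sigma_imu) @ r_imu
    e_rep = r_reproj @ np.linalg.inv(sigma_reproj) @ r_reproj
    return e_imu + rho * e_rep

# Claim-7 style input: direction residual only (gyroscope pre-integration).
r_dir = np.array([0.01, -0.02, 0.005])        # hypothetical direction residual, frame i -> j
sigma_R = np.diag([1e-4, 1e-4, 1e-4])         # gyroscope pre-integration covariance
r_reproj = np.array([0.8, -0.5])              # hypothetical marginalized re-projection residual (pixels)
sigma_reproj = np.diag([1.0, 1.0])            # re-projection error covariance
E1 = deviation_covariance(r_dir, sigma_R, r_reproj, sigma_reproj, rho=0.5)

# Claim-8 style input: direction, velocity and position residuals (gyro + accel).
r_full = np.concatenate([r_dir,
                         [0.03, 0.01, -0.02],     # hypothetical velocity residual
                         [0.05, -0.04, 0.02]])    # hypothetical position residual
sigma_full = np.eye(9) * 1e-3                     # gyro + accel pre-integration covariance
E2 = deviation_covariance(r_full, sigma_full, r_reproj, sigma_reproj, rho=0.5)
print(E1, E2)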
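
Claim 9 trains the PoseNet-based target network by minimizing the relative entropy (KL divergence) between a per-layer approximation and the posterior probability distribution of the network weights. The PyTorch sketch below assumes a Bayes-by-Backprop style diagonal Gaussian approximation for a single layer; the layer sizes, the stand-in posterior, and the class name BayesianLinear are hypothetical and not taken from the patent.

import torch
from torch import nn
from torch.distributions import Normal, kl_divergence

# Minimal sketch: one Bayesian layer keeps a Gaussian approximation (mu, sigma)
# over its weights and is trained by minimizing the KL divergence to a target
# posterior, as described in claim 9.

class BayesianLinear(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(out_f, in_f))
        self.log_sigma = nn.Parameter(torch.full((out_f, in_f), -3.0))

    def weight_dist(self):
        return Normal(self.mu, self.log_sigma.exp())

    def forward(self, x):
        w = self.weight_dist().rsample()   # reparameterized weight sample
        return x @ w.t()

layer = BayesianLinear(128, 7)   # e.g. a 7-DoF pose head (translation + quaternion)

# Stand-in posterior over the weights, e.g. obtained from the pre-trained network.
posterior = Normal(torch.zeros(7, 128), torch.full((7, 128), 0.1))

opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
for _ in range(100):
    kl = kl_divergence(layer.weight_dist(), posterior).sum()  # relative entropy objective
    opt.zero_grad()
    kl.backward()
    opt.step()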
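
Claim 10 wires its modules into a cascade: acquire first sensor data, estimate first positioning uncertainty, add second sensor data when the first preset condition is not met, and position the vehicle once the second preset condition is met. The sketch below mirrors that control flow only; the threshold values, the callable names, and the branch that positions from the first sensor data alone are assumptions rather than claim text.

# Hypothetical cascade mirroring the modules of claim 10.

FIRST_PRESET = 0.10    # assumed "first preset condition" on uncertainty
SECOND_PRESET = 0.05   # assumed "second preset condition" on uncertainty

def locate_vehicle(acquire_first, acquire_second, estimate_uncertainty, fuse_and_position):
    first = acquire_first()                        # first sensor data acquisition module
    u1 = estimate_uncertainty([first])             # first positioning uncertainty data determining module
    if u1 <= FIRST_PRESET:
        return fuse_and_position([first])          # assumed: first sensor data already sufficient
    second = acquire_second()                      # second positioning uncertainty data determining module
    u2 = estimate_uncertainty([first, second])     # re-estimate with both sensor data
    if u2 <= SECOND_PRESET:
        return fuse_and_position([first, second])  # first positioning module
    return None                                    # further fallback left to other claims

# Toy usage with stand-in callables.
if __name__ == "__main__":
    pose = locate_vehicle(
        acquire_first=lambda: {"imu": [0.0, 0.0, 9.8]},
        acquire_second=lambda: {"image": "frame_0001"},
        estimate_uncertainty=lambda data: 0.12 if len(data) == 1 else 0.04,
        fuse_and_position=lambda data: (12.34, 56.78),
    )
    print(pose)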
CN202110305408.XA 2021-03-22 2021-03-22 Vehicle positioning method and device and electronic equipment Active CN113155121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110305408.XA CN113155121B (en) 2021-03-22 2021-03-22 Vehicle positioning method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110305408.XA CN113155121B (en) 2021-03-22 2021-03-22 Vehicle positioning method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN113155121A true CN113155121A (en) 2021-07-23
CN113155121B CN113155121B (en) 2024-04-02

Family

ID=76887947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110305408.XA Active CN113155121B (en) 2021-03-22 2021-03-22 Vehicle positioning method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113155121B (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2966477A1 (en) * 2014-07-09 2016-01-13 ANavS GmbH Method for determining the position and attitude of a moving object using low-cost receivers
US20180081027A1 (en) * 2016-09-21 2018-03-22 Pinhas Ben-Tzvi Linear optical sensor arrays (losa) tracking system for active marker based 3d motion tracking
US20180231385A1 (en) * 2016-10-25 2018-08-16 Massachusetts Institute Of Technology Inertial Odometry With Retroactive Sensor Calibration
CN107869989A (en) * 2017-11-06 2018-04-03 东北大学 A kind of localization method and system of the fusion of view-based access control model inertial navigation information
WO2020048623A1 (en) * 2018-09-07 2020-03-12 Huawei Technologies Co., Ltd. Estimation of a pose of a robot
WO2020155616A1 (en) * 2019-01-29 2020-08-06 浙江省北大信息技术高等研究院 Digital retina-based photographing device positioning method
CN109991636A (en) * 2019-03-25 2019-07-09 启明信息技术股份有限公司 Map constructing method and system based on GPS, IMU and binocular vision
CN111210477A (en) * 2019-12-26 2020-05-29 深圳大学 Method and system for positioning moving target
CN111595333A (en) * 2020-04-26 2020-08-28 武汉理工大学 Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion
CN111609868A (en) * 2020-05-29 2020-09-01 电子科技大学 Visual inertial odometer method based on improved optical flow method
CN111795686A (en) * 2020-06-08 2020-10-20 南京大学 Method for positioning and mapping mobile robot
CN111739063A (en) * 2020-06-23 2020-10-02 郑州大学 Electric power inspection robot positioning method based on multi-sensor fusion
CN111750853A (en) * 2020-06-24 2020-10-09 国汽(北京)智能网联汽车研究院有限公司 Map establishing method, device and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
刘洪剑;王耀南;谭建豪;李树帅;钟杭: "Design and application of an integrated navigation system for rotor UAVs", 传感技术学报 (Chinese Journal of Sensors and Actuators), no. 02, 15 February 2017 (2017-02-15) *
夏凌楠;张波;王营冠;魏建明: "Robot localization based on inertial sensors and visual odometry", 仪器仪表学报 (Chinese Journal of Scientific Instrument), no. 01, 15 January 2013 (2013-01-15), pages 110-111 *
夏凌楠;张波;王营冠;魏建明: "Robot localization based on inertial sensors and visual odometry", 仪器仪表学报 (Chinese Journal of Scientific Instrument), no. 01, pages 110-111 *
敖龙辉;郭杭: "Stereo vision and inertial navigation fusion positioning in indoor environments", 测绘通报 (Bulletin of Surveying and Mapping), no. 12 *
敖龙辉;郭杭: "Stereo vision and inertial navigation fusion positioning in indoor environments", 测绘通报 (Bulletin of Surveying and Mapping), no. 12, 25 December 2019 (2019-12-25) *

Also Published As

Publication number Publication date
CN113155121B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
CN113945206B (en) Positioning method and device based on multi-sensor fusion
CN110243358B (en) Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system
CN108731670B (en) Inertial/visual odometer integrated navigation positioning method based on measurement model optimization
CN111947671B (en) Method, apparatus, computing device and computer-readable storage medium for positioning
CN109887057B (en) Method and device for generating high-precision map
Alonso et al. Accurate global localization using visual odometry and digital maps on urban environments
CN112113574B (en) Method, apparatus, computing device and computer-readable storage medium for positioning
US20200364883A1 (en) Localization of a mobile unit by means of a multi-hypothesis kalman filter method
CN104729506A (en) Unmanned aerial vehicle autonomous navigation positioning method with assistance of visual information
CN109059907B (en) Trajectory data processing method and device, computer equipment and storage medium
CN109596121B (en) Automatic target detection and space positioning method for mobile station
CN112629544B (en) Vehicle positioning method and device based on lane line
CN113920198B (en) Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment
CN110596741A (en) Vehicle positioning method and device, computer equipment and storage medium
CN114755662A (en) Calibration method and device for laser radar and GPS with road-vehicle fusion perception
CN111241224A (en) Method, system, computer device and storage medium for target distance estimation
CN113252051A (en) Map construction method and device
CN114111818A (en) Universal visual SLAM method
CN114964276A (en) Dynamic vision SLAM method fusing inertial navigation
CN113405555B (en) Automatic driving positioning sensing method, system and device
CN112577479A (en) Multi-sensor fusion vehicle positioning method and device based on map element data
CN112446915A (en) Picture-establishing method and device based on image group
Verentsov et al. Bayesian localization for autonomous vehicle using sensor fusion and traffic signs
CN113155121B (en) Vehicle positioning method and device and electronic equipment
Verentsov et al. Bayesian framework for vehicle localization using crowdsourced data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant