CN113483769A - Particle filter based vehicle self-positioning method, system, device and medium - Google Patents


Info

Publication number
CN113483769A
CN113483769A (application CN202110941721.2A)
Authority
CN
China
Prior art keywords
vehicle
model
self
motion system
particle filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110941721.2A
Other languages
Chinese (zh)
Other versions
CN113483769B (en)
Inventor
Kun Jiang (江昆)
Diange Yang (杨殿阁)
Runze Feng (冯润泽)
Weiguang Yu (于伟光)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN202110941721.2A
Publication of CN113483769A
Application granted
Publication of CN113483769B
Status: Active

Classifications

    • G01C21/28 — Navigation in a road network with correlation of data from several navigational instruments
    • G01C21/165 — Dead reckoning by integrating acceleration or speed (inertial navigation) combined with non-inertial navigation instruments
    • G01C21/1656 — Inertial navigation combined with passive imaging devices, e.g. cameras
    • G01S19/45 — Determining position by combining satellite radio beacon measurements with a supplementary measurement
    • G01S19/47 — Determining position with the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
    • G06N3/044 — Recurrent networks, e.g. Hopfield networks
    • G06N3/045 — Combinations of networks
    • G06N3/08 — Learning methods
    • G06T7/277 — Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/20081 — Training; Learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/30248 — Vehicle exterior or interior
    • G06T2207/30252 — Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)

Abstract

The invention relates to a particle filter based vehicle self-positioning method, system, device and medium, comprising the following steps: designing a measurement model of the particle filter, the measurement model including a direct measurement model and a design measurement model; training a pre-constructed vehicle motion system model on collected real-vehicle data to obtain a trained vehicle motion system model; and collecting the vehicle motion state with the combined inertial navigation and the vision sensor, and estimating the ego-vehicle pose from the vehicle motion system model and the measurement model, thereby positioning the vehicle. By introducing the lateral intercepts from the vehicle to the left and right lane lines observed by the vision module as additional observations, both the position and the heading of the vehicle can be estimated. The invention can be widely applied in the field of vehicle self-positioning.

Description

Particle filter based vehicle self-positioning method, system, device and medium
Technical Field
The invention relates to the field of vehicle self-positioning, and in particular to a method, system, device and storage medium for self-positioning a vehicle with a particle filter using the sensors already present in mass-production vehicles (a global navigation satellite system, an inertial measurement unit and a vision sensor).
Background
Accurate self-positioning of the vehicle is a precondition for Level 3 (L3) and higher levels of automated driving. The currently mature high-precision vehicle self-positioning schemes mainly comprise the following four: 1. a combined inertial navigation positioning system composed of a GNSS (Global Navigation Satellite System) and an IMU (Inertial Measurement Unit); 2. a fusion positioning system combining inertial navigation with RTK (Real-Time Kinematic); 3. lidar matched against a point-cloud high-precision map; 4. a vision camera matched against a high-precision map. Because the high-precision IMU, the RTK-aided combined inertial navigation and the lidar used in the first three schemes are expensive, many automobile manufacturers avoid them in mass-production vehicles; those schemes therefore mostly remain at the research stage and are not applied in mass-production models. The vision sensor used in scheme 4 is inexpensive and provides rich semantic and visual information; matched against a high-precision map it enables high-precision positioning, and the map itself is cheap to use, so scheme 4 is well suited to mass-production models.
Current matching positioning methods combining a vision camera with a high-precision map can be divided into optimization-based and filtering-based methods, introduced respectively as follows:
The optimization-based methods search for the ego-vehicle pose that minimizes the difference between visually detected features and map features: a large number of visual feature points are first extracted from the visual information, the extracted feature points are matched against a point-cloud map, and the vehicle position and attitude are solved by iteratively computing the relative pose between the two point clouds. However, visual feature extraction mostly relies on deep learning, and the iterative solution runs over a large number of points; both steps are computationally demanding.
The filtering-based methods obtain a prior probability distribution of the ego-vehicle pose from a vehicle motion model, then use the observation result to compute the posterior probability distribution and estimate the pose. They are computationally cheap; however, the popular Kalman filter algorithm is only suitable for linear observation models, whereas observation models built on vision sensors are often nonlinear, and Chuihao et al., using a Kalman filter with a linear vision-based observation model, could only estimate the vehicle position, not its attitude. In addition, filtering-based methods require a vehicle motion model; kinematic models such as the constant-velocity constant-turn-rate (CVCT) model or the constant-acceleration model are commonly used, but they break down when the sampling interval grows. For example, the vision-map matching positioning method proposed by Jae Kyu Suhr sets the filtering frequency to 100 Hz, while the sampling frequency of many vision sensors is about 30 Hz, so motion-model failure is also an obstacle to practical application of such algorithms.
Disclosure of Invention
In view of the above problems, it is an object of the present invention to provide a particle filter based vehicle self-positioning method, system, device and medium for high-precision positioning of vehicles using sensors common in mass-production vehicles (GNSS, IMU and camera) in combination with a vector high-precision map.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect of the invention, a particle filter based vehicle self-positioning method is provided, which comprises the following steps:
designing a measurement model of the particle filter, the measurement model including a direct measurement model and a design measurement model;
training a pre-constructed vehicle motion system model based on the collected real vehicle data to obtain a trained vehicle motion system model;
collecting the vehicle motion state with the combined inertial navigation and the vision sensor, and estimating the ego-vehicle pose from the vehicle motion system model and the measurement model, thereby positioning the vehicle.
Preferably, the direct measurement model is based on the combined inertial navigation formed by fusing a GPS and an IMU, and its output is vehicle positioning information at the IMU sampling frequency; the design measurement model is based on a vision sensor, and its output is the lateral intercepts of the vehicle from the adjacent left and right lane lines.
Preferably, the method for training the pre-constructed vehicle motion system model based on the collected real vehicle data comprises the following steps:
constructing a vehicle motion system model, and defining the input and the output of the vehicle motion system model;
defining a loss function adopted in the training process;
and processing the real vehicle data acquired in advance, taking the processed data as a training set, and training the constructed vehicle motion system model based on the determined loss function to obtain the trained vehicle motion system model.
Preferably, the inputs to the vehicle motion system model are: at n consecutive sampling instants t_0, t_1, ..., t_{n-1}, the observed vehicle position abscissa x, vehicle position ordinate y, angle θ between the vehicle heading and due north, vehicle speed v, and component a of the vehicle acceleration along the heading direction; the output of the vehicle motion system model is the prediction of the vehicle position and heading at the next instant.
Preferably, preprocessing the pre-acquired real vehicle data comprises:
converting the latitude/longitude coordinates output by the vehicle inertial navigation system in the WGS84 coordinate system into the UTM coordinate system;
in a data segment running from t_0 to t_K, taking the n consecutive sampling instants t_i, t_{i+1}, ..., t_{i+n-1} starting from t_i (i ∈ [0, K−n]), using the time interval from each sampling instant to t_i as its time stamp, taking the vehicle position at t_i as the coordinate origin and re-expressing the vehicle position at each instant accordingly, and combining the angle θ between the vehicle heading and due north, the vehicle speed v and the acceleration component a along the heading direction to obtain one input fragment;
repeating the above steps over the segment from t_0 to t_K to obtain K−n+1 fragments for training.
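A full UTM projection requires a map-projection library; as a self-contained illustration of the coordinate conversion step (a local tangent-plane approximation standing in for UTM — an assumption, not the patent's implementation), latitude/longitude can be projected to planar metres as follows:

```python
import math

def latlon_to_local_xy(lat, lon, lat0, lon0):
    """Project WGS84 lat/lon to a local planar frame centred at (lat0, lon0).

    This equirectangular approximation stands in for the UTM projection the
    patent uses; it is accurate enough over the short span of one fragment.
    """
    R = 6378137.0  # WGS84 equatorial radius, metres
    x = math.radians(lon - lon0) * R * math.cos(math.radians(lat0))  # east
    y = math.radians(lat - lat0) * R                                 # north
    return x, y

# 0.01 degrees of latitude is roughly 1.11 km of northing:
x, y = latlon_to_local_xy(40.01, 116.0, 40.0, 116.0)
```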
Preferably, collecting the vehicle motion state with the combined inertial navigation and the vision sensor and estimating the ego-vehicle pose from the vehicle motion system model and the measurement model to position the vehicle comprises the following steps:
(1) judging whether the number of sampling instants meets the input requirement of the vehicle motion system model; if not, going to step (2), otherwise going to step (3);
(2) when the number of sampling instants is less than n, generating particles from the combined inertial navigation observations;
(3) when the number of sampling instants is greater than or equal to n, feeding the vehicle observations into the vehicle motion system model to obtain the prediction of the vehicle state at the next instant, and generating particles from that prediction;
(4) using the particles obtained in step (2) or step (3) together with the vector high-precision map M and the vision sensor observation, computing the lateral intercepts Inc_left and Inc_right from each particle to the adjacent left and right lane lines, and updating the particle weights from Inc_left and Inc_right;
(5) after the particle weights are computed, normalizing them and taking the weighted sum of all particle states as the estimate of the ego-vehicle pose.
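The weighting and estimation steps above can be sketched as a minimal particle-filter update; the map lookup and the Gaussian weight model below are illustrative assumptions, not the patent's trained components:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_weights(particles, obs_left, obs_right, lateral_intercepts, sigma=0.3):
    """Score each particle by how well its predicted lane-line intercepts
    match the vision sensor's observed intercepts (Gaussian error model)."""
    inc_l, inc_r = lateral_intercepts(particles)  # per-particle map lookup
    err2 = (inc_l - obs_left) ** 2 + (inc_r - obs_right) ** 2
    return np.exp(-err2 / (2 * sigma ** 2))

def estimate_pose(particles, weights):
    """Normalize the weights and take the weighted mean of the states."""
    w = weights / weights.sum()
    return w @ particles  # weighted sum over (x, y, theta)

# Toy run: 400 particles around (0, 0, 0); observed intercepts 1.5 m / 2.0 m.
particles = rng.normal([0.0, 0.0, 0.0], [1.0, 1.0, 0.1], size=(400, 3))
fake_lookup = lambda p: (1.5 + p[:, 0], 2.0 - p[:, 0])  # stand-in for the map
weights = update_weights(particles, 1.5, 2.0, fake_lookup)
pose = estimate_pose(particles, weights)
```

The weighting pulls the estimate toward particles whose lateral position is consistent with the observed intercepts, which is exactly what makes the lane-line observation informative about the vehicle's lateral pose.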
In a second aspect of the present invention, there is provided a particle filter based vehicle self-positioning system, comprising:
a measurement model design module for designing the measurement model of the particle filter, the measurement model including a direct measurement model and a design measurement model;
the vehicle motion system model training module is used for training a pre-constructed vehicle motion system model based on the collected real vehicle data to obtain a trained vehicle motion system model;
and an ego-vehicle pose estimation module for collecting the vehicle motion state with the combined inertial navigation and the vision sensor and estimating the ego-vehicle pose from the vehicle motion system model and the measurement model, thereby positioning the vehicle.
Preferably, the direct measurement model is based on the combined inertial navigation formed by fusing a GPS and an IMU, and the model outputs vehicle positioning information at the IMU sampling frequency; the design measurement model is based on a vision sensor, and the model outputs the lateral intercepts of the vehicle from the adjacent left and right lane lines.
In a third aspect of the invention, a processing device is provided, comprising at least a processor and a memory having a computer program stored thereon; the processor executes the computer program to perform the steps of the particle filter based vehicle self-positioning method.
In a fourth aspect of the present invention, a computer storage medium is provided having computer-readable instructions stored thereon, the instructions being executable by a processor to perform the steps of the particle filter based vehicle self-positioning method.
Due to the adoption of the technical scheme, the invention has the following advantages:
1. By introducing the lateral intercepts from the vehicle to the left and right lane lines observed by the vision module as additional observations, the invention can estimate both the position and the heading of the vehicle.
2. The invention builds the ego-vehicle motion system model with a recurrent neural network, avoiding the filter failure that kinematics-based vehicle motion models suffer when the filtering frequency is low.
3. The invention uses the particle filter algorithm, which accommodates the nonlinear measurement model and the vehicle motion system model, and realizes accurate vehicle self-positioning with mass-production vehicle sensors.
Therefore, the invention can be widely applied to the field of vehicle self-positioning.
Drawings
FIG. 1 is a flow chart of an algorithm for implementing vehicle self-alignment in accordance with the present invention;
FIG. 2 is a diagram of a metrology model designed in the present invention;
FIG. 3 is a network model for predicting the status of a vehicle in accordance with the present invention;
FIG. 4 is a model diagram of the calculation of left and right lateral intercepts of a particle to lane line in the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
According to the vehicle self-positioning method of the invention, a particle filter algorithm takes the lateral intercepts from the vehicle to the left and right lane lines observed by the vision module as additional observations, and the vehicle motion system is modeled by a recurrent neural network; a good filtering effect is thus retained even at low filtering frequency, and the unsuitability of common filters for nonlinear measurement models is overcome. Compared with optimization-based matching positioning algorithms the computation is small, solving the problem of low-cost high-precision self-positioning for mass-production vehicles.
Example 1
As shown in fig. 1, the present embodiment provides a particle filter-based vehicle self-positioning method, which includes the following steps:
1) A measurement model of the particle filter is designed, the measurement model including a direct measurement model and a design measurement model.
As shown in fig. 2, the direct measurement model is based on the combined inertial navigation formed by fusing GPS and IMU; its output is vehicle positioning information (latitude, longitude and heading) at the IMU sampling frequency.
The design measurement model is based on a vision sensor (in this example a Mobileye™ ADAS camera); its output is the lateral intercepts of the vehicle from the adjacent left and right lane lines.
2) Training a pre-constructed vehicle motion system model based on the collected real vehicle data to obtain a trained vehicle motion system model, so that the trained vehicle motion system model can be fitted to the actual motion system model of the vehicle as much as possible.
Specifically, the method for training the pre-constructed vehicle motion system model based on the collected real vehicle data comprises the following steps:
2.1) constructing a vehicle motion system model and defining the input and the output of the vehicle motion system model.
Because an RNN (Recurrent Neural Network) is suited to extracting features from serialized information and predicting trends, the vehicle motion system model in this embodiment is constructed as an RNN followed by a fully connected network.
As shown in fig. 3, in the present embodiment the vehicle motion system model is a 5-layer LSTM network followed by a 2-layer fully connected network. The inputs of the LSTM network at the n consecutive sampling instants t_0, t_1, ..., t_{n-1} are the observed vehicle position abscissa x, ordinate y, angle θ between the vehicle heading and due north, vehicle speed v and acceleration component a along the heading direction; its output is the hidden-layer information h_t obtained at the last instant. The input of the 2-layer fully connected network is this hidden-layer information h_t, and its output is the prediction of the vehicle position and heading at instant t_n.
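The architecture can be sketched shape-for-shape in plain NumPy (a deep-learning framework would be used in practice; the hidden dimensions and random weights below are assumptions, as the patent does not state layer sizes):

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_layer(x_seq, h_dim):
    """Run one randomly initialized LSTM layer over a (T, in_dim) sequence
    and return the (T, h_dim) hidden states."""
    in_dim = x_seq.shape[1]
    # One stacked weight matrix for the input, forget, cell and output gates.
    W = rng.normal(0, 0.1, (4 * h_dim, in_dim + h_dim))
    b = np.zeros(4 * h_dim)
    h, c = np.zeros(h_dim), np.zeros(h_dim)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    out = []
    for x in x_seq:
        z = W @ np.concatenate([x, h]) + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        out.append(h)
    return np.stack(out)

def motion_model(seq, h_dim=32):
    """5 LSTM layers -> last hidden state h_t -> 2 FC layers -> (x, y, theta)."""
    h = seq
    for _ in range(5):
        h = lstm_layer(h, h_dim)
    h_t = h[-1]                        # hidden state at the last instant
    W1 = rng.normal(0, 0.1, (16, h_dim))
    W2 = rng.normal(0, 0.1, (3, 16))
    return W2 @ np.tanh(W1 @ h_t)      # predicted abscissa, ordinate, heading

# One fragment: n = 10 instants, 6 features (t, x, y, theta, v, a) each.
fragment = rng.normal(size=(10, 6))
pred = motion_model(fragment)
```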
2.2) defining the loss function adopted in the training process.
In the present embodiment, the loss function of the vehicle motion system model is the mean-square loss, i.e.

L = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)²    (1)

where y denotes the network-predicted vehicle state and ŷ the true vehicle state; y_i and ŷ_i are their i-th components; N is the dimension of y and ŷ, with N = 3 in this example (abscissa, ordinate and heading).
2.3) Processing the pre-acquired real vehicle data, using the processed data as the training set, and training the vehicle motion system model constructed in step 2.1) with the loss function defined in step 2.2) to obtain the final vehicle motion system model.
When the real vehicle data is preprocessed, the method comprises the following steps:
firstly, the latitude/longitude coordinates output by the vehicle inertial navigation system in the WGS84 coordinate system are converted into the UTM (Universal Transverse Mercator) coordinate system;
then, in a data segment running from t_0 to t_K, the n consecutive sampling instants t_i, t_{i+1}, ..., t_{i+n-1} starting from t_i (i ∈ [0, K−n]) are taken; the time interval from each sampling instant to t_i is used as its time stamp, the vehicle position at t_i is taken as the coordinate origin and the position at each instant is re-expressed accordingly, and together with the angle θ between the vehicle heading and due north, the vehicle speed v and the acceleration component a along the heading direction this forms one network input, while the abscissa, ordinate and heading of the vehicle at t_{i+n} serve as the prediction ground truth. In this way, K−n+1 fragments for training are obtained from a segment running from t_0 to t_K.
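A sketch of this fragment construction (the field order and array layout are assumptions for illustration):

```python
import numpy as np

def build_fragments(data, n):
    """Cut a trajectory of K+1 samples into K-n+1 training fragments.

    `data` has one row per sampling instant: [t, x, y, theta, v, a] in UTM.
    Each fragment's times are re-referenced to t_i and its positions to the
    vehicle position at t_i; the row at t_{i+n} gives the (x, y, theta) label.
    """
    K = len(data) - 1
    fragments = []
    for i in range(K - n + 1):
        window = data[i:i + n].copy()
        window[:, 0] -= data[i, 0]        # time relative to t_i
        window[:, 1:3] -= data[i, 1:3]    # position relative to t_i
        label = data[i + n, 1:4].copy()   # x, y, theta at t_{i+n}
        label[:2] -= data[i, 1:3]
        fragments.append((window, label))
    return fragments

# 21 samples (K = 20) at 0.1 s spacing, driving due east at 3 m/s; n = 10.
t = np.arange(21) * 0.1
data = np.stack([t, 3.0 * t, np.zeros(21), np.zeros(21),
                 np.full(21, 3.0), np.zeros(21)], axis=1)
fragments = build_fragments(data, 10)   # K - n + 1 = 11 fragments
```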
3) The vehicle motion state is collected based on the combined inertial navigation and vision sensor, and the self parking position is estimated based on the vehicle motion system model and the measurement model, so that the vehicle is positioned.
Specifically, the method comprises the following steps:
3.1) judging whether the number of sampling instants meets the input requirement of the vehicle motion system model of step 2); if not, going to step 3.2), otherwise going to step 3.3);
3.2) Algorithm start-up phase: when the number of sampling instants is less than n, particles are generated from the combined inertial navigation observations; the generated particle states are:

x_t = x_t^obs + N(0, v_pos²)
y_t = y_t^obs + N(0, v_pos²)    (2)
θ_t = θ_t^obs + N(0, v_θ²)

where x_t, y_t, θ_t are the abscissa, ordinate and heading angle of the particle at time t; x_t^obs, y_t^obs, θ_t^obs are the combined inertial navigation observations at time t; and N(0, v²) denotes Gaussian noise with mean 0 and variance v².
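A minimal sketch of this start-up particle generation, using the noise levels v_pos = 1 m and v_θ = 0.1 rad and the 400 particles quoted later in the embodiment (the UTM coordinates in the example call are made up):

```python
import numpy as np

def init_particles(obs_x, obs_y, obs_theta, n_particles=400,
                   v_pos=1.0, v_theta=0.1, seed=0):
    """Scatter particles around the combined inertial navigation observation
    with independent zero-mean Gaussian noise on x, y and heading."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, [v_pos, v_pos, v_theta], size=(n_particles, 3))
    return np.array([obs_x, obs_y, obs_theta]) + noise

particles = init_particles(442315.0, 4427000.0, 0.05)
```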
3.3) Algorithm running phase: when the number of sampling instants is greater than or equal to n, the vehicle observations are input into the vehicle motion system model to obtain a prediction of the vehicle state at the next instant, and particles are generated based on this prediction. The particle states are:

xt = x̂t^pred + N(0, v_pos²), yt = ŷt^pred + N(0, v_pos²), θt = θ̂t^pred + N(0, v_θ²) (2)

in the formula, xt, yt, θt respectively denote the abscissa, the ordinate, and the orientation angle of the generated particle at time t; x̂t^pred, ŷt^pred, θ̂t^pred respectively denote the predictions of the vehicle state by the vehicle motion system model for time t; N(0, v²) denotes Gaussian noise with mean 0 and variance v²; v_pos is the position observation noise, with v_pos = 1 m in this embodiment; v_θ is the orientation observation noise, with v_θ = 0.1 rad in this embodiment. In this embodiment, the number of generated particles is 400.
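The two particle-generation branches described in steps 3.2) and 3.3) both add zero-mean Gaussian noise around a reference pose (the inertial-navigation observation in the start-up phase, the motion-model prediction in the running phase). A minimal NumPy sketch, with function and constant names assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

V_POS = 1.0     # position noise standard deviation (m), per this embodiment
V_THETA = 0.1   # orientation noise standard deviation (rad), per this embodiment
N_PARTICLES = 400

def generate_particles(x, y, theta, n_particles=N_PARTICLES):
    """Draw particles around a reference pose (x, y, theta) by adding
    zero-mean Gaussian noise with variances V_POS**2 and V_THETA**2."""
    particles = np.empty((n_particles, 3))
    particles[:, 0] = x + rng.normal(0.0, V_POS, n_particles)
    particles[:, 1] = y + rng.normal(0.0, V_POS, n_particles)
    particles[:, 2] = theta + rng.normal(0.0, V_THETA, n_particles)
    return particles
```

The same routine serves both phases; only the reference pose fed to it differs.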
3.4) Using the particle states obtained in step 3.2) or step 3.3), together with the vector high-precision map M obtained by the vision sensor, the transverse intercepts Inc_left and Inc_right from each particle to the left and right adjacent lane lines are calculated, and the particle weights are updated based on these transverse intercepts.
The vector high-precision map M comprises the following components:
M={L1,L2,...,Ln} (4)
Li={P1,P2,...,Pn} (5)
Pj={xj,yj} (6)
in the formula, Li represents the ith lane line, Pj represents the jth point on a lane line, and the position of Pj is (xj, yj).
As shown in fig. 4, the points in the vector high-precision map M are transformed into a coordinate system whose origin is the position of the particle and whose positive vertical axis is the orientation of the particle. In this coordinate system, the map point closest to the particle is found in each quadrant. The nearest points of the second and third quadrants are connected, and the distance from the intersection of this connecting line with the abscissa axis to the particle is the left transverse intercept Inc_left; likewise, the nearest points of the first and fourth quadrants are connected, and the distance from the intersection of this connecting line with the abscissa axis to the particle is the right transverse intercept Inc_right.
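One plausible reading of this quadrant construction can be sketched as follows (the frame convention, the helper names, and the handling of empty quadrants are assumptions of this sketch, not part of the disclosure):

```python
import numpy as np

def _nearest_in_quadrant(pts, sx, sy):
    """Nearest point to the origin among points with sign(x)=sx, sign(y)=sy."""
    mask = (pts[:, 0] * sx > 0) & (pts[:, 1] * sy > 0)
    if not mask.any():
        return None
    q = pts[mask]
    return q[np.argmin(np.hypot(q[:, 0], q[:, 1]))]

def _x_axis_cut(p, q):
    """x-coordinate where the line through p and q crosses the abscissa axis
    (assumes p and q lie on opposite sides of that axis)."""
    t = p[1] / (p[1] - q[1])
    return p[0] + t * (q[0] - p[0])

def transverse_intercepts(particle, map_points):
    """Left/right transverse intercepts for one particle.

    particle: (x, y, theta) with theta measured from due north; the
    particle frame's positive vertical axis points along the heading.
    map_points: (N, 2) array of lane-line points in world coordinates.
    Returns (inc_left, inc_right), or None if some quadrant is empty.
    """
    x, y, theta = particle
    d = map_points - np.array([x, y])
    c, s = np.cos(theta), np.sin(theta)
    local = np.column_stack((d[:, 0] * c - d[:, 1] * s,   # lateral (right positive)
                             d[:, 0] * s + d[:, 1] * c))  # along the heading
    q1 = _nearest_in_quadrant(local,  1,  1)
    q2 = _nearest_in_quadrant(local, -1,  1)
    q3 = _nearest_in_quadrant(local, -1, -1)
    q4 = _nearest_in_quadrant(local,  1, -1)
    if any(q is None for q in (q1, q2, q3, q4)):
        return None
    inc_left = abs(_x_axis_cut(q2, q3))   # left lane line: quadrants II and III
    inc_right = _x_axis_cut(q1, q4)       # right lane line: quadrants I and IV
    return inc_left, inc_right
```

For a particle sitting between two straight lane lines, the intercepts reduce to the unsigned lateral distances to those lines.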
After the particle state quantities are obtained, the particle weights are updated using the observations of the design measurement model. The weight of the kth particle (k ∈ [0, K]) can be calculated as:

wk = exp{ -[(Inc_left,k - Inc_left,obs)² + (Inc_right,k - Inc_right,obs)²] / (2·v_Inc²) } (7)

in the formula, Inc_left,k and Inc_right,k are the transverse intercepts computed for the kth particle, Inc_left,obs and Inc_right,obs are the transverse intercepts observed by the design measurement model, and v_Inc represents the observation error of the Mobileye ADAS camera for the transverse intercept; in this embodiment, v_Inc = 0.1 m.
3.5) After the particle weights are obtained by calculation, the weights are normalized, and the states of all particles are weighted and summed to obtain the estimate of the ego-vehicle position.
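The weight update and the final weighted-sum estimate of steps 3.4) and 3.5) can be sketched as follows (a Gaussian likelihood with intercept observation error v_Inc = 0.1 m is assumed for the weight; the function names are illustrative):

```python
import numpy as np

V_INC = 0.1  # transverse-intercept observation error (m), per this embodiment

def update_weights(pred_intercepts, obs_left, obs_right):
    """Normalized Gaussian-likelihood weights from the mismatch between
    each particle's predicted transverse intercepts and the observed ones.

    pred_intercepts: (K, 2) array of (Inc_left, Inc_right) per particle.
    """
    err = pred_intercepts - np.array([obs_left, obs_right])
    w = np.exp(-np.sum(err ** 2, axis=1) / (2.0 * V_INC ** 2))
    s = w.sum()
    # Fall back to uniform weights if all likelihoods underflow to zero.
    return w / s if s > 0 else np.full(len(w), 1.0 / len(w))

def estimate_pose(particles, weights):
    """Weighted sum of particle states [x, y, theta]."""
    return weights @ particles
```

Note that a plain weighted sum of θ is only safe away from the ±π wrap-around; a circular mean would be needed in general.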
Example 2
Embodiment 1 above provides a particle filter based vehicle self-positioning method; correspondingly, this embodiment provides a vehicle self-positioning system. The system provided by this embodiment can implement the particle filter based vehicle self-positioning method of Embodiment 1, and may be realized by software, hardware, or a combination of software and hardware. For example, the system may comprise integrated or separate functional modules or functional units to perform the corresponding steps of the method of Embodiment 1. Since this system embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant details, reference may be made to the description of Embodiment 1. This embodiment of the vehicle self-positioning system is merely illustrative.
The embodiment provides a vehicle self-positioning system based on a particle filter, which comprises:
the measurement model design module, used for designing a measurement model of the particle filter, the measurement model comprising a direct measurement model and a design measurement model;
the vehicle motion system model training module is used for training a pre-constructed vehicle motion system model based on the collected real vehicle data to obtain a trained vehicle motion system model;
and the ego-vehicle position estimation module, used for collecting the vehicle motion state based on the combined inertial navigation and vision sensor, estimating the ego-vehicle position based on the vehicle motion system model and the measurement model, and realizing the positioning of the vehicle.
Preferably, in the measurement model design module, the direct measurement model is based on combined inertial navigation formed by fusion of a GPS and an IMU, and the model outputs vehicle positioning information with the same frequency as the IMU sampling frequency; the design measurement model is based on a vision sensor, and the output of the model is the transverse intercept of the distance between the vehicle and the adjacent left and right lane lines.
Preferably, the vehicle motion system model training module comprises:
the model building module is used for building a vehicle motion system model and defining the input and the output of the vehicle motion system model;
the loss function definition module is used for defining the loss function adopted in the training process;
and the model training module, used for processing the pre-collected real vehicle data, taking the processed data as the training set, and training the constructed vehicle motion system model based on the determined loss function and hyperparameters to obtain the trained vehicle motion system model.
Preferably, the ego-vehicle position estimation module includes:
the sampling duration judging module, used for judging whether the sampling duration meets the input requirement of the vehicle motion system model; if the requirement is not met, control passes to the first particle generation module, otherwise to the second particle generation module;
a first particle generation module for generating particles by using the observed quantity of the combined inertial navigation;
the second particle generation module is used for inputting the observed quantity of the vehicle into the vehicle motion system model to obtain the prediction of the vehicle state at the next moment and generating particles based on the prediction of the vehicle state at the next moment;
the particle weight updating module, used for calculating the transverse intercepts Inc_left and Inc_right from each particle to the left and right adjacent lane lines using the obtained particles together with the vector high-precision map M, and updating the particle weights based on these transverse intercepts;
and the estimation module, used for normalizing the particle weights after they are calculated, and weighting and summing the states of all particles to obtain the estimate of the ego-vehicle position.
Example 3
The present embodiment provides a processing device corresponding to the particle filter-based vehicle self-localization method provided in embodiment 1, where the processing device may be a processing device for a client, such as a mobile phone, a laptop, a tablet computer, a desktop computer, etc., to execute the particle filter-based vehicle self-localization method of embodiment 1.
The processing equipment comprises a processor, a memory, a communication interface and a bus, wherein the processor, the memory and the communication interface are connected through the bus so as to complete mutual communication. The memory stores a computer program capable of running on the processor, and the processor executes the vehicle self-positioning method provided by the embodiment 1 when running the computer program.
In some implementations, the Memory may be a high-speed Random Access Memory (RAM), and may also include a non-volatile Memory, such as at least one disk Memory.
In other implementations, the processor may be various general-purpose processors such as a Central Processing Unit (CPU), a Digital Signal Processor (DSP), and the like, and is not limited herein.
Example 4
The vehicle self-localization method of embodiment 1 may be embodied as a computer program product, which may include a computer readable storage medium having computer readable program instructions embodied thereon for executing the particle filter-based vehicle self-localization method of embodiment 1.
The computer readable storage medium may be a tangible device that retains and stores instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any combination of the foregoing.
Example 5
This embodiment uses real vehicle data to compare the performance of the proposed vehicle self-positioning method, based on a camera and a vector high-precision map, with that of a conventional particle filter.
The conventional particle filter positioning method uses only the position and orientation of the vehicle as observations, and uses a constant-velocity, constant-angular-velocity model as the vehicle motion model. On a segment of collected real vehicle data, with the parameters of this embodiment and a filter frequency of 10 Hz, the proposed algorithm was compared with the conventional particle filter algorithm. The experimental results (shown in Table 1) are as follows: when the mean absolute distance error of the observations is 1.25 m, the mean absolute distance error of the conventional particle filter algorithm is 1.75 m, while that of the proposed algorithm is 0.39 m; when the mean absolute angle error of the observations is 0.08 rad, the mean absolute angle error of the conventional particle filter algorithm is 0.18 rad, while that of the proposed algorithm is 0.07 rad. The results show that, at a sampling frequency of 10 Hz, the error of the conventional filter is larger than the direct observation error, i.e. the conventional filter fails, whereas the positioning accuracy of the proposed algorithm is greatly improved over the raw observations.
TABLE 1 Performance comparison of the proposed algorithm with the conventional particle filter algorithm

                                   Mean absolute distance error (m)   Mean absolute angle error (rad)
Observations                       1.25                               0.08
Conventional particle filter       1.75                               0.18
Proposed algorithm                 0.39                               0.07
It should be noted that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.
The above embodiments are only used for illustrating the present invention, and the structure, connection mode, manufacturing process, etc. of the components may be changed, and all equivalent changes and modifications performed on the basis of the technical solution of the present invention should not be excluded from the protection scope of the present invention.

Claims (10)

1. A vehicle self-positioning method based on a particle filter is characterized by comprising the following steps:
designing a measurement model of the particle filter, the measurement model including a direct measurement model and a design measurement model;
training a pre-constructed vehicle motion system model based on the collected real vehicle data to obtain a trained vehicle motion system model;
and collecting the vehicle motion state based on the combined inertial navigation and vision sensor, and estimating the ego-vehicle position based on the vehicle motion system model and the measurement model, thereby realizing the positioning of the vehicle.
2. A particle filter based vehicle self-localization method according to claim 1, wherein: the direct measurement model is based on combined inertial navigation formed by fusing a GPS (global positioning system) and an IMU (inertial measurement unit), and the output of the direct measurement model is vehicle positioning information with the same frequency as the IMU sampling frequency; the design measurement model is based on a vision sensor, and the output of the design measurement model is the transverse intercept of the distance between the vehicle and the adjacent left and right lane lines.
3. A particle filter based vehicle self-localization method according to claim 1, wherein: the method for training the pre-constructed vehicle motion system model based on the collected real vehicle data comprises the following steps:
constructing a vehicle motion system model, and defining the input and the output of the vehicle motion system model;
defining a loss function adopted in the training process;
and processing the real vehicle data acquired in advance, taking the processed data as a training set, and training the constructed vehicle motion system model based on the determined loss function to obtain the trained vehicle motion system model.
4. A particle filter based vehicle self-positioning method according to claim 3, wherein: the inputs of the vehicle motion system model are: the times t0, t1, ..., tn-1 of n consecutive sampling instants, together with the vehicle position abscissa x, the vehicle position ordinate y, the included angle θ between the vehicle heading and due north, the vehicle speed v, and the component a of the vehicle acceleration along the heading direction, each observed at the corresponding instant; the output of the vehicle motion system model is: the prediction of the vehicle position and vehicle heading at the next instant.
5. A particle filter based vehicle self-localization method according to claim 3, wherein: when preprocessing the real vehicle data acquired in advance, the method comprises the following steps:
firstly, converting the longitude and latitude coordinates output by the vehicle inertial navigation system, which are based on the WGS84 coordinate system, into the UTM coordinate system;
then, for a data segment lasting from t0 to tK, taking n consecutive sampling instants ti, ti+1, ..., ti+n-1 starting from ti, i ∈ [0, K-n]; using the time interval from each sampling instant to ti as its sampling time, taking the vehicle position at ti as the coordinate origin and updating the vehicle position at each instant accordingly, and combining the included angle θ between the vehicle orientation and due north, the vehicle speed v, and the component a of the vehicle acceleration along the heading direction to obtain an input segment; repeating the above steps to obtain, from a data segment lasting from t0 to tK, K-n+1 segments for training.
6. A particle filter based vehicle self-positioning method according to claim 1, wherein: the step of collecting the vehicle motion state based on the combined inertial navigation and vision sensor and estimating the ego-vehicle position based on the vehicle motion system model and the measurement model to realize the positioning of the vehicle comprises the following steps:
judging whether the sampling time meets the input requirement of a vehicle motion system model, if not, entering a step II, otherwise, entering a step III;
when the number of sampling moments is less than n, generating particles by using the observed quantity of the combined inertial navigation;
inputting the observed quantity of the vehicle into a vehicle motion system model to obtain the prediction of the vehicle state at the next moment when the number of sampling moments is more than or equal to n, and generating particles based on the prediction of the vehicle state at the next moment;
fourthly, calculating, using the particles obtained in the second or third step together with the vector high-precision map M obtained by the vision sensor, the transverse intercepts Inc_left and Inc_right from each particle to the left and right adjacent lane lines, and updating the particle weights based on these transverse intercepts;
and fifthly, after the particle weights are obtained by calculation, normalizing the particle weights, and weighting and summing the states of all particles to obtain the estimate of the ego-vehicle position.
7. A particle filter based vehicle self-positioning system, comprising:
the measurement model design module, used for designing a measurement model of the particle filter, the measurement model comprising a direct measurement model and a design measurement model;
the vehicle motion system model training module is used for training a pre-constructed vehicle motion system model based on the collected real vehicle data to obtain a trained vehicle motion system model;
and the ego-vehicle position estimation module, used for collecting the vehicle motion state based on the combined inertial navigation and vision sensor, estimating the ego-vehicle position based on the vehicle motion system model and the measurement model, and realizing the positioning of the vehicle.
8. A particle filter based vehicle self-positioning system as recited in claim 7, wherein: the direct measurement model is based on combined inertial navigation formed by fusing a GPS (global positioning system) and an IMU (inertial measurement unit), and the model outputs vehicle positioning information with the same frequency as the IMU sampling frequency; the design measurement model is based on a visual sensor, and the model output is the transverse intercept of the distance between the vehicle and the adjacent left and right lane lines.
9. A processing device comprising at least a processor and a memory, the memory having a computer program stored thereon, characterized in that the processor, when executing the computer program, carries out the steps of the particle filter based vehicle self-positioning method according to any one of claims 1 to 6.
10. A computer storage medium having computer readable instructions stored thereon which are executable by a processor to perform the steps of a particle filter based vehicle self-localization method according to any of claims 1 to 6.
CN202110941721.2A (priority date 2021-08-17, filed 2021-08-17): Vehicle self-positioning method, system, device and medium based on particle filter; granted as CN113483769B; legal status: Active.

Publications: CN113483769A, published 2021-10-08; CN113483769B (granted), published 2024-03-29.

