CN113155121A - Vehicle positioning method and device and electronic equipment - Google Patents
- Publication number
- CN113155121A (application number CN202110305408.XA)
- Authority
- CN
- China
- Prior art keywords
- sensor data
- positioning
- data
- obtaining
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/3407—Route searching; Route guidance specially adapted for specific applications
- G01C21/3415—Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention provides a vehicle positioning method, a vehicle positioning device, and an electronic device. The vehicle positioning method comprises the following steps: acquiring first sensor data; obtaining first positioning uncertainty data according to the first sensor data; when the first positioning uncertainty data does not meet a first preset condition, acquiring second sensor data and obtaining second positioning uncertainty data according to the second sensor data and the first sensor data; and when the second positioning uncertainty data meets a second preset condition, positioning the vehicle according to the first sensor data and the second sensor data. By implementing the invention, the computational load on the vehicle can be reduced by adding sensor information step by step: each addition of sensor data is triggered only when the positioning uncertainty fails to meet the corresponding condition, so that this stepwise approach further reduces the vehicle's computational load while the positioning accuracy is guaranteed.
Description
Technical Field
The invention relates to the field of automatic driving, and in particular to a vehicle positioning method and device and an electronic device.
Background
An autonomous vehicle generally carries sensor modules such as a GPS, a camera, a radar, and an IMU for positioning. Different sensors provide different positioning information to the vehicle, and the vehicle executes the corresponding automatic driving operations according to this information. Existing single-sensor positioning for autonomous vehicles includes GPS positioning, visual positioning, IMU positioning, radar positioning, and the like. GPS positioning is the most traditional and the simplest, most direct positioning mode: a satellite system resolves the vehicle positioning signal through a base station and then feeds the positioning information back to the vehicle. Visual positioning is currently a popular positioning mode: information near the vehicle is obtained through a camera, and the vehicle position is then derived through scene reconstruction and semantic understanding. Compared with GPS positioning, the accuracy is markedly improved, and because the vehicle-mounted camera system can effectively exploit the surrounding scene information, the positioning uncertainty can be reduced to a large extent. However, visual positioning depends on the accuracy and speed of the processing algorithm, and because the provided position information depends on the scene, the positioning accuracy drops sharply in visually similar scenes. IMU positioning is commonly used as auxiliary positioning: since the gyroscope and accelerometer readings in the IMU must be converted into position information by integration, IMU positioning is usually fused with visual and GPS positioning. Its advantage is immunity to weather, environment, and scene factors; its defect is an error that accumulates over time.
Radar positioning is commonly used for short-range positioning in special scenes. Because it offers high accuracy and strong penetration and is not easily affected by environmental factors, it is widely used on mid- and high-end vehicles. In environments such as underground garages, where visual and GPS positioning are severely degraded, radar positioning plays the major role; however, it cannot be used over long distances and is costly, so it generally serves to assist the other sensors in positioning.
In order to overcome the defects of single-sensor positioning and improve the positioning accuracy, the related art proposes multi-sensor fusion schemes, in which the pose information (including position and orientation) of the vehicle is obtained from sensors of different sources such as a GPS, a camera, and an IMU. However, fusing the data of all these sensors simultaneously places a heavy computational burden on the vehicle.
Disclosure of Invention
In view of this, embodiments of the present invention provide a vehicle positioning method, a vehicle positioning device, and an electronic device, so as to overcome the defect of high computational power consumption in vehicle positioning in the prior art.
According to a first aspect, an embodiment of the present invention provides a vehicle positioning method, including the steps of: acquiring first sensor data; obtaining first positioning uncertainty data according to the first sensor data; when the first positioning uncertainty data does not meet a first preset condition, acquiring second sensor data, and acquiring second positioning uncertainty data according to the second sensor data and the first sensor data; and when the second positioning uncertainty data meets a second preset condition, positioning the vehicle according to the first sensor data and the second sensor data.
Optionally, the method further comprises: when the second positioning uncertainty data does not meet a second preset condition, acquiring third sensor data; and obtaining third positioning uncertainty data according to the third sensor data, the second sensor data and the first sensor data, and positioning the vehicle.
Optionally, the method further comprises: and when the first positioning uncertainty data meet a first preset condition, positioning the vehicle according to the first sensor data.
Optionally, the first sensor data comes from a visual sensor, and obtaining first positioning uncertainty data according to the first sensor data includes: inputting the image data obtained by the visual sensor into a target neural network to obtain the first positioning uncertainty data.
Optionally, the second sensor data comes from a gyroscope, and obtaining second positioning uncertainty data according to the second sensor data and the first sensor data includes: obtaining a first motion relation during vehicle travel according to the second sensor data and the first sensor data; obtaining a first motion residual according to the first motion relation; and obtaining the second positioning uncertainty data according to the first motion residual.
Optionally, the third sensor data comes from an accelerometer, and obtaining third positioning uncertainty data according to the third sensor data, the second sensor data, and the first sensor data includes: obtaining a second motion relation during vehicle travel according to the second sensor data, the third sensor data, and the first sensor data; obtaining a second motion residual according to the second motion relation; and obtaining the third positioning uncertainty data according to the second motion residual.
Optionally, the obtaining second positioning uncertainty data according to the first motion residual includes: obtaining a first deviation covariance according to the first motion residual; and obtaining the second positioning uncertainty data according to the first deviation covariance. The first deviation covariance is obtained according to the motion residual as:

$$E_{1} = \rho\left( \left( e_{R}^{i,j} \right)^{T} \Sigma_{R,i,j}^{-1}\, e_{R}^{i,j} \right) + \rho\left( \left( \bar{E}_{proj}^{i} \right)^{T} \Sigma_{proj,i}^{-1}\, \bar{E}_{proj}^{i} \right), \qquad \Sigma_{proj,i} = \left( \sum_{k} J_{k}^{T}\, \Sigma_{i}^{-1}\, J_{k} \right)^{-1}$$

where $E_1$ is the first deviation covariance; $\rho$ is the scaling coefficient of the two residual terms; $e_{R}^{i,j}$ denotes the directional residual from the $i$-th frame image to the $j$-th frame image; $\Sigma_{R,i,j}$ is the covariance matrix obtained after adding the gyroscope; $\Sigma_{i,j}$ and $\Sigma_{i}$ respectively denote the covariance matrices of the pre-integration and reprojection errors; $\Sigma_{proj,i}$ is the covariance matrix of the marginalized reprojection errors in the $i$-th frame image, in which $J_k$ is the Jacobian matrix of the reprojection errors; $\bar{E}_{proj}^{i}$ is the residual vector of the marginalized reprojection error in the $i$-th frame image; and $(\cdot)^{T}$ denotes the transpose.
Optionally, the obtaining third positioning uncertainty data according to the second motion residual includes: obtaining a second deviation covariance according to the second motion residual; and obtaining the third positioning uncertainty data according to the second deviation covariance. The second deviation covariance is obtained according to the second motion residual as:

$$E_{2} = \rho\left( \begin{bmatrix} e_{R}^{i,j} \\ e_{v}^{i,j} \\ e_{p}^{i,j} \end{bmatrix}^{T} \Sigma_{I,i,j}^{-1} \begin{bmatrix} e_{R}^{i,j} \\ e_{v}^{i,j} \\ e_{p}^{i,j} \end{bmatrix} \right) + \rho\left( \left( \bar{E}_{proj}^{i} \right)^{T} \Sigma_{proj,i}^{-1}\, \bar{E}_{proj}^{i} \right)$$

where $E_2$ is the second deviation covariance; $e_{R}^{i,j}$, $e_{v}^{i,j}$, and $e_{p}^{i,j}$ respectively denote the direction, velocity, and position residuals from the $i$-th frame image to the $j$-th frame image; $\bar{E}_{proj}^{i}$ denotes the residual vector of the marginalized reprojection error in the $i$-th frame image, obtained from the reprojection; $\rho$ denotes the scaling coefficient of the two residual terms; $\Sigma_{I,i,j}$ is the covariance matrix obtained after pre-integrating the gyroscope data and the acceleration data; and $(\cdot)^{T}$ denotes the transpose.
Optionally, the target neural network is constructed according to a PoseNet model, and the target neural network training process includes: inputting the samples into a pre-trained neural network to obtain the posterior probability distribution of each network weight; determining, according to the posterior probability distribution, the relative entropy between the approximation of each network layer and the posterior probability distribution; and completing the training of the pre-trained neural network with the objective of minimizing the relative entropy between the approximation and the posterior probability distribution, thereby obtaining the target neural network.
According to a second aspect, an embodiment of the present invention provides a vehicle positioning apparatus, including: the first sensor data acquisition module is used for acquiring first sensor data; the first positioning uncertainty data determining module is used for obtaining first positioning uncertainty data according to the first sensor data; the second positioning uncertainty data determining module is used for acquiring second sensor data when the first positioning uncertainty data does not meet a first preset condition, and acquiring second positioning uncertainty data according to the second sensor data and the first sensor data; and the first positioning module is used for positioning the vehicle according to the first sensor data and the second sensor data when the second positioning uncertainty data meets a second preset condition.
According to a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the vehicle positioning method according to the first aspect or any one of the embodiments of the first aspect when executing the program.
According to a fourth aspect, an embodiment of the present invention provides a storage medium having stored thereon computer instructions that, when executed by a processor, perform the steps of the vehicle localization method of the first aspect or any of the embodiments of the first aspect.
The technical scheme of the invention has the following advantages:
According to the vehicle positioning method/device provided by this embodiment, the positioning uncertainty is evaluated from the first sensor data, and the second sensor data are added only when the uncertainty fails to meet the condition. Adding sensor information step by step reduces the computational load on the vehicle, and because each addition of sensor data is triggered only when the positioning uncertainty fails to meet the condition, this stepwise approach further reduces the computational load while the positioning accuracy is guaranteed.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of one specific example of a vehicle positioning method in the embodiment of the invention;
FIG. 2 is a schematic block diagram of a specific example of a vehicle locating device in accordance with an embodiment of the present invention;
fig. 3 is a schematic block diagram of a specific example of an electronic device in the embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; the two elements may be directly connected or indirectly connected through an intermediate medium, or may be communicated with each other inside the two elements, or may be wirelessly connected or wired connected. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The embodiment provides a vehicle positioning method, as shown in fig. 1, including the following steps:
s101, acquiring first sensor data; the first sensor may be a visual sensor, such as a camera. The first sensor data may be image information captured by a camera during the travel of the vehicle.
S102, obtaining first positioning uncertainty data according to the first sensor data;
in the positioning uncertainty characterization positioning process, uncertainty of deviation degree between the vehicle position coordinates and the vehicle actual position coordinates obtained by multi-source information fusion prediction is used, namely the uncertainty degree of the predicted position and the actual position within an acceptable precision range after the vehicle position is predicted by multi-source information fusion.
The first positioning uncertainty data may be obtained from the first sensor data by inputting the image information captured by the camera into a target neural network. The target neural network may be constructed from a PoseNet model: the posterior probability distribution of the current network node weights is obtained from a sample set and the corresponding sample labels, the distribution is approximated by variational inference, and the positioning error is obtained. After training on a large amount of data from the Cambridge Landmarks data set, a strong correlation between positioning error and uncertainty is observed: the higher the positioning uncertainty, the larger the positioning error, and the relationship between the two is strongly linear, so the first positioning uncertainty data can be obtained from the positioning error.
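The variational step above can be approximated in practice by Monte Carlo sampling of a stochastic network. The following is a minimal, self-contained sketch, not the patent's actual network: a tiny randomly-weighted MLP stands in for the PoseNet-style regressor, dropout is kept active at inference, and the spread of the sampled pose predictions serves as the positioning uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a PoseNet-style pose regressor: a tiny two-layer
# MLP with random (untrained) weights, used only to demonstrate the sampling.
W1 = rng.normal(size=(128, 64))
W2 = rng.normal(size=(64, 3)) * 0.1      # network outputs a 3-D position

def stochastic_pose(x, p_drop=0.2):
    """One Monte Carlo forward pass with dropout kept active at inference."""
    h = np.maximum(x @ W1, 0.0)          # hidden layer with ReLU
    mask = rng.random(h.shape) > p_drop  # random dropout mask
    h = h * mask / (1.0 - p_drop)        # inverted-dropout scaling
    return h @ W2

x = rng.normal(size=(128,))              # stands in for an image feature vector
samples = np.stack([stochastic_pose(x) for _ in range(100)])

mean_pose = samples.mean(axis=0)
# Positioning uncertainty: trace of the sample covariance of the predictions.
# Higher spread across the stochastic passes means higher uncertainty.
uncertainty = np.trace(np.cov(samples.T))
print(mean_pose, uncertainty)
```

Because the dropout masks differ per pass, the predictions scatter around the mean pose, and the trace of their covariance plays the role of the uncertainty value that is compared against the preset condition.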
S103, when the first positioning uncertainty data does not meet a first preset condition, acquiring second sensor data, and acquiring second positioning uncertainty data according to the second sensor data and the first sensor data;
illustratively, the first preset condition may be that the first positioning uncertainty data is less than or equal to 0.12 (corresponding to a positioning error of 8 meters). The second sensor may be a gyroscope or an accelerometer, and since the gyroscope is relatively accurate in a short time and may drift in a long time, and the accelerometer is opposite to the gyroscope, the second sensor is taken as the gyroscope for explanation in this embodiment, and the manner of acquiring the gyroscope data may be to acquire the gyroscope data in the in-vehicle IMU.
The second positioning uncertainty data may be obtained according to the second sensor data and the first sensor data by:
firstly, data preprocessing is carried out, and a coordinate system corresponding to the IMU is converted into a camera coordinate system, specifically:
in order to estimate the pose, sparse key point information is needed, and the key points are obtained by preprocessing the image, and specifically include the following steps:
the image preprocessing is mainly to calculate the reprojection error of key points, and the 3D position information of the kth road sign is set as Xk(the position information may be derived from GPS), and the coordinates of the ith frame image in the two-dimensional coordinate system corresponding to the position information areMinimizing reprojection errors
Where pi () is an operator that projects a 3D point onto an image.
Obtaining the position and the direction of the IMU in the ith frame image according to the formula, wherein the method comprises the following steps:
wherein the content of the first and second substances,respectively representing the direction pose parameters of the camera and the IMU in the ith frame image,RCBTo convert from the IMU coordinate system to the rotation matrix of the camera coordinate system,the position of the camera and IMU at the time of the ith frame image,CpBand (4) converting the position coordinates of the camera into metric units, wherein s is a scale factor and is the position of the IMU in a camera coordinate system.
Secondly, the first motion relation during vehicle travel, which includes the direction of motion, is obtained according to the second sensor data and the first sensor data:

$$R_{WB}^{i+1} = R_{WB}^{i}\, \Delta R_{i,i+1}\, \mathrm{Exp}\left( J_{\Delta R}^{g}\, b^{g} \right)$$

where $R_{WB}^{i+1}$ is the direction of the IMU in the $(i+1)$-th frame, $R_{WB}^{i}$ is the direction of the IMU in the $i$-th frame, $\Delta R_{i,i+1}$ is the pre-integrated measurement of the IMU orientation, $b^{g}$ denotes the gyroscope bias, $J_{\Delta R}^{g}$ is a Jacobian matrix that can be obtained by pre-integration, and $\mathrm{Exp}(\cdot)$ is the exponential map of the Lie group.

Thirdly, the first motion residual is obtained according to the first motion relation:

$$e_{R}^{i,i+1} = \mathrm{Log}\left( \left( \Delta R_{i,i+1}\, \mathrm{Exp}\left( J_{\Delta R}^{g}\, b^{g} \right) \right)^{T} \left( R_{WB}^{i} \right)^{T} R_{WB}^{i+1} \right)$$

where $e_{R}^{i,i+1}$ is the directional residual (the first motion residual) from the $i$-th frame image to the $(i+1)$-th frame image, $\left( R_{WB}^{i} \right)^{T}$ is the transpose of $R_{WB}^{i}$, the remaining symbols are as above, and $\mathrm{Log}(\cdot)$ is the logarithmic map of the Lie group, the inverse of $\mathrm{Exp}(\cdot)$.
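The directional residual above can be checked numerically with a small SO(3) toolbox; all numeric values below (biases, Jacobian, rotations) are hypothetical. When the frame-to-frame rotation matches the pre-integrated measurement, the residual reduces to the small bias-correction term.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Exp(): axis-angle vector -> rotation matrix (Rodrigues formula)."""
    th = np.linalg.norm(w)
    if th < 1e-10:
        return np.eye(3) + hat(w)
    A = np.sin(th) / th
    B = (1.0 - np.cos(th)) / th**2
    return np.eye(3) + A * hat(w) + B * (hat(w) @ hat(w))

def so3_log(R):
    """Log(): rotation matrix -> axis-angle vector (inverse of Exp)."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    th = np.arccos(c)
    if th < 1e-10:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return th / (2.0 * np.sin(th)) * w

# Hypothetical values for one frame pair (i, i+1).
R_i = so3_exp(np.array([0.0, 0.0, 0.1]))    # IMU orientation at frame i
dR = so3_exp(np.array([0.0, 0.0, 0.05]))    # pre-integrated gyro rotation
J_g = 0.01 * np.eye(3)                      # Jacobian w.r.t. the gyro bias
b_g = np.array([0.001, 0.0, 0.0])           # current gyro bias estimate
R_i1 = R_i @ dR                             # simulated frame i+1 orientation

# Directional residual: Log((dR Exp(J_g b_g))^T  R_i^T  R_{i+1}).
e_R = so3_log((dR @ so3_exp(J_g @ b_g)).T @ R_i.T @ R_i1)
print(np.linalg.norm(e_R))                  # tiny: only the bias term remains
```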
Then, according to the first motion residual, obtaining second positioning uncertainty data, including: step 1, obtaining a first deviation covariance (a deviation covariance of visual information and a gyroscope) according to a first motion residual; and 2, obtaining second positioning uncertainty data according to the first deviation covariance.
The first deviation covariance (the deviation covariance of the visual information and the gyroscope) in step 1 is solved as follows:

$$E_{1} = \rho\left( \left( e_{R}^{i,j} \right)^{T} \Sigma_{R,i,j}^{-1}\, e_{R}^{i,j} \right) + \rho\left( \left( \bar{E}_{proj}^{i} \right)^{T} \Sigma_{proj,i}^{-1}\, \bar{E}_{proj}^{i} \right), \qquad \Sigma_{proj,i} = \left( \sum_{k} J_{k}^{T}\, \Sigma_{i}^{-1}\, J_{k} \right)^{-1} \tag{1}$$

where $E_1$ is the deviation covariance of the visual information and the gyroscope (the first deviation covariance); $e_{R}^{i,j}$ is the directional residual from the $i$-th frame image to the $j$-th frame image and $(e_{R}^{i,j})^{T}$ is its transpose; $\Sigma_{R,i,j}$ is the covariance matrix obtained after adding the gyroscope; $\rho$ is the scaling coefficient of the two residual terms, which can be expressed by a Huber function; $\Sigma_{proj,i}$ is the covariance matrix of the marginalized reprojection errors in the $i$-th frame image, in which $J_k$ is the Jacobian matrix of the reprojection errors; $\Sigma_{i,j}$ and $\Sigma_{i}$ respectively denote the covariance matrices of the pre-integration and reprojection errors; and $\bar{E}_{proj}^{i}$ is the residual vector of the marginalized reprojection error in the $i$-th frame image, evaluated at the linearization point $\{\bar{R}_{WC}^{i},\ {}_{W}\bar{p}_{C}^{i}\}$ (the direction and position of the linearization point), where ${}_{W}p_{C}^{i}$ is the camera position at the time of the $i$-th frame image; the camera pose parameters that minimize this cost are the optimal ones.
In step 2, in order to quantify the uncertainty, the trace of the deviation covariance (the first deviation covariance) of the visual information and the gyroscope for the last frame image may be computed, and the mean square error (MSE) with respect to the expected value may be used as the second positioning uncertainty data.
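As a toy illustration of step 2, with a hypothetical 3x3 deviation covariance the trace-based uncertainty is simply the sum of the diagonal variances:

```python
import numpy as np

# Hypothetical first deviation covariance for the last frame image.
E1 = np.array([[0.04, 0.01, 0.00],
               [0.01, 0.03, 0.00],
               [0.00, 0.00, 0.05]])

# Quantified uncertainty: the trace, i.e. the sum of the per-axis variances.
uncertainty = np.trace(E1)
print(uncertainty)   # compared against the preset threshold
```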
And S104, when the second positioning uncertainty data meet a second preset condition, positioning the vehicle according to the first sensor data and the second sensor data.
Illustratively, the second preset condition may be that the second positioning uncertainty data is less than or equal to 0.1 (corresponding to a positioning error of 5 meters). This embodiment does not limit the second preset condition, which can be determined by a person skilled in the art as needed. The vehicle may be positioned according to the first sensor data and the second sensor data by minimizing equation (1) to obtain the position information of the vehicle.
According to the vehicle positioning method provided by this embodiment, the positioning uncertainty is evaluated from the first sensor data, and the second sensor data are added only when the uncertainty fails to meet the condition. Adding sensor information step by step reduces the computational load on the vehicle, and because each addition of sensor data is triggered only when the positioning uncertainty fails to meet the condition, this stepwise approach further reduces the computational load while the positioning accuracy is guaranteed.
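The staged decision logic of steps S101-S104 can be sketched as follows; the two thresholds come from this description, while the function name and the uncertainty callbacks are illustrative stand-ins for the real estimators.

```python
def locate_vehicle(visual_uncert, fused_uncert_fn, full_uncert_fn,
                   thr1=0.12, thr2=0.10):
    """Return which sensor set was needed to position the vehicle."""
    if visual_uncert <= thr1:          # S102: camera alone is accurate enough
        return "camera"
    u2 = fused_uncert_fn()             # S103: add gyroscope data
    if u2 <= thr2:                     # S104: camera + gyroscope suffice
        return "camera+gyro"
    full_uncert_fn()                   # otherwise add accelerometer data too
    return "camera+gyro+accel"

print(locate_vehicle(0.08, lambda: 0.20, lambda: 0.05))
print(locate_vehicle(0.20, lambda: 0.09, lambda: 0.05))
print(locate_vehicle(0.20, lambda: 0.15, lambda: 0.05))
```

The callbacks are only invoked when the cheaper estimate fails its condition, which is exactly how the stepwise scheme saves computation.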
As an optional implementation manner of this embodiment, the vehicle positioning method further includes:
firstly, when second positioning uncertainty data does not meet a second preset condition, acquiring third sensor data;
the third sensor may be a gyroscope or an accelerometer, the present embodiment is described by taking the third sensor as an accelerometer, and the manner of acquiring the accelerometer data may be to acquire the accelerometer data in the in-vehicle IMU.
And secondly, obtaining third positioning uncertainty data according to the third sensor data, the second sensor data and the first sensor data and positioning the vehicle.
Illustratively, the manner of deriving the third positioning uncertainty data from the third sensor data, the second sensor data, and the first sensor data may comprise the steps of:
the first step is as follows: and obtaining a second motion relation in the running process of the vehicle according to the second sensor data, the third sensor data and the first sensor data, wherein the motion relation comprises direction, speed and position.
Wherein the content of the first and second substances,the direction of the IMU in frame i +1,is the direction of the IMU in the ith frame, Δ Ri,i+1As a result of the pre-measurement of the orientation of the IMU,the deviation of the gyroscope is represented by a deviation,representing a jacobian matrix, which can be obtained by pre-fusion, Exp (-) is an exponential mapping of lie groups,the velocity of the IMU in the (i + 1) th frame,for the velocity of the IMU in frame i, gw represents the gravity vector, Δ t represents the time difference between frame i and frame i +1, Δ vi,i+1Representing the result of a pre-measurement of the velocity of the IMU,which is a jacobian matrix, can be obtained by pre-fusion,the deviation of the accelerometer is indicated and,indicating the position of the IMU in the i +1 th frame,is the position of the IMU in the ith frame, Δ pi,i+1The result of the prediction of the position of the IMU is shown.
Secondly, the second motion residual, which includes the direction, velocity, and position residuals, is obtained according to the second motion relation:

$$e_{R}^{i,i+1} = \mathrm{Log}\left( \left( \Delta R_{i,i+1}\, \mathrm{Exp}\left( J_{\Delta R}^{g}\, b^{g} \right) \right)^{T} \left( R_{WB}^{i} \right)^{T} R_{WB}^{i+1} \right)$$

$$e_{v}^{i,i+1} = \left( R_{WB}^{i} \right)^{T} \left( v_{WB}^{i+1} - v_{WB}^{i} - g_{W}\, \Delta t \right) - \left( \Delta v_{i,i+1} + J_{\Delta v}^{g}\, b^{g} + J_{\Delta v}^{a}\, b^{a} \right)$$

$$e_{p}^{i,i+1} = \left( R_{WB}^{i} \right)^{T} \left( p_{WB}^{i+1} - p_{WB}^{i} - v_{WB}^{i}\, \Delta t - \tfrac{1}{2}\, g_{W}\, \Delta t^{2} \right) - \left( \Delta p_{i,i+1} + J_{\Delta p}^{g}\, b^{g} + J_{\Delta p}^{a}\, b^{a} \right)$$

where $e_{R}^{i,i+1}$ is the directional residual, $e_{v}^{i,i+1}$ the velocity residual, and $e_{p}^{i,i+1}$ the position residual from the $i$-th frame image to the $(i+1)$-th frame image.
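A quick numerical sanity check of the velocity and position residuals above, with the frame-$i$ orientation taken as the identity and the bias terms dropped for brevity; all values are hypothetical. Propagating a state with the second motion relation and then evaluating the residuals against the same pre-integrated increments yields zero by construction.

```python
import numpy as np

g_w = np.array([0.0, 0.0, -9.81])   # gravity vector g_W
dt = 0.1                            # time difference between frames i and i+1

v_i = np.array([10.0, 0.0, 0.0])    # IMU velocity at frame i
p_i = np.zeros(3)                   # IMU position at frame i
dv = np.array([0.5, 0.0, 0.0])      # pre-integrated velocity increment
dp = np.array([1.025, 0.0, 0.0])    # pre-integrated position increment

# Propagate with the second motion relation (identity orientation, zero bias).
v_i1 = v_i + g_w * dt + dv
p_i1 = p_i + v_i * dt + 0.5 * g_w * dt**2 + dp

# Velocity and position residuals: measured change minus the pre-integrated
# prediction; zero here because the state was propagated with the same model.
e_v = (v_i1 - v_i - g_w * dt) - dv
e_p = (p_i1 - p_i - v_i * dt - 0.5 * g_w * dt**2) - dp
print(np.linalg.norm(e_v), np.linalg.norm(e_p))
```

With real sensor data the residuals are nonzero, and driving them down (together with the reprojection term) is what yields the fused pose.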
Thirdly, the third positioning uncertainty data are obtained according to the second motion residual, which comprises: step 1, obtaining a second deviation covariance (the deviation covariance of the visual information, the gyroscope, and the accelerometer) according to the second motion residual; and step 2, obtaining the third positioning uncertainty data according to the second deviation covariance.
The second deviation covariance (the deviation covariance of the visual information, the gyroscope, and the accelerometer) in step 1 is solved as follows:

E_2 = Σ_{i,j} ‖[r_{ΔR,i,j}, r_{Δv,i,j}, r_{Δp,i,j}]‖²_{Σ_{I,i,j}} + Σ_k ρ( r_k^T · Σ_k^{-1} · r_k )

where E_2 is the deviation covariance (second deviation covariance) of the visual information, gyroscope, and accelerometer; r_{ΔR,i,j}, r_{Δv,i,j}, and r_{Δp,i,j} respectively represent the direction, velocity, and position residuals from the i-th frame image to the j-th frame image; r_k represents a residual obtained by re-projection, and ρ represents the proportionality coefficient between the two residual terms; Σ_{I,i,j} is the covariance matrix obtained by adding the gyroscope data and the acceleration data after pre-integration; r_k is the residual vector of the marginalized re-projection error in the i-th frame image, and r_k^T is the transpose of r_k.
In step 2, to quantify the uncertainty, the trajectory of the visual information of the last frame image and the deviation covariance (second deviation covariance) of the gyroscope and the accelerometer may be calculated, and the mean square error (MSE) with respect to the expected value may be used as the third positioning uncertainty data.
Vehicle positioning may then be performed by minimizing equation (2) on the basis of the third sensor data, the second sensor data, and the first sensor data, thereby obtaining the positioning information. The vehicle positioning method provided by this embodiment improves positioning accuracy by progressively adding data from multiple sensors.
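The staged strategy of this embodiment (vision first, adding the gyroscope and then the accelerometer only while the uncertainty remains too high) can be sketched as follows. The threshold values and the `estimate`/`uncertainty` callables are hypothetical stubs standing in for the neural network and covariance computations described above, not the patented implementations:

```python
def locate_vehicle(vision_data, get_gyro, get_accel,
                   estimate, uncertainty, tau1=0.5, tau2=0.5):
    """Staged positioning: add sensor data only while uncertainty stays high.

    estimate(sensors)   -> pose computed from the current sensor list
    uncertainty(sensors) -> scalar positioning uncertainty for that list
    tau1, tau2          -> first and second preset conditions (thresholds)
    """
    sensors = [vision_data]                  # first sensor data (vision)
    if uncertainty(sensors) <= tau1:         # first preset condition met
        return estimate(sensors)
    sensors.append(get_gyro())               # second sensor data (gyroscope)
    if uncertainty(sensors) <= tau2:         # second preset condition met
        return estimate(sensors)
    sensors.append(get_accel())              # third sensor data (accelerometer)
    return estimate(sensors)
```

The sensors are queried lazily (`get_gyro`, `get_accel` are callables), so when the visual estimate alone is confident enough, no inertial data needs to be read at all.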
As an optional implementation manner of this embodiment, the vehicle positioning method further includes: and when the first positioning uncertainty data meets a first preset condition, positioning the vehicle according to the first sensor data. The vehicle location based on the first sensor data may be performed by inputting the first sensor data into a target neural network to obtain vehicle location information.
As an optional implementation of this embodiment, the target neural network is constructed from a PoseNet model, and its training process includes: inputting samples into a pre-trained neural network to obtain the posterior probability distribution of each network weight; determining, from the posterior probability distribution, the relative entropy between the approximate distribution of each network layer and the posterior probability distribution; and completing the training of the pre-trained neural network with the goal of minimizing the relative entropy between the approximation and the posterior probability distribution, to obtain the target neural network.
Illustratively, the vehicle-mounted camera collects image information, and the images are input into the constructed PoseNet neural network for training. The model defines the motion state of the vehicle as x (vehicle position) and q (heading direction), and the loss function of the network is:

loss(I) = ‖x̂ − x‖₂ + β · ‖q̂ − q/‖q‖‖₂
where β is a weighting parameter that allows the position information and the direction information to be optimized simultaneously during training; the model is trained by stochastic gradient descent and can obtain good results even with relatively small samples.
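As a numerical illustration of the loss above (a minimal numpy sketch; the `beta` default here is an assumption, not a value from this patent):

```python
import numpy as np

def posenet_loss(x_pred, x_true, q_pred, q_true, beta=500.0):
    """PoseNet-style loss: position error plus beta-weighted orientation
    error against the unit-normalized ground-truth quaternion."""
    pos_err = np.linalg.norm(x_pred - x_true)
    q_unit = q_true / np.linalg.norm(q_true)   # q / ||q||
    ori_err = np.linalg.norm(q_pred - q_unit)
    return pos_err + beta * ori_err
```

Because orientation errors are numerically much smaller than position errors in metric units, the β factor keeps the two terms at comparable scales during optimization.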
The posterior probability distribution of the current network node weights is then solved from the data set obtained by training and the corresponding labels; dropout sampling is used during training, and a Bayesian model yields the uncertainty of the current position of the vehicle.
Specifically, the posterior probability distribution of the current network weights W is first obtained from the training data set X and labels Y, namely:
p(W|X,Y);
Variational inference is then applied to minimize the relative entropy between an approximating distribution q(W) and the posterior probability distribution:
KL(q(W)||p(W|X,Y));
where the approximation for each layer satisfies:

b_{i,j} ~ Bernoulli(p_i), j = 1, 2, ..., n−1

W_i = M_i · diag(b_i);

where M_i is the variational parameter matrix, and the approximate distribution of each layer follows a Bernoulli distribution.
Finally, the objective loss function, which is the relative entropy between the approximation and the posterior probability distribution, is minimized.
The detailed pseudo code is as follows:
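A minimal numpy sketch of the dropout-sampling idea for a single linear layer (a hypothetical illustration, not the patent's pseudo code): the weight is masked as W_i = M_i·diag(b_i) with b_i ~ Bernoulli(p), and the variance of the sampled predictions serves as the position uncertainty.

```python
import numpy as np

def mc_dropout_predict(x, M, p_keep=0.9, n_samples=200, seed=0):
    """Monte-Carlo dropout for one linear layer: sample W = M diag(b),
    b ~ Bernoulli(p_keep), and return the predictive mean and variance.
    The variance is used as the uncertainty of the current estimate."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        b = rng.binomial(1, p_keep, size=M.shape[1]).astype(float)
        W = M * b / p_keep            # inverted-dropout rescaling keeps E[W] = M
        preds.append(W @ x)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)
```

With p_keep = 1.0 no weights are dropped, so every sample is identical and the reported uncertainty collapses to zero; lower keep probabilities spread the samples and raise the variance.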
the present embodiment provides a vehicle positioning apparatus, as shown in fig. 2, including:
a first sensor data acquisition module 201, configured to acquire first sensor data; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
A first positioning uncertainty data determining module 202, configured to obtain first positioning uncertainty data according to the first sensor data; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
A second positioning uncertainty data determining module 203, configured to obtain second sensor data when the first positioning uncertainty data does not meet a first preset condition, and obtain second positioning uncertainty data according to the second sensor data and the first sensor data; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
The first positioning module 204 is configured to, when the second positioning uncertainty data meets a second preset condition, perform vehicle positioning according to the first sensor data and the second sensor data. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the vehicle positioning apparatus further includes:
the third sensor data determining module is used for acquiring third sensor data when the second positioning uncertainty data does not meet a second preset condition; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
And the second positioning module is used for obtaining third positioning uncertainty data according to the third sensor data, the second sensor data and the first sensor data and positioning the vehicle. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the method further includes: and the third positioning module is used for positioning the vehicle according to the first sensor data when the first positioning uncertainty data meets a first preset condition. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation of this embodiment, the first sensor data is image data from a vision sensor, and the first positioning uncertainty data determining module 202 includes: a first positioning uncertainty data determining submodule, configured to input the image data obtained by the vision sensor into a target neural network to obtain the first positioning uncertainty data. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation of this embodiment, the second sensor data is data from a gyroscope, and the second positioning uncertainty data determining module 203 includes:
the first motion relation determining module is used for obtaining a first motion relation in the running process of the vehicle according to the second sensor data and the first sensor data; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
The first motion residual determining module is used for obtaining a first motion residual according to the first motion relation; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
And the second positioning uncertainty data submodule is used for obtaining second positioning uncertainty data according to the first motion residual. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the second positioning module includes:
the second motion relation determining module is used for obtaining a second motion relation in the running process of the vehicle according to the second sensor data, the third sensor data and the first sensor data; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
The second motion residual determining module is used for obtaining a second motion residual according to the second motion relation; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
And the second positioning submodule is used for obtaining third positioning uncertainty data according to the second motion residual. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the second positioning uncertainty data submodule includes:
the first deviation covariance determination module is used for obtaining a first deviation covariance according to the first motion residual; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
The second positioning uncertainty data calculation module is used for obtaining second positioning uncertainty data according to the first deviation covariance; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
A first bias covariance determination module that performs operations comprising:
E_1 = ‖r_{ΔR,i,j}‖²_{Σ_{R,i,j}} + Σ_k ρ( r_k^T · Σ_k^{-1} · r_k )

where E_1 is the first deviation covariance; r_{ΔR,i,j} is the direction residual from the i-th frame image to the j-th frame image; Σ_{R,i,j} is the covariance matrix obtained by adding the gyroscope; ρ is the proportionality coefficient between the two residual terms; r_k is the residual vector of the marginalized re-projection error in the i-th frame image, and r_k^T is the transpose of r_k. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the second positioning sub-module includes:
the second deviation covariance determination module is used for obtaining a second deviation covariance according to the second motion residual; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
A third positioning uncertainty data calculation module, configured to obtain third positioning uncertainty data according to the second deviation covariance; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
A second bias covariance determination module that performs operations comprising:
E_2 = Σ_{i,j} ‖[r_{ΔR,i,j}, r_{Δv,i,j}, r_{Δp,i,j}]‖²_{Σ_{I,i,j}} + Σ_k ρ( r_k^T · Σ_k^{-1} · r_k )

where E_2 is the second deviation covariance; r_{ΔR,i,j}, r_{Δv,i,j}, and r_{Δp,i,j} respectively represent the direction, velocity, and position residuals from the i-th frame image to the j-th frame image; r_k represents a residual obtained by re-projection, and ρ represents the proportionality coefficient between the two residual terms; Σ_{I,i,j} is the covariance matrix obtained by adding the gyroscope data and the acceleration data after pre-integration; r_k is the residual vector of the marginalized re-projection error in the i-th frame image, and r_k^T is the transpose of r_k. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
As an optional implementation manner of this embodiment, the target neural network is constructed according to a PoseNet model, and the first positioning uncertainty data determining sub-module includes:
the posterior probability distribution determining module is used for inputting the sample to a pre-trained neural network to obtain the posterior probability distribution of each network weight; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
The relative entropy determining module is used for determining the relative entropy between the approximate value of each layer of the network and the posterior probability distribution according to the posterior probability distribution; for details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
And the training module is used for finishing the training of the pre-trained neural network by taking the relative entropy between the minimized approximate value and the posterior probability distribution as a target to obtain the target neural network. For details, reference is made to the corresponding parts of the above method embodiments, which are not described herein again.
The embodiment of the present application also provides an electronic device, as shown in fig. 3, including a processor 310 and a memory 320, where the processor 310 and the memory 320 may be connected by a bus or in other manners.
The processor 310 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or any combination thereof.
The memory 320, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the vehicle positioning method in embodiments of the present invention. The processor executes various functional applications and data processing of the processor by executing non-transitory software programs, instructions, and modules stored in the memory.
The memory 320 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor, and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory 320 may optionally include memory located remotely from the processor, which may be connected to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 320 and, when executed by the processor 310, perform a vehicle localization method as in the embodiment shown in FIG. 1.
The details of the electronic device may be understood with reference to the corresponding related description and effects in the embodiment shown in fig. 1, and are not described herein again.
The present embodiment also provides a computer storage medium storing computer-executable instructions that can execute the vehicle positioning method in any of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk drive (HDD), a solid-state drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kinds described above.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaustively list all embodiments here, and obvious variations or modifications derived therefrom remain within the scope of the invention.
Claims (12)
1. A vehicle positioning method, comprising the steps of:
acquiring first sensor data;
obtaining first positioning uncertainty data according to the first sensor data;
when the first positioning uncertainty data does not meet a first preset condition, acquiring second sensor data, and acquiring second positioning uncertainty data according to the second sensor data and the first sensor data;
and when the second positioning uncertainty data meets a second preset condition, positioning the vehicle according to the first sensor data and the second sensor data.
2. The method of claim 1, further comprising:
when the second positioning uncertainty data does not meet a second preset condition, acquiring third sensor data;
and obtaining third positioning uncertainty data according to the third sensor data, the second sensor data and the first sensor data, and positioning the vehicle.
3. The method of claim 1, further comprising: and when the first positioning uncertainty data meet a first preset condition, positioning the vehicle according to the first sensor data.
4. The method of claim 1, wherein the first sensor data is image data from a vision sensor, and wherein deriving the first positioning uncertainty data from the first sensor data comprises: inputting the image data obtained by the vision sensor into a target neural network to obtain the first positioning uncertainty data.
5. The method of claim 4, wherein the second sensor data is data from a gyroscope, and wherein deriving second positioning uncertainty data from the second sensor data and the first sensor data comprises:
obtaining a first motion relation in the running process of the vehicle according to the second sensor data and the first sensor data;
obtaining a first motion residual according to the first motion relation;
and obtaining second positioning uncertainty data according to the first motion residual.
6. The method of claim 2, wherein the third sensor data is data from an accelerometer, and wherein deriving third positioning uncertainty data from the third sensor data, the second sensor data, and the first sensor data comprises:
obtaining a second motion relation in the running process of the vehicle according to the second sensor data, the third sensor data and the first sensor data;
obtaining a second motion residual according to the second motion relation;
and obtaining third positioning uncertainty data according to the second motion residual.
7. The method of claim 5, wherein deriving second positioning uncertainty data from the first motion residuals comprises:
obtaining a first deviation covariance according to the first motion residual;
obtaining second positioning uncertainty data according to the first deviation covariance;
obtaining a first deviation covariance according to the motion residual, including:
wherein:

E_1 = ‖r_{ΔR,i,j}‖²_{Σ_{R,i,j}} + Σ_k ρ( r_k^T · Σ_k^{-1} · r_k ), with Σ_k = J_k · Σ_i · J_k^T

where E_1 is the first deviation covariance; ρ is the proportionality coefficient between the two residual terms; r_{ΔR,i,j} is the direction residual from the i-th frame image to the j-th frame image; Σ_{R,i,j} is the covariance matrix obtained by adding the gyroscope; Σ_{i,j} and Σ_i are the covariance matrices of the pre-fusion and re-projection errors, respectively; Σ_k is the covariance matrix of the marginalized re-projection error in the i-th frame image, in which J_k is the Jacobian matrix of the re-projection errors; r_k is the residual vector of the marginalized re-projection error in the i-th frame image, and r_k^T is the transpose of r_k.
8. The method of claim 6, wherein said deriving third positioning uncertainty data from said second motion residuals comprises:
obtaining a second deviation covariance according to the second motion residual;
obtaining third positioning uncertainty data according to the second deviation covariance;
obtaining a second deviation covariance according to the second motion residual, including:
wherein:

E_2 = Σ_{i,j} ‖[r_{ΔR,i,j}, r_{Δv,i,j}, r_{Δp,i,j}]‖²_{Σ_{I,i,j}} + Σ_k ρ( r_k^T · Σ_k^{-1} · r_k )

where E_2 is the second deviation covariance; r_{ΔR,i,j}, r_{Δv,i,j}, and r_{Δp,i,j} respectively represent the direction, velocity, and position residuals from the i-th frame image to the j-th frame image; r_k represents a residual obtained by re-projection, and ρ represents the proportionality coefficient between the two residual terms; Σ_{I,i,j} is the covariance matrix obtained by adding the gyroscope data and the acceleration data after pre-integration; r_k is the residual vector of the marginalized re-projection error in the i-th frame image, and r_k^T is the transpose of r_k.
9. The method of claim 4, wherein the target neural network is constructed according to a PoseNet model, and wherein the target neural network training process comprises:
inputting the sample into a pre-trained neural network to obtain posterior probability distribution of each network weight;
determining the relative entropy between the approximate value of each layer network and the posterior probability distribution according to the posterior probability distribution;
and finishing the training of the pre-trained neural network by taking the relative entropy between the minimized approximate value and the posterior probability distribution as a target to obtain the target neural network.
10. A vehicle positioning device, comprising:
the first sensor data acquisition module is used for acquiring first sensor data;
the first positioning uncertainty data determining module is used for obtaining first positioning uncertainty data according to the first sensor data;
the second positioning uncertainty data determining module is used for acquiring second sensor data when the first positioning uncertainty data does not meet a first preset condition, and acquiring second positioning uncertainty data according to the second sensor data and the first sensor data;
and the first positioning module is used for positioning the vehicle according to the first sensor data and the second sensor data when the second positioning uncertainty data meets a second preset condition.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the vehicle localization method according to any of claims 1-9 are implemented when the program is executed by the processor.
12. A storage medium having computer instructions stored thereon, wherein the instructions, when executed by a processor, perform the steps of the vehicle localization method of any of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110305408.XA CN113155121B (en) | 2021-03-22 | 2021-03-22 | Vehicle positioning method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113155121A true CN113155121A (en) | 2021-07-23 |
CN113155121B CN113155121B (en) | 2024-04-02 |
Family
ID=76887947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110305408.XA Active CN113155121B (en) | 2021-03-22 | 2021-03-22 | Vehicle positioning method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113155121B (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2966477A1 (en) * | 2014-07-09 | 2016-01-13 | ANavS GmbH | Method for determining the position and attitude of a moving object using low-cost receivers |
US20180081027A1 (en) * | 2016-09-21 | 2018-03-22 | Pinhas Ben-Tzvi | Linear optical sensor arrays (losa) tracking system for active marker based 3d motion tracking |
CN107869989A (en) * | 2017-11-06 | 2018-04-03 | 东北大学 | A kind of localization method and system of the fusion of view-based access control model inertial navigation information |
US20180231385A1 (en) * | 2016-10-25 | 2018-08-16 | Massachusetts Institute Of Technology | Inertial Odometry With Retroactive Sensor Calibration |
CN109991636A (en) * | 2019-03-25 | 2019-07-09 | 启明信息技术股份有限公司 | Map constructing method and system based on GPS, IMU and binocular vision |
WO2020048623A1 (en) * | 2018-09-07 | 2020-03-12 | Huawei Technologies Co., Ltd. | Estimation of a pose of a robot |
CN111210477A (en) * | 2019-12-26 | 2020-05-29 | 深圳大学 | Method and system for positioning moving target |
WO2020155616A1 (en) * | 2019-01-29 | 2020-08-06 | 浙江省北大信息技术高等研究院 | Digital retina-based photographing device positioning method |
CN111595333A (en) * | 2020-04-26 | 2020-08-28 | 武汉理工大学 | Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion |
CN111609868A (en) * | 2020-05-29 | 2020-09-01 | 电子科技大学 | Visual inertial odometer method based on improved optical flow method |
CN111739063A (en) * | 2020-06-23 | 2020-10-02 | 郑州大学 | Electric power inspection robot positioning method based on multi-sensor fusion |
CN111750853A (en) * | 2020-06-24 | 2020-10-09 | 国汽(北京)智能网联汽车研究院有限公司 | Map establishing method, device and storage medium |
CN111795686A (en) * | 2020-06-08 | 2020-10-20 | 南京大学 | Method for positioning and mapping mobile robot |
- 2021-03-22: Application CN202110305408.XA filed in China (granted as CN113155121B, status Active)
Non-Patent Citations (5)
Title |
---|
刘洪剑; 王耀南; 谭建豪; 李树帅; 钟杭: "Design and application of an integrated navigation system for a rotor UAV", 传感技术学报 (Chinese Journal of Sensors and Actuators), no. 02, 15 February 2017 |
夏凌楠; 张波; 王营冠; 魏建明: "Robot localization based on inertial sensors and visual odometry", 仪器仪表学报 (Chinese Journal of Scientific Instrument), no. 01, 15 January 2013, pages 110-111 |
敖龙辉; 郭杭: "Stereo visual-inertial fusion positioning in indoor environments", 测绘通报 (Bulletin of Surveying and Mapping), no. 12, 25 December 2019 |
Also Published As
Publication number | Publication date |
---|---|
CN113155121B (en) | 2024-04-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113945206B (en) | Positioning method and device based on multi-sensor fusion | |
CN110243358B (en) | Multi-source fusion unmanned vehicle indoor and outdoor positioning method and system | |
CN108731670B (en) | Inertial/visual odometer integrated navigation positioning method based on measurement model optimization | |
CN111947671B (en) | Method, apparatus, computing device and computer-readable storage medium for positioning | |
CN109887057B (en) | Method and device for generating high-precision map | |
Alonso et al. | Accurate global localization using visual odometry and digital maps on urban environments | |
CN112113574B (en) | Method, apparatus, computing device and computer-readable storage medium for positioning | |
US20200364883A1 (en) | Localization of a mobile unit by means of a multi-hypothesis kalman filter method | |
CN104729506A (en) | Unmanned aerial vehicle autonomous navigation positioning method with assistance of visual information | |
CN109059907B (en) | Trajectory data processing method and device, computer equipment and storage medium | |
CN109596121B (en) | Automatic target detection and space positioning method for mobile station | |
CN112629544B (en) | Vehicle positioning method and device based on lane line | |
CN113920198B (en) | Coarse-to-fine multi-sensor fusion positioning method based on semantic edge alignment | |
CN110596741A (en) | Vehicle positioning method and device, computer equipment and storage medium | |
CN114755662A (en) | Calibration method and device for laser radar and GPS with road-vehicle fusion perception | |
CN111241224A (en) | Method, system, computer device and storage medium for target distance estimation | |
CN113252051A (en) | Map construction method and device | |
CN114111818A (en) | Universal visual SLAM method | |
CN114964276A (en) | Dynamic vision SLAM method fusing inertial navigation | |
CN113405555B (en) | Automatic driving positioning sensing method, system and device | |
CN112577479A (en) | Multi-sensor fusion vehicle positioning method and device based on map element data | |
CN112446915A (en) | Picture-establishing method and device based on image group | |
Verentsov et al. | Bayesian localization for autonomous vehicle using sensor fusion and traffic signs | |
CN113155121B (en) | Vehicle positioning method and device and electronic equipment | |
Verentsov et al. | Bayesian framework for vehicle localization using crowdsourced data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||