CN110412635A - GNSS/SINS/vision tight integration method supported by environment beacons - Google Patents
GNSS/SINS/vision tight integration method supported by environment beacons
- Publication number
- CN110412635A CN110412635A CN201910659046.7A CN201910659046A CN110412635A CN 110412635 A CN110412635 A CN 110412635A CN 201910659046 A CN201910659046 A CN 201910659046A CN 110412635 A CN110412635 A CN 110412635A
- Authority
- CN
- China
- Prior art keywords
- point
- gnss
- sins
- feature
- beacon
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/10—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
- G01C21/12—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
- G01C21/16—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
- G01C21/165—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
- G01S19/47—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement the supplementary measurement being an inertial measurement, e.g. tightly coupled inertial
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
The invention discloses a GNSS/SINS/vision tight integration method supported by environment beacons. The method uses a visual sensor, and a visual feature point cloud map containing feature descriptors, generated offline through a vision + crowdsourcing model, serves as the a priori environment beacon database. When the carrier performs real-time navigation and positioning, the system is based on differential GNSS/SINS tight integration; when the GNSS signal is severely blocked, the visual sensor rigidly mounted on the carrier captures images, dynamic-interference removal and feature extraction are applied to each image, and the extracted features are matched against the environment beacons in the visual feature point cloud map. When at least one landmark point is matched, the resection equations formed by the matched pixels and beacon points are added to the tight integration as observations, providing positioning aid for SINS. The invention can provide continuous, high-accuracy positioning results even when an extremely complex urban environment causes complete GNSS signal outage.
Description
Technical field
The present invention relates to the field of integrated navigation, and in particular to a GNSS/SINS/vision tight integration method supported by environment beacons.
Background art
After decades of research and development, the basic theory and methods of GNSS (Global Navigation Satellite System)/SINS (Strap-down Inertial Navigation System) integrated navigation are relatively mature, and stable centimeter-level positioning can be achieved under open observation environments. The integrated hardware/software GNSS/SINS solutions provided by the Canadian company NovAtel are widely adopted for their superior performance. In recent years, the autonomous-driving industry has also developed rapidly, and precise positioning, as a basic module of autonomous driving, has drawn the attention of the industry. Autonomous-driving enterprises and systems such as Tesla, GM and proPILOT all adopt integrated navigation and positioning systems with GNSS/inertial at the core. However, in complex environments, especially typical urban environments where high-rise buildings stand densely and overpasses and tunnels abound, GNSS observation quality degrades sharply due to signal blockage and interference, causing the performance of the GNSS/SINS combined system to decline significantly. With the development of the American GPS, Russian GLONASS, European Galileo and Chinese BeiDou, GNSS is evolving toward multi-frequency and multi-constellation operation, and multi-frequency multi-constellation GNSS/SINS integration improves the ambiguity fixing rate in complex environments to a certain extent; the high-precision short-term pose recursion of inertial navigation has also been effectively exploited in GNSS data preprocessing; academia has likewise proposed many robust adaptive filtering methods to handle poor-quality state and observation information. These methods provide a good repairing effect when GNSS suffers occasional short interruptions (about 30 s), but in a typical urban environment, once a longer interruption occurs, the mechanization errors of SINS diverge exponentially. This problem remains a great difficulty for GNSS/SINS integration, and a major technical bottleneck in vehicle-borne real-time positioning.
A visual sensor estimates the pose of its carrier by perceiving, recognizing and tracking feature information in the surrounding environment. Visual sensor hardware is mature, small and inexpensive, is commonly integrated in intelligent terminals, and supports computer-vision capabilities such as semantic segmentation, scene understanding and large-scale mapping, so it has broad application prospects and research value. The AutoPilot automated driving system represented by Tesla uses exactly such a vision-led multi-sensor fusion scheme. Depending on whether an a priori environment map is available, visual positioning can be divided into relative positioning and absolute positioning. Relative positioning is currently well studied; representative algorithms include the pure-vision ORB-SLAM2, LSD-SLAM, DSO and SVO, and the vision/inertial fusion MSCKF, VINS-Mono and OKVIS. These have many mature applications in mobile mapping equipment, UAVs, sweeping robots and similar devices, with positioning accuracy of about 1%. Since relative positioning accumulates pose error in the absence of loop-closure correction, and real vehicle trajectories often contain no loops, map providers such as Gaode (AMap), NavInfo, TomTom and Here are all actively deploying high-precision maps to realize absolute visual-matching positioning supported by a priori maps. At present there is no unified standard for high-precision maps, so no mature products have landed yet.
GNSS, SINS and visual sensors are strongly complementary. GNSS establishes the only unified global space-time datum and can provide a global, all-weather, high-accuracy absolute positioning system; SINS is autonomous, unaffected by the environment, high-rate, steady and continuous, and can serve as the core filter of multi-source fusion; a visual sensor can perform dead reckoning and, with the support of an a priori map, can also achieve absolute positioning, with performance between GNSS and SINS, effectively supplementing the GNSS/SINS combined system. GNSS/SINS/visual-sensor fusion is therefore an important direction for future multi-source fusion.
In view of the above problems, the present invention provides a GNSS/SINS/vision tight integration method supported by environment beacons. The whole positioning system is based on differential GNSS/SINS tight integration; aided by a feature point cloud map, dynamic-interference removal, feature extraction and environment-beacon matching are applied to the collected images, and observation constraint equations built from the collinearity of matched pixel and beacon-point pairs participate in the tight-integration solution, thereby achieving continuous, high-accuracy positioning.
Summary of the invention
The technical problem to be solved by the present invention is, in view of the defects in the prior art, to provide a GNSS/SINS/vision tight integration method supported by environment beacons.
The technical solution adopted by the present invention to solve the technical problem is as follows:
The present invention provides a GNSS/SINS/vision tight integration method supported by environment beacons. In this method, images are acquired by a visual sensor, and a visual feature point cloud map containing feature descriptors is generated offline as the a priori environment beacon database. When the carrier performs real-time navigation and positioning, the system is based on differential GNSS/SINS tight integration; when the GNSS signal is blocked, the visual sensor installed on the carrier captures images, dynamic-interference removal and feature extraction are applied to each image, and the extracted features are matched against the environment beacons in the visual feature point cloud map. When at least one landmark point is matched, the resection equations formed by the matched pixels and beacon points are added to the tight integration as observations, providing positioning aid for SINS.
Further, the method of the invention specifically includes the following steps:
Step 1: preprocess the images acquired in real time to obtain distortion-corrected images, build a convolutional neural network using deep learning to perform semantic segmentation, and mark the pixel blocks corresponding to dynamic objects.
Step 2: divide each image into grids; for each grid, extract feature points with a GPU-accelerated SIFT operator; judge whether each feature point falls into a marked dynamic-object region and, if so, delete it; sort the remaining features by response strength and keep the two feature points with the strongest responses in each grid.
Step 3: store the feature point cloud map with an auxiliary 10-ary feature tree to accelerate feature matching; obtain preliminary matches using the Euclidean distance between feature descriptors; when more than 4 environment beacon points are matched, use the RANSAC and EPnP algorithms to eliminate outlier interference.
Step 4: construct resection equations from the matched pixels and beacon points; using the extrinsic relationship between the inertial navigation unit and the camera, convert the equations into relationships between the inertial position/attitude errors and the pixel observations, and add them as observation equations to the GNSS/INS tight integration for positioning aid.
Step 5: when the number of observed satellites is above a certain threshold, shield the visual feedback observations and use GNSS/SINS tight-integration positioning directly; conversely, if the number of observed satellites is below the threshold, use GNSS/SINS/vision jointly after satellite screening; if there are no visible satellites, or no satellites remain after screening, position using only the visual observations in tight integration with SINS.
Further, the specific method of step 1 of the invention is:
Before image acquisition, the camera intrinsics are calibrated with Zhang Zhengyou's camera calibration method using the MATLAB calibration toolbox, obtaining the focal length, principal point offset, and radial and tangential distortion parameters; after each image is acquired in real time, distortion correction is carried out using the calibrated intrinsics and distortion parameters.
The corrected image is first convolved with a CNN to obtain the feature map of the last convolutional layer; a pyramid pooling module then obtains features at 4 different scales, and a 1×1 convolution is applied to each pyramid level to reduce the dimension and maintain the global feature weights; bilinear-interpolation upsampling forms a feature representation containing both local and global information, which is finally fed back to the convolutional layers to realize pixel-level semantic segmentation. After the semantic segmentation result of the image is obtained, the pixel blocks belonging to dynamic objects are marked.
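As an illustration of the dynamic-object marking described above, the following sketch builds a boolean mask from a semantic-segmentation label map. The class ids and the 4×4 toy label map are assumptions for illustration; the patent does not fix a label convention.

```python
import numpy as np

# Hypothetical label ids for dynamic classes (e.g. pedestrian, vehicle);
# these values are illustrative, not from the patent.
DYNAMIC_CLASS_IDS = (11, 13)

def dynamic_mask(label_map, dynamic_ids=DYNAMIC_CLASS_IDS):
    """Return a boolean mask that is True on pixel blocks whose semantic
    label belongs to a dynamic object."""
    return np.isin(label_map, dynamic_ids)

# Toy 4x4 segmentation result: a "vehicle" (label 13) in the lower-right block.
labels = np.zeros((4, 4), dtype=np.int32)
labels[2:, 2:] = 13
mask = dynamic_mask(labels)
```

In a real pipeline the label map would be the pixel-level output of the segmentation network applied to the distortion-corrected image.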
Further, the specific method of feature point extraction in step 2 of the invention is:
Step 2.1, SIFT feature extraction: extract feature points using the SIFT operator, with the SIFT extraction accelerated by GPU.
Step 2.2, dynamic-area feature point rejection: check each extracted SIFT feature point against the marked regions in turn; if a feature point falls into a marked dynamic region, delete it.
Step 2.3, feature point selection: sort the cleaned feature points of each grid by feature response in descending order, and take the two points with the largest responses as the final candidate feature points; if a grid contains fewer than two feature points, select all of them; after the candidate feature points of each grid have been screened, aggregate the feature points of all grids.
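The per-grid screening of steps 2.2 and 2.3 can be sketched as follows, using synthetic keypoint tuples in place of real GPU-SIFT output; the grid size and keypoint values are illustrative assumptions.

```python
import numpy as np

def select_grid_features(keypoints, mask, grid=(2, 2), top_k=2):
    """Steps 2.2-2.3: reject keypoints that fall inside the dynamic-object
    mask, then keep the top_k strongest responses in each grid cell.
    keypoints are (row, col, response) tuples."""
    rows, cols = mask.shape
    cell_h, cell_w = rows / grid[0], cols / grid[1]
    cells = {}
    for r, c, resp in keypoints:
        if mask[int(r), int(c)]:               # step 2.2: dynamic-region rejection
            continue
        key = (int(r // cell_h), int(c // cell_w))
        cells.setdefault(key, []).append((resp, r, c))
    kept = []
    for pts in cells.values():                 # step 2.3: top-k by response
        pts.sort(reverse=True)
        kept.extend((r, c, resp) for resp, r, c in pts[:top_k])
    return kept

# Toy example: a 4x4 image split into a 2x2 grid, one masked dynamic pixel.
mask = np.zeros((4, 4), dtype=bool)
mask[3, 3] = True
kps = [(0, 0, 0.9), (0, 1, 0.8), (1, 1, 0.7), (3, 3, 0.99), (2, 2, 0.5)]
selected = select_grid_features(kps, mask)
```

The gridding keeps the surviving features spatially well distributed instead of clustering in the most textured region.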
Further, the specific method of feature matching in step 3 of the invention is:
Step 3.1, building the 10-ary tree feature index: build a 10-ary tree feature index over all data of the visual feature point cloud map, establishing an L-level 10-ary tree, so that a leaf node is found in at most L searches.
Step 3.2, nearest beacon point search: first compare the descriptor of each extracted feature point with the feature vectors of all first-level nodes of the 10-ary tree index, select the node with the shortest Euclidean distance, and continue the search downward until level L; compute the Euclidean distance to all feature descriptors of the level-L child node, and take the leaf nodes with the shortest distance dmin and the second shortest distance dsmin as candidate nodes; if the shortest Euclidean distance is below the set minimum distance threshold and the match passes the ratio test on dmin/dsmin, add the feature point and beacon point to the set of candidate match pairs.
Step 3.3, RANSAC and EPnP outlier rejection: if there are no fewer than 8 candidate match pairs, reject outliers with the RANSAC+EPnP algorithm: randomly select four match pairs, compute the camera pose with EPnP, compute the reprojection error from that camera pose, label match pairs whose reprojection error is under 2 pixels as inliers, and record the inlier set; repeat this operation K times, take the largest inlier set among the K iterations as the optimal subset, and take the corresponding camera pose as the final estimated camera pose; if the geometric distance between the estimated camera position and the inertial-navigation predicted position is under 10 m, the RANSAC+EPnP algorithm is considered successful, and the match pairs outside the optimal subset are judged as outliers and removed.
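The nearest/second-nearest descriptor search with the distance and ratio test of step 3.2 can be sketched with a brute-force search standing in for the 10-ary tree index. The threshold values are illustrative assumptions; the patent only specifies that such thresholds are set.

```python
import numpy as np

def ratio_match(query, db, dist_thresh=0.7, ratio=0.8):
    """For each query descriptor, find the nearest (d_min) and second
    nearest (d_smin) database descriptors by Euclidean distance, and
    accept the pair only if d_min < dist_thresh and d_min / d_smin < ratio."""
    matches = []
    for qi, q in enumerate(query):
        d = np.linalg.norm(db - q, axis=1)
        order = np.argsort(d)
        d_min, d_smin = d[order[0]], d[order[1]]
        if d_min < dist_thresh and d_min / d_smin < ratio:
            matches.append((qi, int(order[0])))
    return matches

# Toy 2-D "descriptors": two queries against a three-beacon database.
db = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.5]])
query = np.array([[0.05, 0.0], [0.5, 0.25]])
pairs = ratio_match(query, db)
```

The tree index in step 3.1 only changes how the two nearest candidates are found (at most L descriptor comparisons per level instead of a full scan); the acceptance test is the same.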
Further, the specific method of step 4 of the invention is:
Step 4.1, construction of the resection equation: from the known positions of the environment beacon points and the pixel coordinates of the matched pixels, the position and attitude of the camera optical center are solved; the camera position and attitude are expressed in error form, and the partial derivatives of the observations with respect to the states are obtained by the chain rule, so that a resection observation equation can be constructed for each matched pair,
where [u v] are the pixel coordinates, [x y z] are the beacon point coordinates in the camera frame, P is the beacon point coordinate in the ECEF frame, the states to be estimated are the camera attitude and position errors, Pc is the coordinate of the camera optical center in the ECEF frame, the rotation matrix transforms from the ECEF frame to the camera frame, the misalignment angle and δPc are the camera attitude and position errors, and ωu,v is the noise of the pixel observation.
Step 4.2, vision/inertial fusion observation equation: according to the extrinsic relationship between the inertial navigation unit and the camera, the relationship between the pixel observations and the camera position and attitude errors is converted into the relationship between the pixel observations and the inertial position and attitude errors, giving the observation equation,
where u1, ..., un are the pixel observations, the predicted pixel values are obtained by back-projection of the beacon points, εu is the pixel error, δre, δve, φ, ab, εb and δN respectively denote the inertial position, velocity and attitude errors, the accelerometer bias, the gyro bias and the double-difference ambiguity parameters, and the design matrix entries are the partial derivatives of the pixel observations with respect to the camera position and attitude errors.
Further, the specific method of step 5 of the invention is:
When the number of GNSS satellites is above the threshold and the PDOP value is below the threshold, the GNSS positioning geometry meets the requirements; in this case, the visual observation information is shielded and only the GNSS/SINS tight-integration observation update equation is used,
where the predicted station-satellite double-difference geometric ranges are computed from the SINS recursive position, the station-satellite double-difference observations are computed from the pseudoranges and carrier phases output by the base station and rover receivers, the GNSS lever-arm correction is applied in the ECEF frame, the double-difference direction cosines of each satellite form the design matrix, and εP and εL are the pseudorange and carrier phase observation noises.
When the number of GNSS satellites is below the threshold and the visual sensor successfully matches several beacon points, the visual pixel constraint observations are added to obtain the combined observation equation.
When the GNSS satellites are completely blocked but vision can successfully extract and match beacon points, only the visual pixel constraint observations are used, and the observation equation is the vision/inertial fusion observation equation obtained in step 4.2.
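The observation-selection logic of step 5 can be sketched as a simple decision function. The threshold values are illustrative assumptions; the patent leaves them unspecified.

```python
def select_fusion_mode(n_sats, pdop, n_beacons, sat_thresh=6, pdop_thresh=3.0):
    """Decision logic of step 5. sat_thresh and pdop_thresh are illustrative
    assumptions; the patent only states that such thresholds exist."""
    if n_sats > sat_thresh and pdop < pdop_thresh:
        return "GNSS/SINS"          # geometry good: shield visual feedback
    if n_sats > 0:
        # degraded GNSS: keep screened satellites, add vision if beacons matched
        return "GNSS/SINS/vision" if n_beacons >= 1 else "GNSS/SINS"
    if n_beacons >= 1:
        return "vision/SINS"        # full blockage: pixel constraints only
    return "SINS"                   # pure inertial recursion
```

The returned mode names simply label which observation blocks enter the tight-integration measurement update at that epoch.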
The beneficial effects of the invention, the GNSS/SINS/vision tight integration method supported by environment beacons, include the following. 1) The invention merely adds a visual sensor, which most vehicle carriers already have and which is inexpensive, so the method is practically deployable. 2) The invention makes full use of the complementary characteristics of the GNSS, SINS and vision positioning means: environments where GNSS is blocked often have rich visual features, and vision resection absolute positioning can greatly suppress the error divergence of the inertial navigation in all directions. 3) The invention only needs to match one landmark point to suppress the position-error drift of the inertial navigation, and can continuously and stably provide decimeter- or even centimeter-level positioning results during long GNSS outages. 4) According to the characteristics of the carrier's driving environment, the visual auxiliary constraints can be dynamically enabled or disabled, so the positioning means is flexible.
Brief description of the drawings
The present invention will be further explained below with reference to the accompanying drawings and embodiments, in which:
Fig. 1 is the general flow chart of the GNSS/SINS/vision tight integration positioning algorithm of the present example;
Fig. 2 is the SINS mechanization flow chart in the ECEF frame of the present example;
Fig. 3 is the flow chart for obtaining the vision constraint observation equation of the present example;
Fig. 4 is the schematic diagram of dynamic-interference removal from images of the present example;
Fig. 5 is the image feature extraction flow chart of the present example;
Fig. 6 is the feature matching flow chart of the present example.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and embodiments. It should be appreciated that the specific examples described herein are only used to explain the present invention and are not intended to limit it.
Since the GNSS/SINS combined system is nonlinear, the overall filter framework of the invention uses extended Kalman filtering; the general flow is shown in Fig. 1. The navigation coordinate system is the ECEF frame, and the corresponding mechanization is likewise carried out under the ECEF frame, as shown in Fig. 2. The present invention eliminates the receiver clock offset by double differencing between satellites and between stations, so the GNSS-related error state in the GNSS/SINS tight-integration model is the double-difference ambiguity; besides this, the tight-integration error states also include the navigation states (position, velocity and attitude errors) and the IMU-related error states (gyro biases and accelerometer biases). The filter uses the closed-loop correction technique and applies feedback correction to the SINS system errors. The filter equations of the tight integration are described below.
Since the ambiguity parameter δN does not change over time (assuming no cycle slips), its corresponding error state equation is:
dδN/dt = 0
According to the SINS error model in the ECEF frame and the error model of the IMU sensors, the continuous error state equation of the GNSS/SINS tight integration can be obtained as:
In the above formula, δr^e, δv^e and φ respectively denote the inertial-navigation position, velocity and attitude error vectors, and δb_g and δb_a are the gyro and accelerometer bias error vectors. C_b^e is the rotation matrix from the b frame to the e frame, δω_ib^b is the gyro instrument error term, f^b is the output value of the accelerometer, δf^b is the accelerometer instrument error term, δg^e is the gravity vector error, and ω_ie^e is the earth rotation angular velocity. The correlation times of the corresponding first-order Gauss-Markov processes also appear, and ω_g and ω_a respectively denote the driving white noise of the gyro and the accelerometer. The error state vector of the GNSS/SINS tight integration is:
δx = [δr^e  δv^e  φ  δb_g  δb_a  δN]^T
To obtain the discrete-time system error state equation, the continuous error state equation can be discretized as:
δx_{k+1} = Φ_{k+1,k} δx_k + ω_k    (4)
where Φ_{k+1,k} is the state transition matrix and ω_k is the driving white noise. The one-step state prediction can be carried out according to:
X_{k,k-1} = Φ_{k,k-1} X_{k-1},    P_{k,k-1} = Φ_{k,k-1} P_{k-1} Φ_{k,k-1}^T + Q Δt_k    (5)
where X_{k,k-1} is the predicted state, Q is the process noise, Δt_k is the time interval between two epochs, and P_{k,k-1} is the predicted variance.
The measurement equation of the tight-integration filter update can be expressed as:
δz = H δX + η    (6)
In formula (6), δz is the observation residual, H is the design matrix, and η is the observation noise. When GNSS observations are available, after data preprocessing they serve as the observation information of the filter and the measurement update is performed:
K_k = P_{k,k-1} H^T (H P_{k,k-1} H^T + R_k)^{-1},    X_k = X_{k,k-1} + K_k δz,    P_k = (I - K_k H) P_{k,k-1}    (7)
In formula (7), K_k is the gain matrix, R_k is the observation noise, and X_k and P_k are the filter state and its variance.
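The predict/update cycle of formulas (4)-(7) can be sketched in code. The following is a minimal, generic extended-Kalman-filter illustration; the state dimension, matrices and noise values are placeholders, not the actual tight-integration state of the invention.

```python
import numpy as np

def ekf_predict(x, P, Phi, Q, dt):
    """One-step prediction (formulas (4)-(5)): propagate state and covariance."""
    x_pred = Phi @ x
    P_pred = Phi @ P @ Phi.T + Q * dt
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, dz, H, R):
    """Measurement update (formulas (6)-(7)) with observation residual dz."""
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # gain K_k
    x = x_pred + K @ dz                    # corrected state
    P = (np.eye(len(x)) - K @ H) @ P_pred  # corrected covariance
    return x, P
```

Under the closed-loop correction scheme described above, the corrected error state would be fed back to the SINS mechanization after each update and then reset.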
The tight-integration Kalman filter model above is the basis of the algorithm of the invention. In the following, the key technologies of each module of the present invention and their implementation are described in detail along the technical route shown in Fig. 1.
1. Image dynamic disturbance removal
The visual feature point cloud map employed in the present invention is an environment beacon database containing only static objects such as buildings, public facilities and traffic signs; unreliable information such as pedestrians, vegetation and vehicles has been removed. Only in this way can it be guaranteed that the environment beacon points do not change significantly with time, weather and similar factors. Likewise, when processing an image, it must be guaranteed that the extracted feature points do not lie on dynamic objects such as vehicles, pedestrians, or vegetation that changes with the seasons.
Before image collection, the intrinsic parameters of the camera must be calibrated using Zhang Zhengyou's camera calibration method with the MATLAB calibration toolbox, obtaining the camera focal lengths f_x, f_y, the principal point offsets c_x, c_y, and the radial and tangential distortion parameters.
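The calibrated intrinsics and distortion parameters describe a standard pinhole model with radial and tangential (Brown-Conrady) distortion. The sketch below applies that model to a normalized image point; the coefficient values in the test are made-up examples, not calibration output.

```python
def distort(xn, yn, k1, k2, p1, p2):
    """Map an undistorted normalized point to its distorted location
    (two radial terms k1, k2 and tangential terms p1, p2)."""
    r2 = xn * xn + yn * yn
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2.0 * yn * yn) + 2.0 * p2 * xn * yn
    return xd, yd

def to_pixel(xn, yn, fx, fy, cx, cy):
    """Project normalized coordinates to pixels with intrinsics f_x, f_y, c_x, c_y."""
    return fx * xn + cx, fy * yn + cy
```

This matches the common (k1, k2, p1, p2) coefficient ordering used by calibration toolboxes; distortion correction inverts this mapping for every pixel.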
As shown in Fig. 4, after an image is acquired in real time, distortion correction is first performed using the calibrated intrinsics and distortion parameters. The corrected image is passed through CNN convolutions to obtain the feature map of the last convolutional layer; a pyramid pooling module then extracts features at 4 different scales, and a 1×1 convolution is applied to each pyramid level to reduce dimensionality and maintain the weight of the global features. Bilinear interpolation is used for upsampling to form a feature representation containing both local and global information, and this representation is finally fed back to the convolutional layer to realize pixel-level semantic segmentation. After the semantic segmentation result of the image is obtained, the pixel blocks belonging to dynamic objects (such as vehicles, pedestrians and vegetation) are marked; subsequent environment feature extraction and matching no longer consider these pixels, effectively avoiding environmental dynamic disturbance.
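The masking step can be sketched as follows, assuming a per-pixel label map produced by the semantic segmentation; the class names and data layout are illustrative assumptions, not part of the patent.

```python
# Assumed representation: seg_labels[v][u] holds the semantic class name of
# pixel (u, v); the dynamic class names below are illustrative.
DYNAMIC_CLASSES = {"vehicle", "pedestrian", "vegetation"}

def filter_dynamic(points, seg_labels):
    """Drop feature points whose pixel falls on a dynamic-object label."""
    return [(u, v) for (u, v) in points if seg_labels[v][u] not in DYNAMIC_CLASSES]
```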
2. Image feature extraction
Since the geometric distribution of the feature points directly affects the precision of image resection (the better the distribution, the higher the resection precision), the feature points of the image must be uniformly distributed. The corrected image is therefore divided into a grid, and feature extraction is then carried out on each grid cell.
The flow of the entire feature extraction is shown in Fig. 5. The corrected image is first divided into an M × N grid; the values of M and N depend on the size of the image itself, with each cell kept at roughly 10×10 pixels. After the grid is divided, the following operations are carried out on each cell in parallel:
1) SIFT feature extraction
The present invention uses the SIFT operator for feature point extraction, because the SIFT (Scale Invariant Feature Transform) operator is invariant to rotation, scaling and brightness changes, and also has a certain robustness to viewpoint changes, affine transformation and noise, which enables more reliable feature matching in the subsequent visual positioning. To speed up processing, we accelerate the SIFT feature extraction with the GPU.
2) Dynamic-area feature point rejection
In step 1, the image was semantically segmented with a deep learning method, and dynamic objects such as pedestrians, vehicles and vegetation were identified and marked. Each feature point extracted by SIFT is checked against these marks in turn; if a feature point falls in a dynamically marked region, the feature point is discarded.
3) Feature point selection
The cleaned feature points are sorted in descending order of feature response, and the two feature points with the largest and second-largest response are taken as the final candidate feature points. If a grid cell contains fewer than two feature points, all of them are selected as final candidate feature points. After the candidate feature point screening of every cell, the feature points of all cells are pooled together.
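The per-grid screening rule above (keep at most the two strongest responses in each roughly 10×10-pixel cell) can be sketched as follows; the function name and tuple layout are assumptions for illustration.

```python
from collections import defaultdict

def select_per_grid(features, cell=10):
    """features: (u, v, response) tuples; keep the <=2 strongest per cell."""
    cells = defaultdict(list)
    for u, v, resp in features:
        cells[(u // cell, v // cell)].append((u, v, resp))
    kept = []
    for pts in cells.values():
        pts.sort(key=lambda p: p[2], reverse=True)  # strongest response first
        kept.extend(pts[:2])                        # top two, or all if fewer
    return kept
```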
3. Feature matching
A visual feature point cloud map generally contains millions or even tens of millions of feature points and their descriptors; if every feature match required comparison with these tens of millions of points one by one, it would be very time-consuming. The present invention therefore uses a bag-of-words vocabulary, a data structure built by clustering and partitioning the point cloud feature descriptors, to assist matching; in essence it is a 10-ary tree clustered according to the feature descriptors. Each node of the 10-ary tree is a 128-dimensional SIFT feature descriptor obtained by the K-means clustering algorithm. With the aid of the 10-ary tree, matching can be greatly accelerated. The key steps of the entire feature matching are as follows:
1) Building the 10-ary tree feature index
A 10-ary tree feature index is built over all the data of the visual feature point cloud map. The feature vector of the single node at layer 0 is simply the mean of all point cloud feature vectors in the map, so this layer can be ignored. Starting from layer 1, the K-means clustering algorithm divides the data into 10 classes according to the Euclidean distance of the feature vectors, and the center (feature mean) of each class is the feature vector of that node; each of the 10 classes is again divided into 10 classes with the K-means clustering algorithm, continuing until no further division is possible. In total, an L-layer 10-ary tree (ignoring layer 0) is established, and a leaf node is found after at most L searches.
2) Nearest beacon point search
The feature descriptor of each extracted feature point is first compared with the feature vectors of all layer-1 nodes of the 10-ary tree feature index, and the node with the shortest Euclidean distance is selected to continue the search downward, until layer L. Euclidean distances are then computed against all feature descriptors of the layer-L child nodes to find the leaf nodes with the shortest distance d_min and the second-shortest distance d_smin as candidate nodes. If the shortest Euclidean distance is less than the set minimum distance threshold and the ratio test d_min/d_smin < threshold is passed, the feature point and the beacon point are added as a candidate match pair.
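The acceptance rule for a candidate pair (absolute distance threshold plus the d_min/d_smin ratio test) can be sketched with a brute-force nearest-neighbor search standing in for the 10-ary tree descent; the threshold values below are assumed, not specified by the patent.

```python
import math

def match(desc, beacon_descs, d_max=0.7, rho=0.8):
    """Return the index of the accepted nearest beacon descriptor, or None."""
    if not beacon_descs:
        return None
    dists = sorted((math.dist(desc, b), i) for i, b in enumerate(beacon_descs))
    d_min, best = dists[0]
    d_smin = dists[1][0] if len(dists) > 1 else float("inf")
    # absolute threshold plus ratio test d_min / d_smin < rho
    if d_min < d_max and d_min / d_smin < rho:
        return best
    return None
```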
3) RANSAC+EPnP outlier rejection
Given n three-dimensional point coordinates and their corresponding pixel coordinates, the pose of the camera can be solved when the camera intrinsics are known; this problem is commonly called the PnP (Perspective-n-Point) problem, and EPnP is currently the most effective PnP solution. The RANSAC algorithm can be used to obtain valid sample data from a sample set containing abnormal data. Combining the two can effectively identify and remove the partially mismatched point pairs.
Since the RANSAC algorithm places certain requirements on the sample size, this operation is skipped when there are fewer than 8 candidate match pairs; otherwise, outliers are rejected with the RANSAC+EPnP algorithm. The concrete operation, shown in Fig. 6, is as follows: randomly select four match pairs, compute the camera pose with EPnP, compute the back-projection error according to the camera pose, label the match pairs whose back-projection error is less than 2 pixels as inliers, and record the inlier subset. The above operation is cycled K times, and of the K cycles the inlier subset containing the most points is taken as the optimal subset; the camera pose corresponding to the optimal subset is taken as the finally estimated camera pose. If the geometric distance between the estimated camera position and the inertial-navigation predicted position is less than 10 m, RANSAC+EPnP is deemed successful, and the match pairs outside the optimal subset are judged to be outliers and rejected.
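The loop just described is a standard RANSAC skeleton. In the sketch below, `solve_pose` and `reproj_error` are hypothetical stand-ins for EPnP and the camera back-projection model; the thresholds follow the text (skip below 8 candidate pairs, 4-point minimal samples, 2-pixel inlier gate).

```python
import random

def ransac(pairs, solve_pose, reproj_error, iters=100, thresh=2.0, sample_size=4):
    """Return (best_pose, inliers); with <8 candidate pairs the step is skipped."""
    if len(pairs) < 8:
        return None, pairs
    best_pose, best_inliers = None, []
    for _ in range(iters):
        sample = random.sample(pairs, sample_size)   # minimal 4-point sample
        pose = solve_pose(sample)                    # stand-in for EPnP
        inliers = [p for p in pairs if reproj_error(pose, p) < thresh]
        if len(inliers) > len(best_inliers):
            best_pose, best_inliers = pose, inliers
    return best_pose, best_inliers
```

The final 10 m plausibility check against the inertial-navigation predicted position would then be applied to the returned pose.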
4. Vision/inertia fusion positioning under environment beacon support
Through steps 1, 2 and 3, reliable match pairs between image pixels and environment beacon points have been obtained, from which the position and attitude of the camera can be inverted according to the resection principle. Since the nonlinearity of the visual collinearity equation is large, an accurate initial value is required before a least-squares solution can be carried out. Without the aid of any external information, at least 4 valid match pairs are needed, with the initial position provided by EPnP; with the aid of the inertial-navigation prediction, however, only 1 match pair is needed for auxiliary positioning. The detailed process is as follows:
1) Construction of the resection equation
Given the position of an environment beacon point and the pixel coordinates of its matched pixel, the position and attitude of the camera optical center are sought. The fundamental equation is the pinhole projection:
[x y z]^T = C_e^c (P - P_c),    u = f_x x/z + c_x,    v = f_y y/z + c_y    (8)
where [u v] are the pixel coordinates, [x y z] are the beacon point coordinates in the camera frame, P is the beacon point coordinate in the ECEF frame, the states to be estimated are P_c and C_e^c, P_c is the coordinate of the camera optical center in the ECEF frame, and C_e^c is the rotation matrix from the ECEF frame to the camera coordinate frame. Since a rotation matrix cannot be directly added, subtracted or differentiated, it is expressed in the form of a misalignment angle φ_c, that is:
The camera position and attitude of formula (8) are expressed in error form as follows:
In the above formula, the observation values of the camera attitude and position appear together with the position error δP_c. Taking the partial derivative of the observation values with respect to the states by the chain rule yields the following formula:
A resection observation equation can be constructed for each match pair; arranged in matrix form it is as follows:
where ω_{u,v} is the noise of the pixel observation value.
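The residual that drives the resection observation equation can be sketched directly from the pinhole relation: transform the beacon's ECEF coordinate into the camera frame, project it, and subtract it from the observed pixel. The frame conventions and numbers here are illustrative.

```python
import numpy as np

def reprojection_residual(uv_obs, P_beacon, P_cam, R_ce, fx, fy, cx, cy):
    """R_ce rotates ECEF into the camera frame; returns the (du, dv) residual."""
    x, y, z = R_ce @ (P_beacon - P_cam)  # beacon in camera coordinates
    u = fx * x / z + cx                  # pinhole projection
    v = fy * y / z + cy
    return uv_obs[0] - u, uv_obs[1] - v
```

Linearizing this residual with respect to the camera (or, via the extrinsics, the inertial-navigation) pose gives the design-matrix rows used by the filter.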
2) Vision/inertial-navigation fusion observation equation
Since the camera pose and the inertial-navigation pose can be linked through the extrinsic parameters, adding vision to the GNSS/SINS tight integration does not introduce additional states. The filter states therefore remain the navigation parameters established by the SINS error model (position, velocity and attitude), the SINS system errors, and the GNSS-related parameters. It is only necessary, according to the extrinsic-parameter relationship between the inertial navigation and the camera, to convert the relationship between the pixel observation value and the camera pose error expressed in formula (12) into the relationship between the pixel observation value and the inertial-navigation pose error.
After spatial calibration, the relationship between the inertial navigation center and the camera optical center can be expressed as:
P_c = P_SINS + C_b^e l^b,    C_c^e = C_b^e C_c^b    (13)
In the formula, P_SINS is the coordinate of the inertial navigation center in the ECEF frame, C_b^e is the rotation matrix from the inertial-navigation (body) frame to the ECEF frame, l^b is the lever-arm component of the camera optical center in the inertial-navigation frame, C_c^e is the rotation matrix from the camera coordinate frame to the ECEF frame, and C_c^b is the rotation matrix from the camera coordinate frame to the inertial-navigation frame.
According to formula (13), formula (10) can be converted as follows:
And because C_e^c can be expressed as:
the partial derivative of p with respect to the inertial-navigation pose is then available:
Formula (12) can therefore be converted as:
Its final complete observation equation is:
In the formula, u_1, …, u_n are the pixel observation values, the back-projected terms are the pixel values obtained by back-projecting the beacon points, i.e. the projected quantities in formula (17), and ε_u is the pixel error.
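The extrinsic relation of formula (13) is a one-line computation; the sketch below assumes the lever arm l^b and the rotation matrices are known from calibration (the values in the test are placeholders).

```python
import numpy as np

def camera_center_ecef(P_sins, C_be, l_b):
    """P_c = P_SINS + C_b^e l^b (lever arm expressed in the body frame)."""
    return P_sins + C_be @ l_b

def camera_rotation_ecef(C_be, C_cb):
    """C_c^e = C_b^e C_c^b (chain camera->body and body->ECEF rotations)."""
    return C_be @ C_cb
```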
5. GNSS/SINS/vision tight integration
In the above steps we have obtained, under the support of environment beacons, the positioning observation equation of the vision/inertial-navigation fusion, as in formula (18).
When the number of GNSS satellites is large and the PDOP value is small, the GNSS positioning geometry meets the requirements; in this case the visual observation information is masked and only the GNSS/SINS tight integration is used, with the following observation update equation:
In the above formula, the predicted station-satellite double-difference geometric distances are computed using the SINS-recursed position, while the station-satellite double-difference observations are computed from the pseudoranges and carrier phases output by the base-station and rover receivers. The equation further contains the GNSS lever-arm correction under the ECEF frame and the double-difference direction cosines of each satellite; ε_P and ε_L are the observation noises of the pseudorange and the carrier phase.
When the number of GNSS satellites is small and the visual sensor successfully matches several beacon points, the vision pixel-constraint observations are added on the basis of the observation update equation above, giving the following observation equation:
When the GNSS satellites are completely blocked but vision successfully extracts and matches beacon points, only the pixel-constraint observations of vision are used, with the observation equation as in formula (18).
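The three-branch measurement selection of this section can be sketched as simple gating logic; the satellite-count and PDOP threshold values are assumed for illustration, since the text only states "greater/less than a threshold".

```python
def select_observations(n_sats, pdop, n_beacons, sat_min=5, pdop_max=3.0):
    """Decide which observation sets enter the tight-integration filter."""
    if n_sats >= sat_min and pdop <= pdop_max:
        return ["gnss"]        # good geometry: mask vision, GNSS/SINS only
    obs = []
    if n_sats > 0:
        obs.append("gnss")     # keep the remaining (screened) satellites
    if n_beacons >= 1:
        obs.append("vision")   # add pixel-constraint observations
    return obs                 # empty list: pure SINS recursion
```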
It should be understood that those of ordinary skill in the art can make modifications or variations according to the above description, and all such modifications and variations shall fall within the protection scope of the appended claims of the present invention.
Claims (7)
1. A GNSS/SINS/vision tight integration method under environment beacon support, characterized in that, in the method, images are acquired by a visual sensor and a visual feature point cloud map library containing feature descriptors is generated offline as a priori environment guidance information; when the carrier performs real-time navigation and positioning solutions, differential GNSS/SINS tight integration is taken as the basis; when the GNSS signal is blocked, images are taken by the visual sensor installed on the carrier, dynamic-disturbance removal and feature extraction are performed on the images, and the extracted features are matched against the environment beacons in the visual feature point cloud map; and when 1 or more landmark points are matched, the resection equation formed by the matched pixels and beacon points is added into the tight integration as observation information to provide auxiliary positioning for the SINS.
2. The GNSS/SINS/vision tight integration method under environment beacon support according to claim 1, characterized in that the method specifically comprises the following steps:
Step 1: preprocessing the image acquired in real time to obtain a distortion-corrected image, constructing a convolutional neural network by deep learning to perform semantic segmentation, and marking the pixel blocks corresponding to dynamic objects;
Step 2: dividing the image into a grid, extracting feature points in each cell with a GPU-accelerated SIFT feature operator, judging whether each feature point falls into a marked dynamic-object region and, if so, deleting the feature point, sorting the remaining feature points by response intensity, and taking the two feature points of highest response intensity in each cell;
Step 3: storing the feature point cloud map with the aid of a feature 10-ary tree to accelerate feature matching, obtaining preliminary matches by the Euclidean distance measure of the feature descriptors, and, when more than 4 environment beacon points are matched, eliminating outlier interference with the RANSAC algorithm and the EPnP algorithm;
Step 4: constructing a resection equation for the matched pixels and beacon points, converting the equation into a relationship between the inertial-navigation pose error and the pixel observation value according to the extrinsic-parameter relationship between the inertial navigation and the camera, and adding it into the GNSS/INS tight integration as an observation equation for auxiliary positioning;
Step 5: when the number of satellite observations is greater than a certain threshold, masking the visual observation information and directly adopting GNSS/SINS tight-integration positioning; conversely, when the number of satellite observations is less than the threshold, screening them and then solving jointly with GNSS/SINS/vision; and, when no satellite is visible or none remains after screening, performing tight-integration positioning using only the visual observations and the SINS.
3. The GNSS/SINS/vision tight integration method under environment beacon support according to claim 2, characterized in that the specific method of step 1 is:
before image collection, calibrating the intrinsic parameters of the camera using Zhang Zhengyou's camera calibration method with the MATLAB calibration toolbox to obtain the focal lengths, principal point offsets, and radial and tangential distortion parameters of the camera; after the image is acquired in real time, performing distortion correction using the calibrated intrinsics and distortion parameters;
passing the corrected image through CNN convolutions to obtain the feature map of the last convolutional layer, obtaining feature extraction at 4 different scales with a pyramid pooling module, applying a 1×1 convolution to each pyramid level to reduce dimensionality and maintain the weight of the global features, upsampling with bilinear interpolation to form a feature representation containing local and global information, and finally feeding the representation back to the convolutional layer to realize pixel-level semantic segmentation; and, after the semantic segmentation result of the image is obtained, marking the pixel blocks belonging to dynamic objects.
4. The GNSS/SINS/vision tight integration method under environment beacon support according to claim 2, characterized in that the specific method of feature point extraction in step 2 is:
Step 2.1, SIFT feature extraction: extracting feature points with the SIFT operator, and accelerating the SIFT feature extraction with the GPU;
Step 2.2, dynamic-area feature point rejection: checking each feature point extracted by SIFT against the marks in turn; if a feature point falls in a dynamically marked region, discarding the feature point;
Step 2.3, feature point selection: sorting the cleaned feature points in descending order of feature response, and taking the two feature points of largest and second-largest response as the final candidate feature points; if a grid cell contains fewer than two feature points, selecting all of them as final candidate feature points; and, after the candidate feature point screening of every cell, pooling the feature points of all cells.
5. The GNSS/SINS/vision tight integration method under environment beacon support according to claim 2, characterized in that the method of feature matching in step 3 is specifically:
Step 3.1, building the 10-ary tree feature index: building a 10-ary tree feature index over all data of the visual feature point cloud map, establishing an L-layer 10-ary tree in which a leaf node is found after at most L searches;
Step 3.2, nearest beacon point search: first comparing the feature descriptor of each extracted feature point with the feature vectors of all layer-1 nodes of the 10-ary tree feature index, and selecting the node with the shortest Euclidean distance to continue searching downward, until layer L; computing Euclidean distances against all feature descriptors of the layer-L child nodes to find the leaf nodes of shortest distance d_min and second-shortest distance d_smin as candidate nodes; and, if the shortest Euclidean distance is less than the set minimum distance threshold and the ratio test is passed, adding the feature point and the beacon point as a candidate match pair;
Step 3.3, RANSAC algorithm and EPnP algorithm outlier rejection: if there are no fewer than 8 candidate match pairs, rejecting outliers with the RANSAC+EPnP algorithm: randomly selecting four match pairs, computing the camera pose with EPnP, computing the back-projection error according to the camera pose, labeling the match pairs whose back-projection error is less than 2 pixels as inliers, and recording the inlier subset; cycling the above operation K times, taking the inlier subset of the K cycles that contains the most points as the optimal subset, and taking the camera pose corresponding to the optimal subset as the finally estimated camera pose; and, if the geometric distance between the estimated camera position and the inertial-navigation predicted position is less than 10 m, deeming the RANSAC+EPnP algorithm successful, judging the match pairs outside the optimal subset to be outliers, and rejecting them.
6. The GNSS/SINS/vision tight integration method under environment beacon support according to claim 2, characterized in that the specific method of step 4 is:
Step 4.1, construction of the resection equation: given the position of an environment beacon point and the pixel coordinates of its matched pixel, solving for the position and attitude of the camera optical center; expressing the camera position and attitude of the formula in error form, and taking the partial derivative of the observation values with respect to the states by the chain rule, so that a resection observation equation can be constructed for each match pair:
where [u v] are the pixel coordinates, [x y z] are the beacon point coordinates in the camera frame, P is the beacon point coordinate in the ECEF frame, the states to be estimated are P_c and C_e^c, P_c is the coordinate of the camera optical center in the ECEF frame, C_e^c is the rotation matrix from the ECEF frame to the camera coordinate frame, the misalignment-angle form is used for the observation values of the camera attitude and position, δP_c is the position error, and ω_{u,v} is the noise of the pixel observation value;
Step 4.2, vision/inertial-navigation fusion observation equation: according to the extrinsic-parameter relationship between the inertial navigation and the camera, converting the relationship between the pixel observation value and the camera pose error into the relationship between the pixel observation value and the inertial-navigation pose error, obtaining the observation equation:
where u_1, …, u_n are the pixel observation values, the back-projected terms are the pixel values obtained by back-projecting the beacon points, ε_u is the pixel error, δr^e, δv^e, φ, a_b, ε_b and δN respectively denote the inertial-navigation position, velocity and attitude errors, the accelerometer bias, the gyro bias and the double-difference ambiguity parameter, and the Jacobian term denotes the partial derivative of the pixel observation value with respect to the camera pose error.
7. The GNSS/SINS/vision tight integration method under environment beacon support according to claim 6, characterized in that the specific method of step 5 is:
when the number of GNSS satellites is greater than a threshold and the PDOP value is less than a threshold, the GNSS positioning geometry meets the requirements; in this case, masking the visual observation information and using only the GNSS/SINS tight integration, with the observation update equation:
wherein the predicted station-satellite double-difference geometric distances are computed using the SINS-recursed position; the station-satellite double-difference observations are those computed from the pseudoranges and carrier phases output by the base-station and rover receivers; the equation further contains the GNSS lever-arm correction under the ECEF frame and the double-difference direction cosines of each satellite; and ε_P and ε_L are the observation noises of the pseudorange and the carrier phase;
when the number of GNSS satellites is less than the threshold and the visual sensor successfully matches several beacon points, adding the vision pixel-constraint observations to obtain the observation equation:
and, when the GNSS satellites are completely blocked but vision successfully extracts and matches beacon points, using only the pixel-constraint observations of vision, the observation equation being the vision/inertial-navigation fusion observation equation obtained in step 4.2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910659046.7A CN110412635B (en) | 2019-07-22 | 2019-07-22 | GNSS/SINS/visual tight combination method under environment beacon support |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110412635A true CN110412635A (en) | 2019-11-05 |
CN110412635B CN110412635B (en) | 2023-11-24 |
Family
ID=68362204
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910659046.7A Active CN110412635B (en) | 2019-07-22 | 2019-07-22 | GNSS/SINS/visual tight combination method under environment beacon support |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110412635B (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110871824A (en) * | 2019-11-22 | 2020-03-10 | 武汉纵横天地空间信息技术有限公司 | Method and system for monitoring surrounding environment of track |
CN111274343A (en) * | 2020-01-20 | 2020-06-12 | 北京百度网讯科技有限公司 | Vehicle positioning method and device, electronic equipment and storage medium |
CN111339483A (en) * | 2020-01-19 | 2020-06-26 | 武汉大学 | GNSS image generation method based on trend-removing cross-correlation analysis |
CN111340952A (en) * | 2020-05-19 | 2020-06-26 | 北京数字绿土科技有限公司 | Method and device for mapping mobile measurement unlocking region |
CN111735451A (en) * | 2020-04-16 | 2020-10-02 | 中国北方车辆研究所 | Point cloud matching high-precision positioning method based on multi-source prior information |
CN111854678A (en) * | 2020-07-17 | 2020-10-30 | 浙江工业大学 | Pose measurement method based on semantic segmentation and Kalman filtering under monocular vision |
CN112363193A (en) * | 2020-11-17 | 2021-02-12 | 西安恒图智源信息科技有限责任公司 | High-precision positioning system based on residual modeling and GNSS combined inertial navigation system |
CN112946697A (en) * | 2021-01-29 | 2021-06-11 | 合肥工业大学智能制造技术研究院 | Satellite signal cycle slip detection and restoration method based on deep learning |
CN113111973A (en) * | 2021-05-10 | 2021-07-13 | 北京华捷艾米科技有限公司 | Depth camera-based dynamic scene processing method and device |
CN113255600A (en) * | 2021-06-29 | 2021-08-13 | 上海影创信息科技有限公司 | Point cloud map updating optimization method, system, medium and equipment based on client |
CN113433576A (en) * | 2021-06-28 | 2021-09-24 | 中国科学院国家授时中心 | GNSS and V-SLAM fusion positioning method and system |
CN114199259A (en) * | 2022-02-21 | 2022-03-18 | 南京航空航天大学 | Multi-source fusion navigation positioning method based on motion state and environment perception |
CN114396943A (en) * | 2022-01-12 | 2022-04-26 | 国家电网有限公司 | Fusion positioning method and terminal |
CN115661453A (en) * | 2022-10-25 | 2023-01-31 | 腾晖科技建筑智能(深圳)有限公司 | Tower crane hanging object detection and segmentation method and system based on downward viewing angle camera |
CN117268373A (en) * | 2023-11-21 | 2023-12-22 | 武汉大学 | Autonomous navigation method and system for multi-sensor information fusion |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103292804A (en) * | 2013-05-27 | 2013-09-11 | 浙江大学 | Monocular natural vision landmark assisted mobile robot positioning method |
CN103424114A (en) * | 2012-05-22 | 2013-12-04 | 同济大学 | Visual navigation/inertial navigation full combination method |
CN103644904A (en) * | 2013-12-17 | 2014-03-19 | 上海电机学院 | Visual navigation method based on SIFT (scale invariant feature transform) algorithm |
CN106780699A (en) * | 2017-01-09 | 2017-05-31 | 东南大学 | A kind of vision SLAM methods aided in based on SINS/GPS and odometer |
CN109991636A (en) * | 2019-03-25 | 2019-07-09 | 启明信息技术股份有限公司 | Map constructing method and system based on GPS, IMU and binocular vision |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103424114A (en) * | 2012-05-22 | 2013-12-04 | 同济大学 | Visual navigation/inertial navigation full combination method |
CN103292804A (en) * | 2013-05-27 | 2013-09-11 | 浙江大学 | Monocular natural vision landmark assisted mobile robot positioning method |
CN103644904A (en) * | 2013-12-17 | 2014-03-19 | 上海电机学院 | Visual navigation method based on SIFT (scale invariant feature transform) algorithm |
CN106780699A (en) * | 2017-01-09 | 2017-05-31 | 东南大学 | Vision SLAM method aided by SINS/GPS and odometer |
CN109991636A (en) * | 2019-03-25 | 2019-07-09 | 启明信息技术股份有限公司 | Map constructing method and system based on GPS, IMU and binocular vision |
Non-Patent Citations (3)
Title |
---|
SHUNSUKE KAMIJO et al.: "GNSS/INS/On-Board Camera Integration for Vehicle Self-Localization in Urban Canyon", 2015 IEEE 18th International Conference on Intelligent Transportation Systems * |
TUAN LI et al.: "Tight Fusion of a Monocular Camera, MEMS-IMU, and Single-Frequency Multi-GNSS RTK for Precise Navigation in GNSS-Challenged Environments", Remote Sensing * |
XIAOHONG ZHANG et al.: "New optimal smoothing scheme for improving relative and absolute accuracy of tightly coupled GNSS/SINS integration", Springer * |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110871824A (en) * | 2019-11-22 | 2020-03-10 | 武汉纵横天地空间信息技术有限公司 | Method and system for monitoring surrounding environment of track |
CN111339483A (en) * | 2020-01-19 | 2020-06-26 | 武汉大学 | GNSS image generation method based on trend-removing cross-correlation analysis |
CN111339483B (en) * | 2020-01-19 | 2022-03-11 | 武汉大学 | GNSS image generation method based on trend-removing cross-correlation analysis |
CN111274343A (en) * | 2020-01-20 | 2020-06-12 | 北京百度网讯科技有限公司 | Vehicle positioning method and device, electronic equipment and storage medium |
CN111274343B (en) * | 2020-01-20 | 2023-11-24 | 阿波罗智能技术(北京)有限公司 | Vehicle positioning method and device, electronic equipment and storage medium |
CN111735451B (en) * | 2020-04-16 | 2022-06-07 | 中国北方车辆研究所 | Point cloud matching high-precision positioning method based on multi-source prior information |
CN111735451A (en) * | 2020-04-16 | 2020-10-02 | 中国北方车辆研究所 | Point cloud matching high-precision positioning method based on multi-source prior information |
CN111340952B (en) * | 2020-05-19 | 2020-09-04 | 北京数字绿土科技有限公司 | Method and device for mapping mobile measurement unlocking region |
CN111340952A (en) * | 2020-05-19 | 2020-06-26 | 北京数字绿土科技有限公司 | Method and device for mapping mobile measurement unlocking region |
CN111854678B (en) * | 2020-07-17 | 2022-02-15 | 浙江工业大学 | Pose measurement method based on semantic segmentation and Kalman filtering under monocular vision |
CN111854678A (en) * | 2020-07-17 | 2020-10-30 | 浙江工业大学 | Pose measurement method based on semantic segmentation and Kalman filtering under monocular vision |
CN112363193B (en) * | 2020-11-17 | 2023-11-24 | 西安恒图智源信息科技有限责任公司 | High-precision positioning system based on residual modeling and GNSS combined inertial navigation system |
CN112363193A (en) * | 2020-11-17 | 2021-02-12 | 西安恒图智源信息科技有限责任公司 | High-precision positioning system based on residual modeling and GNSS combined inertial navigation system |
CN112946697A (en) * | 2021-01-29 | 2021-06-11 | 合肥工业大学智能制造技术研究院 | Satellite signal cycle slip detection and restoration method based on deep learning |
CN113111973A (en) * | 2021-05-10 | 2021-07-13 | 北京华捷艾米科技有限公司 | Depth camera-based dynamic scene processing method and device |
CN113433576A (en) * | 2021-06-28 | 2021-09-24 | 中国科学院国家授时中心 | GNSS and V-SLAM fusion positioning method and system |
CN113433576B (en) * | 2021-06-28 | 2023-09-01 | 中国科学院国家授时中心 | GNSS and V-SLAM fusion positioning method and system |
CN113255600B (en) * | 2021-06-29 | 2021-10-01 | 上海影创信息科技有限公司 | Point cloud map updating optimization method, system, medium and equipment based on client |
CN113255600A (en) * | 2021-06-29 | 2021-08-13 | 上海影创信息科技有限公司 | Point cloud map updating optimization method, system, medium and equipment based on client |
CN114396943A (en) * | 2022-01-12 | 2022-04-26 | 国家电网有限公司 | Fusion positioning method and terminal |
CN114199259A (en) * | 2022-02-21 | 2022-03-18 | 南京航空航天大学 | Multi-source fusion navigation positioning method based on motion state and environment perception |
CN115661453A (en) * | 2022-10-25 | 2023-01-31 | 腾晖科技建筑智能(深圳)有限公司 | Tower crane hanging object detection and segmentation method and system based on downward viewing angle camera |
CN115661453B (en) * | 2022-10-25 | 2023-08-04 | 腾晖科技建筑智能(深圳)有限公司 | Tower crane object detection and segmentation method and system based on downward view camera |
CN117268373A (en) * | 2023-11-21 | 2023-12-22 | 武汉大学 | Autonomous navigation method and system for multi-sensor information fusion |
CN117268373B (en) * | 2023-11-21 | 2024-02-13 | 武汉大学 | Autonomous navigation method and system for multi-sensor information fusion |
Also Published As
Publication number | Publication date |
---|---|
CN110412635B (en) | 2023-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110412635A (en) | GNSS/SINS/vision tight integration method supported by environment beacons | |
CN108765487B (en) | Method, device, equipment and computer readable storage medium for reconstructing three-dimensional scene | |
CN109059906B (en) | Vehicle positioning method and device, electronic equipment and storage medium | |
CN105930819A (en) | System for real-time identification of urban traffic lights based on monocular vision and an integrated GPS navigation system | |
CN110411462A (en) | GNSS/inertial/lane-line-constraint/odometer multi-source fusion method | |
CN107690840B (en) | UAV vision-aided navigation method and system | |
CN111652179A (en) | Semantic high-precision map construction and localization method based on point-line features fused with laser | |
CN114199259B (en) | Multi-source fusion navigation positioning method based on motion state and environment perception | |
CN108986037A (en) | Monocular vision odometer localization method and positioning system based on semi-direct method | |
CN109596121B (en) | Automatic target detection and space positioning method for mobile station | |
CN113358112B (en) | Map construction method and laser inertia odometer | |
CN114526745B (en) | Drawing construction method and system for tightly coupled laser radar and inertial odometer | |
US20200364883A1 (en) | Localization of a mobile unit by means of a multi-hypothesis kalman filter method | |
WO2021021862A1 (en) | Mapping and localization system for autonomous vehicles | |
CN111340855A (en) | Road moving target detection method based on track prediction | |
CN113516664A (en) | Visual SLAM method based on semantic segmentation dynamic points | |
CN108846333A (en) | Sign board landmark data collection generates and vehicle positioning method | |
Brenner et al. | Extracting landmarks for car navigation systems using existing GIS databases and laser scanning | |
CN114565674B (en) | Method and device for purely visually positioning urban structured scene of automatic driving vehicle | |
CN114019552A (en) | Bayesian multi-sensor error constraint-based location reliability optimization method | |
CN116839600A (en) | Visual mapping navigation positioning method based on lightweight point cloud map | |
CN116448100A (en) | Multi-sensor fusion type offshore unmanned ship SLAM method | |
CN108921896B (en) | Downward-vision compass integrating point and line features | |
Sun et al. | Geographic, geometrical and semantic reconstruction of urban scene from high resolution oblique aerial images. | |
CN110246165A (en) | It improves visible images and SAR image matches the method and system of Quasi velosity |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||