CN110490900A - Binocular visual positioning method and system under dynamic environment - Google Patents
- Publication number
- Publication number: CN110490900A; Application number: CN201910634021.1A
- Authority
- CN
- China
- Prior art keywords
- feature point
- pose
- point
- map
- estimation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Abstract
A binocular visual positioning method and system for dynamic environments, applied in the field of robot localization. Image feature points are extracted, and feature points on moving objects are distinguished from feature points on non-moving objects by clustering the feature points. The method effectively rejects feature points on objects that are actually moving while avoiding the rejection of feature points on stationary objects of dynamic classes, thereby retaining more usable feature points. The inter-frame pose transformation matrix obtained by an IMU pre-integration algorithm is used to screen, among the poses estimated from different feature-point sets, the one closest to the IMU data, so that outliers such as feature points belonging to moving objects are effectively rejected. A pose and depth estimation method that fuses monocular motion vision with binocular stereo vision further improves estimation accuracy.
Description
Technical field
The present invention relates to the field of robot localization, and in particular to a binocular visual positioning method and system for dynamic environments.
Background technique
Traditional GPS positioning depends strongly on satellite signals: in environments with dense buildings and severe occlusion its accuracy drops sharply, and indoors it is essentially unusable. Inertial navigation based on an inertial measurement unit (gyroscope and accelerometer) does not depend on external information; the current pose of the carrier can be obtained by integrating its inertial measurements, but the positioning error grows over time, and because of this accumulated error the long-term accuracy is poor. Vision-based localization determines the pose of a robot by analyzing image sequences. It can be used indoors, and the rich information in images can be exploited for loop-closure detection, which eliminates accumulated error and enables long-term high-accuracy positioning. In robotics, vision-based localization is mainly vision-based Simultaneous Localization and Mapping (SLAM), abbreviated visual SLAM; a visual SLAM system that does not build a map is called visual odometry, but here we refer to both as visual SLAM. Current visual SLAM is divided by sensor into monocular SLAM, binocular (stereo) SLAM and RGB-D SLAM. Monocular SLAM suffers from scale ambiguity, while RGB-D SLAM is expensive and less accurate, so we adopt binocular SLAM, which avoids the scale ambiguity of monocular SLAM at comparatively low cost. Most current SLAM localization techniques are based on the assumption of a static environment, yet real environments are mostly complex dynamic scenes, and in dynamic scenes the accuracy of SLAM algorithms built on the static assumption drops substantially. In recent years, semantic dynamic-scene SLAM (semantic SLAM for short) has been proposed: it detects object classes with dynamic attributes in the scene through deep learning or other computer-vision methods, and rejects objects of dynamic classes before performing pose estimation from the image information, thereby reducing the influence of dynamic objects on pose estimation.
This approach, however, has the following drawbacks. (1) Image detection itself has high computational complexity, while localization has strict real-time requirements; adding a deep-learning framework to a SLAM system severely degrades its real-time performance. (2) In a SLAM system, and especially in visual odometry that does not build an environment map, what degrades the accuracy of pose estimation from image information is not an object of a dynamic class but an object that is actually moving. For example, a car parked in a parking lot belongs to a dynamic class, yet it may remain in the lot for a long time; it does not harm the pose estimate of the current image and instead provides additional feature information for pose estimation. Especially in feature-sparse scenes, such temporarily static dynamic objects can supply more features and increase the accuracy of pose estimation.
Summary of the invention
The main purpose of the present invention is to provide a binocular visual positioning method and system for dynamic environments.
To achieve the above object, a first aspect of the embodiments of the present invention provides a binocular visual positioning method for dynamic environments, comprising:
obtaining the left-eye image and right-eye image under binocular camera vision, and extracting the feature points in the current frame of each image and the feature points corresponding to map points in the previous frame, a map point being a feature point that has been added to the map;
obtaining, from these feature points, the left-eye and right-eye motion-vision matched feature-point pairs, and the stereo-vision matched feature-point pairs;
screening the matched feature-point pairs shared by the stereo-vision pairs and the motion-vision pairs;
performing stereo depth estimation on each feature-point pair in the shared set, obtaining stereo depth estimates;
clustering the left-eye and right-eye motion-vision matched feature-point pairs with the K-means++ algorithm according to the optical-flow speed and rotation angle between the pairs, obtaining a clustering result, and dividing the motion-vision matched feature-point pairs into multiple classes according to the clustering result;
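As a concrete illustration of the clustering step, a minimal pure-Python sketch of K-means++ seeding followed by Lloyd iterations, applied to (optical-flow speed, rotation angle) pairs, might look as follows. The function names and the two-dimensional feature layout are illustrative assumptions, not part of the patent:

```python
import random

def kmeanspp_init(points, k, rng):
    """K-means++ seeding: each new center is drawn with probability
    proportional to squared distance from the nearest existing center."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers)
              for p in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

def kmeans(points, k, iters=20, seed=0):
    """Cluster (flow speed, rotation angle) pairs; returns centers and labels."""
    rng = random.Random(seed)
    centers = kmeanspp_init(points, k, rng)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: (p[0] - centers[j][0]) ** 2 + (p[1] - centers[j][1]) ** 2)
            clusters[i].append(p)
        # Recompute each center as its cluster mean; keep old center if empty.
        centers = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
                   if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    labels = [min(range(k),
                  key=lambda j: (p[0] - centers[j][0]) ** 2 + (p[1] - centers[j][1]) ** 2)
              for p in points]
    return centers, labels
```

Feature points on the static background share one flow speed/angle induced by camera motion and fall into one class; points on a moving object form a separate class.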
computing, with an IMU pre-integration algorithm, the IMU pose estimate for the left-eye and right-eye cameras over the period between the current frame and the previous frame, the pose estimate including the inter-frame pose between the current frame and the previous frame;
performing EPnP pose estimation and map-point estimation separately on the motion-vision matched feature-point pairs of each class, obtaining EPnP pose estimates and map-point estimates;
selecting the EPnP pose estimate closest to the IMU pose estimate as the best EPnP pose estimate and map-point estimate, and recording the valid matched feature-point pairs corresponding to the best EPnP pose estimate and map-point estimate;
selecting from the shared matched feature-point pairs those corresponding to the valid matched feature-point pairs, and fusing, by Kalman filtering, the stereo depth estimate of each such feature point with the depth value estimated by EPnP, obtaining the estimated pose and estimated map points.
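The Kalman-filter fusion of the two depth values can be illustrated with a one-dimensional update, treating the stereo-triangulated depth and the EPnP-derived depth as two noisy measurements of the same quantity. The variances and the function name are illustrative assumptions, not values from the patent:

```python
def fuse_depths(d_stereo, var_stereo, d_epnp, var_epnp):
    """One scalar Kalman update: the stereo depth acts as the prior and the
    EPnP depth as the measurement; the gain weights by inverse variance."""
    k = var_stereo / (var_stereo + var_epnp)   # Kalman gain
    d = d_stereo + k * (d_epnp - d_stereo)     # fused depth
    var = (1.0 - k) * var_stereo               # fused (reduced) variance
    return d, var
```

With equal variances the fused depth is the average, and the fused variance is halved, which is why combining the two estimators can beat either one alone.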
Further, the IMU pre-integration algorithm is defined in terms of the following quantities: the displacement increment, velocity increment and quaternion of the rotation change over the period from frame k to frame k+1, all expressed in the camera coordinate system of frame k; a_t denotes the accelerometer measurement and w_t the angular-velocity measurement; b_{a_t} and b_{w_t} denote the biases of the accelerometer and gyroscope, and n_a and n_w their noises; R_t^k denotes the rotation matrix of time t relative to the camera coordinate system of frame k, obtainable from the corresponding quaternion formula; Ω is the matrix associated with quaternion calculus; Δt_k denotes the interval from t_k to t_{k+1}; and g_w denotes the gravity vector in the world coordinate system.
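The pre-integration formulas themselves are reproduced as images in the original publication and are absent from this text. With the symbols defined above, the standard visual-inertial pre-integration increments take the following form; this is a reconstruction from the common literature, not the patent's own typesetting:

```latex
\Delta p^{k}_{k+1} = \iint_{t_k}^{t_{k+1}} R^{k}_{t}\left(a_t - b_{a_t} - n_a\right)\,\mathrm{d}t^2,
\qquad
\Delta v^{k}_{k+1} = \int_{t_k}^{t_{k+1}} R^{k}_{t}\left(a_t - b_{a_t} - n_a\right)\,\mathrm{d}t,
\qquad
\dot{q}^{k}_{t} = \tfrac{1}{2}\,\Omega\!\left(w_t - b_{w_t} - n_w\right) q^{k}_{t}.
% The gravity vector g_w then enters the relation between consecutive states,
% e.g.  p^{w}_{k+1} = p^{w}_{k} + v^{w}_{k}\,\Delta t_k
%       - \tfrac{1}{2} g_w \Delta t_k^{2} + R^{w}_{k}\,\Delta p^{k}_{k+1}.
```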
Further, after dividing the motion-vision matched feature-point pairs into multiple classes according to the clustering result, the method comprises:
computing a histogram of the number of feature points in each class;
for any class whose feature-point count is below a first predetermined number and differs from the counts of the feature points in the other classes by more than a second predetermined number, deleting that class together with the feature points in it.
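The class-rejection rule above can be sketched in a few lines. Whether the count gap must hold against every other class or just one is ambiguous in the text; this sketch assumes every other class:

```python
from collections import Counter

def filter_small_clusters(labels, min_size, min_gap):
    """Drop classes whose feature-point count is below `min_size` AND
    trails every other class's count by more than `min_gap`."""
    hist = Counter(labels)
    dropped = {
        c for c, n in hist.items()
        if n < min_size and all(m - n > min_gap for d, m in hist.items() if d != c)
    }
    return [l for l in labels if l not in dropped], dropped
```

Such tiny, isolated clusters typically come from mismatches or small fast-moving objects, so their feature points are unreliable for pose estimation.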
Further, performing EPnP pose estimation and map-point estimation separately on the motion-vision matched feature-point pairs of each class to obtain the EPnP pose estimates and map-point estimates comprises:
performing EPnP pose estimation and map-point estimation on the feature points of each class, and rejecting outliers from each class's pose estimate with the RANSAC (random sample consensus) algorithm, obtaining the EPnP pose estimate and map-point estimate corresponding to each class.
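The patent applies RANSAC inside EPnP; the consensus mechanism itself can be shown on a deliberately simple model — hypothesizing a 2D inter-frame translation from one random correspondence and keeping the hypothesis with the most inliers. This is a sketch of the mechanism, not the EPnP variant used here:

```python
import random

def ransac_translation(pairs, thresh=0.5, iters=100, seed=0):
    """Minimal RANSAC over correspondences ((x1, y1), (x2, y2)):
    hypothesize a translation from one random pair, score by consensus."""
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)
        tx, ty = x2 - x1, y2 - y1
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - tx) <= thresh
                   and abs(p[1][1] - p[0][1] - ty) <= thresh]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers
```

Correspondences on moving objects disagree with the dominant motion and end up outside the consensus set, which is exactly the outlier-rejection effect wanted per class.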
Further, after obtaining the estimated pose and estimated map points, the method comprises:
optimizing the estimated pose and estimated map points with a graph-based optimization algorithm according to the inter-frame pose from the IMU, obtaining the optimized map points.
In the graph optimization, the estimated pose transformation matrix from EPnP and the pose transformation matrix obtained by the IMU pre-integration algorithm form the constraints; z_{k+1} denotes the pixel coordinates in the camera image, P_k denotes the 3D coordinates of a map point in the camera coordinate system of frame k, and π_c(·) is the camera-model projection that transforms 3D coordinates in the camera coordinate system into 2D pixel coordinates in the camera image.
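A graph cost of the shape described — a reprojection residual on z_{k+1} and P_k plus a residual tying the optimized pose to the IMU pre-integration pose \tilde{T} — is commonly written as follows; this is a reconstruction of the general form, and the patent's exact weighting is not reproduced here:

```latex
T^{*},\,P^{*} \;=\; \arg\min_{T,\;\{P_k\}}\;
\sum_{k}\Big\| z_{k+1} - \pi_c\!\left(T\,P_k\right) \Big\|^{2}
\;+\; \Big\| \log\!\big(\tilde{T}^{-1}\,T\big)^{\vee} \Big\|^{2}
```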
Further, after optimizing the estimated pose and estimated map points to obtain the optimized map points, the method comprises:
replacing the pre-optimization map points with the optimized map points, updating the local map, and rejecting map points that do not belong to the local map;
comparing whether the number of co-visible feature points between the current frame and the second-newest keyframe is above a preset threshold;
if it is above the preset threshold, replacing the newest keyframe with the current frame;
if it is below the preset threshold, adding the current frame to the keyframe set.
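The keyframe bookkeeping described above reduces to a small decision; computing the co-visibility count is outside this sketch's scope, so it is passed in as a ratio (an illustrative assumption):

```python
def update_keyframes(keyframes, current, shared_ratio, threshold):
    """If the current frame shares enough co-visible feature points with
    the second-newest keyframe, it replaces the newest keyframe;
    otherwise it is appended as a new keyframe."""
    if shared_ratio > threshold:
        keyframes[-1] = current   # high overlap: newest keyframe is redundant
    else:
        keyframes.append(current) # low overlap: scene changed, keep a new keyframe
    return keyframes
```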
Further, obtaining the left-eye and right-eye motion-vision matched feature-point pairs from the feature points comprises:
matching, by ORB descriptors, the feature points in the current frame of the left-eye image against the feature points corresponding to map points in its previous frame, obtaining the left-eye matched feature-point pairs;
matching, by ORB descriptors, the feature points in the current frame of the right-eye image against the feature points corresponding to map points in its previous frame, obtaining the right-eye matched feature-point pairs; the left-eye and right-eye motion-vision matched feature-point pairs comprise the left-eye matched feature-point pairs and the right-eye matched feature-point pairs.
Further, obtaining the left-eye and right-eye stereo-vision matched feature-point pairs from the feature points comprises:
matching the feature points in the current frame of the left-eye image against the feature points in the current frame of the right-eye image by ORB descriptor matching and SAD stereo matching respectively, obtaining the left-eye and right-eye stereo-vision matched feature-point pairs.
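SAD stereo matching along a rectified scanline can be illustrated as follows: for a feature at column x in the left row, search for the disparity d whose right-row patch centered at x − d minimizes the sum of absolute differences. The window size and disparity range are illustrative assumptions:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length patches."""
    return sum(abs(x - y) for x, y in zip(a, b))

def sad_match(left_row, right_row, x, win=2, max_disp=10):
    """Return the disparity d minimizing SAD between the (2*win+1)-wide
    patch around column x in the left row and the patch at x-d in the right row."""
    patch = left_row[x - win: x + win + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):
        if x - d - win < 0:       # patch would fall off the image edge
            break
        cost = sad(patch, right_row[x - d - win: x - d + win + 1])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

With the cameras' baseline and focal length, the recovered disparity converts directly to the stereo depth estimate used in the fusion step.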
A second aspect of the embodiments of the present invention provides a binocular visual positioning system for dynamic environments, comprising:
a main thread, a pose-estimation thread, a loop-closure thread and a back-end optimization thread;
the main thread is used to obtain the left-eye image and right-eye image under binocular camera vision and to initialize; after successful initialization it receives restart signals sent by the pose-estimation thread or the loop-closure thread and re-initializes accordingly;
the pose-estimation thread is used to obtain the left-eye image and right-eye image under binocular camera vision, extract the feature points in the current frame of each image and the feature points corresponding to map points in the previous frame, and compute the estimated pose and estimated map points of the current frame from these feature points;
the loop-closure thread is used to detect a loop-closure frame, estimate the pose transformation matrix between the current frame and the loop-closure frame, and, taking the pose of the loop-closure frame as the reference, update the pose of the current frame by that transformation matrix;
the back-end optimization thread is used to optimize the poses of all frames according to the pose estimate of the IMU pre-integration algorithm and the pose transformation matrix between the current frame and the loop-closure frame.
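A multi-thread layout of this kind is usually decoupled with queues. The toy two-stage version below stands in for the frame → pose-estimation → back-end flow; the thread bodies are placeholders, not the real estimators:

```python
import queue
import threading

def run_system(frames):
    """Toy pipeline: a main thread feeds frames to a pose-estimation
    thread; a back-end thread consumes the estimated poses in order."""
    frame_q, pose_q, poses = queue.Queue(), queue.Queue(), []

    def pose_thread():
        while True:
            f = frame_q.get()
            if f is None:              # sentinel: shut down and propagate
                pose_q.put(None)
                return
            pose_q.put(("pose", f))    # stand-in for the real pose estimator

    def backend_thread():
        while True:
            p = pose_q.get()
            if p is None:
                return
            poses.append(p)            # stand-in for graph optimization

    t1 = threading.Thread(target=pose_thread)
    t2 = threading.Thread(target=backend_thread)
    t1.start(); t2.start()
    for f in frames:
        frame_q.put(f)
    frame_q.put(None)
    t1.join(); t2.join()
    return poses
```

Because each stage has a single consumer on a FIFO queue, frame order is preserved end to end, which matters for inter-frame pose chains.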
Further, computing the estimated pose and estimated map points of the current frame from the feature points comprises:
obtaining, from the feature points, the left-eye and right-eye motion-vision matched feature-point pairs, and the stereo-vision matched feature-point pairs;
screening the matched feature-point pairs shared by the stereo-vision pairs and the motion-vision pairs;
performing stereo depth estimation on each feature-point pair in the shared set, obtaining stereo depth estimates;
clustering the left-eye and right-eye motion-vision matched feature-point pairs with the K-means++ algorithm according to the optical-flow speed and rotation angle between the pairs, obtaining a clustering result, and dividing them into multiple classes according to the clustering result;
computing, with an IMU pre-integration algorithm, the IMU pose estimate for the left-eye and right-eye cameras over the period between the current frame and the previous frame, the pose estimate including the inter-frame pose between the current frame and the previous frame;
performing EPnP pose estimation and map-point estimation separately on the motion-vision matched feature-point pairs of each class, obtaining EPnP pose estimates and map-point estimates;
selecting the EPnP pose estimate closest to the IMU pose estimate as the best EPnP pose estimate and map-point estimate, and recording the corresponding valid matched feature-point pairs;
selecting from the shared matched feature-point pairs those corresponding to the valid matched feature-point pairs, and fusing, by Kalman filtering, the stereo depth estimate of each such feature point with the depth value estimated by EPnP, obtaining the estimated pose and estimated map points.
The present invention extracts image feature points and distinguishes feature points on moving objects from those on non-moving objects by clustering the feature points. The method effectively rejects feature points on objects that are actually moving while avoiding rejecting feature points on stationary objects of dynamic classes, thereby retaining more usable feature points. The inter-frame pose transformation matrix obtained by the IMU pre-integration algorithm is used to screen, among the poses estimated from different feature-point sets, the one closest to the IMU data, so that outliers such as feature points belonging to moving objects are effectively rejected. The pose and depth estimation method fusing monocular motion vision with binocular stereo vision effectively improves estimation accuracy.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow diagram of the binocular visual positioning method in a dynamic environment provided by an embodiment of the present invention;
Fig. 2 is the overall block diagram of the binocular visual positioning method in a dynamic environment provided by an embodiment of the present invention;
Fig. 3 is a flow diagram of the main thread of the binocular visual positioning method in a dynamic environment provided by an embodiment of the present invention;
Fig. 4 is a flow diagram of the pose-estimation thread of the binocular visual positioning method in a dynamic environment provided by an embodiment of the present invention;
Fig. 5 is a partial enlarged view of the flow diagram of the pose-estimation thread of the binocular visual positioning method in a dynamic environment provided by an embodiment of the present invention;
Fig. 6 is a partial enlarged view of the flow diagram of the pose-estimation thread of the binocular visual positioning method in a dynamic environment provided by an embodiment of the present invention;
Fig. 7 is a partial enlarged view of the flow diagram of the pose-estimation thread of the binocular visual positioning method in a dynamic environment provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of the relative speed and angle of feature-point pairs in the binocular visual positioning method in a dynamic environment provided by an embodiment of the present invention;
Fig. 9 is a flow diagram of the loop-closure thread of the binocular visual positioning method in a dynamic environment provided by an embodiment of the present invention;
Fig. 10 is a schematic diagram of loop-closure detection provided by an embodiment of the present invention;
Fig. 11 is a schematic diagram of the back-end optimization flow provided by an embodiment of the present invention;
Fig. 12 is a schematic diagram of the back-end optimization effect provided by an embodiment of the present invention.
Specific embodiments
To make the purpose, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
In this application, the method for screening moving objects is applicable not only to binocular cameras: it applies equally to monocular, multi-camera and RGB-D cameras, which can likewise achieve the purpose of the present invention. The embodiments of this application take a visual SLAM algorithm as an example, but for laser sensors the mechanism proposed by the present invention for screening moving objects from non-moving objects can also be realized in laser SLAM; clustering similar features in laser SLAM can likewise accomplish the pose estimation under dynamic environments proposed here.
Referring to Fig. 1, Fig. 1 is a flow diagram of the binocular visual positioning method in a dynamic environment provided by an embodiment of the present invention. The method mainly comprises:
S101, obtaining the left-eye image and the right-eye image under binocular camera vision;
S102, extracting the feature points in the current frame of the left-eye and right-eye images and the feature points corresponding to map points in the previous frame, a map point being a feature point that has been added to the map;
S103, obtaining, from these feature points, the left-eye and right-eye motion-vision matched feature-point pairs and the stereo-vision matched feature-point pairs;
S104, screening the matched feature-point pairs shared by the stereo-vision pairs and the motion-vision pairs;
S105, performing stereo depth estimation on each feature-point pair in the shared set, obtaining stereo depth estimates;
S106, clustering the left-eye and right-eye motion-vision matched feature-point pairs with the K-means++ algorithm according to the optical-flow speed and rotation angle between the pairs, obtaining a clustering result, and dividing them into multiple classes according to the clustering result;
S107, computing, with an IMU pre-integration algorithm, the IMU pose estimate for the left-eye and right-eye cameras over the period between the current frame and the previous frame;
S108, performing EPnP pose estimation and map-point estimation separately on the motion-vision matched feature-point pairs of each class, obtaining EPnP pose estimates and map-point estimates;
S109, selecting the EPnP pose estimate closest to the IMU pose estimate as the best EPnP pose estimate and map-point estimate, and recording the corresponding valid matched feature-point pairs;
S110, selecting from the shared matched feature-point pairs those corresponding to the valid matched feature-point pairs, and fusing, by Kalman filtering, the stereo depth estimate of each such feature point with the depth value estimated by EPnP, obtaining the estimated pose and estimated map points.
Further, the IMU pre-integration algorithm is defined in terms of the following quantities: the displacement increment, velocity increment and quaternion of the rotation change over the period from frame k to frame k+1, all expressed in the camera coordinate system of frame k; a_t denotes the accelerometer measurement and w_t the angular-velocity measurement; b_{a_t} and b_{w_t} denote the biases of the accelerometer and gyroscope, and n_a and n_w their noises; R_t^k denotes the rotation matrix of time t relative to the camera coordinate system of frame k, obtainable from the corresponding quaternion formula; Ω is the matrix associated with quaternion calculus; Δt_k denotes the interval from t_k to t_{k+1}; and g_w denotes the gravity vector in the world coordinate system.
Further, after dividing the motion-vision matched feature-point pairs into multiple classes according to the clustering result, the method comprises:
computing a histogram of the number of feature points in each class;
for any class whose feature-point count is below a first predetermined number and differs from the counts of the feature points in the other classes by more than a second predetermined number, deleting that class together with the feature points in it.
Further, performing EPnP pose estimation and map-point estimation separately on the motion-vision matched feature-point pairs of each class to obtain the EPnP pose estimates and map-point estimates comprises:
performing EPnP pose estimation and map-point estimation on the feature points of each class, and rejecting outliers from each class's pose estimate with the RANSAC (random sample consensus) algorithm, obtaining the EPnP pose estimate and map-point estimate corresponding to each class.
Further, after obtaining the estimated pose and estimated map points, the method comprises:
optimizing the estimated pose and estimated map points with a graph-based optimization algorithm according to the pose estimate of the IMU, obtaining the optimized map points.
In the graph optimization, the estimated pose transformation matrix from EPnP and the pose transformation matrix obtained by the IMU pre-integration algorithm form the constraints; z_{k+1} denotes the pixel coordinates in the camera image, P_k denotes the 3D coordinates of a map point in the camera coordinate system of frame k, and π_c(·) is the camera-model projection that transforms 3D coordinates in the camera coordinate system into 2D pixel coordinates in the camera image.
Further, after optimizing the estimated pose and estimated map points to obtain the optimized map points, the method comprises:
replacing the pre-optimization map points with the optimized map points, updating the local map, and rejecting points that do not belong to the local map;
comparing whether the number of co-visible feature points between the current frame and the second-newest keyframe is above a preset threshold;
if it is above the preset threshold, replacing the newest keyframe with the current frame;
if it is below the preset threshold, adding the current frame to the keyframe set.
Further, obtaining the left-eye and right-eye motion-vision matched feature-point pairs from the feature points comprises:
matching, by ORB descriptors, the feature points in the current frame of the left-eye image against the feature points corresponding to map points in its previous frame, obtaining the left-eye matched feature-point pairs;
matching, by ORB descriptors, the feature points in the current frame of the right-eye image against the feature points corresponding to map points in its previous frame, obtaining the right-eye matched feature-point pairs; the left-eye and right-eye motion-vision matched feature-point pairs comprise the left-eye matched feature-point pairs and the right-eye matched feature-point pairs.
Further, obtaining the left-eye and right-eye stereo-vision matched feature-point pairs from the feature points comprises:
matching the feature points in the current frame of the left-eye image against the feature points in the current frame of the right-eye image by ORB descriptor matching and SAD stereo matching respectively, obtaining the left-eye and right-eye stereo-vision matched feature-point pairs.
In the embodiments of the present invention, what is rejected is the object that is actually moving, not every dynamic-class object. Moving objects and non-moving objects are distinguished by clustering: inter-frame feature-point pairs are clustered by speed and angle, which separates the matched feature-point set of moving objects from the feature-point set of non-moving objects. The inter-frame pose transformation matrix obtained by the IMU pre-integration algorithm is used to screen, among the poses estimated from different feature-point sets, the one closest to the IMU data, so that outliers such as feature points belonging to moving objects are effectively rejected. The pose and depth estimation method fusing monocular motion vision with binocular stereo vision effectively improves estimation accuracy.
Referring to Fig. 2, Fig. 2 is a structural schematic diagram of the binocular visual positioning system under a dynamic environment provided by a further embodiment of the present invention. The system mainly includes:
a main thread, a pose estimation thread, a loop-closure thread, and a back-end optimization thread.
The main thread is used to obtain the left-camera image and right-camera image under the camera's binocular vision and to perform initialization; after successful initialization, it receives a restart signal sent by the pose estimation thread or the loop-closure thread and re-initializes according to the restart signal.
Specifically, as shown in Fig. 3, the main thread mainly opens the other threads and initializes the map, providing initial trusted data for the other threads. Its main flow and working principle are as follows:
Images are obtained from the left and right cameras of the binocular camera respectively, and FAST corner detection is used to extract the feature points of the left-camera image and right-camera image. Each image is then divided into 10 sub-image regions, and in each sub-image region the N corners with the largest Harris responses are selected, in descending order of response, as the final feature point set. The rotation direction of each feature point is also computed; it is defined as the direction vector from the geometric center of the image block centered on the feature point to its intensity centroid. The feature point descriptor is then computed, using the same binary BRIEF descriptor as ORB (Oriented FAST and Rotated BRIEF).
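The grid-based selection step above (split the image into sub-regions, keep the top-N corners per region by Harris response) can be sketched as follows. This is a minimal illustration; the function and parameter names are not from the patent.

```python
import numpy as np

def select_features_per_region(corners, responses, img_w, n_regions=10, n_per_region=20):
    """Split the image into vertical sub-regions and keep the N corners with
    the largest Harris response in each region (illustrative sketch)."""
    regions = [[] for _ in range(n_regions)]
    region_w = img_w / n_regions
    for (x, y), r in zip(corners, responses):
        idx = min(int(x // region_w), n_regions - 1)
        regions[idx].append(((x, y), r))
    selected = []
    for reg in regions:
        reg.sort(key=lambda pr: pr[1], reverse=True)   # descending Harris response
        selected.extend(p for p, _ in reg[:n_per_region])
    return selected
```

Selecting per region rather than globally keeps the feature distribution spatially even, which stabilizes the later pose estimation.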
ORB descriptor matching is first performed between the left-image feature points and the right-image feature points, then SAD (sum of absolute differences) sub-pixel matching is carried out, and valid feature points are then screened by feature-point rotation-angle consistency, so as to obtain accurately matched feature point pairs between the left and right camera images.
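A generic SAD sub-pixel refinement of this kind can be sketched as below: scan a few candidate disparities around an initial integer estimate, keep the one with minimal SAD, and fit a parabola over the three best costs for a sub-pixel offset. This is a standard SAD sketch under assumed conventions (disparity d maps left column xl to right column xl − d), not the patent's exact procedure.

```python
import numpy as np

def sad(patch_a, patch_b):
    """Sum of absolute differences between two equally sized patches."""
    return int(np.abs(patch_a.astype(np.int32) - patch_b.astype(np.int32)).sum())

def refine_disparity_sad(left, right, xl, y, d0, half=3, search=2):
    """Refine an integer disparity d0 by minimal SAD over nearby candidates,
    then apply parabolic interpolation for a sub-pixel result."""
    tpl = left[y - half:y + half + 1, xl - half:xl + half + 1]
    costs = []
    for d in range(d0 - search, d0 + search + 1):
        xr = xl - d                      # assumed disparity convention
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1]
        costs.append(sad(tpl, cand))
    i = int(np.argmin(costs))
    d_best = float(d0 - search + i)
    if 0 < i < len(costs) - 1:           # parabolic sub-pixel interpolation
        c0, c1, c2 = costs[i - 1], costs[i], costs[i + 1]
        denom = c0 - 2 * c1 + c2
        if denom != 0:
            d_best += 0.5 * (c0 - c2) / denom
    return d_best
```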
The real depth of a matched feature point is calculated using the binocular camera model z = f·b/(uL − uR), where z denotes the true coordinate of the feature point along the z-axis in the camera reference frame, f denotes the focal length of the camera, b is the camera baseline, and uL, uR are the pixel abscissas of the feature point in the left-camera image and right-camera image respectively. Whether z is positive is checked: if so, the depth estimation succeeds and the 3D coordinate of the feature point is recovered (we call this 3D coordinate a map point); otherwise the depth estimation of the feature point fails and the feature point pair is rejected.
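The depth model and the "reject when z is not positive" check can be sketched directly. The principal-point parameters cx, cy and the back-projection of x and y are standard pinhole-model additions, not spelled out in the text above.

```python
def stereo_depth(f, b, uL, uR, vL, cx, cy):
    """Recover a map point's 3D camera-frame coordinates from the stereo
    model z = f*b/(uL - uR); return None when the disparity is non-positive
    (depth estimation failed, pair rejected)."""
    disparity = uL - uR
    if disparity <= 0:
        return None
    z = f * b / disparity
    x = (uL - cx) * z / f            # pinhole back-projection (assumed)
    y = (vL - cy) * z / f
    return (x, y, z)
```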
If the number of successfully matched feature point pairs is greater than a certain threshold, initialization is deemed successful; otherwise initialization is repeated.
Once initialization is confirmed successful, the map points are added to the local map and global map classes, the current frame is added as a key frame, and the current frame is marked as the previous frame for subsequent processing. The local map is the set of map points corresponding to all matched feature points of the current image frame and of the past frames that share a co-visibility relationship with it, while the global map is the set of all past map points. The key frame set exists to keep the data set at a maintainable scale: only qualified image frames are saved locally as key frames, for use by subsequent loop-closure detection and back-end optimization.
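The local-map / global-map / key-frame bookkeeping just described can be captured in a minimal data-structure sketch. All class and field names here are illustrative assumptions, not the patent's.

```python
from dataclasses import dataclass, field

@dataclass
class MapPoint:
    xyz: tuple                 # 3D coordinate of the point
    descriptor: bytes = b""    # associated feature descriptor

@dataclass
class Maps:
    """Minimal sketch of the map containers described above."""
    local_map: list = field(default_factory=list)    # points seen by current + co-visible frames
    global_map: list = field(default_factory=list)   # all points ever created
    keyframes: list = field(default_factory=list)

    def add_points(self, points):
        self.local_map.extend(points)
        self.global_map.extend(points)

    def add_keyframe(self, frame_id):
        self.keyframes.append(frame_id)
```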
After the whole system initializes successfully, it enters a state of polling the other threads' signals; if a restart signal sent by another thread is received, the system re-enters initialization, starting again from acquiring images from the left and right cameras of the binocular camera.
The pose estimation thread is used to obtain the left-camera image and right-camera image under the camera's binocular vision, extract the feature points in the current frame of the left-camera image and right-camera image and the feature points corresponding to the map points in the previous frame, and calculate the estimated pose and estimated map points of the current frame according to these feature points.
Specifically, referring to Fig. 4, Fig. 5, Fig. 6 and Fig. 7, the workflow of the pose estimation thread is as follows:
Images are obtained from the left and right cameras of the binocular camera respectively, and FAST corner detection is used to extract the feature points of the left-camera image and right-camera image. Each image is then divided into 10 sub-image regions, and in each sub-image region the N corners with the largest Harris responses are selected, in descending order of response, as the final feature point set. The rotation direction of each feature point is computed, defined as the direction vector from the geometric center of the image block centered on the feature point to its intensity centroid. Feature point descriptors are then computed, using the same binary BRIEF descriptor as ORB (Oriented FAST and Rotated BRIEF). Acceleration a_t and angular velocity w_t are read from a preset inertial navigation module (IMU module), and the acceleration and angular velocity are filtered.
Here the IMU inertial navigation module serves as an auxiliary means for screening suitable vision positioning results under a dynamic environment. Sensors that can obtain the robot's displacement, velocity, angular velocity and angle, such as code discs, encoders and electronic compasses, can likewise serve as auxiliary means and achieve the same effect of cooperating with the vision sensor under the dynamic environment of the present invention.
ORB descriptor matching is performed between the current-frame feature points of the left and right cameras and the feature points corresponding to the map points in the previous frame; the DBOW3 bag-of-words technique is used here to accelerate feature matching, and feature-point rotation-angle consistency is then used to screen valid feature points, yielding the left camera's matched feature point pairs and the right camera's matched feature point pairs, which we collectively call the motion-vision matched feature point pairs of the left and right cameras. ORB descriptor matching and SAD stereo matching are performed between the left and right current frames to obtain more accurate left-right stereo matched feature point pairs, which we call the stereo-vision matched feature point pairs. The shared matched feature points of the motion-vision matched pairs and the stereo-vision matched pairs are then screened out into a shared matching set, and stereo depth estimation is performed on the corresponding feature points to recover their 3D coordinates.
The optical flow velocity v and rotation angle θ between the left and right cameras' motion-vision matched feature point pairs are calculated; the principle of the optical flow velocity and rotation angle, and the corresponding calculation formula, are illustrated in Fig. 8.
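The exact formula is given in the patent's Fig. 8 (not reproduced in this text); a straightforward interpretation — flow speed as pixel displacement over the inter-frame interval, and the angle as the flow direction — can be sketched as:

```python
import math

def flow_velocity_and_angle(p_prev, p_curr, dt):
    """Optical-flow velocity and direction angle of one matched feature pair
    between the previous and current frames (interpretive sketch)."""
    dx = p_curr[0] - p_prev[0]
    dy = p_curr[1] - p_prev[1]
    v = math.hypot(dx, dy) / dt      # flow speed in pixels per unit time
    theta = math.atan2(dy, dx)       # flow direction angle in radians
    return v, theta
```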
The K-means++ clustering algorithm is used to cluster the left and right cameras' motion-vision matched feature point pairs into M classes according to optical flow velocity and rotation angle. In a real environment, the velocity and direction of a moving object differ greatly from those of the background elements, so clustering can separate moving objects of different velocities from the feature points of stationary objects. After the cluster result is obtained, a histogram statistic is made of the number of feature points in each class, and classes whose counts are too small and differ greatly from the others, together with their corresponding feature point pairs, are weeded out.
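The clustering and small-class pruning can be sketched with a plain NumPy k-means (with k-means++ style seeding) over (velocity, angle) rows. This is a generic implementation for illustration; the patent does not specify its iteration count, distance metric, or pruning thresholds.

```python
import numpy as np

def kmeans_pp(features, m, iters=20, seed=0):
    """K-means with k-means++ seeding on (velocity, angle) feature rows."""
    rng = np.random.default_rng(seed)
    centers = [features[rng.integers(len(features))]]
    for _ in range(1, m):
        # next seed drawn proportionally to squared distance to nearest center
        d2 = np.min([((features - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(features[rng.choice(len(features), p=d2 / d2.sum())])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        d = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(m):
            if (labels == j).any():
                centers[j] = features[labels == j].mean(axis=0)
    return labels

def prune_small_classes(labels, min_count):
    """Histogram the class sizes and keep only points of sufficiently large
    classes, mirroring the pruning of sparse clusters described above."""
    counts = np.bincount(labels)
    keep = {j for j, c in enumerate(counts) if c >= min_count}
    return [i for i, l in enumerate(labels) if l in keep]
```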
The IMU pre-integration technique is used to calculate the inter-frame displacement of the left and right cameras over the period between the previous frame and the current frame. The core formulas of IMU pre-integration are:

α_{k+1}^{k} = ∬_{t ∈ [t_k, t_{k+1}]} R_t^{k} (a_t − b_{a_t} − n_a) dt²
β_{k+1}^{k} = ∫_{t ∈ [t_k, t_{k+1}]} R_t^{k} (a_t − b_{a_t} − n_a) dt
γ_{k+1}^{k} = ∫_{t ∈ [t_k, t_{k+1}]} ½ Ω(w_t − b_{w_t} − n_w) γ_t^{k} dt

where α_{k+1}^{k}, β_{k+1}^{k} and γ_{k+1}^{k} respectively denote, in the k-th frame camera coordinate system, the displacement increment, the velocity increment and the quaternion of the rotation change over the interval from frame k to frame k+1; a_t denotes the accelerometer measurement and w_t the angular velocity measurement; b_{a_t} and b_{w_t} denote the biases of the accelerometer and gyroscope, and n_a and n_w the respective noises; R_t^{k} denotes the rotation transformation matrix of time t relative to the k-th frame camera coordinate system and can be obtained from the formula for γ_t^{k}; Ω is the matrix involved in the quaternion calculation; Δt_k denotes the time interval from t_k to t_{k+1}; and g_w denotes the gravity vector in the world coordinate system.
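A discrete-time sketch of these pre-integration terms follows, integrating the increments α (position), β (velocity) and γ (rotation quaternion) in the body frame of frame k. For brevity this sketch ignores the bias and noise terms of the formulas above; quaternion helpers use the (w, x, y, z) convention.

```python
import numpy as np

def quat_mul(p, q):
    w1, x1, y1, z1 = p; w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z), 2*(x*z+w*y)],
                     [2*(x*y+w*z), 1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y), 2*(y*z+w*x), 1-2*(x*x+y*y)]])

def small_angle_quat(rot_vec):
    """Quaternion for a small rotation vector (gyro rate times dt)."""
    angle = np.linalg.norm(rot_vec)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = rot_vec / angle
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def preintegrate(samples, dt):
    """Integrate (accel, gyro) samples into the pre-integration terms
    alpha, beta, gamma, all expressed in the frame-k body frame.
    Biases/noise omitted for brevity."""
    alpha = np.zeros(3)
    beta = np.zeros(3)
    q = np.array([1.0, 0.0, 0.0, 0.0])           # identity rotation
    for a, w in samples:
        R = quat_to_rot(q)
        alpha += beta * dt + 0.5 * (R @ a) * dt * dt
        beta += (R @ a) * dt
        q = quat_mul(q, small_angle_quat(w * dt))
        q /= np.linalg.norm(q)
    return alpha, beta, q
```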
Using the obtained clusters of motion-vision matched feature point sets, EPNP pose estimation and map point estimation are carried out separately on the matched feature point sets of the different classes, and outlier rejection is applied to the pose estimation result of each class using the RANSAC (random sample consensus) algorithm. After the pose estimation result corresponding to each class is obtained, it is compared with the IMU pose result. In general, the short-term IMU pose estimate after filtering optimization is accurate, so the IMU inter-frame pose estimate can effectively screen the true EPNP result and exclude the pose estimation results of moving-object feature point sets, making the image pose estimation result more accurate and reducing the influence of moving objects on the estimation result. Through this screening mechanism combining IMU data, we can obtain accurate EPNP pose and map point estimation results, which we call the best EPNP pose and map point estimates; the corresponding matched feature point set is called the effectively matched motion-vision feature point pair set.
The feature points belonging to the effectively matched pair set are screened out of the shared matched feature point set; that is, the feature point pairs belonging to the feature point class corresponding to the best EPNP pose estimation result are selected. The stereo depth estimates of these feature point pairs are then fused with the depth values estimated by EPNP via Kalman filtering, so as to obtain more accurate map points.
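For a single feature point, fusing the two depth values by Kalman filtering reduces to a one-dimensional (inverse-variance) update. The variances are assumed known here; the patent does not state its noise model.

```python
def fuse_depths(z_stereo, var_stereo, z_epnp, var_epnp):
    """1-D Kalman fusion of the stereo depth estimate and the EPNP-derived
    depth for the same feature point (illustrative sketch)."""
    k = var_stereo / (var_stereo + var_epnp)   # gain toward the EPNP value
    z = z_stereo + k * (z_epnp - z_stereo)
    var = (1.0 - k) * var_stereo               # fused variance is reduced
    return z, var
```

Equal variances give the midpoint; a more trusted source (smaller variance) pulls the fused depth toward itself.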
The obtained pose and map points are optimized using a graph-based optimization algorithm, and the IMU pre-integration result is incorporated during the optimization as one of the constraint conditions, so as to achieve the fusion effect. The core formula of the optimization is:

T* = argmin_T ( Σ_k ‖ z_{k+1} − π_c(T · P_k) ‖² + ‖ log(T_IMU⁻¹ · T) ‖² )

where the estimated pose transformation matrix from the EPNP pose estimation serves as the initial value of T; T_IMU denotes the pose transformation matrix obtained by the IMU pre-integration algorithm; z_{k+1} denotes the pixel coordinates in the camera image; P_k denotes the 3D coordinates of a map point in the k-th frame camera coordinate system; and π_c(·) is the camera-model-based projective transformation that converts 3D coordinates in the camera coordinate system into 2D pixel coordinates in the camera image.
After the pose estimation is completed, the correct map points are added to the local map and the global map, and the local map is updated and replaced once, rejecting points that no longer belong to the local map. Meanwhile, whether the co-visible feature points between the current frame and the second-newest key frame exceed a threshold is checked: if above the threshold, the overlapping information between the current frame and the newest key frame is excessive, so no new key frame need be added; instead the current frame replaces the newest key frame, and the local and global map points are also partially replaced and deleted. If below the threshold, the information of the current frame differs greatly from that of the newest key frame, and the current frame is added to the key frame set to increase the information capacity about the environment.
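The key-frame maintenance rule above can be sketched as a simple replace-or-append decision. The threshold value and function signature are illustrative assumptions.

```python
def update_keyframes(keyframes, current, covis_ratio, threshold=0.9):
    """If the co-visibility ratio with the second-newest keyframe is above
    the threshold, replace the newest keyframe with the current frame;
    otherwise append the current frame as a new keyframe."""
    if covis_ratio > threshold:
        keyframes[-1] = current       # too much overlap: replace newest
    else:
        keyframes.append(current)     # enough new information: add keyframe
    return keyframes
```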
Bundle adjustment optimization and outlier rejection are carried out on the poses of the key frames that share enough co-visible feature points with the newest key frame and on the map points associated with those key frames. The previous frame and the key frames that have a co-visibility relationship with the current frame are added to the current frame's co-visibility graph, and the data graph structure is updated, before entering the next loop iteration.
The above process is carried out under the premise that there are enough matched feature points between the map points of the previous frame and the current frame. If the matched feature points are insufficient, feature points must be supplemented. There are mainly two ways of supplementing feature points: one is to search the local map for additional matchable map points; the other is to redo feature matching between the previous frame and the current frame, with the matching range no longer confined to the previous-frame feature points corresponding to map points, but with all feature points of the previous frame participating in the matching, thereby effectively making up the feature point shortfall. If the feature points then meet the demand, pose estimation continues; if not, the IMU is used to record the short-term pose transformation so as to keep the continuity of the pose trajectory, and the loop proceeds to the next round until enough feature points are found. If enough feature points are not found for some continuous period, positioning has failed; at that point the IMU is also inaccurate because of accumulated error, and the system needs to be reinitialized.
The loop-closure thread is used to detect a loop-closure frame, estimate the pose transformation matrix between the current frame and the loop-closure frame, and update the pose of the current frame by the pose transformation matrix, taking the pose of the loop-closure frame as the reference.
Specifically, referring to Fig. 9 and Fig. 10, loop-closure detection means detecting whether the robot has returned to a place it has visited before. If a return to a previously visited place is detected, a geometric constraint can be established between the current frame and the loop-closure frame, and the accumulated error between the loop-closure frame and the current frame is eliminated by optimization. Loop-closure detection is similar to the feature matching process; it mainly uses the DBOW3 bag-of-words for acceleration, detecting among all past key frames a frame with a very high matching rate with the newest key frame. Once detected, that frame is probably a loop-closure frame, and a series of screening mechanisms such as geometric consistency, group matching and temporal consistency then determine whether it is one. If a loop-closure frame is found in the key frame set, the pose transformation matrix between the current frame and the loop-closure frame is estimated once, and bundle adjustment optimization and outlier rejection are performed on this pose transformation matrix and on the map points of the current frame. Finally, taking the pose of the loop-closure frame as the reference, the pose of the current frame is updated by the pose transformation matrix between the loop-closure frame and the current frame; the poses of the co-visible frames with a high degree of co-visibility with the current frame are updated, and the map points related to the associated frames are updated. The effect of the whole loop-closure optimization is to eliminate part of the accumulated error and to establish a set of constraints that provide constraint conditions for the global consistency optimization.
The back-end optimization thread is used to optimize the poses of all frames according to the pose estimation result of the IMU pre-integration algorithm and the pose transformation matrix between the current frame and the loop-closure frame.
Specifically, referring to Fig. 11 and Fig. 12, the back-end optimization thread is the key to improving positioning accuracy and reducing global accumulated error. In the back end, the optimization module optimizes all key frames, the co-visible frames of the current frame, and the global map points. The whole optimization flow is divided into two major steps. The first step, pose-graph optimization, optimizes only the poses of the key frames: all poses are optimized by introducing the inter-frame pose constraints of IMU pre-integration and the constraint between the current frame and the loop-closure frame; the main purpose of this step is to fuse the IMU data to optimize the key frame poses. The second step, global consistency optimization, further eliminates accumulated error on the basis of the first step through loop-closure optimization, adjusting the poses of the global map points and all frames under the loop-closure constraint, thereby reducing trajectory drift and keeping global consistency.
The present invention changes the existing way of handling dynamic environments by rejecting moving objects rather than all dynamic objects. A large number of effective feature points can thus be retained: the feature points of stationary dynamic objects can be kept to improve the accuracy of pose estimation. Compared with the approach of rejecting all dynamic objects, the present invention is also better adapted to extreme cases, for example a scene containing only stationary dynamic objects: if all dynamic objects were rejected, not enough feature points could be obtained, whereas the method of rejecting only moving objects retains the feature points of stationary dynamic objects for pose estimation.
The present invention changes the existing approach of screening based on deep-learning semantic labels to cope with dynamic environments in SLAM. It proposes a method of clustering the velocities and rotation directions of inter-frame matched feature point pairs to distinguish moving objects from non-moving objects (stationary bodies and stationary dynamic objects), and uses IMU data as auxiliary constraint information to screen, among the pose estimation results of the different clustered data sets, the pose estimate closest to the IMU data as the best pose estimate, so as to reject the interference of moving objects with pose estimation in dynamic scenes. Compared with systems based on deep learning, this greatly reduces computational complexity and improves the real-time performance of the positioning algorithm.
In addition, each functional module in each embodiment of the present invention may be integrated in one processing module, or each module may exist physically alone, or two or more modules may be integrated in one module. The above integrated module may be realized in the form of hardware or in the form of a software function module.
It should be noted that, for each of the foregoing method embodiments, for simplicity of description, the method is stated as a series of action combinations; but those skilled in the art should understand that the present invention is not limited by the sequence of actions described, because according to the present invention, certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
The above is the description of the binocular visual positioning method and system under a dynamic environment provided by the present invention. For those skilled in the art, there will be changes in specific implementations and application scopes according to the ideas of the embodiments of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (10)
1. A binocular visual positioning method under a dynamic environment, characterized by comprising:
obtaining a left-camera image and a right-camera image under the camera's binocular vision, and extracting the feature points in the current frame of the left-camera image and right-camera image and the feature points corresponding to the map points in the previous frame, the map points being feature points that have been added to the map;
according to the feature points, obtaining motion-vision matched feature point pairs of the left and right cameras, and stereo-vision matched feature point pairs;
screening the shared matched feature point pairs of the stereo-vision matched feature point pairs and the motion-vision matched feature point pairs;
performing stereo depth estimation on each feature point pair in the shared matched set respectively, to obtain stereo depth estimates;
using the K-means++ clustering algorithm, clustering the motion-vision matched feature point pairs of the left and right cameras according to the optical flow velocity and rotation angle between the motion-vision matched feature point pairs, obtaining a cluster result, and dividing the motion-vision matched feature point pairs into multiple classes according to the cluster result;
using the IMU pre-integration algorithm, calculating the IMU pose estimation result of the left and right cameras over the period between the current frame and the previous frame, the pose estimation result including the inter-frame pose between the current frame and the previous frame;
performing EPNP pose estimation and map point estimation respectively on the motion-vision matched feature point pairs under each class, to obtain EPNP pose estimation results and map point estimation results;
choosing the EPNP pose estimation result most similar to the IMU pose estimation result as the best EPNP pose estimation result and map point estimation result, and recording the effectively matched feature point pairs corresponding to the best EPNP pose estimation result and map point estimation result;
selecting, from the shared matched feature point pairs, the feature point pairs corresponding to the effectively matched feature point pairs, and fusing, by Kalman filtering, the stereo depth estimates corresponding to these feature point pairs with the depth values estimated by EPNP, to obtain the estimated pose and estimated map points.
2. The binocular visual positioning method under a dynamic environment according to claim 1, characterized in that the IMU pre-integration algorithm is:

α_{k+1}^{k} = ∬_{t ∈ [t_k, t_{k+1}]} R_t^{k} (a_t − b_{a_t} − n_a) dt²
β_{k+1}^{k} = ∫_{t ∈ [t_k, t_{k+1}]} R_t^{k} (a_t − b_{a_t} − n_a) dt
γ_{k+1}^{k} = ∫_{t ∈ [t_k, t_{k+1}]} ½ Ω(w_t − b_{w_t} − n_w) γ_t^{k} dt

where α_{k+1}^{k}, β_{k+1}^{k} and γ_{k+1}^{k} respectively denote, in the k-th frame camera coordinate system, the displacement increment, the velocity increment and the quaternion of the rotation change over the interval from frame k to frame k+1; a_t denotes the accelerometer measurement and w_t the angular velocity measurement; b_{a_t} and b_{w_t} denote the biases of the accelerometer and gyroscope, and n_a and n_w the respective noises; R_t^{k} denotes the rotation transformation matrix of time t relative to the k-th frame camera coordinate system and can be obtained from the formula for γ_t^{k}; Ω is the matrix involved in the quaternion calculation; Δt_k denotes the time interval from t_k to t_{k+1}; and g_w denotes the gravity vector in the world coordinate system.
3. The binocular visual positioning method under a dynamic environment according to claim 1, characterized in that, after dividing the motion-vision matched feature point pairs into multiple classes according to the cluster result, the method comprises:
making a histogram statistic of the number of feature points under each class;
for a class whose number of corresponding feature points is less than a first predetermined number and whose difference from the numbers of feature points under the other classes is greater than a second predetermined number, deleting the class and the feature points corresponding to the class.
4. The binocular visual positioning method under a dynamic environment according to claim 3, characterized in that performing EPNP pose estimation and map point estimation respectively on the motion-vision matched feature point pairs under each class to obtain the EPNP pose estimation results and map point estimation results comprises:
performing EPNP pose estimation and map point estimation on the feature points under each class respectively, and applying outlier rejection to the pose estimation result of each class using the RANSAC (random sample consensus) algorithm, to obtain the EPNP pose estimation result and map point estimation result corresponding to each class.
5. The binocular visual positioning method under a dynamic environment according to claim 4, characterized in that, after obtaining the estimated pose and estimated map points, the method comprises:
optimizing the estimated pose and estimated map points according to the inter-frame pose of the IMU using a graph-based optimization algorithm, to obtain optimized map points;
the graph optimization algorithm being:

T* = argmin_T ( Σ_k ‖ z_{k+1} − π_c(T · P_k) ‖² + ‖ log(T_IMU⁻¹ · T) ‖² )

where the estimated pose transformation matrix from the EPNP pose estimation serves as the initial value of T; T_IMU denotes the pose transformation matrix obtained by the IMU pre-integration algorithm; z_{k+1} denotes the pixel coordinates in the camera image; P_k denotes the 3D coordinates of a map point in the k-th frame camera coordinate system; and π_c(·) is the camera-model-based projective transformation that converts 3D coordinates in the camera coordinate system into 2D pixel coordinates in the camera image.
6. The binocular visual positioning method under a dynamic environment according to claim 5, characterized in that, after optimizing the estimated pose and estimated map points to obtain the optimized map points, the method comprises:
replacing the pre-optimization map points with the optimized map points, updating the local map, and rejecting map points that do not belong to the local map;
comparing whether the co-visible feature points between the current frame and the second-newest key frame exceed a preset threshold;
if above the preset threshold, replacing the newest key frame with the current frame;
if below the preset threshold, adding the current frame to the key frame set.
7. The binocular visual positioning method under a dynamic environment according to claim 1, characterized in that obtaining the motion-vision matched feature point pairs of the left and right cameras according to the feature points comprises:
performing ORB descriptor matching between the feature points in the current frame of the left-camera image and the feature points corresponding to the map points in the previous frame, to obtain the left camera's matched feature point pairs;
performing ORB descriptor matching between the feature points in the current frame of the right-camera image and the feature points corresponding to the map points in the previous frame, to obtain the right camera's matched feature point pairs; wherein the motion-vision matched feature point pairs of the left and right cameras comprise the left camera's matched feature point pairs and the right camera's matched feature point pairs.
8. The binocular visual positioning method under a dynamic environment according to claim 1, characterized in that obtaining the stereo-vision matched feature point pairs of the left and right cameras according to the feature points comprises:
performing ORB descriptor matching and SAD stereo matching between the feature points in the current frame of the left-camera image and the feature points in the current frame of the right-camera image respectively, to obtain the stereo-vision matched feature point pairs of the left and right cameras.
9. A binocular visual positioning system under a dynamic environment, characterized by comprising:
a main thread, a pose estimation thread, a loop-closure thread and a back-end optimization thread;
the main thread being used to obtain the left-camera image and right-camera image under the camera's binocular vision and to perform initialization, and, after successful initialization, to receive a restart signal sent by the pose estimation thread or the loop-closure thread and re-initialize according to the restart signal;
the pose estimation thread being used to obtain the left-camera image and right-camera image under the camera's binocular vision, extract the feature points in the current frame of the left-camera image and right-camera image and the feature points corresponding to the map points in the previous frame, and calculate the estimated pose and estimated map points of the current frame according to the feature points;
the loop-closure thread being used to detect a loop-closure frame, estimate the pose transformation matrix between the current frame and the loop-closure frame, and update the pose of the current frame by the pose transformation matrix, taking the pose of the loop-closure frame as the reference;
the back-end optimization thread being used to optimize the poses of all frames according to the pose estimation result of the IMU pre-integration algorithm and the pose transformation matrix between the current frame and the loop-closure frame.
10. the binocular visual positioning system under dynamic environment according to claim 9, which is characterized in that described according to institute
Characteristic point is stated, the estimation pose and estimation point map for calculating present frame include:
According to the characteristic point, the left mesh and right purpose movement vision matching characteristic point pair are obtained, and, stereoscopic vision matching
Characteristic point pair;
Screen the stereoscopic vision matching characteristic point to and movement vision matching characteristic point pair shared matching characteristic point pair;
Respectively to the shared matching characteristic point to the characteristic point of concentration to three-dimensional depth estimation is carried out, obtain three-dimensional depth estimation
Value;
clustering, with the K-means++ clustering algorithm, the left-eye and right-eye motion-vision matched feature-point pairs according to the optical-flow speed and the rotation angle between the matched pairs, obtaining a clustering result, and dividing the motion-vision matched feature-point pairs into multiple categories according to the clustering result;
calculating, with the IMU pre-integration algorithm, the IMU pose estimation result of the left eye and the right eye over the period between the current frame and the previous frame, the pose estimation result comprising the inter-frame pose between the current frame and the previous frame;
performing EPnP pose estimation and map-point estimation separately on the motion-vision matched feature-point pairs of each category, obtaining EPnP pose estimation results and map-point estimation results;
choosing the EPnP pose estimation result most similar to the IMU pose estimation result as the best EPnP pose estimation result and map-point estimation result, and recording the effectively matched feature-point pairs corresponding to the best EPnP pose estimation result and map-point estimation result;
selecting, from the shared matched feature-point pairs, the feature-point pairs corresponding to the effectively matched feature-point pairs, and fusing, by Kalman filtering, the stereo depth estimates of these feature points with the depths estimated by EPnP, to obtain the estimated pose and the estimated map points.
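The stereo depth-estimation step of claim 10 reduces, for a rectified stereo pair, to triangulation from disparity. A minimal sketch (the focal length, baseline, and pixel coordinates below are placeholder values):

```python
def stereo_depth(u_left, u_right, fx, baseline):
    """Depth of a matched point from a rectified stereo pair:
    Z = fx * B / d, where d = u_left - u_right is the disparity in
    pixels, fx the focal length in pixels, B the baseline in meters."""
    disparity = u_left - u_right
    if disparity <= 0:
        return None  # point at infinity or a bad match
    return fx * baseline / disparity
```

Note that depth noise grows quadratically with depth (sigma_Z = Z^2 / (fx * B) * sigma_d), which is one reason the claim later fuses this estimate with the EPnP depth.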
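The IMU pre-integration step accumulates raw gyroscope and accelerometer samples between two frames into a single relative-motion constraint. A minimal sketch, assuming on-manifold integration via the SO(3) exponential map and omitting bias, noise, and gravity terms for brevity:

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(phi):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt):
    """Accumulate IMU samples between two frames into the relative
    rotation dR, velocity dv and position dp (biases/noise omitted)."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ expm_so3(np.asarray(w) * dt)
    return dR, dv, dp
```

The resulting (dR, dv, dp) triple is the "IMU pose estimation result" the claim compares against the per-cluster EPnP poses.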
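The clustering and selection steps group motion-matched pairs by (optical-flow speed, rotation angle), estimate a pose per cluster, and keep the cluster whose pose best agrees with the IMU prediction — in a dynamic scene the static background typically forms the winning cluster. A sketch with a minimal k-means++ and stub per-cluster poses; in practice each cluster's pose would come from an EPnP solver (e.g. OpenCV's `SOLVEPNP_EPNP`), and the distance weights are illustrative:

```python
import numpy as np

def kmeans_pp(points, k, iters=20, seed=0):
    """Minimal k-means with k-means++ seeding. `points` is an (N, 2)
    array of (optical-flow speed, rotation angle) per matched pair."""
    rng = np.random.default_rng(seed)
    centers = [points[rng.integers(len(points))]]
    for _ in range(k - 1):
        # k-means++: sample the next seed proportionally to squared distance
        d2 = np.min([((points - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):  # Lloyd iterations
        labels = ((points[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

def pose_distance(R1, t1, R2, t2, w_rot=1.0, w_trans=1.0):
    """Geodesic rotation angle plus weighted translation error."""
    c = np.clip((np.trace(R1.T @ R2) - 1.0) / 2.0, -1.0, 1.0)
    return w_rot * np.arccos(c) + w_trans * np.linalg.norm(t1 - t2)

def select_best_cluster(cluster_poses, imu_pose):
    """Return the cluster id whose (R, t) lies closest to the IMU
    pre-integration prediction; `cluster_poses` maps id -> (R, t)."""
    R_imu, t_imu = imu_pose
    return min(cluster_poses,
               key=lambda c: pose_distance(*cluster_poses[c], R_imu, t_imu))
```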
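The final step fuses, per effectively matched feature point, the triangulated stereo depth with the depth implied by the EPnP map-point estimate. For a single scalar this Kalman-filter update is equivalent to an inverse-variance weighted mean (the variances below are illustrative inputs, not values from the patent):

```python
def fuse_depths(z_stereo, var_stereo, z_epnp, var_epnp):
    """Scalar Kalman measurement update, treating the stereo depth as
    the prior and the EPnP depth as the measurement."""
    gain = var_stereo / (var_stereo + var_epnp)  # Kalman gain
    z = z_stereo + gain * (z_epnp - z_stereo)    # fused depth
    var = (1.0 - gain) * var_stereo              # fused variance
    return z, var
```

The fused variance is always smaller than either input variance, so each agreeing measurement tightens the map-point estimate.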
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910634021.1A CN110490900B (en) | 2019-07-12 | 2019-07-12 | Binocular vision positioning method and system under dynamic environment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110490900A true CN110490900A (en) | 2019-11-22 |
CN110490900B CN110490900B (en) | 2022-04-19 |
Family
ID=68546103
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910634021.1A Active CN110490900B (en) | 2019-07-12 | 2019-07-12 | Binocular vision positioning method and system under dynamic environment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110490900B (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111016887A (en) * | 2019-12-23 | 2020-04-17 | 深圳市豪恩汽车电子装备股份有限公司 | Automatic parking device and method for motor vehicle |
CN111060113A (en) * | 2019-12-31 | 2020-04-24 | 歌尔股份有限公司 | Map updating method and device |
CN111091621A (en) * | 2019-12-11 | 2020-05-01 | 东南数字经济发展研究院 | Binocular vision synchronous positioning and composition method, device, equipment and storage medium |
CN111476812A (en) * | 2020-04-03 | 2020-07-31 | 浙江大学 | Map segmentation method and device, pose estimation method and equipment terminal |
CN111598939A (en) * | 2020-05-22 | 2020-08-28 | 中原工学院 | Human body circumference measuring method based on multi-vision system |
CN111639658A (en) * | 2020-06-03 | 2020-09-08 | 北京维盛泰科科技有限公司 | Method and device for detecting and eliminating dynamic characteristic points in image matching |
CN111683203A (en) * | 2020-06-12 | 2020-09-18 | 达闼机器人有限公司 | Grid map generation method and device and computer readable storage medium |
CN111862150A (en) * | 2020-06-19 | 2020-10-30 | 杭州易现先进科技有限公司 | Image tracking method and device, AR device and computer device |
CN111914784A (en) * | 2020-08-10 | 2020-11-10 | 北京大成国测科技有限公司 | Method and device for detecting intrusion of trackside obstacle in real time and electronic equipment |
CN112241983A (en) * | 2020-10-19 | 2021-01-19 | 深圳市目心智能科技有限公司 | Perception system and robot based on active binocular vision |
CN112444246A (en) * | 2020-11-06 | 2021-03-05 | 北京易达恩能科技有限公司 | Laser fusion positioning method in high-precision digital twin scene |
CN112734842A (en) * | 2020-12-31 | 2021-04-30 | 武汉第二船舶设计研究所(中国船舶重工集团公司第七一九研究所) | Auxiliary positioning method and system for centering installation of large ship equipment |
CN113220818A (en) * | 2021-05-27 | 2021-08-06 | 南昌智能新能源汽车研究院 | Automatic mapping and high-precision positioning method for parking lot |
CN113503873A (en) * | 2021-07-14 | 2021-10-15 | 北京理工大学 | Multi-sensor fusion visual positioning method |
CN113759384A (en) * | 2020-09-22 | 2021-12-07 | 北京京东乾石科技有限公司 | Method, device, equipment and medium for determining pose conversion relation of sensor |
CN113920194A (en) * | 2021-10-08 | 2022-01-11 | 电子科技大学 | Quadrotor positioning method based on visual-inertial fusion |
CN114066824A (en) * | 2021-10-28 | 2022-02-18 | 华南理工大学 | Binocular vision odometer method with dynamic target detection function |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105469405A (en) * | 2015-11-26 | 2016-04-06 | 清华大学 | Visual ranging-based simultaneous localization and map construction method |
CN105953796A (en) * | 2016-05-23 | 2016-09-21 | 北京暴风魔镜科技有限公司 | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone |
CN107193279A (en) * | 2017-05-09 | 2017-09-22 | 复旦大学 | Robot localization and map construction system based on monocular vision and IMU information |
CN109166149A (en) * | 2018-08-13 | 2019-01-08 | 武汉大学 | Positioning and three-dimensional wireframe reconstruction method and system fusing a binocular camera and an IMU |
CN109307508A (en) * | 2018-08-29 | 2019-02-05 | 中国科学院合肥物质科学研究院 | Multi-keyframe-based panoramic inertial navigation SLAM method |
CN109465832A (en) * | 2018-12-18 | 2019-03-15 | 哈尔滨工业大学(深圳) | High-precision tightly-coupled vision-IMU fusion positioning method and system |
CN109631855A (en) * | 2019-01-25 | 2019-04-16 | 西安电子科技大学 | High-precision vehicle positioning method based on ORB-SLAM |
CN109993113A (en) * | 2019-03-29 | 2019-07-09 | 东北大学 | Pose estimation method based on fusion of RGB-D and IMU information |
2019-07-12: Application CN201910634021.1A filed in China; granted as patent CN110490900B (status: active)
Non-Patent Citations (3)
Title |
---|
CADENA C: "Simultaneous localization and mapping: Present, future, and the robust-perception age", IEEE Transactions on Robotics * 
姚二亮 (YAO Erliang): "Robot simultaneous localization and mapping algorithm based on Vision-IMU", 仪器仪表学报 (Chinese Journal of Scientific Instrument) *
徐宽 (XU Kuan): "Research on binocular vision SLAM fusing IMU information", 中国优秀硕士学位论文全文数据库 信息科技辑 (China Masters' Theses Full-text Database, Information Science and Technology) *
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111091621A (en) * | 2019-12-11 | 2020-05-01 | 东南数字经济发展研究院 | Binocular vision synchronous positioning and composition method, device, equipment and storage medium |
CN111016887A (en) * | 2019-12-23 | 2020-04-17 | 深圳市豪恩汽车电子装备股份有限公司 | Automatic parking device and method for motor vehicle |
CN111060113A (en) * | 2019-12-31 | 2020-04-24 | 歌尔股份有限公司 | Map updating method and device |
US12031837B2 (en) | 2019-12-31 | 2024-07-09 | Goertek Inc. | Method and device for updating map |
CN111060113B (en) * | 2019-12-31 | 2022-04-08 | 歌尔股份有限公司 | Map updating method and device |
CN111476812A (en) * | 2020-04-03 | 2020-07-31 | 浙江大学 | Map segmentation method and device, pose estimation method and equipment terminal |
CN111598939A (en) * | 2020-05-22 | 2020-08-28 | 中原工学院 | Human body circumference measuring method based on multi-vision system |
CN111639658A (en) * | 2020-06-03 | 2020-09-08 | 北京维盛泰科科技有限公司 | Method and device for detecting and eliminating dynamic characteristic points in image matching |
CN111639658B (en) * | 2020-06-03 | 2023-07-21 | 北京维盛泰科科技有限公司 | Method and device for detecting and eliminating dynamic feature points in image matching |
CN111683203B (en) * | 2020-06-12 | 2021-11-09 | 达闼机器人有限公司 | Grid map generation method and device and computer readable storage medium |
CN111683203A (en) * | 2020-06-12 | 2020-09-18 | 达闼机器人有限公司 | Grid map generation method and device and computer readable storage medium |
US11972523B2 (en) | 2020-06-12 | 2024-04-30 | Cloudminds Robotics Co., Ltd. | Grid map generation method and device, and computer-readable storage medium |
CN111862150A (en) * | 2020-06-19 | 2020-10-30 | 杭州易现先进科技有限公司 | Image tracking method and device, AR device and computer device |
CN111914784A (en) * | 2020-08-10 | 2020-11-10 | 北京大成国测科技有限公司 | Method and device for detecting intrusion of trackside obstacle in real time and electronic equipment |
CN113759384B (en) * | 2020-09-22 | 2024-04-05 | 北京京东乾石科技有限公司 | Method, device, equipment and medium for determining pose conversion relation of sensor |
CN113759384A (en) * | 2020-09-22 | 2021-12-07 | 北京京东乾石科技有限公司 | Method, device, equipment and medium for determining pose conversion relation of sensor |
CN112241983A (en) * | 2020-10-19 | 2021-01-19 | 深圳市目心智能科技有限公司 | Perception system and robot based on active binocular vision |
CN112444246A (en) * | 2020-11-06 | 2021-03-05 | 北京易达恩能科技有限公司 | Laser fusion positioning method in high-precision digital twin scene |
CN112444246B (en) * | 2020-11-06 | 2024-01-26 | 北京易达恩能科技有限公司 | Laser fusion positioning method in high-precision digital twin scene |
CN112734842A (en) * | 2020-12-31 | 2021-04-30 | 武汉第二船舶设计研究所(中国船舶重工集团公司第七一九研究所) | Auxiliary positioning method and system for centering installation of large ship equipment |
CN112734842B (en) * | 2020-12-31 | 2022-07-01 | 武汉第二船舶设计研究所(中国船舶重工集团公司第七一九研究所) | Auxiliary positioning method and system for centering installation of large ship equipment |
CN113220818A (en) * | 2021-05-27 | 2021-08-06 | 南昌智能新能源汽车研究院 | Automatic mapping and high-precision positioning method for parking lot |
CN113220818B (en) * | 2021-05-27 | 2023-04-07 | 南昌智能新能源汽车研究院 | Automatic mapping and high-precision positioning method for parking lot |
CN113503873A (en) * | 2021-07-14 | 2021-10-15 | 北京理工大学 | Multi-sensor fusion visual positioning method |
CN113503873B (en) * | 2021-07-14 | 2024-03-12 | 北京理工大学 | Visual positioning method for multi-sensor fusion |
CN113920194B (en) * | 2021-10-08 | 2023-04-21 | 电子科技大学 | Quadrotor positioning method based on visual-inertial fusion |
CN113920194A (en) * | 2021-10-08 | 2022-01-11 | 电子科技大学 | Quadrotor positioning method based on visual-inertial fusion |
CN114066824A (en) * | 2021-10-28 | 2022-02-18 | 华南理工大学 | Binocular vision odometer method with dynamic target detection function |
CN114066824B (en) * | 2021-10-28 | 2024-05-14 | 华南理工大学 | Binocular vision odometer method with dynamic target detection function |
Also Published As
Publication number | Publication date |
---|---|
CN110490900B (en) | 2022-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110490900A (en) | Binocular visual positioning method and system under dynamic environment | |
CN111024066B (en) | Unmanned aerial vehicle vision-inertia fusion indoor positioning method | |
CN109993113B (en) | Pose estimation method based on RGB-D and IMU information fusion | |
US10192113B1 (en) | Quadocular sensor design in autonomous platforms | |
US10390003B1 (en) | Visual-inertial positional awareness for autonomous and non-autonomous device | |
US10496104B1 (en) | Positional awareness with quadocular sensor in autonomous platforms | |
US10762643B2 (en) | Method for evaluating image data of a vehicle camera | |
CN109166149A (en) | Positioning and three-dimensional wireframe reconstruction method and system fusing a binocular camera and an IMU | |
CN108406731A (en) | Positioning device, method and robot based on depth vision | |
CN110044354A (en) | Binocular vision indoor positioning and mapping method and device | |
CN112233177B (en) | Unmanned aerial vehicle pose estimation method and system | |
CN105783913A (en) | SLAM device integrating multiple vehicle-mounted sensors and control method of device | |
CN112344923B (en) | Robot positioning method and positioning device thereof | |
CN109443348A (en) | Underground garage parking-space tracking method based on fusion of surround-view vision and inertial navigation | |
CN112802196B (en) | Binocular inertial simultaneous localization and mapping method based on point-line feature fusion | |
CN208323361U (en) | Positioning device and robot based on depth vision | |
CN112115980A (en) | Binocular vision odometer design method based on optical flow tracking and point line feature matching | |
CN114623817A (en) | Self-calibration-containing visual inertial odometer method based on key frame sliding window filtering | |
CN113223045A (en) | Vision and IMU sensor fusion positioning system based on dynamic object semantic segmentation | |
CN110533719A (en) | Augmented reality localization method and device based on environmental visual Feature point recognition technology | |
CN110599545A (en) | Feature-based dense map construction system | |
CN110207693A (en) | Robust stereo visual-inertial pre-integration SLAM method | |
CN114485640A (en) | Monocular visual-inertial simultaneous localization and mapping method and system based on point-line features | |
CN114648584B (en) | Robustness control method and system for multi-source fusion positioning | |
CN112731503B (en) | Pose estimation method and system based on front end tight coupling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||