CN107909612A - A method and system for visual simultaneous localization and mapping based on 3D point clouds - Google Patents
- Publication number
- CN107909612A (application CN201711252235.XA)
- Authority
- CN
- China
- Prior art keywords
- frame
- image frame
- point cloud
- map
- straight lines
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/20—Instruments for performing navigational calculations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Abstract
The purpose of this application is to provide a method and system for visual simultaneous localization and mapping based on 3D point clouds, specifically comprising: determining the camera pose of a newly acquired image frame; detecting, based on the camera pose, whether the image frame is a keyframe; and, if the image frame is a keyframe, fitting 3D lines of the map from the 3D point cloud corresponding to the frame. On the basis of existing point-feature-based vSLAM methods, this application proposes an entirely new scheme based on combined point and line features. The feature points extracted by the direct method show pronounced gradient changes at edges, which facilitates extracting straight lines in 3D space; moreover, detecting lines in the 3D point cloud not only reduces the detection of mismatched lines but also makes line triangulation unnecessary.
Description
Technical field
This application relates to the field of intelligent driving, and in particular to techniques for visual simultaneous localization and mapping based on 3D point clouds.
Background
Simultaneous localization and mapping (SLAM) means that a smart device such as a robot, moving through an unknown environment from an unknown starting position, localizes itself during motion from pose estimates and the map, and builds an incremental map on the basis of that self-localization, thereby achieving autonomous positioning and navigation. Owing to its theoretical and practical value, SLAM is regarded by many researchers as the key to truly autonomous mobile robots and intelligent driving.
Compared with earlier localization and mapping with lidar, methods that use a camera as the sensor have increasingly become mainstream and are known as visual SLAM (vSLAM). Current vSLAM methods mainly comprise indirect methods, which are based on feature points and minimize the reprojection error of matched points, and direct methods, which are based on pixel intensities and minimize the photometric error. Both kinds of methods depend on the extraction and matching of point features and handle richly textured scenes well.
Summary of the invention
The purpose of this application is to provide a method and system for visual simultaneous localization and mapping based on 3D point clouds.
According to one aspect of the application, a method for visual simultaneous localization and mapping is provided, the method comprising:
determining the camera pose of a newly acquired image frame;
detecting, based on the camera pose, whether the image frame is a keyframe;
if the image frame is a keyframe, fitting 3D lines of the map from the 3D point cloud corresponding to the frame.
According to one aspect of the application, a system for visual simultaneous localization and mapping is provided, the system comprising:
a pose determination module for determining the camera pose of a newly acquired image frame;
a keyframe detection module for detecting, based on the camera pose, whether the image frame is a keyframe;
a line fitting module for fitting, if the image frame is a keyframe, 3D lines of the map from the 3D point cloud corresponding to the frame.
According to one aspect of the application, a device for visual simultaneous localization and mapping is provided, the device comprising:
a processor; and
a memory arranged to store computer-executable instructions which, when executed, cause the processor to:
determine the camera pose of a newly acquired image frame;
detect, based on the camera pose, whether the image frame is a keyframe;
if the image frame is a keyframe, fit 3D lines of the map from the 3D point cloud corresponding to the frame.
According to one aspect of the application, a computer-readable medium comprising instructions is provided, the instructions, when executed, causing a system to:
determine the camera pose of a newly acquired image frame;
detect, based on the camera pose, whether the image frame is a keyframe;
if the image frame is a keyframe, fit 3D lines of the map from the 3D point cloud corresponding to the frame.
Compared with the prior art, this application proposes, on the basis of existing point-feature-based vSLAM methods, an entirely new scheme based on combined point and line features. The feature points extracted by the direct method show pronounced gradient changes at edges, which facilitates extracting straight lines in 3D space; moreover, detecting lines in the 3D point cloud not only reduces the detection of mismatched lines but also makes line triangulation unnecessary.
Brief description of the drawings
Other features, objects and advantages of the application will become more apparent upon reading the following detailed description of non-limiting embodiments with reference to the drawings:
Fig. 1 shows a flow chart of a method for visual simultaneous localization and mapping according to one embodiment of the application;
Fig. 2 shows sub-steps of one step in Fig. 1;
Fig. 3 shows a block diagram of a system for visual simultaneous localization and mapping according to one embodiment of the application;
Fig. 4 shows an exemplary system according to various embodiments of the application.
The same or similar reference numerals in the drawings denote the same or similar components.
Embodiments
The application is described in further detail below with reference to the drawings.
In a typical configuration of the application, the terminal, the devices of the service network and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces and memory.
The memory may take the form of volatile memory, random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash RAM in a computer-readable medium. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The devices referred to in this application include, but are not limited to, user equipment, network equipment, or equipment formed by integrating user equipment and network equipment over a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user (for example via a touch pad), such as a smartphone or a tablet computer; the mobile electronic product may run any operating system, such as Android or iOS. The network equipment includes electronic devices capable of automatically performing numerical computation and information processing according to preset or stored instructions, whose hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices and the like. The network equipment includes, but is not limited to, a computer, a network host, a single network server, a cluster of multiple network servers, or a cloud formed of multiple servers; here, the cloud is composed of a large number of computers or network servers based on cloud computing, where cloud computing is a form of distributed computing: a virtual supercomputer composed of a group of loosely coupled computers. The network includes, but is not limited to, the Internet, wide area networks, metropolitan area networks, local area networks, VPNs, wireless ad hoc networks and the like. Preferably, the device may also be a program running on the user equipment, the network equipment, or equipment formed by integrating user equipment and network equipment, network equipment and a touch terminal, or network equipment and a touch terminal over a network.
Of course, those skilled in the art will understand that the above devices are only examples; other existing or future devices, where applicable to this application, are also included within the scope of protection of this application and are incorporated herein by reference.
In the description of this application, "multiple" means two or more, unless specifically defined otherwise.
Fig. 1 shows a method for visual simultaneous localization and mapping according to one embodiment of the application; the method comprises steps S11, S12 and S13. In step S11, the visual SLAM system determines the camera pose of a newly acquired image frame; in step S12, the system detects, based on the camera pose, whether the image frame is a keyframe; in step S13, if the image frame is a keyframe, the system fits 3D lines of the map from the 3D point cloud corresponding to the frame.
Specifically, in step S11, the visual SLAM system determines the camera pose of the newly acquired image frame. For example, the system receives a new image frame, matches feature points between this frame and other frames, and uses the matches to obtain the camera pose of the current frame by minimizing the reprojection error. As another example, the system receives a new image frame and determines its camera pose by registering it directly against the previous image frame, using an image pyramid and coarse-to-fine tracking. As yet another example, the system takes the camera pose obtained by either of the two preceding approaches as an initial value and back-projects the point and line features of the map into the image to obtain a more accurate camera pose; the embodiments of this scheme take this higher-precision approach as the example.
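As a minimal sketch of the reprojection idea just described — projecting map points with a candidate pose and measuring how far they land from their matched image observations — the following uses a plain numpy pinhole model; all names and values are illustrative, not from the patent:

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D map points X (N,3) into the image with pose (R, t)."""
    Xc = (R @ X.T).T + t            # world -> camera coordinates
    uv = (K @ Xc.T).T               # pinhole projection
    return uv[:, :2] / uv[:, 2:3]   # perspective division

def reprojection_error(K, R, t, X, obs):
    """Mean pixel distance between projections and matched 2D observations."""
    err = project(K, R, t, X) - obs
    return float(np.mean(np.linalg.norm(err, axis=1)))

# Toy example: identity pose, two points in front of the camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
X = np.array([[0.0, 0.0, 2.0], [0.4, 0.0, 2.0]])
obs = project(K, R, t, X)                       # perfect observations
assert reprojection_error(K, R, t, X, obs) == 0.0
```

A pose optimizer would perturb (R, t) to drive this error down; the patent additionally matches projected line features, which this sketch omits.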
In step S12, the visual SLAM system detects, based on the camera pose, whether the image frame is a keyframe. For example, the system detects whether the frame is a keyframe from the camera pose of the current frame and its relation to other keyframes.
In step S13, if the image frame is a keyframe, the visual SLAM system fits 3D lines of the map from the 3D point cloud corresponding to the frame. For example, if the frame is detected to be a keyframe, the system projects the point and line features of the map into the frame according to its camera pose to generate the corresponding 3D point cloud, and fits 3D lines of the map from that point cloud.
For example, the visual SLAM system receives a new image frame and, from its relation to other frames, obtains an initial camera pose of the current frame by a direct method based on pixel intensities or an indirect method based on feature points. Taking this initial camera pose as an initial value, it projects the point and line features of the map into the current frame and computes a more accurate pose from the matched point and line features corresponding to the projections, where a matched point feature is the corresponding point nearest to the projected point feature, and a matched line feature is a line detected with the LSD (Line Segment Detector) algorithm within the neighborhood of the projected line feature. The system then selects multiple keyframes close to the current frame in time and space and, based on the camera pose of the current frame and its association with these keyframes, detects whether the current frame is a keyframe. If the current frame is detected to be a keyframe, the system projects the key points of these keyframes into the current keyframe, determines the depth of each projected point from the neighborhood information around it, generates a semi-dense depth map, and thereby obtains the 3D point cloud corresponding to the frame. The system fits 3D lines to the current frame's point cloud in 3D space with the RANSAC (Random Sample Consensus) algorithm; to account for uncertainty, the computations use the Mahalanobis distance. The system then recovers a more accurate line from the inliers by least squares and removes the corresponding inlier points; this recovery process iterates until no further line can be extracted.
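The RANSAC line-fitting loop just described can be sketched as follows. This is a simplified illustration that uses plain Euclidean point-to-line distance where the patent uses the Mahalanobis distance; function names, thresholds and iteration counts are assumptions:

```python
import numpy as np

def fit_line_3d(pts):
    """Least-squares 3D line through pts: centroid + principal direction (PCA)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]                         # point on line, unit direction

def point_line_dist(pts, c, d):
    """Euclidean point-to-line distances (the patent uses Mahalanobis)."""
    v = pts - c
    return np.linalg.norm(v - np.outer(v @ d, d), axis=1)

def ransac_line(pts, iters=200, tol=0.05, min_inliers=10, rng=None):
    """Sample 2 points, score inliers, refine the best model by least squares."""
    rng = rng or np.random.default_rng(0)
    best = None
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        n = np.linalg.norm(d)
        if n < 1e-9:
            continue
        inl = point_line_dist(pts, pts[i], d / n) < tol
        if best is None or inl.sum() > best.sum():
            best = inl
    if best is None or best.sum() < min_inliers:
        return None
    c, d = fit_line_3d(pts[best])           # least-squares refinement on inliers
    return c, d, best                       # inliers would then be removed and the loop repeated

# Toy cloud: 30 points on a line plus 10 random outliers.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 30)
line_pts = np.outer(t, [1.0, 2.0, 0.5])
cloud = np.vstack([line_pts, rng.uniform(-3, 3, (10, 3))])
c, d, inliers = ransac_line(cloud)
assert inliers[:30].sum() >= 25             # the true line points are recovered
```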
In some embodiments, the method further comprises step S14 (not shown). In step S14, the visual SLAM system optimizes the point and line features of the map and the camera pose. For example, the system optimizes the point and line features of the map and the camera pose of the current frame; the optimization methods include, but are not limited to, global optimization and local optimization.
For example, after the visual SLAM system establishes new point and line features in the map, it optimizes the point and line features of the map and the second pose of the current frame. For efficiency, the system performs local optimization of the point and line features and the second pose with a sliding-window filter, which specifically includes:
1) for the pixel values of the point and line features and the second pose, the system jointly optimizes, with weighting, the photometric error of the point features and the geometric error of the lines, where the photometric error of a point comprises the spatial distance error between each projected point and its corresponding point in the image, and the error of a line comprises the spatial distance error between the projected line and the corresponding line in the image;
2) a Huber error function is used, together with gradient-based weights, to suppress the influence of outliers; the weight formula is
w_p = c^2 / (c^2 + ||∇I_p||^2),
where w_p is the gradient-based weight of the error, c is a constant and ∇I_p is the gradient of the pixel value. For example, when |p| ≤ δ the quadratic branch of the Huber objective, p^2/2, applies, and when |p| > δ the linear branch, δ(|p| − δ/2), applies, where δ is a threshold that selects the objective function used for errors of different magnitudes;
3) the optimization uses the Gauss-Newton method;
4) first-order Jacobian approximations are used to guarantee the consistency of the system.
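The Huber objective and the gradient-based down-weighting of step 2) can be illustrated as below. The Huber branches follow the standard definition; the weight form w_p = c²/(c² + ||∇I_p||²) is a DSO-style choice assumed here, since the original gives the formula only as an image:

```python
import math

def huber(p, delta):
    """Standard Huber objective: quadratic for small errors, linear for large ones."""
    a = abs(p)
    return 0.5 * p * p if a <= delta else delta * (a - 0.5 * delta)

def gradient_weight(grad_sq, c):
    """Gradient-based down-weighting of high-gradient pixels (assumed DSO-style
    form): w_p = c^2 / (c^2 + ||grad I_p||^2)."""
    return c * c / (c * c + grad_sq)

assert huber(1.0, 2.0) == 0.5                  # quadratic branch: 0.5 * 1^2
assert huber(3.0, 2.0) == 2.0 * (3.0 - 1.0)    # linear branch: delta*(|p| - delta/2)
assert gradient_weight(0.0, 4.0) == 1.0        # flat region: full weight
assert gradient_weight(1e9, 4.0) < 1e-6        # strong edge: heavily down-weighted
```

Note that the two Huber branches agree at |p| = δ, so the objective is continuous, which is what makes it well suited to Gauss-Newton optimization.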
In some embodiments, step S13 further comprises sub-step S133 (not shown). In sub-step S133, the visual SLAM system determines the 2D line associated with a 3D line from the projection region of the 3D line in the image frame. The method further comprises step S15 (not shown); in step S15, the visual SLAM system presents the map according to the 3D lines and the 2D lines associated with them. For example, after the system fits the corresponding 3D lines from the point cloud of the current frame, it detects 2D lines within the projection neighborhood of each 3D line in the current frame image and associates the 3D line with the corresponding 2D line; the system then presents the 3D lines of the map according to this association.
For example, after the visual SLAM system fits the corresponding 3D lines from the point cloud of the current frame, it detects 2D lines with the LSD algorithm within the projection neighborhood of each 3D line in the current frame image, associates each 3D line with the corresponding 2D line position, and adaptively adjusts the positions of the other lines; then, when presenting the map, the system uses the association between 3D and 2D lines to present the 3D lines of the map more accurately.
In some embodiments, as shown in Fig. 2, step S13 comprises sub-step S131 and sub-step S132. In sub-step S131, if the image frame is a keyframe, the visual SLAM system generates the 3D point cloud corresponding to the frame by projecting the active points of the map into it; in sub-step S132, the system fits the 3D lines of the map from that point cloud.
For example, if the current frame is a keyframe, the visual SLAM system takes the pose of the previous frame as an initial value and computes the rigid-body transform of the current frame from its pose; it projects the key points of the map into the current frame, obtains the depth of each projected point from the neighborhood information around it, generates a semi-dense depth map of the current frame from the depths of the key points, and thereby obtains the 3D point cloud corresponding to the current frame. The system fits 3D lines to the current frame's point cloud in 3D space with the RANSAC algorithm; to account for uncertainty, the computations use the Mahalanobis distance. The system then recovers a more accurate line from the inliers by least squares and removes the corresponding inlier points; this recovery process iterates until no further line can be extracted.
In some embodiments, in sub-step S132, the visual SLAM system preprocesses the 3D point cloud and fits the 3D lines of the map from the preprocessed point cloud. In some embodiments, preprocessing the point cloud includes, but is not limited to: generating new points in the cloud by interpolation; and deleting points in the cloud whose distance to an existing line of the map is below a distance threshold. For example, the system preprocesses the point cloud corresponding to the current frame, e.g. by generating new points through interpolation, or by deleting the points whose distance to a line in the map is below the distance threshold; the system then fits lines to the preprocessed point cloud to obtain the corresponding 3D lines.
For example, the visual SLAM system preprocesses the point cloud corresponding to the current frame, e.g. by generating new points in the cloud through interpolation; as another example, the system extends the existing lines of the map, deletes the points in the cloud whose distance to a map line is below the distance threshold, and obtains a new point cloud; the system then fits lines to the preprocessed point cloud to obtain the corresponding 3D lines.
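The preprocessing just described — densifying the cloud by interpolation and dropping points that lie too close to existing map lines — could look like the following sketch; the midpoint interpolation scheme and the threshold value are assumptions:

```python
import numpy as np

def dist_to_line(pts, c, d):
    """Distance of each point to the line through c with unit direction d."""
    v = pts - c
    return np.linalg.norm(v - np.outer(v @ d, d), axis=1)

def preprocess_cloud(pts, map_lines, thresh=0.1, interp=True):
    """Densify by midpoint interpolation, then drop points closer than
    `thresh` to any existing map line (so they are not re-fitted)."""
    if interp and len(pts) > 1:
        mids = 0.5 * (pts[:-1] + pts[1:])   # new points by interpolation
        pts = np.vstack([pts, mids])
    keep = np.ones(len(pts), dtype=bool)
    for c, d in map_lines:
        keep &= dist_to_line(pts, c, d) >= thresh
    return pts[keep]

# Points along the x-axis (an existing map line) are dropped; others survive.
cloud = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 2.0, 0.0]])
x_axis = (np.zeros(3), np.array([1.0, 0.0, 0.0]))
out = preprocess_cloud(cloud, [x_axis])
assert all(dist_to_line(out, *x_axis) >= 0.1)
```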
In some embodiments, step S13 further comprises sub-step S134 (not shown). In sub-step S134, the visual SLAM system updates the coordinates of the point features of the map according to the 3D point cloud. For example, for a point feature, the system recovers its depth by triangulation and propagates the uncertainty.
For example, the visual SLAM system matches a pair of point features in two views and, from the camera poses of the current frame and the previous frame corresponding to the point feature, triangulates the depth of the point feature in the camera coordinate system of the previous frame to recover its depth in the camera coordinate system of the current frame, and propagates the uncertainty information.
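The triangulation step can be illustrated with a standard linear (DLT) two-view triangulation; the uncertainty propagation mentioned in the text is omitted from this sketch, and the camera matrices are illustrative:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: normalized pixel coordinates (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)     # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# Two cameras 1 unit apart along x, both looking down +z (K = I for brevity).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]
x2 = (X_true[:2] + np.array([-1.0, 0.0])) / X_true[2]
assert np.allclose(triangulate(P1, P2, x1, x2), X_true, atol=1e-8)
```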
In some embodiments, in step S12, the visual SLAM system selects multiple keyframes, determines keyframe parameters of the image frame from the selected keyframes and the camera pose, and decides from those parameters whether the image frame is a keyframe. In some embodiments, the keyframe parameters include, but are not limited to: field-of-view change, camera translation change and exposure-time change. For example, the system selects multiple keyframes close to the current frame in time and space and computes the keyframe parameters of the frame from the camera poses of these keyframes and the current frame, the parameters including, but not limited to, the field-of-view change, the camera translation change and the exposure-time change; the system then decides from the keyframe parameters of the current frame whether it is a keyframe.
For example, the visual SLAM system selects, according to the current frame's relevant information, multiple keyframes that are close in time and space, and determines the keyframe parameters of the current frame from these keyframes and the second pose of the current frame. The keyframe parameters include:
1) field-of-view change: f = ( (1/n) Σ ||p − p'||² )^(1/2);
2) camera translation change: f_t = ( (1/n) Σ ||p − p'_t||² )^(1/2);
3) exposure-time change: a = | log( e^(a_j − a_i) · t_j / t_i ) |.
In formula 1), f is the distance metric, p denotes the pixel position of a key point of the current frame and p' the pixel position of the corresponding key point in the selected keyframes; in formula 2), f_t is the distance metric, p denotes the position of a key point of the current frame and p'_t the projected position of the key point of the keyframes; in formula 3), a is a parameter of the photometric calibration.
The visual SLAM system determines the three keyframe parameters of the current frame from the multiple keyframes and the second pose of the current frame, forms their weighted sum and compares it with a preset threshold:
w_f · f + w_ft · f_t + w_a · a,
where w_f, w_ft and w_a are the weights preset by the system for the field-of-view change, the camera translation change and the exposure-time change, respectively.
If the weighted sum of the three parameters is equal to or greater than the preset threshold T_kf, the visual SLAM system determines that the current frame is a keyframe.
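The weighted-sum keyframe test reduces to a one-line predicate; the weights and threshold below are placeholders, not values from the patent:

```python
def is_keyframe(f, ft, a, w_f=1.0, w_ft=1.0, w_a=1.0, T_kf=1.0):
    """Frame becomes a keyframe when the weighted sum of field-of-view change f,
    camera translation change ft and exposure-time change a reaches T_kf.
    (Weights and threshold here are placeholder values.)"""
    return w_f * f + w_ft * ft + w_a * a >= T_kf

assert is_keyframe(0.6, 0.3, 0.2)       # 1.1 >= 1.0 -> create a new keyframe
assert not is_keyframe(0.2, 0.1, 0.1)   # 0.4 < 1.0  -> keep tracking only
```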
Here, those skilled in the art should appreciate that the above keyframe parameters are only examples; other existing or future forms of keyframe parameters, where applicable to this application, are also included in the scope of protection of this application and are incorporated herein by reference.
In some embodiments, the method further comprises step S15 (not shown). In step S15, if the image frame is not a keyframe, the visual SLAM system updates the coordinates of the point and line features of the map. For example, if the current frame is not a keyframe, the system uses a probabilistic depth filter, based on the current frame, to update the depth values of the points and 3D line endpoints of the map.
For example, if the current frame is not a keyframe, then for each point {p, u} on the other keyframes whose depth has not yet been determined, the system finds the epipolar line L_p corresponding to p according to the second pose, searches the epipolar line for the point u' most similar to u, computes the depth x and its uncertainty τ by triangulation, and updates the depth estimate of p with a Bayesian probability model. When the depth estimate of p converges, its three-dimensional coordinates are computed and added to the map.
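The Bayesian depth update can be sketched, under a pure-Gaussian simplification (the patent's probabilistic depth filter may additionally model outliers, as in SVO-style filters), as a product of Gaussians:

```python
def update_depth(mu, sigma2, x, tau2):
    """Gaussian depth-filter update: fuse the current estimate N(mu, sigma2)
    with a triangulated measurement N(x, tau2) by multiplying the Gaussians."""
    s2 = 1.0 / (1.0 / sigma2 + 1.0 / tau2)   # fused variance (precisions add)
    m = s2 * (mu / sigma2 + x / tau2)        # precision-weighted fused mean
    return m, s2

mu, sigma2 = 2.0, 1.0
for _ in range(10):                          # repeated consistent measurements...
    mu, sigma2 = update_depth(mu, sigma2, 2.5, 0.5)
assert sigma2 < 0.05                         # ...shrink the uncertainty (convergence)
assert abs(mu - 2.5) < 0.1                   # the estimate moves to the measured depth
```

Once the variance falls below a convergence threshold, the 3D coordinates of the point would be computed from the converged depth and inserted into the map, as the text describes.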
Fig. 3 is shown includes pose according to a kind of of the application for vision positioning immediately and the system for building figure, the system
Determining module 11, key frame detection 12 and fitting a straight line module 13.Pose determining module 11, for the picture frame for determining newly to obtain
Camera posture information;Key frame detection 12, for detecting whether the picture frame is crucial based on the camera posture information
Frame;Fitting a straight line module 13, if the picture frame is key frame, for according to the corresponding 3D point cloud fitting generation of the picture frame
3D straight lines in map.
Specifically, the pose determining module 11 determines the camera pose information of a newly acquired image frame. For example, the visual SLAM system receives a new image frame, matches feature points between this frame and other frames, and obtains the camera pose information of the current image frame from the matches by minimizing the re-projection error. As another example, the system receives a new image frame and determines its camera pose information by directly registering it against the previous image frame, using an image pyramid and a coarse-to-fine tracking strategy. As yet another example, the system takes the camera pose information obtained by either of the first two approaches as an initial value and re-projects the point and line features of the map back into the image to obtain more accurate camera pose information; the embodiments of this scheme take this last approach as an example, since it yields the most accurate camera pose information.
The key frame detection module 12 detects, based on the camera pose information, whether the image frame is a key frame. For example, the visual SLAM system detects whether the current image frame is a key frame according to its camera pose information and its relation to other key frames.
The line fitting module 13, if the image frame is a key frame, fits and generates the 3D lines in the map from the 3D point cloud corresponding to the image frame. For example, if the image frame is detected to be a key frame, the visual SLAM system projects the point and line features of the map into the image frame according to its camera pose to generate the corresponding 3D point cloud, and fits the 3D lines in the map from that 3D point cloud.
For example, the visual SLAM system receives a new image frame and, from its relation to other image frames, obtains initial camera pose information for the current image frame either by a direct method based on pixel intensities or by an indirect method based on feature points. Taking this initial camera pose as an initial value, the system projects the point and line features of the map into the current image frame and computes a more accurate pose from the matches of the projected features, where a matched point feature is the corresponding point nearest to the projected point feature, and a matched line feature is a line detected by the LSD (Line Segment Detector) algorithm within the neighborhood of the projected line feature. The visual SLAM system then selects several key frames close to the current frame in both time and space, and detects whether the current frame is a key frame based on the camera pose information of the current frame and of these key frames. If the current frame is detected to be a key frame, the system projects the key points of the selected key frames into the current key frame, determines the depth of each projected point from the neighborhood information around it, and generates a semi-dense depth map, yielding the 3D point cloud corresponding to the image frame. The system then fits 3D lines to the current frame's 3D point cloud in three-dimensional space with the RANSAC (Random Sample Consensus) algorithm; to account for uncertainty, distances are computed as Mahalanobis distances. Next, the system recovers a more accurate line from the inliers by least squares and deletes the corresponding inliers, and this recovery process is iterated until no further line can be extracted.
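The RANSAC-plus-least-squares loop described above can be sketched as follows. This is a minimal illustration under simplifying assumptions: plain Euclidean point-to-line distances stand in for the Mahalanobis distances the disclosure uses to account for uncertainty, and all tuning values (`thresh`, `min_inliers`, `iters`) are hypothetical, not taken from the disclosure.

```python
import numpy as np

def point_line_dist(points, p0, d):
    """Euclidean distance from each point to the line p0 + t*d (d a unit vector)."""
    diff = points - p0
    perp = diff - np.outer(diff @ d, d)  # component orthogonal to the line
    return np.linalg.norm(perp, axis=1)

def fit_line_lsq(points):
    """Least-squares 3D line through points: centroid + principal direction."""
    centroid = points.mean(axis=0)
    # First right singular vector of the centered cloud = best-fit direction
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

def extract_lines(points, thresh=0.05, min_inliers=50, iters=200, seed=0):
    """Iteratively extract 3D lines from a point cloud: RANSAC to find a
    consensus line, least squares to refine it on the inliers, then delete
    the inliers and repeat until no further line can be extracted."""
    rng = np.random.default_rng(seed)
    lines, pts = [], points.copy()
    while len(pts) >= min_inliers:
        best_mask, best_count = None, 0
        for _ in range(iters):
            i, j = rng.choice(len(pts), size=2, replace=False)
            d = pts[j] - pts[i]
            n = np.linalg.norm(d)
            if n < 1e-9:
                continue
            mask = point_line_dist(pts, pts[i], d / n) < thresh
            if mask.sum() > best_count:
                best_mask, best_count = mask, mask.sum()
        if best_count < min_inliers:
            break
        lines.append(fit_line_lsq(pts[best_mask]))  # refine on the inliers
        pts = pts[~best_mask]                       # delete inliers, iterate
    return lines
```

Each returned line is a `(point, unit_direction)` pair; the loop terminates once the residual cloud no longer supports a consensus line.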
In certain embodiments, the system further includes an optimization module 14 (not shown). The optimization module 14 optimizes the point and line features in the map and the camera pose information. For example, the visual SLAM system optimizes the point and line features in the map and the camera pose information of the current frame; optimization approaches include, but are not limited to, global optimization and local optimization.

For example, after establishing new point and line features in the map, the visual SLAM system optimizes the point and line features in the map and the second pose information of the current frame. For efficiency, the system performs local optimization of the point and line features and the second pose information with a sliding-window filter, which specifically includes:
1) for the pixel values of the point and line features and the second pose information, the visual SLAM system optimizes with a weighted combination of the photometric error of the point features and the geometric error of the lines, where the photometric error of a point comprises the spatial distance error between each projected point and its corresponding point in the image, and the combined error of a line comprises the spatial distance error between the projected line and the corresponding line in the image;

2) a Huber error function is used, together with gradient-based weights, to eliminate the influence of outliers, where the weight formula is:

w_p = c² / (c² + ‖∇I(p)‖²)

where w_p is the gradient-based weight of the error, c is a constant, and ∇I(p) is the gradient of the pixel value.

For example, when |p| ≤ δ, the error function is ½p²;

when |p| > δ, the error function is δ(|p| − ½δ);

where δ is a threshold for selecting the objective function corresponding to errors of different magnitudes;

3) the Gauss-Newton optimization method is used;

4) first-order Jacobian approximation is used to guarantee the consistency of the system.
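The robust weighting in step 2) can be written out as a small sketch: the gradient-based weight down-weights high-gradient pixels, and the two-branch Huber objective (with its corresponding IRLS weight) caps the influence of outliers. The constant `c` and threshold `delta` values are illustrative only.

```python
import numpy as np

def gradient_weight(grad, c=50.0):
    """Gradient-based weight w_p = c^2 / (c^2 + |grad I(p)|^2):
    pixels with large image gradient receive a smaller weight."""
    return c**2 / (c**2 + grad**2)

def huber_cost(r, delta):
    """Huber objective: r^2 / 2 for |r| <= delta,
    delta * (|r| - delta/2) in the linear region."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def huber_weight(r, delta):
    """IRLS weight of the Huber function: 1 inside the quadratic region,
    delta/|r| outside, so large residuals contribute only linearly."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))
```

In a Gauss-Newton iteration these weights multiply each residual's contribution to the normal equations, which is how the sliding-window optimization limits the effect of outliers.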
In certain embodiments, the line fitting module further includes an association unit 133 (not shown). The association unit 133 determines the 2D lines associated with the 3D lines according to the projection regions of the 3D lines in the image frame; the system further includes a presentation module 15 (not shown), which presents the map according to the 3D lines and the 2D lines associated with them. For example, after the visual SLAM system fits and generates the 3D lines corresponding to the current image frame from its 3D point cloud, it detects 2D lines within the projection neighborhood of each 3D line in the current frame image and associates each 3D line with the corresponding 2D lines; then, according to these associations, the system presents the 3D lines in the map.

For example, after the visual SLAM system fits and generates the 3D lines corresponding to the current image frame from its 3D point cloud, it detects 2D lines with the LSD algorithm within the projection neighborhood of each 3D line in the current frame image, associates each 3D line with the position of the corresponding 2D line, and adaptively adjusts the positions of the other lines; then, when the map is presented, the system uses the associations between the 3D and 2D lines to present the 3D lines in the map more accurately.
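A minimal sketch of the 3D-to-2D line association: project the 3D line's endpoints through a pinhole camera model and pick the nearest candidate 2D segment within a neighborhood radius. The LSD detection step itself is omitted (the candidate segments are assumed given), and the midpoint-distance criterion is a simplified stand-in for the projection-neighborhood search described above; `radius` is an illustrative value.

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of 3D points X (N,3) into pixel coordinates."""
    Xc = X @ R.T + t              # transform into the camera frame
    uv = Xc @ K.T                 # apply intrinsics
    return uv[:, :2] / uv[:, 2:3] # perspective division

def seg_midpoint(seg):
    return (np.asarray(seg[0], float) + np.asarray(seg[1], float)) / 2.0

def associate_line(K, R, t, line3d, segs2d, radius=10.0):
    """Associate a 3D line (two 3D endpoints) with the detected 2D segment
    whose midpoint lies closest to the projected midpoint, within radius.
    Returns the index of the matched segment, or None if no segment is
    inside the neighborhood."""
    proj = project(K, R, t, np.asarray(line3d, float))
    mid = proj.mean(axis=0)
    best, best_d = None, radius
    for i, seg in enumerate(segs2d):
        d = np.linalg.norm(seg_midpoint(seg) - mid)
        if d < best_d:
            best, best_d = i, d
    return best
```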
In certain embodiments, as shown in Fig. 2, the line fitting module 13 includes a point cloud generation unit 131 and a line fitting unit 132. The point cloud generation unit 131, if the image frame is a key frame, generates the 3D point cloud corresponding to the image frame by projecting the active points of the map into the image frame; the line fitting unit 132 fits and generates the 3D lines in the map from the 3D point cloud.

For example, if the current image frame is a key frame, the visual SLAM system takes the pose information of the previous image frame as an initial value, computes the rigid-body transform of the current frame from its pose information, projects the key points of the map into the current frame, and obtains the depth of each projected point from the neighborhood information around it; from the depth information of each key point it generates a semi-dense depth map of the current frame, yielding the 3D point cloud corresponding to the current frame. The system then fits 3D lines to the current frame's 3D point cloud in three-dimensional space with the RANSAC algorithm; to account for uncertainty, distances are computed as Mahalanobis distances. Next, the system recovers a more accurate line from the inliers by least squares and deletes the corresponding inliers, and this recovery process is iterated until no further line can be extracted.
In certain embodiments, the line fitting unit 132 pre-processes the 3D point cloud and fits and generates the 3D lines in the map from the pre-processed 3D point cloud. In certain embodiments, pre-processing the 3D point cloud includes, but is not limited to: generating new points in the 3D point cloud by interpolation; and deleting the points of the 3D point cloud whose distance to an existing line in the map is below a distance threshold. For example, the visual SLAM system pre-processes the 3D point cloud corresponding to the current frame, e.g. by generating new points in the 3D point cloud through interpolation, and e.g. by deleting the points of the 3D point cloud whose distance to a line in the map is below the distance threshold; the system then performs line fitting on the pre-processed 3D point cloud to obtain the corresponding 3D lines.

As another example, the visual SLAM system pre-processes the 3D point cloud corresponding to the current frame, e.g. by generating new points in the 3D point cloud through interpolation, and e.g. by extending the existing lines of the map and deleting the points of the 3D point cloud whose distance to a line in the map is below the distance threshold, obtaining a new 3D point cloud; the system then performs line fitting on the pre-processed 3D point cloud to obtain the corresponding 3D lines.
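The two pre-processing steps above — interpolating new points and pruning points near existing map lines — might look as follows. The midpoint interpolation and the specific distance test are illustrative stand-ins for whatever interpolation the system actually uses, and representing a map line as a `(point, unit_direction)` pair is an assumption not fixed by the text.

```python
import numpy as np

def densify(points):
    """Insert a midpoint between each pair of consecutive points — a simple
    stand-in for the interpolation step that generates new points."""
    mids = (points[:-1] + points[1:]) / 2.0
    return np.vstack([points, mids])

def prune_near_lines(points, lines, thresh):
    """Remove points whose distance to any existing map line (p0, unit d)
    is below thresh, so structure already captured by a map line is not
    refit as a new line."""
    keep = np.ones(len(points), dtype=bool)
    for p0, d in lines:
        diff = points - p0
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        keep &= dist >= thresh
    return points[keep]
```

Line fitting would then run on `prune_near_lines(densify(cloud), map_lines, thresh)` rather than on the raw cloud.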
In certain embodiments, the line fitting module 13 further includes a coordinate updating unit 134 (not shown). The coordinate updating unit 134 updates the coordinates of the point features in the map according to the 3D point cloud. For example, for a point feature, the visual SLAM system recovers its depth by triangulation and propagates the uncertainty.

For example, the visual SLAM system matches a pair of point features across two views; using the camera pose information of the current frame and the previous frame, the depth of the point feature in the camera coordinate system of the previous frame is triangulated to recover the depth of the point feature in the camera coordinate system of the current frame, and the uncertainty information is propagated.
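The two-view depth recovery can be illustrated with standard linear (DLT) triangulation from two projection matrices. This is a generic textbook formulation rather than the disclosure's exact procedure, and it omits the uncertainty propagation mentioned above.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point observed in two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are the pixel coordinates
    of the matched point feature in each view. Returns the 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],   # each observation contributes two
        x1[1] * P1[2] - P1[1],   # linear constraints on the homogeneous
        x2[0] * P2[2] - P2[0],   # point X: x × (P X) = 0
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                   # null vector of A = homogeneous solution
    return X[:3] / X[3]          # dehomogenize
```

The recovered depth is the z-component of the point in each camera's coordinate frame; propagating its variance from the pixel-matching uncertainty is the step the sketch leaves out.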
In certain embodiments, the key frame detection module 12 selects multiple key frames, determines the key frame parameters of the image frame based on the multiple key frames and the camera pose information, and determines from the key frame parameters whether the image frame is a key frame. In certain embodiments, the key frame parameters include, but are not limited to: field-of-view change information, camera translation change information, and exposure time change information. For example, the visual SLAM system selects multiple key frames close to the current frame in both time and space, and computes the key frame parameters of the image frame from the camera pose information of these key frames and of the current image frame, where the key frame parameters include, but are not limited to, the field-of-view change information, the camera translation change information, and the exposure time change information; then, the visual SLAM system judges from the key frame parameters corresponding to the current image frame whether the image frame is a key frame.

For example, the visual SLAM system selects multiple key frames close to the current frame in time and space according to the relevant information of the current frame, and determines the key frame parameters of the current image frame based on the second pose information of these key frames and the current image frame. The key frame parameters include:

1) the field-of-view change: f = sqrt((1/n) Σ ‖p − p′‖²);

2) the camera translation change: f_t = sqrt((1/n) Σ ‖p − p_t′‖²);

3) the exposure time change: a = |log(e^(a_j − a_i) · t_j / t_i)|.

In formula 1), f is the distance measure, p denotes the pixel of a key point of the current frame, and p′ denotes the pixel of the corresponding key point in the multiple key frames; in formula 2), f_t is the distance measure, p denotes the position of a key point of the current frame, and p_t′ is the projected position of the key point of the multiple key frames under translation alone; in formula 3), a_i, a_j and t_i, t_j are parameters of the photometric calibration.

The visual SLAM system determines the three key frame parameters of the current image frame based on the second pose information of the multiple key frames and the current frame, forms their weighted sum, and compares it with a predetermined threshold, e.g.:

w_f · f + w_ft · f_t + w_a · a ≥ T_kf

where w_f, w_ft, and w_a are the weights preset by the visual SLAM system for the field-of-view change information, the camera translation change information, and the exposure time change information, respectively.

If the weighted sum of the three key frame parameters is equal to or greater than the predetermined threshold T_kf, the visual SLAM system determines the current image frame to be a key frame.
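The keyframe decision above reduces to a weighted sum compared against a threshold. A sketch under the assumption that both the field-of-view change f and the translation-only change f_t are RMS pixel displacements of matched key points (the exposure change a is passed in directly); all weights and the threshold T_kf here are illustrative values, not taken from the disclosure.

```python
import numpy as np

def mean_flow(p, p_prime):
    """RMS pixel displacement between matched key points, used for both
    the field-of-view change f and the translation-only change f_t."""
    p = np.asarray(p, float)
    p_prime = np.asarray(p_prime, float)
    return np.sqrt(np.mean(np.sum((p - p_prime) ** 2, axis=1)))

def is_keyframe(f, ft, a, wf=1.0, wft=1.0, wa=1.0, T_kf=1.0):
    """Declare a new key frame when the weighted sum of the field-of-view
    change f, the translation change ft, and the exposure/brightness
    change a reaches the threshold T_kf."""
    return wf * f + wft * ft + wa * a >= T_kf
```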
Here, those skilled in the art should appreciate that the above key frame parameters are merely examples; other key frame parameters, existing now or arising hereafter, should also be included within the scope of protection of the present application if applicable thereto, and are hereby incorporated by reference.
In certain embodiments, the system further includes a coordinate update module 15 (not shown). The coordinate update module 15, if the image frame is a non-key frame, updates the coordinates of the point and line features in the map. For example, if the current frame is not a key frame, the visual SLAM system updates, based on the current image frame, the depth values of the points and of the 3D line endpoints in the map using a probability-based depth filter.

For example, if the current frame is not a key frame, then for a point {p, u} on another key frame whose depth has not yet been determined, the epipolar line L_p corresponding to p is found according to the second pose information, the point u′ most similar to u is searched for along the epipolar line, the depth x and its uncertainty τ are obtained by triangulation, and the depth estimate of point p is updated with a Bayesian probabilistic model. When the depth estimate of p converges, its three-dimensional coordinates are computed and the point is added to the map.
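The Bayesian depth update for non-key frames can be sketched, under the simplifying assumption of a purely Gaussian depth model, as the product of two Gaussians: the triangulated observation (x, τ²) is fused into the running estimate (μ, σ²). Full depth filters in the literature additionally model an inlier/outlier ratio, which is omitted here; the convergence threshold `eps` is an illustrative value.

```python
def fuse_depth(mu, sigma2, x, tau2):
    """Fuse a new depth observation x with variance tau2 into the current
    Gaussian estimate (mu, sigma2): the product of two Gaussians has
    inverse-variance-weighted mean and harmonically combined variance."""
    new_sigma2 = (sigma2 * tau2) / (sigma2 + tau2)
    new_mu = (tau2 * mu + sigma2 * x) / (sigma2 + tau2)
    return new_mu, new_sigma2

def converged(sigma2, eps=1e-4):
    """Declare convergence once the variance drops below eps; the point's
    3D coordinates can then be computed and inserted into the map."""
    return sigma2 < eps
```

Each additional observation shrinks the variance, so a point repeatedly confirmed along the epipolar search converges and is promoted into the map.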
The present application also provides a computer-readable storage medium storing computer code; when the computer code is executed, the method of any of the preceding embodiments is performed.

The present application also provides a computer program product; when the computer program product is executed by a computer device, the method of any of the preceding embodiments is performed.

The present application also provides a computer device, the computer device including:

one or more processors;

a memory for storing one or more computer programs;

wherein, when the one or more computer programs are executed by the one or more processors, the one or more processors are caused to implement the method of any of the preceding embodiments.
As shown in Fig. 4, in certain embodiments, system 300 can serve as any of the computer devices described in the illustrated embodiments or in other embodiments. In certain embodiments, system 300 may include one or more computer-readable media carrying instructions (for example, system memory or NVM/storage devices 320) and one or more processors (for example, processor(s) 305) coupled with the one or more computer-readable media and configured to execute the instructions so as to implement the modules and perform the actions described herein.

For one embodiment, system control module 310 may include any suitable interface controller to provide any suitable interface to at least one of the processor(s) 305 and/or to any suitable device or component in communication with system control module 310.

System control module 310 may include a memory controller module 330 to provide an interface to system memory 315. Memory controller module 330 can be a hardware module, a software module, and/or a firmware module.

System memory 315 can be used, for example, to load and store data and/or instructions for system 300. For one embodiment, system memory 315 may include any suitable volatile memory, for example, suitable DRAM. In certain embodiments, system memory 315 may include double data rate type four synchronous dynamic random-access memory (DDR4 SDRAM).
For one embodiment, system control module 310 may include one or more input/output (I/O) controllers to provide an interface to NVM/storage devices 320 and communication interface(s) 325.

For example, NVM/storage devices 320 can be used to store data and/or instructions. NVM/storage devices 320 may include any suitable non-volatile memory (for example, flash memory) and/or may include any suitable non-volatile storage device(s) (for example, one or more hard disk drives (HDDs), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives).

NVM/storage devices 320 may include storage resources that are physically part of the device on which system 300 is installed, or they may be accessible by the device without being part of it. For example, NVM/storage devices 320 can be accessed over a network via communication interface(s) 325.

Communication interface(s) 325 can provide an interface for system 300 to communicate over one or more networks and/or with any other suitable devices. System 300 can wirelessly communicate with one or more components of a wireless network in accordance with any of one or more wireless network standards and/or protocols.
For one embodiment, at least one of the processor(s) 305 can be packaged together with the logic of one or more controllers (for example, memory controller module 330) of system control module 310. For one embodiment, at least one of the processor(s) 305 can be packaged together with the logic of one or more controllers of system control module 310 to form a system in package (SiP). For one embodiment, at least one of the processor(s) 305 can be integrated on the same die with the logic of one or more controllers of system control module 310. For one embodiment, at least one of the processor(s) 305 can be integrated on the same die with the logic of one or more controllers of system control module 310 to form a system on chip (SoC).

In various embodiments, system 300 can be, but is not limited to: a server, a workstation, a desktop computing device, or a mobile computing device (for example, a laptop computing device, a handheld computing device, a tablet computer, a netbook, etc.). In various embodiments, system 300 can have more or fewer components and/or a different architecture. For example, in certain embodiments, system 300 includes one or more cameras, a keyboard, a liquid crystal display (LCD) screen (including a touch screen display), a non-volatile memory port, multiple antennas, a graphics chip, an application-specific integrated circuit (ASIC), and a speaker.
It should be noted that the present application may be implemented in software and/or in a combination of software and hardware; for example, it may be implemented with an application-specific integrated circuit (ASIC), a general-purpose computer, or any other similar hardware device. In one embodiment, the software program of the present application may be executed by a processor to realize the steps or functions described above. Likewise, the software program of the present application (including related data structures) may be stored in a computer-readable recording medium, for example, a RAM memory, a magnetic or optical drive or floppy disk, and similar devices. In addition, some steps or functions of the present application may be implemented in hardware, for example, as a circuit that cooperates with a processor to perform each step or function.

In addition, part of the present application may be embodied as a computer program product, for example computer program instructions which, when executed by a computer, can invoke or provide the methods and/or technical solutions according to the present application through the operation of the computer. Those skilled in the art will understand that the forms in which computer program instructions exist in a computer-readable medium include, but are not limited to, source files, executable files, installation package files, etc.; correspondingly, the ways in which computer program instructions are executed by a computer include, but are not limited to: the computer directly executes the instructions; or the computer compiles the instructions and then executes the corresponding compiled program; or the computer reads and executes the instructions; or the computer reads and installs the instructions and then executes the corresponding installed program. Here, a computer-readable medium can be any available computer-readable storage medium or communication medium that can be accessed by a computer.
Communication media include media by which communication signals containing, for example, computer-readable instructions, data structures, program modules, or other data are transmitted from one system to another system. Communication media may include conducted transmission media (such as cables and wires (for example, optical fiber, coaxial, etc.)) and wireless (non-conducted transmission) media capable of propagating energy waves, such as acoustic, electromagnetic, RF, microwave, and infrared. Computer-readable instructions, data structures, program modules, or other data can be embodied, for example, as a modulated data signal in a wireless medium (such as a carrier wave or a similar mechanism such as one embodied as part of spread-spectrum techniques). The term "modulated data signal" refers to a signal one or more of whose characteristics have been altered or set in such a manner as to encode information in the signal. The modulation can be an analog, digital, or hybrid modulation technique.

By way of example and not limitation, computer-readable storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media include, but are not limited to: volatile memory, such as random-access memory (RAM, DRAM, SRAM); non-volatile memory, such as flash memory, various read-only memories (ROM, PROM, EPROM, EEPROM), and magnetic and ferromagnetic/ferroelectric memories (MRAM, FeRAM); magnetic and optical storage devices (hard disk, tape, CD, DVD); and other media, currently known or later developed, capable of storing computer-readable information/data for use by a computer system.
Herein, a device according to one embodiment of the present application includes a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the device is triggered to operate the methods and/or technical solutions based on the multiple embodiments of the present application described above.

It is obvious to those skilled in the art that the present application is not limited to the details of the above exemplary embodiments, and that the present application can be realized in other specific forms without departing from the spirit or essential characteristics of the present application. Therefore, in every respect, the present embodiments are to be considered illustrative and not restrictive, and the scope of the present application is defined by the appended claims rather than by the above description; it is therefore intended that all changes falling within the meaning and scope of equivalency of the claims be embraced in the present application. No reference numeral in a claim should be construed as limiting the claim involved. Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices stated in a device claim may also be realized by one unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.
The various aspects of each embodiment are defined in detail in the claims. These and other aspects of each embodiment are defined in the following numbered clauses:
1. A method for visual simultaneous localization and mapping, wherein the method includes:

determining the camera pose information of a newly acquired image frame;

detecting, based on the camera pose information, whether the image frame is a key frame;

if the image frame is a key frame, fitting and generating the 3D lines in the map from the 3D point cloud corresponding to the image frame.
2. The method according to clause 1, wherein the method further includes:

optimizing the point and line features in the map and the camera pose information.

3. The method according to clause 1, wherein, if the image frame is a key frame, fitting and generating the 3D lines in the map from the 3D point cloud corresponding to the image frame further includes:

determining the 2D lines associated with the 3D lines according to the projection regions of the 3D lines in the image frame;

wherein the method further includes:

presenting the map according to the 3D lines and the 2D lines associated with the 3D lines.
4. The method according to any one of clauses 1 to 3, wherein, if the image frame is a key frame, fitting and generating the 3D lines in the map from the 3D point cloud corresponding to the image frame includes:

if the image frame is a key frame, generating the 3D point cloud corresponding to the image frame by projecting the active points of the map into the image frame;

fitting and generating the 3D lines in the map from the 3D point cloud.

5. The method according to clause 4, wherein fitting and generating the 3D lines in the map from the 3D point cloud includes:

pre-processing the 3D point cloud;

fitting and generating the 3D lines in the map from the pre-processed 3D point cloud.

6. The method according to clause 5, wherein pre-processing the 3D point cloud includes at least one of the following:

generating new points in the 3D point cloud by interpolation;

deleting the points of the 3D point cloud whose distance to an existing line in the map is below a distance threshold.
7. The method according to clause 1, wherein, if the image frame is a key frame, fitting and generating the 3D lines in the map from the 3D point cloud corresponding to the image frame further includes:

updating the coordinates of the point features in the map according to the 3D point cloud.

8. The method according to clause 1, wherein detecting, based on the camera pose information, whether the image frame is a key frame includes:

selecting multiple key frames, determining the key frame parameters of the image frame based on the multiple key frames and the camera pose information, and determining from the key frame parameters whether the image frame is a key frame.

9. The method according to clause 8, wherein the key frame parameters include at least one of field-of-view change information, camera translation change information, and exposure time change information.

10. The method according to clause 1, wherein the method further includes:

if the image frame is a non-key frame, updating the coordinates of the point and line features in the map.
11. A system for visual simultaneous localization and mapping, wherein the system includes:

a pose determining module, for determining the camera pose information of a newly acquired image frame;

a key frame detection module, for detecting, based on the camera pose information, whether the image frame is a key frame;

a line fitting module, for fitting and generating, if the image frame is a key frame, the 3D lines in the map from the 3D point cloud corresponding to the image frame.

12. The system according to clause 11, wherein the system further includes:

an optimization module, for optimizing the point and line features in the map and the camera pose information.

13. The system according to clause 11, wherein the line fitting module further includes:

an association unit, for determining the 2D lines associated with the 3D lines according to the projection regions of the 3D lines in the image frame;

wherein the system further includes:

a presentation module, for presenting the map according to the 3D lines and the 2D lines associated with the 3D lines.

14. The system according to any one of clauses 11 to 13, wherein the line fitting module includes:

a point cloud generation unit, for generating, if the image frame is a key frame, the 3D point cloud corresponding to the image frame by projecting the active points of the map into the image frame;

a line fitting unit, for fitting and generating the 3D lines in the map from the 3D point cloud.
15. according to the system described in clause 14, wherein, the line fitting unit is used for:
The 3D point cloud is pre-processed;
The 3D straight lines in the map are generated according to the pretreated 3D point cloud fitting.
16. according to the system described in clause 15, wherein, it is described pretreatment is carried out to the 3D point cloud to include following at least appointing
One:
New point is generated in the 3D point cloud by interpolation processing;
Delete the point for being less than distance threshold in the 3D point cloud with the distance of existing straight line in the map.
17. The system of clause 11, wherein the line fitting module further comprises:
a coordinate updating unit, configured to update the coordinates of the point features in the map according to the 3D point cloud.
18. The system of clause 11, wherein the key frame detection module is configured to:
select a plurality of key frames, determine key frame parameters of the image frame based on the plurality of key frames and the camera pose information, and determine, according to the key frame parameters, whether the image frame is a key frame.
19. The system of clause 18, wherein the key frame parameters comprise at least one of field-of-view change information, camera translation change information, and exposure time change information.
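A minimal sketch of the key frame test of clauses 18 and 19, assuming the frame is declared a key frame when any one parameter exceeds its threshold. The threshold values and the any-exceeds combination rule are assumptions; the patent names the parameters but does not fix how they are combined.

```python
def is_key_frame(fov_change, translation, exposure_change,
                 fov_thresh=0.2, trans_thresh=0.1, expo_thresh=0.5):
    """Decide whether a frame is a key frame from its key frame parameters:
    field-of-view change, camera translation, and exposure-time change,
    each measured against the selected reference key frames.
    """
    return (fov_change > fov_thresh
            or translation > trans_thresh
            or exposure_change > expo_thresh)
```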
20. The system of clause 11, wherein the system further comprises:
a coordinate update module, configured to, if the image frame is not a key frame, update the coordinates of the point and line features in the map.
21. A device for visual simultaneous localization and mapping, wherein the device comprises:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to perform the operations of any one of clauses 1 to 10.
22. A computer-readable medium comprising instructions that, when executed, cause a system to perform the operations of any one of clauses 1 to 10.
Claims (10)
1. A method for visual simultaneous localization and mapping, wherein the method comprises:
determining camera pose information of a newly obtained image frame;
detecting, based on the camera pose information, whether the image frame is a key frame;
if the image frame is a key frame, fitting the 3D point cloud corresponding to the image frame to generate 3D straight lines in a map.
2. The method of claim 1, wherein the method further comprises:
optimizing the point and line features in the map and the camera pose information.
3. The method of claim 1, wherein fitting the 3D point cloud corresponding to the image frame to generate the 3D straight lines in the map, if the image frame is a key frame, further comprises:
determining, according to a projection region of the 3D straight lines in the image frame, 2D straight lines associated with the 3D straight lines;
wherein the method further comprises:
presenting the map according to the 3D straight lines and the 2D straight lines associated with the 3D straight lines.
4. The method of any one of claims 1 to 3, wherein fitting the 3D point cloud corresponding to the image frame to generate the 3D straight lines in the map, if the image frame is a key frame, comprises:
if the image frame is a key frame, generating the 3D point cloud corresponding to the image frame by projecting active points in the map onto the image frame;
fitting the 3D point cloud to generate the 3D straight lines in the map.
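Claim 4's generation of a frame's 3D point cloud by projecting active map points onto the image frame could look like the pinhole-camera sketch below: a map point belongs to the frame's cloud when its projection lands inside the image. The pinhole model, the visibility test, and all names are assumptions, since the claim does not fix a camera model.

```python
import numpy as np

def visible_points(map_points, R, t, K, width, height):
    """Project active 3D map points into a frame and keep those that
    land inside the image, forming the frame's 3D point cloud.

    map_points: (N, 3) world coordinates; R, t: world-to-camera pose;
    K: 3x3 camera intrinsics. Returns the subset of map_points seen.
    """
    P = np.asarray(map_points, dtype=float)
    cam = (R @ P.T).T + t              # world -> camera coordinates
    in_front = cam[:, 2] > 0           # keep points in front of the camera
    uvw = (K @ cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]      # perspective divide to pixels
    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < width)
              & (uv[:, 1] >= 0) & (uv[:, 1] < height))
    return P[in_front][inside]
```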
5. The method of claim 4, wherein fitting the 3D point cloud to generate the 3D straight lines in the map comprises:
pre-processing the 3D point cloud;
fitting the pre-processed 3D point cloud to generate the 3D straight lines in the map.
6. The method of claim 5, wherein pre-processing the 3D point cloud comprises at least one of:
generating new points in the 3D point cloud by interpolation;
deleting points in the 3D point cloud whose distance to an existing straight line in the map is less than a distance threshold.
7. The method of claim 1, wherein fitting the 3D point cloud corresponding to the image frame to generate the 3D straight lines in the map, if the image frame is a key frame, further comprises:
updating the coordinates of the point features in the map according to the 3D point cloud.
8. The method of claim 1, wherein detecting, based on the camera pose information, whether the image frame is a key frame comprises:
selecting a plurality of key frames, determining key frame parameters of the image frame based on the plurality of key frames and the camera pose information, and determining, according to the key frame parameters, whether the image frame is a key frame.
9. The method of claim 8, wherein the key frame parameters comprise at least one of field-of-view change information, camera translation change information, and exposure time change information.
10. The method of claim 1, wherein the method further comprises:
if the image frame is not a key frame, updating the coordinates of the point and line features in the map.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711252235.XA CN107909612B (en) | 2017-12-01 | 2017-12-01 | Method and system for visual instant positioning and mapping based on 3D point cloud |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711252235.XA CN107909612B (en) | 2017-12-01 | 2017-12-01 | Method and system for visual instant positioning and mapping based on 3D point cloud |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107909612A true CN107909612A (en) | 2018-04-13 |
CN107909612B CN107909612B (en) | 2021-01-29 |
Family
ID=61849663
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711252235.XA Active CN107909612B (en) | 2017-12-01 | 2017-12-01 | Method and system for visual instant positioning and mapping based on 3D point cloud |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107909612B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101287142A (en) * | 2008-05-16 | 2008-10-15 | 清华大学 | Method for converting planar video to three-dimensional video based on bidirectional tracking and feature point correction
CN106446815A (en) * | 2016-09-14 | 2017-02-22 | 浙江大学 | Simultaneous localization and mapping method
CN106570507A (en) * | 2016-10-26 | 2017-04-19 | 北京航空航天大学 | Multi-angle consistent plane detection and analysis method for the three-dimensional structure of monocular video scenes
Non-Patent Citations (1)
Title |
---|
STEFAN LEUTENEGGER et al.: "Keyframe-Based Visual-Inertial SLAM Using Nonlinear Optimization", Robotics: Science and Systems (RSS) *
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108648274B (en) * | 2018-05-10 | 2020-05-22 | 华南理工大学 | Cognitive point cloud map creating system of visual SLAM |
CN108648274A (en) * | 2018-05-10 | 2018-10-12 | 华南理工大学 | Cognitive point cloud map creation system of visual SLAM
CN108682027A (en) * | 2018-05-11 | 2018-10-19 | 北京华捷艾米科技有限公司 | VSLAM implementation method and system based on point and line feature fusion
CN110515089A (en) * | 2018-05-21 | 2019-11-29 | 华创车电技术中心股份有限公司 | Driving assistance method based on optical radar
CN110515089B (en) * | 2018-05-21 | 2023-06-02 | 华创车电技术中心股份有限公司 | Driving auxiliary method based on optical radar |
CN110853085A (en) * | 2018-08-21 | 2020-02-28 | 深圳地平线机器人科技有限公司 | Semantic SLAM-based mapping method and device and electronic equipment |
CN109724586A (en) * | 2018-08-21 | 2019-05-07 | 南京理工大学 | Spacecraft relative pose measurement method integrating depth map and point cloud
CN109724586B (en) * | 2018-08-21 | 2022-08-02 | 南京理工大学 | Spacecraft relative pose measurement method integrating depth map and point cloud |
CN110853085B (en) * | 2018-08-21 | 2022-08-19 | 深圳地平线机器人科技有限公司 | Semantic SLAM-based mapping method and device and electronic equipment |
CN111062233A (en) * | 2018-10-17 | 2020-04-24 | 北京地平线机器人技术研发有限公司 | Marker representation acquisition method, marker representation acquisition device and electronic equipment |
US11644338B2 (en) * | 2018-10-19 | 2023-05-09 | Beijing Geekplus Technology Co., Ltd. | Ground texture image-based navigation method and device, and storage medium |
CN109556596A (en) * | 2018-10-19 | 2019-04-02 | 北京极智嘉科技有限公司 | Navigation method, device, equipment and storage medium based on ground texture image
CN109636897B (en) * | 2018-11-23 | 2022-08-23 | 桂林电子科技大学 | Octmap optimization method based on improved RGB-D SLAM |
CN109636897A (en) * | 2018-11-23 | 2019-04-16 | 桂林电子科技大学 | Octomap optimization method based on improved RGB-D SLAM
CN113196784A (en) * | 2018-12-19 | 2021-07-30 | 索尼集团公司 | Point cloud coding structure |
CN111383324B (en) * | 2018-12-29 | 2023-03-28 | 广州文远知行科技有限公司 | Point cloud map construction method and device, computer equipment and storage medium |
CN111383324A (en) * | 2018-12-29 | 2020-07-07 | 广州文远知行科技有限公司 | Point cloud map construction method and device, computer equipment and storage medium |
WO2020140431A1 (en) * | 2019-01-04 | 2020-07-09 | 南京人工智能高等研究院有限公司 | Camera pose determination method and apparatus, electronic device and storage medium |
CN109443320A (en) * | 2019-01-10 | 2019-03-08 | 轻客小觅智能科技(北京)有限公司 | Binocular visual odometer and measurement method based on direct method and line features
CN111971574B (en) * | 2019-01-30 | 2022-07-22 | 百度时代网络技术(北京)有限公司 | Deep learning based feature extraction for LIDAR localization of autonomous vehicles |
CN111971574A (en) * | 2019-01-30 | 2020-11-20 | 百度时代网络技术(北京)有限公司 | Deep learning based feature extraction for LIDAR localization of autonomous vehicles |
CN109814572A (en) * | 2019-02-20 | 2019-05-28 | 广州市山丘智能科技有限公司 | Mobile robot positioning and mapping method and device, mobile robot and storage medium
CN109814572B (en) * | 2019-02-20 | 2022-02-01 | 广州市山丘智能科技有限公司 | Mobile robot positioning and mapping method and device, mobile robot and storage medium |
CN110310326A (en) * | 2019-06-28 | 2019-10-08 | 北京百度网讯科技有限公司 | Pose data processing method, device, terminal and computer-readable storage medium
CN110533716A (en) * | 2019-08-20 | 2019-12-03 | 西安电子科技大学 | Semantic SLAM system and method based on 3D constraints
CN110533716B (en) * | 2019-08-20 | 2022-12-02 | 西安电子科技大学 | Semantic SLAM system and method based on 3D constraint |
CN111311684B (en) * | 2020-04-01 | 2021-02-05 | 亮风台(上海)信息科技有限公司 | Method and equipment for initializing SLAM |
CN111311684A (en) * | 2020-04-01 | 2020-06-19 | 亮风台(上海)信息科技有限公司 | Method and equipment for initializing SLAM |
WO2021219023A1 (en) * | 2020-04-30 | 2021-11-04 | 北京猎户星空科技有限公司 | Positioning method and apparatus, electronic device, and storage medium |
CN113589306A (en) * | 2020-04-30 | 2021-11-02 | 北京猎户星空科技有限公司 | Positioning method, positioning device, electronic equipment and storage medium |
CN111813882A (en) * | 2020-06-18 | 2020-10-23 | 浙江大华技术股份有限公司 | Robot map construction method, device and storage medium |
CN111813882B (en) * | 2020-06-18 | 2024-05-14 | 浙江华睿科技股份有限公司 | Robot map construction method, device and storage medium |
CN113284224A (en) * | 2021-04-20 | 2021-08-20 | 北京行动智能科技有限公司 | Automatic mapping method and device based on simplex code, and positioning method and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN107909612B (en) | 2021-01-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107909612A (en) | Method and system for visual simultaneous localization and mapping based on 3D point cloud | |
CN107784671A (en) | Method and system for visual simultaneous localization and mapping | |
US11054912B2 (en) | Three-dimensional graphical user interface for informational input in virtual reality environment | |
CN109084746A (en) | Monocular mode for the autonomous platform guidance system with aiding sensors | |
US9213899B2 (en) | Context-aware tracking of a video object using a sparse representation framework | |
JP2023500969A (en) | Target Tracking Method, Apparatus, Electronics, Computer Readable Storage Medium and Computer Program Product | |
CN109887003A (en) | Method and apparatus for initializing three-dimensional tracking | |
WO2016183464A1 (en) | Deepstereo: learning to predict new views from real world imagery | |
US20150286893A1 (en) | System And Method For Extracting Dominant Orientations From A Scene | |
US20220111869A1 (en) | Indoor scene understanding from single-perspective images | |
CN110648363A (en) | Camera posture determining method and device, storage medium and electronic equipment | |
CN111739005A (en) | Image detection method, image detection device, electronic equipment and storage medium | |
Zhang et al. | A new high resolution depth map estimation system using stereo vision and kinect depth sensing | |
CN108898669A (en) | Data processing method, device, medium and calculating equipment | |
CN112733641B (en) | Object size measuring method, device, equipment and storage medium | |
CN110349212A (en) | Optimization method and device, medium and electronic device for simultaneous localization and mapping | |
CN109584377A (en) | Method and apparatus for presenting augmented reality content | |
WO2023140990A1 (en) | Visual inertial odometry with machine learning depth | |
EP3827301A1 (en) | System and method for mapping | |
US11188787B1 (en) | End-to-end room layout estimation | |
Pintore et al. | Interactive mapping of indoor building structures through mobile devices | |
WO2022177666A1 (en) | Personalized local image features using bilevel optimization | |
Lee et al. | Real-time camera tracking using a particle filter and multiple feature trackers | |
Tao et al. | 3d semantic vslam of indoor environment based on mask scoring rcnn | |
Li et al. | Light field SLAM based on ray-space projection model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||