CN109784232A - Vision SLAM loop closure detection method and device fusing depth information - Google Patents
Vision SLAM loop closure detection method and device fusing depth information
- Publication number
- CN109784232A CN201811632974.6A
- Authority
- CN
- China
- Prior art keywords
- key frame
- map
- loop closure detection
- depth information
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention relates to the technical field of robot simultaneous localization and mapping, and in particular to a vision SLAM loop closure detection method and device fusing depth information. The method obtains the video stream captured during robot motion and extracts key frames from it offline; the depth map of each key frame is obtained by deep neural network training; the global feature descriptor of the depth map is extracted with the M2DP algorithm; the cosine distance between depth maps is then computed by matrix-vector multiplication to match key-frame similarity; finally, loop closure detection is performed on the key frames. The method provided by the invention is invariant to illumination and enables accurate localization and map construction for the robot.
Description
Technical field
The present invention relates to the technical field of robot simultaneous localization and mapping, and in particular to a vision SLAM loop closure detection method and device fusing depth information.
Background art
Since the emergence of bionics and intelligent robotics, researchers have hoped that one day robots could, like humans, observe and understand the surrounding world through their eyes, move autonomously and deftly in natural environments, and achieve harmonious coexistence between humans and machines.
An important and fundamental problem is how to analyze the three-dimensional structure of a scene from two-dimensional image information and determine the camera's position within it. Solving this problem depends on a basic technique: Simultaneous Localization and Mapping (SLAM), and in particular vision-based SLAM.
For vision-based SLAM to work like the human eye, which can judge its own position simply by looking around and recognizing objects, the current algorithms based on feature points and pixels clearly fall far short. Almost all loop closure detection methods describe the environment visually using key frames, then match the image currently acquired by the visual sensor against the key frames in the map to complete loop closure detection.
In loop closure detection, robotics research focuses on solving two problems. The first is scalability, that is, image matching that works at global scale: many tasks require a robot to describe its environment with thousands or even millions of key frames, which demands a high-speed, high-precision image matching algorithm that scales globally. The second is invariance to environmental conditions during image matching: images acquired under a variety of conditions must be matched accurately, which requires handling illumination changes as well as changes in dynamic environments, season, weather, and viewpoint.
Current methods are weak in illumination invariance. How to improve a robot's invariance to illumination, so as to achieve accurate localization and map construction, is therefore a problem worth solving.
Summary of the invention
The purpose of the present invention is to provide a vision SLAM loop closure detection method and device fusing depth information, aiming to improve a robot's invariance to illumination and thereby achieve accurate localization and map construction.
To achieve the above goals, the present invention provides the following technical schemes:
The present invention provides a vision SLAM loop closure detection method fusing depth information, comprising the following steps:
Step S100: obtain the video stream captured during robot motion;
Step S200: extract the key frames from the video stream offline;
Step S300: obtain the depth map of each key frame by deep neural network training;
Step S400: extract the global feature descriptor of the depth map using the M2DP algorithm;
Step S500: compute the cosine distance between depth maps using matrix-vector multiplication, to match key-frame similarity;
Step S600: perform loop closure detection on the key frames.
Further, the video stream in step S100 is acquired by a camera mounted on the robot.
Further, step S300 specifically includes:
Step S310: construct a database of key-frame relative depths;
Step S320: randomly select two image points in the same key frame, and annotate which of the two points is relatively nearer or farther;
Step S330: obtain the relative depth information of the original image by training a neural network;
Step S340: obtain an image carrying the object information, of the same size as the original image.
Further, step S400 specifically includes: after converting the depth map into a point cloud, project it onto several preset planes to generate 2D image descriptors, then reduce the dimensionality of the overlapping 2D image descriptors by principal component analysis to generate a 128-dimensional global feature descriptor.
Further, computing the cosine distance of depth maps using matrix-vector multiplication in step S500 specifically means: obtain the descriptor vectors of the current key frame and of the key frames in the map, and compute the cosine distance between depth maps by matrix-vector multiplication, so as to obtain the similarity between the current key frame and the key frames in the map.
Further, step S600 specifically includes: when the similarity between the current key frame and a key frame in the map reaches a set threshold, a loop closure is determined to have occurred, so that the map offset can be adjusted and the global map updated; when the similarity between the current key frame and the key frames in the map is below the set threshold, it is determined that no loop closure exists, and a new key frame is created to expand the map.
The present invention also provides a vision SLAM loop closure detection device fusing depth information. The device includes: a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the following modules of the device operate:
an acquisition module, for obtaining the video stream captured during robot motion;
an abstraction module, for extracting the key frames from the video stream offline;
a training module, for obtaining the depth map of each key frame by deep neural network training;
an extraction module, for extracting the global feature descriptor of the depth map using the M2DP algorithm;
a matching module, for computing the cosine distance between depth maps using matrix-vector multiplication, to match key-frame similarity;
a detection module, for performing loop closure detection on the key frames.
The beneficial effects of the present invention are as follows: the present invention discloses a vision SLAM loop closure detection method and device fusing depth information, which obtains the video stream captured during robot motion, extracts the key frames from it offline, obtains the depth map of each key frame by deep neural network training, extracts the global feature descriptor of the depth map using the M2DP algorithm, then computes the cosine distance between depth maps by matrix-vector multiplication to match key-frame similarity, and performs loop closure detection on the key frames. The method provided by the invention is invariant to illumination and enables accurate localization and map construction for the robot.
Description of the drawings
The above and other features of the disclosure will become more apparent through the detailed description of the embodiments shown in the accompanying drawings, in which identical reference labels indicate the same or similar elements. It should be apparent that the drawings described below are only some embodiments of the present disclosure; those of ordinary skill in the art may obtain other drawings from them without creative labor. In the drawings:
Fig. 1 is a flowchart of a vision SLAM loop closure detection method fusing depth information according to an embodiment of the present invention;
Fig. 2 is a structural schematic diagram of a vision SLAM loop closure detection device fusing depth information according to an embodiment of the present invention.
Specific embodiment
The technical solution of the present invention is described clearly and completely below in conjunction with the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative work fall within the protection scope of the present invention.
As shown in Fig. 1, in a vision SLAM loop closure detection method fusing depth information provided by an embodiment of the present invention, the Robot Operating System (ROS) runs on a Turtlebot 2 mobile robot, an NVIDIA TX2 serves as the host computer, and the Kinect V2 camera mounted on the Turtlebot 2 transmits video into the SLAM loop closure detection system via ROS;
the loop closure detection method includes the following steps:
Step S100: obtain the video stream captured during robot motion;
Step S200: extract the key frames from the video stream offline;
Step S300: obtain the depth map of each key frame by deep neural network training;
Step S400: extract the global feature descriptor of the depth map using the M2DP algorithm;
Step S500: compute the cosine distance between depth maps using matrix-vector multiplication, to match key-frame similarity;
Step S600: perform loop closure detection on the key frames, providing reliable support for autonomous robot navigation.
Further, the video stream in step S100 is acquired by a camera mounted on the robot.
Further, the method for extracting the key frames from the video stream offline in step S200 is a similarity detection method based on a perceptual hash algorithm, which specifically includes:
Step S210: extract an image from the video stream and reduce its size;
Step S220: simplify the colors of the image;
Step S230: compute the discrete cosine transform (DCT) of the image;
Step S240: reduce the DCT, keeping the low-frequency coefficients;
Step S250: compute the mean of the retained coefficients;
Step S260: compute the hash value of the image and compare hashes by the Hamming distance method.
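The perceptual-hash pipeline of steps S210-S260 can be sketched as follows. This is a minimal NumPy-only illustration, not the patented implementation: the 32x32 input size, the 8x8 low-frequency block, and the mean-threshold rule are conventional pHash choices that the patent does not spell out.

```python
import numpy as np

def dct2(block):
    """2-D DCT-II via the orthonormal DCT basis matrix (no SciPy needed)."""
    n = block.shape[0]
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

def phash(gray_image):
    """64-bit perceptual hash (steps S230-S250): DCT, keep the 8x8
    low-frequency corner, threshold each coefficient against the mean."""
    # S210/S220: the caller is assumed to have already shrunk the frame
    # to a 32x32 grayscale array (size reduction + color simplification).
    assert gray_image.shape == (32, 32)
    d = dct2(gray_image.astype(np.float64))
    low = d[:8, :8]                        # S240: low-frequency block
    return (low > low.mean()).flatten()    # S250: compare to the mean

def hamming(h1, h2):
    """S260: Hamming distance between two hashes; a small distance means
    the frames are near-duplicates, so no new keyframe is needed."""
    return int(np.count_nonzero(h1 != h2))
```

In use, a frame whose hash differs from the last keyframe's hash by more than some chosen bit threshold would be promoted to a new keyframe.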
Further, step S300 specifically includes:
Step S310: construct a database of key-frame relative depths;
Step S320: randomly select two image points in the same key frame, and annotate which of the two points is relatively nearer or farther;
Step S330: obtain the relative depth information of the original image by training a neural network;
Step S340: obtain an image carrying the object information, of the same size as the original image.
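Steps S320-S330 describe ordinal (nearer/farther) supervision for depth training. The patent does not give the training loss; the sketch below shows one common pairwise ranking loss used with such relative-depth labels, purely as an illustrative assumption (the function name and the log-loss form are not from the patent).

```python
import numpy as np

def relative_depth_loss(pred_depth, i, j, order):
    """Pairwise ranking loss for ordinal depth supervision (S320/S330).

    pred_depth : predicted depth map, shape (H, W)
    i, j       : (row, col) coordinates of the two annotated points
    order      : +1 if point i is labelled farther than j, -1 if nearer,
                 0 if the two points are labelled equally deep
    """
    zi, zj = pred_depth[i], pred_depth[j]
    if order == 0:
        return float((zi - zj) ** 2)   # equal depths: penalize any gap
    # Encourage z_i - z_j to have the labelled sign (soft log-loss).
    return float(np.log1p(np.exp(-order * (zi - zj))))
```

Summing this loss over many annotated point pairs gives a training signal that only requires relative, not metric, depth labels.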
Further, step S400 specifically includes: after converting the depth map into a point cloud, project it onto several preset planes to generate 2D image descriptors, then reduce the dimensionality of the overlapping 2D descriptors by principal component analysis to form a 128-dimensional global feature descriptor. The global feature descriptor expresses the image characteristics concisely and comprehensively; although the 2D image descriptors are redundant, they remain useful for point cloud reconstruction.
Specifically, the following steps are performed:
Step S410: define the three-dimensional space and set n two-dimensional planes at different orientations;
Step S420: using coordinate transformations in OpenCV and PCL (Point Cloud Library), convert the depth image into m three-dimensional point cloud data;
Step S430: normalize the m points using principal component analysis (PCA);
Step S440: project the m points onto each of the n two-dimensional planes, so that each of the n planes holds m projected points;
Step S450: compute the center of the projected point cloud on each of the n planes;
Step S460: taking each center as the origin, compute the distance from each of the m projected points to its origin, obtaining n vector groups;
Step S470: assemble the n vectors into a matrix A, perform singular value decomposition on A, and concatenate the first left singular vector and the first right singular vector of the result to obtain a 128-dimensional image descriptor.
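Steps S410-S470 can be illustrated with a simplified M2DP-style descriptor. This sketch follows the patent's outline (PCA normalization, projection onto n planes, distances to the projected centroid, SVD of the signature matrix A) but compresses each plane's projection into a plain radial histogram; the real M2DP algorithm uses an azimuth-and-ring grid per plane, so treat the bin layout and plane parameterization here as assumptions.

```python
import numpy as np

def m2dp_descriptor(points, n_planes=64, n_bins=64):
    """Simplified M2DP-style global descriptor (steps S410-S470).

    points : (m, 3) point cloud converted from the depth map (S420).
    Returns a (n_planes + n_bins)-dimensional vector; with 64 planes and
    64 bins this matches the patent's 128-dimensional descriptor.
    """
    pts = points - points.mean(axis=0)           # center the cloud
    # S430: PCA-align the cloud so the descriptor is pose-invariant.
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    pts = pts @ vt.T

    a_rows = []
    for az in np.linspace(0.0, np.pi, n_planes, endpoint=False):
        # S410/S440: plane with normal (cos az, sin az, 0); project onto it.
        normal = np.array([np.cos(az), np.sin(az), 0.0])
        proj = pts - np.outer(pts @ normal, normal)
        # S450/S460: distances from the projected centroid (the "origin").
        r = np.linalg.norm(proj - proj.mean(axis=0), axis=1)
        hist, _ = np.histogram(r, bins=n_bins, range=(0.0, r.max() + 1e-9))
        a_rows.append(hist.astype(np.float64))
    a = np.stack(a_rows)                         # S470: signature matrix A

    u, _, vt = np.linalg.svd(a, full_matrices=False)
    # S470: concatenate the first left and first right singular vectors.
    return np.concatenate([u[:, 0], vt[0, :]])
```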
Further, since the description vectors all have dimension below 1K, computing the cosine distance of depth maps using matrix-vector multiplication in step S500 specifically means: obtain the descriptor vectors of the current key frame and of the key frames in the map, and compute the cosine (angular) distance between depth maps by matrix-vector multiplication, so as to obtain the similarity between the current key frame and the key frames in the map.
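Step S500 reduces similarity search to a single matrix-vector product: stack the unit-normalized map descriptors into a matrix, then multiply by the normalized query descriptor to obtain every cosine similarity at once. A minimal sketch, with variable names of my own choosing:

```python
import numpy as np

def cosine_similarities(query, map_descriptors):
    """Cosine similarity of the current key frame's descriptor against
    every map key frame via one matrix-vector product (step S500).

    query           : (d,) descriptor of the current key frame
    map_descriptors : (k, d) stacked descriptors of the map key frames
    """
    # Normalize rows once; afterwards D_hat @ q_hat yields all cosines.
    d_hat = map_descriptors / np.linalg.norm(map_descriptors, axis=1,
                                             keepdims=True)
    q_hat = query / np.linalg.norm(query)
    return d_hat @ q_hat
```

Because the descriptors are only 128-dimensional, this product stays cheap even for maps with many thousands of key frames.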
Further, step S600 specifically includes: when the similarity between the current key frame and a key frame in the map reaches a set threshold, a loop closure is determined to have occurred, so that the map offset can be adjusted and the global map updated; when the similarity between the current key frame and the key frames in the map is below the set threshold, it is determined that no loop closure exists, and a new key frame is created to expand the map.
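The decision rule of step S600 can be sketched as follows; the 0.9 threshold is purely illustrative, since the patent only speaks of a "set threshold".

```python
def loop_closure_decision(similarities, threshold=0.9):
    """Step S600: if the best cosine similarity reaches the set threshold,
    report a loop closure with the matching key frame index; otherwise
    signal that a new key frame should be created to expand the map."""
    best = max(range(len(similarities)), key=lambda i: similarities[i])
    if similarities[best] >= threshold:
        return ("loop", best)        # adjust map offset, update global map
    return ("new_keyframe", None)    # no loop: create key frame, grow map
```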
An embodiment of the disclosure provides a vision SLAM loop closure detection device fusing depth information, shown in Fig. 2. The device of this embodiment includes: a processor, a memory, and a computer program stored in the memory and runnable on the processor; when executing the computer program, the processor implements the steps of the above-described embodiment of the vision SLAM loop closure detection method fusing depth information.
The device includes: a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the following modules of the device operate:
an acquisition module, for obtaining the video stream captured during robot motion;
an abstraction module, for extracting the key frames from the video stream offline;
a training module, for obtaining the depth map of each key frame by deep neural network training;
an extraction module, for extracting the global feature descriptor of the depth map using the M2DP algorithm;
a matching module, for computing the cosine distance between depth maps using matrix-vector multiplication, to match key-frame similarity;
a detection module, for performing loop closure detection on the key frames.
The vision SLAM loop closure detection device fusing depth information includes, but is not limited to, a processor and a memory. Those skilled in the art will understand that this example is only one example of a vision SLAM loop closure detection device fusing depth information and does not constitute a limitation on such a device; the device may include more components than in the example, combine certain components, or use different components. For example, the device may also include input-output equipment, etc.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. The general-purpose processor may be a microprocessor, or any conventional processor. The processor is the control center of the running device of the vision SLAM loop closure detection device fusing depth information, and connects the various parts of the entire running device through various interfaces and lines.
The memory may be used to store the computer program and/or modules. The processor realizes the various functions of the vision SLAM loop closure detection device fusing depth information by running or executing the computer program and/or modules stored in the memory and by calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function, while the data storage area may store created data. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a smart media card (SMC), a secure digital (SD) card, a flash card, at least one disk storage device, a flash memory device, or other solid-state storage components.
Although the description of the disclosure is quite detailed, and several embodiments in particular have been described, it is not intended to be limited to any of these details or embodiments or to any specific embodiment. Rather, it should be construed, by reference to the appended claims and in view of the prior art, as giving those claims the broadest possible interpretation, so as to effectively cover the intended scope of the disclosure. Furthermore, the disclosure is described above in terms of embodiments foreseeable by the inventor for the purpose of providing a useful description; insubstantial changes to the disclosure that are presently unforeseen may nonetheless still represent equivalents of the disclosure.
Claims (7)
1. A vision SLAM loop closure detection method fusing depth information, characterized by comprising the following steps:
Step S100: obtaining the video stream captured during robot motion;
Step S200: extracting the key frames from the video stream offline;
Step S300: obtaining the depth map of each key frame by deep neural network training;
Step S400: extracting the global feature descriptor of the depth map using the M2DP algorithm;
Step S500: computing the cosine distance between depth maps using matrix-vector multiplication, to match key-frame similarity;
Step S600: performing loop closure detection on the key frames.
2. The vision SLAM loop closure detection method fusing depth information according to claim 1, characterized in that the video stream in step S100 is acquired by a camera mounted on the robot.
3. The vision SLAM loop closure detection method fusing depth information according to claim 1, characterized in that step S300 specifically includes:
Step S310: constructing a database of key-frame relative depths;
Step S320: randomly selecting two image points in the same key frame, and annotating which of the two image points is relatively nearer or farther;
Step S330: obtaining the relative depth information of the original image by training a neural network;
Step S340: obtaining an image carrying the object information, of the same size as the original image.
4. The vision SLAM loop closure detection method fusing depth information according to claim 1, characterized in that step S400 specifically includes: after converting the depth map into a point cloud, projecting it onto several preset planes to generate 2D image descriptors, then reducing the dimensionality of the overlapping 2D image descriptors by principal component analysis to generate a 128-dimensional global feature descriptor.
5. The vision SLAM loop closure detection method fusing depth information according to claim 1, characterized in that computing the cosine distance of depth maps using matrix-vector multiplication in step S500 specifically means:
obtaining the descriptor vectors of the current key frame and of the key frames in the map, and computing the cosine distance between depth maps by matrix-vector multiplication, so as to obtain the similarity between the current key frame and the key frames in the map.
6. The vision SLAM loop closure detection method fusing depth information according to claim 5, characterized in that step S600 specifically includes:
when the similarity between the current key frame and a key frame in the map reaches a set threshold, determining that a loop closure has occurred, so that the map offset can be adjusted and the global map updated; when the similarity between the current key frame and the key frames in the map is below the set threshold, determining that no loop closure exists, and creating a new key frame to expand the map.
7. A vision SLAM loop closure detection device fusing depth information, characterized in that the device includes: a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the following modules of the device operate:
an acquisition module, for obtaining the video stream captured during robot motion;
an abstraction module, for extracting the key frames from the video stream offline;
a training module, for obtaining the depth map of each key frame by deep neural network training;
an extraction module, for extracting the global feature descriptor of the depth map using the M2DP algorithm;
a matching module, for computing the cosine distance between depth maps using matrix-vector multiplication, to match key-frame similarity;
a detection module, for performing loop closure detection on the key frames.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811632974.6A CN109784232A (en) | 2018-12-29 | 2018-12-29 | A kind of vision SLAM winding detection method and device merging depth information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109784232A true CN109784232A (en) | 2019-05-21 |
Family
ID=66498938
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811632974.6A Pending CN109784232A (en) | 2018-12-29 | 2018-12-29 | A kind of vision SLAM winding detection method and device merging depth information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109784232A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780631A (en) * | 2017-01-11 | 2017-05-31 | 山东大学 | A kind of robot closed loop detection method based on deep learning |
CN106803267A (en) * | 2017-01-10 | 2017-06-06 | 西安电子科技大学 | Indoor scene three-dimensional rebuilding method based on Kinect |
CN107330357A (en) * | 2017-05-18 | 2017-11-07 | 东北大学 | Vision SLAM closed loop detection methods based on deep neural network |
CN108537844A (en) * | 2018-03-16 | 2018-09-14 | 上海交通大学 | A kind of vision SLAM winding detection methods of fusion geological information |
CN108986168A (en) * | 2018-06-13 | 2018-12-11 | 深圳市感动智能科技有限公司 | A kind of robot winding detection method and device combining bag of words tree-model based on depth measure study |
Non-Patent Citations (1)
Title |
---|
Hu Chunmei et al.: "Integration of Terrestrial LiDAR and Close-Range Photogrammetry" (《地面激光雷达与近景摄影测量技术集成》) * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110455299A (en) * | 2019-07-26 | 2019-11-15 | 中国第一汽车股份有限公司 | A kind of route generation method, device, equipment and storage medium |
CN112648997A (en) * | 2019-10-10 | 2021-04-13 | 成都鼎桥通信技术有限公司 | Method and system for positioning based on multitask network model |
CN110910389A (en) * | 2019-10-30 | 2020-03-24 | 中山大学 | Laser SLAM loop detection system and method based on graph descriptor |
CN110910389B (en) * | 2019-10-30 | 2021-04-09 | 中山大学 | Laser SLAM loop detection system and method based on graph descriptor |
CN111340870A (en) * | 2020-01-15 | 2020-06-26 | 西安交通大学 | Topological map generation method based on vision |
CN111340870B (en) * | 2020-01-15 | 2022-04-01 | 西安交通大学 | Topological map generation method based on vision |
CN111612847A (en) * | 2020-04-30 | 2020-09-01 | 重庆见芒信息技术咨询服务有限公司 | Point cloud data matching method and system for robot grabbing operation |
CN111612847B (en) * | 2020-04-30 | 2023-10-20 | 湖北煌朝智能自动化装备有限公司 | Point cloud data matching method and system for robot grabbing operation |
CN111767854A (en) * | 2020-06-29 | 2020-10-13 | 浙江大学 | SLAM loop detection method combined with scene text semantic information |
CN111767854B (en) * | 2020-06-29 | 2022-07-01 | 浙江大学 | SLAM loop detection method combined with scene text semantic information |
CN112990040A (en) * | 2021-03-25 | 2021-06-18 | 北京理工大学 | Robust loopback detection method combining global and local features |
CN112990040B (en) * | 2021-03-25 | 2022-09-06 | 北京理工大学 | Robust loopback detection method combining global and local features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190521 |