CN113688816B - Calculation method of visual odometer for improving ORB feature point extraction - Google Patents

Calculation method of visual odometer for improving ORB feature point extraction

Info

Publication number
CN113688816B
Authority
CN
China
Prior art keywords
points
point
pixel
feature point
characteristic
Prior art date
Legal status
Active
Application number
CN202110825003.9A
Other languages
Chinese (zh)
Other versions
CN113688816A (en)
Inventor
陈丽
邓宇翔
高其远
张凯波
吴泽州
Current Assignee
Shanghai University of Engineering Science
Original Assignee
Shanghai University of Engineering Science
Priority date
Filing date
Publication date
Application filed by Shanghai University of Engineering Science
Priority to CN202110825003.9A
Publication of CN113688816A
Application granted
Publication of CN113688816B
Legal status: Active

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/60: Analysis of geometric attributes
    • G06T7/66: Analysis of geometric attributes of image moments or centre of gravity
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of automatic control and discloses a calculation method of a visual odometer with improved ORB feature point extraction, comprising the following steps: step one, acquiring a color image and a depth image with a depth camera, successively carrying out graying and partitioning on the color image, removing regions with small gray-level change, and forming a new region N from the remaining regions; step two, extracting a plurality of feature points from the new region N by using a circular search method; step three, uniformly distributing all the feature points over the new region N by using a quadtree method, and acquiring the three-dimensional coordinates of the corresponding feature points by combining the depth image; step four, repeating steps one to three, obtaining a plurality of feature points and corresponding three-dimensional coordinates on every two adjacent frames of images, and performing feature point matching to obtain a plurality of feature point pairs; and step five, acquiring the extrinsic matrix of the depth camera corresponding to each pair of adjacent frames by using the plurality of feature point pairs of those frames, thereby completing the calculation of the visual odometer.

Description

Calculation method of visual odometer for improving ORB feature point extraction
Technical Field
The invention belongs to the technical field of automatic control, and particularly relates to a calculation method of a visual odometer for improving ORB feature point extraction.
Background
Simultaneous localization and mapping (SLAM) in an unknown environment is one of the hot topics in mobile robot research. With the development of computer technology, image data processing capability has greatly improved, and visual SLAM based on visual sensors is becoming the mainstream. The visual odometer (VO) serves as the visual front end: it estimates the camera pose from the image information acquired by the visual sensor and provides an initial value for back-end optimization. In visual SLAM, the mainstream VO approach is the feature point method, which comes in sparse, semi-dense and dense variants; methods based on sparse feature points mainly rely on the matching of feature points such as SIFT, SURF and ORB to establish associations between frames and compute the camera pose.
The ORB feature point method is currently the mainstream, and many researchers have explored it. Mur-Artal et al. proposed an algorithm based on ORB feature region division to extract features from image blocks, but its extraction process is a brute-force search, so the speed is too slow and real-time performance is poor; Yuan Xiaoping proposed an ORB algorithm based on improved FAST detection, which divides the image into blocks, searches for sub-region blocks of interest, and then performs FAST detection, but this method has shortcomings in searching for the regions of interest, leading to loss of image information, which is unfavorable for subsequent FAST detection. In summary, there is still room for improvement in the visual odometer as the front end of visual SLAM.
Disclosure of Invention
The invention provides a calculation method of a visual odometer with improved ORB feature point extraction, which solves problems of the traditional calculation method such as the feature point extraction process being a brute-force search, too slow a speed, and poor real-time performance.
The invention can be realized by the following technical scheme:
a method of computing a visual odometer that improves ORB feature point extraction, comprising the steps of:
step one, acquiring a color image and a depth image by using a depth camera, sequentially carrying out graying and partitioning on the color image, removing regions with small gray-level change, and forming a new region N from the remaining regions;
step two, extracting a plurality of characteristic points from the new region N by using a circular search method;
step three, uniformly distributing all the feature points over the new region N by using a quadtree method, and acquiring three-dimensional coordinates of the corresponding feature points by combining the depth image;
step four, repeating the steps one to three, obtaining a plurality of characteristic points and corresponding three-dimensional coordinates on every two adjacent frames of images, and performing characteristic point matching so as to obtain a plurality of characteristic point pairs;
and fifthly, acquiring an external parameter matrix of the depth camera corresponding to each adjacent two-frame image by utilizing a plurality of characteristic point pairs of each adjacent two-frame image, so as to finish the calculation of the visual odometer.
Further, the method for extracting the feature points comprises the following steps:
step I, drawing a circle with the pixel point P in the new region N as the center and 3 as the radius, and marking 16 pixel points on the circumference;
step II, comparing the gray values of the four pixel points numbered 1, 5, 9 and 13 with the gray value of the pixel point P; if the gray values of 3 of these points are simultaneously greater than I_p + t or simultaneously less than I_p − t, preliminarily identifying the pixel point P as a feature point and entering step IV; otherwise, going to step III, where I_p represents the gray value of the pixel point P and t represents a threshold;
step III, comparing the gray values of the four pixel points numbered 3, 7, 11 and 15 with the gray value of the pixel point P; if the gray values of 3 of these points are simultaneously greater than I_p + t or simultaneously less than I_p − t, preliminarily identifying the pixel point P as a feature point and entering step IV; otherwise, the pixel point P is not considered a feature point;
step IV, comparing the gray values of the 16 pixel points in step I with the gray value of the pixel point P; if the gray values of 9 points are simultaneously greater than I_p + t or simultaneously less than I_p − t, determining that the pixel point P is a feature point; otherwise, the pixel point P is not considered a feature point;
and V, repeating the steps I to IV to finish the judgment of all the pixel points in the new area N.
Further, the threshold t is set to 20% of the gray value of the pixel point P.
Further, taking a square with a side length of W pixels as a sub-region, the grayed image is divided into P × Q blocks, the geometric center point of each sub-region is found, and the average of the sum of the squared gray-value differences between that center point and eight pixel points in total, namely the four pixel points at a distance R and the four pixel points at a distance 2R from it in the vertical and horizontal directions, is calculated with the following equation,
N_(i,j) = (1/8) · Σ_{k=1}^{8} ( I_(i,j,k) − I_(i,j) )²
wherein N_(i,j) is the average of the sum of the squared gray-value differences between the geometric center point of sub-region (i,j) and its 8 points at distances R and 2R in the vertical and horizontal directions, i ∈ {1,2,…,P}, j ∈ {1,2,…,Q}, k ∈ {1,2,…,8}; I_(i,j) is the gray value of the geometric center point of each sub-region; I_(i,j,k) are the gray values of the 8 pixel points at distances R and 2R from I_(i,j), and R equals one sixth of W.
Finally, the averages are sorted from large to small, the sub-regions in the last third are eliminated, and the retained sub-regions form the new region N.
Further, BRIEF descriptors corresponding to a plurality of feature points of two adjacent frames of images are matched using the Hamming distance to obtain a plurality of feature point pairs, and the extrinsic matrix of the depth camera is calculated with the following equation,
P_1 = R·P_2 + t
wherein P_1 represents the three-dimensional coordinates of one feature point of a feature point pair, P_2 represents the three-dimensional coordinates of the other feature point of the pair, and R and t represent the extrinsic matrix of the depth camera.
The beneficial technical effects of the invention are as follows:
the ORB feature point extraction can be improved, firstly, an image is divided into a plurality of subareas according to a specified side length, subareas with small gray level change are removed by taking the gray level change condition of the subareas as a reference, subareas with large gray level change are reserved, an effective feature extraction area is obtained, the feature point extraction range is reduced, and the algorithm speed is accelerated; secondly, the FAST corner extraction condition is relaxed, the sufficient number of corners is reserved, the pose estimation precision is improved, the ORB feature points are homogenized through a quadtree method, the pose estimation precision is further improved, finally, feature point matching is carried out, an external reference matrix of the depth camera is obtained, and more accurate camera pose and track are obtained through an EPnP algorithm.
Drawings
FIG. 1 is a general flow diagram of the present invention;
FIG. 2 is a comparative schematic of the results of a visual odometer calculation using the method of the present invention and the ORB-SLAM algorithm.
Detailed Description
The following detailed description of the invention refers to the accompanying drawings and preferred embodiments.
In a visual odometer, the accuracy and real-time performance of feature point extraction strongly affect the pose estimation of a robot, but the traditional ORB feature extraction algorithm is slow and contains redundancy. The invention provides a calculation method with improved ORB feature point extraction. First, the image is divided into a plurality of sub-regions of a specified side length; taking the gray-level change of each sub-region as the criterion, sub-regions with small gray-level change are removed and sub-regions with large gray-level change are retained, giving an effective feature-extraction area, reducing the feature point extraction range and speeding up the algorithm. Second, the FAST corner extraction condition is relaxed so that a sufficient number of corners is retained, improving the pose estimation accuracy, and the ORB feature points are homogenized by the quadtree method, further improving the pose estimation accuracy. Finally, feature matching is performed and the camera pose and trajectory are obtained with the EPnP algorithm. The method comprises the following steps:
(1) A color image and a depth image are acquired from a depth camera, and the color image is converted into a grayscale image.
(2) Obtaining an effective feature extraction area by using a partition reservation strategy:
the first step: and converting the acquired color image into a gray image, and carrying out weighted average processing on the image by using Gaussian filtering to remove noise.
And a second step of: dividing the image into P × Q blocks, each block being a square sub-region with a side length of W pixels, and finding the geometric center point of each of the P × Q sub-regions.
And a third step of: calculating, for each sub-region, the average of the sum of the squared gray-value differences between the geometric center point and eight pixel points in total, namely the four pixel points at a distance R and the four pixel points at a distance 2R from it in the vertical and horizontal directions. The calculation formula is as follows:
N_(i,j) = (1/8) · Σ_{k=1}^{8} ( I_(i,j,k) − I_(i,j) )²
wherein N_(i,j) is the average of the sum of the squared gray-value differences between the geometric center point of sub-region (i,j) and its 8 points at distances R and 2R in the vertical and horizontal directions, i ∈ {1,2,…,P}, j ∈ {1,2,…,Q}, k ∈ {1,2,…,8}; I_(i,j) is the gray value of the geometric center point of each sub-region; I_(i,j,k) are the gray values of the 8 pixel points at distances R and 2R from I_(i,j); R equals one sixth of W, and the 8 points chosen in this way represent the gray-level variation of the sub-region well.
Fourth step: sorting the averages calculated in the previous step from large to small, removing the sub-regions in the last third of the sorted sequence, that is, the sub-regions with smaller gray-value variation, and forming a new region N from the retained sub-regions.
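The second to fourth steps can be sketched as follows; this is a non-authoritative illustration using NumPy, in which the sub-region side length W (and therefore R = W/6) is a user-chosen value and image borders are clamped for brevity:

```python
import numpy as np

def retained_subregions(gray, W=30):
    """Return the (i, j) indices of the sub-regions that form the new region N.

    For each W x W sub-region, the average of the squared gray-value differences
    between its geometric center and the 8 pixels at distances R and 2R (R = W/6)
    above, below, left and right of it is computed; the third of the sub-regions
    with the smallest averages is discarded.
    """
    H, Wimg = gray.shape
    P, Q = H // W, Wimg // W
    R = max(1, W // 6)
    g = gray.astype(np.float64)
    scores = np.zeros((P, Q))
    offsets = [(-R, 0), (R, 0), (0, -R), (0, R),
               (-2 * R, 0), (2 * R, 0), (0, -2 * R), (0, 2 * R)]
    for i in range(P):
        for j in range(Q):
            cy, cx = i * W + W // 2, j * W + W // 2          # geometric center point
            center = g[cy, cx]
            diffs = [(g[np.clip(cy + dy, 0, H - 1), np.clip(cx + dx, 0, Wimg - 1)] - center) ** 2
                     for dy, dx in offsets]
            scores[i, j] = np.mean(diffs)                    # N_(i,j)
    # Sort from large to small and keep the first two thirds of the sub-regions.
    order = np.argsort(scores, axis=None)[::-1]
    keep = order[: (2 * scores.size) // 3]
    return [tuple(np.unravel_index(k, scores.shape)) for k in keep]
```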
Fifth step: and extracting feature points from the new region N obtained in the fourth step by using the FAST corner detection method. The specific steps of the algorithm are as follows:
(3) Relaxing the coarse screening condition of the FAST algorithm to extract FAST corner points.
The first step: take a pixel point P whose gray value is I_p, and set a suitable threshold t (generally 20% of I_p).
And a second step of: taking the pixel point P as the center of a circle, 16 pixel points on a circle with the radius of 3 are selected and respectively numbered 1 to 16.
And a third step of: comparing the gray values of the four pixel points numbered 1, 5, 9 and 13 with the gray value of the pixel point P; if the gray values of 3 of these points are simultaneously greater than I_p + t or simultaneously less than I_p − t, preliminarily recognizing the pixel point P as a feature point and entering the fifth step; otherwise, entering the fourth step.
Fourth step: comparing the gray values of the four pixel points numbered 3, 7, 11 and 15 with the gray value of the pixel point P; if the gray values of 3 of these points are simultaneously greater than I_p + t or simultaneously less than I_p − t, preliminarily recognizing the pixel point P as a feature point and entering the fifth step; otherwise, the pixel point P is not considered a feature point.
Fifth step: comparing the gray values of the 16 pixel points in the second step with the gray value of the pixel point P; if the gray values of 9 points are simultaneously greater than I_p + t or simultaneously less than I_p − t, the pixel point P is considered a feature point; otherwise the pixel point P is not a feature point.
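A minimal sketch of this relaxed screening for a single candidate pixel is given below; it assumes the standard 16-point FAST ring of radius 3 (listed explicitly) and follows the counts stated above, without the contiguity requirement of the original FAST detector:

```python
import numpy as np

# The 16 (dy, dx) offsets of the radius-3 FAST circle, numbered 1..16 clockwise from the top.
CIRCLE = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
          (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def is_relaxed_fast_corner(gray, y, x):
    Ip = float(gray[y, x])
    t = 0.2 * Ip                                    # threshold: 20% of the gray value of P
    ring = np.array([float(gray[y + dy, x + dx]) for dy, dx in CIRCLE])

    def passes(numbers, n):
        # True if at least n of the listed ring points (1-based numbers) are all
        # brighter than I_p + t, or at least n are all darker than I_p - t.
        vals = ring[[k - 1 for k in numbers]]
        return np.sum(vals > Ip + t) >= n or np.sum(vals < Ip - t) >= n

    # Coarse screening on points 1, 5, 9, 13; if it fails, retry on points 3, 7, 11, 15.
    if not passes([1, 5, 9, 13], 3) and not passes([3, 7, 11, 15], 3):
        return False
    # Fine screening: at least 9 of all 16 ring points must exceed the threshold.
    return passes(list(range(1, 17)), 9)
```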
(4) These feature points are uniformly distributed on the image by the quadtree method. The main idea of the quadtree is to divide the image area into 4 quadrants and uniformly distribute the feature points. In the distribution process of the feature points, firstly setting an initialization node as a whole picture, obtaining original quadtree nodes, then detecting the number of the feature points in each node, if the number is larger than 1, continuing splitting the child nodes, if the number is equal to 1, not splitting the child nodes and storing the child nodes; when the number of nodes reaches the desired number of feature points, the quadtree splitting is completed and the splitting is not continued.
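A compact sketch of this quadtree homogenization is shown below; it assumes keypoints are given as (x, y, response) tuples and that the point with the strongest response is retained in each final node, which is a common choice rather than a requirement stated in the patent:

```python
def quadtree_distribute(keypoints, x0, y0, x1, y1, desired):
    """Split the image area [x0, x1) x [y0, y1) into quadrants until the number of
    nodes reaches `desired`, then keep one representative keypoint per node."""
    nodes = [(x0, y0, x1, y1, list(keypoints))]        # initial node: the whole picture
    while len(nodes) < desired:
        # Split the node that currently holds the most keypoints.
        nodes.sort(key=lambda n: len(n[4]), reverse=True)
        bx0, by0, bx1, by1, pts = nodes.pop(0)
        if len(pts) <= 1:                              # every node holds at most one point
            nodes.append((bx0, by0, bx1, by1, pts))
            break
        mx, my = (bx0 + bx1) / 2.0, (by0 + by1) / 2.0
        for qx0, qy0, qx1, qy1 in [(bx0, by0, mx, my), (mx, by0, bx1, my),
                                   (bx0, my, mx, by1), (mx, my, bx1, by1)]:
            inside = [p for p in pts if qx0 <= p[0] < qx1 and qy0 <= p[1] < qy1]
            if inside:                                 # empty quadrants produce no child node
                nodes.append((qx0, qy0, qx1, qy1, inside))
    # One keypoint (the strongest response) per node gives an even spatial distribution.
    return [max(pts, key=lambda p: p[2]) for _, _, _, _, pts in nodes]
```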
(5) And obtaining the three-dimensional coordinates of each feature point according to the information of the depth image.
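A sketch of this back-projection under a pinhole camera model; the intrinsics fx, fy, cx, cy and the depth scale factor are camera-specific values that are not given in the patent and are passed in here as assumptions:

```python
def pixel_to_3d(u, v, depth_image, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project pixel (u, v) with its depth value into camera coordinates.

    depth_scale is an assumed factor (e.g. 1000 when the depth image stores millimetres).
    Returns None when the depth measurement at this pixel is missing.
    """
    d = float(depth_image[int(v), int(u)])
    if d == 0:
        return None
    z = d / depth_scale
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)
```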
(6) Calculating BRIEF descriptors centered on the feature points, and matching the BRIEF descriptors of the feature points of every two adjacent frames acquired by the depth camera with the Hamming distance: the descriptor distance between each feature point in one image and all feature points in the other image is measured, and if the distance between two feature points is smaller than a certain threshold, the two feature points are considered matched.
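A sketch of this matching step using OpenCV's ORB implementation, whose descriptors are rotated BRIEF; the keypoint lists are assumed to come from the corner extraction above, and the distance threshold of 30 is an assumed value, not one taken from the patent:

```python
import cv2

def match_features(gray1, kps1, gray2, kps2, max_distance=30):
    """kps1 / kps2: lists of cv2.KeyPoint built from the extracted corners."""
    orb = cv2.ORB_create()
    # Compute binary (BRIEF-style) descriptors centred on the given keypoints.
    kps1, des1 = orb.compute(gray1, kps1)
    kps2, des2 = orb.compute(gray2, kps2)
    # Brute-force matching with the Hamming distance; cross-checking keeps mutual best matches.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = bf.match(des1, des2)
    # Keep only pairs whose descriptor distance is below the threshold.
    good = [m for m in matches if m.distance < max_distance]
    return good, kps1, kps2
```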
(7) After feature point matching, mismatches are inevitable due to various environmental influences, so the RANSAC (Random Sample Consensus) algorithm is adopted to reject mismatched image feature points.
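One possible realization of this rejection step is sketched below; it uses RANSAC fitting of a fundamental matrix as the geometric model, which is an assumed choice since the patent does not specify the model used:

```python
import cv2
import numpy as np

def ransac_filter(matches, kps1, kps2):
    """Reject mismatched feature point pairs with RANSAC over an epipolar model."""
    pts1 = np.float32([kps1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kps2[m.trainIdx].pt for m in matches])
    # Fit a fundamental matrix with RANSAC; 'mask' flags the inlier matches.
    _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    if mask is None:                       # too few matches to fit a model
        return matches
    return [m for m, ok in zip(matches, mask.ravel()) if ok]
```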
(8) And calculating the pose of the camera according to the matching relation. The pose of the camera is represented by its rotation matrix R and translation vector t:
P_1 = R·P_2 + t
wherein P_1 represents the 3D coordinates of one set of feature points in the world coordinate system, P_2 represents the 3D coordinates of the other set of feature points in the world coordinate system, and R, t represent the extrinsic matrix of the camera. Unlike the intrinsic parameters, which are constant, the extrinsic parameters change continuously as the camera moves; they are the quantities to be estimated and represent the trajectory of the robot, and are generally solved with an iterative closest point method or, as in this embodiment, the EPnP algorithm, to obtain a more accurate camera pose and trajectory.
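A sketch of this pose computation with OpenCV's EPnP solver, assuming the 3-D coordinates of the matched feature points from one frame (recovered from the depth image) and their 2-D pixel locations in the adjacent frame are available, together with the camera matrix K; all variable names here are illustrative:

```python
import cv2
import numpy as np

def estimate_pose_epnp(points_3d, pixels_2d, K, dist_coeffs=None):
    """points_3d: Nx3 array of 3-D feature points; pixels_2d: Nx2 array of their
    matched pixel locations in the adjacent frame; K: 3x3 camera matrix."""
    obj = np.ascontiguousarray(points_3d, dtype=np.float64).reshape(-1, 1, 3)
    img = np.ascontiguousarray(pixels_2d, dtype=np.float64).reshape(-1, 1, 2)
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist_coeffs, flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)            # rotation vector -> 3x3 rotation matrix R
    return ok, R, tvec                    # [R | t]: the extrinsic matrix of this frame pair
```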
To verify the positioning accuracy of the visual odometer calculation method, the calculation method of the invention and the ORB-SLAM algorithm are tested on part of the TUM dataset image sequences. A depth camera is used to shoot office desks placed in a space such as an office: the handheld depth camera is translated along the three principal axes, rotated around the three principal axes, and moved around the office desks while shooting, and the depth camera is also mounted on a mobile robot for shooting. Detailed information on the acquired datasets is shown in the table below.
Table: dataset information (image not reproduced)
To quantify the positioning accuracy, the absolute trajectory error (ATE) is chosen herein as the evaluation metric. ATE is the direct difference between the estimated pose and the real pose, and it intuitively reflects the accuracy of the algorithm and the global consistency of the trajectory.
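For reference, the root-mean-square error of the ATE reported below can be computed from corresponding estimated and ground-truth positions as sketched here; trajectory alignment (for example with a similarity transform) is assumed to have been done beforehand:

```python
import numpy as np

def ate_rmse(estimated_positions, groundtruth_positions):
    """Both arguments are Nx3 arrays of corresponding, already-aligned positions."""
    errors = np.linalg.norm(estimated_positions - groundtruth_positions, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))
```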
FIG. 2 shows the root mean square error of the absolute pose error of the ORB-SLAM algorithm and of the algorithm herein on different datasets. Comparing the data, the root mean square error of the absolute pose error of the present algorithm is smaller than that of the ORB-SLAM algorithm, which shows that the present algorithm achieves better positioning accuracy than the ORB-SLAM algorithm.
While particular embodiments of the present invention have been described above, it will be appreciated by those skilled in the art that these are merely illustrative, and that many changes and modifications may be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims.

Claims (2)

1. A method of computing a visual odometer that improves ORB feature point extraction, comprising the steps of:
step one, acquiring a color image and a depth image by using a depth camera, sequentially carrying out graying and partitioning on the color image, removing areas with small gray-level change, and forming a new area N from the remaining areas;
step two, extracting a plurality of characteristic points from the new region N by using a circular search method;
step three, uniformly distributing all the characteristic points to the new area N by using a quadtree method, and acquiring three-dimensional coordinates of the corresponding characteristic points by combining the depth image;
step four, repeating the steps one to three, obtaining a plurality of characteristic points and corresponding three-dimensional coordinates on every two adjacent frames of images, and performing characteristic point matching so as to obtain a plurality of characteristic point pairs;
step five, obtaining an external parameter matrix of a depth camera corresponding to each adjacent two frames of images by utilizing a plurality of characteristic point pairs of each adjacent two frames of images, so as to finish the calculation of the visual odometer;
the method for extracting the characteristic points comprises the following steps:
step I, drawing a circle by taking the pixel point P in the new area N as a circle center and taking 3 as a radius, and marking 16 pixel points on the circumference;
step II, comparing the gray values of the four pixel points numbered 1, 5, 9 and 13 with the gray value of the pixel point P; if the gray values of 3 of these points are simultaneously greater than I_p + t or simultaneously less than I_p − t, preliminarily identifying the pixel point P as a feature point and entering step IV; otherwise, going to step III, where I_p represents the gray value of the pixel point P and t represents a threshold value;
step III, comparing the gray values of the four pixel points numbered 3, 7, 11 and 15 with the gray value of the pixel point P; if the gray values of 3 of these points are simultaneously greater than I_p + t or simultaneously less than I_p − t, preliminarily identifying the pixel point P as a feature point and entering step IV; otherwise, the pixel point P is not considered a feature point;
step IV, comparing the gray values of the 16 pixel points in step I with the gray value of the pixel point P; if the gray values of 9 points are simultaneously greater than I_p + t or simultaneously less than I_p − t, determining that the pixel point P is a feature point; otherwise, the pixel point P is not considered a feature point;
step V, repeating the steps I-IV to finish the judgment of all the pixel points in the new area N;
taking a square with a side length of W pixels as a sub-region, dividing the grayed image into P × Q blocks, finding the geometric center point of each sub-region, and then calculating, with the following equation, the average of the sum of the squared gray-value differences between the geometric center point of each sub-region and eight pixel points in total, namely the four pixel points at a distance R and the four pixel points at a distance 2R from it in the vertical and horizontal directions,
N_(i,j) = (1/8) · Σ_{k=1}^{8} ( I_(i,j,k) − I_(i,j) )²
wherein N_(i,j) is the average of the sum of the squared gray-value differences between the geometric center point of sub-region (i,j) and its 8 points at distances R and 2R in the vertical and horizontal directions, i ∈ {1,2,…,P}, j ∈ {1,2,…,Q}, k ∈ {1,2,…,8}; I_(i,j) is the gray value of the geometric center point of each sub-region; I_(i,j,k) are the gray values of the 8 pixel points at distances R and 2R from I_(i,j), and R equals one sixth of W,
finally, sorting the averages from large to small, eliminating the sub-regions in the last third, and forming a new region N from the retained sub-regions;
matching BRIEF descriptors corresponding to a plurality of feature points of two adjacent frames of images by using a Hamming distance to obtain a plurality of feature point pairs, and calculating the extrinsic matrix of the depth camera with the following equation,
P_1 = R·P_2 + t
wherein P_1 represents the three-dimensional coordinates of one feature point of a feature point pair, P_2 represents the three-dimensional coordinates of the other feature point of the pair, and R and t represent the extrinsic matrix of the depth camera.
2. The method of computing a visual odometer for improved ORB feature extraction of claim 1 wherein: the threshold t is set to 20% of the gray value of the pixel point P.
CN202110825003.9A 2021-07-21 2021-07-21 Calculation method of visual odometer for improving ORB feature point extraction Active CN113688816B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110825003.9A CN113688816B (en) 2021-07-21 2021-07-21 Calculation method of visual odometer for improving ORB feature point extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110825003.9A CN113688816B (en) 2021-07-21 2021-07-21 Calculation method of visual odometer for improving ORB feature point extraction

Publications (2)

Publication Number Publication Date
CN113688816A CN113688816A (en) 2021-11-23
CN113688816B true CN113688816B (en) 2023-06-23

Family

ID=78577599

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110825003.9A Active CN113688816B (en) 2021-07-21 2021-07-21 Calculation method of visual odometer for improving ORB feature point extraction

Country Status (1)

Country Link
CN (1) CN113688816B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114372510A (en) * 2021-12-15 2022-04-19 北京工业大学 Interframe matching slam method based on image region segmentation

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110334762A (en) * 2019-07-04 2019-10-15 华南师范大学 A kind of feature matching method combining ORB and SIFT based on quaternary tree
CN110414533A (en) * 2019-06-24 2019-11-05 东南大学 A kind of feature extracting and matching method for improving ORB
CN110503688A (en) * 2019-08-20 2019-11-26 上海工程技术大学 A kind of position and orientation estimation method for depth camera
CN110675437A (en) * 2019-09-24 2020-01-10 重庆邮电大学 Image matching method based on improved GMS-ORB characteristics and storage medium
CN111047620A (en) * 2019-11-15 2020-04-21 广东工业大学 Unmanned aerial vehicle visual odometer method based on depth point-line characteristics
CN111709893A (en) * 2020-06-16 2020-09-25 华南师范大学 ORB-SLAM2 improved algorithm based on information entropy and sharpening adjustment
CN111899334A (en) * 2020-07-28 2020-11-06 北京科技大学 Visual synchronous positioning and map building method and device based on point-line characteristics

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9996976B2 (en) * 2014-05-05 2018-06-12 Avigilon Fortress Corporation System and method for real-time overlay of map features onto a video feed
US20200158517A1 (en) * 2017-01-19 2020-05-21 Mindmaze Holding Sa System, methods, device and apparatuses for preforming simultaneous localization and mapping

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110414533A (en) * 2019-06-24 2019-11-05 东南大学 A kind of feature extracting and matching method for improving ORB
CN110334762A (en) * 2019-07-04 2019-10-15 华南师范大学 A kind of feature matching method combining ORB and SIFT based on quaternary tree
CN110503688A (en) * 2019-08-20 2019-11-26 上海工程技术大学 A kind of position and orientation estimation method for depth camera
CN110675437A (en) * 2019-09-24 2020-01-10 重庆邮电大学 Image matching method based on improved GMS-ORB characteristics and storage medium
CN111047620A (en) * 2019-11-15 2020-04-21 广东工业大学 Unmanned aerial vehicle visual odometer method based on depth point-line characteristics
CN111709893A (en) * 2020-06-16 2020-09-25 华南师范大学 ORB-SLAM2 improved algorithm based on information entropy and sharpening adjustment
CN111899334A (en) * 2020-07-28 2020-11-06 北京科技大学 Visual synchronous positioning and map building method and device based on point-line characteristics

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ORB feature matching algorithm based on improved FAST detection; Yuan Xiaoping et al.; Science Technology and Engineering; Vol. 19, No. 21; pp. 233-238 *

Also Published As

Publication number Publication date
CN113688816A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN110223348B (en) Robot scene self-adaptive pose estimation method based on RGB-D camera
CN109887015B (en) Point cloud automatic registration method based on local curved surface feature histogram
CN107169487B (en) Salient object detection method based on superpixel segmentation and depth feature positioning
CN104090972B (en) The image characteristics extraction retrieved for D Urban model and method for measuring similarity
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN109949340A (en) Target scale adaptive tracking method based on OpenCV
CN104063711B (en) A kind of corridor end point fast algorithm of detecting based on K means methods
CN113470090A (en) Multi-solid-state laser radar external reference calibration method based on SIFT-SHOT characteristics
CN108550166B (en) Spatial target image matching method
CN109323697B (en) Method for rapidly converging particles during starting of indoor robot at any point
CN108470178B (en) Depth map significance detection method combined with depth credibility evaluation factor
CN112364865B (en) Method for detecting small moving target in complex scene
CN113393439A (en) Forging defect detection method based on deep learning
CN110310331A (en) A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature
CN111783722B (en) Lane line extraction method of laser point cloud and electronic equipment
CN112614161A (en) Three-dimensional object tracking method based on edge confidence
CN108320310B (en) Image sequence-based space target three-dimensional attitude estimation method
CN113688816B (en) Calculation method of visual odometer for improving ORB feature point extraction
CN113887624A (en) Improved feature stereo matching method based on binocular vision
CN112489088A (en) Twin network visual tracking method based on memory unit
CN110246165B (en) Method and system for improving registration speed of visible light image and SAR image
CN109086350B (en) Mixed image retrieval method based on WiFi
CN111160362B (en) FAST feature homogenizing extraction and interframe feature mismatching removal method
CN106802149B (en) Rapid sequence image matching navigation method based on high-dimensional combination characteristics
CN116894876A (en) 6-DOF positioning method based on real-time image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant