CN110570473A - Weight-adaptive pose estimation method based on point-line fusion - Google Patents

Weight-adaptive pose estimation method based on point-line fusion

Info

Publication number
CN110570473A
CN110570473A (application CN201910862237.3A)
Authority
CN
China
Prior art keywords
line
point
grid
feature
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910862237.3A
Other languages
Chinese (zh)
Inventor
张建华
周有杰
李辉
薛原
赵岩
何伟
张霖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hebei University of Technology
Original Assignee
Hebei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hebei University of Technology
Priority to CN201910862237.3A
Publication of CN110570473A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a weight-adaptive pose estimation method based on point-line fusion. The method uses a point-line fusion algorithm to extract and match environmental features and thereby describe the environment. The point-line fusion algorithm improves adaptability and robustness to the environment, ensures that environmental features can be extracted stably in a variety of complex environments, and at the same time improves the descriptive power for the environment. The method adopts region division and region growing to adapt to the distribution and density of the point features and line-feature endpoints, and then adaptively adjusts the weight distribution of the point features and line-feature endpoints within the grown grid cells, thereby minimizing the influence of uneven feature distribution on pose estimation. The reprojection error of a line feature is calculated as the distances from the two endpoints of the projected line segment to the corresponding detected straight line, so the line-feature reprojection error is split into two parts; the two endpoints do not interfere with each other when they fall in different cells, which improves the precision with which the line-feature reprojection error is described.

Description

Weight-adaptive pose estimation method based on point-line fusion
Technical Field
The invention belongs to the field of image processing and visual positioning, and particularly relates to a weight-adaptive pose estimation method based on point-line fusion.
Background
With the development of robotics, vision-based simultaneous localization and mapping (SLAM) has become a research hotspot among key robot technologies. Visual SLAM uses the information extracted by a vision sensor to perform simultaneous localization and map creation, obtaining the pose trajectory of the robot's motion together with environment information, and plays an increasingly important role in robot navigation systems. The visual odometry computation can currently be accomplished in real time with different types of cameras, including monocular, binocular, and RGB-D cameras.
In order to adapt to feature extraction and matching in low-texture structured scenes, feature extraction and matching are generally realized by point-line fusion, and the estimated pose is obtained through reprojection errors. When a visual odometer extracts indoor features and estimates pose, the uncertainty of the environment and the influence of factors such as illumination and viewing angle easily cause the features extracted in different parts of each frame to be unevenly distributed, especially for point-line fusion algorithms. Analysis shows that this non-uniformity of the feature distribution seriously degrades the positioning accuracy of the visual odometer; therefore, assigning weights to the point-feature and line-feature reprojection errors in the point-line fusion algorithm, so as to reduce the influence of uneven feature distribution on pose estimation, is the key to improving the positioning accuracy of the visual odometer.
A weight distribution algorithm mainly uses the distribution of features within each frame to adjust the weight coefficients of different features. A common weight distribution algorithm counts the numbers of point features and line features in each frame and compares them to distribute the weights of points and lines; this achieves a weighting effect but cannot resolve the influence of uneven feature distribution on pose estimation. The document with application number 201810213575.X discloses a fast and robust RGB-D indoor three-dimensional scene reconstruction method that uses an RGB-D camera as the sensor and realizes feature extraction, description, and matching through a point-line fusion algorithm, which improves the stability of the algorithm in complex environments to some extent; however, the problem of uneven feature distribution remains and seriously affects the pose estimation.
Disclosure of the Invention
Aiming at the deficiencies in the prior art, the technical problem to be solved by the invention is to provide a weight-adaptive distribution method based on a point-line fusion algorithm.
The technical scheme for solving this technical problem is to provide a weight-adaptive pose estimation method based on point-line fusion, characterized by comprising the following steps:
Step one, acquiring images with a binocular camera to obtain a continuous image sequence;
Step two, extracting and processing the image features to obtain the total number of point features and line-feature endpoints in each frame and the set of their pixel coordinates;
Step three, initializing the point-line features: recovering spatial points and lines from the corresponding point-line features in the left and right images of the binocular camera, and projecting the recovered spatial points and lines onto the current frame;
Step four, dividing each frame of image into regions with a grid, and counting in each cell the feature number formed by the sum of the number of point features and the number of line-feature endpoints;
Step five, performing region-growing processing on the grid:
when a to-be-grown cell has no adjacent seed cell, the to-be-grown cell is invalid;
when a to-be-grown cell has adjacent seed cells and their number is 1, merging the to-be-grown cell with that seed cell;
when a to-be-grown cell has adjacent seed cells and their number is 2 or more: if the feature numbers in the seed cells differ, merging the to-be-grown cell with the seed cell having the largest feature number; if the feature numbers in the seed cells are the same, merging the to-be-grown cell with each of those seed cells;
a to-be-grown cell is a cell that is adjacent to a seed cell and contains no features;
Step six, assigning weights to the features inside each cell according to the cell area after region growing: assuming that the proportion of a grown cell in the image is q, and the feature number of the grown cell is s_m, the weight of each feature in that cell is ω = q·(1/s_m);
Step seven, projecting the spatial point features and line features onto the current frame by reprojection, and computing the reprojection errors of the point features and the line features separately; weighting the point-feature and line-feature reprojection errors with the corresponding weights obtained in step six to form the cost function E(ξ) of each frame;
Step eight, solving the cost function E(ξ) by least squares to realize the pose estimation between frames.
Compared with the prior art, the invention has the following beneficial effects:
(1) The method adopts region division and region growing to adapt to the distribution and density of the point features and line-feature endpoints, and then adaptively adjusts the weight distribution of the point features and line-feature endpoints within the grown cells, thereby minimizing the influence of uneven feature distribution on pose estimation.
(2) The method performs feature extraction and matching on the environment with a point-line fusion algorithm to realize the description of the environment. The point-line fusion algorithm improves the adaptability and robustness to the environment, ensures that environmental features can be extracted stably in a variety of complex environments, and at the same time improves the descriptive power for the environment.
(3) The reprojection error of a line feature is calculated as the distances from the two endpoints of the projected line segment to the corresponding detected straight line, so the line-feature reprojection error is split into two parts; the two endpoints do not interfere with each other when they fall in different cells, which improves the precision with which the line-feature reprojection error is described.
Drawings
FIG. 1 is a schematic diagram of the point features and line features extracted by the point-line fusion of the present invention;
FIG. 2 is a schematic diagram of the image region division with fused point-line features according to the present invention;
FIG. 3 is a diagram of the four cases encountered during region growing in accordance with the present invention;
FIG. 4 is a diagram of the result of growing the grid regions according to the present invention.
Detailed Description
The present invention will be further described with reference to the following examples and accompanying drawings. The specific examples are only intended to illustrate the invention in further detail and do not limit the scope of protection of the claims of the present application.
The invention provides a weight-adaptive pose estimation method based on point-line fusion (hereinafter, the method), which is characterized by comprising the following steps:
Step one, acquiring images with a binocular camera to obtain a continuous image sequence: a binocular camera collects scene information in the environment and acquires real-time environment information, forming a continuous image sequence;
Step two, extracting and processing the image features to obtain the total number n_f of point features and line-feature endpoints in each frame and their pixel coordinate set C_j:
(1) processing the image with the ORB algorithm and the LSD algorithm to extract point features and line features respectively (as shown in FIG. 1), and matching point features with point features and line features with line features;
(2) rejecting mismatches with the RANSAC algorithm: randomly drawing sample data from the obtained matching results by traversal, and iteratively computing the optimal inlier set;
(3) after feature extraction and mismatch rejection, taking the left image as reference, computing the total number n_f of point features and line-feature endpoints in each frame of image and obtaining the pixel coordinate set C_j of the point features and line-feature endpoints in each frame.
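For illustration only, this extraction-and-matching step can be sketched with OpenCV. This is a minimal sketch, not the patent's implementation: the detector parameters are arbitrary, grayscale input is assumed, cv2.createLineSegmentDetector requires an OpenCV build that ships the LSD implementation, and line-descriptor matching (e.g. LBD from opencv-contrib) is omitted.

```python
import cv2
import numpy as np

def extract_and_match(left_gray, right_gray):
    """Sketch of step two: ORB point features, LSD line segments,
    brute-force matching, and RANSAC-based mismatch rejection."""
    orb = cv2.ORB_create(nfeatures=1000)            # parameter chosen arbitrarily
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)

    # Hamming-distance brute-force matching suits binary ORB descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)

    pts_l = np.float32([kp_l[m.queryIdx].pt for m in matches])
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in matches])

    # RANSAC on the fundamental matrix rejects mismatched point pairs
    _, inlier_mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.999)
    inliers = inlier_mask.ravel() == 1

    # LSD line segments in the left image; descriptor-based line matching
    # would follow the same detect/describe/match pattern
    lsd = cv2.createLineSegmentDetector()
    lines_l, _, _, _ = lsd.detect(left_gray)

    return pts_l[inliers], pts_r[inliers], lines_l
```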
Step three, initializing the point-line features: recovering spatial points and lines from the corresponding point-line features in the left and right images of the binocular camera, and projecting the recovered spatial points and lines onto the current frame. Specifically: a point feature is first converted from pixel coordinates to camera coordinates by formula 1), and then from camera coordinates to world coordinates by formula 2); a line feature is converted to camera coordinates by formula 3), and then to world coordinates by formula 4) (i.e., expressed in Plücker coordinates). Point features satisfy formulas 1) and 2), and line features satisfy formulas 3) and 4):

$P^{c}_{i,k} = z_{i,k} \, K^{-1} \, \tilde{p}^{uv}_{i,k}$   1)

$P^{w}_{i,k} = R_{CW}^{T} \, ( P^{c}_{i,k} - t_{CW} )$   2)

$L^{c}_{j,k} = \begin{bmatrix} n^{c}_{j,k} \\ d^{c}_{j,k} \end{bmatrix} = \begin{bmatrix} X_{s} \times X_{e} \\ X_{e} - X_{s} \end{bmatrix}$   3)

$L^{w}_{j,k} = \begin{bmatrix} R_{CW} & [t_{CW}]_{\times} R_{CW} \\ 0 & R_{CW} \end{bmatrix}^{-1} L^{c}_{j,k}$   4)

In formulas 1) to 4): $\tilde{p}^{uv}_{i,k}$ denotes the (homogeneous) pixel coordinates of the i-th spatial point in frame k, and $z_{i,k}$ its depth recovered from the binocular disparity; $P^{c}_{i,k}$ denotes the camera coordinates of the i-th spatial point in frame k; $K$ is the camera calibration matrix; $N(\cdot)$ denotes the conversion of homogeneous coordinates to non-homogeneous coordinates, so that the projection in the reverse direction is $p^{uv}_{i,k} = N(K P^{c}_{i,k})$; $P^{w}_{i,k}$ denotes the world coordinates of the i-th spatial point in frame k; $R_{CW}$ and $t_{CW}$ denote the rotation and displacement of the world coordinate system relative to the camera coordinate system, respectively;
$X_{s}$ and $X_{e}$ denote the camera coordinates of the two endpoints of the j-th line feature in frame k, obtained from its pixel coordinates through the camera model parameters as in formula 1); $L^{c}_{j,k}$ denotes the camera coordinates of the j-th line feature in frame k; $L^{w}_{j,k}$ is the matrix representation corresponding to the world coordinates of the j-th line feature in frame k; $n$ is the normal vector of the plane through the line feature and the origin; $d$ is the direction vector of the line feature.
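For illustration, these conversions can be sketched as follows (a minimal sketch assuming a pinhole model; the function names are ours, not the patent's):

```python
import numpy as np

def pixel_to_camera(uv, z, K):
    """Formula 1): back-project a pixel (u, v) with depth z to camera coordinates."""
    uv1 = np.array([uv[0], uv[1], 1.0])
    return z * (np.linalg.inv(K) @ uv1)

def camera_to_world(P_c, R_cw, t_cw):
    """Formula 2): R_cw, t_cw map world to camera, so invert the transform."""
    return R_cw.T @ (P_c - t_cw)

def line_to_plucker(X_s, X_e):
    """Formula 3): Plücker coordinates (n, d) of the line through two camera points."""
    n = np.cross(X_s, X_e)   # normal of the plane through the line and the origin
    d = X_e - X_s            # direction vector of the line
    return n, d
```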
Step four, grid division:
(1) dividing each frame of image into regions with an n×n grid, so that each frame is split into a number of cells (as shown in FIG. 2);
(2) counting in each cell the feature number formed by the sum of the number of point features and the number of line-feature endpoints.
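A minimal sketch of the division and counting (the grid size n, image dimensions, and the (x, y) array layout are assumptions):

```python
import numpy as np

def grid_feature_counts(points, line_endpoints, img_w, img_h, n=8):
    """Step four sketch: count point features plus line-feature endpoints
    per grid cell; `points` and `line_endpoints` are (x, y) pixel arrays."""
    counts = np.zeros((n, n), dtype=int)
    for x, y in np.vstack([points, line_endpoints]):
        col = min(int(x * n / img_w), n - 1)   # clamp features on the border
        row = min(int(y * n / img_h), n - 1)
        counts[row, col] += 1
    return counts
```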
Step five, performing region-growing processing on the grid. Four cases are encountered during region growing (as shown in FIG. 3):
when a to-be-grown cell S has no adjacent seed cell (seed cells are denoted S_m, S_{m+1}, S_{m+2}, ..., S_{m+n}), the cell is discarded, i.e., the to-be-grown cell is invalid, as shown in FIG. 3(a);
when a to-be-grown cell S has adjacent seed cells and their number is 1, the to-be-grown cell is merged with that seed cell for region growing, as shown in FIG. 3(b);
when a to-be-grown cell S has adjacent seed cells and their number is 2 or more: if the feature numbers in the seed cells differ, the to-be-grown cell is merged with the seed cell whose region has the largest feature number, as shown in FIG. 3(c); if the feature numbers in the seed cells are the same, the to-be-grown cell is merged with each of those seed cells, as shown in FIG. 3(d);
a to-be-grown cell S is a cell that is adjacent to a seed cell and contains no features (the centre cell in FIG. 3); the effect after the growing process is completed is shown in FIG. 4, and the merging rules are sketched below.
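The merging rules can be sketched as a single pass over the count grid from step four (4-neighbourhood connectivity and the single-pass order are our assumptions, not specified by the patent):

```python
def region_grow(counts):
    """Step five sketch: empty cells (to-be-grown) merge into adjacent
    non-empty cells (seeds) according to the four cases of FIG. 3.
    Returns a dict mapping each empty cell to the seed cell(s) it joins."""
    n_rows, n_cols = counts.shape
    merges = {}
    for r in range(n_rows):
        for c in range(n_cols):
            if counts[r, c] > 0:
                continue                      # only empty cells are grown
            neigh = [(r + dr, c + dc)
                     for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= r + dr < n_rows and 0 <= c + dc < n_cols]
            seeds = [p for p in neigh if counts[p] > 0]
            if not seeds:                     # case (a): no adjacent seed -> invalid
                continue
            best = max(counts[p] for p in seeds)
            # case (b): single seed; case (c): merge with the richest seed;
            # case (d): equal feature numbers -> merge with every tied seed
            merges[(r, c)] = [p for p in seeds if counts[p] == best]
    return merges
```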
Step six, assigning weights to the features inside each cell according to the cell area after region growing: assuming that the proportion of a grown cell in the whole image is q, and the feature number of the grown cell is s_m, the weight of each feature in that cell is ω = q·(1/s_m);
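For example, under the assumption that cell areas are measured in pixels, the per-feature weight of a grown cell can be computed as:

```python
def feature_weight(cell_area_px, img_area_px, s_m):
    """Step six: a cell covering proportion q of the image and holding s_m
    features gives each of its features the weight w = q * (1 / s_m)."""
    q = cell_area_px / img_area_px
    return q / s_m

# e.g. a grown cell covering 1/16 of a 256x256 image and holding 5 features:
# feature_weight(4096, 65536, 5) == 0.0125
```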
Step seven, projecting the spatial point features and line features onto the current frame by reprojection, and computing the reprojection errors of the point features and the line features separately; weighting the point-feature and line-feature reprojection errors with the corresponding weights obtained in step six to form the cost function E(ξ) of each frame, as expressed in formulas 5) to 7). The reprojection error of a point feature is the distance error between the projected point and the detected feature point; the reprojection error of a line feature is the distance error from the two endpoints of the projected line segment to the detected straight line, where the detected straight line is the line feature detected in the image sequence:

$e^{p}_{i,k} = p^{uv}_{i,k} - N( K \, T(\xi) \, P^{w}_{i,k} )$   5)

$e_{x_s} = \frac{ l_{j,k}^{T} \, \tilde{x}_{s} }{ \sqrt{ l_{1}^{2} + l_{2}^{2} } }, \qquad e_{x_e} = \frac{ l_{j,k}^{T} \, \tilde{x}_{e} }{ \sqrt{ l_{1}^{2} + l_{2}^{2} } }$   6)

$E(\xi) = \sum_{i=1}^{N_p} \omega_{p} H_{p}\left( e^{p\,T}_{i,k} \, \Sigma^{-1}_{p,i} \, e^{p}_{i,k} \right) + \sum_{j=1}^{N_l} \left[ \omega_{x_s} H_{l}\left( e_{x_s}^{T} \, \Sigma^{-1}_{l,j} \, e_{x_s} \right) + \omega_{x_e} H_{l}\left( e_{x_e}^{T} \, \Sigma^{-1}_{l,j} \, e_{x_e} \right) \right]$   7)

In formulas 5) to 7): $x_s$ and $x_e$ denote the two endpoints of the projected line segment, with homogeneous coordinates $\tilde{x}_{s}$ and $\tilde{x}_{e}$; $l_{j,k} = (l_1, l_2, l_3)^T$ denotes the j-th detected straight line of the k-th frame image; $T(\xi)$ denotes the world-to-camera transformation under the pose increment ξ; $e^{p}_{i,k}$ denotes the reprojection error corresponding to the detected feature point; $e_{x_s}$ and $e_{x_e}$ denote the reprojection errors corresponding to the two endpoints of the detected straight line; E(ξ) denotes the cost function formed by the point features and line-feature endpoints in the same frame; $\Sigma^{-1}_{p,i}$ and $\Sigma^{-1}_{l,j}$ denote the inverses of the covariance matrices corresponding to the reprojection errors of the i-th point feature and the j-th line feature, respectively; $H_p$ and $H_l$ are the Huber robust kernel functions of points and lines, respectively; $N_p$ and $N_l$ denote the numbers of point features and line features, respectively; $\omega_p$, $\omega_{x_s}$ and $\omega_{x_e}$ denote the weights of the corresponding point feature and of the two endpoints of the corresponding line feature;
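A sketch of the two error terms (the pose application T(ξ) is expanded into a rotation/translation pair; all function names are illustrative):

```python
import numpy as np

def point_reproj_error(p_uv, P_w, K, R_cw, t_cw):
    """Formula 5) sketch: pixel distance between detection and projection."""
    P_c = R_cw @ P_w + t_cw
    proj = K @ P_c
    proj = proj[:2] / proj[2]                  # N(): homogeneous -> pixel
    return p_uv - proj

def line_endpoint_errors(x_s, x_e, line):
    """Formula 6) sketch: signed distances from the two projected endpoints
    to the detected line l = (l1, l2, l3), each handled independently so the
    endpoints can carry different grid weights."""
    norm = np.hypot(line[0], line[1])
    dist = lambda x: (line @ np.array([x[0], x[1], 1.0])) / norm
    return dist(x_s), dist(x_e)
```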
Step eight, solving the cost function E(ξ) by least squares to estimate the pose between frames: the optimal pose increment T(ξ) is computed by iterative minimization under maximum likelihood estimation (MLE), during which the probability of the observed data is maximized; under the assumption that the data are corrupted by unbiased Gaussian noise, maximum likelihood estimation coincides with nonlinear least-squares estimation, so the pose estimation problem can be converted into a least-squares problem and the inter-frame pose estimation is realized by least squares;

ξ* = arg min_ξ E(ξ)   8)

In formula 8), ξ* is the pose change obtained by the least-squares method.
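As a minimal sketch of step eight (assuming SciPy is available, a residual function assembled from the weighted point and line terms above, and a Rodrigues-vector parameterization of ξ; the parameterization is our assumption):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_pose(residual_terms, xi0=np.zeros(6)):
    """Step eight sketch: nonlinear least squares over the pose increment xi.
    `residual_terms(R, t)` returns the stacked, weight-scaled point and
    line-endpoint residuals of the current frame."""
    def residuals(xi):
        R = Rotation.from_rotvec(xi[:3]).as_matrix()   # rotation part of T(xi)
        t = xi[3:]                                     # translation part
        return residual_terms(R, t)

    # Huber loss plays the role of the robust kernels H_p and H_l
    result = least_squares(residuals, xi0, loss="huber", f_scale=1.0)
    return result.x   # xi*: the optimal inter-frame pose change
```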
Anything not described in detail in this specification belongs to the prior art.

Claims (5)

1. A weight-adaptive pose estimation method based on point-line fusion, characterized by comprising the following steps:
Step one, acquiring images with a binocular camera to obtain a continuous image sequence;
Step two, extracting and processing the image features to obtain the total number of point features and line-feature endpoints in each frame and the set of their pixel coordinates;
Step three, initializing the point-line features: recovering spatial points and lines from the corresponding point-line features in the left and right images of the binocular camera, and projecting the recovered spatial points and lines onto the current frame;
Step four, dividing each frame of image into regions with a grid, and counting in each cell the feature number formed by the sum of the number of point features and the number of line-feature endpoints;
Step five, performing region-growing processing on the grid:
when a to-be-grown cell has no adjacent seed cell, the to-be-grown cell is invalid;
when a to-be-grown cell has adjacent seed cells and their number is 1, merging the to-be-grown cell with that seed cell;
when a to-be-grown cell has adjacent seed cells and their number is 2 or more: if the feature numbers in the seed cells differ, merging the to-be-grown cell with the seed cell having the largest feature number; if the feature numbers in the seed cells are the same, merging the to-be-grown cell with each of those seed cells;
a to-be-grown cell is a cell that is adjacent to a seed cell and contains no features;
Step six, assigning weights to the features inside each cell according to the cell area after region growing: assuming that the proportion of a grown cell in the image is q, and the feature number of the grown cell is s_m, the weight of each feature in that cell is ω = q·(1/s_m);
Step seven, projecting the spatial point features and line features onto the current frame by reprojection, and computing the reprojection errors of the point features and the line features separately; weighting the point-feature and line-feature reprojection errors with the corresponding weights obtained in step six to form the cost function E(ξ) of each frame;
Step eight, solving the cost function E(ξ) by least squares to realize the pose estimation between frames.
2. The weight-adaptive pose estimation method based on point-line fusion according to claim 1, wherein step two specifically comprises:
(1) processing the image with the ORB algorithm and the LSD algorithm to extract point features and line features respectively, and matching point features with point features and line features with line features;
(2) rejecting mismatches with the RANSAC algorithm: randomly drawing sample data from the matching results by traversal, and iteratively computing the optimal inlier set;
(3) after feature extraction and mismatch rejection, taking the left image as reference, computing the total number of point features and line-feature endpoints in each frame of image and obtaining the pixel coordinate set of the point features and line-feature endpoints in each frame.
3. The weight-adaptive pose estimation method based on point-line fusion according to claim 1, wherein step three is specifically: a point feature is first converted from pixel coordinates to camera coordinates by formula 1), and then from camera coordinates to world coordinates by formula 2); a line feature is first converted to camera coordinates by formula 3), and then to world coordinates by formula 4);

$P^{c}_{i,k} = z_{i,k} \, K^{-1} \, \tilde{p}^{uv}_{i,k}$   1)

$P^{w}_{i,k} = R_{CW}^{T} \, ( P^{c}_{i,k} - t_{CW} )$   2)

$L^{c}_{j,k} = \begin{bmatrix} n^{c}_{j,k} \\ d^{c}_{j,k} \end{bmatrix} = \begin{bmatrix} X_{s} \times X_{e} \\ X_{e} - X_{s} \end{bmatrix}$   3)

$L^{w}_{j,k} = \begin{bmatrix} R_{CW} & [t_{CW}]_{\times} R_{CW} \\ 0 & R_{CW} \end{bmatrix}^{-1} L^{c}_{j,k}$   4)

In formulas 1) to 4): $\tilde{p}^{uv}_{i,k}$ denotes the (homogeneous) pixel coordinates of the i-th spatial point in frame k, and $z_{i,k}$ its depth recovered from the binocular disparity; $P^{c}_{i,k}$ denotes the camera coordinates of the i-th spatial point in frame k; $K$ is the camera calibration matrix; $N(\cdot)$ denotes the conversion of homogeneous coordinates to non-homogeneous coordinates, so that the projection in the reverse direction is $p^{uv}_{i,k} = N(K P^{c}_{i,k})$; $P^{w}_{i,k}$ denotes the world coordinates of the i-th spatial point in frame k; $R_{CW}$ and $t_{CW}$ denote the rotation and displacement of the world coordinate system relative to the camera coordinate system, respectively; $X_{s}$ and $X_{e}$ denote the camera coordinates of the two endpoints of the j-th line feature in frame k, obtained from its pixel coordinates through the camera model parameters as in formula 1); $L^{c}_{j,k}$ denotes the camera coordinates of the j-th line feature in frame k; $L^{w}_{j,k}$ is the matrix representation corresponding to the world coordinates of the j-th line feature in frame k; $n$ is the normal vector of the plane through the line feature and the origin; $d$ is the direction vector of the line feature.
4. The method according to claim 1, wherein in step seven the reprojection error of a point feature is the distance error between the projected point and the detected feature point, and the reprojection error of a line feature is the distance error from the two endpoints of the projected line segment to the detected straight line; the detected straight line is the line feature detected in the image sequence.
5. The weight-adaptive pose estimation method based on point-line fusion according to claim 1, wherein in step seven the expression of the cost function E(ξ) is given by formulas 5) to 7):

$e^{p}_{i,k} = p^{uv}_{i,k} - N( K \, T(\xi) \, P^{w}_{i,k} )$   5)

$e_{x_s} = \frac{ l_{j,k}^{T} \, \tilde{x}_{s} }{ \sqrt{ l_{1}^{2} + l_{2}^{2} } }, \qquad e_{x_e} = \frac{ l_{j,k}^{T} \, \tilde{x}_{e} }{ \sqrt{ l_{1}^{2} + l_{2}^{2} } }$   6)

$E(\xi) = \sum_{i=1}^{N_p} \omega_{p} H_{p}\left( e^{p\,T}_{i,k} \, \Sigma^{-1}_{p,i} \, e^{p}_{i,k} \right) + \sum_{j=1}^{N_l} \left[ \omega_{x_s} H_{l}\left( e_{x_s}^{T} \, \Sigma^{-1}_{l,j} \, e_{x_s} \right) + \omega_{x_e} H_{l}\left( e_{x_e}^{T} \, \Sigma^{-1}_{l,j} \, e_{x_e} \right) \right]$   7)

In formulas 5) to 7): $x_s$ and $x_e$ denote the two endpoints of the projected line segment, with homogeneous coordinates $\tilde{x}_{s}$ and $\tilde{x}_{e}$; $l_{j,k} = (l_1, l_2, l_3)^T$ denotes the j-th detected straight line of the k-th frame image; $T(\xi)$ denotes the world-to-camera transformation under the pose increment ξ; $e^{p}_{i,k}$ denotes the reprojection error corresponding to the detected feature point; $e_{x_s}$ and $e_{x_e}$ denote the reprojection errors corresponding to the two endpoints of the detected straight line; E(ξ) denotes the cost function formed by the point features and line-feature endpoints in the same frame; $\Sigma^{-1}_{p,i}$ and $\Sigma^{-1}_{l,j}$ denote the inverses of the covariance matrices corresponding to the reprojection errors of the i-th point feature and the j-th line feature, respectively; $H_p$ and $H_l$ are the Huber robust kernel functions of points and lines, respectively; $N_p$ and $N_l$ denote the numbers of point features and line features, respectively; $\omega_p$, $\omega_{x_s}$ and $\omega_{x_e}$ denote the weights of the corresponding point feature and of the two endpoints of the corresponding line feature.
CN201910862237.3A 2019-09-12 2019-09-12 Weight-adaptive pose estimation method based on point-line fusion Pending CN110570473A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910862237.3A CN110570473A (en) 2019-09-12 2019-09-12 Weight-adaptive pose estimation method based on point-line fusion

Publications (1)

Publication Number Publication Date
CN110570473A 2019-12-13

Family

ID=68779672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910862237.3A Pending CN110570473A (en) 2019-09-12 2019-09-12 Weight-adaptive pose estimation method based on point-line fusion

Country Status (1)

Country Link
CN (1) CN110570473A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018235923A1 (en) * 2017-06-21 2018-12-27 国立大学法人 東京大学 Position estimating device, position estimating method, and program
CN107291093A (en) * 2017-07-04 2017-10-24 西北工业大学 Unmanned plane Autonomous landing regional selection method under view-based access control model SLAM complex environment
CN108564616A (en) * 2018-03-15 2018-09-21 中国科学院自动化研究所 Method for reconstructing three-dimensional scene in the rooms RGB-D of fast robust
CN108682027A (en) * 2018-05-11 2018-10-19 北京华捷艾米科技有限公司 VSLAM realization method and systems based on point, line Fusion Features

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG, Jianhua et al., "Robust stereo visual odometry based on points and lines," Second Target Recognition and Artificial Intelligence Summit Forum *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022073172A1 (en) * 2020-10-09 2022-04-14 浙江大学 Global optimal robot vision localization method and apparatus based on point-line features
US11964401B2 (en) 2020-10-09 2024-04-23 Zhejiang University Robot globally optimal visual positioning method and device based on point-line features
CN113514067A (en) * 2021-06-24 2021-10-19 上海大学 Mobile robot positioning method based on point-line characteristics

Similar Documents

Publication Publication Date Title
CN109344882B (en) Convolutional neural network-based robot control target pose identification method
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
CN111897349B (en) Autonomous obstacle avoidance method for underwater robot based on binocular vision
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN110807809B (en) Light-weight monocular vision positioning method based on point-line characteristics and depth filter
CN112396595B (en) Semantic SLAM method based on point-line characteristics in dynamic environment
JP5627325B2 (en) Position / orientation measuring apparatus, position / orientation measuring method, and program
CN106650701B (en) Binocular vision-based obstacle detection method and device in indoor shadow environment
CN112435262A (en) Dynamic environment information detection method based on semantic segmentation network and multi-view geometry
CN112381841A (en) Semantic SLAM method based on GMS feature matching in dynamic scene
CN111998862B (en) BNN-based dense binocular SLAM method
CN109215118B (en) Incremental motion structure recovery optimization method based on image sequence
CN113313732A (en) Forward-looking scene depth estimation method based on self-supervision learning
CN112132874A (en) Calibration-board-free different-source image registration method and device, electronic equipment and storage medium
CN112652020B (en) Visual SLAM method based on AdaLAM algorithm
CN110570474B (en) Pose estimation method and system of depth camera
CN112001859A (en) Method and system for repairing face image
CN113284179A (en) Robot multi-object sorting method based on deep learning
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN113160335A (en) Model point cloud and three-dimensional surface reconstruction method based on binocular vision
CN110570473A (en) Weight-adaptive pose estimation method based on point-line fusion
JP6922348B2 (en) Information processing equipment, methods, and programs
CN114549629A (en) Method for estimating three-dimensional pose of target by underwater monocular vision
CN110807799A (en) Line feature visual odometer method combining depth map inference
CN112767481B (en) High-precision positioning and mapping method based on visual edge features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191213)