CN111210463B - Virtual wide-view visual odometer method and system based on feature point auxiliary matching - Google Patents

Virtual wide-view visual odometer method and system based on feature point auxiliary matching

Info

Publication number
CN111210463B
CN111210463B CN202010042019.8A CN202010042019A CN111210463B CN 111210463 B CN111210463 B CN 111210463B CN 202010042019 A CN202010042019 A CN 202010042019A CN 111210463 B CN111210463 B CN 111210463B
Authority
CN
China
Prior art keywords
point
calculating
gradient
image
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010042019.8A
Other languages
Chinese (zh)
Other versions
CN111210463A (en
Inventor
缪瑞航
应忍冬
刘佩林
龚正
薛午阳
赵忆漠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN202010042019.8A priority Critical patent/CN111210463B/en
Publication of CN111210463A publication Critical patent/CN111210463A/en
Application granted granted Critical
Publication of CN111210463B publication Critical patent/CN111210463B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20164Salient point detection; Corner detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a virtual wide-view visual odometry method and system based on feature-point-assisted matching, comprising the following steps: performing motion prediction on the original image with a constant-speed motion model to predict the pose of the current camera, acquiring a new image, and selecting feature points of the original image; according to the extracted feature points, performing feature matching between the original image and the new image by an optical flow method to obtain the matching relation of the same points in the two images; rejecting abnormal matching point pairs by means of a homography matrix and/or a fundamental matrix, and then calculating the pose of the current camera with a PnP algorithm and depth information obtained from historical key frames; calculating the pixel errors of all retained gradient points and, starting from the obtained camera pose, optimizing the photometric error with the Gauss-Newton method, finally obtaining the optimized, accurate camera pose. The method effectively enhances feature-point tracking when the camera rotates, making the whole system more robust.

Description

Virtual wide-view visual odometer method and system based on feature point auxiliary matching
Technical Field
The invention relates to the technical field of computer vision and the field of vision synchronous positioning and map construction (SLAM), in particular to a virtual wide-view visual odometry method and system based on feature point auxiliary matching, more particularly to a robust virtual wide-view visual odometry method, system and storage medium based on feature point auxiliary matching, which can be applied to pose estimation and autonomous navigation of a mobile robot, and can also be applied to augmented reality application and virtual reality application of a mobile terminal.
Background
Navigation of autonomous robots in complex environments (e.g., indoors, in jungles, in caves) is currently extremely difficult, especially when the robot moves at high speed and the surrounding environment changes (lighting changes, moving objects, etc.). Under such conditions an ordinary visual odometer is easily interrupted by sudden environmental changes. A visual odometer based on feature-point matching has a high demand on texture information: once it enters a texture-poor environment such as a corridor or a stairwell, it loses the ability to work. The other kind, the direct-method visual odometer that matches pixel intensities directly, has a high demand on illumination, and once interrupted it is difficult to recover to the state before the interruption because there is no feature matching. Meanwhile, a direct-method visual odometry system based on a panoramic camera can overcome the problem of losing track during rotation. Inspired by the characteristics of feature-point matching and of the panoramic camera, a virtual wide-view visual odometry method based on feature-point matching assistance is proposed for an ordinary camera.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a virtual wide-view visual odometry method and a system based on feature point auxiliary matching.
The invention provides a virtual wide-viewing-angle visual odometry method based on feature point auxiliary matching, which comprises the following steps:
a motion prediction step: the method comprises the steps that an original image is subjected to motion prediction through a constant-speed motion model, the pose of a current camera is predicted, a new image is obtained, and feature points of the original image are selected;
a characteristic matching step: according to the feature points extracted in the motion prediction step, performing feature matching on the original image and the new image through an optical flow method to obtain the matching relation of the same point in the two images;
rough pose estimation: after the feature matching step is completed, removing abnormal matching point pairs by using a homography matrix and/or a fundamental matrix, and calculating the pose of the current camera by using a PnP algorithm and depth information acquired from historical key frames;
a photometric error optimization step: calculating the pixel errors of all retained gradient points over the whole image, and optimizing the photometric error with the Gauss-Newton method starting from the obtained pose of the current camera, finally obtaining the optimized, accurate camera pose;
the constant-speed motion model is used for predicting the position of the camera at the next moment by making the motion of the camera move at a constant speed within preset time;
the historical key frame depth information includes: the information which is saved in the calculation process and is corresponding to the calculation valuable image frame comprises the position, the posture and the depth of the corresponding three-dimensional point cloud;
a gradient point is a pixel whose image gradient at its position is large, so that it differs clearly from its surroundings.
Preferably, the feature point selection in the motion prediction step includes: down-sampling the original image to a preset value, and then using the Harris corner algorithm to extract and retain the corners of the whole original image.
Preferably, the feature matching step includes: deducing the rough initial position of the corner point on the original down-sampled image on the new image according to the feature point extracted in the motion prediction step and the depth information acquired from the historical key frame, and then calculating the accurate position of the corner point on the down-sampled image on the new image by using an optical flow method to complete corner point matching.
Preferably, the PnP algorithm in the rough pose estimation step includes: calculating an algorithm of the position and the posture of a camera for shooting a new image under the condition of knowing the coordinates of the three-dimensional characteristic gradient points of the space and the two-dimensional coordinates corresponding to the image according to the geometric projection relation;
the calculating of the coordinates of the corresponding three-dimensional points in the space comprises:
determining the key frame: calculating gradient information of each pixel point for each frame image, reserving points with the maximum gradient and larger than a preset threshold value within a preset range, and reserving the required number of gradient points by using random sampling; after the number of the gradient points is determined, the current image frame is set as a key frame and reserved when the result is greater than a preset threshold value by calculating the visual angle, displacement and luminosity errors of the current camera relative to the last key frame and carrying out weighted summation on the visual angle, the displacement and the luminosity errors;
and a depth information intermediate frame calculation step: calculating a depth information value by using the information of the historical key frame, projecting a point with known depth in the historical key frame onto a last key frame of a key frame which is reserved for the last time, and giving a weight of a gradient point to the depth according to the gradient information of the reserved gradient point to generate a depth information intermediate frame;
calculating a virtual wide visual frame: expanding the view angle of the depth information intermediate frame obtained by calculation, so that more points of the historical frame can be projected onto the intermediate frame to obtain a historical key frame;
and (3) calculating the coordinates of the three-dimensional gradient points in the corresponding space: calculating the coordinates of three-dimensional gradient points of a corresponding space according to the depth information of the historical key frames and the position postures of the cameras corresponding to the historical key frames;
the intermediate frame expansion visual angle comprises a historical key frame information range obtained by adjusting according to a preset proportion parameter.
Preferably, the optimizing photometric error step comprises: establishing an optimization problem by using the pose of the current camera calculated in the rough pose estimation step as an initial solution according to luminosity errors serving as error functions and the remaining information of the historical key frames and the remaining information of the gradient points, solving the optimization problem by using an iterative Gauss Newton algorithm, and calculating the final pose of the camera and the depth of the gradient points in real time;
the luminosity error is used as an error function for calculating the luminosity transformation parameters of the camera and the image, so that the optimized pixel brightness value can meet a constant condition.
The invention provides a virtual wide-viewing-angle visual odometer system based on feature point auxiliary matching, which comprises:
a motion prediction module: the method comprises the following steps that an original image is subjected to motion prediction through a constant-speed motion model, the pose of a current camera is predicted, a new image is obtained, and feature points of the original image are selected;
a feature matching module: according to the feature points extracted by the motion prediction module, performing feature matching on the original image and the new image through an optical flow method to obtain the matching relation of the same point in the two images;
a rough pose estimation module: after the feature matching module completes, removing abnormal matching point pairs by using a homography matrix and/or a fundamental matrix, and calculating the pose of the current camera by using a PnP algorithm and depth information acquired from historical key frames;
a photometric error optimization module: calculating the pixel errors of all retained gradient points over the whole image, and optimizing the photometric error with the Gauss-Newton method starting from the obtained pose of the current camera, finally obtaining the optimized, accurate camera pose;
the constant-speed motion model is used for predicting the position of the camera at the next moment by making the motion of the camera move at a constant speed within preset time;
the historical key frame depth information includes: the information which is saved in the calculation process and is corresponding to the image frame with value in calculation and comprises the position, the posture and the depth of the corresponding three-dimensional point cloud
A gradient point is a pixel whose image gradient at its position is large, so that it differs clearly from its surroundings.
Preferably, the feature point selection in the motion prediction module includes: down-sampling the original image to a preset value, and then extracting and retaining the corner points of the whole original image using the Harris corner algorithm.
Preferably, the feature matching module comprises: deducing the rough initial position of the corner point on the original down-sampled image on the new image according to the feature point extracted by the motion prediction module and the depth information acquired from the historical key frame, and then calculating the accurate position of the corner point on the down-sampled image on the new image by using an optical flow method to complete corner point matching.
Preferably, the PnP algorithm in the rough pose estimation module includes: calculating an algorithm of the position and the posture of a camera for shooting a new image under the condition of knowing the coordinates of the three-dimensional characteristic gradient points of the space and the two-dimensional coordinates corresponding to the image according to the geometric projection relation;
the calculating of the coordinates of the corresponding three-dimensional points in the space comprises:
determining a key frame module: calculating gradient information of each pixel point for each frame of image, reserving points with the maximum gradient and larger than a preset threshold value within a preset range, and reserving the number of required gradient points by using random sampling; after the number of the gradient points is determined, the current image frame is set as a key frame and reserved when the result is greater than a preset threshold value by calculating the visual angle, displacement and luminosity errors of the current camera relative to the last key frame and carrying out weighted summation on the visual angle, the displacement and the luminosity errors;
the depth information intermediate frame calculation module: calculating a depth information value by using the information of the historical key frame, projecting a point with known depth in the historical key frame onto a last key frame of a key frame which is reserved for the last time, and giving a weight of a gradient point to the depth according to the gradient information of the reserved gradient point to generate a depth information intermediate frame;
a calculation module of virtual wide visual frames: expanding the view angle of the depth information intermediate frame obtained by calculation, so that more points of the historical frame can be projected onto the intermediate frame to obtain a historical key frame;
the corresponding space three-dimensional gradient point coordinate calculation module: calculating the coordinates of three-dimensional gradient points of a corresponding space according to the depth information of the historical key frames and the position postures of the cameras corresponding to the historical key frames;
the intermediate frame expansion visual angle comprises a historical key frame information range obtained by adjusting according to a preset proportion parameter.
Preferably, the optimization photometric error module comprises: establishing an optimization problem by taking the position and pose of the current camera calculated in the rough position and pose estimation module as an initial solution according to luminosity errors serving as error functions and the retained information of the historical key frames and the retained information of the gradient points, solving the optimization problem by using an iterative Gauss Newton algorithm, and calculating the final position and pose of the camera and the depth of the gradient points in real time;
the luminosity error is used as an error function to calculate the luminosity transformation parameters of the camera and the image, so that the pixel brightness value after optimization can meet a constant condition.
Compared with the prior art, the invention has the following beneficial effects:
1. the invention can effectively enhance the characteristic point tracking effect when the camera (robot) rotates, so that the whole system is more robust;
2. the invention can overcome the problem that the traditional visual odometer cannot work normally during violent movement and rapid rotation, can be used for navigation algorithms of autonomous navigation robots such as unmanned planes, unmanned vehicles and the like, and can be used for providing self position information in tasks such as autonomous navigation, exploration, investigation and the like.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic diagram of a virtual wide-view intermediate frame provided by the present invention;
FIG. 2 is a schematic diagram of a process for matching and tracking middle feature points according to the present invention;
FIG. 3 is a diagram illustrating feature point matching results provided by the present invention;
fig. 4 is a schematic diagram of an overall algorithm flow of the virtual wide-viewing-angle visual odometry method based on feature point matching assistance provided by the invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will aid those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any manner. It should be noted that it would be obvious to those skilled in the art that various changes and modifications can be made without departing from the spirit of the invention, all of which fall within the scope of the present invention.
For autonomous navigation of a robot, the basic requirements are accuracy and robustness of local relative positioning. When the direct-method visual odometer undergoes severe rotational motion, the system may drift heavily or lose track. On the one hand, many new key frames are created to prevent loss of tracking when aggressive rotation occurs; but these newly created key frames are typically too close together, and the direct-method visual odometry system enters a scale-drift state. On the other hand, when the rotation is too fast for the direct method to track the features successfully, the system enters a lost state. Under the above conditions, large positioning errors are introduced into the direct-method visual odometry system.
It is well known that direct-method visual odometry systems based on panoramic cameras can overcome the problems of pure rotation and fast turns. Inspired by this property of panoramic cameras, a direct-method visual odometry system is provided for pinhole cameras, one of whose characteristics is a virtual wide-view-angle tracking module. A conventional direct-method visual odometry system retains only the points that appear in the current view of the camera; to keep enough points to track, it must continually create new key frames. By creating virtual wide-view frames, more point-pair constraint relationships between frames can be observed, so tracking remains good even under violent rotation. In addition, because the feature-based method can track features with sub-pixel precision, the matching result of the feature-based method is used as the initialization state in order to further improve pose-calculation precision, preventing tracking loss and improving accuracy.
The binocular visual odometry framework based on the direct method uses a rectified stereo sensor and deduces the motion state of the camera directly from the brightness values of pixels in the image, providing odometry information for the camera. Specifically, a robust initial value for key-frame tracking is obtained from the stability of feature-point matching, which increases the stability and robustness of the visual odometry; a virtual wide-view projection key frame is created, more inter-frame constraint information is acquired, and the spacing between key frames is increased, which solves the scale drift caused by overly dense key frames. The invention can overcome the problem that a traditional visual odometer cannot work normally during violent motion and fast rotation; it can be used in navigation algorithms of autonomous robots such as unmanned aerial vehicles and unmanned vehicles, and can provide the robot's own position information in tasks such as autonomous navigation, exploration and reconnaissance.
The invention provides a virtual wide-viewing-angle visual odometry method based on feature point auxiliary matching, which comprises the following steps:
a motion prediction step: the method comprises the following steps that an original image is subjected to motion prediction through a constant-speed motion model, the pose of a current camera is predicted, a new image is obtained, and feature points of the original image are selected;
a characteristic matching step: as shown in fig. 3, according to the feature points extracted in the motion prediction step, performing feature matching on the original image and the new image by an optical flow method to obtain a matching relationship of the same point in the two images; matching and calculating the positions of the same points in different images before and after feature point matching;
rough pose estimation: after the feature matching step is completed, removing abnormal matching point pairs by using a homography matrix and/or a fundamental matrix, and calculating the pose of the current camera by using a PnP algorithm and depth information acquired from historical key frames; this makes the single-frame pose calculation more robust than in the original direct method;
a photometric error optimization step: calculating the pixel errors of all retained gradient points over the whole image, and optimizing the photometric error with the Gauss-Newton method starting from the obtained pose of the current camera, finally obtaining the optimized, accurate camera pose;
the constant-speed motion model is used for predicting the position of the camera at the next moment by making the motion of the camera move at a constant speed within preset time;
the historical key frame depth information includes: the information which is stored in the calculation process and is corresponding to the calculation valuable image frame and comprises the position, the posture and the depth of the corresponding three-dimensional point cloud;
a gradient point is a pixel whose image gradient at its position is large, so that the point differs clearly from its surroundings; such points are used later in the algorithm for the photometric error calculation, and because they differ clearly from the surrounding points they improve the matching accuracy.
Specifically, the feature point selection in the motion prediction step includes: down-sampling the original image to one eighth of its size, and then extracting and retaining the corner points of the whole original image using the Harris corner algorithm.
Specifically, the feature matching step includes: deducing the rough initial position of the corner point on the original down-sampled image on the new image according to the feature point extracted in the motion prediction step and the depth information acquired from the historical key frame, and then calculating the accurate position of the corner point on the down-sampled image on the new image by using an optical flow method to complete corner point matching.
Specifically, the PnP algorithm in the rough pose estimation step includes: calculating an algorithm of a camera position and a camera attitude for shooting a new image under the condition of knowing the coordinates of the three-dimensional characteristic gradient points of the space and the two-dimensional coordinates corresponding to the image according to the geometric projection relation;
the calculating of the coordinates of the corresponding three-dimensional points in the space comprises:
determining a key frame: calculating gradient information of each pixel point for each frame image, reserving the points with the maximum gradient and larger than a preset threshold value within the range of 2x2, and reserving the number of required gradient points by using random sampling; after the number of the gradient points is determined, the current image frame is set as a key frame and reserved when the result is greater than a preset threshold value by calculating the visual angle, displacement and luminosity errors of the current camera relative to the last key frame and carrying out weighted summation on the visual angle, the displacement and the luminosity errors;
and a depth information intermediate frame calculation step: calculating a depth information value by using the information of the historical key frame, projecting a point with known depth in the historical key frame onto the last key frame of the key frame which is reserved for the last time, giving the gradient point weight to the depth according to the gradient information of the reserved gradient point, and generating a depth information intermediate frame;
calculating a virtual wide visual frame: expanding the view angle of the depth information intermediate frame obtained by calculation, so that more points of the historical frame can be projected onto the intermediate frame to obtain a historical key frame;
and (3) calculating the coordinates of the three-dimensional gradient points in the corresponding space: calculating the coordinates of the corresponding spatial three-dimensional gradient points according to the depth information of the historical key frames and the position postures of the cameras corresponding to the historical key frames;
and the intermediate frame expansion visual angle comprises a historical key frame information range obtained by adjusting according to a preset proportion parameter. And the intermediate frame visual angle is expanded to store more depth information, so that the visual odometer can obtain more matching points in the rotation process, and the rotation robustness of the visual odometer is enhanced.
Specifically, the step of optimizing photometric errors includes: establishing an optimization problem by taking the position and pose of the current camera calculated in the rough position and pose estimation step as an initial solution according to luminosity errors serving as error functions and the retained information of the historical key frames and the retained information of the gradient points, solving the optimization problem by using an iterative Gauss Newton algorithm, and calculating the final position and pose of the camera and the depth of the gradient points in real time;
the luminosity error is used as an error function for calculating the luminosity transformation parameters of the camera and the image, so that the pixel brightness value after optimization can meet the approximately constant condition.
The relationship between the photometric transformation parameters and pixel brightness is that the longer the exposure time, the larger the pixel value. Computing the photometric parameters amounts to correcting the pixel brightness values: from the exposure time, the brightness a pixel would have under other exposure times can be computed, so once one exposure is chosen as the standard, all other pixels are unified to that standard, i.e. corrected;
the photometric transformation parameters are the parameters that influence the pixels, such as light transmittance and exposure time; using the computed illumination parameters, the brightness values of earlier and later pixels can be expressed under a common exposure time and the same transmittance, which ensures that the brightness value of the same spatial point stays essentially constant.
The invention provides a virtual wide-view visual odometer system based on feature point auxiliary matching, which comprises:
a motion prediction module: the method comprises the following steps that an original image is subjected to motion prediction through a constant-speed motion model, the pose of a current camera is predicted, a new image is obtained, and feature points of the original image are selected;
a feature matching module: according to the feature points extracted by the motion prediction module, performing feature matching on the original image and the new image through an optical flow method to obtain the matching relation of the same point in the two images; matching and calculating the positions of the same points in different images before and after feature point matching;
a rough pose estimation module: after the feature matching module completes, removing abnormal matching point pairs by using a homography matrix and/or a fundamental matrix, and calculating the pose of the current camera by using a PnP algorithm and depth information acquired from historical key frames; this makes the single-frame pose calculation more robust than in the original direct method;
a photometric error optimization module: calculating the pixel errors of all retained gradient points over the whole image, and optimizing the photometric error with the Gauss-Newton method starting from the obtained pose of the current camera, finally obtaining the optimized, accurate camera pose;
the constant-speed motion model is used for predicting the position of the camera at the next moment by making the motion of the camera move at a constant speed within preset time;
the historical key frame depth information includes: the information which is stored in the calculation process and is corresponding to the calculation valuable image frame and comprises the position, the posture and the depth of the corresponding three-dimensional point cloud;
a gradient point is a pixel whose image gradient at its position is large, so that the point differs clearly from its surroundings; such points are used later in the algorithm for the photometric error calculation, and because they differ clearly from the surrounding points they improve the matching accuracy.
Specifically, the feature point selection in the motion prediction module includes: down-sampling the original image to one eighth of its size, and then extracting and retaining the corner points of the whole original image using the Harris corner algorithm.
Specifically, the feature matching module includes: deducing the rough initial position of the corner point on the original down-sampled image on the new image according to the feature point extracted by the motion prediction module and the depth information acquired from the historical key frame, and then calculating the accurate position of the corner point on the down-sampled image on the new image by using an optical flow method to complete corner point matching.
Specifically, the PnP algorithm in the rough pose estimation module includes: calculating an algorithm of a camera position and a camera attitude for shooting a new image under the condition of knowing the coordinates of the three-dimensional characteristic gradient points of the space and the two-dimensional coordinates corresponding to the image according to the geometric projection relation;
the calculating of the coordinates of the corresponding three-dimensional points in the space comprises:
determining a key frame module: calculating gradient information of each pixel point for each frame image, reserving the points with the maximum gradient and larger than a preset threshold value within the range of 2x2, and reserving the number of required gradient points by using random sampling; after the number of the gradient points is determined, the current image frame is set as a key frame and reserved when the result is greater than a preset threshold value by calculating the visual angle, displacement and luminosity errors of the current camera relative to the last key frame and carrying out weighted summation on the visual angle, the displacement and the luminosity errors;
the depth information intermediate frame calculation module: calculating a depth information value by using the information of the historical key frame, projecting a point with known depth in the historical key frame onto the last key frame of the key frame which is reserved for the last time, giving the gradient point weight to the depth according to the gradient information of the reserved gradient point, and generating a depth information intermediate frame;
a calculation module of virtual wide visual frames: expanding the view angle of the depth information intermediate frame obtained by calculation, so that more points of the historical frame can be projected onto the intermediate frame to obtain a historical key frame;
the corresponding space three-dimensional gradient point coordinate calculation module: calculating the coordinates of the corresponding spatial three-dimensional gradient points according to the depth information of the historical key frames and the position postures of the cameras corresponding to the historical key frames;
the intermediate frame expansion visual angle comprises a historical key frame information range obtained by adjusting according to a preset proportion parameter. And the intermediate frame visual angle is expanded to store more depth information, so that the visual odometer can obtain more matching points in the rotation process, and the rotation robustness of the visual odometer is enhanced.
Specifically, the optimization photometric error module includes: establishing an optimization problem by taking the position and pose of the current camera calculated in the rough position and pose estimation module as an initial solution according to luminosity errors serving as error functions and the retained information of the historical key frames and the retained information of the gradient points, solving the optimization problem by using an iterative Gauss Newton algorithm, and calculating the final position and pose of the camera and the depth of the gradient points in real time;
the luminosity error is used as an error function for calculating the luminosity transformation parameters of the camera and the image, so that the pixel brightness value after optimization can meet the approximately constant condition.
The relationship between the photometric transformation parameters and pixel brightness is that the longer the exposure time, the larger the pixel value. Computing the photometric parameters amounts to correcting the pixel brightness values: from the exposure time, the brightness a pixel would have under other exposure times can be computed, so once one exposure is chosen as the standard, all other pixels are unified to that standard, i.e. corrected;
the photometric transformation parameters are the parameters that influence the pixels, such as light transmittance and exposure time; using the computed illumination parameters, the brightness values of earlier and later pixels can be expressed under a common exposure time and the same transmittance, which ensures that the brightness value of the same spatial point stays essentially constant.
The following preferred examples further illustrate the invention:
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications can be made by persons skilled in the art without departing from the concept of the invention, all of which fall within the scope of the present invention.
This example is an improvement over the traditional direct-method visual odometry framework: it provides a feature-point-matching-assisted pose computation method and a computation method for virtual wide-view frames. The invention provides a virtual wide-view visual odometry method based on feature-point matching assistance, which comprises the following steps:
Initializing the visual odometry system: the depth information of the high-gradient points of the current scene is extracted through binocular vision to generate the first key frame, and the relative pose of the next frame is calculated with the PnP algorithm.
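As an illustration of this initialization step, the following Python sketch estimates a rough depth map from a rectified stereo pair using OpenCV's semi-global block matching; the SGBM parameters and the helper name are illustrative assumptions, not part of the patent disclosure.

```python
import cv2
import numpy as np

def stereo_init_depth(left, right, fx, baseline):
    """Rough depth of pixels from a rectified stereo pair for the first key
    frame. SGBM parameters are illustrative, not prescribed by the patent."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp = sgbm.compute(left, right).astype(np.float32) / 16.0  # SGBM outputs fixed-point disparity
    depth = np.full_like(disp, np.nan)
    valid = disp > 0.5
    depth[valid] = fx * baseline / disp[valid]                  # Z = f * B / d
    return depth
```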
Gradient point selection: for each frame of image, the gradient information of each pixel is computed; within every 2x2 range the point with the largest gradient that also exceeds a certain threshold is retained, and random sampling is finally used to keep only the number of gradient points the system requires.
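A minimal sketch of this gradient point selection, assuming a grayscale image; the threshold and point-count values are illustrative (the patent does not fix these numbers).

```python
import numpy as np

def select_gradient_points(gray, grad_thresh=20.0, max_points=2000, seed=0):
    """Keep the strongest-gradient pixel in every 2x2 block (if above a
    threshold), then randomly subsample to the number of points required."""
    gray = gray.astype(np.float32)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    gx[:, 1:-1] = 0.5 * (gray[:, 2:] - gray[:, :-2])   # central differences
    gy[1:-1, :] = 0.5 * (gray[2:, :] - gray[:-2, :])
    mag = np.sqrt(gx ** 2 + gy ** 2)

    h, w = mag.shape
    h2, w2 = h - h % 2, w - w % 2                       # crop to even size
    blocks = mag[:h2, :w2].reshape(h2 // 2, 2, w2 // 2, 2).transpose(0, 2, 1, 3)
    blocks = blocks.reshape(h2 // 2, w2 // 2, 4)
    best = blocks.argmax(axis=2)                        # winner inside each 2x2 block
    best_mag = blocks.max(axis=2)

    by, bx = np.nonzero(best_mag > grad_thresh)
    dy, dx = np.unravel_index(best[by, bx], (2, 2))
    pts = np.stack([bx * 2 + dx, by * 2 + dy], axis=1)  # (x, y) pixel coordinates

    if len(pts) > max_points:                           # random subsampling
        rng = np.random.default_rng(seed)
        pts = pts[rng.choice(len(pts), max_points, replace=False)]
    return pts
```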
Feature point selection: for each frame of image, the image is down-sampled to one eighth of its size to reduce the time needed for corner computation, and the Harris corner algorithm is then used to extract and retain the corner points of the whole image.
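A corresponding sketch of the corner selection, using OpenCV's Harris-based corner detector on the one-eighth-size image; the quality and distance parameters are assumptions.

```python
import cv2
import numpy as np

def select_corners(gray, scale=1.0 / 8.0, max_corners=300):
    """Downsample the frame to one eighth of its size, detect Harris corners
    there, and scale the corner coordinates back to the full image."""
    small = cv2.resize(gray, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    corners = cv2.goodFeaturesToTrack(
        small, maxCorners=max_corners, qualityLevel=0.01,
        minDistance=5, useHarrisDetector=True, k=0.04)
    if corners is None:
        return np.empty((0, 2), np.float32)
    return corners.reshape(-1, 2) / scale               # back to full resolution
```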
Calculating camera image parameters: while deriving the motion state of the camera, the photometric transformation parameters of the camera and the image are also computed, so that the corrected pixel brightness values satisfy an approximately constant condition.
Matching the feature points: before feature point matching, the constant-speed motion model is used to predict the pose of the current camera and feature points are selected; the rough initial positions on the current frame of the corners found on the down-sampled image are deduced from the predicted pose and the depth information acquired from the historical key frames, and the accurate positions of these corners on the current frame are finally computed by an optical flow method, completing the corner matching.
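This matching step can be sketched with OpenCV's pyramidal Lucas-Kanade tracker, seeded with the coarse positions predicted from the motion model and key-frame depths; the window size and pyramid depth are illustrative.

```python
import cv2
import numpy as np

def match_corners(prev_img, cur_img, prev_pts, predicted_pts):
    """Refine coarse predicted corner positions with pyramidal Lucas-Kanade
    optical flow. OPTFLOW_USE_INITIAL_FLOW makes LK start from the positions
    predicted by the motion model and the key-frame depths."""
    p0 = prev_pts.astype(np.float32).reshape(-1, 1, 2)
    p1 = predicted_pts.astype(np.float32).reshape(-1, 1, 2).copy()
    p1, status, err = cv2.calcOpticalFlowPyrLK(
        prev_img, cur_img, p0, p1,
        winSize=(21, 21), maxLevel=3,
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    good = status.ravel() == 1
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
```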
Feature-point-assisted pose calculation: after the corner matching is finished, abnormal matching point pairs are removed with a homography matrix or a fundamental matrix, and the pose of the current camera is calculated with a PnP algorithm and the depth information acquired from historical key frames.
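A hedged sketch of this feature-point-assisted pose calculation: RANSAC homography outlier rejection followed by PnP on the surviving matches. Lens distortion is assumed to be zero and the RANSAC threshold is illustrative.

```python
import cv2
import numpy as np

def coarse_pose(pts_prev, pts_cur, depths, K):
    """Reject outlier matches with a RANSAC homography, lift the surviving
    previous-frame points to 3-D with their key-frame depths, and recover the
    current camera pose with PnP. K is the 3x3 intrinsic matrix."""
    pts_prev = np.asarray(pts_prev, np.float32)
    pts_cur = np.asarray(pts_cur, np.float32)
    H, mask = cv2.findHomography(pts_prev, pts_cur, cv2.RANSAC, 3.0)
    inl = mask.ravel().astype(bool)
    p_prev, p_cur, z = pts_prev[inl], pts_cur[inl], np.asarray(depths)[inl]

    # back-project the previous-frame pixels to 3-D points in that camera frame
    x = (p_prev[:, 0] - K[0, 2]) / K[0, 0] * z
    y = (p_prev[:, 1] - K[1, 2]) / K[1, 1] * z
    pts3d = np.stack([x, y, z], axis=1).astype(np.float32)

    ok, rvec, tvec, _ = cv2.solvePnPRansac(pts3d, p_cur, np.asarray(K, np.float64), None)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec          # pose of the current frame relative to the previous one
```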
Depth information intermediate frame calculation: the depth information values are computed from the historical key frames; points with known depth in the historical key frames are projected onto the last key frame, each point's depth is given a weight according to its gradient information, and a depth information intermediate frame is finally generated.
Calculating a virtual wide-view frame: the view angle of the depth information intermediate frame calculated in the previous step is expanded, so that more points of the historical frames can be projected onto the intermediate frame and more historical information is used when calculating the current camera pose. The expanded view angle of the depth information intermediate frame can be set through a proportion parameter, which adjusts the range of historical key-frame information that is acquired.
Key frame decision strategy: the view-angle change, displacement and photometric error of the current camera relative to the last key frame are computed and combined by weighted summation; when the result is greater than a set threshold, the current image frame is set as a key frame and retained.
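The key-frame decision can be sketched as a simple weighted score; the weights and the threshold below are placeholders that would be tuned in practice.

```python
def need_new_keyframe(view_angle_change, displacement, photometric_error,
                      w_angle=1.0, w_disp=1.0, w_photo=1.0, threshold=1.0):
    """Weighted sum of the view-angle change, the displacement and the
    photometric error of the current frame relative to the last key frame;
    a new key frame is created when the score exceeds the threshold."""
    score = (w_angle * view_angle_change
             + w_disp * displacement
             + w_photo * photometric_error)
    return score > threshold
```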
Pose result optimization: an optimization problem is established with the photometric error as the error function, using the retained historical key frames and the retained gradient point information; the problem is solved with an iterative Gauss-Newton algorithm, and the final camera pose and the gradient point depths are computed in real time.
The traditional binocular direct-method visual odometry system:
For a monocular direct-method visual odometry system, the optimization target is the total photometric error of several key frames inside a sliding window, and the variables to be optimized comprise the camera pose parameters, the depth parameters and the illumination-change parameters. Suppose we use a camera calibrated with both intrinsic and photometric parameters. For any pair of images, let the reference image be denoted $I_i$ and the target image $I_j$; the selected gradient point in the reference image is denoted p and its projection onto the target image is denoted p'. The error function of this pair of image frames can then be written as
$$E_{p,j,i} = \sum_{p \in N_p} \omega_p \left\| \left(I_j[p'] - b_j\right) - \frac{t_j e^{a_j}}{t_i e^{a_i}}\left(I_i[p] - b_i\right) \right\|_h \qquad (1)$$
where $N_p$ denotes the pixels in the neighbourhood of the selected gradient point, $\omega_p$ is a weight determined by the gradient magnitude at the selected gradient point, $t_j, t_i, a_j, a_i, b_j, b_i$ denote the exposure times and the illumination transformation parameters of the camera, and $\|\cdot\|_h$ denotes the Huber norm; $I_j$ denotes the target image and $I_i$ the reference image.
meanwhile, the position relation between the selected point and the projection point can be given, and the projection relation is as follows:
$$p' = \pi\!\left(R_{ji}\,\pi^{-1}(p, d_p) + t_{ji}\right) \qquad (2)$$
the pi (·) function represents a projection function, Rji,tjiRespectively showing the relative rotation and the relative displacement of the jth frame image relative to the ith frame image. Where p represents the two-dimensional coordinates of a pixel point on the image, dpRepresenting depth of p pixels
We can then give the following overall photometric error minimization problem for the whole sliding window:
$$\min_{\{T_i \in SE(3)\},\,\{d_p\}}\; \sum_{i \in F} \sum_{p \in P_i} \sum_{j \in \mathrm{obs}(p)} E_{p,j,i} \qquad (3)$$
where F denotes the set of key frames inside the sliding window, $P_i$ denotes the set of gradient points selected from the i-th key frame, $\mathrm{obs}(p)$ denotes the frames in which point p is observed, $T_i, T_j \in SE(3)$ denote the poses of the i-th and j-th frames on the special Euclidean group, and $d_p$ denotes the depth value of the selected gradient point. For a binocular camera, a photometric error term between the left and right cameras is added, and the optimization problem of the binocular direct-method visual odometry finally becomes:
$$\min_{\{T_i\},\,\{d_p\}}\; \sum_{i \in F} \sum_{p \in P_i} \left( \sum_{j \in \mathrm{obs}(p)} E_{p,j,i} + \lambda\, E_{p,l,r} \right) \qquad (4)$$
where the added term $E_{p,l,r}$ is the photometric error between the left and right camera images and $\lambda$ denotes the weight of the binocular photometric error. The optimization problem can then be solved quickly with the Gauss-Newton algorithm, yielding the finally optimized camera poses and the depth values of the gradient points.
Feature point matching assistance:
In the direct-method visual odometry system, every new frame is tracked by the tracking module to compute pose information that serves as the initial value of the final optimization. This initial value has a great influence on the final optimization result, so the tracking module must produce an accurate pose. The tracking module computes the pose of the current camera with respect to the last key frame, denoted $T_{k,k-1}$. The first step in computing this relative pose is to roughly estimate the pose of the current camera with the uniform (constant-speed) motion model, computed as
$$T_{k,\mathrm{init}} = \left(T_{k-1}\, T_{k-2}^{-1}\right) T_{k-1} \qquad (5)$$
where $T_{k,\mathrm{init}}$ denotes the predicted initial pose, $T_{k-1}$ the pose of the previous frame, and $T_{k-2}$ the pose of the frame before it.
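A one-line sketch of this constant-speed prediction with 4x4 homogeneous pose matrices; the composition order follows the reconstruction of equation (5) above.

```python
import numpy as np

def predict_pose(T_km1, T_km2):
    """Constant-velocity prediction: re-apply the last inter-frame motion.
    T_km1, T_km2 are 4x4 homogeneous pose matrices of the two previous frames."""
    delta = T_km1 @ np.linalg.inv(T_km2)   # motion from frame k-2 to frame k-1
    return delta @ T_km1                   # predicted pose of frame k
```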
A more accurate relative pose is then computed by feature point matching; the process is shown in fig. 2. Let the position of a two-dimensional point in the image be denoted x and the position of the corresponding three-dimensional spatial point be denoted X; according to the projection geometry, they are related to the pose T of that image frame by:
$$x = \pi(T, X) \qquad (6)$$
The matched two-dimensional feature points of the (k-1)-th frame and the k-th frame are then computed with the Lucas-Kanade algorithm, the positions in the k-th frame being denoted $x_k$. Abnormal matching point pairs are then rejected with a homography matrix, leaving only correctly matched pairs, and the relative pose of the current k-th frame with respect to the (k-1)-th frame is computed with the PnP algorithm:
$$T_{k,k-1} = \arg\min_{T} \sum_{x_k} \left\| x_k - \pi(T, X) \right\|^2 \qquad (7)$$
This relative pose is taken as the initial value for the last step of the tracking module: the photometric error optimization. The optimization problem of the final tracking module is expressed as:
$$\min_{T_k} \sum_{i \in F} \sum_{p \in P_i} E_{p,i,k} \qquad (8)$$
where $E_{p,i,k}$ denotes the error function in equation (1),
i.e. the sum of the photometric errors of all key frames in the sliding window relative to the current frame is optimized; solving this optimization problem, again with the Gauss-Newton algorithm, yields the pose of the current frame.
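For illustration, a generic Gauss-Newton loop with a numerical Jacobian is sketched below; the actual system optimizes the photometric errors of equation (8) over an SE(3) increment (plus brightness parameters) with analytic Jacobians, which this toy version does not reproduce.

```python
import numpy as np

def gauss_newton(residual_fn, x0, iters=10, eps=1e-6):
    """Minimal Gauss-Newton iteration: residual_fn maps a parameter vector to a
    residual vector; a forward-difference Jacobian is used for illustration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual_fn(x)                              # residual vector at x
        J = np.empty((r.size, x.size))
        for k in range(x.size):                         # numerical Jacobian
            dx = np.zeros_like(x)
            dx[k] = eps
            J[:, k] = (residual_fn(x + dx) - r) / eps
        step = np.linalg.solve(J.T @ J + 1e-9 * np.eye(x.size), -J.T @ r)
        x = x + step
        if np.linalg.norm(step) < 1e-10:
            break
    return x
```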
Constructing a virtual wide-view depth frame:
Referring to fig. 1, in the tracking process of the direct-method visual odometer, the gradient points of known depth of all key frames in the sliding window need to be projected onto the last key frame, giving a depth frame that carries depth values. The construction of the virtual wide-view depth frame is therefore of great significance. When constructing the virtual wide-view depth frame, gradient points outside the field of view of the original camera are retained, and the virtual frame is enlarged to S times the size of the original frame:
$$w' = S\,w, \qquad h' = S\,h, \qquad \phi_x = 2\arctan\frac{w}{2f}, \qquad \phi_y = 2\arctan\frac{h}{2f}, \qquad \phi'_x = 2\arctan\frac{w'}{2f}, \qquad \phi'_y = 2\arctan\frac{h'}{2f} \qquad (9)$$
where w, h are the width and height of the original depth frame, w', h' the width and height of the virtual depth frame, $\phi_x, \phi_y$ the view angles of the original depth frame in the x and y directions, $\phi'_x, \phi'_y$ the view angles of the virtual depth frame in the x and y directions, and f the focal length of the camera. After this conversion, more gradient-point depth information can usually be retained during rotation, as shown in fig. 4, ensuring the robustness of the tracking module under fast rotation.
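A sketch of the virtual wide-view frame construction: the image plane of the last key frame is enlarged by the factor S while the focal length is kept, so points that fall outside the physical field of view are still recorded. The enlargement factor, the re-centred principal point and the data layout are illustrative assumptions.

```python
import numpy as np

def build_virtual_wide_frame(keyframe_points, T_ref_from_kf, K, w, h, S=1.5):
    """Project key-frame points with known depth into a virtual frame centred on
    the last key frame whose image plane is S times larger than the real one
    (w' = S*w, h' = S*h with the same focal length)."""
    w_v, h_v = int(S * w), int(S * h)
    K_v = K.copy().astype(float)
    K_v[0, 2] = w_v / 2.0                       # re-centre the principal point (assumption)
    K_v[1, 2] = h_v / 2.0
    kept = []
    for X_kf, depth_weight in keyframe_points:  # 3-D point in key-frame coords, weight
        X = T_ref_from_kf[:3, :3] @ X_kf + T_ref_from_kf[:3, 3]
        if X[2] <= 0:
            continue
        u = K_v[0, 0] * X[0] / X[2] + K_v[0, 2]
        v = K_v[1, 1] * X[1] / X[2] + K_v[1, 2]
        if 0 <= u < w_v and 0 <= v < h_v:       # inside the *enlarged* frame
            kept.append((u, v, X[2], depth_weight))
    return kept, (w_v, h_v), K_v
```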
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (8)

1. A virtual wide-view visual odometry method based on feature point auxiliary matching is characterized by comprising the following steps:
a motion prediction step: the method comprises the following steps that an original image is subjected to motion prediction through a constant-speed motion model, the pose of a current camera is predicted, a new image is obtained, and feature points of the original image are selected;
a characteristic matching step: according to the feature points extracted in the motion prediction step, performing feature matching on the original image and the new image through an optical flow method to obtain the matching relation of the same point in the two images;
rough pose estimation: after the feature matching step is completed, removing abnormal matching point pairs by using a homography matrix and/or a fundamental matrix, and calculating the pose of the current camera by using a PnP algorithm and depth information acquired from historical key frames;
a photometric error optimization step: calculating the pixel errors of all retained gradient points over the whole image, and optimizing the photometric error with the Gauss-Newton method starting from the obtained pose of the current camera, finally obtaining the optimized, accurate camera pose;
the constant-speed motion model is used for performing constant-speed motion on the motion of the camera within preset time and predicting the position of the camera at the next moment;
the historical key frame depth information includes: the information which is saved in the calculation process and is corresponding to the calculation valuable image frame comprises the position, the posture and the depth of the corresponding three-dimensional point cloud;
the PnP algorithm in the rough pose estimation step comprises the following steps: calculating an algorithm of the position and the posture of a camera for shooting a new image under the condition of knowing the coordinates of the three-dimensional characteristic gradient points of the space and the two-dimensional coordinates corresponding to the image according to the geometric projection relation;
calculating the coordinates of the three-dimensional gradient points in the corresponding space comprises the following steps:
determining a key frame: calculating gradient information of each pixel point for each frame of image, reserving points with the maximum gradient and larger than a preset threshold value within a preset range, and reserving the number of required gradient points by using random sampling; after the number of the gradient points is determined, the current image frame is set as a key frame and reserved when the result is greater than a preset threshold value by calculating the visual angle, displacement and luminosity errors of the current camera relative to the last key frame and carrying out weighted summation on the visual angle, the displacement and the luminosity errors;
and a depth information intermediate frame calculation step: calculating a depth information value by using the information of the historical key frame, projecting a point with known depth in the historical key frame onto the last key frame of the key frame which is reserved for the last time, giving the gradient point weight to the depth according to the gradient information of the reserved gradient point, and generating a depth information intermediate frame;
calculating a virtual wide view angle: expanding the view angle of the depth information intermediate frame obtained by calculation, so that more points of the historical frame can be projected onto the intermediate frame to obtain a historical key frame;
and (3) calculating the coordinates of the three-dimensional gradient points in the corresponding space: calculating the coordinates of the corresponding spatial three-dimensional gradient points according to the depth information of the historical key frames and the position postures of the cameras corresponding to the historical key frames;
and the intermediate frame expansion visual angle comprises a historical key frame information range obtained by adjusting according to a preset proportion parameter.
2. The virtual wide-viewing-angle visual odometry method based on feature point auxiliary matching as claimed in claim 1, wherein the feature point selection in the motion prediction step comprises: down-sampling the original image to a preset value, and then extracting and retaining the corner points of the whole original image using the Harris corner algorithm.
3. The virtual wide-viewing-angle visual odometry method based on feature point auxiliary matching is characterized in that the feature matching step comprises the following steps: deducing the rough initial position of the corner point on the original down-sampled image on the new image according to the feature point extracted in the motion prediction step and the depth information acquired from the historical key frame, and then calculating the accurate position of the corner point on the down-sampled image on the new image by using an optical flow method to complete corner point matching.
4. The virtual wide-view visual odometry method based on feature point assisted matching as claimed in claim 1, wherein the photometric error optimization step comprises: taking the photometric error as the error function, establishing an optimization problem from the retained historical key frame information and the retained gradient point information, with the current camera position and posture calculated in the rough pose estimation step as the initial solution, solving the optimization problem by an iterative Gauss-Newton algorithm, and calculating the final camera position and posture and the gradient point depths in real time;
the photometric error serves as the error function for calculating the photometric transformation parameters of the camera and the image, so that the optimized pixel brightness values satisfy a brightness-constancy condition.
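A full Gauss-Newton solver over camera pose and gradient-point depths is beyond a short example; the fragment below illustrates only the iterative Gauss-Newton machinery on an assumed affine photometric transformation model I_new ≈ exp(a)·I_ref + b. The model, the function name, and the convergence threshold are illustrative assumptions, not the patent's exact formulation.

```python
import numpy as np

def gauss_newton_brightness(I_ref, I_new, iters=10):
    """Gauss-Newton on the affine photometric model I_new ≈ exp(a)*I_ref + b.
    I_ref, I_new: 1-D arrays of pixel intensities at matched gradient points."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        pred = np.exp(a) * I_ref + b
        r = I_new - pred                        # photometric residuals
        # Jacobian of the residual w.r.t. (a, b): [-exp(a)*I_ref, -1]
        J = np.column_stack([-np.exp(a) * I_ref, -np.ones_like(I_ref)])
        H = J.T @ J                             # Gauss-Newton normal equations
        g = J.T @ r
        delta = np.linalg.solve(H, -g)
        a += delta[0]
        b += delta[1]
        if np.linalg.norm(delta) < 1e-8:        # converged
            break
    return a, b, r
```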
5. A virtual wide-view visual odometry system based on feature point assisted matching, comprising:
a motion prediction module: performing motion prediction on the original image through a constant-velocity motion model, predicting the pose of the current camera, obtaining a new image, and selecting feature points of the original image;
a feature matching module: performing, according to the feature points extracted by the motion prediction module, feature matching between the original image and the new image by the optical flow method, to obtain the matching relation of the same points in the two images;
a rough pose estimation module: after the feature matching module completes, removing abnormal matching point pairs using a homography matrix and/or a fundamental matrix, and calculating the pose of the current camera using the PnP algorithm and the depth information acquired from the historical key frames;
a photometric error optimization module: calculating the pixel errors of all retained gradient points and, according to the obtained pose of the current camera, optimizing the photometric error by the Gauss-Newton descent method, finally obtaining the optimized accurate position and posture of the camera;
the constant-velocity motion model assumes that the camera moves at a constant velocity within a preset time, and predicts the position of the camera at the next moment;
the historical key frame depth information comprises: information saved during the calculation process for image frames judged valuable to the calculation, including the position and posture of those frames and the depth of the corresponding three-dimensional point cloud;
the PnP algorithm in the rough pose estimation module comprises: an algorithm that, given the coordinates of the spatial three-dimensional feature gradient points and the corresponding two-dimensional image coordinates, calculates the position and posture of the camera that captured the new image according to the geometric projection relation;
calculating the coordinates of the three-dimensional gradient points in the corresponding space involves the following modules:
a key frame determination module: for each image frame, calculating gradient information for every pixel point, retaining, within each preset range, the point whose gradient is largest and exceeds a preset threshold value, and using random sampling to retain the required number of gradient points; after the gradient points are determined, calculating the viewing angle change, displacement, and photometric error of the current camera relative to the last key frame, forming a weighted sum of the three, and, when the result exceeds a preset threshold value, setting the current image frame as a key frame and retaining it;
a depth information intermediate frame calculation module: calculating depth information values from the information of the historical key frames, projecting points with known depth in the historical key frames onto the most recently retained key frame, weighting each depth according to the gradient information of the retained gradient points, and generating a depth information intermediate frame;
a virtual wide view angle calculation module: expanding the viewing angle of the calculated depth information intermediate frame, so that more points of the historical frames can be projected onto the intermediate frame, to obtain the historical key frame information (a projection sketch follows this claim);
a corresponding spatial three-dimensional gradient point coordinate calculation module: calculating the coordinates of the corresponding spatial three-dimensional gradient points according to the depth information of the historical key frames and the positions and postures of the cameras corresponding to the historical key frames;
the expanded viewing angle of the intermediate frame comprises a historical key frame information range adjusted according to a preset proportion parameter.
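The depth information intermediate frame and virtual wide view angle modules project points with known depth from historical key frames onto the latest key frame, accepting projections that land on an image plane enlarged by a preset proportion parameter. The sketch below illustrates that projection under a simple pinhole model; the function name, the pose convention, and the expansion ratio are assumptions for illustration only.

```python
import numpy as np

def project_into_wide_intermediate_frame(pts_3d_world, T_world_to_kf, K,
                                         img_size, expand_ratio=1.5):
    """Project historical key-frame points (known depth, world coordinates)
    into the latest key frame, accepting projections on a virtual image plane
    whose extent is enlarged by expand_ratio (virtual wide view angle)."""
    h, w = img_size
    # Transform world points into the key-frame camera coordinates: X_cam = R*X_world + t.
    R, t = T_world_to_kf[:3, :3], T_world_to_kf[:3, 3]
    pts_cam = pts_3d_world @ R.T + t
    in_front = pts_cam[:, 2] > 0                      # keep points in front of the camera
    pts_cam = pts_cam[in_front]
    # Pinhole projection onto the (possibly out-of-bounds) pixel plane.
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    # Expanded bounds: a margin of (expand_ratio - 1)/2 of the image on each side.
    mx, my = 0.5 * (expand_ratio - 1.0) * w, 0.5 * (expand_ratio - 1.0) * h
    inside = ((uv[:, 0] >= -mx) & (uv[:, 0] < w + mx) &
              (uv[:, 1] >= -my) & (uv[:, 1] < h + my))
    return uv[inside], pts_cam[inside, 2]             # accepted pixel coordinates and depths
```

Points accepted on the enlarged plane are exactly the extra historical observations that a physically narrower field of view would discard, which is what the claims call the virtual wide view angle.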
6. The system of claim 5, wherein the feature point selection in the motion prediction module comprises: down-sampling the original image to a preset size, and then extracting and retaining corner points over the whole down-sampled image using the Harris corner algorithm.
7. The system of claim 5, wherein the feature matching module comprises: deducing, from the feature points extracted by the motion prediction module and the depth information acquired from the historical key frames, the rough initial positions on the new image of the corner points of the down-sampled original image, and then calculating the accurate positions of those corner points on the new image by the optical flow method, thereby completing corner point matching.
8. The virtual wide-perspective visual odometry system based on feature point assisted matching as claimed in claim 5, wherein the photometric error optimization module comprises: taking the photometric error as the error function, establishing an optimization problem from the retained historical key frame information and the retained gradient point information, with the current camera pose calculated by the rough pose estimation module as the initial solution, solving the optimization problem by an iterative Gauss-Newton algorithm, and calculating the final camera pose and the gradient point depths in real time;
the photometric error serves as the error function for calculating the photometric transformation parameters of the camera and the image, so that the optimized pixel brightness values satisfy a brightness-constancy condition.
CN202010042019.8A 2020-01-15 2020-01-15 Virtual wide-view visual odometer method and system based on feature point auxiliary matching Active CN111210463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010042019.8A CN111210463B (en) 2020-01-15 2020-01-15 Virtual wide-view visual odometer method and system based on feature point auxiliary matching

Publications (2)

Publication Number Publication Date
CN111210463A CN111210463A (en) 2020-05-29
CN111210463B true CN111210463B (en) 2022-07-15

Family

ID=70786923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010042019.8A Active CN111210463B (en) 2020-01-15 2020-01-15 Virtual wide-view visual odometer method and system based on feature point auxiliary matching

Country Status (1)

Country Link
CN (1) CN111210463B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111829522B (en) * 2020-07-02 2022-07-12 浙江大华技术股份有限公司 Instant positioning and map construction method, computer equipment and device
CN111815679B (en) * 2020-07-27 2022-07-26 西北工业大学 Binocular camera-based trajectory prediction method during loss of spatial target feature points
CN111968157B (en) * 2020-08-13 2024-05-28 深圳国信泰富科技有限公司 Visual positioning system and method applied to high-intelligent robot
CN112066988B (en) * 2020-08-17 2022-07-26 联想(北京)有限公司 Positioning method and positioning equipment
CN112037261A (en) * 2020-09-03 2020-12-04 北京华捷艾米科技有限公司 Method and device for removing dynamic features of image
CN112348889B (en) * 2020-10-23 2024-06-07 浙江商汤科技开发有限公司 Visual positioning method, and related device and equipment
CN112525326A (en) * 2020-11-21 2021-03-19 西安交通大学 Computer vision measurement method for three-dimensional vibration of unmarked structure
CN112634305B (en) * 2021-01-08 2023-07-04 哈尔滨工业大学(深圳) Infrared visual odometer implementation method based on edge feature matching
CN113012197B (en) * 2021-03-19 2023-07-18 华南理工大学 Binocular vision odometer positioning method suitable for dynamic traffic scene
CN113362377B (en) * 2021-06-29 2022-06-03 东南大学 VO weighted optimization method based on monocular camera
CN113592947B (en) * 2021-07-30 2024-03-12 北京理工大学 Method for realizing visual odometer by semi-direct method
CN115115708B (en) * 2022-08-22 2023-01-17 荣耀终端有限公司 Image pose calculation method and system
CN115690205B (en) * 2022-10-09 2023-12-05 北京自动化控制设备研究所 Visual relative pose measurement error estimation method based on point-line comprehensive characteristics
CN117191047B (en) * 2023-11-03 2024-02-23 南京信息工程大学 Unmanned aerial vehicle self-adaptive active visual navigation method and device in low-light environment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107025668A (en) * 2017-03-30 2017-08-08 华南理工大学 A kind of design method of the visual odometry based on depth camera
CN107610175A (en) * 2017-08-04 2018-01-19 华南理工大学 The monocular vision SLAM algorithms optimized based on semi-direct method and sliding window
CN108010081A (en) * 2017-12-01 2018-05-08 中山大学 A kind of RGB-D visual odometry methods based on Census conversion and Local map optimization
CN109544636A (en) * 2018-10-10 2019-03-29 广州大学 A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method
CN109871024A (en) * 2019-01-04 2019-06-11 中国计量大学 A kind of UAV position and orientation estimation method based on lightweight visual odometry
CN110375765A (en) * 2019-06-28 2019-10-25 上海交通大学 Visual odometry method, system and storage medium based on direct method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Visual Odometry Based on Optical Flow Tracking and Feature Matching; Jia Zhe; China Masters' Theses Full-text Database, Information Science and Technology Series; 2019-08-15; pp. 23-31 *

Also Published As

Publication number Publication date
CN111210463A (en) 2020-05-29

Similar Documents

Publication Publication Date Title
CN111210463B (en) Virtual wide-view visual odometer method and system based on feature point auxiliary matching
CN111024066B (en) Unmanned aerial vehicle vision-inertia fusion indoor positioning method
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN111561923B (en) SLAM (simultaneous localization and mapping) mapping method and system based on multi-sensor fusion
CN109307508B (en) Panoramic inertial navigation SLAM method based on multiple key frames
CN109211241B (en) Unmanned aerial vehicle autonomous positioning method based on visual SLAM
US9613420B2 (en) Method for locating a camera and for 3D reconstruction in a partially known environment
CN111707281B (en) SLAM system based on luminosity information and ORB characteristics
WO2021035669A1 (en) Pose prediction method, map construction method, movable platform, and storage medium
CN112304307A (en) Positioning method and device based on multi-sensor fusion and storage medium
CN111462207A (en) RGB-D simultaneous positioning and map creation method integrating direct method and feature method
US20180308240A1 (en) Method for estimating the speed of movement of a camera
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
CN108776976B (en) Method, system and storage medium for simultaneously positioning and establishing image
CN113108771B (en) Movement pose estimation method based on closed-loop direct sparse visual odometer
US11082633B2 (en) Method of estimating the speed of displacement of a camera
CN110375765B (en) Visual odometer method, system and storage medium based on direct method
CN112802096A (en) Device and method for realizing real-time positioning and mapping
CN114001733A (en) Map-based consistency efficient visual inertial positioning algorithm
CN112541423A (en) Synchronous positioning and map construction method and system
CN112967340A (en) Simultaneous positioning and map construction method and device, electronic equipment and storage medium
CN113345032B (en) Initialization map building method and system based on wide-angle camera large distortion map
Ok et al. Simultaneous tracking and rendering: Real-time monocular localization for MAVs
CN108827287B (en) Robust visual SLAM system in complex environment
CN112419411A (en) Method for realizing visual odometer based on convolutional neural network and optical flow characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant