CN116452986A - Method for quickly searching satellite docking - Google Patents

Method for quickly searching satellite docking

Info

Publication number
CN116452986A
CN116452986A (application CN202310213516.3A)
Authority
CN
China
Prior art keywords
target
satellite
camera
frame
binocular camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310213516.3A
Other languages
Chinese (zh)
Inventor
武俊峰
李旭
康国华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202310213516.3A priority Critical patent/CN116452986A/en
Publication of CN116452986A publication Critical patent/CN116452986A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection


Abstract

The invention discloses a method for quickly finding a target satellite for docking, comprising the following steps: (1) rapidly rotating the cradle head (pan-tilt mount) to search for the target satellite so that it enters the field of view; (2) detecting the target satellite with a regression-based target detection algorithm; (3) tracking the detected target on each image frame with a deep-learning-based tracking algorithm and outputting its position information; (4) calculating the relative distance between the satellites from the position of the target satellite in the image. The invention quickly searches for the target satellite through a binocular vision target detection and tracking algorithm combined with cradle head control. The cradle head approach reduces energy consumption while still finding the target satellite quickly, and the positional relationship of the two docking satellites is established through coordinate-system conversion, thereby determining the position information of the target.

Description

Method for quickly searching satellite docking
Technical Field
The invention relates to a target searching method, in particular to a method for quickly searching for satellite docking.
Background
With the rapid development of computer technology in recent years, real-time target detection and tracking has become a research hotspot. Target detection, also called target extraction, combines segmentation and recognition of the target; accurate detection strongly influences the subsequent tracking stage. Target tracking starts from the initial state of the target and follows it over time with a tracking algorithm. Compared with other sensing modalities, visual tracking offers better recognition of the tracked object's category and uses inexpensive sensors. Therefore, a visual target detection and tracking algorithm is adopted here to detect and track the target object, and the camera is rotated by controlling the cradle head.
Because a satellite needs to determine the azimuth of the target satellite before docking, a binocular camera is placed on the cradle head to detect and track the target. Control of the cradle head keeps the target always within the field of view, while the binocular camera simultaneously acquires the depth information of the target, assisting subsequent processing.
Disclosure of Invention
The invention aims to provide a method for quickly searching a target satellite before satellite docking.
In order to achieve the above purpose, the present invention constructs a cradle head binocular camera on a satellite and realizes fast searching of the target satellite through a target detection and tracking algorithm, comprising:
a method for fast finding satellite docking, comprising the steps of:
1) Installing a cradle head binocular camera on a reference satellite, rapidly rotating the cradle head binocular camera to acquire a picture, detecting a target satellite in the picture, and inputting a target frame of a first frame picture when the target satellite is detected;
2) Generating a candidate frame in the second frame of picture, extracting the characteristics of objects in the candidate frame, obtaining confidence scores, and determining the highest confidence score as a target candidate frame;
3) Obtaining a target frame of each frame of picture through the steps 1) and 2), outputting position information of a target satellite in the picture, and enabling the target satellite to be always positioned in the center of a field of view through rotating a cradle head binocular camera;
4) And calculating the distance from the target satellite to the reference satellite by using a parallax formula of the binocular camera by taking the left-eye camera of the tripod head binocular camera as a reference.
Preferably, the implementation process of step 1) is as follows:
step 1.1) obtaining a plurality of pictures containing target satellites, dividing the pictures into a training set and a verification set, inputting the training set into a target detection model for training to obtain a trained target detection model, and verifying the performance of the trained target detection model through the verification set;
step 1.2) rapidly rotating the cradle head binocular camera, inputting pictures acquired by the cradle head binocular camera into a trained target detection model, and inputting a target frame of a first frame of pictures when a target satellite is detected.
Preferably, the implementation process of step 2) is as follows: and generating a candidate frame in the second frame of picture, determining a candidate region through sliding window type sampling, performing appearance modeling, calculating the confidence score of the object in the candidate frame according to the appearance modeling, and determining the highest confidence score as a target candidate frame.
Preferably, the implementation process of step 3) is as follows: obtaining a target frame of each frame of picture through the steps 1) and 2), taking a central pixel point of the target frame as a position of a target satellite in the picture, outputting position information of the target satellite, and controlling the rotation of the pan-tilt binocular camera to enable the target satellite to be positioned at the center of a field of view.
Preferably, the implementation process of the step 4) is as follows: taking the left-eye camera of the cradle head binocular camera as reference, the target satellite coordinates under the world coordinate system are transformed to the camera coordinate system, then to the image coordinate system, and finally to the pixel coordinate system; the binocular vision parallax formula Z = f·d/T is then used to calculate the distance Z from the target satellite to the reference satellite, wherein T is the parallax of the center point of the matched target frame on the left- and right-eye images of the cradle head binocular camera, f is the focal length of the left- and right-eye cameras, and d is the distance between the optical centers of the left- and right-eye cameras.
Preferably, before the coordinate conversion in the step 4), the internal and external parameters of the pan-tilt binocular camera are obtained by a pan-tilt binocular camera defocus rapid calibration method, which specifically comprises the following steps:
step A), obtaining accurate sub-pixel coordinates of characteristic points in a camera defocusing state through a phase shift coding circular pattern;
step B), calculating initial parameters of the monocular camera according to the accurate sub-pixel coordinates of the feature points obtained in the step A, constructing a monocular objective function with the minimum re-projection error of the monocular camera according to the initial parameters, and calculating internal parameters of the monocular camera according to the objective function;
step C), after obtaining accurate internal parameters of each camera, optimizing the reprojection error function of the binocular camera, thereby obtaining accurate internal and external parameters of the binocular camera.
Preferably, the implementation process of the step A) is as follows:
acquiring a phase shift coding circular pattern, and acquiring an image corresponding to the phase shift coding circular pattern, wherein the phase shift value of the phase shift coding circular pattern is 2/3 pi, and the light intensity distribution function is as follows:
wherein I is 1 (x, y) is the pixel gray value of the phase coding circular pattern, and I' (x, y) is the background average gray of the corresponding image, and the value is 0.5; i "(x, y) is the modulation gray scale of the corresponding image, and the value is 0.5; phi (x, y) is the phase principal value, defined as:
wherein T is the period of the phase-shift coded circular pattern, and r (x, y) is the distance from one point (x, y) on the phase-shift coded circular pattern to the center (x) of the phase-shift coded circular pattern 0 ,y 0 ) Is expressed as:
the phase principal value of the pixel point on the image is as follows:
wherein I is 1 、I 2 、I 3 The method comprises the steps of respectively carrying out contour extraction on calculated phase main values by using gray values of pixels corresponding to three phase shift coding circle patterns to fit ellipses, so as to obtain accurate sub-pixel coordinates of the feature points.
Preferably, the implementation process of the step C) is as follows: the reprojection error function of the binocular camera is:

err = Σ_i || m_i - g(A, k, R, t, M_i) ||²

wherein m_i are the observed two-dimensional image points; g is the projection equation; A is the internal reference matrix, whose initial value uses the parameters provided by the camera hardware manufacturer; k is the distortion parameter; R, t are the external parameters between each camera and the target, with initial values obtained through an n-point perspective algorithm; M_i are the three-dimensional space coordinates of the feature points; and err is the reprojection error.
Beneficial effects:
according to the invention, the target satellite is quickly searched through a binocular vision target detection and tracking algorithm and the control of the cradle head. The method reduces energy consumption and can quickly find the target satellite by using a cradle head mode. The position relationship of the two docking satellites can be established through the conversion of the coordinate system, so that the position information of the target is determined.
Drawings
FIG. 1 is a flow chart of a method for satellite docking quick seeking according to the present invention.
Detailed Description
The present invention is further described in detail below with reference to the drawings and the detailed description, so that those skilled in the art can better understand the aspects of the invention.
The invention provides a method for quickly searching for satellite docking, comprising the following steps. S1: rapidly rotate the cradle head and detect the target satellite in the video stream; when the target satellite is detected, input the target frame of the first frame. S2: generate candidate frames in the second frame, extract the features of the objects in the candidate frames, obtain confidence scores, and take the candidate with the highest confidence score as the target frame. S3: obtain the target frame of each frame through S1 and S2, output the position information of the target in the image, and keep the target satellite centered in the field of view through cradle head control. S4: taking the left-eye camera as reference, calculate the distance of the target using the binocular camera parallax principle.
The specific process for quickly searching for satellite docking can be set as follows:
First, target detection algorithms can be classified into conventional target detection algorithms, candidate-region-based algorithms, regression-based algorithms, and reinforcement-learning-based algorithms. The advantages and disadvantages of these algorithms are summarized in Table 1 below:
Table 1. Summary of target detection algorithm characteristics
Because the method is used to quickly search for a satellite before docking, it requires fast target detection with high accuracy, so a regression-based target detection algorithm can be selected, such as the YOLO (You Only Look Once) series or SSD (Single Shot MultiBox Detector).
S1: in the YOLOv4 algorithm, the input picture is first adjusted to 416 x 416 size, the input image is divided into cells of sx S size in the CSPDarknet-53 feature extraction network,
and then obtaining three feature graphs with different sizes, namely 13×13, 26×26 and 52×52, which are respectively used for three targets of large, medium and small, wherein the regression prediction in each cell has 3 anchor boxes for predicting three frames, and the frame with the highest confidence is selected as a final detection result.
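As a toy illustration of the final selection step above (not the real YOLOv4 network), candidate boxes with confidence scores can be reduced to a single detection by keeping the highest-scoring box; the boxes and scores below are made-up values:

```python
# Minimal sketch of the confidence-based selection described above. Each
# candidate is (x1, y1, x2, y2, confidence); the values are illustrative,
# not outputs of a real YOLOv4 model.
def best_detection(candidates):
    """Return the candidate box with the highest confidence, or None."""
    if not candidates:
        return None
    return max(candidates, key=lambda box: box[4])

candidates = [
    (120, 80, 200, 160, 0.62),
    (118, 78, 205, 162, 0.91),  # highest confidence: the final detection
    (300, 40, 350, 90, 0.15),
]
final = best_detection(candidates)
```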
S2: in the aspect of tracking algorithm, discriminant tracking algorithm can be selected, and discriminant tracking algorithm can be divided into tracking algorithm based on sparse representation, tracking algorithm based on correlation filtering and tracking algorithm based on deep learning. Since the tracking algorithm based on the deep learning is insensitive to problems such as deformation, blurring and partial shielding and can realize faster and accurate tracking, the tracking algorithm based on the deep learning is used for tracking the target satellite.
S3: the method comprises the steps of rapidly detecting and tracking a target satellite through S1 and S2, displaying a frame of the target satellite in a video stream, taking a pixel of a central point of the frame as a position of the target satellite, outputting position information of the target satellite, and controlling a cradle head to rotate a binocular camera to enable the target satellite to be always in a view field range, so that rapid and accurate tracking of the target satellite is achieved, and the posture of the satellite is not affected all the time in the process.
S4: the position coordinates of the target satellite need to be converted into pixel coordinates by taking the left-eye camera as a reference. Assume that the point of the target satellite corresponding to the target frame center point pixel in the world coordinate system is (X w ,Y W ,Z W ) Converting the world coordinate system into a camera coordinate system by rotating and translating the matrix:
writing it into its secondary coordinate form:
points in the camera coordinate system are transformed by similarity into the image coordinate system:
finally, through a conversion formula from image coordinates to pixel coordinates:
and transforming the coordinates of the central point of the target satellite under the world coordinate system to the coordinates of the pixels of the target frame. Combining the coordinate transformation formulas:
using binocular vision parallax formulaThe depth Z can be calculated. Where T is the parallax of the center point of the matched target frame on the left and right camera images, f is the focal length of the camera, and d is the distance, i.e., the distance, between the optical centers of the left and right cameras.
Aiming at the problems of difficult and inefficient camera calibration in large-scale vision systems, a camera defocus rapid calibration method based on the phase-shift coding circle is provided.
First, feature points are extracted in the defocused state through the phase-shift coding circle; then a target optimization function of the monocular camera is constructed using the spatial position information of the target, and the internal parameters of the camera are solved by the nonlinear least squares method; finally, a target optimization function of the binocular system is established to calculate the external parameters of the camera system.
A three-step phase shift method is adopted to generate the phase-shift coding circular pattern, with a phase shift of 2π/3; the light intensity distribution function can be expressed as

I_i(x, y) = I'(x, y) + I''(x, y)·cos[φ(x, y) + (i - 2)·2π/3],  i = 1, 2, 3

wherein I'(x, y) is the average gray value of the image background, taken as 0.5; I''(x, y) is the image modulation gray scale, taken as 0.5; and φ(x, y) is the phase principal value, described as

φ(x, y) = (2π/T)·r(x, y) mod 2π

In the formula, T is the period of the phase-shift coding circle, i.e. the number of pixels spanned as the phase principal value goes from 0 to 2π; its value is determined by the screen resolution and the planned total number of rows and columns of phase-shift coding circles. r(x, y) is the Euclidean distance between a point (x, y) on the phase-shift coding circle and its center (x0, y0), expressed as

r(x, y) = √((x - x0)² + (y - y0)²)

Image processing is then performed on the captured phase circular patterns to obtain the accurate positions of the feature points. Because the phase principal value of a pixel in the camera-captured image differs from that of the generated pattern, it must be recomputed from the captured images; the phase principal value of a pixel on the phase-shift coding circle image is

Φ(x, y) = arctan[√3(I1 - I3) / (2·I2 - I1 - I3)]

wherein Φ(x, y) is the phase principal value to be solved at pixel (x, y), and I1, I2, I3 are the gray values of that pixel in the three phase-shift patterns respectively. The phase principal value obtained lies in the range 0 to 2π and, due to the nature of the arctangent function, exhibits a discontinuity at the 2π position, so contour extraction and ellipse fitting can be applied directly to the calculated phase principal values to obtain the accurate sub-pixel coordinates of the feature points.
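The three-step phase recovery can be checked numerically. The sketch below synthesizes the three gray values for a known phase, using the background and modulation values of 0.5 stated above and shifts of -2π/3, 0, +2π/3, and recovers it with the arctangent formula; it is a verification sketch, not the patent's image-processing pipeline:

```python
import math

# Recover the phase principal value from three phase-shifted gray values using
# phi = atan2(sqrt(3) * (I1 - I3), 2 * I2 - I1 - I3), mapped onto [0, 2*pi).
def phase_principal_value(i1, i2, i3):
    phi = math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
    return phi % (2.0 * math.pi)

true_phi = 1.2
# I_k = 0.5 + 0.5 * cos(true_phi + k * 2*pi/3) for shifts k = -1, 0, +1
i1, i2, i3 = (0.5 + 0.5 * math.cos(true_phi + k * 2.0 * math.pi / 3.0)
              for k in (-1, 0, 1))
recovered = phase_principal_value(i1, i2, i3)
```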
The calibration of system parameters is the process of solving the internal and external parameter matrices of the camera. The internal parameter matrix is related only to the internal parameters of the camera itself, and the external parameter matrix of a binocular system is generally the transformation from the right camera to the left camera. The internal parameters of each camera of the binocular system are calculated separately by establishing and optimizing an objective function that minimizes the single-camera reprojection error; after accurate internal parameters are obtained for each camera, the reprojection error function of the binocular camera is re-established and optimized, thereby obtaining accurate internal and external parameters of the binocular camera. The reprojection error function is constructed as:

err = Σ_i || m_i - g(A, k, R, t, M_i) ||²

wherein m_i are the observed two-dimensional image points, g is the projection equation, A is the internal reference matrix, k is the distortion parameter, R, t are the external parameters, and M_i are the three-dimensional coordinates of the feature points.
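The reprojection error err = Σ_i ||m_i - g(A, k, R, t, M_i)||² can be evaluated as below for an undistorted pinhole camera (distortion k ignored); the intrinsics, pose and 3-D points are made-up illustrative values:

```python
# Evaluate the reprojection error for a pinhole camera, ignoring distortion.
def project(A, R, t, M):
    """The projection equation g: map a 3-D point M to pixel coordinates."""
    Xc = [sum(R[r][c] * M[c] for c in range(3)) + t[r] for r in range(3)]
    u = A[0][0] * Xc[0] / Xc[2] + A[0][2]
    v = A[1][1] * Xc[1] / Xc[2] + A[1][2]
    return (u, v)

def reprojection_error(A, R, t, points_3d, points_2d):
    err = 0.0
    for M, (mu, mv) in zip(points_3d, points_2d):
        u, v = project(A, R, t, M)
        err += (u - mu) ** 2 + (v - mv) ** 2
    return err

A = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # identity rotation
t = [0.0, 0.0, 0.0]
pts3d = [(0.1, 0.2, 2.0), (-0.3, 0.1, 4.0)]
pts2d = [project(A, R, t, M) for M in pts3d]  # perfect observations
err = reprojection_error(A, R, t, pts3d, pts2d)
```

Calibration then minimizes this err over the intrinsic and extrinsic parameters.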
it will be understood that the invention has been described in terms of several embodiments, and that various changes and equivalents may be made to these features and embodiments by those skilled in the art without departing from the spirit and scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (8)

1. A method for fast finding satellite docking, comprising the steps of:
1) Installing a cradle head binocular camera on a reference satellite, rapidly rotating the cradle head binocular camera to acquire a picture, detecting a target satellite in the picture, and inputting a target frame of a first frame picture when the target satellite is detected;
2) Generating a candidate frame in the second frame of picture, extracting the characteristics of objects in the candidate frame, obtaining confidence scores, and determining the highest confidence score as a target candidate frame;
3) Obtaining a target frame of each frame of picture through the steps 1) and 2), outputting position information of a target satellite in the picture, and enabling the target satellite to be always positioned in the center of a field of view through rotating a cradle head binocular camera;
4) And calculating the distance from the target satellite to the reference satellite by using a parallax formula of the binocular camera by taking the left-eye camera of the tripod head binocular camera as a reference.
2. The method for fast finding a satellite docking according to claim 1, wherein the implementation process of step 1) is:
step 1.1) obtaining a plurality of pictures containing target satellites, dividing the pictures into a training set and a verification set, inputting the training set into a target detection model for training to obtain a trained target detection model, and verifying the performance of the trained target detection model through the verification set;
step 1.2) rapidly rotating the cradle head binocular camera, inputting pictures acquired by the cradle head binocular camera into a trained target detection model, and inputting a target frame of a first frame of pictures when a target satellite is detected.
3. The method for fast finding a satellite docking according to claim 2, wherein the implementation process of step 2) is: and generating a candidate frame in the second frame of picture, determining a candidate region through sliding window type sampling, performing appearance modeling, calculating the confidence score of the object in the candidate frame according to the appearance modeling, and determining the highest confidence score as a target candidate frame.
4. A method for fast finding a satellite docking as claimed in claim 3, wherein step 3) is implemented as follows: obtaining a target frame of each frame of picture through the steps 1) and 2), taking a central pixel point of the target frame as a position of a target satellite in the picture, outputting position information of the target satellite, and controlling the rotation of the pan-tilt binocular camera to enable the target satellite to be positioned at the center of a field of view.
5. The method for fast finding a satellite docking according to claim 4, wherein the implementation process of step 4) is: taking the left-eye camera of the cradle head binocular camera as reference, converting the coordinates of the target satellite under the world coordinate system to the camera coordinate system, then to the image coordinate system, and finally to the pixel coordinate system, and using the binocular vision parallax formula Z = f·d/T to calculate the distance Z from the target satellite to the reference satellite; wherein T is the parallax of the center point of the matched target frame on the left- and right-eye images of the cradle head binocular camera, f is the focal length of the left- and right-eye cameras, and d is the distance between the optical centers of the left- and right-eye cameras.
6. The method for quickly finding satellite docking according to claim 5, wherein in the step 4), before the coordinate conversion, the internal parameters and the external parameters of the pan-tilt binocular camera are obtained by a pan-tilt binocular camera defocus quick calibration method, specifically comprising the following steps:
step A), obtaining accurate sub-pixel coordinates of characteristic points in a camera defocusing state through a phase shift coding circular pattern;
step B), calculating initial parameters of the monocular camera according to the accurate sub-pixel coordinates of the feature points obtained in the step A, constructing a monocular objective function with the minimum re-projection error of the monocular camera according to the initial parameters, and calculating internal parameters of the monocular camera according to the objective function;
and C) after obtaining accurate internal parameters of each camera, optimizing the re-projection error function of the binocular camera, thereby obtaining accurate internal parameters and external parameters of the binocular camera.
7. The method for quickly finding satellite docking according to claim 6, wherein the implementation process of the step A) is as follows:
acquiring a phase shift coding circular pattern, and acquiring an image corresponding to the phase shift coding circular pattern, wherein the phase shift value of the phase shift coding circular pattern is 2/3 pi, and the light intensity distribution function is as follows:
wherein I is 1 (x, y) is the pixel gray value of the phase coding circular pattern, and I' (x, y) is the background average gray of the corresponding image, and the value is 0.5; i "(x, y) is the modulation gray scale of the corresponding image, and the value is 0.5; phi (x, y) is the phase principal value, defined as:
wherein T is the period of the phase-shift coded circular pattern, and r (x, y) is the distance from one point (x, y) on the phase-shift coded circular pattern to the center (x) of the phase-shift coded circular pattern 0 ,y 0 ) Is expressed as:
the phase principal value of the pixel point on the image is as follows:
in the middle, I 1 、I 2 、I 3 The method comprises the steps of respectively carrying out contour extraction on calculated phase main values by using gray values of pixels corresponding to three phase shift coding circle patterns to fit ellipses, so as to obtain accurate sub-pixel coordinates of the feature points.
8. The method for fast finding a satellite docking according to claim 7, wherein the implementation process of step C) is: the reprojection error function of the binocular camera is:

err = Σ_i || m_i - g(A, k, R, t, M_i) ||²

wherein m_i are the observed two-dimensional image points; g is the projection equation; A is the internal reference matrix, whose initial value uses the parameters provided by the camera hardware manufacturer; k is the distortion parameter; R, t are the external parameters between each camera and the target, with initial values obtained through an n-point perspective algorithm; M_i are the three-dimensional space coordinates of the feature points; and err is the reprojection error.
CN202310213516.3A 2023-03-08 2023-03-08 Method for quickly searching satellite docking Pending CN116452986A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310213516.3A CN116452986A (en) 2023-03-08 2023-03-08 Method for quickly searching satellite docking


Publications (1)

Publication Number Publication Date
CN116452986A (en) 2023-07-18

Family

ID=87129166

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310213516.3A Pending CN116452986A (en) 2023-03-08 2023-03-08 Method for quickly searching satellite docking

Country Status (1)

Country Link
CN (1) CN116452986A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination