CN116862984A - Space pose estimation method of camera - Google Patents


Info

Publication number
CN116862984A
Authority
CN
China
Prior art keywords
camera
angle
points
coordinate system
graph
Prior art date
Legal status
Pending
Application number
CN202310839460.2A
Other languages
Chinese (zh)
Inventor
崔志超
兰琪
赵祥模
惠飞
徐志刚
张赞
王昱宸
黄曜辉
何育超
Current Assignee
Changan University
Original Assignee
Changan University
Priority date
Filing date
Publication date
Application filed by Changan University
Priority to CN202310839460.2A
Publication of CN116862984A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a spatial pose estimation method for a camera, which belongs to the technical field of camera pose estimation and comprises the following steps: matching the feature points of the image captured by the camera with the real-world points corresponding to those pixel points to obtain a set of matched feature points; constructing a first angle graph model in the camera coordinate system; constructing a second angle graph model in the real-world coordinate system; subtracting the values of the corresponding edges of the first and second angle graph models to obtain an angle difference graph; removing the feature points that have the fewest edges connecting them to other feature points; and estimating the spatial pose of the camera with the LHM algorithm from the positional relations between the correctly matched feature points. The method can obtain an accurate camera pose.

Description

Space pose estimation method of camera
Technical Field
The invention belongs to the technical field of camera pose estimation, and particularly relates to a spatial pose estimation method of a camera.
Background
Spatial pose estimation of a camera is an important research task in computer vision, machine vision and robotics; its goal is to estimate the position and orientation of the camera in a three-dimensional scene using features shared between the scene and the captured image. A camera spatial pose estimation method first establishes the mapping correspondence between the image and the three-dimensional scene, i.e., matches three-dimensional features to two-dimensional features; common features include 3D-2D point-pair features, line features, and the like. It then constructs a function of the spatial parameters from the spatial constraint relations of these three-dimensional-to-two-dimensional matched features. The accuracy of the feature information therefore largely determines the accuracy of the camera pose estimate; in real environments, however, precise features are hard to obtain. On the one hand, because of factors such as scene illumination, motion blur and image resolution, exact three-dimensional and image features cannot be obtained, and features contaminated by substantial noise are the norm. On the other hand, because many scenes contain large numbers of similar-looking regions, mismatches, i.e., outliers, easily arise when features of different dimensions are associated. Under the combined effect of noise and outlier features, accurate camera pose estimation faces great difficulty and challenges.
Currently, much research has been devoted to outlier feature rejection and camera pose estimation. Mature solutions fall into two categories: generic and targeted. Generic methods aim to extract the valid data that follow a common distribution or rule while rejecting the outlier data that violate it; in camera pose estimation, the constraint between three-dimensional and two-dimensional features (i.e., perspective projection) is treated as the common rule, and outlier pairs in the feature matches are rejected. The random sample consensus algorithm (Random Sample Consensus, RANSAC) proposes many camera pose hypotheses, finds the matched features that satisfy the perspective projection constraint under each hypothesis, and takes the hypothesis supported by the largest number of matches as the camera pose, thereby achieving pose estimation and outlier rejection at once. Verifying many pose hypotheses, however, severely limits running speed; the R-RANSAC method reduces the time complexity by discarding large numbers of invalid pose hypotheses. In addition, the branch and bound (BnB) method partitions the closed pose space effectively and finds the pose parameters with the largest consensus by traversing all possible pose parameters; compared with RANSAC, its ordered partitioning and traversal give it stable time complexity, and it obtains the optimal result through global search.
Targeted methods build specialized models from the characteristics of the camera's perspective projection; they estimate the camera pose and remove outlier features more effectively, but the specificity of the models limits their range of application, so they serve only particular features or scenes. Ferraz et al. convert the pose estimation problem into a low-rank matrix and detect outlier features by estimating the one-dimensional null-space vector of the feature-pair matrix. Camposeco et al. observed that the camera optical center must lie on a surface of revolution defined by any two projection rays and constructed corresponding position constraints to reject outlier features. Moreno et al. use a kernel-based outlier rejection method and apply it to visual odometry. Li et al. build a Gaussian mixture model in which correctly matched features exhibit optical-flow consistency, and solve it with the expectation maximization (EM) method. The best current results are obtained by Zhou et al., who handle outlier features with a single-point RANSAC method based on a soft weighting mechanism.
Existing methods consider the internal relations of the matched features, i.e., the perspective projection constraints between feature pairs, and solve the problem by modeling these internal relations. Such methods can remove outlier features from a scene to some extent and obtain an accurate pose, but in noisy outdoor scenes (such as urban roads, traffic intersections and squares) it is difficult to detect outlier features accurately using the internal constraints of the matched features alone, and the undetected outliers seriously degrade camera localization accuracy; experiments show that when the image noise exceeds 4 pixels, the miss rate of outlier detection rises steadily. The external constraint relations between matched features, however, constitute equally strong constraint information, and combining the internal and external constraints can greatly improve outlier feature detection.
Existing methods lack a way to model the external relations of matched features and to combine the internal and external constraint relations; consequently, the accuracy of outlier rejection and pose estimation based on internal relations alone is limited.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a space pose estimation method of a camera.
In order to achieve the above object, the present invention provides the following technical solutions:
a method for estimating a spatial pose of a camera, comprising:
matching the characteristic points of the image shot by the camera with real world object points corresponding to the characteristic points of the image shot by the camera to obtain a group of matched characteristic points;
under a camera coordinate system, taking a characteristic point of an image shot by a camera in the matched characteristic points as a node, and taking an included angle value between a connecting line of an optical center of the camera and the characteristic point as a weight value of an edge to construct a first angle diagram model;
under a world coordinate system, a real world physical point corresponding to a characteristic point of an image shot by a camera in the matched characteristic points is taken as a node, and an included angle value between a connecting line of an optical center of the camera and the physical point is taken as a weight value of an edge to construct a second angle diagram model;
subtracting weights of corresponding edges in the first angle graph model and the second angle graph model, taking an absolute value, and binarizing the absolute value to obtain an angle difference graph;
removing the feature points with the least number of connected edges with other feature points from the angle difference graph;
and estimating the spatial pose of the camera by utilizing an LHM algorithm according to the position relation of the characteristic points in the angle difference diagram under the real world coordinate system.
Further, the angle between the lines connecting the camera optical center to the feature points is calculated as follows:
in the camera coordinate system, any image pixel point p is converted, under the action of the camera intrinsic matrix K, into a spatial ray l = K^{-1}p; the angle between two of these rays is

d_{i,j} = \arccos\left( \frac{l_i^{\top} l_j}{\| l_i \| \, \| l_j \|} \right)

where l_i and l_j denote the two projection rays corresponding to the feature points, and d_{i,j} denotes the angle between the rays of feature points i and j.
Further, the calculation of the angle between the lines connecting the camera optical center to the real-world points corresponding to the image pixel points comprises:
in the world coordinate system, letting the camera optical center be C, so that the ray corresponding to a real-world point P is L = P - C;
and calculating the angle between the rays as above.
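As a minimal illustration of the two ray constructions above, the following Python sketch computes the angle d_{i,j} on the camera side (rays l = K^{-1}p) and on the world side (rays L = P - C) for one pair of matches. The intrinsic values and point coordinates are assumed for the example; the snippet is illustrative, not code from the patent.

```python
import numpy as np

def ray_angle(r1, r2):
    """Angle (radians) between two rays: arccos of the normalized dot product."""
    c = np.dot(r1, r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.arccos(np.clip(c, -1.0, 1.0))

# assumed example intrinsics
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
K_inv = np.linalg.inv(K)

# camera side: homogeneous pixels -> rays l = K^-1 p
p_i = np.array([100.0, 120.0, 1.0])
p_j = np.array([400.0, 300.0, 1.0])
d_c = ray_angle(K_inv @ p_i, K_inv @ p_j)

# world side: rays L = P - C from the optical center C to the world points
C = np.array([0.0, 0.0, 0.0])
P_i = np.array([-2.0, 1.0, 8.0])
P_j = np.array([3.0, 2.0, 10.0])
d_w = ray_angle(P_i - C, P_j - C)

print(d_c, d_w)  # for a correct match under the true pose, d_c is close to d_w
```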
Further, subtracting weights of corresponding edges in the first angle graph model and the second angle graph model, taking an absolute value, and binarizing the absolute value to obtain an angle difference graph; removing the feature points with the least number of connected edges with other feature points from the angle difference graph; comprising the following steps:
subtracting the values of the corresponding edges in the first angle diagram model and the second angle diagram model, and taking an absolute value;
binarizing the absolute value with a set threshold to obtain the angle difference graph:

M_D(i,j) = \begin{cases} 1, & \left| d^{w}_{i,j} - d^{c}_{i,j} \right| \le d_{th} \\ 0, & \text{otherwise} \end{cases}

where d^{w}_{i,j} and d^{c}_{i,j} denote the angles between the i-th and j-th rays in the world and camera coordinate systems, respectively, and d_{th} denotes the set threshold;
converting the angle difference graph into an adjacency matrix M_D;
using the adjacency matrix M_D to calculate the degree of each node in the angle difference graph;
removing from M_D the row and column corresponding to the node with minimum degree.
Further, after the row and column corresponding to the minimum-degree node are removed from M_D, the average degree of M_D is calculated; if the average degree is equal to the number of nodes, the feature points with the fewest edges connecting them to other feature points have been completely removed.
Further, the spatial pose of the camera is estimated with the LHM algorithm from the positional relations of the retained feature points in the real-world coordinate system, comprising the following step:
inputting the coordinates of the feature points in the world coordinate system and their coordinates in the camera coordinate system into an LHM function in MATLAB software, and obtaining the spatial pose estimation result of the camera through calculation by the LHM function.
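The LHM function referenced above is a MATLAB implementation of the Lu-Hager-Mjolsness algorithm. As a hedged Python analogue, the sketch below uses OpenCV's iterative solvePnP as a stand-in for the LHM solver; the substitution, the intrinsics and the ground-truth pose are assumptions of this sketch, not the patent's exact procedure.

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
world_pts = np.array([[-2.0, 1.0, 8.0], [3.0, 2.0, 10.0], [0.5, -1.0, 6.0],
                      [1.0, 0.0, 9.0], [-1.0, -2.0, 7.0], [2.0, 1.5, 12.0]])

# synthesize pixels from an assumed ground-truth pose (R = I, t = [0.1, -0.2, 0.5])
t_gt = np.array([[0.1], [-0.2], [0.5]])
proj = (K @ (world_pts.T + t_gt)).T
img_pts = proj[:, :2] / proj[:, 2:]

# solvePnP (iterative) as a stand-in for the LHM solver
ok, rvec, tvec = cv2.solvePnP(world_pts, img_pts, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)  # rvec/tvec map world coordinates into the camera frame
print(ok, tvec.ravel())     # should recover t_gt up to numerical error
```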
The method for estimating the spatial pose of the camera has the following beneficial effects:
in this method, the feature points of the image captured by the camera are taken as nodes in the camera coordinate system, and the angles between the lines joining the camera optical center to those feature points serve as the edges of a first angle graph model; in the real-world coordinate system, the real-world points corresponding to the image pixel points are taken as nodes, and the angles between the lines joining the camera optical center to those real-world points serve as the edges of a second angle graph model. Subtracting the angle graph models built in the camera and world coordinate systems yields an angle difference graph that combines the internal and external relations between matched features; the correctly matched feature points are screened out with the angle difference graph, and the accurate camera pose is obtained from them. This resolves the limitation of the prior art, which lacks a way to model the external relations of matched features and to combine the internal and external constraint relations, so that its outlier rejection and pose estimation accuracy based on internal relations alone are limited.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the design thereof, the drawings required for the embodiments will be briefly described below. The drawings in the following description are only some of the embodiments of the present invention and other drawings may be made by those skilled in the art without the exercise of inventive faculty.
FIG. 1 is a schematic diagram of the internal and external relationships of a 3D-2D matching feature of the present invention;
FIG. 2 is an angular diagram model and an angular difference diagram model under a camera coordinate system and a world coordinate system of the present invention;
FIG. 3 is a graph of the average error of the rotation matrix versus pixel noise according to the present invention;
FIG. 4 is a graph of the average error of the translation vector versus the pixel noise according to the present invention.
Detailed Description
The present invention will be described in detail below with reference to the drawings and the embodiments, so that those skilled in the art can better understand the technical scheme of the present invention and can implement the same. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
Examples:
the invention provides a method for estimating the spatial pose of a camera, shown in fig. 1, which comprises the following steps: matching the feature points of the image captured by the camera with the real-world points corresponding to those pixel points to obtain a set of matched feature points; in the camera coordinate system, taking the image feature points among the matched feature points as nodes and the angles between the lines joining the camera optical center to the feature points as edges, constructing a first angle graph model; in the real-world coordinate system, taking the real-world points corresponding to the image pixel points as nodes and the angles between the lines joining the camera optical center to those real-world points as edges, constructing a second angle graph model; subtracting the weights of the corresponding edges of the first and second angle graph models to obtain an angle difference graph; rejecting the incorrectly matched feature points in the angle difference graph and retaining the correctly matched ones; and estimating the spatial pose of the camera with the LHM algorithm from the positional relations between the correctly matched feature points.
The method models the internal and external relations of the matched features with an angle difference graph model, shows through analysis that the maximal fully-connected subgraph of the model corresponds to the non-outlier features, provides a node-degree-based method for extracting the maximal fully-connected subgraph, and finally estimates the pose parameters of the scene camera with the graph model.
The following are details of the implementation of the invention:
1) Angle graph model definition and construction
The angle graph model is a fully-connected graph: its nodes are the individual 3D-2D matched feature point pairs, and the edge between two nodes carries the angle between the projection rays corresponding to the two point pairs.
This patent constructs an angle graph model in the camera coordinate system and in the world coordinate system respectively. In the camera coordinate system, any image pixel point p is converted, under the action of the camera intrinsic matrix K, into a spatial ray l = K^{-1}p, and the angle between two rays is calculated as

d_{i,j} = \arccos\left( \frac{l_i^{\top} l_j}{\| l_i \| \, \| l_j \|} \right) \qquad (1)

where l_i and l_j denote the two projection rays corresponding to the feature points, and d_{i,j} denotes the angle between the rays of feature points i and j. The angle between any two rays can be obtained with this formula, and the angle graph model in the camera coordinate system is thus constructed.
In the world coordinate system, let the camera optical center be C; the ray corresponding to a real-world point P is L = P - C. Substituting rays L_i and L_j into formula (1) gives the angle between any two such rays. The angle graph models in the camera and world coordinate systems obtained in this way are denoted M_c and M_w, respectively.
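A short sketch of how M_c and M_w can be assembled as N×N matrices of pairwise ray angles; the helper names and the numpy formulation are assumptions of this rewrite, not the patent's code.

```python
import numpy as np

def angle_matrix(rays):
    """Pairwise angle matrix: entry (i, j) is the angle between rays i and j."""
    unit = rays / np.linalg.norm(rays, axis=1, keepdims=True)
    cos = np.clip(unit @ unit.T, -1.0, 1.0)
    return np.arccos(cos)

def build_angle_graphs(img_pts, world_pts, K, C):
    """M_c from pixel rays l = K^-1 p; M_w from world rays L = P - C."""
    p_h = np.hstack([img_pts, np.ones((len(img_pts), 1))])  # homogeneous pixels
    M_c = angle_matrix(p_h @ np.linalg.inv(K).T)            # camera-frame graph
    M_w = angle_matrix(world_pts - C)                       # world-frame graph
    return M_c, M_w
```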
2) Generating an angle difference diagram:
the angle difference graph model is built on top of the angle graph models. Given the angle graphs M_c and M_w in the camera and world coordinate systems, the edges corresponding to the same features in the two graphs are first subtracted from each other and the absolute value is taken to obtain the angle difference graph;
next, the differenced edge values of the angle difference graph are binarized with a set threshold, as in formula (2):

M_D(i,j) = \begin{cases} 1, & \left| d^{w}_{i,j} - d^{c}_{i,j} \right| \le d_{th} \\ 0, & \text{otherwise} \end{cases} \qquad (2)

where d^{w}_{i,j} and d^{c}_{i,j} denote the angles between the i-th and j-th rays in the world and camera coordinate systems, respectively, and d_{th} denotes the set threshold.
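Formula (2) then reduces to a single thresholding step. A minimal sketch, assuming the angle matrices from the previous snippet and a tunable threshold d_th in radians:

```python
import numpy as np

def angle_difference_graph(M_c, M_w, d_th=0.02):
    """Binarized angle difference graph (adjacency matrix M_D) of formula (2)."""
    return (np.abs(M_w - M_c) <= d_th).astype(int)
```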
According to the definition of the angle difference graph: when two pairs of matched features are both correctly matched, the edge between them in the angle difference graph exists, i.e., its weight is 1; when only one of the two pairs is correctly matched (i.e., the other pair is an outlier feature), the edge between them is absent with high probability, i.e., the weight is 0; when both pairs are outlier features, the edge is likewise absent with high probability, i.e., the weight is 0.
Thus, in the angle difference graph, edges necessarily exist between all correctly matched features, whereas the probability that an outlier feature has edges to all correctly matched features is very low. Outliers can therefore be rejected by extracting the maximal fully-connected subgraph of the angle difference graph.
3) Eliminating incorrectly matched feature points from the angle difference graph
This patent provides a node-degree-based method for extracting the maximal fully-connected subgraph: the node with minimum degree is deleted iteratively, and at each iteration the graph is checked for full connectivity, i.e., whether every node's degree has reached its maximum.
Specifically, the angle difference graph corresponds one-to-one to its adjacency matrix M_D. The degree of each node is computed from the adjacency matrix, i.e., d = Σ_i M_D(:, i); the elements of the degree vector D are sorted in descending order; the node with the smallest degree is removed from D, and the corresponding row and column are removed from M_D; the average degree of the remaining nodes is then computed, where an average degree equal to the number of nodes indicates that the model is a fully-connected graph, and otherwise it is not; if the graph is not fully connected, the process iterates until it is. The nodes of the resulting fully-connected subgraph correspond to the correctly matched features; nodes that appear in the angle difference graph but not in the subgraph are outliers. After the outlier features are rejected, the correctly matched feature points are retained.
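A runnable sketch of this node-degree extraction loop; the function name and the numpy formulation are assumptions of this rewrite. Note that the degrees computed here include the diagonal (self) entries, so a fully-connected graph satisfies: every degree equals the number of remaining nodes, matching the average-degree test above.

```python
import numpy as np

def max_fully_connected_subgraph(M_D):
    """Iteratively drop the minimum-degree node until the graph is complete."""
    keep = np.arange(len(M_D))          # indices of surviving matches
    M = M_D.copy()
    while len(M) > 0:
        deg = M.sum(axis=0)             # d = sum_i M_D(:, i)
        if deg.mean() == len(M):        # average degree == node count: complete
            break
        worst = int(np.argmin(deg))     # minimum-degree node is the likely outlier
        M = np.delete(np.delete(M, worst, axis=0), worst, axis=1)
        keep = np.delete(keep, worst)
    return keep                         # indices of correctly matched features
```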
4) Camera pose estimation
Estimating the camera pose from 3D-2D matched feature points is a PnP problem, for which mature solutions abound. The outlier feature rejection method based on the angle difference graph model is not tied to any specific pose estimation method; therefore, without loss of generality, this patent uses the LHM method to estimate the camera pose.
The following are specific embodiments of the present invention:
Step 1: randomly select 6 pairs of point features from the 3D-2D matched features;
Step 2: estimate the camera position T and orientation R using LHM;
Step 3: based on the camera position T, compute the angle graph models M_c and M_w in the camera and world coordinate systems, respectively;
Step 4: from M_c and M_w, compute the angle difference graph model M_D using formula (2);
Step 5: using the angle difference graph model M_D, compute the vector D of all node degrees;
Step 6: sort the node degrees from large to small;
Step 7: delete the node with minimum degree and delete the corresponding row and column of M_D;
Step 8: compute the average node degree of M_D; if the average node degree is greater than 0.7 times the number of nodes, go to the next step, otherwise jump to step 6;
Step 9: optimize the camera pose T and R using the features corresponding to the nodes remaining in M_D; check the iteration count: if it is less than 7, jump to step 3, otherwise exit.
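Putting steps 1 to 9 together, a hedged end-to-end sketch that reuses the helpers defined earlier; solvePnP again stands in for the LHM solver, the 0.7 average-degree factor and the 7-iteration cap follow the steps above, and the floor of 6 surviving points is an added safeguard of this sketch.

```python
import numpy as np
import cv2

def estimate_pose(world_pts, img_pts, K, d_th=0.02, max_iter=7):
    """Steps 1-9 of the embodiment, with solvePnP standing in for LHM."""
    keep = np.arange(len(world_pts))
    rvec = tvec = None
    for _ in range(max_iter):                                    # step 9: at most 7 passes
        sample = np.random.choice(keep, size=6, replace=False)   # step 1: 6 random pairs
        _, rvec, tvec = cv2.solvePnP(world_pts[sample], img_pts[sample],
                                     K, None, flags=cv2.SOLVEPNP_ITERATIVE)  # step 2
        R, _ = cv2.Rodrigues(rvec)
        C = (-R.T @ tvec).ravel()               # optical center in world coordinates
        M_c, M_w = build_angle_graphs(img_pts, world_pts, K, C)  # step 3
        M_D = angle_difference_graph(M_c, M_w, d_th)             # step 4
        keep = np.arange(len(world_pts))
        # steps 5-8: drop minimum-degree nodes until avg degree > 0.7 * node count
        while len(M_D) > 6 and M_D.sum(axis=0).mean() <= 0.7 * len(M_D):
            worst = int(np.argmin(M_D.sum(axis=0)))              # steps 6-7
            M_D = np.delete(np.delete(M_D, worst, 0), worst, 1)
            keep = np.delete(keep, worst)
        _, rvec, tvec = cv2.solvePnP(world_pts[keep], img_pts[keep],
                                     K, None, flags=cv2.SOLVEPNP_ITERATIVE)  # step 9
    return rvec, tvec, keep                     # pose and surviving match indices
```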
Experimental verification:
the simulation experiment is used to evaluate the method: three-dimensional point features are generated within the range [-20, 20] × [-16, 16] × [5, 30], the corresponding image pixel features are generated by projecting them onto the image with the camera intrinsic and extrinsic parameters, and outlier features are randomly generated at a set proportion; the simulation data are produced in this way.
The experiment evaluates camera pose estimation and outlier feature detection separately. The camera orientation estimate is evaluated with

e_R = \frac{1}{3} \sum_{i=1}^{3} \arccos\left( r_{i,gt}^{\top} \, r_{i,es} \right)

where r_{i,gt} and r_{i,es} denote the true and estimated values of the i-th row of the rotation matrix, respectively. The camera position estimation index is

e_t = \frac{\| t_{es} - t_{gt} \|}{\| t_{gt} \|}

where t_es and t_gt denote the estimated and true values of the translation vector, respectively. Outlier feature detection is evaluated with precision and recall. The precision is expressed as

Precision = \frac{TP}{TP + FP}

where TP and FP denote the true positives and false positives, respectively. The recall is expressed as

Recall = \frac{TP}{TP + FN}

where FN denotes the false negatives. Precision reflects the correctness of the detected outlier features, while recall reflects their comprehensiveness.
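The four evaluation quantities in short Python form; this is a sketch under the definitions above, and in particular the row-averaged form of e_R is an assumed reading of the rotation-error formula, whose image is not reproduced in this text.

```python
import numpy as np

def rotation_error(R_es, R_gt):
    """e_R: mean angle (degrees) between corresponding rows of estimated and true R."""
    cos = np.clip(np.sum(R_es * R_gt, axis=1), -1.0, 1.0)  # row-wise dot products
    return np.degrees(np.arccos(cos)).mean()

def translation_error(t_es, t_gt):
    """e_t: relative error of the translation vector."""
    return np.linalg.norm(t_es - t_gt) / np.linalg.norm(t_gt)

def precision_recall(tp, fp, fn):
    """Outlier-detection precision TP/(TP+FP) and recall TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)
```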
TABLE 1 outlier feature detection accuracy (%)
TABLE 2 outlier feature detection recall (%)
The above embodiments are merely preferred embodiments of the present invention, the protection scope of the present invention is not limited thereto, and any simple changes or equivalent substitutions of technical solutions that can be obviously obtained by those skilled in the art within the technical scope of the present invention disclosed in the present invention belong to the protection scope of the present invention.

Claims (6)

1. A method for estimating the spatial pose of a camera, comprising:
matching the characteristic points of the image shot by the camera with real world object points corresponding to the characteristic points of the image shot by the camera to obtain a group of matched characteristic points;
under a camera coordinate system, taking a characteristic point of an image shot by a camera in the matched characteristic points as a node, and taking an included angle value between a connecting line of an optical center of the camera and the characteristic point as a weight value of an edge to construct a first angle diagram model;
under a world coordinate system, a real world physical point corresponding to a characteristic point of an image shot by a camera in the matched characteristic points is taken as a node, and an included angle value between a connecting line of an optical center of the camera and the physical point is taken as a weight value of an edge to construct a second angle diagram model;
subtracting weights of corresponding edges in the first angle graph model and the second angle graph model, taking an absolute value, and binarizing the absolute value to obtain an angle difference graph;
removing the feature points with the least number of connected edges with other feature points from the angle difference graph;
and estimating the spatial pose of the camera by utilizing an LHM algorithm according to the position relation of the characteristic points in the angle difference diagram under the real world coordinate system.
2. The method for estimating the spatial pose of a camera according to claim 1, wherein the included angle between the line connecting the optical center of the camera to the feature point is calculated as:
in the camera coordinate system, converting any image pixel point p, under the action of the camera intrinsic matrix K, into a spatial ray l = K^{-1}p, the angle between two of these rays being

d_{i,j} = \arccos\left( \frac{l_i^{\top} l_j}{\| l_i \| \, \| l_j \|} \right)

where l_i and l_j denote the two projection rays corresponding to the feature points, and d_{i,j} denotes the angle between the rays of feature points i and j.
3. The method for estimating the spatial pose of a camera according to claim 2, wherein the calculating of the included angle between the optical center of the camera and the line connecting the real world points corresponding to the pixel points of the image captured by the camera comprises:
under the world coordinate system, the camera optical center is set as C, and the ray corresponding to the point P of the real world is L=P-C;
and calculating the included angle between the rays.
4. The method for estimating the spatial pose of a camera according to claim 1, wherein the subtracting weights of corresponding edges in the first angle graph model and the second angle graph model takes an absolute value and binarizes the absolute value to obtain an angle difference graph; removing the feature points with the least number of connected edges with other feature points from the angle difference graph; comprising the following steps:
subtracting the values of the corresponding edges in the first angle diagram model and the second angle diagram model, and taking an absolute value;
binarizing the absolute value with a set threshold to obtain the angle difference graph:

M_D(i,j) = \begin{cases} 1, & \left| d^{w}_{i,j} - d^{c}_{i,j} \right| \le d_{th} \\ 0, & \text{otherwise} \end{cases}

where d^{w}_{i,j} and d^{c}_{i,j} denote the angles between the i-th and j-th rays in the world and camera coordinate systems, respectively, and d_{th} denotes the set threshold;
converting the angle difference graph into an adjacency matrix M_D;
using the adjacency matrix M_D to calculate the degree of each node in the angle difference graph;
removing from M_D the row and column corresponding to the node with minimum degree.
5. The method for estimating a spatial pose of a camera according to claim 4, wherein after the row and column corresponding to the minimum-degree node are removed from M_D, the average degree of M_D is calculated; if the average degree is equal to the number of nodes, the feature points with the fewest edges connecting them to other feature points have been completely removed.
6. The method for estimating the spatial pose of the camera according to claim 1, wherein the spatial pose of the camera is estimated by utilizing an LHM algorithm according to the position relation of the reserved feature points under a real world coordinate system; comprising the following steps:
inputting the coordinates of the feature points in the world coordinate system and their coordinates in the camera coordinate system into an LHM function in MATLAB software, and obtaining the spatial pose estimation result of the camera through calculation by the LHM function.
CN202310839460.2A 2023-07-10 2023-07-10 Space pose estimation method of camera Pending CN116862984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310839460.2A CN116862984A (en) 2023-07-10 2023-07-10 Space pose estimation method of camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310839460.2A CN116862984A (en) 2023-07-10 2023-07-10 Space pose estimation method of camera

Publications (1)

Publication Number Publication Date
CN116862984A true CN116862984A (en) 2023-10-10

Family

ID=88229975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310839460.2A Pending CN116862984A (en) 2023-07-10 2023-07-10 Space pose estimation method of camera

Country Status (1)

Country Link
CN (1) CN116862984A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152257A (en) * 2023-10-31 2023-12-01 罗普特科技集团股份有限公司 Method and device for multidimensional angle calculation of ground monitoring camera
CN117152257B (en) * 2023-10-31 2024-02-27 罗普特科技集团股份有限公司 Method and device for multidimensional angle calculation of ground monitoring camera

Similar Documents

Publication Publication Date Title
CN110108258B (en) Monocular vision odometer positioning method
CN106204574B (en) Camera pose self-calibrating method based on objective plane motion feature
CN109960402B (en) Virtual and real registration method based on point cloud and visual feature fusion
David et al. Softposit: Simultaneous pose and correspondence determination
CN114782691A (en) Robot target identification and motion detection method based on deep learning, storage medium and equipment
CN109658445A (en) Network training method, increment build drawing method, localization method, device and equipment
WO2021051526A1 (en) Multi-view 3d human pose estimation method and related apparatus
CN111899280A (en) Monocular vision odometer method adopting deep learning and mixed pose estimation
CN113393524B (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN110942476A (en) Improved three-dimensional point cloud registration method and system based on two-dimensional image guidance and readable storage medium
CN111524168A (en) Point cloud data registration method, system and device and computer storage medium
CN116862984A (en) Space pose estimation method of camera
CN107123138B (en) Based on vanilla-R point to the point cloud registration method for rejecting strategy
CN110634149B (en) Non-rigid target characteristic point matching method for optical motion capture system
Zhang et al. A visual-inertial dynamic object tracking SLAM tightly coupled system
CN116894876A (en) 6-DOF positioning method based on real-time image
Xiao et al. Robust precise dynamic point reconstruction from multi-view
CN110570473A (en) weight self-adaptive posture estimation method based on point-line fusion
CN113643328B (en) Calibration object reconstruction method and device, electronic equipment and computer readable medium
CN112950787B (en) Target object three-dimensional point cloud generation method based on image sequence
CN115359119A (en) Workpiece pose estimation method and device for disordered sorting scene
Tian et al. Efficient ego-motion estimation for multi-camera systems with decoupled rotation and translation
CN113628104A (en) Initial image pair selection method for disordered image incremental SfM
CN113436264A (en) Pose calculation method and system based on monocular and monocular hybrid positioning
CN113688816A (en) Calculation method of visual odometer for improving ORB feature point extraction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination