CN116563336A - Self-adaptive positioning algorithm for digital twin machine room target tracking

Info

Publication number: CN116563336A
Application number: CN202310343093.7A
Authority: CN (China)
Prior art keywords: camera, image, images, matrix, point
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 王晨璐, 陈晔, 姜婧, 张燕, 陈震伟, 李�一, 季文嬿, 顾炜曦, 刘超, 张辉, 陆文卿
Assignee (current and original): Nantong Power Supply Co. of State Grid Jiangsu Electric Power Co.
Filing date / priority date: 2023-04-03
Publication date: 2023-08-08


Classifications

    • G (Physics) / G06 (Computing; Calculating or Counting) / G06T (Image Data Processing or Generation, in General)
    • G06T7/251: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/337: Determination of transform parameters for the alignment of images (image registration) using feature-based methods involving reference images or patches
    • G06T7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T2207/30244: Indexing scheme for image analysis or image enhancement; subject of image: camera pose
    • Y02T10/40: Engine management systems (technologies for mitigation or adaptation against climate change; road transport)


Abstract

The invention discloses an adaptive positioning algorithm for target tracking in a digital twin machine room. First, a monocular mobile vision system model is established and the intrinsic calibration data of the virtual camera are obtained. Next, image features are extracted from two images of the same target point taken by the virtual camera at different moments, and the features are matched to obtain the homography matrix between the two images. The extrinsic parameters of the model are then solved from the intrinsic data of the virtual camera using the triangulation principle, and the three-dimensional information of the target point at the different moments is calculated. Finally, the deflection angle of the digital twin machine room virtual camera is obtained by the algorithm, which solves the problem of imaging deviation of the tracked target. By computing the three-dimensional pose of the same target point relative to the virtual camera at different moments, the method obtains the deflection angle of the virtual camera and solves the problem of the tracked target drifting out of the virtual camera's field of view. Experimental results show that the proposed algorithm has high target positioning accuracy and computational efficiency.

Description

Self-adaptive positioning algorithm for digital twin machine room target tracking
Technical Field
The invention relates to the field of machine vision measurement, in particular to adaptive adjustment of the angle of a monocular virtual camera for a digital twin machine room.
Background
In digital twin machine room virtual target tracking applications, a camera accumulates positioning error as it moves and stops under the influence of the complex virtual environment. The camera has six degrees of freedom, so control errors arise easily, and the tracked target drifts away from the center of the camera's imaging field of view. In severe cases the target leaves the imaging field entirely and cannot be imaged, which hampers subsequent work. Therefore, based on images captured by the virtual camera at different moments and on prior knowledge of the targets already present in the digital twin machine room, studying an adaptive positioning method for the digital twin machine room virtual camera is of great significance for solving the problem of the tracked target deviating from the center of the camera's field of view.
Disclosure of Invention
The invention aims to provide an adaptive positioning algorithm for target tracking in a digital twin machine room, solving the problem that a tracked target drifts away from the center of the camera's field of view and can no longer be tracked accurately.
The technical scheme of the invention is as follows:
a self-adaptive positioning algorithm for target tracking of a digital twin machine room is characterized by firstly establishing a monocular mobile vision system model and obtaining internal reference calibration data of a virtual camera. And then extracting image features and matching the two images of the same target point at different moments by aiming at the virtual camera to obtain homography matrixes of the two images. And then solving external parameters of the model by using the internal parameter data of the virtual camera and utilizing a triangulation principle, and calculating three-dimensional information of the target point positions at different moments. And finally, the deflection angle of the digital twin machine room virtual camera is obtained through an algorithm, and the problem of imaging deviation of the tracking target is solved.
The method specifically comprises the following steps:
According to the pinhole imaging model of the camera, accurate calibration of the virtual camera can be achieved with a camera calibration method based on planar square points. Let the homogeneous coordinates of a three-dimensional point on the target plane be X̃ = (X, Y, Z, 1)ᵀ and let the homogeneous coordinates of the corresponding two-dimensional image point be m̃ = (u, v, 1)ᵀ. The projective relationship between the two is

s m̃ = A [R t] X̃   (1)

where s is an arbitrary non-zero scale factor; [R t] is a 3-row, 4-column matrix called the camera extrinsic parameter matrix, in which R is the rotation matrix and t = (t₁, t₂, t₃)ᵀ the translation vector; and A = [αₓ r u₀; 0 α_y v₀; 0 0 1] is the camera intrinsic parameter matrix, where αₓ and α_y are the scale factors of the u-axis and v-axis, (u₀, v₀) are the principal point coordinates, and r is the non-perpendicularity (skew) factor of the u-axis and v-axis. The intrinsic parameter matrix A can be obtained by Zhang's planar calibration method;
A mobile monocular vision measurement system uses a single moving camera to virtually form multiple cameras; taking as an example the two-view vision measurement formed by images captured by the camera at the same position at different moments, the principle of the mobile monocular vision measurement system is analyzed;
Assume the world three-dimensional homogeneous coordinates of a spatial point P are X_W, and that its two-dimensional homogeneous image coordinates in the two images taken at the two moments are p₁ and p₂. From equation (1), the projection equations of the camera at the two moments are

s₁ p₁ = A₁ [I 0] X_W,   s₂ p₂ = A₂ [R t] X_W   (2)

where s₁ and s₂ are the non-zero scale factors of the two views and A₁ and A₂ are the camera intrinsic matrices at moment 1 and moment 2; because the camera undergoes only rigid motion, its internal structural parameters do not change, so A₁ = A₂;
Combining the epipolar geometric constraint, the expressions of the fundamental matrix F and the essential matrix E follow from equation (2) as

F = A₂⁻ᵀ E A₁⁻¹   (3)

E = SR   (4)

where S is the antisymmetric matrix of the translation vector, S = [0 -t₃ t₂; t₃ 0 -t₁; -t₂ t₁ 0];
From equation (3), the fundamental matrix F depends only on the intrinsic parameters of the two views and on the external structural parameters of the system; since the camera performs only rigid motion under the pan-tilt rotation, its intrinsic parameters are unchanged. The essential matrix E of equation (4) can therefore be obtained; E depends only on the external parameters of the vision system, and decomposing E yields the external structural parameters R and t between the two view models of the mobile monocular vision measurement system;
From the epipolar geometric constraint and the definition of the essential matrix, the fundamental matrix has 7 degrees of freedom and rank 2; by extracting and matching the feature points of the two images, the fundamental matrix F between the two views can be obtained with the 8-point algorithm, and combining it with the camera intrinsics gives the essential matrix E; decomposing the matrix E finally yields the external structural parameters R and t between the two views;
(II) Image feature point matching and three-dimensional reconstruction
Using the properties of ORB-FAST feature points, ORB-FAST features are extracted from the images captured by the camera at the same position at different moments that share an overlapping region, and mismatched points in the image pair are eliminated with the RANSAC algorithm, achieving accurate registration of the feature points between the two images; at a given position, because the virtual camera has positioning and control errors, imaging the same target at different moments yields two images with an overlapping region, and ORB-FAST feature points are extracted and matched within that overlap; combining equations (3) and (4), the external structural parameters R and t between the two-view camera coordinate systems are solved with the 8-point algorithm; these extrinsic parameters are essential for reconstructing the three-dimensional points in space;
combining the information of the matching characteristic points of the two images, the perspective projection model of the camera can be utilized to respectively obtain the relational expression between the image coordinate system and the world coordinate system of the matching points between the images at two moments,
as can be derived from equations (5) and (6),
order theH is a matrix of 3×3, the H matrix reflects the mapping relation between the two image feature points, as shown in FIG. 1, H is defined as a homography matrix between two planes, assuming +.>Substitution into equation (7) yields
From equation (8)
Wherein, (u) a ,v a ) Sum (u) b ,v b ) Matching point pairs on the two images;
as can be seen from the formula (9), each pair of feature points can obtain two equations, and the H matrix is a singular matrix with the rank of 8, so that at least 4 pairs of matching points can solve a homography matrix H of two planes;
after solving homography matrix H of two images and external structural parameters R and t of two-view image machine coordinate system, selecting a group of matching feature point pairs in matching feature points in the two images by taking the first image as a reference image p And q, the corresponding spatial feature point is P, and the image coordinates of the spatial feature point in the two corresponding images are p= (x) a ,y a ) And q= (x) b ,y b ) The method comprises the steps of carrying out a first treatment on the surface of the Combining with binocular stereo vision measurement principle, the method is accurate in passingThe calibrated two-view image machine measuring system can accurately calculate the three-dimensional world coordinate system of the space point P by knowing the image coordinates of two points and external structural parameters R and t, and establishes the world coordinate system under the coordinate system of the camera 2, so as to facilitate the calculation of the rotation angle of the camera; let the world three-dimensional coordinates of the spatial point P calculated at this time be p= (X) 1 ,Y 1 ,Z 1 );
Because the virtual camera has positioning errors, the image coordinates of the matched feature points in the two images taken at different moments deviate from each other: of the matched pair p and q, one point lies to the right of the principal point and one to the left; if the camera were accurately positioned, this error would not exist, and with image coordinates p = (x_a, y_a) in image 1, the position and angle of the camera at moment 1 would be identical to those at moment 2, so in image 2 the matching point corresponding to p in image 1 would have image coordinates q' = (x_a, y_a); in fact, because of the error, the position and angle of the camera at moment 2 deviate from those at moment 1;
In the presence of positioning error, the three-dimensional coordinate position is left unchanged, and the shooting attitude of the virtual camera is adjusted by changing its spatial rotation angle so that the target no longer deviates from the center of the image; the spatial rotation angle is obtained from the angular error computed for the camera;
From the above analysis, if the target point is to appear in the image taken at moment 2 at the same image coordinates it has in the image taken at moment 1, the camera at moment 2 should be rotated through the angle ∠q o₂ q';
In practice, with the homography matrix H of the two images obtained from the above analysis, the matching point p' in image 1 corresponding to the point q' = (x_a, y_a) in image 2 can be obtained as

p' = H⁻¹ q' = H⁻¹ (x_a, y_a, 1)ᵀ   (10)

p' and q' are called a virtual matched image point pair, and the corresponding spatial point P' is called a virtual spatial three-dimensional point; likewise, knowing the image coordinates of the corresponding point pair p' and q' and the external structural parameters R and t of the system, and combining the binocular stereo vision measurement principle, the three-dimensional world coordinates of the virtual spatial point P' can be obtained; let the computed value be P' = (X₁', Y₁', Z₁'); because the three-dimensional coordinates of the spatial points are established in the camera-2 coordinate system, ∠q o₂ q' = ∠P o₂ P', so the angle through which camera 2 should be moved is

θ = arccos[ (X₁X₁' + Y₁Y₁' + Z₁Z₁') / (√(X₁² + Y₁² + Z₁²) √(X₁'² + Y₁'² + Z₁'²)) ]   (11)

Evaluating and decomposing expression (11) gives the angle through which the virtual camera should be rotated.
The invention provides an algorithm for solving the three-dimensional pose information of a target point based on a mobile monocular vision measurement system. By computing the three-dimensional pose of the same target point relative to the virtual camera at different moments, it obtains the deflection angle of the virtual camera and solves the problem of the tracked target drifting out of the virtual camera's field of view. Experimental results show that the proposed algorithm has high target positioning accuracy and computational efficiency.
Drawings
The invention is further described below with reference to the drawings and examples.
FIG. 1 is a schematic illustration of the homography constraint between images of the same object taken at two different moments.
Fig. 2 is a schematic diagram of a three-dimensional reconstruction of a mobile object.
Detailed Description
(I) Monocular vision measurement model of the intelligent inspection robot
Because of the virtual camera's positioning and control errors, the imaging position of the tracked target is not fixed. The invention uses monocular mobile vision to collect images of the target from the virtual camera at the same position at different moments, extracts feature points from the two images, computes the pose and angle relative to the camera, and adaptively adjusts the virtual camera's angle to calibrate the imaging position of the tracked target, providing an important basis for subsequent intelligent data analysis and intelligent image inspection.
According to the pinhole imaging model of the camera, accurate calibration of the virtual camera can be achieved with a camera calibration method based on planar square points. Let the homogeneous coordinates of a three-dimensional point on the target plane be X̃ = (X, Y, Z, 1)ᵀ and let the homogeneous coordinates of the corresponding two-dimensional image point be m̃ = (u, v, 1)ᵀ. The projective relationship between the two is

s m̃ = A [R t] X̃   (1)

where s is an arbitrary non-zero scale factor; [R t] is a 3-row, 4-column matrix called the camera extrinsic parameter matrix, in which R is the rotation matrix and t = (t₁, t₂, t₃)ᵀ the translation vector; and A = [αₓ r u₀; 0 α_y v₀; 0 0 1] is the camera intrinsic parameter matrix, where αₓ and α_y are the scale factors of the u-axis and v-axis, (u₀, v₀) are the principal point coordinates, and r is the non-perpendicularity (skew) factor of the u-axis and v-axis. The intrinsic parameter matrix A can be obtained by Zhang's planar calibration method.
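To make this calibration step concrete, the following is a minimal sketch (not taken from the patent) of estimating the intrinsic matrix A with OpenCV's implementation of Zhang's planar method; the board geometry and image file names are assumptions.

```python
import cv2
import numpy as np

# Sketch of Zhang's planar calibration; board size (7x6 inner corners,
# 30 mm squares) and image paths are assumptions, not from the patent.
pattern = (7, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 30.0

obj_pts, img_pts, size = [], [], None
for path in ["view0.png", "view1.png", "view2.png"]:  # hypothetical views
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# A is the 3x3 intrinsic matrix of equation (1); dist holds distortion terms.
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("intrinsic matrix A =\n", A)
```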
A mobile monocular vision measurement system uses a single moving camera to virtually form multiple cameras. The invention takes as an example the two-view vision measurement formed by images captured by the camera at the same position at different moments, and analyzes the principle of the mobile monocular vision measurement system.
Assume the world three-dimensional homogeneous coordinates of a spatial point P are X_W, and that its two-dimensional homogeneous image coordinates in the two images taken at the two moments are p₁ and p₂. From equation (1), the projection equations of the camera at the two moments are

s₁ p₁ = A₁ [I 0] X_W,   s₂ p₂ = A₂ [R t] X_W   (2)

where s₁ and s₂ are the non-zero scale factors of the two views and A₁ and A₂ are the camera intrinsic matrices at moment 1 and moment 2. Because the camera undergoes only rigid motion, its internal structural parameters do not change, so A₁ = A₂.
Combining the epipolar geometric constraint, the expressions of the fundamental matrix F and the essential matrix E follow from equation (2) as

F = A₂⁻ᵀ E A₁⁻¹   (3)

E = SR   (4)

where S is the antisymmetric matrix of the translation vector, S = [0 -t₃ t₂; t₃ 0 -t₁; -t₂ t₁ 0].
as can be seen from the formula (3), the basis matrix F is only related to the internal parameters of the two cameras and the external structural parameters of the system, and the cameras only do rigid motion due to rotation of the pan-tilt, so that the internal parameters of the cameras are unchanged. Thus, an essential matrix E of formula (4) can be obtained, and it can be seen that E is related to only external parameters of the vision system, and E can be decomposed to find external structural parameters R and t between two view models of the mobile monocular vision measurement system.
From the epipolar geometric constraint and the definition of the essential matrix, the fundamental matrix has 7 degrees of freedom and rank 2. By extracting and matching the feature points of the two images, the fundamental matrix F between the two views can be obtained with the 8-point algorithm, and combining it with the camera intrinsics gives the essential matrix E. Decomposing the matrix E finally yields the external structural parameters R and t between the two views.
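As an illustration of this two-view step (a sketch under the assumption that matched pixel coordinates pts1, pts2 and the intrinsic matrix A are already available), the 8-point estimate of F and the decomposition of E into R and t can be done with OpenCV:

```python
import cv2

def relative_pose(pts1, pts2, A):
    """Sketch: recover R, t between two views of the moving camera.
    pts1, pts2: Nx2 arrays of matched pixel coordinates (assumed given)."""
    # Fundamental matrix F by the (normalized) 8-point algorithm with RANSAC.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    # Essential matrix from equation (3) with A1 = A2 = A: E = A^T F A.
    E = A.T @ F @ A
    # Decompose E (equation (4), E = SR) into the extrinsics R and t.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, A, mask=mask)
    return R, t
```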
(II) Image feature point matching and three-dimensional reconstruction
Using the properties of ORB-FAST feature points (namely, invariance to image rotation, scale change, illumination change, and the like), the invention extracts ORB-FAST features from the images captured by the camera at the same position at different moments that share an overlapping region, and eliminates mismatched points in the image pair with the RANSAC algorithm, achieving accurate registration of the feature points between the two images. As shown in FIG. 1, at a given position, because the virtual camera has positioning and control errors, imaging the same target at different moments yields two images with an overlapping region, and ORB-FAST feature points are extracted and matched within that overlap. Combining equations (3) and (4), the external structural parameters R and t between the two-view camera coordinate systems can be solved with the 8-point algorithm analyzed above; these extrinsic parameters are essential for reconstructing the three-dimensional points in space.
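A minimal sketch of this extraction-and-matching stage (function name and parameter values are assumptions): ORB features matched by Hamming distance, with RANSAC rejecting the mismatches while fitting the homography used later in equation (7).

```python
import cv2
import numpy as np

def match_orb(img1, img2):
    """Sketch: ORB feature matching between two overlapping images,
    with RANSAC-based rejection of mismatched pairs."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    # Binary ORB descriptors are compared with the Hamming norm.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)

    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # RANSAC keeps only the pairs consistent with a single homography H.
    H, mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers], H
```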
Combining the information of the matched feature points of the two images, the perspective projection model of the camera gives the relationships between the image coordinates and the world coordinates of the matched points at the two moments:

s_a p_a = K_a [R_a T_a] X_W   (5)

s_b p_b = K_b [R_b T_b] X_W   (6)

From equations (5) and (6),

p_b ≅ H p_a   (7)

where H = K_b [R_b T_b] [R_a T_a]⁻¹ K_a⁻¹ is a 3×3 matrix reflecting the mapping between the feature points of the two images; as shown in FIG. 1, H is defined as the homography matrix between the two planes. Substituting p_a = (u_a, v_a, 1)ᵀ and p_b = (u_b, v_b, 1)ᵀ into equation (7) yields

s (u_b, v_b, 1)ᵀ = H (u_a, v_a, 1)ᵀ   (8)

and eliminating the scale factor gives

u_b = (h₁₁ u_a + h₁₂ v_a + h₁₃) / (h₃₁ u_a + h₃₂ v_a + h₃₃)
v_b = (h₂₁ u_a + h₂₂ v_a + h₂₃) / (h₃₁ u_a + h₃₂ v_a + h₃₃)   (9)

where (u_a, v_a) and (u_b, v_b) are a matched point pair in the two images. From equation (9), each pair of feature points yields two equations, and H is defined only up to scale, so it has 8 degrees of freedom; therefore at least 4 matched point pairs are needed to solve the homography matrix H between the two planes.
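Equation (9) can be rearranged into a linear system and solved directly; the following is a sketch of the standard direct linear transform (DLT), assuming at least 4 matched pairs are supplied:

```python
import numpy as np

def homography_dlt(pts_a, pts_b):
    """Sketch: solve the homography H (up to scale) from >= 4 point pairs.
    Each pair contributes the two linearized equations of (9)."""
    rows = []
    for (ua, va), (ub, vb) in zip(pts_a, pts_b):
        rows.append([ua, va, 1, 0, 0, 0, -ub * ua, -ub * va, -ub])
        rows.append([0, 0, 0, ua, va, 1, -vb * ua, -vb * va, -vb])
    M = np.asarray(rows, dtype=float)
    # The 8-degree-of-freedom solution is the right singular vector
    # associated with the smallest singular value of M.
    _, _, Vt = np.linalg.svd(M)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the free overall scale
```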
After the homography matrix H of the two images and the external structural parameters R and t between the two-view camera coordinate systems have been solved, take the first image as the reference image and select one pair of matched feature points, p and q, from the matched features of the two images; the corresponding spatial feature point is P, and its image coordinates in the two images are p = (x_a, y_a) and q = (x_b, y_b), as shown in FIG. 2. Combining the binocular stereo vision measurement principle, the accurately calibrated two-view camera measurement system can compute the three-dimensional world coordinates of the spatial point P from the image coordinates of the two points and the external structural parameters R and t. The world coordinate system is established in the coordinate system of camera 2 to facilitate computing the camera's rotation angle. Let the world three-dimensional coordinates of the spatial point P computed at this stage be P = (X₁, Y₁, Z₁).
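A sketch of this reconstruction step (assuming the matched coordinates p and q, the intrinsics A, and the extrinsics R, t recovered above): standard two-view triangulation, with the result then expressed in the camera-2 frame as in the text.

```python
import cv2
import numpy as np

def triangulate_point(p, q, A, R, t):
    """Sketch: reconstruct P = (X1, Y1, Z1) from its two projections.
    View 1 is the reference pose; view 2 has extrinsics [R t]."""
    P1 = A @ np.hstack([np.eye(3), np.zeros((3, 1))])  # projection, view 1
    P2 = A @ np.hstack([R, t.reshape(3, 1)])           # projection, view 2
    X = cv2.triangulatePoints(P1, P2,
                              np.float32(p).reshape(2, 1),
                              np.float32(q).reshape(2, 1))
    X = (X[:3] / X[3]).ravel()   # homogeneous -> Euclidean, camera-1 frame
    return R @ X + t.ravel()     # expressed in the camera-2 frame
```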
Because of the virtual camera's positioning error, the image coordinates of the matched feature points in the two images taken at different moments deviate from each other: as shown in FIG. 4, of the matched pair p and q, one point lies to the right of the principal point and one to the left. If the camera were accurately positioned, this error would not exist; with image coordinates p = (x_a, y_a) in image 1, the position and angle of the camera at moment 1 would be identical to those at moment 2, so in image 2 the matching point corresponding to p in image 1 would have image coordinates q' = (x_a, y_a). In fact, because of the error, the position and angle of the camera at moment 2 deviate from those at moment 1.
According to the invention, in the presence of positioning error, the three-dimensional coordinate position is left unchanged, and the shooting attitude of the virtual camera is adjusted by changing its spatial rotation angle so that the target no longer deviates from the center of the image. The spatial rotation angle is obtained from the angular error computed for the camera.
From the above analysis, if the target point is to appear in the image taken at moment 2 at the same image coordinates it has in the image taken at moment 1, the camera at moment 2 should be rotated through the angle ∠q o₂ q'.
In practice, with the homography matrix H of the two images obtained from the above analysis, the matching point p' in image 1 corresponding to the point q' = (x_a, y_a) in image 2 can be obtained as

p' = H⁻¹ q' = H⁻¹ (x_a, y_a, 1)ᵀ   (10)

We call p' and q' a virtual matched image point pair, and the corresponding spatial point P' a virtual spatial three-dimensional point. Likewise, knowing the image coordinates of the corresponding point pair p' and q', and the external structural parameters R and t of the system, and combining the binocular stereo vision measurement principle, the three-dimensional world coordinates of the virtual spatial point P' can be obtained; let the computed value be P' = (X₁', Y₁', Z₁'). From the analysis of FIG. 2, because the three-dimensional coordinates of the spatial points are established in the camera-2 coordinate system, ∠q o₂ q' = ∠P o₂ P', so the angle through which camera 2 should be moved is

θ = arccos[ (X₁X₁' + Y₁Y₁' + Z₁Z₁') / (√(X₁² + Y₁² + Z₁²) √(X₁'² + Y₁'² + Z₁'²)) ]   (11)

Evaluating and decomposing expression (11) gives the angle through which the virtual camera should be rotated.
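Putting equations (10) and (11) together, a hedged end-to-end sketch: map q' back through the homography to the virtual match p', triangulate the virtual point P', and take the angle between the rays from the camera-2 center. The helper triangulate is assumed to behave like triangulate_point above.

```python
import numpy as np

def deflection_angle(q_prime, H, P, triangulate):
    """Sketch of equations (10) and (11): deflection angle of camera 2.
    q_prime: (x_a, y_a) in image 2; H: homography image 1 -> image 2;
    P: (X1, Y1, Z1) in the camera-2 frame; triangulate: assumed callable
    returning the 3D point for an image pair, also in the camera-2 frame."""
    # Equation (10): virtual matching point p' in image 1.
    qh = np.array([q_prime[0], q_prime[1], 1.0])
    ph = np.linalg.inv(H) @ qh
    p_prime = ph[:2] / ph[2]

    # Virtual spatial point P' from the virtual pair (p', q').
    P_virt = triangulate(p_prime, q_prime)

    # Equation (11): angle between the rays o2->P and o2->P'.
    cosang = P @ P_virt / (np.linalg.norm(P) * np.linalg.norm(P_virt))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```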
Experimental procedure:
The simulation software used in the experiment is Visual C++ 2019; the host computer has a 3.6 GHz main frequency and 32 GB of memory and runs 32-bit Windows 10. In the experiment, the intrinsic calibration of the virtual camera is performed with Zhang Zhengyou's planar-target camera calibration method, as shown in Table 1. The proposed algorithm is then used to analyze images acquired by the virtual camera at the same place at two different moments: the image taken first serves as the reference image, and the angular offset of the second image relative to the reference image is solved with the proposed algorithm. The rotation angle of the virtual camera is adjusted automatically according to the computed deflection angle, which realizes the adjustment of the virtual camera's viewing angle toward the same target at different moments and reduces the large deflections of the virtual camera caused by positioning errors. Following this experimental analysis, computing on the two images shown in FIG. 2 gives θ = 7.8 degrees. The experimental results show that the proposed algorithm, which exploits the geometric constraints among images taken at different moments, performs well in the adaptive calibration of the virtual camera angle and exhibits a degree of robustness in the complex environment of the digital twin machine room.
Table 1: Intrinsic calibration parameters of the camera
Analysis of experimental results
In the method provided by the invention, each rotation of the virtual camera in three-dimensional space is small, so the camera's deflection angle is never large. Accordingly, the proposed algorithm adopts the ORB-FAST fast feature extraction and matching method to achieve rapid matching of the features of the two images and computation of their spatial positions. For comparison with other classical feature extraction algorithms, namely the SIFT and SURF feature-point matching and positioning methods, each of the three algorithms was run on the digital twin machine room electric cabinet instrument images captured by the virtual camera, and the average target positioning time over the 50 captured instrument images was computed for each. The proposed algorithm has the highest processing speed; it is particularly suitable for adaptive calibration of the virtual camera angle in complex environments, has high calibration efficiency, and can provide important technical support for target tracking by the digital twin machine room virtual camera.
Table 2: Comparison of target detection time for different algorithms

Algorithm                              Run time (seconds)
SIFT feature matching positioning      57
SURF feature matching positioning      26
Algorithm proposed by the invention    19
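For reference, the averages in Table 2 could be measured with a harness along these lines (a sketch: the image-pair list and the three pipeline callables are assumptions, not artifacts of the patent):

```python
import time

def average_runtime(locate, image_pairs, repeats=3):
    """Sketch: average per-image-pair positioning time for one pipeline.
    locate: callable running one algorithm end-to-end on an image pair."""
    start = time.perf_counter()
    for _ in range(repeats):
        for ref_img, cur_img in image_pairs:  # e.g. 50 instrument image pairs
            locate(ref_img, cur_img)
    return (time.perf_counter() - start) / (repeats * len(image_pairs))

# Hypothetical usage mirroring Table 2:
# for name, fn in [("SIFT", sift_locate), ("SURF", surf_locate), ("ORB-FAST", orb_locate)]:
#     print(name, average_runtime(fn, image_pairs), "s")
```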
Aiming at the tracking-target positioning error of the digital twin machine room virtual camera, the invention takes image feature extraction and matching as its starting point, analyzes the geometric constraints between two-view images, and proposes an ORB-FAST-feature-based algorithm for solving three-dimensional image points, calibrating the positioning error of the virtual camera and reducing the inaccuracy in target positioning caused by the camera's accumulated motion errors. Finally, comparison of the proposed algorithm with other classical methods shows that it has higher processing efficiency, successfully achieves adaptive calibration of the virtual camera angle in the complex digital twin machine room environment, and provides a strong guarantee for target tracking by the digital twin machine room virtual camera.

Claims (2)

1. An adaptive positioning algorithm for digital twin machine room target tracking, characterized in that: a monocular mobile vision system model is established and the intrinsic calibration data of the virtual camera are acquired; image features are then extracted from two images of the same target point taken by the virtual camera at different moments and matched to obtain the homography matrix of the two images; the extrinsic parameters of the model are then solved from the virtual camera's intrinsic data using the triangulation principle, and the three-dimensional information of the target point at the different moments is calculated; finally, the deflection angle of the digital twin machine room virtual camera is obtained by the algorithm, solving the problem of imaging deviation of the tracked target.
2. The adaptive positioning algorithm for digital twin machine room target tracking according to claim 1, characterized in that the method specifically comprises the following steps:
According to the pinhole imaging model of the camera, accurate calibration of the virtual camera can be achieved with a camera calibration method based on planar square points. Let the homogeneous coordinates of a three-dimensional point on the target plane be X̃ = (X, Y, Z, 1)ᵀ and let the homogeneous coordinates of the corresponding two-dimensional image point be m̃ = (u, v, 1)ᵀ. The projective relationship between the two is

s m̃ = A [R t] X̃   (1)

where s is an arbitrary non-zero scale factor; [R t] is a 3-row, 4-column matrix called the camera extrinsic parameter matrix, in which R is the rotation matrix and t = (t₁, t₂, t₃)ᵀ the translation vector; and A = [αₓ r u₀; 0 α_y v₀; 0 0 1] is the camera intrinsic parameter matrix, where αₓ and α_y are the scale factors of the u-axis and v-axis, (u₀, v₀) are the principal point coordinates, and r is the non-perpendicularity (skew) factor of the u-axis and v-axis. The intrinsic parameter matrix A can be obtained by Zhang's planar calibration method;
a mobile monocular vision measurement system uses a single moving camera to virtually form multiple cameras; taking as an example the two-view vision measurement formed by images captured by the camera at the same position at different moments, the principle of the mobile monocular vision measurement system is analyzed;
assume the world three-dimensional homogeneous coordinates of a spatial point P are X_W, and that its two-dimensional homogeneous image coordinates in the two images taken at the two moments are p₁ and p₂; from equation (1), the projection equations of the camera at the two moments are

s₁ p₁ = A₁ [I 0] X_W,   s₂ p₂ = A₂ [R t] X_W   (2)

where s₁ and s₂ are the non-zero scale factors of the two views and A₁ and A₂ are the camera intrinsic matrices at moment 1 and moment 2; because the camera undergoes only rigid motion, its internal structural parameters do not change, so A₁ = A₂;
combining the epipolar geometric constraint, the expressions of the fundamental matrix F and the essential matrix E follow from equation (2) as

F = A₂⁻ᵀ E A₁⁻¹   (3)

E = SR   (4)

where S is the antisymmetric matrix of the translation vector, S = [0 -t₃ t₂; t₃ 0 -t₁; -t₂ t₁ 0];
from equation (3), the fundamental matrix F depends only on the intrinsic parameters of the two views and on the external structural parameters of the system; since the camera performs only rigid motion under the pan-tilt rotation, its intrinsic parameters are unchanged; the essential matrix E of equation (4) can therefore be obtained; E depends only on the external parameters of the vision system, and decomposing E yields the external structural parameters R and t between the two view models of the mobile monocular vision measurement system;
from the epipolar geometric constraint and the definition of the essential matrix, the fundamental matrix has 7 degrees of freedom and rank 2; by extracting and matching the feature points of the two images, the fundamental matrix F between the two views can be obtained with the 8-point algorithm, and combining it with the camera intrinsics gives the essential matrix E; decomposing the matrix E finally yields the external structural parameters R and t between the two views;
(II) Image feature point matching and three-dimensional reconstruction
using the properties of ORB-FAST feature points, ORB-FAST features are extracted from the images captured by the camera at the same position at different moments that share an overlapping region, and mismatched points in the image pair are eliminated with the RANSAC algorithm, achieving accurate registration of the feature points between the two images; at a given position, because the virtual camera has positioning and control errors, imaging the same target at different moments yields two images with an overlapping region, and ORB-FAST feature points are extracted and matched within that overlap; combining equations (3) and (4), the external structural parameters R and t between the two-view camera coordinate systems are solved with the 8-point algorithm; these extrinsic parameters are essential for reconstructing the three-dimensional points in space;
combining the information of the matched feature points of the two images, the perspective projection model of the camera gives the relationships between the image coordinates and the world coordinates of the matched points at the two moments:

s_a p_a = K_a [R_a T_a] X_W   (5)

s_b p_b = K_b [R_b T_b] X_W   (6)

from equations (5) and (6),

p_b ≅ H p_a   (7)

where H = K_b [R_b T_b] [R_a T_a]⁻¹ K_a⁻¹ is a 3×3 matrix reflecting the mapping between the feature points of the two images; as shown in FIG. 1, H is defined as the homography matrix between the two planes; substituting p_a = (u_a, v_a, 1)ᵀ and p_b = (u_b, v_b, 1)ᵀ into equation (7) yields

s (u_b, v_b, 1)ᵀ = H (u_a, v_a, 1)ᵀ   (8)

and eliminating the scale factor gives

u_b = (h₁₁ u_a + h₁₂ v_a + h₁₃) / (h₃₁ u_a + h₃₂ v_a + h₃₃)
v_b = (h₂₁ u_a + h₂₂ v_a + h₂₃) / (h₃₁ u_a + h₃₂ v_a + h₃₃)   (9)

where (u_a, v_a) and (u_b, v_b) are a matched point pair in the two images; from equation (9), each pair of feature points yields two equations, and H is defined only up to scale, so it has 8 degrees of freedom; therefore at least 4 matched point pairs are needed to solve the homography matrix H between the two planes;
after the homography matrix H of the two images and the external structural parameters R and t between the two-view camera coordinate systems have been solved, take the first image as the reference image and select one pair of matched feature points, p and q, from the matched features of the two images; the corresponding spatial feature point is P, and its image coordinates in the two images are p = (x_a, y_a) and q = (x_b, y_b); combining the binocular stereo vision measurement principle, the accurately calibrated two-view camera measurement system can compute the three-dimensional world coordinates of the spatial point P from the image coordinates of the two points and the external structural parameters R and t; the world coordinate system is established in the coordinate system of camera 2 to facilitate computing the camera's rotation angle; let the world three-dimensional coordinates of the spatial point P computed at this stage be P = (X₁, Y₁, Z₁);
Because the virtual camera has positioning errors, the image coordinates of the matching feature point images of the two images shot at different moments have deviation; matching the characteristic point pairs p and q, one on the right side of the principal point and one on the left side of the principal point; assuming that the camera is accurately positioned, the above error is not present, and the image coordinate is set to p= (x) in the image 1 a ,y a ) Then the position and angle of the camera at time 1 should be the same as the position and angle of the camera at time 2, so in image 2, the image coordinates of the matching point corresponding to its p-point in image 1 should be q' = (x) a ,y a ) The method comprises the steps of carrying out a first treatment on the surface of the In fact, due to the error, the position and angle of the camera at time 2 deviate from the position and angle of the camera at time 1;
in the presence of positioning error, the three-dimensional coordinate position is left unchanged, and the shooting attitude of the virtual camera is adjusted by changing its spatial rotation angle so that the target no longer deviates from the center of the image; the spatial rotation angle is obtained from the angular error computed for the camera;
from the above analysis, if the target point is to appear in the image taken at moment 2 at the same image coordinates it has in the image taken at moment 1, the camera at moment 2 should be rotated through the angle ∠q o₂ q';
in practice, with the homography matrix H of the two images obtained from the above analysis, the matching point p' in image 1 corresponding to the point q' = (x_a, y_a) in image 2 can be obtained as

p' = H⁻¹ q' = H⁻¹ (x_a, y_a, 1)ᵀ   (10)

p' and q' are called a virtual matched image point pair, and the corresponding spatial point P' is called a virtual spatial three-dimensional point; likewise, knowing the image coordinates of the corresponding point pair p' and q', and the external structural parameters R and t of the system, and combining the binocular stereo vision measurement principle, the three-dimensional world coordinates of the virtual spatial point P' can be obtained; let the computed value be P' = (X₁', Y₁', Z₁'); because the three-dimensional coordinates of the spatial points are established in the camera-2 coordinate system, ∠q o₂ q' = ∠P o₂ P', so the angle through which camera 2 should be moved is

θ = arccos[ (X₁X₁' + Y₁Y₁' + Z₁Z₁') / (√(X₁² + Y₁² + Z₁²) √(X₁'² + Y₁'² + Z₁'²)) ]   (11)

Evaluating and decomposing expression (11) gives the angle through which the virtual camera should be rotated.
CN202310343093.7A, filed 2023-04-03 (priority date 2023-04-03): Self-adaptive positioning algorithm for digital twin machine room target tracking. Status: Pending. Published as CN116563336A.

Priority Applications (1)

Application Number: CN202310343093.7A; Priority Date: 2023-04-03; Filing Date: 2023-04-03; Title: Self-adaptive positioning algorithm for digital twin machine room target tracking

Publications (1)

Publication Number: CN116563336A; Publication Date: 2023-08-08

Family

ID: 87485154

Family Applications (1)

Application Number: CN202310343093.7A; Title: Self-adaptive positioning algorithm for digital twin machine room target tracking; Priority Date: 2023-04-03; Filing Date: 2023-04-03

Country Status (1)

Country: CN; Link: CN116563336A

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103528571A (en) * 2013-10-12 2014-01-22 上海新跃仪表厂 Monocular stereo vision relative position/pose measuring method
CN104596502A (en) * 2015-01-23 2015-05-06 浙江大学 Object posture measuring method based on CAD model and monocular vision
CN110728715A (en) * 2019-09-06 2020-01-24 南京工程学院 Camera angle self-adaptive adjusting method of intelligent inspection robot
CN114862969A (en) * 2022-05-27 2022-08-05 国网江苏省电力有限公司电力科学研究院 Onboard holder camera angle self-adaptive adjusting method and device of intelligent inspection robot
KR20220117626A (en) * 2021-02-17 2022-08-24 네이버랩스 주식회사 Method and system for determining camera pose



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2023-08-08)