CN107886541B - Real-time monocular moving target pose measuring method based on back projection method - Google Patents
- Publication number: CN107886541B (application CN201711111369A)
- Authority
- CN
- China
- Prior art keywords
- image
- pose
- point
- matching
- points
- Prior art date
- Legal status: Active (the status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T7/75 — Determining position or orientation of objects or cameras using feature-based methods involving models
- G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/11 — Region-based segmentation
- G06T7/136 — Segmentation; edge detection involving thresholding
- G06T7/248 — Analysis of motion using feature-based methods, e.g. tracking of corners or segments, involving reference images or patches
- G06T2207/10016 — Video; image sequence
- G06T2207/10048 — Infrared image
Abstract
A real-time monocular pose measurement method for a moving target based on the back projection method. A single camera acquires infrared sequence feature images of the moving target under a dynamic interference environment, and the indistinguishable infrared-LED marker points are extracted. A matching-detection state machine is designed that obtains the optimal image-object matching of the marker points using a back-projection gray-accumulation model; an efficient fast-tracking state machine is designed that computes the pose parameters in real time; and a state-switching mechanism based on precision evaluation is designed that switches effectively between the matching-detection state and the fast-tracking state, guaranteeing both the real-time performance and the accuracy of the pose solution.
Description
Technical Field
The invention belongs to the technical field of computer vision measurement and relates to a method for acquiring the position and attitude parameters of a moving target in real time with a single camera, using the back projection method.
Background
Real-time measurement of the position and attitude of a moving target is a research frontier of international computer vision. Acquiring the three-dimensional pose parameters of a moving target, especially dynamically and in real time, makes it possible to visually restore the true position and attitude of the target. This plays an important role in industrial design, aerospace, accident analysis, and related fields, and is especially valuable for the automatic docking of spacecraft with space stations, the precision control of guidance systems, and the attitude monitoring of high-speed vehicles.
At present various three-dimensional pose measurement approaches exist, including GPS and dynamic attitude-heading reference systems. GPS-like techniques require sky visibility and are costly, whereas real-time monocular pose measurement based on computer vision is non-contact and highly accurate and needs no view of the sky. Terui F. et al. compute the target pose with an ICP (Iterative Closest Point) matching algorithm; matching a large number of feature points against the target model makes the algorithm inefficient and unable to meet real-time requirements. Kelsey J. M. et al. (USA) proposed a monocular pose measurement method based on target tracking that extracts edge features of the target image and solves the pose parameters in real time by matching a three-dimensional model to those edge features; the accuracy is high and the computation load is low, but solving the initial pose for tracking is difficult. Existing methods have advanced pose measurement research to some extent, but three problems urgently remain: (1) how to find the correct image-object matching of the marker points when the markers are indistinguishable, so as to provide an initial pose for tracking; (2) how to exploit the continuity between frames of a video sequence to track the marker points quickly, avoid repeated matching computation, and solve the pose dynamically in real time; (3) how to design a tracking evaluation mechanism that restarts detection when tracking fails, ensuring accurate and stable pose solving.
Disclosure of Invention
The invention provides a real-time monocular pose measurement method for moving targets based on the back projection method, in order to solve the problems of finding the image-object matching of the marker points and of measuring the pose parameters rapidly and in real time, while guaranteeing the real-time performance and stability of the pose solution.
Technical scheme of the invention
A real-time monocular pose measurement method for a moving target based on the back projection method. A single camera acquires infrared sequence feature images of the moving target under a dynamic interference environment, and the indistinguishable infrared-LED marker points are extracted. A matching-detection state machine is designed that obtains the optimal image-object matching of the marker points using a back-projection gray-accumulation model; an efficient fast-tracking state machine is designed that computes the pose parameters in real time; and a state-switching mechanism based on precision evaluation is designed that switches effectively between the matching-detection state and the fast-tracking state, guaranteeing both the real-time performance and the accuracy of the pose solution.
The method comprises the following specific steps:
Step 1, marker-point image-object matching detection based on the back projection method
To find the optimal image-object matching of the marker points, a matching detection method based on the back projection method is proposed. The infrared-LED marker points are extracted and identified on the image, and the centroid coordinates of the light spots are computed; 3 image-space marker points are blindly selected and enumeratively matched against the known object-space marker points, generating many image-object point combinations as matching samples; back intersection computes a pose model for each sample, the object points are back-projected onto the image according to that model, and the sum of gray values at the back-projected image points is computed. A correct match guarantees a correct pose model, whose back-projected image points necessarily fall inside the light-spot regions, so the match with the maximum gray-value sum is selected as the best. The image-object matching of all marker points is then determined from the pose model of the best sample, back intersection is performed again, and the pose parameters are accurately computed.
Step 2, fast tracking state
Image-object matching detection of the marker points is inefficient and time-consuming. To save matching time, a fast tracking method is adopted, so that the system spends most of its running time in the tracking state, which is fast and efficient enough for real-time pose measurement. Because the image-object correspondence of the marker points changes very little between two adjacent frames of the sequence, the fast tracking method tracks the light spots of the current frame using the matched light-spot positions of the previous frame, quickly recovering the image-object matching of the current frame.
Step 3, state-switching mechanism based on precision evaluation
Tracking is fast, but an error in one frame would corrupt all subsequent tracking. To compensate for this instability, a state-switching mechanism based on precision evaluation is adopted. The pose solution of every frame in the tracking state is evaluated for overall precision; when the measurement error σ_σ exceeds the error threshold, the marker-point matching-detection state machine is restarted, avoiding the serious problem of one frame error propagating into all subsequent tracking.
Step 4, real-time recovery of the target's three-dimensional attitude and position
The three-dimensional attitude and position of the target are recovered in real time using OpenGL, according to the accurately solved pose parameters of the sequence images.
The image-object matching of all marker points is determined from the pose model of the best sample, back intersection is performed, and the pose parameters are accurately computed, in the following specific steps:
(1) Marker-point identification based on circle confidence
The camera acquires images of the infrared-LED marker points; the visible-light background below 850 nm is blocked, and OTSU threshold segmentation based on the maximum between-class variance is applied, so that the extracted marker points and the background have the minimum misclassification probability. The marker points are then identified by a circle-confidence method. First, the contour chains are extracted from the binary image; the two-dimensional (p+q)-order moment of an image f(x, y) over a contour chain with support Ω is defined as:

m_pq = Σ_{(x,y)∈Ω} x^p · y^q · f(x, y)    (1)

where (x, y) are the pixel coordinates and f(x, y) is the pixel gray value; when the orders p and q are both 0, m_00 is the area of the contour chain.
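As a concrete sketch of the (p+q)-order moment of Eq. (1), in pure Python (the contour chain and gray-value lookup below are illustrative stand-ins for Ω and f):

```python
def moment(contour, gray, p, q):
    """Two-dimensional (p+q)-order moment of a contour chain (Eq. 1):
    m_pq = sum over (x, y) in Omega of x**p * y**q * f(x, y)."""
    return sum((x ** p) * (y ** q) * gray[(x, y)] for (x, y) in contour)

# On a binary image (f = 1 on the chain), m_00 reduces to the number of
# pixels on the chain, i.e. the "area" of the contour chain.
chain = [(0, 0), (1, 0), (0, 1), (1, 1)]
binary = {pt: 1 for pt in chain}
m00 = moment(chain, binary, 0, 0)
```

In practice the contour chains would come from a contour-following pass over the OTSU-segmented binary image; only the moment formula itself is fixed by the text.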
Considering the camera focal length f, the pixel size ω, the shooting distance D, and the LED diameter d, the imaging area S of the LED halo on the image is expressed as (a pinhole estimate, with spot diameter f·d/(D·ω) in pixels):

S = π · (f·d / (2·D·ω))²    (2)
considering the divergence of the LED halo, the area signature with adaptivity is defined as:
0.5S<m00<5S (3)
the circle confidence θ is defined as:
wherein L is the perimeter of the profile chain; the larger the circle confidence coefficient, the closer the shape is to the circle, the circle confidence coefficient of a perfect circle is 1, and the circle confidence coefficient of non-target noise and interference is mostly below 0.1; circle confidence recognition is defined as:
0.5<θ≤1 (5)
the area characteristic and the circle confidence coefficient are identified, noise and interference are filtered, and the mark points are correctly identified;
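The area criterion (3) and the circle-confidence window (5) can be sketched as one filter; the circularity form 4π·m_00/L² used here is an assumption, chosen because it equals 1 for a perfect circle as the text requires:

```python
import math

def circle_confidence(area, perimeter):
    """Circle confidence theta (Eq. 4, assumed 4*pi*A/L**2 form):
    1 for a perfect circle, small for stringy noise contours."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def is_marker(area, perimeter, S):
    """Accept a contour chain only if it passes both the adaptive area
    window (Eq. 3) and the circle-confidence window (Eq. 5)."""
    return 0.5 * S < area < 5.0 * S and 0.5 < circle_confidence(area, perimeter) <= 1.0

# A nearly circular spot passes; an elongated blob of similar area is rejected.
round_spot = is_marker(300.0, 65.0, 300.0)      # theta ~ 0.89
stringy_blob = is_marker(400.0, 200.0, 300.0)   # theta ~ 0.13
```

The combined filter is cheap (one pass over the contour areas and perimeters), which matters since it runs on every frame before matching.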
(2) Enumerative matching and pose-model computation
Three pairs are blindly selected from the n pairs of object-space marker points for enumerative matching, generating many matching combinations Ω_i; for each combination a pose model M_j is computed by the back intersection method.
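Assuming the enumeration pairs 3 blindly chosen image points with every ordered triple of the n object points (names below are illustrative), it can be sketched as:

```python
from itertools import permutations

def enumerate_matches(image_pts, object_pts, k=3):
    """Yield every candidate matching combination Omega_i: the k blindly
    selected image points paired with an ordered selection of k object points."""
    trio = image_pts[:k]                      # blind selection of 3 image points
    for objs in permutations(object_pts, k):  # ordered, so P(n, k) combinations
        yield list(zip(trio, objs))

image_pts = ["p0", "p1", "p2", "p3", "p4", "p5"]
object_pts = ["N0", "N1", "N2", "N3", "N4", "N5"]
samples = list(enumerate_matches(image_pts, object_pts))  # P(6, 3) = 120 samples
```

Each Ω_i would then be fed to back intersection to produce its pose model M_j.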
(3) Screening the best match by back projection
To screen out the best match Ω* and its pose parameters M*, the correctness of each match is checked by back-projecting the object points. According to the matching Ω_i and its pose model M_j, the n object points are back-projected onto the image. If the pose model M_j is correct, all n object points are back-projected into the bright marker light-spot regions and the gray sum of the n back-projected points is large; hence the maximum total gray value of the back-projected points identifies the best match Ω* and its pose model M*.
The maximum back-projection gray-value sum is expressed as:

(Ω*, M*) = argmax over (Ω_i, M_j) of Σ_{k=1}^{n} g(u_k)    (6)

where Ω_i is a candidate image-object matching, M_j = (R, T) is the pose model obtained from Ω_i by back intersection, X_o are the object-space coordinates of a marker point, u is its back-projected image-space coordinate, g(u) is the gray value at image point u, and n is the number of marker points; when u falls outside the image, g(u) = 0. The image-point coordinates follow the pinhole projection:

(x_c, y_c, z_c)ᵀ = R·X_o + T,  u = f·x_c/z_c + c_x,  v = f·y_c/z_c + c_y    (7)

The initially obtained best pose model M* is computed by back intersection from only 3 object points, so its accuracy is low and a second, accurate pose computation is needed. Using the best image-object combination Ω* and its pose model M*, the n object-space marker points are back-projected again; the marker-light image point closest to each back-projected point is taken as the corresponding image point of that marker, completing the one-to-one matching of all n object points. A second back intersection over the n object points then yields more accurate pose parameters, the rotation parameter R and the displacement parameter T.
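A minimal sketch of the back-projection scoring of Eq. (6), under an assumed pinhole model with focal length f and principal point (cx, cy); the intrinsics and the gray-lookup function are illustrative:

```python
def back_project(pose, X, f, cx, cy):
    """Project object point X into the image under pose = (R, T):
    x_c = R @ X + T, then u = f*x/z + cx, v = f*y/z + cy (Eq. 7 form)."""
    R, T = pose
    xc = [sum(R[i][j] * X[j] for j in range(3)) + T[i] for i in range(3)]
    return f * xc[0] / xc[2] + cx, f * xc[1] / xc[2] + cy

def gray_sum(pose, object_pts, gray, f, cx, cy):
    """Sum of gray values at the back-projected marker positions (Eq. 6);
    gray(u, v) is expected to return 0 outside the image."""
    return sum(gray(int(round(u)), int(round(v)))
               for u, v in (back_project(pose, X, f, cx, cy) for X in object_pts))
```

The best candidate is then `max` over the (Ω_i, M_j) samples keyed by `gray_sum`, mirroring the argmax of Eq. (6).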
The fast tracking method tracks the light spots of the current frame using the matched light-spot positions of the previous frame; the image-object matching of the current frame is obtained quickly in the following steps:
defining effective image points as marker light outline chain gravity center points subjected to feature identification; the previous frame has n valid points, and the ith image point is marked as Pi(xi,yi) The corresponding object point is Ni(ii) a The frame has m effective points, the jth image point is marked as Pj(xj,yj) (ii) a Finding P by shortest distancejAnd PiCorresponding to each other; the shortest distance expression:
MIN[DIST(Pj,Pi)|i=0,1,...n-1] (8)
i=i*the minimum distance is obtained, the image point P of the framej(xj,yj) Corresponding to the image point of the previous frameThereby obtaining the image-object matching relation of the frame: this frame image point Pj(xj,yj) Corresponding point Ni(ii) a The quick matching of the image object space of the m effective mark points is completed in sequence; the rapid tracking method greatly improves the measurement efficiency, and canAnd realizing the real-time pose resolving of the high-speed moving target.
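A sketch of the shortest-distance inheritance of Eq. (8); the point coordinates and object identifiers are illustrative:

```python
def track_frame(prev_matched, cur_pts):
    """Inherit each current point's object correspondence from the nearest
    matched image point of the previous frame (the minimum of Eq. 8).

    prev_matched: list of ((x, y), object_id) from the previous frame
    cur_pts:      list of (x, y) light-spot centroids in the current frame
    """
    result = []
    for (xj, yj) in cur_pts:
        nearest = min(prev_matched,
                      key=lambda m: (m[0][0] - xj) ** 2 + (m[0][1] - yj) ** 2)
        result.append(((xj, yj), nearest[1]))
    return result
```

The nearest-neighbor rule works because the inter-frame motion is assumed small relative to the spacing between marker spots; when that assumption breaks, the precision-evaluation mechanism of step 3 catches the resulting error.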
The specific operation steps of the state switching mechanism based on the precision evaluation are as follows:
defining the coordinate of the image point obtained by tracking the frame as (x)i,yi) And reversely projecting the object space point according to the pose parameter of the previous frame to obtain an image point coordinateAnd (3) verifying an error calculation formula of matching precision:
where n is increasedσBecomes smaller, the error σ of each image pointiReduced overall error sigmaσAlso becomes smaller; the evaluation function can therefore correctly reflect the situation of the overall match error.
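A sketch of the evaluation and state switch; the RMS form of σ_σ is an assumption consistent with the text, and the 13-pixel threshold e_T is the value used in the experiments:

```python
import math

def overall_error(tracked, reprojected):
    """Overall matching error sigma (Eq. 9, assumed RMS form): root mean
    square distance between each tracked point (x_i, y_i) and the point
    back-projected with the previous frame's pose parameters."""
    n = len(tracked)
    total = sum((x - xh) ** 2 + (y - yh) ** 2
                for (x, y), (xh, yh) in zip(tracked, reprojected))
    return math.sqrt(total / n)

E_T = 13.0  # pixel threshold e_T reported in the experiments

def next_state(sigma, threshold=E_T):
    """Restart marker matching detection when the error exceeds e_T."""
    return "detect" if sigma > threshold else "track"
```

Each tracked frame calls `overall_error` and stays in the fast-tracking state only while `next_state` returns "track".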
The invention has the advantages and beneficial effects that:
(1) A single camera acquires the infrared sequence feature images of the moving target in a dynamic interference environment, and the indistinguishable infrared-LED marker points are extracted automatically. (2) A matching-detection state machine obtains the optimal image-object matching of the marker points using the back-projection gray-accumulation model. (3) An efficient fast-tracking state machine reduces the solving time and achieves real-time pose measurement. (4) A state-switching mechanism based on precision evaluation switches effectively between the matching-detection and fast-tracking states, guaranteeing efficient and accurate pose solving.
Drawings
Fig. 1 is a schematic diagram of monocular pose measurement.
Fig. 2 is a flowchart of a monocular pose measurement method.
FIG. 3 is a diagram of the recognition effect of the characteristics of the mark points of the infrared lamp; wherein, (a) visible light image, (b) infrared image, (c) binary image, (d) infrared lamp mark point identification image.
FIG. 4 is a backprojection gray scale and statistics plot; wherein (a) is a best match, (b) is an error match, and (c) is an error match.
FIG. 5 is a comparison graph of the matching detection efficiency of the fast tracking and back projection method.
Fig. 6 is a real-time statistical chart of measurement accuracy.
FIG. 7 is a real-time statistical graph of measurement elapsed time.
FIG. 8 is a rendering diagram of a three-dimensional pose of a moving object.
Detailed Description
The following detailed description of the invention refers to the accompanying drawings.
FIG. 1 is a schematic diagram of monocular pose measurement. An 8 × 8 chessboard is used as the moving target; 6 infrared LEDs with wavelength greater than 850 nm are mounted as marker points, and an infrared camera acquires sequence images of 640 × 480 pixels. The target is moved by hand at high speed within the camera's field of view and performs full 360° pan, deflection, and roll rotations relative to the camera.
FIG. 2 is a flowchart of the monocular pose measurement method. On the OTSU threshold-segmented images, the infrared-LED marker-point feature recognition algorithm filters out all noise and non-target interference according to the area and roundness contour features, correctly recognizing the infrared-LED marker points on the target. The recognition effect is shown in FIG. 3.
The marker-point image-object matching detection algorithm enumeratively matches the object points: 3 of the 6 marker points are blindly selected for image-object matching, generating P(6,3) = 6 × 5 × 4 = 120 matching combinations Ω_i. The combination list is shown in Table 1.
TABLE 1 matching combinations of image object space points
From the matching combinations Ω_i of Table 1, the pose models M_j, including the rotation matrix R and the translation matrix T, are computed by back intersection, as shown in Table 2.
TABLE 2 pose model Mj
The 6 object-space marker points are back-projected to their positions on the original image according to each blindly matched pose model and marked with crosshairs, and the pixel gray sum at the 6 positions is computed to verify the correctness of the image-object matching in the detection state; the matching with the maximum back-projection gray sum is the best match. As the back-projection gray-sum statistics of Table 3 show, the correct match Ω_2 scores far higher than the wrong matches, as shown in FIG. 4.
TABLE 3 Back projection Gray sum value Table under different matching
Statistics over a 400-frame image sequence show that enumeration matching in the back-projection image-object matching module is inefficient, taking 78.9 ms per frame on average, which makes fast-moving targets hard to track. The fast tracking module overcomes this low pose-solving efficiency, taking 15.4 ms per frame on average; it solves the pose of a fast-moving target in real time and runs smoothly. The efficiency comparison is shown in FIG. 5.
To avoid permanent errors caused by false matches during fast tracking, the precision-based state-switching mechanism computes the matching error in real time while tracking. When the error exceeded the threshold e_T = 13 pixels, at frames 195, 233, and 337, the program automatically restarted the matching-detection state machine and, after the pose parameters were solved correctly, switched back to the tracking state. The program thus tracks and runs fast while restarting matching detection in time to correct errors, guaranteeing accurate and stable pose solving. The real-time precision statistics are shown in FIG. 6, and the real-time timing statistics in FIG. 7.
The chessboard rotates, deflects, and rolls rapidly in three-dimensional space relative to the camera; within the range where the LEDs image clearly, the motion range in all three directions is 360°. In the OpenGL three-dimensional rendering, the chessboard is virtualized as an airplane model and the 6 LED lights as red circles. The three-dimensional attitude and position of the airplane model are recovered in real time from the accurately computed pose parameters of the sequence images; the effect is shown in FIG. 8.
The invention measures the three-dimensional pose of a moving target in real time and has a wide application range. With indistinguishable infrared-LED marker points, the back projection method finds the optimal image-object matching and the pose parameters are computed accurately; the fast tracking state is fast and efficient; the state-switching mechanism based on precision evaluation guarantees measurement stability; and the three-dimensional position and attitude of the target are displayed dynamically through real-time three-dimensional rendering. The method has broad application prospects.
Claims (4)
1. A real-time monocular moving-target pose measurement method based on the back projection method, wherein a single camera acquires infrared sequence feature images of the moving target under a dynamic interference environment, and the indistinguishable infrared-LED marker points are extracted; a matching-detection state machine is designed that obtains the optimal image-object matching of the marker points using a back-projection gray-accumulation model; an efficient fast-tracking state machine is designed that computes the pose parameters in real time; and a state-switching mechanism based on precision evaluation is designed that switches effectively between the matching-detection state and the fast-tracking state, guaranteeing both the real-time performance and the accuracy of the pose solution; the method comprises the following specific steps:
Step 1, marker-point image-object matching detection based on the back projection method
To find the optimal image-object matching of the marker points, a matching detection method based on the back projection method is proposed. The infrared-LED marker points are extracted and identified on the image, and the centroid coordinates of the light spots are computed; 3 image-space marker points are blindly selected and enumeratively matched against the known object-space marker points, generating many image-object point combinations as matching samples; back intersection computes a pose model for each sample, the object points are back-projected onto the image according to that model, and the sum of gray values at the back-projected image points is computed. A correct match guarantees a correct pose model, whose back-projected image points necessarily fall inside the light-spot regions, so the match with the maximum gray-value sum is selected as the best. The image-object matching of all marker points is then determined from the pose model of the best sample, back intersection is performed again, and the pose parameters are accurately computed;
Step 2, fast tracking state
Image-object matching detection of the marker points is inefficient and time-consuming. To save matching time, a fast tracking method is adopted, so that the system spends most of its running time in the tracking state, which is fast and efficient enough for real-time pose measurement. Because the image-object correspondence of the marker points changes very little between two adjacent frames of the sequence, the fast tracking method tracks the light spots of the current frame using the matched light-spot positions of the previous frame, quickly recovering the image-object matching of the current frame;
Step 3, state-switching mechanism based on precision evaluation
Tracking is fast, but an error in one frame would corrupt all subsequent tracking. To compensate for this instability, a state-switching mechanism based on precision evaluation is adopted. The pose solution of every frame in the tracking state is evaluated for overall precision; when the measurement error σ_σ exceeds the error threshold, the marker-point matching-detection state machine is restarted, avoiding the serious problem of one frame error propagating into all subsequent tracking;
Step 4, real-time recovery of the target's three-dimensional attitude and position
The three-dimensional attitude and position of the target are recovered in real time using OpenGL, according to the accurately solved pose parameters of the sequence images.
2. The real-time monocular moving-target pose measurement method based on the back projection method according to claim 1, wherein in step 1 the image-object matching of all marker points is determined from the pose model of the best sample, back intersection is performed, and the pose parameters are accurately computed in the following specific steps:
(1) Marker-point identification based on circle confidence
The camera acquires images of the infrared-LED marker points; the visible-light background below 850 nm is blocked, and OTSU threshold segmentation based on the maximum between-class variance is applied, so that the extracted marker points and the background have the minimum misclassification probability. The marker points are then identified by a circle-confidence method. First, the contour chains are extracted from the binary image; the two-dimensional (p+q)-order moment of an image f(x, y) over a contour chain with support Ω is defined as:

m_pq = Σ_{(x,y)∈Ω} x^p · y^q · f(x, y)    (1)

where (x, y) are the pixel coordinates and f(x, y) is the pixel gray value; when the orders p and q are both 0, m_00 is the area of the contour chain;
considering the camera focal length f, the pixel size ω, the photographing distance D, and the LED diameter D, the imaging area S of the LED halo on the image is expressed as:
considering the divergence of the LED halo, the area signature with adaptivity is defined as:
0.5S<m00<5S (3)
the circle confidence θ is defined as:
wherein L is the perimeter of the profile chain; the larger the circle confidence coefficient, the closer the shape is to the circle, the circle confidence coefficient of a perfect circle is 1, and the circle confidence coefficient of non-target noise and interference is mostly below 0.1; circle confidence recognition is defined as:
0.5<θ≤1 (5)
the area characteristic and the circle confidence coefficient are identified, noise and interference are filtered, and the mark points are correctly identified;
(2) Enumerative matching and pose-model computation
Three pairs are blindly selected from the n pairs of object-space marker points for enumerative matching, generating many matching combinations Ω_i; for each combination a pose model M_j is computed by the back intersection method;
(3) Screening the best match by back projection
To screen out the best match Ω* and its pose parameters M*, the correctness of each match is checked by back-projecting the object points. According to the matching Ω_i and its pose model M_j, the n object points are back-projected onto the image. If the pose model M_j is correct, all n object points are back-projected into the bright marker light-spot regions and the gray sum of the n back-projected points is large; hence the maximum total gray value of the back-projected points identifies the best match Ω* and its pose model M*;
The maximum back-projection gray-value sum is expressed as:

(Ω*, M*) = argmax over (Ω_i, M_j) of Σ_{k=1}^{n} g(u_k)    (6)

where Ω_i is a candidate image-object matching, M_j = (R, T) is the pose model obtained from Ω_i by back intersection, X_o are the object-space coordinates of a marker point, u is its back-projected image-space coordinate, g(u) is the gray value at image point u, and n is the number of marker points; when u falls outside the image, g(u) = 0. The image-point coordinates follow the pinhole projection:

(x_c, y_c, z_c)ᵀ = R·X_o + T,  u = f·x_c/z_c + c_x,  v = f·y_c/z_c + c_y    (7)

The initially obtained best pose model M* is computed by back intersection from only 3 object points, so its accuracy is low and a second, accurate pose computation is needed. Using the best image-object combination Ω* and its pose model M*, the n object-space marker points are back-projected again; the marker-light image point closest to each back-projected point is taken as the corresponding image point of that marker, completing the one-to-one matching of all n object points. A second back intersection over the n object points then yields more accurate pose parameters, the rotation parameter R and the displacement parameter T.
3. The real-time monocular moving-target pose measuring method based on the back-projection method as claimed in claim 1, wherein the fast tracking and matching method in step 2 tracks the light spots of the current frame using the matched light-spot positions of the previous frame image, and the steps for quickly obtaining the image/object matching relation of the current frame image are as follows:
Valid image points are defined as the centroids of the marker-light contour chains that pass feature identification. The previous frame has n valid points, the i-th image point being denoted P_i(x_i, y_i) with corresponding object point N_i; the current frame has m valid points, the j-th image point being denoted P_j(x_j, y_j). The P_i corresponding to P_j is found by the shortest distance. The shortest-distance expression:
MIN[DIST(P_j, P_i) | i = 0, 1, ..., n-1] (8)
Let i = i* attain the minimum distance; then the current-frame image point P_j(x_j, y_j) corresponds to the previous-frame image point P_i*(x_i*, y_i*), giving the image/object matching relation of the current frame: current-frame image point P_j(x_j, y_j) corresponds to object point N_i*. The fast image/object matching of the m valid marker points is completed in turn. This fast tracking method greatly improves measurement efficiency and enables real-time pose solution for high-speed moving targets.
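The nearest-neighbour tracking step above can be sketched as follows; a minimal Python sketch whose function and variable names are illustrative (the patent does not specify an implementation):

```python
import math

def track_points(prev_pts, prev_ids, cur_pts):
    """For each current-frame point P_j, find the previous-frame point P_i
    at minimum distance (Eq. (8)) and inherit its object-point label N_i."""
    matches = []
    for pj in cur_pts:
        # i* minimizing DIST(P_j, P_i) over i = 0, 1, ..., n-1
        i_star = min(range(len(prev_pts)),
                     key=lambda i: math.dist(pj, prev_pts[i]))
        matches.append(prev_ids[i_star])
    return matches
```

Because each new point simply inherits the object label of its closest predecessor, matching costs only O(m·n) distance checks per frame, with no full re-identification.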
4. The real-time monocular moving-target pose measuring method based on the back-projection method as claimed in claim 1, wherein the specific operation steps of the precision-evaluation-based state switching mechanism in step 3 are as follows:
Let (x_i, y_i) be the image-point coordinates obtained by tracking in the current frame, and let (x̂_i, ŷ_i) be the image-point coordinates obtained by back-projecting the object-space points with the previous frame's pose parameters. The error formula for verifying the matching precision is

σ = sqrt( [ Σ_{i=0}^{n-1} ( (x_i − x̂_i)² + (y_i − ŷ_i)² ) ] / n )

As n increases σ becomes smaller, and as the error σ_i = sqrt( (x_i − x̂_i)² + (y_i − ŷ_i)² ) of each image point is reduced, the overall error σ also becomes smaller; the error formula therefore correctly reflects the overall matching-error condition.
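A short Python sketch of the precision check and the resulting state switch. The RMS form of the error and the threshold value are assumptions for illustration; the patent's formula appears only as an image and no threshold value is stated.

```python
import math

def matching_error(tracked, backprojected):
    """Overall matching error sigma between tracked points (x_i, y_i) and
    the points (x-hat_i, y-hat_i) back-projected with the previous pose."""
    n = len(tracked)
    sq = sum((x - xh) ** 2 + (y - yh) ** 2
             for (x, y), (xh, yh) in zip(tracked, backprojected))
    return math.sqrt(sq / n)

def next_state(sigma, threshold=2.0):
    """Keep fast tracking while sigma stays below the (hypothetical)
    threshold; otherwise fall back to full re-matching."""
    return "track" if sigma < threshold else "rematch"
```

In use, a small σ keeps the pipeline in the fast tracking state, while a large σ signals lost or mismatched spots and triggers the slower full matching path.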
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711111369.XA CN107886541B (en) | 2017-11-13 | 2017-11-13 | Real-time monocular moving target pose measuring method based on back projection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107886541A CN107886541A (en) | 2018-04-06 |
CN107886541B true CN107886541B (en) | 2021-03-26 |
Family
ID=61780104
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711111369.XA Active CN107886541B (en) | 2017-11-13 | 2017-11-13 | Real-time monocular moving target pose measuring method based on back projection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107886541B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109540173A (en) * | 2018-09-17 | 2019-03-29 | Jiangxi Hongdu Aviation Industry Group Co., Ltd. | A vision-aided transfer alignment method |
CN109712172A (en) * | 2018-12-28 | 2019-05-03 | Harbin Institute of Technology | A pose measurement method combining initial pose measurement with target tracking |
CN112985411A (en) * | 2021-03-02 | 2021-06-18 | Nanjing University of Aeronautics and Astronautics | Air-bearing table target layout and attitude calculation method |
CN113570708A (en) * | 2021-07-30 | 2021-10-29 | Chongqing Special Equipment Inspection and Research Institute | Defect three-dimensional modeling method and device and computer-readable storage medium |
CN113989450B (en) * | 2021-10-27 | 2023-09-26 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Image processing method, device, electronic equipment and medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101907459A (en) * | 2010-07-12 | 2010-12-08 | Tsinghua University | Monocular-video-based real-time pose estimation and distance measurement method for three-dimensional rigid objects |
KR20110027460A (en) * | 2009-09-10 | 2011-03-16 | Pusan National University Industry-University Cooperation Foundation | A method for positioning and orienting a pallet based on monocular vision |
CN103530613A (en) * | 2013-10-15 | 2014-01-22 | Wuxi YSTen Technology Co., Ltd. | Target person hand gesture interaction method based on monocular video sequences |
CN104101331A (en) * | 2014-07-24 | 2014-10-15 | Hefei University of Technology | Method for measuring the pose of a non-cooperative target based on a full light-field camera |
CN104463108A (en) * | 2014-11-21 | 2015-03-25 | Shandong University | Monocular real-time target recognition and pose measurement method |
CN104463914A (en) * | 2014-12-25 | 2015-03-25 | Tianjin Polytechnic University | Improved Camshift target tracking method |
CN107063190A (en) * | 2017-03-02 | 2017-08-18 | Liaoning Technical University | High-precision direct pose estimation method for calibrated area-array camera images |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9418480B2 (en) * | 2012-10-02 | 2016-08-16 | Augmented Reality Lab LLC | Systems and methods for 3D pose estimation |
US10380758B2 (en) * | 2016-04-27 | 2019-08-13 | Mad Street Den, Inc. | Method for tracking subject head position from monocular-source image sequence |
2017
- 2017-11-13 CN CN201711111369.XA patent/CN107886541B/en active Active
Non-Patent Citations (4)
Title |
---|
Muriel Pressigout et al.; Hybrid tracking approach using optical flow and pose estimation; 2008 15th IEEE International Conference on Image Processing; 2008-10-15; pp. 2720-2723 * |
Zhang Shijie et al.; Monocular Vision-based Two-stage Iterative Algorithm for Relative Position and Attitude Estimation of Docking Spacecraft; Chinese Journal of Aeronautics; 2010-04-30; Vol. 23, No. 2; pp. 204-210 * |
Deng Fei et al.; Improved EPnP algorithm for pose estimation of spherical panoramic images; Acta Geodaetica et Cartographica Sinica; 2016-06-30; Vol. 45, No. 6; pp. 677-684 * |
Liu Yishu et al.; Contour moment invariants and their application in object shape recognition; Journal of Image and Graphics; 2004-03-31; Vol. 9, No. 3; pp. 308-313 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107886541B (en) | Real-time monocular moving target pose measuring method based on back projection method | |
CN108932475B (en) | Three-dimensional target identification system and method based on laser radar and monocular vision | |
Kim et al. | Deep learning based vehicle position and orientation estimation via inverse perspective mapping image | |
Ushani et al. | A learning approach for real-time temporal scene flow estimation from lidar data | |
CN103714541B (en) | Method for identifying and positioning building through mountain body contour area constraint | |
CN111598952B (en) | Multi-scale cooperative target design and online detection identification method and system | |
CN105373135A (en) | Method and system for guiding airplane docking and identifying airplane type based on machine vision | |
Zhang et al. | A practical robotic grasping method by using 6-D pose estimation with protective correction | |
CN108009494A | An intersection vehicle tracking method based on unmanned aerial vehicles | |
Zhang et al. | Robust method for measuring the position and orientation of drogue based on stereo vision | |
CN114639115B (en) | Human body key point and laser radar fused 3D pedestrian detection method | |
CN103839274B (en) | Extended target tracking method based on geometric proportion relation | |
CN108225273B (en) | Real-time runway detection method based on sensor priori knowledge | |
CN109917359A (en) | Robust vehicle distances estimation method based on vehicle-mounted monocular vision | |
CN111160231A (en) | Automatic driving environment road extraction method based on Mask R-CNN | |
CN114359865A (en) | Obstacle detection method and related device | |
CN105447431A (en) | Docking airplane tracking and positioning method and system based on machine vision | |
Min et al. | Coeb-slam: A robust vslam in dynamic environments combined object detection, epipolar geometry constraint, and blur filtering | |
Nitsch et al. | 3d ground point classification for automotive scenarios | |
Yu et al. | Visual simultaneous localization and mapping (SLAM) based on blurred image detection | |
Yang et al. | Method for building recognition from FLIR images | |
CN105631431B | Aircraft region-of-interest spectral measurement method guided by a visible-light target contour model | |
Dang et al. | Fast object hypotheses generation using 3D position and 3D motion | |
CN113436252A (en) | Pose identification method based on monocular vision | |
Xu et al. | Online stereovision calibration using on-road markings |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 300191 No. 428 Hongqi South Road, Nankai District, Tianjin Applicant after: Tianjin Survey and Design Institute Group Co., Ltd. Address before: 300191 No. 428 Hongqi South Road, Nankai District, Tianjin Applicant before: Tianjin Survey Institute |
GR01 | Patent grant | ||