CN112489091A - Full strapdown image seeker target tracking method based on direct-aiming template - Google Patents
- Publication number: CN112489091A
- Application: CN202011508984.6A
- Authority: CN (China)
- Prior art keywords: target, template, image, seeker, matching
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/251 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
- G06T7/238 — Analysis of motion using block-matching using non-full search, e.g. three-step search
- G06V10/751 — Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
- G06T2207/10004 — Still image; photographic image
Abstract
The invention discloses a full strapdown image seeker target tracking method based on a direct-view template. The method corrects the target template using the direct-view template and inertial navigation information, so that the template is consistent in view angle and scale with the target image acquired by the full strapdown image seeker, giving high tracking precision. By fusing inertial navigation information and image information, the target memory tracking method narrows the target search range, provides translation, scale and rotation invariance, filters out interference from similar targets, and handles target tracking when a nonlinearly maneuvering target briefly leaves the field of view or is occluded. The method adopts a hill-climbing search strategy and uses an improved mutual information entropy as the similarity measurement function; it has a small computational load and strong anti-interference capability, and improves target tracking robustness.
Description
Technical Field
The invention belongs to the field of image processing and machine vision and relates to a target tracking method for an air-to-ground full strapdown image seeker, in particular to a real-time tracking method, based on a direct-view template, for attacking a stationary ground target. It is suitable for air-to-ground guided weapons equipped with a full strapdown image seeker, an inertial measurement device and a pod, and, by the same principle, for all imaging guided weapons.
Background
The instantaneous optical field of view of a full strapdown image seeker is large, so image noise and the image processing workload are both large. Meanwhile, the photoelectric detector is rigidly fixed to the airframe, so carrier disturbance is large, and during tracking a nonlinearly maneuvering target easily leaves the field of view, deforms geometrically, or becomes occluded. The direct-view template is generated by reconnaissance equipment, such as a pod or a distributed-aperture photoelectric system, pointing directly at the target, and therefore differs considerably in view angle, scale, waveband and background from the real-time image acquired by the full strapdown image seeker. These objective factors make traditional direct-view-template-based target tracking methods difficult to adapt to a full strapdown image seeker. Traditional template matching tracking has only translation invariance, lacking rotation and scale invariance; meanwhile, common similarity measures (absolute difference, product correlation and the like) resist geometric distortion poorly and impose a large computational load on the search strategy, which limits their range of application.
Disclosure of Invention
In view of these problems, the invention provides a full strapdown image seeker target tracking method based on a direct-view template. The method solves the real-time tracking of a stationary ground target under view-angle differences, scale differences, large maneuvers, loss of field of view, occlusion and similar conditions between the direct-view template and the target image acquired by the full strapdown image seeker, enables the seeker to output target position information quickly and accurately, and supports precise target strike.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
a full strapdown image seeker target tracking method based on a direct-view template is realized by adopting the following steps:
(1) loading a direct-view template 1 and the corresponding longitude, latitude, altitude and attitude data into the full strapdown image seeker, and performing image correction with the loaded direct-view template 1 to obtain a target template 2 for tracking;
(2) matching target template 2 against the real-time image of the target and its neighborhood acquired by the full strapdown image seeker; if matching succeeds, the initial position of the target in the image is obtained; if it fails, the next real-time image is issued and matching continues;
(3) according to the initial target position obtained in step (2), selecting a certain area of the real-time image centered on that position as target template 2 for tracking, and correcting direct-view template 1 with inertial navigation information to obtain target template 2 and the real-time image; matching target template 2 against the real-time image using a hill-climbing search strategy and a normalized gray-scale mutual information entropy similarity measure; if the matching correlation value exceeds threshold 1, matching is considered successful, otherwise matching continues;
(4) if the correlation value also exceeds threshold 2, the template-replacement condition is met and target template 3 is obtained.
Further, obtaining the tracking target template 2 by image correction with direct-view template 1 in step (1) means acquiring direct-view template 1 together with the position and inertial navigation data of the corresponding moment through the pod equipment, and transforming it into a target template 2 consistent in view angle and scale with the target real-time image seen by the full strapdown image seeker; the transformation requires the position and attitude at the moment direct-view template 1 was acquired, the pod focal length, the focal length of the full strapdown image seeker, and the target position data; the elevation angle is corrected from its value at acquisition to its value at the position and attitude of the full strapdown seeker at the correction moment. The transformation process is as follows:
and, by the same reasoning:
where the quantities denote, respectively, the elevation angle and distance to the target at the acquisition moment of direct-view template 1, the elevation angle and distance to the target at the correction moment of target template 2, and the target image coordinates at those two moments.
Using formulas (5) and (6), direct-view template 1 can be corrected to yield a target template 2 consistent with the view angle and scale of the target real-time image.
Further, in step (2) target template 2 is matched against the real-time image of the target and its neighborhood acquired by the full strapdown image seeker; the real-time image region is obtained by predicting the target position with inertial navigation information. Given the position of the full strapdown image seeker at the current moment, the coordinates of the current position and of the target in the geocentric system are computed first:
where the constant is the radius of the earth ellipsoid.
The position vector is then transformed through the chain geocentric system → geographic system → missile-body system → camera system → image system; from the camera pinhole imaging model and the coordinate transformation from the geocentric system to the image coordinate system, the imaging coordinates of the target in the current full strapdown image seeker state are obtained:
where the matrices are the direction cosine matrix from the geocentric system to the geographic system and from the geographic system to the missile-body system, and the remaining quantity is the image center of the full strapdown image seeker.
Selecting the real-time image region in this way enables fast matching, effectively avoids false targets, and handles the brief loss of a nonlinearly maneuvering target from the field of view.
Further, correcting direct-view template 1 with inertial navigation information to obtain target template 2 and the target real-time image determines the scale and rotation parameters during tracking; according to the initial target position obtained in step (2), a certain area of the real-time image centered on that position is selected as target template 2 for tracking, and target template 2 is corrected through inertial navigation information; during tracking, because the distance between the full strapdown image seeker and the target is far larger than the depth of the target scene, the template scale correction factor is:
where the two distances are the seeker-to-target distance at the current frame and at the previous frame during matching, and the opening angles of the target in the seeker in the previous and current frames are regarded as equal;
where the two angles are the roll angle of the full strapdown image seeker at the current frame and at the previous frame during matching.
Further, the normalized gray-scale mutual information entropy matching method with the hill-climbing search strategy in step (3) adjusts the hill-climbing order by priority and reduces correlation computation by sampling; by placing initial hill-climbing points at equal intervals, a coarse-to-fine search is achieved without loss of precision. The specific process is as follows:
Let the real-time image have a given size in pixels and the template image a given size in pixels; the metric function of the improved normalized gray-scale mutual information entropy matching method is expressed as:
where the terms are: the edge entropy of the real-time sub-image S and the probability of occurrence of its gray levels; the edge entropy of the template image and the probability of occurrence of its gray levels; the joint entropy of the real-time sub-image and the template image and the joint probability of their gray-level occurrence; and the weights derived from the gradient magnitude and principal direction of the real-time image and the template image. The matching metric value at a position of the real-time sub-image reflects the degree of similarity between the template image and the real-time sub-image; the larger the value, the higher the similarity;
If the correlation value between the real-time image and the template image is larger than threshold 1, matching is considered successful; otherwise matching has failed.
Further, in step (4), when the correlation value is greater than threshold 2, the template is updated for matching the next frame; a complete update is adopted, that is, an image region of a certain size centered on the best matching position in the current frame becomes template 3 and is matched against the next frame;
where the template image is the region of the current frame, of a certain size, centered on the position of maximum mutual information entropy; the cycle of template matching → compute gray-scale mutual information entropy → obtain the current best matching position → update the target template → template matching is then repeated to achieve stable target tracking.
Compared with the prior art, the invention has the advantages that:
1. The method uses inertial navigation information to correct the direct-view template so that it is consistent in view angle and scale with the real-time image acquired by the full strapdown image seeker, which gives high tracking precision.
2. By fusing inertial navigation and image information, the target memory tracking method reduces the target search range, provides translation, scale and rotation invariance, filters out interference from similar targets, and handles target tracking when a nonlinearly maneuvering target briefly leaves the field of view or is occluded.
3. The method adopts a hill-climbing search strategy and uses an improved mutual information entropy as the similarity measurement function; it has a small computational load and strong anti-interference capability, and improves target tracking robustness.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the template correction principle of the present invention;
FIG. 3 is a schematic perspective projection diagram of the present invention;
FIG. 4 is a schematic diagram of an improved hill-climbing search strategy according to the present invention;
FIG. 5 is a schematic diagram of the improved normalized gray scale mutual information entropy matching principle of the present invention;
FIG. 6 is a schematic diagram of the principle of the scale correction factor of the present invention.
Detailed Description
The following describes in detail a specific embodiment of the present invention with reference to the drawings.
As shown in fig. 1, a full strapdown image seeker target tracking method based on a direct-view template includes the following specific steps:
(1) The full strapdown image seeker is loaded with direct-view template 1 and the corresponding longitude, latitude, altitude and attitude data, and performs image correction with the loaded direct-view template 1 to obtain a target template 2 for tracking.
(2) Target template 2 is matched against the real-time image (hereinafter, the real-time image) of the target and its neighborhood acquired by the full strapdown image seeker; if matching succeeds, the initial position of the target in the image is obtained; if it fails, the next real-time image is issued and matching continues.
(3) According to the initial target position obtained in step (2), a certain area of the real-time image centered on that position is selected as target template 2 for tracking, and direct-view template 1 is corrected with inertial navigation information to obtain target template 2 and the real-time image; target template 2 is matched against the real-time image using a hill-climbing search strategy and a normalized gray-scale mutual information entropy similarity measure; if the matching correlation value exceeds threshold 1, matching is considered successful, otherwise matching continues.
(4) If the correlation value also exceeds threshold 2, the template-replacement condition is met and target template 3 is obtained.
In step (1): direct-view template 1, combined with inertial navigation information, is image-corrected into target template 2. Direct-view template 1 and the position and inertial navigation data of the corresponding moment are obtained through the pod equipment, and the template is transformed into a target template 2 consistent in view angle and scale with the real-time image seen by the full strapdown image seeker.
The transformation requires the position and attitude at the moment direct-view template 1 was acquired, the pod focal length, the focal length of the full strapdown image seeker, and the target position data; the elevation angle is corrected from its value at acquisition to its value at the position and attitude of the full strapdown seeker at the correction moment. A schematic diagram of the transformation is shown in fig. 2;
and from fig. 2:
and, by the same reasoning:
where the quantities denote, respectively, the elevation angle and distance to the target at the acquisition moment of direct-view template 1, the elevation angle and distance to the target at the correction moment of target template 2, and the target image coordinates at those two moments.
Using formulas (5) and (6), direct-view template 1 can be corrected to yield a target template 2 consistent with the view angle and scale of the real-time image.
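The original formulas (5) and (6) are not reproduced in this text, so the geometry can only be sketched. Under a pinhole model the target's apparent size is proportional to focal length over slant range; a minimal, hypothetical correction step (function names, the local up-axis frame, and the sign convention are assumptions, not the patent's) is:

```python
import math

def elevation_and_range(obs, tgt):
    """Look-down elevation angle (degrees) and slant range from an observer
    to the target, in a local frame whose z axis points up."""
    dx, dy, dz = (t - o for o, t in zip(obs, tgt))
    ground = math.hypot(dx, dy)
    slant = math.sqrt(dx * dx + dy * dy + dz * dz)
    elev = math.degrees(math.atan2(-dz, ground))  # positive when looking down
    return elev, slant

def template_scale(f_pod, r_pod, f_seeker, r_seeker):
    """Scale to resample the direct-view template so the target's apparent
    size matches the seeker image (pinhole model: size proportional to f/R)."""
    return (f_seeker / f_pod) * (r_pod / r_seeker)
```

With the pod twice as far from the target as the seeker and equal focal lengths, the template must be enlarged by a factor of 2 before matching.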
In step (2), target template 2 is matched against the real-time image (hereinafter, the real-time image) of the target and its neighborhood acquired by the full strapdown image seeker; the real-time image region is obtained by predicting the target position with inertial navigation information.
Given the position of the full strapdown image seeker at the current moment, the coordinates of the current position and of the target in the geocentric system are computed first:
where the constant is the radius of the earth ellipsoid.
The position vector is then transformed through the chain geocentric system → geographic system → missile-body system → camera system → image system; from the camera pinhole imaging model and the coordinate transformation from the geocentric system to the image coordinate system, shown in fig. 3, the imaging coordinates of the target in the current missile state are obtained:
where the matrices are the direction cosine matrix from the geocentric system to the geographic system and from the geographic system to the missile-body system, and the remaining quantity is the image center of the full strapdown image seeker.
Predicting the target position from inertial navigation information and selecting the real-time image region in this way enables fast matching, effectively avoids false targets, and handles the brief loss of a nonlinearly maneuvering target from the field of view.
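The two ends of the transform chain can be illustrated with a spherical-earth simplification (the patent uses the ellipsoid radius and full direction-cosine matrices, which are not reproduced in this text; the function names and the spherical approximation are assumptions):

```python
import numpy as np

R_E = 6378137.0  # mean earth radius; the patent uses the earth-ellipsoid radius

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Geocentric (ECEF) coordinates of a point given latitude, longitude and
    altitude, on a spherical earth for illustration only."""
    lat = np.radians(lat_deg)
    lon = np.radians(lon_deg)
    r = R_E + h
    return np.array([r * np.cos(lat) * np.cos(lon),
                     r * np.cos(lat) * np.sin(lon),
                     r * np.sin(lat)])

def pinhole_project(p_cam, f, cx, cy):
    """Pinhole projection of a camera-frame point (z along the boresight) to
    image coordinates; (cx, cy) is the seeker image center."""
    x, y, z = p_cam
    return f * x / z + cx, f * y / z + cy
```

The intermediate geographic-system and missile-body rotations would be applied between these two functions as direction-cosine-matrix multiplications.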
Target template 2, consistent in view angle and scale with the full strapdown image seeker, is obtained in step (1), and the real-time image in step (2); in step (3) the target position is then obtained by applying the normalized gray-scale mutual information entropy matching method based on the hill-climbing search strategy to target template 2 and the real-time image.
In the invention, the scale and rotation parameters during tracking are determined by matching the real-time image obtained in step (2) against target template 2: both are corrected with inertial navigation information, and the target position is obtained by hill-climbing search with normalized mutual information entropy matching.
Correcting target template 2 through inertial navigation information gives the normalized mutual information entropy matching method scale and rotation invariance. During tracking, because the distance between the full strapdown image seeker and the target is far larger than the depth of the target scene, the template scale correction factor is:
where the two distances are the seeker-to-target distance at the current frame and at the previous frame during matching, and the field angles of the target in the seeker in the previous and current (adjacent) frames are regarded as equal; a schematic diagram of the scale-invariant correction principle is shown in fig. 6.
where the two angles are the roll angle of the full strapdown image seeker at the current frame and at the previous frame during matching.
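The correction-factor formulas themselves are not reproduced in this text; one plausible reading (names and sign conventions are assumptions) is that the template from the previously matched frame is scaled by the ratio of slant ranges and counter-rotated by the change in seeker roll:

```python
def scale_correction(r_prev, r_cur):
    """Template scale factor between matched frames: as the seeker closes in
    (r_cur < r_prev) the target grows, so the template is enlarged.
    Adjacent-frame opening angles are treated as equal, as in the patent."""
    return r_prev / r_cur

def roll_correction(gamma_cur_deg, gamma_prev_deg):
    """Rotation (degrees) to apply to the template so that it matches the
    seeker roll attitude of the current frame."""
    return gamma_cur_deg - gamma_prev_deg
```

In practice both corrections would be combined into one similarity warp (scale plus rotation about the template center) before matching.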
To meet real-time and precision requirements, an improved hill-climbing search strategy and a simplified correlation measure are adopted. The hill-climbing search strategy, shown in fig. 4, is improved mainly by adjusting the hill-climbing order by priority and by reducing correlation computation through sampling. Adjusting the hill-climbing order by priority removes many unnecessary search computations while preserving matching accuracy, for example when several climbers search for the point of maximum correlation value (gray-scale mutual information entropy) or for local maxima; setting up a correlation matrix table and a search-position matrix table during the search further improves efficiency and avoids redundant computation. The search matching is shown in fig. 5 and realizes a coarse-to-fine search process.
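A minimal sketch of a cached greedy hill-climb over a 2-D correlation surface, with equally spaced initial climbing points and step halving for the coarse-to-fine refinement (the patent's priority ordering and its correlation/search-position tables are reduced here to a simple memo dictionary; all names are assumptions):

```python
def hill_climb(score, start, shape, step=4):
    """Greedy ascent from `start` over an H x W grid, caching scores so no
    position is evaluated twice; the step is halved for coarse-to-fine search."""
    cache = {}

    def s(p):
        if p not in cache:
            cache[p] = score(p)
        return cache[p]

    cur = start
    while True:
        r, c = cur
        neighbors = [(r + dr * step, c + dc * step)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr or dc)
                     and 0 <= r + dr * step < shape[0]
                     and 0 <= c + dc * step < shape[1]]
        best = max(neighbors, key=s, default=cur)
        if s(best) <= s(cur):
            if step > 1:          # no coarse improvement: refine the step
                step //= 2
                continue
            return cur, s(cur)
        cur = best

def multi_start_hill_climb(score, shape, spacing=4):
    """Equally spaced initial hill-climbing points; keep the best summit."""
    starts = [(r, c) for r in range(0, shape[0], spacing)
                     for c in range(0, shape[1], spacing)]
    return max((hill_climb(score, p, shape) for p in starts),
               key=lambda t: t[1])
```

On a unimodal correlation surface every climber reaches the global maximum; on a multimodal one the multiple starts reduce the risk of stopping at a local maximum, at the cost of extra (cached) evaluations.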
Let the real-time image have a given size in pixels and the template image a given size in pixels; the metric function of the improved normalized gray-scale mutual information entropy matching method is expressed as:
where the terms are: the edge entropy of the real-time sub-image S and the probability of occurrence of its gray levels; the edge entropy of the template image and the probability of occurrence of its gray levels; the joint entropy of the real-time sub-image and the template image and the joint probability of their gray-level occurrence; and the weights derived from the gradient magnitude and principal direction of the real-time image and the template image. The matching metric value at a position of the real-time sub-image reflects the degree of similarity between the template image and the real-time sub-image; the larger the value, the higher the similarity.
If the correlation value between the real-time image and the template image is larger than threshold 1, matching is considered successful; otherwise matching has failed.
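The patent's metric adds gradient-magnitude and principal-direction weights that are not reproduced in this text; the unweighted normalized mutual information it builds on can be sketched as follows (the bin count and the exact normalization form are assumptions):

```python
import numpy as np

def normalized_mutual_information(sub, tpl, bins=32):
    """NMI(S, T) = (H(S) + H(T)) / H(S, T), computed from a joint gray-level
    histogram; larger values mean higher similarity (2.0 for a perfect
    one-to-one gray-level correspondence)."""
    joint, _, _ = np.histogram2d(sub.ravel(), tpl.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)                 # marginal of the real-time sub-image
    py = pxy.sum(axis=0)                 # marginal of the template image

    def entropy(p):
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    return (entropy(px) + entropy(py)) / entropy(pxy)
```

Because the metric depends only on the joint gray-level statistics, not on absolute intensities, it tolerates the waveband and contrast differences between pod template and seeker image better than absolute-difference measures.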
In step (4), if the correlation value is greater than threshold 2, the template is updated for matching the next frame; a complete update is generally adopted, that is, an image region of a certain size centered on the best matching position in the current frame becomes template 3 and is matched against the next frame.
where the template image is the region of the current frame, of a certain size, centered on the position of maximum mutual information entropy. The cycle of 'template matching → compute gray-scale mutual information entropy → obtain the current best matching position → update the target template → template matching' is then repeated to achieve stable target tracking.
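The tracking cycle can be sketched end-to-end with a brute-force matcher standing in for the hill-climbing mutual-information search (the threshold values, the negated-SSD stand-in score, and all names are assumptions, not the patent's):

```python
import numpy as np

def best_match(frame, template):
    """Exhaustive scan (stand-in for hill-climbing + mutual information):
    returns the top-left corner and a negated-SSD score (0 = perfect match)."""
    h, w = template.shape
    best_pos, best_score = (0, 0), -np.inf
    for r in range(frame.shape[0] - h + 1):
        for c in range(frame.shape[1] - w + 1):
            score = -float(np.sum((frame[r:r+h, c:c+w] - template) ** 2))
            if score > best_score:
                best_pos, best_score = (r, c), score
    return best_pos, best_score

def track(frames, template, tau1, tau2):
    """Per frame: match; score > tau1 means success; score > tau2 also meets
    the change-template condition, so the matched patch becomes template 3
    (complete update) for the next frame."""
    h, w = template.shape
    positions = []
    for frame in frames:
        (r, c), score = best_match(frame, template)
        if score <= tau1:
            positions.append(None)        # matching failed; keep old template
            continue
        positions.append((r, c))
        if score > tau2:                  # complete (full-patch) update
            template = frame[r:r+h, c:c+w].copy()
    return positions
```

The two thresholds decouple "good enough to localize" from "good enough to trust as the new template", which limits template drift when the match is marginal.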
Claims (6)
1. A full strapdown image seeker target tracking method based on a direct-view template, characterized by comprising the following specific implementation steps:
(1) loading a direct-view template 1 and the corresponding longitude, latitude, altitude and attitude data into the full strapdown image seeker, and performing image correction with the loaded direct-view template 1 to obtain a target template 2 for tracking;
(2) matching target template 2 against the real-time image of the target and its neighborhood acquired by the full strapdown image seeker; if matching succeeds, the initial position of the target in the image is obtained; if it fails, the next real-time image is issued and matching continues;
(3) according to the initial target position obtained in step (2), selecting a certain area of the real-time image centered on that position as target template 2 for tracking, and correcting direct-view template 1 with inertial navigation information to obtain target template 2 and the real-time image; matching target template 2 against the real-time image using a hill-climbing search strategy and a normalized gray-scale mutual information entropy similarity measure; if the matching correlation value exceeds threshold 1, matching is considered successful, otherwise matching continues;
(4) if the correlation value also exceeds threshold 2, the template-replacement condition is met and target template 3 is obtained.
2. The full strapdown image seeker target tracking method based on the direct-view template as claimed in claim 1, wherein obtaining the tracking target template 2 after image correction with direct-view template 1 in step (1) means acquiring direct-view template 1 and the position and inertial navigation data of the corresponding moment through pod equipment and transforming it into a target template 2 consistent in view angle and scale with the target real-time image seen by the full strapdown image seeker; the transformation requires the position and attitude at the moment direct-view template 1 was acquired, the pod focal length, the focal length of the full strapdown image seeker, and the target position data; the elevation angle is corrected from its value at acquisition to its value at the position and attitude of the full strapdown seeker at the correction moment; the transformation process is as follows:
and, by the same reasoning:
where the quantities denote, respectively, the elevation angle and distance to the target at the acquisition moment of direct-view template 1, the elevation angle and distance to the target at the correction moment of target template 2, and the target image coordinates at those two moments;
using formulas (5) and (6), direct-view template 1 can be corrected to yield a target template 2 consistent with the view angle and scale of the target real-time image.
3. The full strapdown image seeker target tracking method based on the direct-view template as claimed in claim 1, wherein target template 2 in step (2) is matched against the real-time image of the target and its neighborhood acquired by the full strapdown image seeker; the real-time image region is obtained by predicting the target position with inertial navigation information; given the position of the full strapdown image seeker at the current moment, the coordinates of the current position and of the target in the geocentric system are computed first:
The position vector is then transformed through the chain geocentric system → geographic system → missile-body system → camera system → image system; from the camera pinhole imaging model and the coordinate transformation from the geocentric system to the image coordinate system, the imaging coordinates of the target in the current full strapdown image seeker state are obtained:
4. The full strapdown image seeker target tracking method based on the direct-view template as claimed in claim 1, wherein correcting direct-view template 1 with inertial navigation information in step (3) to obtain target template 2 and the real-time image determines the scale and rotation parameters during tracking; according to the initial target position obtained in step (2), a certain area of the real-time image centered on that position is selected as target template 2 for tracking, and target template 2 is corrected through inertial navigation information; during tracking, because the distance between the full strapdown image seeker and the target is far larger than the depth of the target scene, the template scale correction factor is:
in the formula (I), the compound is shown in the specification,in order to match the distance between the seeker and the target in the current frame full strapdown image,in order to match the distance between the seeker and the target in the previous full strapdown image,andthe opening angles of a previous frame and a next frame of a target in the full strapdown image seeker are regarded as equal;
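Under the equal-opening-angle assumption, this correction amounts to resampling the previous-frame template by the distance ratio. A minimal numpy sketch (nearest-neighbour resampling is chosen for brevity; a practical implementation would interpolate):

```python
import numpy as np

def scale_template(template, r_prev, r_curr):
    """Resample the previous-frame template by k = r_prev / r_curr, valid
    when the seeker-to-target range far exceeds the target depth."""
    k = r_prev / r_curr
    h, w = template.shape[:2]
    new_h, new_w = max(1, round(h * k)), max(1, round(w * k))
    # Nearest-neighbour index maps back into the original template
    rows = np.clip((np.arange(new_h) / k).astype(int), 0, h - 1)
    cols = np.clip((np.arange(new_w) / k).astype(int), 0, w - 1)
    return template[np.ix_(rows, cols)]
```

As the range halves, k = 2 and the template doubles in each dimension, matching the target's growing apparent size in the image.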
5. The full-strapdown image seeker target tracking method based on the direct-view template as claimed in claim 1, wherein the normalized gray-scale mutual-information-entropy matching method with a hill-climbing search strategy in step (3) adjusts the hill-climbing order according to priority and reduces the correlation computation by sampling; under the condition that precision is guaranteed, initial hill-climbing points are set at equal intervals to realize a coarse-to-fine search. The specific process is as follows:
let the real-time image S be of size M × N (pixels) and the template image T be of size m × n (pixels); the metric function of the improved normalized gray-scale mutual-information-entropy matching method is expressed as:

NMI(i, j) = w · [H(S^{i,j}) + H(T)] / H(S^{i,j}, T)

where H(S^{i,j}) is the marginal (edge) entropy of the real-time sub-image S^{i,j} whose top-left corner lies at (i, j), computed from p_S(s), the occurrence probability of gray level s in the sub-image; H(T) is the marginal entropy of the template image, computed from p_T(t), the occurrence probability of gray level t in the template; H(S^{i,j}, T) is the joint entropy of the real-time sub-image and the template image, computed from their joint gray-level probability p_{ST}(s, t); w is the weighting value formed from the gradient amplitudes and principal directions of the real-time and template images; NMI(i, j) is the match-metric value at sub-image position (i, j), with 1 ≤ i ≤ M − m + 1 and 1 ≤ j ≤ N − n + 1, and reflects the degree of similarity between the template image and the real-time sub-image: the larger the value, the higher the similarity;
if the correlation value between the real-time image and the template image is greater than threshold 1, the matching is considered successful; otherwise, the matching fails.
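The metric of claim 5 can be sketched with numpy histograms. This is an illustrative reading, not the patent's exact implementation: the gradient weighting w is set to 1, entropies use 32-bin histograms, and an exhaustive scan stands in for the hill-climbing search from equally spaced seed points.

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) from a histogram of counts."""
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def nmi(sub, tmpl, bins=32):
    """(H(S) + H(T)) / H(S, T) for two equal-sized, non-constant gray images;
    the gradient weight w of the claim is omitted (w = 1) for brevity."""
    hs, _ = np.histogram(sub, bins=bins, range=(0, 256))
    ht, _ = np.histogram(tmpl, bins=bins, range=(0, 256))
    hst, _, _ = np.histogram2d(sub.ravel(), tmpl.ravel(),
                               bins=bins, range=[[0, 256], [0, 256]])
    return (entropy(hs) + entropy(ht)) / entropy(hst)

def match(realtime, tmpl):
    """Slide the template over the real-time image; return best (i, j), score."""
    M, N = realtime.shape
    m, n = tmpl.shape
    best, best_pos = -np.inf, (0, 0)
    for i in range(M - m + 1):
        for j in range(N - n + 1):
            v = nmi(realtime[i:i + m, j:j + n], tmpl)
            if v > best:
                best, best_pos = v, (i, j)
    return best_pos, best
```

A perfect match gives the metric's maximum value of 2; in the claim, matching succeeds only when the score exceeds threshold 1.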
6. The full-strapdown image seeker target tracking method based on the direct-view template as claimed in claim 1, wherein in step (4), when the correlation value is greater than threshold 2, the template is updated for the next-frame matching. A complete-update mode is adopted: the image region of a certain size centered on the best matching position in the current frame is taken as template 3 and matched against the next-frame image;
here template 3 is the region of the current frame centered on the position of maximum mutual-information entropy. The cycle of template matching, computing the gray-scale mutual-information entropy, obtaining the current best matching position, and updating the target template is then repeated, realizing stable tracking of the target.
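The update-and-repeat cycle of claim 6 can be sketched as follows. Illustrative only: a negative sum-of-squared-differences score stands in for the patent's mutual-information-entropy metric, and `update_threshold` is a hypothetical placeholder for "threshold 2".

```python
import numpy as np

def best_match(frame, tmpl):
    """Exhaustive scan scored by negative SSD (a simple stand-in for the
    mutual-information-entropy metric)."""
    M, N = frame.shape
    m, n = tmpl.shape
    best, pos = -np.inf, (0, 0)
    for i in range(M - m + 1):
        for j in range(N - n + 1):
            d = frame[i:i + m, j:j + n].astype(float) - tmpl
            s = -float(np.sum(d * d))
            if s > best:
                best, pos = s, (i, j)
    return pos, best

def track(frames, init_template, update_threshold=-1e3):
    """Claim-6 loop: match each frame, then fully replace the template with
    the best-matching region whenever the score clears 'threshold 2'."""
    tmpl = init_template.astype(float)
    positions = []
    for frame in frames:
        (i, j), score = best_match(frame, tmpl)
        positions.append((i, j))
        if score > update_threshold:       # complete template update
            m, n = tmpl.shape
            tmpl = frame[i:i + m, j:j + n].astype(float)
    return positions
```

The full replacement keeps the template current as the target's appearance drifts, at the cost of possible slow drift of the tracked point, which is why the update is gated on a confidence threshold.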
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011508984.6A CN112489091B (en) | 2020-12-18 | 2020-12-18 | Full strapdown image seeker target tracking method based on direct-aiming template |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112489091A true CN112489091A (en) | 2021-03-12 |
CN112489091B CN112489091B (en) | 2022-08-12 |
Family
ID=74914838
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011508984.6A Active CN112489091B (en) | 2020-12-18 | 2020-12-18 | Full strapdown image seeker target tracking method based on direct-aiming template |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112489091B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN113395448A (en) * | 2021-06-15 | 2021-09-14 | 西安视成航空科技有限公司 | Airborne pod image searching, tracking and processing system
CN113554131A (en) * | 2021-09-22 | 2021-10-26 | 四川大学华西医院 | Medical image processing and analyzing method, computer device, system and storage medium
CN114280978A (en) * | 2021-11-29 | 2022-04-05 | 中国航空工业集团公司洛阳电光设备研究所 | Tracking decoupling control method for photoelectric pod
CN114280978B (en) * | 2021-11-29 | 2024-03-15 | 中国航空工业集团公司洛阳电光设备研究所 | Tracking decoupling control method for photoelectric pod
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5146228A (en) * | 1990-01-24 | 1992-09-08 | The Johns Hopkins University | Coherent correlation addition for increasing match information in scene matching navigation systems |
US20160209846A1 (en) * | 2015-01-19 | 2016-07-21 | The Regents Of The University Of Michigan | Visual Localization Within LIDAR Maps |
CN107945215A (en) * | 2017-12-14 | 2018-04-20 | 湖南华南光电(集团)有限责任公司 | High-precision infrared image tracker and a kind of target fast tracking method |
CN108717712A (en) * | 2018-05-29 | 2018-10-30 | 东北大学 | A kind of vision inertial navigation SLAM methods assumed based on ground level |
Non-Patent Citations (1)
Title |
---|
YAN Yuzhuang et al., "Inertial navigation information aided matching template correction method", Journal of National University of Defense Technology * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112489091B (en) | Full strapdown image seeker target tracking method based on direct-aiming template | |
CN105222788B (en) | The automatic correcting method of the matched aircraft Route Offset error of feature based | |
CN108534782B (en) | Binocular vision system-based landmark map vehicle instant positioning method | |
CN103047985B (en) | A kind of method for rapidly positioning of extraterrestrial target | |
CN104732518B (en) | A kind of PTAM improved methods based on intelligent robot terrain surface specifications | |
CN106295512B (en) | Vision data base construction method and indoor orientation method in more correction lines room based on mark | |
CN110992263B (en) | Image stitching method and system | |
CN107560603B (en) | Unmanned aerial vehicle oblique photography measurement system and measurement method | |
CN109387192B (en) | Indoor and outdoor continuous positioning method and device | |
CN109540113B (en) | Total station and star map identification method thereof | |
CN111238540A (en) | Lopa gamma first camera-satellite sensitive installation calibration method based on fixed star shooting | |
CN103996027B (en) | Space-based space target recognizing method | |
CN112163995B (en) | Splicing generation method and device for oversized aerial strip images | |
WO2020181506A1 (en) | Image processing method, apparatus and system | |
KR20200084972A (en) | Method for acquisition of hyperspectral image using an unmanned aerial vehicle | |
CN103428408A (en) | Inter-frame image stabilizing method | |
CN110889353B (en) | Space target identification method based on primary focus large-visual-field photoelectric telescope | |
CN113610896B (en) | Method and system for measuring target advance quantity in simple fire control sighting device | |
CN114860196A (en) | Telescope main light path guide star device and calculation method of guide star offset | |
CN105403886B (en) | A kind of carried SAR scaler picture position extraction method | |
CN109883400B (en) | Automatic target detection and space positioning method for fixed station based on YOLO-SITCOL | |
CN116977902B (en) | Target tracking method and system for on-board photoelectric stabilized platform of coastal defense | |
CN117036666B (en) | Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching | |
CN116091804B (en) | Star suppression method based on adjacent frame configuration matching | |
CN110986916A (en) | Indoor positioning method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |