CN112489091A - Full strapdown image seeker target tracking method based on direct-aiming template - Google Patents

Info

Publication number: CN112489091A
Application number: CN202011508984.6A
Authority: CN (China)
Prior art keywords: target, template, image, seeker, matching
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN112489091B
Inventors: 周伟, 卢鑫, 陆叶, 颜有翔, 周波, 李路, 李显彦, 胡军, 朱磊, 马帅宝
Assignee (current and original): Hunan Huanan Optoelectronic Group Co ltd
Priority: CN202011508984.6A, filed with and granted to Hunan Huanan Optoelectronic Group Co ltd; published as CN112489091A, granted as CN112489091B

Classifications

    • G06T7/251 - Image analysis; analysis of motion using feature-based methods involving models (e.g. tracking of corners or segments)
    • G06T7/238 - Analysis of motion using block-matching with non-full search (e.g. three-step search)
    • G06V10/751 - Template matching: comparing pixel values, or feature values having positional relevance
    • G06T2207/10004 - Image acquisition modality: still image; photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a full strapdown image seeker target tracking method based on a direct-view template. The method corrects the target template using the direct-view template and inertial navigation information so that its view angle and scale are consistent with the target image acquired by the full strapdown image seeker, which gives high tracking precision. By fusing inertial navigation information and image information, the target memory tracking method reduces the target tracking range and provides translation, scale, and rotation invariance; it filters out the interference of similar targets and handles target tracking when a nonlinearly manoeuvring target briefly leaves the field of view or is occluded. The method adopts a hill-climbing search strategy with an improved mutual-information entropy as the similarity measurement function, requires little computation, has strong anti-interference capability, and improves the robustness of target tracking.

Description

Full strapdown image seeker target tracking method based on direct-aiming template
Technical Field
The invention belongs to the field of image processing and machine vision and relates to a target tracking method for an air-to-ground full strapdown image seeker, in particular a real-time tracking method for attacking a stationary ground target using a direct-view template. It is suitable for air-to-ground guided weapons equipped with a full strapdown image seeker, an inertial measurement device, and a pod, and, by the same principle, for all imaging guided weapons.
Background
The instantaneous optical field of view of a full strapdown image seeker is large, so the image signal noise and the image-processing computation are large. Meanwhile, because the photoelectric detector is fixed to the reference seat, carrier disturbance is large, and during tracking a nonlinearly manoeuvring target easily leaves the field of view, deforms geometrically, or becomes occluded. The direct-view template is generated by pointing reconnaissance equipment, such as a pod distributed-aperture photoelectric system, directly at the target, and differs considerably in view angle, scale, waveband, and background from the real-time image acquired by the full strapdown image seeker. These objective factors make the traditional target tracking method based on a direct-view template difficult to adapt to the full strapdown image seeker: traditional template-matching tracking has only translation invariance, not rotation or scale invariance, while common similarity measures (absolute difference, product correlation, and the like) resist geometric distortion poorly and impose a large computational load on the search strategy, limiting their range of application.
Disclosure of Invention
In view of these problems, the invention aims to provide a full strapdown image seeker target tracking method based on a direct-view template that can track a stationary ground target in real time despite view-angle and scale differences between the direct-view template and the target image acquired by the full strapdown image seeker, large manoeuvres, loss of the target from the field of view, occlusion, and similar conditions, so that the full strapdown image seeker can quickly and accurately output target position information and support accurate target striking.
In order to achieve the purpose, the technical scheme adopted by the invention is as follows:
A full strapdown image seeker target tracking method based on a direct-view template is realized by the following steps:
(1) loading a direct-view template 1 and the corresponding longitude, latitude, altitude, and attitude data into the full strapdown image seeker, and performing image correction with the loaded direct-view template 1 to obtain a target template 2 for tracking;
(2) matching the target template 2 with the real-time image of the target and its neighbourhood acquired by the full strapdown image seeker; a successful match yields the initial position of the target in the image, and if matching fails, the next real-time image is issued and matching continues;
(3) taking the initial target position obtained in step (2) as the centre, selecting a region of the real-time image as the target template 2 for tracking, and correcting the direct-view template 1 with inertial navigation information to obtain the target template 2 and the real-time image; the target template 2 is matched with the real-time image using a hill-climbing search strategy and a normalized grey-level mutual-information-entropy similarity measure; if the matching correlation value exceeds threshold 1, the match is considered successful, otherwise matching continues;
(4) if the correlation value also exceeds threshold 2, the template-change condition is met and a target template 3 is obtained.
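The four steps above amount to a capture-match-retemplate loop. A minimal sketch follows; the helper names, threshold values, and the `correlate` callback are purely illustrative and do not appear in the patent:

```python
# Hedged sketch of the four-step tracking flow described above.
# All names and threshold values are illustrative placeholders.

THRESHOLD_1 = 0.6   # acceptance threshold for a match (assumed value)
THRESHOLD_2 = 0.8   # stricter threshold that triggers a template update

def track(frames, template, correlate):
    """Run the match -> accept -> (re)template loop over real-time frames.

    `correlate(template, frame)` returns a (score, position) pair.
    Returns the last accepted target position, or None if none matched.
    """
    position = None
    for frame in frames:
        score, pos = correlate(template, frame)
        if score <= THRESHOLD_1:
            continue                 # steps (2)/(3): request the next frame
        position = pos               # match accepted
        if score > THRESHOLD_2:      # step (4): template-change condition met
            template = frame         # simplified: whole frame as new template
    return position
```

In the patent the re-templating uses a region around the best match rather than the whole frame; the loop structure is the point here.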
Further, the target template 2 for tracking is obtained in step (1) by image correction using the direct-view template 1. The direct-view template 1 and the position and inertial navigation data of the corresponding moment are obtained from the pod equipment, and are transformed into a target template 2 whose view angle and scale are consistent with the real-time image of the full strapdown image seeker. The transformation requires the position and attitude of the pod at the moment template 1 was acquired, the pod focal length, the focal length of the full strapdown image seeker, the target position data, the position and attitude of the full strapdown seeker at the correction moment, and the elevation angle before correction, which is corrected to the elevation angle after correction.

[Equations (1)-(6), published only as images, express the corrected elevation angle and target distance, and the target image coordinates of template 2, in terms of the elevation angle, target distance, and target image coordinates at the moment template 1 was acquired.]

By equations (5) and (6), the target template 2, consistent in view angle and scale with the real-time image, is obtained by correcting the direct-view template 1.
Further, in step (2) the target template 2 is matched with the real-time image of the target and its neighbourhood acquired by the full strapdown image seeker; the real-time image is obtained by predicting the target from inertial navigation information and the position of the full strapdown image seeker at the current moment. First, the coordinates of the current seeker position and of the target position in the Earth-centred frame are calculated [equations (7)-(12), published only as images], in which the constant is the radius of the Earth ellipsoid. Subtracting the two yields the relative position vector [equation (13)].
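Equations (7)-(13) are published only as images, but they describe the standard geodetic-to-Earth-centred conversion followed by a subtraction. The sketch below uses a spherical-Earth simplification (the patent's single "radius of the earth ellipsoid" constant suggests something similar); all function names are my own:

```python
import math

R_EARTH = 6378137.0  # equatorial radius in metres (spherical approximation)

def geodetic_to_ecef(lat_deg, lon_deg, alt_m, radius=R_EARTH):
    """Convert latitude/longitude/altitude to Earth-centred coordinates.

    Spherical-Earth stand-in for equations (7)-(12); a full WGS-84
    conversion would use the ellipsoid's radius of curvature instead.
    """
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    r = radius + alt_m
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def relative_vector(seeker_llh, target_llh):
    """Equation (13): target position minus seeker position, ECEF frame."""
    s = geodetic_to_ecef(*seeker_llh)
    t = geodetic_to_ecef(*target_llh)
    return tuple(ti - si for ti, si in zip(t, s))
```

For example, a seeker 1000 m directly above a target at (0°, 0°) yields a relative vector of about (-1000, 0, 0) m.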
The position vector is then transformed through the coordinate chain Earth-centred frame → geographic frame → missile-body frame → camera frame → image frame. From the pinhole imaging model of the camera and the Earth-centred-to-image coordinate transformation, the imaging coordinates of the target in the current full strapdown image seeker state are obtained [equation (14), published only as an image], in terms of the Earth-centred-to-geographic direction cosine matrix, the geographic-to-body direction cosine matrix, and the image centre of the full strapdown image seeker.
Selecting the target real-time image by this method allows the target to be matched quickly, effectively avoids false targets, and copes with a nonlinearly manoeuvring target that briefly leaves the field of view.
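The last step of the prediction, equation (14), is a pinhole projection. A minimal stand-in is sketched below, assuming the relative vector has already been rotated into the camera frame (the direction cosine matrices themselves appear only as images in the patent); the names and axis convention are illustrative:

```python
def predict_pixel(p_cam, focal_px, center):
    """Pinhole projection of a target vector expressed in the camera frame.

    Generic stand-in for equation (14). Assumes p_cam = (x, y, z) with x
    along the boresight, focal length in pixels, and `center` the seeker
    image centre (u0, v0).
    """
    x, y, z = p_cam
    if x <= 0:
        raise ValueError("target behind the image plane")
    u = center[0] + focal_px * y / x
    v = center[1] + focal_px * z / x
    return (u, v)
```

For instance, a target 100 m ahead, 10 m right, and 5 m below the boresight, with a 500 px focal length and a 640x480 image, projects to (370, 215).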
Further, the direct-view template 1 is corrected with inertial navigation information to obtain the target template 2 and the target real-time image, and the scale and rotation parameters of the tracking process are determined. Taking the initial target position obtained in step (2) as the centre, a region of the target real-time image is selected as the target template 2 for tracking, and the target template 2 is corrected through the inertial navigation information. During tracking, because the distance between the full strapdown image seeker and the target is far larger than the depth of field of the target, the template-correction scale factor [equation (15), published only as an image] is determined by the ratio of the seeker-to-target distances at the previous and current matched frames, the opening angles subtended by the target in adjacent frames being regarded as equal. The template-correction rotation angle during tracking [equation (16), published only as an image] is the difference between the roll angle of the full strapdown image seeker at the current matched frame and that at the previous matched frame.
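A minimal sketch of the two corrections follows. Since equations (15) and (16) appear only as images, the direction of the distance ratio and the angle-wrap convention below are my assumptions:

```python
def template_scale_factor(dist_prev, dist_curr):
    """Equation (15) under the equal-opening-angle assumption: the template
    scales inversely with seeker-to-target distance (ratio direction assumed).
    """
    return dist_prev / dist_curr

def template_rotation(roll_curr_deg, roll_prev_deg):
    """Equation (16): frame-to-frame roll change of the strapdown seeker,
    wrapped to (-180, 180] degrees (wrap convention assumed)."""
    d = (roll_curr_deg - roll_prev_deg) % 360.0
    return d - 360.0 if d > 180.0 else d
```

Halving the range doubles the apparent target size, so the template must be scaled by 2; the wrap keeps a roll passing through 360° from producing a spurious near-full-turn rotation.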
Further, the normalized grey-level mutual-information-entropy matching method with the hill-climbing search strategy in step (3) adjusts the hill-climbing order by priority and reduces the correlation computation by sampling; by setting the initial hill-climbing points at equal intervals, a coarse-to-fine search is realized while accuracy is preserved. The specific process is as follows.

Let the real-time image S have size M × N pixels and the template image have size m × n pixels. The metric function of the improved normalized grey-level mutual-information-entropy matching method [equation (17), published only as an image] combines the edge entropy of the real-time sub-image S (from the probability of each grey level in the sub-image), the edge entropy of the template image (from the probability of each grey level in the template), and the joint entropy of the sub-image and the template (from the joint probability of their grey-level pairs), weighted by the gradient amplitudes and principal directions of the real-time image and the template image. The resulting match-metric value at each candidate position reflects the degree of similarity between the template and the real-time sub-image: the larger the value, the higher the similarity.

If the correlation value of the real-time image and the template image is larger than threshold 1, the match is considered successful; otherwise it fails.
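Equation (17) is published only as an image. The sketch below computes plain normalized mutual information, (H(S)+H(T))/H(S,T), between a sub-image and a template, omitting the patent's gradient-amplitude/direction weighting; all names are my own:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a discrete sample."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def normalized_mutual_information(subimg, templ):
    """Plain normalized mutual information (H(S)+H(T))/H(S,T) between two
    equal-length grey-level sequences. The patent's metric (17) additionally
    weights this by gradient amplitude and principal direction, omitted here.
    """
    assert len(subimg) == len(templ)
    joint = entropy(list(zip(subimg, templ)))   # H(S, T) from the joint histogram
    if joint == 0.0:
        return 2.0  # two constant images: maximal similarity
    return (entropy(subimg) + entropy(templ)) / joint
```

The value ranges from 1 (statistically independent grey levels) to 2 (one image determines the other), matching the text above: the larger the value, the higher the similarity.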
Further, in step (4), when the correlation value is greater than threshold 2, the template is updated for the next-frame matching. A complete update is adopted: the image of a fixed-size region centred on the best matching position of the current frame, that is, the position of maximum mutual-information entropy [equation (18), published only as an image], is taken as template 3 and matched with the next frame. The process then re-enters the loop of template matching, computing the grey-level mutual-information entropy, obtaining the current best matching position, updating the target template, and matching again, realizing stable tracking of the target.
Compared with the prior art, the invention has the advantages that:
1. the method uses inertial navigation information to correct the direct-view template, so that the direct-view template is consistent with a real-time image acquired by a full strapdown image seeker in view angle and scale, and the method has the advantage of high tracking precision.
2. By fusing the inertial navigation information and the image information, the target memory tracking method reduces the target tracking search range, provides translation, scale, and rotation invariance, filters out the interference of similar targets, and can handle target tracking when a nonlinearly manoeuvring target briefly leaves the field of view or is occluded.
3. The method adopts a hill climbing search strategy and utilizes the improved mutual information entropy as a similarity measurement function, has the advantages of small calculated amount and strong anti-interference capability, and can improve the target tracking robustness.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the template calibration principle of the present invention;
FIG. 3 is a schematic perspective projection diagram of the present invention;
FIG. 4 is a schematic diagram of an improved hill-climbing search strategy according to the present invention;
FIG. 5 is a schematic diagram of the improved normalized gray scale mutual information entropy matching principle of the present invention;
FIG. 6 is a schematic diagram of the principle of the scale correction factor of the present invention.
Detailed Description
The following describes in detail a specific embodiment of the present invention with reference to the drawings.
As shown in fig. 1, a full strapdown image seeker target tracking method based on a direct-view template includes the following specific steps:
(1) The full strapdown image seeker is loaded with a direct-view template 1 and the corresponding longitude, latitude, altitude, and attitude data, and performs image correction using the loaded direct-view template 1 to obtain a target template 2 for tracking.
(2) The target template 2 is matched with the real-time image (hereinafter, the real-time image) of the target and its neighbourhood acquired by the full strapdown image seeker; a successful match yields the initial position of the target in the image, and if matching fails, the next real-time image is issued and matching continues.
(3) Taking the initial target position obtained in step (2) as the centre, a region of the real-time image is selected as the target template 2 for tracking, and the direct-view template 1 is corrected with inertial navigation information to obtain the target template 2 and the real-time image; the target template 2 is matched with the real-time image using a hill-climbing search strategy and a normalized grey-level mutual-information-entropy similarity measure; if the matching correlation value exceeds threshold 1, the match is considered successful, otherwise matching continues.
(4) If the correlation value also exceeds threshold 2, the template-change condition is met and a target template 3 is obtained.
In step (1): the direct-view template 1 is combined with inertial navigation information and corrected to obtain the target template 2. The direct-view template 1 and the position and inertial navigation data at the corresponding moment are obtained from the pod equipment, and the target template 2 is transformed to be consistent with the view angle and scale of the real-time image of the full strapdown image seeker.

The transformation requires the position and attitude of the pod at the moment template 1 was acquired, the pod focal length, the focal length of the full strapdown image seeker, the target position data, the position and attitude of the full strapdown seeker at the correction moment, and the elevation angle before correction, which is corrected to the elevation angle after correction. A schematic diagram of the transformation is shown in fig. 2.

[Equations (1)-(6), published only as images and derived from fig. 2, express the corrected elevation angle and target distance, and the target image coordinates of template 2, in terms of the elevation angle, target distance, and target image coordinates at the moment template 1 was acquired.]

By equations (5) and (6), the target template 2, consistent in view angle and scale with the real-time image, is obtained by correcting the direct-view template 1.
In step (2), the target template 2 is matched with the real-time image (hereinafter, the real-time image) of the target and its neighbourhood acquired by the full strapdown image seeker; the real-time image is obtained by predicting the target from inertial navigation information.

Given the position information of the full strapdown image seeker at the current moment, the coordinates of the current position and of the target position in the Earth-centred frame are first calculated [equations (7)-(12), published only as images], in which the constant is the radius of the Earth ellipsoid. Subtracting the two yields the relative position vector [equation (13)]. The position vector is then transformed through the coordinate chain Earth-centred frame → geographic frame → missile-body frame → camera frame → image frame; from the pinhole imaging model of the camera and the Earth-centred-to-image coordinate transformation, shown in fig. 3, the imaging coordinates of the target in the current missile state are obtained [equation (14)], in terms of the Earth-centred-to-geographic direction cosine matrix, the geographic-to-body direction cosine matrix, and the image centre of the full strapdown image seeker.

Predicting the target position through inertial navigation information and selecting the real-time image in this way allows the target to be matched quickly, effectively avoids false targets, and copes with a nonlinearly manoeuvring target that briefly leaves the field of view.
Step (1) yields a target template 2 consistent with the view angle and scale of the full strapdown image seeker, and step (2) yields the real-time image; the position of the target is then obtained by applying the normalized grey-level mutual-information-entropy matching method based on the hill-climbing search strategy of step (3) to the target template 2 and the real-time image.

In the invention, the scale and rotation parameters of the tracking process are determined by matching the real-time image obtained in step (2) with the target template 2, correcting both using inertial navigation information, and obtaining the target position by the hill-climbing search strategy and normalized mutual-information-entropy matching.

Because the target template 2 is corrected through the inertial navigation information, the normalized mutual-information-entropy matching method gains scale and rotation invariance. During tracking, because the distance between the full strapdown image seeker and the target is far larger than the depth of field of the target, the template-correction scale factor [equation (15), published only as an image] is determined by the ratio of the seeker-to-target distances at the previous and current matched frames, the field angles subtended by the target in adjacent frames being regarded as equal; the scale-invariant correction principle is shown in fig. 6.
The template-correction rotation angle during tracking [equation (16), published only as an image] is the difference between the roll angle of the full strapdown image seeker at the current matched frame and that at the previous matched frame.
To meet the real-time and accuracy requirements, an improved hill-climbing search strategy and a simplified correlation measure are adopted. The improvement, shown in fig. 4, mainly adjusts the hill-climbing order by priority and reduces the correlation computation by sampling. Ordering the climbers by priority removes many unnecessary searches while preserving matching accuracy: without it, several climbers may converge on the same point of maximum correlation (grey-level mutual-information entropy) or on the same local maximum. Maintaining a correlation-matrix table and a search-position matrix table during the search further improves efficiency and avoids redundant computation. The search matching, shown in fig. 5, realizes a coarse-to-fine search.
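A minimal sketch of one climber of such a search follows, with a score cache standing in for the correlation-matrix table described above; the priority scheduling of multiple climbers is omitted, and all names are my own:

```python
def hill_climb(score, start, shape):
    """Greedy hill-climbing over an integer search grid, caching scores so
    no position is evaluated twice (the 'correlation matrix table' idea).

    `score(x, y)` returns the similarity at a candidate match position;
    `shape` bounds the grid. Returns (position, score) at a local maximum.
    """
    cache = {}

    def s(p):
        if p not in cache:
            cache[p] = score(*p)
        return cache[p]

    cur = start
    while True:
        x, y = cur
        neighbours = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)
                      and 0 <= x + dx < shape[0] and 0 <= y + dy < shape[1]]
        best = max(neighbours, key=s, default=cur)
        if s(best) <= s(cur):
            return cur, s(cur)   # local maximum reached
        cur = best
```

On a unimodal correlation surface a single climber reaches the global best; in general, hill-climbing can stall at local maxima, which is why the patent launches several climbers from equally spaced initial points.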
Let the real-time image S have size M × N pixels and the template image have size m × n pixels. The metric function of the improved normalized grey-level mutual-information-entropy matching method [equation (17), published only as an image] combines the edge entropy of the real-time sub-image S (from the probability of each grey level in the sub-image), the edge entropy of the template image (from the probability of each grey level in the template), and the joint entropy of the sub-image and the template (from the joint probability of their grey-level pairs), weighted by the gradient amplitudes and principal directions of the real-time image and the template image. The resulting match-metric value at each candidate position reflects the degree of similarity between the template and the real-time sub-image: the larger the value, the higher the similarity.

If the correlation value of the real-time image and the template image is larger than threshold 1, the match is considered successful; otherwise it fails.
In the step (4), if the correlation value is greater than the threshold value 2, the template needs to be updated for the next frame matching, and a complete updating mode is adopted during general updating, namely, an image with the best matching position of the current frame as the center and a certain area is used as the template 3 to be matched with the image of the next frame.
[formula image omitted] (18)
where [symbol omitted] is the template image of a certain region of the current frame centered on the position of maximum mutual-information entropy. The process then repeatedly enters the cycle "template matching → compute grayscale mutual-information entropy → obtain current best matching position → update target template → template matching" to achieve stable target tracking.
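The matching–update cycle described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the exhaustive `best_match` search stands in for the hill-climbing mutual-information search, a negative sum-of-squared-differences score stands in for the mutual-information-entropy measure of formula (17), and `update_template` performs the complete update of formula (18) by re-cropping around the best match; all names are illustrative.

```python
import numpy as np

def best_match(frame, template):
    """Exhaustively search for the template position maximizing a
    similarity score (placeholder for the hill-climbing NMI search)."""
    th, tw = template.shape
    fh, fw = frame.shape
    best, best_pos = -np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            sub = frame[y:y + th, x:x + tw]
            # negative SSD as a stand-in similarity measure
            score = -np.sum((sub.astype(float) - template) ** 2)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

def update_template(frame, pos, size):
    """Complete template update: crop the region of `size` pixels
    anchored at the best matching position of the current frame."""
    y, x = pos
    return frame[y:y + size[0], x:x + size[1]].copy()
```

In the full method, the loop would additionally compare the match score against thresholds 1 and 2 before accepting the position and refreshing the template.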

Claims (6)

1. A full strapdown image seeker target tracking method based on a direct-view template, characterized by comprising the following specific implementation steps:
(1) loading direct-view template 1 and the corresponding longitude, latitude, altitude and attitude data into the full strapdown image seeker, and performing image correction with the loaded direct-view template 1 to obtain target template 2 for tracking;
(2) matching target template 2 with a real-time image of the target and its adjacent area acquired by the full strapdown image seeker; the initial position of the target in the image is obtained after a successful match, and if the match is unsuccessful, subsequent real-time target images continue to be issued for matching;
(3) according to the initial target position obtained in step (2), selecting a certain region of the real-time target image centered on that position as target template 2 for tracking, and correcting direct-view template 1 in combination with inertial-navigation information to obtain target template 2 and the real-time target image; matching target template 2 with the real-time target image using a hill-climbing search strategy and the normalized grayscale mutual-information-entropy similarity measure; if the matching correlation value is greater than threshold 1, the match is considered successful, otherwise matching continues;
(4) if the correlation value is greater than threshold 2, the template-update condition is met and target template 3 is obtained.
2. The full strapdown image seeker target tracking method based on a direct-view template as claimed in claim 1, wherein obtaining target template 2 for tracking after image correction with direct-view template 1 in step (1) means: direct-view template 1 and the position and inertial-navigation data of the corresponding time are obtained through pod equipment, and are converted into a target template 2 consistent in viewing angle and scale with the real-time target image of the full strapdown image seeker. The conversion requires the position [symbol omitted] and attitude [symbol omitted] of direct-view template 1 at that moment, the pod focal length [symbol omitted], the focal length of the full strapdown image seeker [symbol omitted], and the target position data [symbol omitted] to be known; the position [symbol omitted] and attitude [symbol omitted] of the full strapdown seeker at the correction moment are used, and the elevation angle is corrected from [symbol omitted] to [symbol omitted]. The conversion process is as follows:
[formula image omitted] (1)
[formula image omitted] (2)
[formula image omitted] (3)
[formula image omitted] (4)
[formula image omitted] (5)
similarly:
[formula image omitted] (6)
where [symbols omitted] are the elevation angle and the distance to the target at the moment of direct-view template 1, [symbols omitted] are the elevation angle and the distance to the target at the correction moment of target template 2, and [symbols omitted] are the target image coordinates at the moment of direct-view template 1 and the image coordinates of target template 2 at the correction moment.
By formulas (5) and (6), a target template 2 consistent in viewing angle and scale with the real-time target image is obtained by correction using direct-view template 1.
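Under a pinhole-camera model, the core of this viewpoint/scale correction is a scale change driven by the focal-length ratio and the target-distance ratio between the pod and the seeker. The sketch below is an assumption-laden illustration, since formulas (1)–(6) are not reproduced in the text: the names `f_pod`, `f_seeker`, `r_pod`, `r_seeker` are hypothetical stand-ins for the omitted symbols, and the attitude and elevation-angle corrections of the actual transformation are not shown.

```python
def scale_factor(f_pod, f_seeker, r_pod, r_seeker):
    """Image scale of the seeker view relative to the pod template:
    for a pinhole camera, image size grows with focal length and
    shrinks with range to the target."""
    return (f_seeker / f_pod) * (r_pod / r_seeker)

def correct_template_size(w, h, f_pod, f_seeker, r_pod, r_seeker):
    """New template dimensions (pixels) after scale correction."""
    k = scale_factor(f_pod, f_seeker, r_pod, r_seeker)
    return round(w * k), round(h * k)
```

For example, doubling the focal length and halving the range would quadruple the template's linear scale.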
3. The full strapdown image seeker target tracking method based on a direct-view template as claimed in claim 1, wherein, when matching target template 2 in step (2) with the real-time image of the target and its adjacent area acquired by the full strapdown image seeker, the target position is predicted using inertial-navigation information to obtain the real-time target image. Given the position information [symbols omitted] of the full strapdown image seeker at the current moment, the coordinates of the current position and of the target position in the geocentric system are first calculated:
[formula image omitted] (7)
[formula image omitted] (8)
[formula image omitted] (9)
[formula image omitted] (10)
[formula image omitted] (11)
[formula image omitted] (12)
where [symbol omitted] is the Earth ellipsoid radius.
Subtracting the two yields the relative position vector [symbol omitted]:
[formula image omitted] (13)
This position vector is then transformed through the coordinate chain geocentric system → geographic system → missile-body system → camera system → image system; from the camera pinhole-imaging model and the coordinate-transformation relation from the geocentric system to the image coordinate system, the imaging coordinates [symbol omitted] of the target in the current full strapdown image seeker state are obtained:
[formula image omitted] (14)
where [symbol omitted] is the direction-cosine matrix from the geocentric system to the geographic system, [symbol omitted] is the direction-cosine matrix from the geographic system to the missile-body system, and [symbol omitted] is the image center of the full strapdown image seeker.
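Formulas (7)–(13) amount to converting the seeker and target geodetic coordinates into the geocentric frame and differencing them. The sketch below uses a spherical-Earth approximation (the patent's omitted formulas use an ellipsoid radius), so it is illustrative only; all names are assumptions.

```python
import math

R_EARTH = 6371000.0  # mean Earth radius (m); the patent uses an ellipsoid radius

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Spherical-Earth approximation of geocentric (ECEF) coordinates
    from latitude/longitude (degrees) and altitude (m)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    r = R_EARTH + h
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def relative_vector(seeker_llh, target_llh):
    """Formula (13) sketch: target ECEF position minus seeker ECEF position."""
    s = geodetic_to_ecef(*seeker_llh)
    t = geodetic_to_ecef(*target_llh)
    return tuple(ti - si for ti, si in zip(t, s))
```

The resulting vector would then be rotated through the geographic, missile-body and camera frames and projected through the pinhole model, as in formula (14).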
4. The full strapdown image seeker target tracking method based on a direct-view template as claimed in claim 1, wherein correcting direct-view template 1 in step (3) in combination with inertial-navigation information to obtain target template 2 and the real-time target image means determining the scale and rotation parameters during tracking. According to the initial target position obtained in step (2), a certain region of the real-time target image centered on that position is selected as target template 2 for tracking, and target template 2 is corrected using inertial-navigation information. During tracking, because the distance between the full strapdown image seeker and the target is far greater than the target's depth of field, the template-correction scale factor [symbol omitted] is:
[formula image omitted] (15)
where [symbol omitted] is the distance between the seeker and the target at current-frame matching, [symbol omitted] is the distance between the seeker and the target at previous-frame matching, and [symbols omitted] are the opening angles subtended by the target in the full strapdown image seeker in the previous and current frames, which are regarded as equal.
The template-correction rotation angle [symbol omitted] during tracking is:
[formula image omitted] (16)
where [symbol omitted] is the roll angle of the full strapdown image seeker at current-frame matching and [symbol omitted] is the roll angle of the full strapdown image seeker at previous-frame matching.
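With the formula images omitted, formulas (15) and (16) reduce to a distance ratio and a roll-angle difference. The sketch below is hedged accordingly: the direction of the ratio and of the subtraction are assumptions consistent with the equal-opening-angle argument (image scale varies inversely with range), and all names are illustrative.

```python
def template_scale_factor(r_prev, r_curr):
    """Formula (15) sketch: with equal target opening angles in the two
    frames, the image scale varies inversely with seeker-target distance."""
    return r_prev / r_curr

def template_rotation(gamma_curr, gamma_prev):
    """Formula (16) sketch: rotation correction as the change in seeker
    roll angle between the previous and current matched frames."""
    return gamma_curr - gamma_prev
```

Halving the range thus doubles the template scale, and a pure roll of the strapdown seeker rotates the template by the same angle.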
5. The full strapdown image seeker target tracking method based on a direct-view template as claimed in claim 1, wherein the hill-climbing search strategy with normalized grayscale mutual-information-entropy matching in step (3) adjusts the hill-climbing order according to priority and reduces the correlation computation through sampling; under the condition of guaranteed accuracy, a coarse-to-fine search is realized by setting initial hill-climbing points at equal intervals. The specific process is as follows:
let the real-time image [symbol omitted] have a size of [dimensions omitted] (pixels) and the template image [symbol omitted] have a size of [dimensions omitted] (pixels); the metric function of the improved normalized grayscale mutual-information-entropy matching method is then expressed as:
[formula image omitted] (17)
where [symbol omitted] is the edge entropy of the real-time sub-image S and [symbol omitted] is the probability of occurrence of each gray level in the real-time sub-image; [symbol omitted] is the edge entropy of the template image and [symbol omitted] is the probability of occurrence of each gray level in the template image; [symbol omitted] is the joint entropy of the real-time sub-image and the template image, [symbol omitted] is the joint probability of gray-level occurrence in the real-time sub-image and the template image, and [symbol omitted] is the weighting value of the gradient magnitude and principal direction of the real-time image and the template image;
[symbol omitted] is the match-metric value of the real-time sub-image at position [symbol omitted]; it satisfies [conditions omitted], and its magnitude reflects the degree of similarity between the template image and the real-time sub-image: the larger the value, the higher the similarity;
if the correlation value between the real-time image and the template image is greater than threshold 1, the match is considered successful; otherwise the match fails.
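A normalized grayscale mutual-information measure of the kind named in this claim can be sketched as follows. This is a standard formulation, (H(S) + H(T)) / H(S, T), not the patent's exact formula (17), whose image is not reproduced; in particular the gradient-magnitude/principal-direction weighting term is omitted, and the bin count is an illustrative choice.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector, ignoring zero bins."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def normalized_mutual_information(s, t, bins=32):
    """Normalized MI between real-time sub-image s and template t:
    (H(S) + H(T)) / H(S, T). Larger values mean greater similarity;
    identical images give 2, independent images approach 1."""
    joint, _, _ = np.histogram2d(s.ravel(), t.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pj = joint / joint.sum()          # joint gray-level probability
    ps = pj.sum(axis=1)               # marginal of the sub-image
    pt = pj.sum(axis=0)               # marginal of the template
    return (entropy(ps) + entropy(pt)) / entropy(pj.ravel())
```

In the claimed method, this score would be evaluated at candidate offsets chosen by the coarse-to-fine hill-climbing search rather than exhaustively.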
6. The full strapdown image seeker target tracking method based on a direct-view template as claimed in claim 1, wherein in step (4), when the correlation value is greater than threshold 2, the template is updated for matching the next frame; a complete update is adopted, i.e. an image region of a certain size, centered on the best matching position in the current frame, is taken as template 3 and matched against the next-frame image;
[formula image omitted] (18)
where [symbol omitted] is the template image of a certain region of the current frame centered on the position of maximum mutual-information entropy; the process then repeatedly enters the cycle "template matching → compute grayscale mutual-information entropy → obtain current best matching position → update target template → template matching" to achieve stable tracking of the target.
CN202011508984.6A 2020-12-18 2020-12-18 Full strapdown image seeker target tracking method based on direct-aiming template Active CN112489091B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011508984.6A CN112489091B (en) 2020-12-18 2020-12-18 Full strapdown image seeker target tracking method based on direct-aiming template


Publications (2)

Publication Number Publication Date
CN112489091A true CN112489091A (en) 2021-03-12
CN112489091B CN112489091B (en) 2022-08-12

Family

ID=74914838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011508984.6A Active CN112489091B (en) 2020-12-18 2020-12-18 Full strapdown image seeker target tracking method based on direct-aiming template

Country Status (1)

Country Link
CN (1) CN112489091B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5146228A (en) * 1990-01-24 1992-09-08 The Johns Hopkins University Coherent correlation addition for increasing match information in scene matching navigation systems
US20160209846A1 (en) * 2015-01-19 2016-07-21 The Regents Of The University Of Michigan Visual Localization Within LIDAR Maps
CN107945215A (en) * 2017-12-14 2018-04-20 湖南华南光电(集团)有限责任公司 High-precision infrared image tracker and a kind of target fast tracking method
CN108717712A (en) * 2018-05-29 2018-10-30 东北大学 A kind of vision inertial navigation SLAM methods assumed based on ground level


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAN Yuzhuang et al., "Matching template correction method aided by inertial navigation information", Journal of National University of Defense Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113395448A (en) * 2021-06-15 2021-09-14 西安视成航空科技有限公司 Airborne pod image searching, tracking and processing system
CN113554131A (en) * 2021-09-22 2021-10-26 四川大学华西医院 Medical image processing and analyzing method, computer device, system and storage medium
CN114280978A (en) * 2021-11-29 2022-04-05 中国航空工业集团公司洛阳电光设备研究所 Tracking decoupling control method for photoelectric pod
CN114280978B (en) * 2021-11-29 2024-03-15 中国航空工业集团公司洛阳电光设备研究所 Tracking decoupling control method for photoelectric pod

Also Published As

Publication number Publication date
CN112489091B (en) 2022-08-12

Similar Documents

Publication Publication Date Title
CN112489091B (en) Full strapdown image seeker target tracking method based on direct-aiming template
CN105222788B (en) The automatic correcting method of the matched aircraft Route Offset error of feature based
CN108534782B (en) Binocular vision system-based landmark map vehicle instant positioning method
CN103047985B (en) A kind of method for rapidly positioning of extraterrestrial target
CN104732518B (en) A kind of PTAM improved methods based on intelligent robot terrain surface specifications
CN106295512B (en) Vision data base construction method and indoor orientation method in more correction lines room based on mark
CN110992263B (en) Image stitching method and system
CN107560603B (en) Unmanned aerial vehicle oblique photography measurement system and measurement method
CN109387192B (en) Indoor and outdoor continuous positioning method and device
CN109540113B (en) Total station and star map identification method thereof
CN111238540A (en) Lopa gamma first camera-satellite sensitive installation calibration method based on fixed star shooting
CN103996027B (en) Space-based space target recognizing method
CN112163995B (en) Splicing generation method and device for oversized aerial strip images
WO2020181506A1 (en) Image processing method, apparatus and system
KR20200084972A (en) Method for acquisition of hyperspectral image using an unmanned aerial vehicle
CN103428408A (en) Inter-frame image stabilizing method
CN110889353B (en) Space target identification method based on primary focus large-visual-field photoelectric telescope
CN113610896B (en) Method and system for measuring target advance quantity in simple fire control sighting device
CN114860196A (en) Telescope main light path guide star device and calculation method of guide star offset
CN105403886B (en) A kind of carried SAR scaler picture position extraction method
CN109883400B (en) Automatic target detection and space positioning method for fixed station based on YOLO-SITCOL
CN116977902B (en) Target tracking method and system for on-board photoelectric stabilized platform of coastal defense
CN117036666B (en) Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN116091804B (en) Star suppression method based on adjacent frame configuration matching
CN110986916A (en) Indoor positioning method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant