CN115690611A - Multi-feature unmanned aerial vehicle video tracking method based on self-adaptive updating strategy


Info

Publication number: CN115690611A
Application number: CN202211171636.3A
Authority: CN (China)
Legal status: Pending
Prior art keywords: response, HOG, image, PSR, map
Inventors: 张岩, 郑钰辉
Applicant and current assignee: Nanjing University of Information Science and Technology
Original language: Chinese (zh)
Classification: Image Analysis (AREA)
Filed 2022-09-26 (priority date 2022-09-26); published 2023-02-03.

Abstract

The invention discloses a multi-feature unmanned aerial vehicle video tracking method based on a self-adaptive updating strategy, comprising the following steps: after an image is input, cropping an image block centered on the target position of the previous frame; extracting HOG, CN and SA features of the image block and fusing the three features through the PSR to obtain a final response map; judging the tracking quality of the current frame from the PSR of the final response map and, if the quality is poor, denoising the final response map through a denoising mechanism; determining the target position in the current frame from the final response map; and adaptively updating the model according to the target position in the current frame. The invention fuses HOG, CN and SA features, which enriches the representation of the target; meanwhile, the denoising mechanism effectively mitigates the interference of background noise, and the self-adaptive updating strategy prevents template degradation.

Description

Multi-feature unmanned aerial vehicle video tracking method based on self-adaptive updating strategy
Technical Field
The invention relates to image recognition and target tracking, and in particular to a multi-feature unmanned aerial vehicle video tracking method based on a self-adaptive updating strategy.
Background
As a new means of observation, unmanned aerial vehicles can continuously observe specific events on the ground by providing images and videos with high temporal resolution. Video observation makes many unmanned aerial vehicle applications possible, such as road monitoring and wildlife censuses. Target tracking is one of the most active research directions in unmanned aerial vehicle video observation: given the input information of the first frame, the position of the tracked target is established in each subsequent frame to form the target's motion trajectory.
Existing target tracking algorithms comprise correlation filtering algorithms and deep learning algorithms. Deep learning algorithms need a large number of training parameters, their computational cost is huge, and a real-time tracking speed is difficult to guarantee. Target tracking algorithms based on correlation filtering generally convert the convolution operation into the frequency domain through the Fourier transform, which improves the computation speed, so that high tracking precision and tracking speed can be obtained in traditional video target tracking. In addition, the performance of tracking algorithms is also severely affected by low resolution and background interference.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to provide a multi-feature unmanned aerial vehicle video tracking method based on a self-adaptive updating strategy that effectively mitigates the interference of background noise and greatly improves the performance of the tracking algorithm.
The technical scheme is as follows: the multi-feature unmanned aerial vehicle video tracking method based on a self-adaptive updating strategy according to the invention comprises the following steps:
(1) After the image of the current frame is input, crop an image block centered on the target position of the previous frame; the size of the image block is 5 times the size of the target.
(1.1) If the current frame is the first frame of the video, the target position is known, i.e., the target position in the first frame is specified manually.
(1.2) If the current frame is not the first frame of the video, crop an image block centered on the target position of the previous frame, 5 times the size of the target (a minimal cropping sketch follows this step).
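For illustration, a minimal Python sketch of the cropping step is given below. The function name, the (row, col) conventions and the clamping at the frame border are our own assumptions; the patent only fixes the 5x patch size.

```python
def crop_patch(frame, center, target_size, scale=5.0):
    """Crop the search region of step (1): a patch centered on the previous
    target position, `scale` times the target size. Border handling is an
    implementation choice; the patent does not specify any padding.
    frame: 2-D array (grayscale image); center: (row, col); target_size: (h, w).
    """
    cy, cx = center
    ph = int(round(target_size[0] * scale))
    pw = int(round(target_size[1] * scale))
    y0 = max(int(cy) - ph // 2, 0)
    x0 = max(int(cx) - pw // 2, 0)
    y1 = min(y0 + ph, frame.shape[0])
    x1 = min(x0 + pw, frame.shape[1])
    return frame[y0:y1, x0:x1]
```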
(2) Extract Histogram of Oriented Gradients (HOG) features, Color Names (CN) features and Saliency (SA) features of the image block, compute the HOG response map, the CN response map and the SA response map with the corresponding correlation filters, and fuse the three response maps through the peak-to-sidelobe ratio (PSR) to obtain a fused response map.
(2.1) Extract the HOG features of the image block: first convert the image block into a grayscale image; then compute the gradient of each pixel to obtain a pixel-level feature map, and aggregate it over a dense grid of rectangular cells to obtain a cell-based feature map; then truncate and normalize the 27-dimensional histogram (9 orientation-insensitive and 18 orientation-sensitive channels) with four different normalization factors; finally, sum the channels over the different normalization factors to obtain the HOG feature F(HOG).
(2.2) Extract the CN features of the image block: map the RGB values of the image onto 11 basic color names, i.e., associate them with 11 color linguistic labels, to obtain the CN feature F(CN).
(2.3) Extract the SA features of the image: the SA feature is extracted with the spectral residual method. The k-th frame image I(k) is preprocessed before feature extraction, and the Fourier transform F() is used to obtain the amplitude spectrum A(k) and the phase spectrum P(k) of I(k) in the frequency domain:
A(k)=AC(F(I(k)))
P(k)=PC(F(I(k)))
where AC() and PC() extract the amplitude and the phase, respectively. The image is represented by its log spectrum:
L(k)=log(A(k))
The general shape of the log spectrum L(k) can be approximated by smoothing L(k) with H_n, so the spectral residual r(k) is obtained by the following formula:
r(k) = L(k) - H_n * L(k)
where H_n is an n × n mean filter, i.e., a matrix with every entry equal to 1/n²:
H_n = (1/n²) × 1_{n×n}
The saliency information contained in the image is carried by the spectral residual r(k), and the saliency feature F(SA) is obtained by the following formula:
F(SA) = G(k) ★ F⁻¹[exp(r(k) + i·P(k))]²
where G(k) is a Gaussian filter, i is the imaginary unit, and F⁻¹ is the inverse Fourier transform.
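A compact Python sketch of this spectral-residual computation, following the formulas above, might look as follows; the filter size n and the Gaussian width sigma are illustrative defaults, not values taken from the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray, n=3, sigma=2.5):
    """Spectral-residual saliency of step (2.3) for a grayscale patch I(k)."""
    f = np.fft.fft2(gray)
    A = np.abs(f)                       # amplitude spectrum A(k)
    P = np.angle(f)                     # phase spectrum P(k)
    L = np.log(A + 1e-8)                # log spectrum L(k)
    r = L - uniform_filter(L, size=n)   # spectral residual r(k) = L(k) - H_n * L(k)
    # back to the spatial domain: |F^-1[exp(r + iP)]|^2, then smooth with G(k)
    sal = np.abs(np.fft.ifft2(np.exp(r + 1j * P))) ** 2
    return gaussian_filter(sal, sigma=sigma)
```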
(2.4) The HOG response map res(HOG), the CN response map res(CN) and the SA response map res(SA) are obtained by correlating the HOG, CN and SA feature maps with the correlation filters:
res(HOG) = F(HOG) ⊙ H_HOG
res(CN) = F(CN) ⊙ H_CN
res(SA) = F(SA) ⊙ H_SA
where ⊙ is the dot-product (element-wise) operation, and H_HOG, H_CN, H_SA are the three correlation filters corresponding to the different features.
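Read as frequency-domain correlation, step (2.4) can be sketched as below. Storing the filter in the frequency domain, taking its conjugate, and summing per-channel responses are common correlation-filter conventions assumed here; the patent states only the element-wise product.

```python
import numpy as np

def correlation_response(feat, H):
    """Element-wise product of the feature spectrum with the filter H
    (assumed stored in the frequency domain), then inverse FFT. The
    conjugate turns the product into a correlation rather than a convolution.
    """
    F = np.fft.fft2(feat, axes=(0, 1))
    resp = np.real(np.fft.ifft2(F * np.conj(H), axes=(0, 1)))
    return resp.sum(axis=2) if resp.ndim == 3 else resp
```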
(2.5) Compute the PSR of each of the three feature response maps:
PSR(i) = (g_max(res(i)) - μ(res(i))) / σ(res(i))
where i ∈ {HOG, CN, SA}, g_max() is the maximum value of the response map, μ() is the mean of the response map, and σ() is the standard deviation of the response map.
(2.6) Enhance each response map with the PSR of the corresponding feature response map:
Re(i)=PSR(i)×res(i)
where i ∈ {HOG, CN, SA}, and Re() is the enhanced feature response map, i.e., Re(HOG), Re(CN) and Re(SA).
(2.7) Fuse the three enhanced feature response maps pairwise:
R(HC)=Re(HOG)⊙Re(CN)
R(HS)=Re(HOG)⊙Re(SA)
R(CS)=Re(CN)⊙Re(SA)
(2.8) Finally, compute the PSR of each pairwise-fused response map and combine them into the final feature response map R(final):
R(final) = Σ_i [PSR(R(i)) / Σ_j PSR(R(j))] × R(i)
where i, j ∈ {HC, HS, CS}.
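Steps (2.5) to (2.8) can be condensed into a short sketch. The PSR-normalized combination in fuse_responses is one plausible reading of the final fusion formula, which survives only as an equation image in the source.

```python
import numpy as np

def psr(resp):
    """Peak-to-sidelobe ratio of a response map, as in step (2.5)."""
    return (resp.max() - resp.mean()) / (resp.std() + 1e-8)

def fuse_responses(res_hog, res_cn, res_sa):
    """PSR enhancement, pairwise fusion and final combination, steps (2.6)-(2.8)."""
    re_hog = psr(res_hog) * res_hog          # step (2.6): PSR-weighted enhancement
    re_cn = psr(res_cn) * res_cn
    re_sa = psr(res_sa) * res_sa
    pairs = {                                # step (2.7): pairwise dot products
        "HC": re_hog * re_cn,
        "HS": re_hog * re_sa,
        "CS": re_cn * re_sa,
    }
    weights = {k: psr(v) for k, v in pairs.items()}   # step (2.8): PSR weights
    total = sum(weights.values())
    return sum((weights[k] / total) * pairs[k] for k in pairs)
```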
(3) Compute the PSR of the fused response map. If the PSR is greater than a threshold, the maximum-response point of the fused response map is the target position; if the PSR is smaller than the threshold, denoise the fused response map, and the maximum-response point of the denoised response map is the target position.
(3.1) Compute the PSR of the final response map R(final):
PSR(final) = (g_max(R(final)) - μ(R(final))) / σ(R(final))
(3.2) If the value of PSR(final) is greater than the threshold, the position of the maximum-response point of the final response map R(final) is the target position.
(3.3) If the value of PSR(final) is less than the threshold, the final response map R(final) must be denoised. First compute the distance L between the maximum-response point of the final response map and the target position in the previous frame; the denoising mask map(i, j) has size SL × SL, where
SL = 2.5 × L
The mask map(i, j) follows a cosine distribution with value 1 at the center and 0 at the edges.
(3.4) Align the center of the mask map(i, j) with the center of the final response map R(final) and multiply element-wise to denoise the final response map; the maximum-response point of the denoised response map is the target position.
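A sketch of the denoising mechanism of steps (3.3) and (3.4), using a separable Hann window as the 2-D cosine mask; this is one plausible realization of the "cosine distribution with value 1 at the center and 0 at the edges".

```python
import numpy as np

def denoise_response(R_final, peak, prev_pos):
    """Build the SL x SL cosine mask of step (3.3), align its center with the
    center of R(final), and multiply element-wise as in step (3.4)."""
    L = np.hypot(peak[0] - prev_pos[0], peak[1] - prev_pos[1])
    SL = max(int(round(2.5 * L)), 3)       # SL = 2.5 x L, kept at least 3 px
    w = np.hanning(SL)                     # cosine window: 1 at center, 0 at edges
    mask_small = np.outer(w, w)
    mask = np.zeros_like(R_final)
    h, wd = R_final.shape
    y0 = max(h // 2 - SL // 2, 0)
    x0 = max(wd // 2 - SL // 2, 0)
    y1 = min(y0 + SL, h)
    x1 = min(x0 + SL, wd)
    mask[y0:y1, x0:x1] = mask_small[:y1 - y0, :x1 - x0]
    return R_final * mask
```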
(4) Adaptively update the correlation filter according to the target position.
(4.1) Obtain the PSR of the response map of the current frame, i.e., PSR(I(k)).
(4.2) Compute the average PSR of the response maps of the historical frames:
PSR_avg = (1/(k-1)) × Σ_{t=1}^{k-1} PSR(I(t))
(4.3) Compute the confidence β of the current frame:
β = PSR(I(k)) / PSR_avg
(4.4) Adjust the update weight V of the current frame through the confidence β:
[equation image: V as a function of β with the hyper-parameters α and γ]
where α and γ are pre-set hyper-parameters.
(4.5) Perform the adaptive model update:
H_k = (1 - ηV)H_{k-1} + ηVH'_k
where η is the initial learning rate, H_k is the model of the current frame, H_{k-1} is the model of the previous frame, and H'_k is the model computed from the current frame.
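The adaptive update of step (4) might be sketched as follows. Since the mapping from β to V survives only as an equation image, the sigmoid below is a hypothetical stand-in, with alpha and gamma playing the roles of the preset hyper-parameters.

```python
import numpy as np

def adaptive_update(H_prev, H_curr, psr_now, psr_history,
                    eta=0.02, alpha=4.0, gamma=1.0):
    """Steps (4.2)-(4.5): confidence beta from the PSR history, a weight V
    derived from beta (assumed sigmoid), then the linear model update."""
    beta = psr_now / (np.mean(psr_history) + 1e-8)      # step (4.3)
    V = 1.0 / (1.0 + np.exp(-alpha * (beta - gamma)))   # assumed V(beta)
    return (1.0 - eta * V) * H_prev + eta * V * H_curr  # step (4.5)
```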
A computer storage medium having stored thereon a computer program which, when executed by a processor, implements the adaptive update policy based multi-feature drone video tracking method described above.
A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the adaptive update policy based multi-feature drone video tracking method described above when executing the computer program.
Beneficial effects: compared with the prior art, the invention has the following advantages:
1. HOG, CN and SA features are fused, which further enriches the representation of the target;
2. an adaptive model updating strategy is adopted, which effectively prevents model degradation;
3. the denoising mechanism further mitigates the effect of background noise.
Drawings
FIG. 1 is a flow chart of a target tracking method of the present invention;
FIG. 2 is a flowchart of the feature fusion module operation of the present invention;
FIG. 3 is an accuracy comparison chart of the target tracking methods in the embodiment;
FIG. 4 is a success-rate comparison chart of the target tracking methods in the embodiment.
Detailed Description
The technical scheme of the invention is further explained below with reference to the accompanying drawings.
As shown in FIG. 1, a multi-feature unmanned aerial vehicle video tracking method based on a self-adaptive updating strategy comprises the following steps:
(1) After the image of the current frame is input, crop an image block centered on the target position of the previous frame; the size of the image block is 5 times the size of the target.
(1.1) If the current frame is the first frame of the video, the target position is known, i.e., the target position in the first frame is specified manually.
(1.2) If the current frame is not the first frame of the video, crop an image block centered on the target position of the previous frame, 5 times the size of the target.
(2) Extract Histogram of Oriented Gradients (HOG) features, Color Names (CN) features and Saliency (SA) features of the image block, compute the HOG response map, the CN response map and the SA response map with the corresponding correlation filters, and fuse the three response maps through the peak-to-sidelobe ratio (PSR) to obtain a fused response map; the feature fusion process is shown in FIG. 2.
(2.1) Extract the HOG features of the image block: first convert the image block into a grayscale image; then compute the gradient of each pixel to obtain a pixel-level feature map, and aggregate it over a dense grid of rectangular cells to obtain a cell-based feature map; then truncate and normalize the 27-dimensional histogram (9 orientation-insensitive and 18 orientation-sensitive channels) with four different normalization factors; finally, sum the channels over the different normalization factors to obtain the HOG feature F(HOG).
(2.2) Extract the CN features of the image block: map the RGB values of the image onto 11 basic color names, i.e., associate them with 11 color linguistic labels, to obtain the CN feature F(CN).
(2.3) Extract the SA features of the image: the SA feature is extracted with the spectral residual method. The k-th frame image I(k) is preprocessed before feature extraction, and the Fourier transform F() is used to obtain the amplitude spectrum A(k) and the phase spectrum P(k) of I(k) in the frequency domain:
A(k)=AC(F(I(k)))
P(k)=PC(F(I(k)))
where AC() and PC() extract the amplitude and the phase, respectively. The image is represented by its log spectrum:
L(k)=log(A(k))
The general shape of the log spectrum L(k) can be approximated by smoothing L(k) with H_n, so the spectral residual r(k) is obtained by the following formula:
r(k) = L(k) - H_n * L(k)
where H_n is an n × n mean filter, i.e., a matrix with every entry equal to 1/n²:
H_n = (1/n²) × 1_{n×n}
The saliency information contained in the image is carried by the spectral residual r(k), and the saliency feature F(SA) is obtained by the following formula:
F(SA) = G(k) ★ F⁻¹[exp(r(k) + i·P(k))]²
where G(k) is a Gaussian filter, i is the imaginary unit, and F⁻¹ is the inverse Fourier transform.
(2.4) The HOG response map res(HOG), the CN response map res(CN) and the SA response map res(SA) are obtained by correlating the HOG, CN and SA feature maps with the correlation filters:
res(HOG) = F(HOG) ⊙ H_HOG
res(CN) = F(CN) ⊙ H_CN
res(SA) = F(SA) ⊙ H_SA
where ⊙ is the dot-product (element-wise) operation, and H_HOG, H_CN, H_SA are the three correlation filters corresponding to the different features.
(2.5) Compute the PSR of each of the three feature response maps:
PSR(i) = (g_max(res(i)) - μ(res(i))) / σ(res(i))
where i ∈ {HOG, CN, SA}, g_max() is the maximum value of the response map, μ() is the mean of the response map, and σ() is the standard deviation of the response map.
(2.6) Enhance each response map with the PSR of the corresponding feature response map:
Re(i)=PSR(i)×res(i)
where i ∈ {HOG, CN, SA}, and Re() is the enhanced feature response map, i.e., Re(HOG), Re(CN) and Re(SA).
(2.7) Fuse the three enhanced feature response maps pairwise:
R(HC)=Re(HOG)⊙Re(CN)
R(HS)=Re(HOG)⊙Re(SA)
R(CS)=Re(CN)⊙Re(SA)
(2.8) Finally, compute the PSR of each pairwise-fused response map and combine them into the final feature response map R(final):
R(final) = Σ_i [PSR(R(i)) / Σ_j PSR(R(j))] × R(i)
where i, j ∈ {HC, HS, CS}.
(3) Compute the PSR of the fused response map. If the PSR is greater than a threshold, the maximum-response point of the fused response map is the target position; if the PSR is smaller than the threshold, denoise the fused response map, and the maximum-response point of the denoised response map is the target position.
(3.1) Compute the PSR of the final response map R(final):
PSR(final) = (g_max(R(final)) - μ(R(final))) / σ(R(final))
(3.2) If the value of PSR(final) is greater than the threshold, the position of the maximum-response point of the final response map R(final) is the target position.
(3.3) If the value of PSR(final) is less than the threshold, the final response map R(final) must be denoised. First compute the distance L between the maximum-response point of the final response map and the target position in the previous frame; the denoising mask map(i, j) has size SL × SL, where
SL = 2.5 × L
The mask map(i, j) follows a cosine distribution with value 1 at the center and 0 at the edges.
(3.4) Align the center of the mask map(i, j) with the center of the final response map R(final) and multiply element-wise to denoise the final response map; the maximum-response point of the denoised response map is the target position.
(4) Adaptively update the correlation filter according to the target position.
(4.1) Obtain the PSR of the response map of the current frame, i.e., PSR(I(k)).
(4.2) Compute the average PSR of the response maps of the historical frames:
PSR_avg = (1/(k-1)) × Σ_{t=1}^{k-1} PSR(I(t))
(4.3) Compute the confidence β of the current frame:
β = PSR(I(k)) / PSR_avg
(4.4) Adjust the update weight V of the current frame through the confidence β:
[equation image: V as a function of β with the hyper-parameters α and γ]
where α and γ are pre-set hyper-parameters.
(4.5) Perform the adaptive model update:
H_k = (1 - ηV)H_{k-1} + ηVH'_k
where η is the initial learning rate, H_k is the model of the current frame, H_{k-1} is the model of the previous frame, and H'_k is the model computed from the current frame.
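Putting the pieces together, one illustrative per-frame pass, reusing the sketches given earlier (crop_patch, spectral_residual_saliency, correlation_response, fuse_responses, psr, denoise_response, adaptive_update), could look like this. The HOG and CN extractors are stubbed with the raw patch, the PSR threshold tau is assumed, and coordinates stay patch-local; none of these details are fixed numerically in the patent.

```python
import numpy as np

def peak_location(resp):
    """(row, col) of the maximum response value."""
    return np.unravel_index(np.argmax(resp), resp.shape)

def track_frame(frame, state, tau=5.0):
    """One pass through the pipeline of FIG. 1. `state` holds "pos", "size",
    three frequency-domain "filters" shaped like the patch, and "psr_history".
    Assumes the patch keeps a fixed size away from the frame border."""
    patch = crop_patch(frame, state["pos"], state["size"])        # step (1)
    feats = [patch, patch, spectral_residual_saliency(patch)]     # HOG/CN stubs + SA
    responses = [correlation_response(f, H)                       # step (2.4)
                 for f, H in zip(feats, state["filters"])]
    R = fuse_responses(*responses)                                # steps (2.5)-(2.8)
    if psr(R) < tau:                                              # step (3.3)
        R = denoise_response(R, peak_location(R), state["pos"])
    state["pos"] = peak_location(R)       # steps (3.2)/(3.4), patch-local coords
    state["psr_history"].append(psr(R))
    state["filters"] = [adaptive_update(H, np.fft.fft2(patch),    # step (4)
                                        psr(R), state["psr_history"])
                        for H in state["filters"]]
    return state["pos"]
```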
To further verify the effect of the target tracking method, the accuracy and success rate of the method of this embodiment were simulated and compared with other tracking methods; the results are shown in Table 1. Accuracy is the ratio of frames whose center-distance error between the predicted and ground-truth positions is below a given threshold to the total number of frames; success rate is the ratio of frames whose overlap between the predicted and ground-truth regions exceeds a given threshold to the total number of frames; AUC is the area under the success-rate curve. As Table 1 shows, the invention achieves the highest accuracy, success rate and AUC. The PSRS method is the most similar to ours, but PSRS does not enhance the extracted features, whereas our method does; compared with PSRS, the accuracy of our method improves by 0.040, the success rate by 0.031 and the AUC by 0.028. Furthermore, as shown in FIG. 3 and FIG. 4, the performance of our algorithm is significantly better than that of the other algorithms.
TABLE 1 Statistics of the experimental results

               Ours     PSRS     KCC      BACF
Accuracy       0.628    0.588    0.531    0.572
Success rate   0.557    0.526    0.447    0.506
AUC            0.460    0.432    0.374    0.413

Claims (7)

1. A multi-feature unmanned aerial vehicle video tracking method based on a self-adaptive updating strategy is characterized by comprising the following steps:
(1) After the image of the current frame is input, cutting out an image block centered on the target position of the previous frame, wherein the size of the image block is 5 times the size of the target;
(2) Extracting HOG features, CN features and SA features of the image block, calculating the HOG response map, the CN response map and the SA response map with the corresponding correlation filters, and fusing the three response maps through the peak-to-sidelobe ratio (PSR) to obtain a fused response map;
(3) Calculating the PSR of the fused response map, wherein if the PSR is greater than a threshold, the maximum-response point of the fused response map is the target position; and if the PSR is smaller than the threshold, denoising the fused response map, wherein the maximum-response point of the denoised response map is the target position;
(4) The correlation filter is adaptively updated according to the position of the target.
2. The method for tracking the video of the multi-feature unmanned aerial vehicle based on the adaptive update strategy according to claim 1, wherein the step (1) is specifically as follows:
(1.1) if the current frame is the first frame of the video, the target position is known, i.e., the target position in the first frame is specified manually;
(1.2) if the current frame is not the first frame of the video, cutting out an image block centered on the target position of the previous frame, the image block being 5 times the size of the target.
3. The method for tracking the video of the multi-feature unmanned aerial vehicle based on the adaptive update strategy according to claim 1, wherein the step (2) is specifically as follows:
(2.1) extracting the HOG features of the image block: first converting the image block into a grayscale image; then computing the gradient of each pixel to obtain a pixel-level feature map, and aggregating it over a dense grid of rectangular cells to obtain a cell-based feature map; then truncating and normalizing the 27-dimensional histogram (9 orientation-insensitive and 18 orientation-sensitive channels) with four different normalization factors; finally, summing the channels over the different normalization factors to obtain the HOG feature F(HOG);
(2.2) extracting the CN features of the image block: mapping the RGB values of the image onto 11 basic color names, i.e., associating them with 11 color linguistic labels, to obtain the CN feature F(CN);
(2.3) extracting the SA features of the image: the SA feature is extracted with the spectral residual method; the k-th frame image I(k) is preprocessed before feature extraction, and the Fourier transform F() is used to obtain the amplitude spectrum A(k) and the phase spectrum P(k) of I(k) in the frequency domain:
A(k)=AC(F(I(k)))
P(k)=PC(F(I(k)))
wherein AC() and PC() are functions for extracting the amplitude and the phase, respectively; the image is represented by its log spectrum:
L(k)=log(A(k))
the general shape of the log spectrum L(k) can be approximated by smoothing L(k) with H_n, so the spectral residual r(k) can be obtained by the following formula:
r(k) = L(k) - H_n * L(k)
wherein H_n is an n × n mean filter, i.e., a matrix with every entry equal to 1/n²:
H_n = (1/n²) × 1_{n×n}
the saliency information contained in the image is carried by the spectral residual r(k), and the saliency feature F(SA) is obtained by the following formula:
F(SA) = G(k) * F⁻¹[exp(r(k) + i·P(k))]²
wherein G(k) is a Gaussian filter, i is the imaginary unit, and F⁻¹ is the inverse Fourier transform;
(2.4) obtaining the HOG response map res(HOG), the CN response map res(CN) and the SA response map res(SA) by correlating the HOG, CN and SA feature maps with the correlation filters:
res(HOG) = F(HOG) ⊙ H_HOG
res(CN) = F(CN) ⊙ H_CN
res(SA) = F(SA) ⊙ H_SA
wherein ⊙ is the dot-product (element-wise) operation, and H_HOG, H_CN, H_SA are the three correlation filters corresponding to the different features;
(2.5) calculating the PSR of each of the three feature response maps:
PSR(i) = (g_max(res(i)) - μ(res(i))) / σ(res(i))
wherein i ∈ {HOG, CN, SA}, g_max() is the maximum value of the response map, μ() is the mean of the response map, and σ() is the standard deviation of the response map;
(2.6) enhancing each response map with the PSR of the corresponding feature response map:
Re(i)=PSR(i)×res(i)
wherein i ∈ {HOG, CN, SA}, and Re() is the enhanced feature response map, i.e., Re(HOG), Re(CN) and Re(SA);
(2.7) fusing the three enhanced feature response maps pairwise:
R(HC)=Re(HOG)⊙Re(CN)
R(HS)=Re(HOG)⊙Re(SA)
R(CS)=Re(CN)⊙Re(SA)
(2.8) finally, calculating the PSR of each pairwise-fused response map and combining them into the final feature response map R(final):
R(final) = Σ_i [PSR(R(i)) / Σ_j PSR(R(j))] × R(i)
wherein i, j ∈ {HC, HS, CS}.
4. The method for tracking the video of the multi-feature unmanned aerial vehicle based on the adaptive update strategy according to claim 1, wherein the step (3) is specifically as follows:
(3.1) calculating the PSR of the final response map R(final):
PSR(final) = (g_max(R(final)) - μ(R(final))) / σ(R(final))
(3.2) if the value of PSR(final) is greater than the threshold, the position of the maximum-response point of the final response map R(final) is the target position;
(3.3) if the value of PSR(final) is less than the threshold, denoising the final response map R(final): first calculating the distance L between the maximum-response point of the final response map and the target position in the previous frame, the denoising mask map(i, j) having size SL × SL, where
SL = 2.5 × L
and the mask map(i, j) follows a cosine distribution with value 1 at the center and 0 at the edges;
(3.4) aligning the center of the mask map(i, j) with the center of the final response map R(final) and performing a point-multiplication operation to denoise the final response map, wherein the maximum-response point of the denoised response map is the target position.
5. The method for tracking the video of the multi-feature unmanned aerial vehicle based on the adaptive update strategy according to claim 1, wherein the step (4) is specifically as follows:
(4.1) calculating the PSR of the response map of the current frame, namely PSR(I(k));
(4.2) calculating the average PSR of the response maps of the historical frames:
PSR_avg = (1/(k-1)) × Σ_{t=1}^{k-1} PSR(I(t))
(4.3) calculating the confidence β of the current frame:
β = PSR(I(k)) / PSR_avg
(4.4) adjusting the update weight V of the current frame through the confidence β:
[equation image: V as a function of β with the hyper-parameters α and γ]
wherein α and γ are preset hyper-parameters;
(4.5) carrying out adaptive model updating, wherein the formula is as follows:
H_k = (1 - ηV)H_{k-1} + ηVH'_k
where η is the initial learning rate, H_k is the model of the current frame, H_{k-1} is the model of the previous frame, and H'_k is the model computed from the current frame.
6. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the adaptive update policy based multi-feature drone video tracking method according to any one of claims 1-5.
7. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor when executing the computer program implements the adaptive update policy based multi-feature drone video tracking method according to any one of claims 1-5.
CN202211171636.3A: Multi-feature unmanned aerial vehicle video tracking method based on self-adaptive updating strategy. Filed 2022-09-26, priority date 2022-09-26; published 2023-02-03 as CN115690611A (pending).




Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination