CN111612729A - Target sequence tracking image recovery method based on Kalman filtering - Google Patents


Info

Publication number
CN111612729A
CN111612729A (application CN202010454432.5A; granted as CN111612729B)
Authority
CN
China
Prior art keywords
matrix
target image
jth column
image matrix
moment
Prior art date
Legal status
Granted
Application number
CN202010454432.5A
Other languages
Chinese (zh)
Other versions
CN111612729B (en)
Inventor
文成林 (WEN Chenglin)
付仁杰 (FU Renjie)
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202010454432.5A
Publication of CN111612729A
Application granted
Publication of CN111612729B
Current legal status: Active


Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/70: Denoising; smoothing
    • G06T 7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T 7/90: Determination of colour characteristics
    • G06T 2207/20221: Image fusion; image merging
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a target sequence tracking image recovery method based on Kalman filtering. The invention comprises three parts. In the first part, a Kalman equation system is established according to the motion state of the actual target tracking; in the second part, the Kalman equation system is rewritten in column-vector form; in the third part, the target sequence observation images are fused. The invention overcomes both the limited effect obtainable by restoring a single damaged image and the stringent requirements that image fusion techniques place on their input images.

Description

Target sequence tracking image recovery method based on Kalman filtering
Technical Field
The invention belongs to the field of image enhancement, and relates to a target sequence tracking image recovery method based on Kalman filtering.
Background
With the rapid development of intelligent image processing systems, enhancement techniques for target tracking images have attracted wide attention. A target sequence tracking image is a sequence of images obtained by a sensor continuously tracking a moving target; by analyzing these images, a computer can obtain more complete information than human visual observation.
Image quality is an important factor in computer processing and decision making, but during imaging, transmission and storage an image is inevitably affected by environmental noise, lossy compression and low-precision analog-to-digital conversion, and may even lose its application value. Therefore, how to remove noise from damaged target sequence tracking images and reconstruct high-quality images is one of the important problems in the field of intelligent image processing.
Researchers at home and abroad have proposed many practical image enhancement algorithms, such as the Kalman filtering algorithm, the mean filtering algorithm and the wavelet denoising algorithm, which can directly or indirectly remove part of the influence of noise on image information. These algorithms have the advantage of being simple and feasible, but the effect obtainable by restoring a single damaged image is limited; when the noise polluting the image is too strong or the image is greatly deformed, it is difficult to meet the task's requirements on image quality.
Image fusion techniques take multiple images of the same target and perform fusion denoising, concentrating the useful information of several damaged images into one image; by exploiting the complementarity and redundancy of the image information, a better result can be obtained.
The Kalman filtering algorithm can remove part of the influence of noise on image information, but when the noise polluting the image is too strong or the image is greatly deformed, its performance is clearly insufficient. Considering that a Kalman filtering framework can fuse sequence images and that the observation model under this framework can simulate image deformation, the invention proposes an image recovery method based on Kalman filtering.
Disclosure of Invention
The invention provides an image recovery method based on Kalman filtering, aiming to solve the problem that the effect obtainable by restoring a single damaged image is limited and the problem that image fusion techniques place stringent requirements on their input images.
The invention comprises three parts. In the first part, a Kalman equation system is established according to the motion state of the actual target tracking; in the second part, the Kalman equation system is rewritten in column-vector form; in the third part, the target sequence observation images are fused.
The invention overcomes both the limited effect obtainable by restoring a single damaged image and the stringent requirements of image fusion techniques, and comprises the following steps:
Step 1, describe the matrix form of the Kalman equation system. To estimate the target image by fusing the sequence observation images with a Kalman filter, a system model and an observation model conforming to the Kalman filtering framework are first established.
Considering that the target moves slowly, its dynamic model is represented as follows
X(k+1)=A(k)X(k)+W(k),k=0,1,2,…,N (1)
In the above formula, X(k) is the target state corresponding to the observation image at time k, X(0) is assumed to be the matrix of the target's initial state, A(k) is the corresponding state transition matrix, and W(k) is zero-mean Gaussian white noise.
The observation model is
Y(k+1)=H(k+1)X(k+1)+V(k+1),k=0,1,2,…,N (2)
In the above formula, Y(k+1) is the observation image at time k+1, H(k+1) is the corresponding observation matrix, and V(k+1) is zero-mean Gaussian white noise.
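As an illustration of models (1) and (2), a noisy observation sequence can be simulated as follows. This is our own sketch, not part of the patent: the image size, noise levels and the identity choices of A and H are assumptions.

```python
import numpy as np

# Sketch of the dynamic model (1) and observation model (2),
# assuming a small grayscale image and identity dynamics
# (the description later chooses A(k) = I).
rng = np.random.default_rng(0)

m = n = 8                      # image size (assumed for illustration)
N = 5                          # number of observation instants
A = np.eye(m)                  # state transition matrix A(k)
H = np.eye(m)                  # observation matrix H(k); may vary with k
q, r = 0.01, 0.05              # process / measurement noise std (assumed)

X = rng.uniform(0.0, 1.0, size=(m, n))   # initial target image X(0)
observations = []
for k in range(N):
    X = A @ X + rng.normal(0.0, q, size=(m, n))      # model (1)
    Y = H @ X + rng.normal(0.0, r, size=(m, n))      # model (2)
    observations.append(Y)

print(len(observations), observations[0].shape)
```

Each Y in the list plays the role of one observation image Y(k) in the sequence to be fused.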
Step 2, describe the column-vector form of the Kalman equation system. Because the standard Kalman filter is formulated for vector-valued states rather than matrix-valued states, the system equations in matrix form are rewritten into a form that takes the column vector as the basic unit.
X(k) = [x_1(k) x_2(k) … x_j(k) … x_n(k)]  (3)
W(k) = [w_1(k) w_2(k) … w_j(k) … w_n(k)]  (4)
Y(k) = [y_1(k) y_2(k) … y_j(k) … y_n(k)]  (5)
V(k) = [v_1(k) v_2(k) … v_j(k) … v_n(k)]  (6)
In the above formulas, x_j(k) is the vector of pixel gray values in the jth column of the target image matrix at time k, w_j(k) is the jth column of the Gaussian white noise matrix W(k) at time k, y_j(k) is the jth column of the observed image matrix at time k, and v_j(k) is the jth column of the Gaussian white noise matrix V(k) at time k.
The system model and the observation model are rewritten into a form taking a column vector as a basic unit
x_j(k+1) = A(k)x_j(k) + w_j(k), k=0,1,2,…,N; j=1,2,…,n  (7)
y_j(k) = H(k)x_j(k) + v_j(k), k=0,1,2,…,N; j=1,2,…,n  (8)
w_j(k) ~ N[0, Q(k)], k=0,1,2,…,N; j=1,2,…,n  (9)
v_j(k) ~ N[0, R(k)], k=0,1,2,…,N; j=1,2,…,n  (10)
And 3, fusing target sequence observation images, wherein a specific algorithm is as follows:
Assume that the initial state estimate x̂_j(0|0) and the initial state-estimation covariance matrix P_j(0) are given. The estimate of the jth column of the target image based on the first k observations is
x̂_j(k|k) = E{x_j(k) | y_j(1), y_j(2), …, y_j(k)}  (11)
with the state-estimation covariance matrix at time k
P_j(k|k) = E{[x_j(k) − x̂_j(k|k)][x_j(k) − x̂_j(k|k)]^T | y_j(1), y_j(2), …, y_j(k)}  (12)
Step 3.1 State estimation one-step prediction equation
x̂_j(k+1|k) = A(k)x̂_j(k|k)  (13)
Step 3.2 State estimation error covariance matrix one-step prediction equation
P_j(k+1|k) = A(k)P_j(k|k)A(k)^T + Q(k)  (14)
Step 3.3 Measurement one-step prediction equation
ŷ_j(k+1|k) = H(k+1)x̂_j(k+1|k)  (15)
Step 3.4 Computation of the optimal gain matrix U_j(k+1)
U_j(k+1) = P_j(k+1|k)H(k+1)^T[H(k+1)P_j(k+1|k)H(k+1)^T + R(k+1)]^(-1)  (16)
Step 3.5 Real-time update equations for the state estimate and the estimation error
x̂_j(k+1|k+1) = x̂_j(k+1|k) + U_j(k+1)[y_j(k+1) − ŷ_j(k+1|k)]  (17)
P_j(k+1|k+1) = [I − U_j(k+1)H(k+1)]P_j(k+1|k)  (18)
In the above formulas, I is the identity matrix.
Step 3.6 The Kalman-filtering fusion result of the sequence observation images Y(1), Y(2), …, Y(N), that is, the estimate of the target image X, is
X̂(N|N) = [x̂_1(N|N) x̂_2(N|N) … x̂_j(N|N) … x̂_n(N|N)]  (19)
where
x̂_j(N|N) = E{x_j(N) | y_j(1), y_j(2), …, y_j(N)}  (20)
In the above process, j = 1, 2, …, n.
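The per-column recursion of steps 3.1 to 3.5 can be sketched as follows; this is a minimal illustration assuming A, H, Q and R are known, and the function and variable names are ours, not the patent's.

```python
import numpy as np

def kalman_column_step(x_est, P, y_next, A, H, Q, R):
    """One pass of steps 3.1-3.5 for a single image column x_j."""
    x_pred = A @ x_est                        # step 3.1: state one-step prediction
    P_pred = A @ P @ A.T + Q                  # step 3.2: covariance one-step prediction
    y_pred = H @ x_pred                       # step 3.3: measurement one-step prediction
    # step 3.4: optimal gain matrix U_j(k+1)
    U = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + U @ (y_next - y_pred)    # step 3.5: state update
    P_new = (np.eye(len(x_est)) - U @ H) @ P_pred  # step 3.5: covariance update
    return x_new, P_new
```

Iterating this step over y_j(1), …, y_j(N) for every column j, and stacking the final column estimates, yields the fused image of step 3.6.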
The invention has the following beneficial effects. On the one hand, the information of multiple damaged images is fused under the Kalman filtering principle to reconstruct the original image, which overcomes the limited effect obtainable by restoring a single damaged image. On the other hand, the system model and observation model established under the Kalman filtering framework simulate the continuous deformation of the target sequence tracking images. The invention can therefore fuse damaged image sets that ordinary image fusion techniques cannot process and reconstruct the original image, solving the problem that image fusion techniques place stringent requirements on their input images.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The application of the principles of the present invention will be further described with reference to fig. 1.
Step 1, establishing a system model and an observation model which accord with a Kalman filtering framework.
A system model of an object is represented as follows
X(k+1)=A(k)X(k)+W(k),k=0,1,2,…,N (1)
In the above formula, X(k) is the target image matrix at time k, X(0) is assumed to be the initial state of the target image, A(k) is the state transition matrix of the target image, and W(k) is a zero-mean Gaussian white noise matrix. X(k+1) can be understood as the target image at time k+1, obtained by linearly transforming the target image X(k) at time k through the state transition matrix A(k) and then adding the Gaussian white noise W(k); here A(k) is taken to be an identity matrix.
Giving an observation model
Y(k+1)=H(k+1)X(k+1)+V(k+1),k=0,1,2,…,N (2)
In the above formula, Y(k+1) is the observed image matrix at time k+1, H(k+1) is the observation matrix, and V(k+1) is a zero-mean Gaussian white noise matrix. Y(k+1) can be understood as the observed image matrix obtained by observing the target image matrix X(k+1) at time k+1 and adding the Gaussian white noise V(k+1). Changes in the observation matrix H(k+1) can deform the image, and the invention can denoise and reconstruct images that have been randomly deformed.
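To illustrate how a non-identity observation matrix deforms an image, consider the following example of our own (the row-shift choice of H is an assumption, a simple stand-in for the random deformation mentioned above):

```python
import numpy as np

# A non-identity observation matrix H deforms the image:
# here H cyclically shifts the rows of the image down by one.
n = 4
H = np.roll(np.eye(n), 1, axis=0)         # row-shift observation matrix
X = np.arange(n * n, dtype=float).reshape(n, n)
Y = H @ X                                  # deformed (noise-free) observation
print(Y[0])   # the last row of X has moved to the top
```

Under model (2), adding V(k+1) to Y would give a deformed and noisy observation of X.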
Step 2, splitting the matrix into a column vector form
And rewriting the target image matrix, the observation image matrix and the Gaussian white noise matrix into a form taking a column vector as a basic unit.
X(k) = [x_1(k) x_2(k) … x_j(k) … x_n(k)]  (3)
W(k) = [w_1(k) w_2(k) … w_j(k) … w_n(k)]  (4)
Y(k) = [y_1(k) y_2(k) … y_j(k) … y_n(k)]  (5)
V(k) = [v_1(k) v_2(k) … v_j(k) … v_n(k)]  (6)
In the above formulas, x_j(k) is the vector of pixel gray values in the jth column of the target image matrix at time k, w_j(k) is the jth column of the Gaussian white noise matrix W(k) at time k, y_j(k) is the jth column of the observed image matrix at time k, and v_j(k) is the jth column of the Gaussian white noise matrix V(k) at time k.
Then, the system model and the observation model are rewritten into a form taking the column vector as a basic unit
x_j(k+1) = A(k)x_j(k) + w_j(k), k=0,1,2,…,N; j=1,2,…,n  (7)
y_j(k) = H(k)x_j(k) + v_j(k), k=0,1,2,…,N; j=1,2,…,n  (8)
w_j(k) ~ N[0, Q(k)], k=0,1,2,…,N; j=1,2,…,n  (9)
v_j(k) ~ N[0, R(k)], k=0,1,2,…,N; j=1,2,…,n  (10)
In the above formulas, Q(k) is the covariance matrix of the Gaussian white noise matrix W(k), and R(k) is the covariance matrix of the Gaussian white noise matrix V(k). The target image matrix, the observed image matrix and the Gaussian white noise matrices in the system model and the observation model are thus split into column-vector form; by Kalman filtering across the same column of the observed image matrices Y(1), Y(2), …, Y(N), the Gaussian white noise in the observed image matrix can be effectively removed and the target image matrix reconstructed.
And 3, fusing target sequence observation images, which specifically comprises the following steps:
Given the initial state estimate x̂_j(0|0) and the initial state-estimation covariance matrix P_j(0). Both are initial values chosen according to the actual situation: x̂_j(0|0) may be taken as the jth column of the observed image matrix at the initial time, and P_j(0) may be taken as an identity matrix.
The optimal estimate of the jth column of the target image matrix based on the jth column of the observed image matrix at the first k moments is
x̂_j(k|k) = E{x_j(k) | y_j(1), y_j(2), …, y_j(k)}  (11)
In the above formula, x̂_j(k|k) is the optimal estimate of the jth column of the target image matrix at time k, namely the mathematical expectation of the jth column of the target image matrix given the jth column of the observed image matrix at the first k moments. The overall process of the invention is to compute from y_j(1), y_j(2), …, y_j(k) the estimate x̂_j(k|k), i.e. the best estimate of the jth column of the target image matrix at time k.
The optimal estimation covariance matrix of the jth column of the target image matrix at time k is
P_j(k|k) = E{[x_j(k) − x̂_j(k|k)][x_j(k) − x̂_j(k|k)]^T | y_j(1), y_j(2), …, y_j(k)}  (12)
In the above formula, P_j(k|k) is the optimal estimation covariance matrix of the jth column of the target image matrix at time k, the mathematical expectation of the covariance of the estimation error of that column. It is an important parameter for fusing the predicted values in the next step.
Step 3.1 One-step prediction for the jth column of the target image matrix
x̂_j(k+1|k) = A(k)x̂_j(k|k)  (13)
In the above formula, x̂_j(k+1|k) is the one-step prediction of the jth column of the target image matrix at time k+1, predicted from the optimal estimate of that column at time k. The first step of the invention is to predict x̂_j(1|0) in one step from the given initial estimate x̂_j(0|0).
Step 3.2 One-step prediction of the covariance matrix of the jth column of the target image matrix
P_j(k+1|k) = A(k)P_j(k|k)A(k)^T + Q(k)  (14)
In the above formula, P_j(k+1|k) is the one-step prediction of the covariance matrix of the jth column of the target image matrix at time k+1, predicted from the optimal estimation covariance matrix of that column at time k. The second step of the invention is to compute P_j(k+1|k), an important parameter for fusing the predicted values in the next step.
Step 3.3 One-step prediction for the jth column of the observed image matrix
ŷ_j(k+1|k) = H(k+1)x̂_j(k+1|k)  (15)
In the above formula, ŷ_j(k+1|k) is the one-step prediction of the jth column of the observed image matrix at time k+1, predicted from the one-step prediction of the jth column of the target image matrix at time k+1.
Step 3.4 Computation of the optimal gain matrix U_j(k+1)
U_j(k+1) = P_j(k+1|k)H(k+1)^T[H(k+1)P_j(k+1|k)H(k+1)^T + R(k+1)]^(-1)  (16)
In the above formula, U_j(k+1) is the optimal gain matrix for the jth column of the observed image matrix at time k+1. U_j(k+1) is the core parameter of the invention: it fuses the one-step predictions of the jth columns of the target image matrix and the observed image matrix at time k+1 to obtain the optimal estimate of the jth column of the target image matrix.
Step 3.5 Computation of the optimal estimate and the optimal estimation covariance matrix of the jth column of the target image matrix
x̂_j(k+1|k+1) = x̂_j(k+1|k) + U_j(k+1)[y_j(k+1) − ŷ_j(k+1|k)]  (17)
P_j(k+1|k+1) = [I − U_j(k+1)H(k+1)]P_j(k+1|k)  (18)
In the above formulas, x̂_j(k+1|k+1) is the optimal estimate of the jth column of the target image matrix at time k+1, P_j(k+1|k+1) is the optimal estimation covariance matrix of that column at time k+1, and I is the identity matrix. The result of this step is substituted back into step 3.1 as the optimal estimate at the previous time instant, and the loop iterates; after the iteration over the observed image matrices Y(1), Y(2), …, Y(N) finishes, the optimal estimate of the target image X, i.e. the image restored by the method, is obtained.
Step 3.6 From the observed image matrices Y(1), Y(2), …, Y(N), the optimal estimate of the target image matrix X at time N is calculated as
X̂(N|N) = E{X | Y(1), …, Y(N)} = [x̂_1(N|N) x̂_2(N|N) … x̂_j(N|N) … x̂_n(N|N)]  (19)
where E{X | Y(1), …, Y(N)} is the mathematical expectation of the target image matrix X at time N calculated from the observed image matrices Y(1), Y(2), …, Y(N), and x̂_j(N|N) is the optimal estimate of the jth column of the target image matrix at time N:
x̂_j(N|N) = E{x_j(N) | y_j(1), y_j(2), …, y_j(N)}  (20)
In the above formula, x̂_j(N|N) is the mathematical expectation of the jth column of the target image matrix X at time N calculated from the jth columns of the observed image matrices Y(1), Y(2), …, Y(N). After the recursion has run through the observed image matrices Y(1), Y(2), …, Y(N), the denoised reconstructed image is obtained.
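Putting the three parts together, an end-to-end sketch of the column-wise fusion can be written as follows. This is our own illustration under simplifying assumptions stated in the comments (A(k) = H(k) = I, a static target, known R); the sizes, noise levels and seed are ours.

```python
import numpy as np

# End-to-end sketch: fuse N noisy observations of one target image,
# filtering each column independently (A = H = I, static target).
rng = np.random.default_rng(1)

m = n = 8
N = 20
Q = 0.0 * np.eye(m)            # static target (slow-motion limit)
R = 0.25 * np.eye(m)           # measurement noise covariance
A = H = np.eye(m)

X_true = rng.uniform(0.0, 1.0, size=(m, n))
Ys = [X_true + rng.normal(0.0, 0.5, size=(m, n)) for _ in range(N)]

X_hat = Ys[0].copy()           # initial estimate: first observation
P = [np.eye(m) for _ in range(n)]
for Y in Ys[1:]:
    for j in range(n):
        x_pred = A @ X_hat[:, j]                  # step 3.1
        P_pred = A @ P[j] @ A.T + Q               # step 3.2
        U = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # step 3.4
        X_hat[:, j] = x_pred + U @ (Y[:, j] - H @ x_pred)       # step 3.5
        P[j] = (np.eye(m) - U @ H) @ P_pred       # step 3.5

mse_single = np.mean((Ys[0] - X_true) ** 2)
mse_fused = np.mean((X_hat - X_true) ** 2)
print(mse_fused < mse_single)   # fused estimate is closer to the target
```

With these settings the fused estimate behaves roughly like an average of the N observations, so its mean squared error is far below that of any single damaged observation.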

Claims (1)

1. A target sequence tracking image recovery method based on Kalman filtering is characterized by comprising the following steps:
step 1, establishing a system model and an observation model conforming to a Kalman filtering framework
The system model of the target is represented as follows
X(k+1)=A(k)X(k)+W(k),k=0,1,2,...,N (1)
In the above formula, X(k) is the target image matrix at time k, X(0) is set as the initial state of the target image, A(k) is the state transition matrix of the target image, and W(k) is a zero-mean Gaussian white noise matrix;
the observation model is expressed as follows
Y(k+1)=H(k+1)X(k+1)+V(k+1),k=0,1,2,...,N (2)
In the above formula, Y(k+1) is the observed image matrix at time k+1, H(k+1) is the observation matrix, and V(k+1) is a zero-mean Gaussian white noise matrix;
step 2, rewriting the target image matrix, the observation image matrix and the Gaussian white noise matrix into an expression with the column vector as a basic unit
X(k) = [x_1(k) x_2(k) ... x_j(k) ... x_n(k)]  (3)
W(k) = [w_1(k) w_2(k) ... w_j(k) ... w_n(k)]  (4)
Y(k) = [y_1(k) y_2(k) ... y_j(k) ... y_n(k)]  (5)
V(k) = [v_1(k) v_2(k) ... v_j(k) ... v_n(k)]  (6)
In the above formulas, x_j(k) is the vector of pixel gray values in the jth column of the target image matrix at time k, w_j(k) is the jth column of the Gaussian white noise matrix W(k) at time k, y_j(k) is the jth column of the observed image matrix at time k, and v_j(k) is the jth column of the Gaussian white noise matrix V(k) at time k;
then, the system model and the observation model are rewritten into an expression with the column vector as a basic unit
x_j(k+1) = A(k)x_j(k) + w_j(k), k=0,1,2,...,N; j=1,2,...,n  (7)
y_j(k) = H(k)x_j(k) + v_j(k), k=0,1,2,...,N; j=1,2,...,n  (8)
w_j(k) ~ N[0, Q(k)], k=0,1,2,...,N; j=1,2,...,n  (9)
v_j(k) ~ N[0, R(k)], k=0,1,2,...,N; j=1,2,...,n  (10)
In the above formulas, Q(k) is the covariance matrix of the Gaussian white noise matrix W(k), and R(k) is the covariance matrix of the Gaussian white noise matrix V(k);
and 3, fusing target sequence observation images, which specifically comprises the following steps:
given the initial state estimate x̂_j(0|0) and the initial state-estimation covariance matrix P_j(0), the optimal estimate of the jth column of the target image matrix based on the jth column of the observed image matrix at the first k moments is
x̂_j(k|k) = E{x_j(k) | y_j(1), y_j(2), ..., y_j(k)}  (11)
In the above formula, x̂_j(k|k) is the optimal estimate of the jth column of the target image matrix at time k, namely the mathematical expectation of the jth column of the target image matrix given the jth column of the observed image matrix at the first k moments;
the optimal estimation covariance matrix of the jth column of the target image matrix at time k is
P_j(k|k) = E{[x_j(k) − x̂_j(k|k)][x_j(k) − x̂_j(k|k)]^T | y_j(1), y_j(2), ..., y_j(k)}  (12)
In the above formula, P_j(k|k) is the optimal estimation covariance matrix of the jth column of the target image matrix at time k, the mathematical expectation of the covariance of the estimation error of that column;
step 3.1, one-step prediction for the jth column of the target image matrix
x̂_j(k+1|k) = A(k)x̂_j(k|k)  (13)
In the above formula, x̂_j(k+1|k) is the one-step prediction of the jth column of the target image matrix at time k+1, predicted from the optimal estimate of that column at time k;
step 3.2, one-step prediction of the covariance matrix of the jth column of the target image matrix
P_j(k+1|k) = A(k)P_j(k|k)A(k)^T + Q(k)  (14)
In the above formula, P_j(k+1|k) is the one-step prediction of the covariance matrix of the jth column of the target image matrix at time k+1, predicted from the optimal estimation covariance matrix of that column at time k;
step 3.3, one-step prediction for the jth column of the observed image matrix
ŷ_j(k+1|k) = H(k+1)x̂_j(k+1|k)  (15)
In the above formula, ŷ_j(k+1|k) is the one-step prediction of the jth column of the observed image matrix at time k+1, predicted from the one-step prediction of the jth column of the target image matrix at time k+1;
step 3.4, computation of the optimal gain matrix U_j(k+1)
U_j(k+1) = P_j(k+1|k)H(k+1)^T[H(k+1)P_j(k+1|k)H(k+1)^T + R(k+1)]^(-1)  (16)
In the above formula, U_j(k+1) is the optimal gain matrix for the jth column of the observed image matrix at time k+1;
step 3.5, computation of the optimal estimate of the jth column of the target image matrix and the optimal estimation covariance matrix of that column
x̂_j(k+1|k+1) = x̂_j(k+1|k) + U_j(k+1)[y_j(k+1) − ŷ_j(k+1|k)]  (17)
P_j(k+1|k+1) = [I − U_j(k+1)H(k+1)]P_j(k+1|k)  (18)
In the above formulas, x̂_j(k+1|k+1) is the optimal estimate of the jth column of the target image matrix at time k+1, P_j(k+1|k+1) is the optimal estimation covariance matrix of that column at time k+1, and I is the identity matrix;
step 3.6, from the observed image matrices Y(1), Y(2), ..., Y(N), the optimal estimate of the target image matrix X at time N is calculated as
X̂(N|N) = E{X | Y(1), ..., Y(N)} = [x̂_1(N|N) x̂_2(N|N) ... x̂_j(N|N) ... x̂_n(N|N)]  (19)
where X̂(N|N) is the optimal estimate of the target image matrix X at time N, E{X | Y(1), ..., Y(j), ..., Y(N)} is the mathematical expectation of the target image matrix X at time N calculated from the observed image matrices Y(1), Y(2), ..., Y(N), and x̂_j(N|N) is the optimal estimate of the jth column of the target image matrix at time N:
x̂_j(N|N) = E{x_j(N) | y_j(1), y_j(2), ..., y_j(N)}  (20)
In the above formula, x̂_j(N|N) is the mathematical expectation of the jth column of the target image matrix X at time N calculated from the jth columns of the observed image matrices Y(1), Y(2), ..., Y(N).
CN202010454432.5A 2020-05-26 2020-05-26 Target sequence tracking image recovery method based on Kalman filtering Active CN111612729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010454432.5A CN111612729B (en) 2020-05-26 2020-05-26 Target sequence tracking image recovery method based on Kalman filtering


Publications (2)

Publication Number Publication Date
CN111612729A true CN111612729A (en) 2020-09-01
CN111612729B CN111612729B (en) 2023-06-23

Family

ID=72204298

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010454432.5A Active CN111612729B (en) 2020-05-26 2020-05-26 Target sequence tracking image recovery method based on Kalman filtering

Country Status (1)

Country Link
CN (1) CN111612729B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999696A (en) * 2012-11-13 2013-03-27 杭州电子科技大学 Capacity information filtering-based pure direction tracking method of noise-related system
CN104298650A (en) * 2014-09-30 2015-01-21 杭州电子科技大学 Multi-method fusion based Kalman filtering quantization method
US20150363940A1 (en) * 2014-06-08 2015-12-17 The Board Of Trustees Of The Leland Stanford Junior University Robust Anytime Tracking Combining 3D Shape, Color, and Motion with Annealed Dynamic Histograms
CN105913455A (en) * 2016-04-11 2016-08-31 南京理工大学 Local image enhancement-based object tracking method
CN106780542A (en) * 2016-12-29 2017-05-31 北京理工大学 A kind of machine fish tracking of the Camshift based on embedded Kalman filter
CN107169993A (en) * 2017-05-12 2017-09-15 甘肃政法学院 Detection recognition method is carried out to object using public security video monitoring blurred picture
CN109829938A (en) * 2019-01-28 2019-05-31 杭州电子科技大学 A kind of self-adapted tolerance volume kalman filter method applied in target following
CN110443832A (en) * 2019-06-21 2019-11-12 西北工业大学 A kind of evidence filtered target tracking based on observation interval value


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
WANG LILI 等: "Image Reconstruction Algorithm Based on Kalman Filter for Electrical Capacitance Tomography", 《2014 FOURTH INTERNATIONAL CONFERENCE ON INSTRUMENTATION AND MEASUREMENT, COMPUTER, COMMUNICATION AND CONTROL》 *
YAN LI 等: "An Online-Updating Deep CNN Method based on Kalman Filter for Illumination-Drifting Road Damage Classification", 《THE 2018 INTERNATIONAL CONFERENCE ON CONTROL AUTOMATION & INFORMATION SCIENCES (ICCAIS 2018)》 *
宁子健 (NING Zijian) et al.: "Improved multi-model extended Kalman filter for nonlinear ***" (in Chinese), 《控制工程》 (Control Engineering of China) *
葛泉波 (GE Quanbo) et al.: "In-depth analysis of Kalman filtering theory for engineering applications" (in Chinese), 《指挥与控制学报》 (Journal of Command and Control) *

Also Published As

Publication number Publication date
CN111612729B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN109859147B (en) Real image denoising method based on generation of antagonistic network noise modeling
CN112200750B (en) Ultrasonic image denoising model establishing method and ultrasonic image denoising method
CN106952228B (en) Super-resolution reconstruction method of single image based on image non-local self-similarity
CN102208100B (en) Total-variation (TV) regularized image blind restoration method based on Split Bregman iteration
CN110210524B (en) Training method of image enhancement model, image enhancement method and device
CN108346133B (en) Deep learning network training method for super-resolution reconstruction of video satellite
CN112581378B (en) Image blind deblurring method and device based on significance strength and gradient prior
CN112184549B (en) Super-resolution image reconstruction method based on space-time transformation technology
Yang et al. A survey of super-resolution based on deep learning
CN113962905A (en) Single image rain removing method based on multi-stage feature complementary network
CN113096032B (en) Non-uniform blurring removal method based on image region division
CN117078516B (en) Mine image super-resolution reconstruction method based on residual mixed attention
CN113538258A (en) Image deblurring model and method based on mask
CN103903239B (en) A kind of video super-resolution method for reconstructing and its system
CN111612729A (en) Target sequence tracking image recovery method based on Kalman filtering
CN116128768B (en) Unsupervised image low-illumination enhancement method with denoising module
CN113240581A (en) Real world image super-resolution method for unknown fuzzy kernel
CN112149613A (en) Motion estimation evaluation method based on improved LSTM model
CN116563110A (en) Blind image super-resolution reconstruction method based on Bicubic downsampling image space alignment
CN113808039B (en) Migration learning defogging method and system based on Gaussian process mapping
CN112529081B (en) Real-time semantic segmentation method based on efficient attention calibration
Babu et al. Review on CNN based image denoising
CN112819743A (en) General video time domain alignment method based on neural network
CN116823656B (en) Image blind deblurring method and system based on frequency domain local feature attention mechanism
CN113793269B (en) Super-resolution image reconstruction method based on improved neighborhood embedding and priori learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant