CN104050685B - Moving target detecting method based on particle filter visual attention model - Google Patents

Moving target detecting method based on particle filter visual attention model

Info

Publication number
CN104050685B
Authority
CN
China
Prior art keywords
attention
particle
particles
motion
Prior art date
Legal status
Expired - Fee Related
Application number
CN201410255267.5A
Other languages
Chinese (zh)
Other versions
CN104050685A (en)
Inventor
刘龙
樊波阳
刘金星
Current Assignee
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201410255267.5A priority Critical patent/CN104050685B/en
Publication of CN104050685A publication Critical patent/CN104050685A/en
Application granted
Publication of CN104050685B publication Critical patent/CN104050685B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a moving target detection method based on a particle filter visual attention model. A particle filter bidirectional fusion attention model is first constructed according to the Bayesian estimation principle. Then, within this framework, motion attention and target color attention serve as the bottom-up (B-U) and top-down (T-D) attention inputs respectively; particle weight computation changes the particle distribution to form an attention saliency map, from which the moving target position is finally determined. The method fuses temporal and spatial attention, making the motion attention computation more accurate; it fuses bottom-up and top-down attention, simulating the formation of human visual attention simply and effectively; and it improves the validity and accuracy of moving target detection in complex global motion scenes.

Description

Moving target detection method based on particle filter visual attention model
Technical Field
The invention belongs to the technical field of video image detection, and relates to a moving target detection method based on a particle filter visual attention model.
Background
Moving object detection is one of the important problems in the field of machine vision and a precondition for object tracking and recognition, but in complex motion scenes the existing detection methods still have serious limitations. In recent years, visual perception research has gradually absorbed results from human physiology and psychology; the main idea is to use a computer to simulate local functions of human physiology to solve problems in the vision field. Visual attention is typical of this research, and its results have significantly advanced visual problems such as target detection and segmentation.
Traditional moving target detection methods include the inter-frame difference method, the background difference method, and the global motion compensation method. The background difference and inter-frame difference methods are limited to locally moving scenes. The global motion compensation method has a wider range of application, but its accuracy is affected by target size and motion intensity: when the target is large or the motion is strong, the global motion estimation accuracy drops, effective global compensation cannot be performed, and moving target detection suffers large errors.
Human visual attention is generated by the combined action of Bottom-Up (B-U) and Top-Down (T-D) processes. In 2002, Itti and Koch established a B-U visual attention model with brightness, color, and orientation as its main features, and subsequent research has derived many visual attention computation methods and applications. Currently, visual attention models can be roughly classified into bottom-up models, which describe the B-U attention computation, and bidirectional models, which describe the combined action of B-U and T-D attention.
Attention is the initial response of the human visual system to external observations, and some scholars have conducted preliminary studies on target detection from the attention perspective, several of them using a bidirectional attention approach for static target detection. Sang-Woo Ban et al. apply self-organizing neural network learning to the color features of a specific static target, generate a weight matrix, and use it as a T-D influence factor to adjust the B-U attention computation, forming a target attention saliency map. Yuming Fang et al. extract the orientation features of the target as T-D attention, fuse them with B-U attention by proportional weighting, and finally determine the target position. Yuanlong Yu et al. establish a target-feature Long-Term Memory (LTM) unit, compute a position probability distribution bias by comparison with low-level features, and perform bidirectional weighted fusion to determine the target position.
In addition, motion attention models have been established in the literature for moving target detection; the main idea is to define motion attention from motion contrast so that the motion-salient region approaches the target region. Yu-Fei Ma defines a motion attention model from the motion vector energy and the spatial and temporal correlation of the motion vector field decompressed from an MPEG stream, and a motion saliency region can be obtained from this model. Junwei Han divides attention into static attention, attracted mainly by information such as image brightness and color, and dynamic attention, defined by the proportion of changed pixels in a region after global motion compensation; the final attention model fuses the two and is mainly applied to moving target detection.
In summary, studying target detection from the visual attention perspective has positive significance, but most current research addresses static targets, and research on moving target detection is lacking. In addition, existing motion attention models are limited to bottom-up, data-driven methods; there is no bidirectional attention model that fuses multiple features such as color and motion.
Disclosure of Invention
The invention aims to provide a moving target detection method based on a particle filter visual attention model, solving the problems of the prior art: motion attention models are limited to bottom-up, data-driven methods, do not fuse multiple features such as color and motion, cannot adapt to complex motion scenes, and therefore detect moving targets neither effectively nor accurately.
The technical scheme adopted by the invention is a moving target detection method based on a particle filter visual attention model: first, a particle filter bidirectional fusion attention model is constructed according to the Bayesian estimation principle; then, based on this framework, motion attention and target color attention are taken as the B-U and T-D attention inputs respectively, the particle distribution is changed through particle weight computation to form an attention saliency map, and the moving target position is finally determined.
The invention has the beneficial effects that:
1) The video image is decomposed at multiple Gaussian scales, which better matches human visual characteristics, and the motion vector field is preprocessed by superposition and filtering, reducing the influence of estimation errors and noise.
2) Temporal and spatial attention are fused, so that the motion attention calculation is more accurate.
3) A particle filtering mechanism is introduced according to the Bayesian estimation principle; bottom-up and top-down attention are fused into a bidirectional fusion attention model that simulates the human visual attention formation process simply and effectively.
4) Moving target detection is performed with the bidirectional fusion attention model, improving the effectiveness and accuracy of moving target detection in complex global motion scenes.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention;
FIG. 2 is a three-level Gaussian pyramid of a video image according to an embodiment of the present invention, where (a) is the original scale and (b) and (c) are successively coarser scales;
FIG. 3 shows the corresponding motion vector field and its preprocessing results in the present invention, where (a) is the motion vector field, (b) is the motion vector field after superposition, and (c) is the motion vector field after median filtering;
FIG. 4 is a B-U attention saliency map of an implementation of the present invention, where (a) is the attention saliency map and (b) is the attention saliency heat map;
FIG. 5 is a particle filter fusion two-way attention result of the present invention, wherein (a) is the importance sampling result and (b) is the re-sampled particle distribution map;
fig. 6 is a schematic diagram of particle attention saliency map and target localization implemented in accordance with the present invention, wherein (a) a saliency value is generated for the spatial distribution of particles, (b) a saliency map is obtained from resampled particles, and (c) a target localization result.
Fig. 7 shows experimental results of a "vehicle" video sequence in example 1 of the present invention, where the first to fourth rows are respectively the 5 th, 28 th, 40 th and 60 th frames, where (a) is an original frame, (b) is a Yu-Fei Ma motion attention model result, (c) is a global motion compensation-based visual attention (GMC-VA) model result, (d) is a YumingFang two-way weighted fusion visual attention result, (e) is a two-way attention model result of the present invention, and (f) is a moving object detection positioning result of the method of the present invention;
FIG. 8 shows the experimental results for the "horse" video sequence of embodiment 2 of the present invention, where the first to fourth rows are the 4th, 15th, 38th and 100th frames respectively.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention relates to a moving target detection method based on a particle filter visual attention model: first, a particle filter bidirectional fusion attention model is constructed according to the Bayesian estimation principle; then, based on this framework, motion attention and target color attention are used as the B-U and T-D attention inputs respectively, the particle distribution state is changed through particle weight computation to form an attention saliency map (the attention saliency is computed from the filtered particle distribution state), and the moving target position is finally determined.
The method comprises the following specific implementation steps:
step 1, calculating the attention of the movement at the current t moment as B-U attention, and recording the significance asBy passingControlling the sampling of the initial importance of the particles;
carrying out Gaussian multi-scale decomposition on the video image; estimating motion vector fields of the motion vectors respectively, and preprocessing the motion vector fields; defining time and space attention factors, constructing motion attention, and obtaining by fusing multi-scale motion attentionBy passingControlling the sampling of particle importance. The method specifically comprises the following steps:
1.1) Gaussian multiscale decomposition of images
Multi-scale analysis adopts the Gaussian image pyramid method: three scale layers are obtained through Gaussian smoothing and down-sampling, as shown in FIG. 2; the embodiment uses a three-layer Gaussian pyramid.
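As an illustrative sketch (not part of the patent text), the three-layer pyramid can be built with OpenCV, whose pyrDown performs the Gaussian smoothing and down-sampling in one call:

```python
import cv2

def gaussian_pyramid(frame, levels=3):
    """Build a Gaussian pyramid: level 0 is the original scale, and each
    further level is Gaussian-smoothed and down-sampled by a factor of two."""
    pyramid = [frame]
    for _ in range(levels - 1):
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    return pyramid
```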
1.2) Estimating the motion vector field by the optical flow method and preprocessing it by superposition and filtering
As shown in FIG. 3(a), the motion vector field estimated by the optical flow method is generally sparse, local, and chaotic, because the motion features of adjacent frames are not strong enough and the video signal contains a certain amount of noise; this is unfavorable for accurate motion attention computation. Preprocessing the motion vector field by superposition and filtering reduces the influence of estimation errors and noise.
The motion vector superposition process is as follows: let the current-frame motion vector field be MVF_t, let the macroblock center coordinate be (k, l), and denote the corresponding motion vector V_t(k, l), which is superimposed with the corresponding motion vectors of the preceding and following frames; the superposition result is shown in FIG. 3(b).
After superposition the motion vectors are median-filtered: for each non-zero motion vector, its value is replaced by the median of the neighboring motion vectors. The median filtering result is shown in FIG. 3(c).
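A minimal sketch of this preprocessing stage follows. Since the patent's exact superposition formula is not reproduced in this text, a simple average over the vector fields at times t−1, t, and t+1 is assumed, and for brevity the median filter is applied to whole vector components rather than only to non-zero vectors:

```python
import cv2
import numpy as np

def motion_vector_field(prev_gray, curr_gray):
    """Dense optical flow as a stand-in for the per-macroblock motion
    vector field (the Farneback parameters are illustrative)."""
    return cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def preprocess(mvf_prev, mvf_curr, mvf_next, ksize=5):
    """Superposition (assumed: averaging over t-1, t, t+1) followed by
    median filtering of each motion vector component."""
    stacked = (mvf_prev + mvf_curr + mvf_next) / 3.0
    u = cv2.medianBlur(stacked[..., 0].astype(np.float32), ksize)
    v = cv2.medianBlur(stacked[..., 1].astype(np.float32), ksize)
    return np.dstack([u, v])
```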
1.3) Computing the motion attention as the B-U attention
To compute the motion attention, temporal and spatial attention factors A_T(i, j) and A_S(i, j) are defined, representing the temporal and spatial attention respectively. A_T(i, j) is defined from the difference between V_t(i, j) and V_{t−1}(i, j), the motion vectors at coordinate position (i, j) in the motion vector fields at times t and t−1; A_S(i, j) is defined from the contrast between V_t(i, j) and the mean of the motion vectors in its neighborhood Λ.
The motion attention is obtained by linear fusion of the temporal and spatial attention, i.e.

MA_t(i, j) = α · A_T(i, j) + β · A_S(i, j),

where α and β are positive coefficients. The resulting B-U attention is shown in FIG. 4.
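The sketch below computes a motion attention map under stated assumptions: A_T is taken as the magnitude of each motion vector's change against the previous frame and A_S as its contrast against the neighborhood mean, consistent with the description above but not necessarily identical to the patent's formulas:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def motion_attention(mvf_t, mvf_prev, alpha=0.5, beta=0.5, nbhd=7):
    """Linear fusion MA = alpha*A_T + beta*A_S of temporal and spatial
    motion attention factors (alpha, beta, nbhd are illustrative)."""
    a_t = np.linalg.norm(mvf_t - mvf_prev, axis=-1)    # temporal factor
    mean_u = uniform_filter(mvf_t[..., 0], size=nbhd)  # neighborhood mean
    mean_v = uniform_filter(mvf_t[..., 1], size=nbhd)
    a_s = np.hypot(mvf_t[..., 0] - mean_u, mvf_t[..., 1] - mean_v)
    sm = alpha * a_t + beta * a_s
    return sm / (sm.max() + 1e-12)                     # normalized SM_t^{B-U}
```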
1.4) Controlling particle importance sampling through SM_t^{B-U}
The motion saliency features are used to adjust the density of the Gaussian random particle sampling, so that the random sampling result varies with the motion saliency. Gaussian random sampling yields the initial particle distribution: let z_1^{(i)}, z_2^{(i)}, i = 1, 2, …, N, be independent, identically distributed pseudo-random numbers, and let

x̂^{(i)} = (Σ_{i=1}^{N} z_1^{(i)} − N·μ_x) / √(N·δ_x²), ŷ^{(i)} = (Σ_{i=1}^{N} z_2^{(i)} − N·μ_y) / √(N·δ_y²),

where μ_x, μ_y and δ_x², δ_y² are the means and variances of the pseudo-random sequences. These formulas generate Gaussian random samples in a region centered at the coordinate (μ_x, μ_y), and the density of the sampled particles in this region is regulated by the motion saliency. Let SM_t^{B-U}(x, y) be the saliency value of the saliency map at time t at coordinate (x, y); the sampling density function is then defined as

Γ_{(x,y)∈Λ}(SM_t^{B-U}(x, y)) = (1 / (√(2π)·δ_Λ)) · exp(−(1/2)·((SM_t^{B-U}(x, y) − μ_Λ) / δ_Λ)²),

where x and y are the horizontal and vertical coordinates in the saliency map and μ_Λ and δ_Λ² are the corresponding mean and variance. The importance sampling result is shown in FIG. 5(a).
At the initial time, particles are sampled by this method to form the initial particle distribution state; otherwise, a fraction of the particles is sampled at the locations where the current motion attention differs from the previous one, replacing an equal number of low-weight particles from the previous time, to serve as the initial particle state at the current time.
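A sketch of saliency-controlled sampling: instead of the CLT-based Gaussian generator above, a saliency-weighted multinomial draw with Gaussian jitter is assumed, which preserves the essential property that particle density follows motion saliency:

```python
import numpy as np

def sample_particles(saliency, n_particles=500, rng=None):
    """Draw initial particle positions (x, y) with density proportional
    to the motion saliency map."""
    rng = rng or np.random.default_rng()
    h, w = saliency.shape
    p = saliency.ravel() / saliency.sum()
    idx = rng.choice(h * w, size=n_particles, p=p)
    ys, xs = np.unravel_index(idx, (h, w))
    jitter = rng.normal(0.0, 1.5, size=(2, n_particles))  # local Gaussian spread
    return np.stack([xs + jitter[0], ys + jitter[1]], axis=1)
```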
Step 2, calculating T-D color attention according to target characteristics
The T-D color attention saliency is denoted SM^{T-D}; the magnitude of the T-D attention is measured by the similarity between the target features and the image features,
2.1) The color histogram is taken as the quantized representation of the target features, denoted q̂ = {q̂^{(u)}}, u = 1, …, m, where m is the number of components. The color distribution of a particle's target region is defined as

q̂^{(u)}(X_i) = C · Σ_{i=1}^{m} K(‖(x − x_i)/h‖) · δ(b(x_i) − u),

where δ(·) is the Delta function, C is the normalization factor ensuring Σ_{u=1}^{m} q̂^{(u)} = 1, and K(·) is the Epanechnikov kernel, K(x) = (3/4)(1 − x²) for |x| ≤ 1 and 0 otherwise.
2.2) Computing the T-D attention saliency: SM^{T-D} is obtained from the similarity between the candidate and the target color distributions, measured by the Bhattacharyya coefficient ρ(p̂, q̂) = Σ_{u=1}^{m} √(p̂^{(u)} q̂^{(u)}).
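A sketch of the T-D attention computation, using a plain quantized RGB histogram in place of the Epanechnikov-weighted one (the kernel weighting is omitted for brevity):

```python
import numpy as np

def color_hist(patch, bins=16):
    """Quantized color histogram of an image patch (m = bins**3 components)."""
    q = patch.reshape(-1, 3) // (256 // bins)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def td_attention(target_hist, candidate_hist):
    """T-D attention saliency as the Bhattacharyya coefficient between
    the target and candidate color distributions."""
    return np.sum(np.sqrt(target_hist * candidate_hist))
```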
Step 3, fusing bidirectional attention by adopting particle filtering, calculating the weight of the particles, and forming new particle distribution after resampling
Particle filtering is regarded as a Bayesian estimation process: given the observations, the conditional expectation of the current state is estimated. The process comprises three stages: importance sampling, weight computation, and resampling.
3.1) importance sampling
The observation likelihood probabilities of the B-U and T-D attention are defined separately, and the particle weights are updated by Bayesian fusion of these two likelihoods;
the importance sampling method adopts the motion significance characteristics to adjust the density of Gaussian random particle sampling as described in the step 1.4);
3.2) weight calculation
Resampling is used to eliminate low-weight particles so that the particles gather around high-weight particles. Let the particle states be x̂_t^{(i)} with observation z_t; the posterior probability density function at time t is then approximated as

p(x_t | z_{1:t}) ≈ Σ_{i=1}^{N} λ_t^{(i)} · δ(x_t − x̂_t^{(i)}),

where λ_t^{(i)} is the weight of particle i. Suppose the state of the bidirectional fusion attention saliency map is x_{0:t} and the B-U and T-D attention observations are z_{1:t}^{B-U} and z_{1:t}^{T-D} respectively. Two assumptions are made: (i) the temporal dynamic process is Markovian; (ii) the observations at different times are mutually independent and depend only on the current state. The posterior probability of the bidirectional attention then reduces to

p(x_{0:t} | z_{1:t}^{B-U}, z_{1:t}^{T-D}) = p(z_t^{B-U} | x_{0:t}, z_{1:t−1}^{B-U}, z_{1:t}^{T-D}) · p(x_{0:t} | z_{1:t−1}^{B-U}, z_{1:t}^{T-D}) / p(z_t^{B-U} | z_{1:t−1}^{B-U}, z_{1:t}^{T-D})
= (1/k_t) · p(z_t^{B-U} | x_t) · p(z_t^{T-D} | x_t, z_t^{B-U}) · p(x_t | x_{0:t−1}) · p(x_{0:t−1} | z_{1:t−1}^{B-U}, z_{1:t−1}^{T-D}).

According to the importance sampling theorem, the weight λ_t^{(i)} of a particle is proportional to the ratio of the target density to the proposal density:

λ_t^{(i)} ∝ λ_{t−1}^{(i)} · p(z_t^{B-U} | x̂_t^{(i)}) · p(z_t^{T-D} | x̂_t^{(i)}, z_t^{B-U}) · p(x̂_t^{(i)} | x̂_{t−1}^{(i)}) / q(x̂_t^{(i)} | x̂_{t−1}^{(i)}, z_t) ≈ λ_{t−1}^{(i)} · p(z_t^{B-U} | x̂_t^{(i)}) · p(z_t^{T-D} | x̂_t^{(i)}, z_t^{B-U}),

where p(z_t^{B-U} | x̂_t^{(i)}) is the conditional probability of the B-U attention observation given the current particle attention state, and p(z_t^{T-D} | x̂_t^{(i)}, z_t^{B-U}) is the conditional probability of the T-D attention observation given the B-U observation and the current particle attention state. These two probabilities directly determine the updated particle weights and are defined as

p(z_t^{B-U} | x̂_t^{(i)}) = (1 / (√(2π)·δ_{B-U})) · exp(−(SM_t^{B-U})² / (2·δ_{B-U}²)),
p(z_t^{T-D} | x̂_t^{(i)}, z_t^{B-U}) = (1 / (√(2π)·δ_{T-D})) · exp(−(SM^{T-D})² · e^{(SM_t^{B-U})²} / (2·δ_{T-D}²)).
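A sketch of the per-particle weight update, applying the two Gaussian likelihoods defined above (the δ values are illustrative):

```python
import numpy as np

def update_weights(weights, sm_bu, sm_td, delta_bu=0.3, delta_td=0.3):
    """Bayesian fusion of the B-U and T-D observations:
    lambda_t ~ lambda_{t-1} * p(z^BU | x) * p(z^TD | x, z^BU).
    sm_bu: B-U saliency sampled at each particle position;
    sm_td: T-D (color) saliency of each particle's region."""
    p_bu = (np.exp(-sm_bu ** 2 / (2 * delta_bu ** 2))
            / (np.sqrt(2 * np.pi) * delta_bu))
    p_td = (np.exp(-(sm_td ** 2) * np.exp(sm_bu ** 2) / (2 * delta_td ** 2))
            / (np.sqrt(2 * np.pi) * delta_td))
    w = weights * p_bu * p_td
    return w / w.sum()
```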
3.3) resampling
After the particle weights are recalculated, a resampling step eliminates the low-weight particles so that the particles gather around the high-weight particles.
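A sketch of the resampling step; systematic resampling is used here, although the patent does not name a specific scheme:

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Duplicate high-weight particles and drop low-weight ones, then
    reset all weights to 1/N."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)
```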
Step 4, computing the attention saliency map SM_t from the particle distribution state at the current time and determining the target position
4.1) Particle saliency map SM_t
After resampling, the density of the particle distribution reflects the attention intensity: regions of dense particle distribution have strong attention saliency and sparse regions weak saliency.
From the particle distribution state, the attention saliency over the two-dimensional space is defined by a window estimate

SM_t(x, y) = (1/n) · Σ_{i=1}^{n} W_h(x − x^{(i)}, y − y^{(i)}),

where (x, y) is a spatial position of the particle distribution, n is the number of particles, h is the window width, and the window function W_h is a two-dimensional Gaussian window. A schematic of the attention saliency computation result is shown in FIG. 6(a); the computed attention saliency values are superimposed with the motion attention and normalized to give the current attention saliency values, and the current attention saliency map is shown in FIG. 6(b).
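A sketch of the Parzen-window estimate over the particle set (the window width h is illustrative, and the brute-force loop keeps the sketch short):

```python
import numpy as np

def particle_saliency(particles, shape, h=8.0):
    """Attention saliency map as a kernel density estimate of the particle
    distribution with a two-dimensional Gaussian window of width h."""
    hgt, wid = shape
    ys, xs = np.mgrid[0:hgt, 0:wid]
    sm = np.zeros(shape)
    for px, py in particles:
        sm += np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2 * h * h))
    sm /= len(particles) * 2 * np.pi * h * h
    return sm / (sm.max() + 1e-12)   # normalized saliency map
```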
4.2) Target localization
After the above processing, the distribution state of all particles in each frame of image is obtained. The denser a particle swarm, the more likely it marks the moving target, so the target position information is estimated by computing the mean of the particle swarm positions:

(x̄_t, ȳ_t) = (1/n) · Σ_{i=1}^{n} (x^{(i)}, y^{(i)}).
The final target localization result is shown in FIG. 6(c).
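Finally, a sketch of the localization step; for a single target the particle mean suffices, as described above (multiple targets would require clustering the particle swarms first):

```python
import numpy as np

def locate_target(particles):
    """Estimate the target position as the mean of the particle positions."""
    return particles.mean(axis=0)   # (x_mean, y_mean)
```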
FIGS. 7 and 8 show the test results for the "aircraft" and "horse" sequences respectively. In the "aircraft" sequence, the target to be detected is a pilot in a red coat gliding at high speed through the air; the background is a distant ground view with complex texture, the aircraft moves toward the top of the image, and the camera follows it. In the "horse" sequence, the target is a horse moving vigorously; the background is trees and grassland with flat texture changes, the horse runs toward the left of the image, and the camera pans quickly to the left. The results show that the overall effect of the method is superior to the MA, GMC-VA, and Yuming Fang algorithms. The MA algorithm is affected by the motion estimation precision, making its motion attention computation inaccurate: when the motion is relatively simple it performs well, as in the "aircraft" sequence, but when the motion is strong its motion attention becomes cluttered and cannot effectively reflect the saliency of the moving target, as in the "horse" sequence. The GMC-VA algorithm integrates global motion compensation and static attention, but errors in global motion estimation generally weaken the target saliency. By contrast, the attention map of the method of the invention is clean: compared with the MA, GMC-VA, and Yuming Fang attention maps, background noise interference is greatly weakened; because of the particle filtering process, the particle distribution follows the motion state and target position, and the particles finally converge on the moving target region, so the attention accuracy is greatly improved.

Claims (3)

1. A moving target detection method based on a particle filter visual attention model, characterized in that: first, a particle filter bidirectional fusion attention model is constructed according to the Bayesian estimation principle; then, based on this framework, motion attention and target color attention are taken as the B-U and T-D attention inputs respectively, the particle distribution state is changed through particle weight computation to form an attention saliency map, and the moving target position is finally determined,
the method specifically comprises the following steps:
step 1, calculating the attention of the movement at the current time t as B-U attention, and recording the attention significance at the time B-U asBy passingControlling the initial importance sampling of the particles, and specifically performing the following steps:
1.1) Gaussian multiscale decomposition of images
The multi-scale analysis adopts a Gaussian image pyramid method;
1.2) Estimating the motion vector field by the optical flow method and preprocessing it by superposition and filtering
The motion vector superposition process is as follows: let the current-frame motion vector field be MVF_t, let the macroblock center coordinate be (k, l), and denote the corresponding motion vector V_t(k, l), which is superimposed with the corresponding motion vectors of the preceding and following frames,
the motion vectors are processed by median filtering after superposition, namely, for each non-zero motion vector, the median of the adjacent motion vectors is used for replacing the value of the motion vector;
1.3) Computing the motion attention as the B-U attention saliency at time t
To compute the motion attention, temporal and spatial attention factors A_T(i, j) and A_S(i, j) are defined, representing the temporal and spatial attention respectively: A_T(i, j) is defined from the difference between V_t(i, j) and V_{t−1}(i, j), the motion vectors at coordinate (i, j) in the motion vector fields at times t and t−1, and A_S(i, j) from the contrast between V_t(i, j) and the mean of the motion vectors in the neighborhood Λ,
the motion attention being obtained by linear fusion of temporal and spatial attention, i.e. MA_t(i, j) = α·A_T(i, j) + β·A_S(i, j), where α and β are positive coefficients;
1.4) Controlling particle importance sampling through SM_t^{B-U}
The motion saliency features are used to adjust the density of the Gaussian random particle sampling, yielding a random sampling result that varies with the motion saliency; Gaussian random sampling gives the initial particle distribution state: let z_1^{(i)}, z_2^{(i)}, i = 1, 2, …, N, be independent, identically distributed pseudo-random numbers, such that

x̂^{(i)} = (Σ_{i=1}^{N} z_1^{(i)} − N·μ_x) / √(N·δ_x²), ŷ^{(i)} = (Σ_{i=1}^{N} z_2^{(i)} − N·μ_y) / √(N·δ_y²),

where μ_x, μ_y and δ_x², δ_y² are the means and variances of the pseudo-random sequences; Gaussian random samples are generated in a region centered at the coordinate (μ_x, μ_y), and the density of the sampled particles in the region is regulated by the motion saliency; let SM_t^{B-U}(x, y) be the saliency value of the B-U attention saliency map at time t at coordinate (x, y), and define the sampling density function as

Γ_{(x,y)∈Λ}(SM_t^{B-U}(x, y)) = (1 / (√(2π)·δ_Λ)) · exp(−(1/2)·((SM_t^{B-U}(x, y) − μ_Λ) / δ_Λ)²),

where x and y denote the horizontal and vertical coordinates in the saliency map and μ_Λ and δ_Λ² are the corresponding mean and variance;
if it is the initial time, particles are sampled by this importance-sampling control method to form the initial particle distribution state; otherwise, a fraction of the particles is sampled at the locations where the current motion attention differs from the previous one, replacing an equal number of low-weight particles from the previous time, to give the particle distribution state at the current time;
step 2, calculating the T-D attention according to the target characteristics, and the specific steps are as follows:
The T-D attention saliency is denoted SM^{T-D}; the T-D attention is measured by the similarity between the target features and the image features,
2.1) the color histogram is taken as the quantized representation of the target features, denoted q̂ = {q̂^{(u)}}, u = 1, …, m, where m is the number of components; the color distribution of a particle's target region is defined as

q̂^{(u)}(X_i) = C · Σ_{i=1}^{m} K(‖(x − x_i)/h‖) · δ(b(x_i) − u),

where δ(·) is the Delta function, C is the normalization factor ensuring Σ_{u=1}^{m} q̂^{(u)} = 1, and K(·) is the Epanechnikov kernel;
2.2) computing the T-D attention saliency from the similarity ρ between the candidate and the target color distributions, where ρ is the Bhattacharyya coefficient;
step 3, adopting particle filtering to fuse bidirectional attention, calculating a particle weight, and forming new particle distribution after resampling;
step 4, computing the attention saliency map SM_t from the particle distribution state at the current time and determining the target position.
2. The method according to claim 1, wherein the step 3 specifically comprises the following steps:
3.1) importance sampling
The observation likelihood probabilities of the B-U attention and the T-D attention are defined separately, and the particle weights are updated by Bayesian fusion of the two observation likelihood probabilities,
the importance sampling method adopts the motion significance characteristics to adjust the density of Gaussian random particle sampling as described in the step 1.4);
3.2) weight calculation
Resampling is used to eliminate low-weight particles so that the particles gather around high-weight particles; let the particle states be x̂_t^{(i)} with observation z_t, so that the posterior probability density function at time t is approximated as

p(x_t | z_{1:t}) ≈ Σ_{i=1}^{N} λ_t^{(i)} · δ(x_t − x̂_t^{(i)}),

where λ_t^{(i)} is the particle weight; suppose the state of the bidirectional fusion attention saliency map is x_{0:t} and the observations of the B-U attention and the T-D attention are z_{1:t}^{B-U} and z_{1:t}^{T-D} respectively; under the assumptions that (i) the temporal dynamic process is Markovian and (ii) the observations at different times are mutually independent and depend only on the current state, the posterior probability density of the bidirectional attention reduces to

p(x_{0:t} | z_{1:t}^{B-U}, z_{1:t}^{T-D}) = p(z_t^{B-U} | x_{0:t}, z_{1:t−1}^{B-U}, z_{1:t}^{T-D}) · p(x_{0:t} | z_{1:t−1}^{B-U}, z_{1:t}^{T-D}) / p(z_t^{B-U} | z_{1:t−1}^{B-U}, z_{1:t}^{T-D}) = (1/k_t) · p(z_t^{B-U} | x_t) · p(z_t^{T-D} | x_t, z_t^{B-U}) · p(x_t | x_{0:t−1}) · p(x_{0:t−1} | z_{1:t−1}^{B-U}, z_{1:t−1}^{T-D});

according to the importance sampling theorem, the particle weight λ_t^{(i)} is proportional to the ratio of the target density to the proposal density, so that

λ_t^{(i)} ∝ λ_{t−1}^{(i)} · p(z_t^{B-U} | x̂_t^{(i)}) · p(z_t^{T-D} | x̂_t^{(i)}, z_t^{B-U}) · p(x̂_t^{(i)} | x̂_{t−1}^{(i)}) / q(x̂_t^{(i)} | x̂_{t−1}^{(i)}, z_t) ≈ λ_{t−1}^{(i)} · p(z_t^{B-U} | x̂_t^{(i)}) · p(z_t^{T-D} | x̂_t^{(i)}, z_t^{B-U}),

where p(z_t^{B-U} | x̂_t^{(i)}) is the conditional probability of the B-U attention observation given the current particle attention state, and p(z_t^{T-D} | x̂_t^{(i)}, z_t^{B-U}) is the conditional probability of the T-D attention observation given the B-U attention observation and the current particle attention state; these two probabilities directly determine the updated particle weights and are defined as

p(z_t^{B-U} | x̂_t^{(i)}) = (1 / (√(2π)·δ_{B-U})) · exp(−(SM_t^{B-U})² / (2·δ_{B-U}²)),
p(z_t^{T-D} | x̂_t^{(i)}, z_t^{B-U}) = (1 / (√(2π)·δ_{T-D})) · exp(−(SM^{T-D})² · e^{(SM_t^{B-U})²} / (2·δ_{T-D}²));
3.3) resampling
after the particle weights are recalculated, a resampling step eliminates the low-weight particles so that the particles gather around the high-weight particles.
3. The method according to claim 2, wherein the step 4 specifically comprises the following steps:
4.1) Attention saliency map SM_t
From the particle distribution state, the attention saliency over the two-dimensional space is defined as

SM_t(x, y) = (1/N) · Σ_{i=1}^{N} W_h(x − x^{(i)}, y − y^{(i)}),

where (x, y) is a spatial position of the particle distribution, N is the number of particles, h is the window width, and the window function W_h is a two-dimensional Gaussian window;
the computed attention saliency values are superimposed with the motion attention and normalized to serve as the current attention saliency values;
4.2) Target localization
After all the above steps, the distribution state of all particles in each frame of image is obtained, and the target position information is estimated by computing the mean of the particle swarm positions:

(x̄_t, ȳ_t) = (1/N) · Σ_{i=1}^{N} (x^{(i)}, y^{(i)}).
CN201410255267.5A 2014-06-10 2014-06-10 Moving target detecting method based on particle filter visual attention model Expired - Fee Related CN104050685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410255267.5A CN104050685B (en) 2014-06-10 2014-06-10 Moving target detecting method based on particle filter visual attention model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410255267.5A CN104050685B (en) 2014-06-10 2014-06-10 Moving target detecting method based on particle filter visual attention model

Publications (2)

Publication Number Publication Date
CN104050685A CN104050685A (en) 2014-09-17
CN104050685B true CN104050685B (en) 2017-05-31

Family

ID=51503468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410255267.5A Expired - Fee Related CN104050685B (en) 2014-06-10 2014-06-10 Moving target detecting method based on particle filter visual attention model

Country Status (1)

Country Link
CN (1) CN104050685B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732559B * 2015-02-02 2017-08-01 大连民族学院 Multi-target detection and tracking method based on RGB-D data
CN104778713B * 2015-04-27 2017-07-07 清华大学深圳研究生院 An image processing method
US10997472B2 (en) 2016-05-20 2021-05-04 Deepmind Technologies Limited Classifying input examples using a comparison set
CN106951870B (en) * 2017-02-15 2020-07-17 重庆警察学院 Intelligent detection and early warning method for active visual attention of significant events of surveillance video
CN108921051B (en) * 2018-06-15 2022-05-20 清华大学 Pedestrian attribute identification network and technology based on cyclic neural network attention model
CN109816100B (en) * 2019-01-30 2020-09-01 中科人工智能创新技术研究院(青岛)有限公司 Salient object detection method and device based on bidirectional fusion network
CN111723829B (en) * 2019-03-18 2022-05-06 四川大学 Full-convolution target detection method based on attention mask fusion
CN109902763B (en) * 2019-03-19 2020-05-15 北京字节跳动网络技术有限公司 Method and device for generating feature map
CN110623680A (en) * 2019-08-30 2019-12-31 中国民用航空飞行学院 Method for testing psychological health of civil aviation flight trainees
CN115346180B (en) * 2022-10-18 2023-02-10 湖北车安达信息科技有限公司 Road congestion condition detection method and device based on low-frequency data

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521844A (en) * 2011-11-30 2012-06-27 湖南大学 Particle filter target tracking improvement method based on vision attention mechanism

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521844A (en) * 2011-11-30 2012-06-27 湖南大学 Particle filter target tracking improvement method based on vision attention mechanism

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Top-down visual attention control based on a particle filter for human-interactive robots; Motoyuki Ozeki et al.; Human System Interactions (HSI), 2011 4th International Conference on; 31 May 2011; pp. 188-194 *
Research on moving target detection methods based on visual attention computation; 刘龙 et al.; Computer Engineering and Applications; 20 Dec. 2013; pp. 1-10 *
Target detection and extraction algorithm based on a motion attention fusion model; 刘龙 et al.; Pattern Recognition and Artificial Intelligence; Dec. 2013; vol. 26, no. 12; pp. 1140-1145 *

Also Published As

Publication number Publication date
CN104050685A (en) 2014-09-17

Similar Documents

Publication Publication Date Title
CN104050685B (en) Moving target detecting method based on particle filter visual attention model
Tung et al. Self-supervised learning of motion capture
Kwak et al. Learning occlusion with likelihoods for visual tracking
CN111191667B (en) Crowd counting method based on multiscale generation countermeasure network
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN104794737B Depth-information-assisted particle filter tracking method
CN109902565B (en) Multi-feature fusion human behavior recognition method
CN105022982A (en) Hand motion identifying method and apparatus
CN106780560A Bionic robot fish visual tracking method based on feature-fusion particle filter
CN109359549A (en) A kind of pedestrian detection method based on mixed Gaussian and HOG_LBP
CN103902989A (en) Human body motion video recognition method based on non-negative matrix factorization
CN103714556A (en) Moving target tracking method based on pyramid appearance model
CN102592135B (en) Visual tracking method of subspace fusing target space distribution and time sequence distribution characteristics
CN104751111A (en) Method and system for recognizing human action in video
Zhang et al. Detection and tracking of human track and field motion targets based on deep learning
CN111027586A (en) Target tracking method based on novel response map fusion
Kiyak et al. Small aircraft detection using deep learning
Károly et al. Optical flow-based segmentation of moving objects for mobile robot navigation using pre-trained deep learning models
Zhu et al. Tracking multiple objects through occlusion with online sampling and position estimation
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
CN110751670B (en) Target tracking method based on fusion
Gomez-Donoso et al. Three-dimensional reconstruction using SFM for actual pedestrian classification
CN104123569A (en) Video person number information statistics method based on supervised learning
Li Research on camera-based human body tracking using improved cam-shift algorithm
CN103873859B (en) Based on the video fusion method of evaluating performance that time and space significance detects

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170531

Termination date: 20200610