CN109544604B - Target tracking method based on cognitive network - Google Patents

Target tracking method based on cognitive network

Info

Publication number
CN109544604B
CN109544604B (application CN201811438479.1A)
Authority
CN
China
Prior art keywords
target
layer
tracking
candidate region
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811438479.1A
Other languages
Chinese (zh)
Other versions
CN109544604A (en
Inventor
Xiu Chunbo (修春波)
Lai Taihu (赖太湖)
Li Hongyi (李鸿一)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cai Jianlong
Shenzhen Topology Vision Technology Co ltd
Original Assignee
Shenzhen Topology Vision Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Topology Vision Technology Co ltd filed Critical Shenzhen Topology Vision Technology Co ltd
Priority to CN201811438479.1A priority Critical patent/CN109544604B/en
Publication of CN109544604A publication Critical patent/CN109544604A/en
Application granted granted Critical
Publication of CN109544604B publication Critical patent/CN109544604B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of target identification and tracking, and specifically relates to a target tracking method based on a cognitive network. A sequence of local histogram models of the target template is established to describe the target; the local histogram model of each target candidate region is input into the cognitive network for matching calculation, and the center of the candidate region with the largest matching value is determined as the target's center position, realizing tracking of the moving target and improving the method's invariance to target translation, rotation, and scaling. The invention can be applied in real-time monitoring systems.

Description

Target tracking method based on cognitive network
Technical Field
The invention belongs to the field of target identification and tracking and relates to moving-target tracking methods, in particular to a method that uses a cognitive network to track a moving target.
Background
Identification and tracking of moving targets are widely used in video surveillance, security, and related fields. Because moving targets are complex, a tracking algorithm must generally adapt to target translation, rotation, scaling, and similar changes, which places high demands on its performance. Tracking algorithms such as Meanshift and Camshift can meet real-time requirements and cope with target scaling, but they usually require a significant difference between target and background; otherwise effective tracking is hard to achieve. The cognitive network is an associative-memory network model; thanks to its good associative performance it has been applied successfully to image associative memory, but the basic cognitive network cannot be applied directly to target tracking.
Therefore, improving the cognitive network so that it can track moving targets, and thereby improving tracking performance, has good application value.
Disclosure of Invention
The invention aims to solve the technical problem of designing a target tracking method based on a cognitive network to improve the tracking performance of a moving target.
The technical scheme adopted by the invention is as follows: a cognitive-network-based target tracking method establishes a sequence of local histogram models of the target template to describe the target, inputs the local histogram model of each target candidate region into the cognitive network for matching calculation, and determines the center of the candidate region with the largest matching value as the target's center position. This realizes tracking of the moving target and improves the method's invariance to target translation, rotation, and scaling.
The object of the invention is to construct a cognitive-network-based target tracking method that improves the tracking system's invariance to target translation, scaling, and rotation and has good practicability.
Drawings
Fig. 1 is a diagram of the tracking network architecture.
FIG. 2 is a graph of the results of tracking using the method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings.
There are many ways to build a target model; among them, the histogram representation is computationally cheap and simple to describe, so it is widely used in real-time tracking systems. The invention likewise describes the target model with histograms, building several local histogram models of the target according to distance from its center. Let the radius of the target template be R; the local color histogram model of the region at distance r from the template center is q′_r = {q′_r^u}, u = 1, 2, ..., m:

q′_r^u = Σ_{i=1..S} δ(b_r(x_i, y_i) - u) (1)
where m is the number of pixel chromaticity classes, u is the chromaticity class index, b_r(x_i, y_i) is the chromaticity class of pixel (x_i, y_i), S is the total number of pixels in the region at distance r from the template center, and the function δ(·) is defined as:

δ(a) = 1 if a = 0, and δ(a) = 0 otherwise (2)
The normalized histogram model is:

q_r^u = q′_r^u / S (3)
Suppose the radius of the target candidate region is R′, with R′ > R. The local chromaticity histogram model of the region at distance r from the center of the candidate region is p′_r = {p′_r^u}, u = 1, 2, ..., m, computed analogously to q′_r:

p′_r^u = Σ_{i=1..S} δ(b_r(x_i, y_i) - u) (4)

where S is now the number of pixels at distance r from the candidate-region center. The normalized histogram model is:

p_r^u = p′_r^u / S (5)
From these, the histogram vector [p_r^1, p_r^2, ..., p_r^m]^T can be composed. By improving the cognitive network model, the network shown in Fig. 1 can be constructed.
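The ring-histogram construction described above can be sketched in Python as follows. This is a minimal sketch, not code from the patent; the function name, the use of NumPy, and the rounding of distances to integer ring indices are all assumptions:

```python
import numpy as np

def local_histograms(patch, R, m):
    """Build the ring histograms q_r (r = 1..R) of a square patch.

    patch : 2-D integer array of per-pixel chromaticity classes
            b(x, y), each value in 0..m-1, centered on the target.
    R     : template radius (number of rings).
    m     : number of chromaticity classes.

    Returns an (R, m) array whose row r-1 is the normalized histogram
    of the pixels at (rounded) distance r from the patch center.
    """
    h, w = patch.shape
    cy, cx = h // 2, w // 2
    ys, xs = np.mgrid[0:h, 0:w]
    # Integer ring index of every pixel (distance rounded to nearest int).
    dist = np.rint(np.hypot(ys - cy, xs - cx)).astype(int)

    hists = np.zeros((R, m))
    for r in range(1, R + 1):
        ring = patch[dist == r]                # pixels on ring r
        if ring.size == 0:                     # ring falls outside the patch
            continue
        counts = np.bincount(ring, minlength=m)[:m]  # count per class
        hists[r - 1] = counts / ring.size      # normalize by ring pixel count
    return hists
```

The same helper works for both the template (radius R) and a candidate region (radius R′), which is all the method needs before the network stage.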
The cognitive network consists of four layers: an input layer, a computation layer, an associative layer, and a competitive output layer. The first layer is the input layer, with m neurons, corresponding to the pixel chromaticity distribution vector [p_r^1, p_r^2, ..., p_r^m]^T at distance r from the candidate-region center. The second layer is the computation layer, with m×R neurons in total; the output h_ij of the neuron in row i and column j of this layer is:
The third layer is the associative layer, with R neurons; the output y_k of the k-th neuron is:
where θ is a preset threshold.
The fourth layer is a competitive output layer, which outputs the result z r The method comprises the following steps:
z_r = max(y_k | k = 1, 2, ..., R) (8)
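A forward pass of the four layers for one candidate ring can be sketched as below. The patent's equations (6) and (7) define the exact forms of h_ij and y_k; the choices here, histogram intersection for the computation layer and a thresholded ring score for the associative layer, are illustrative assumptions consistent with the ring-template matching described later, not the patented formulas:

```python
import numpy as np

def network_output(p_r, Q, theta=0.8):
    """One forward pass for a single candidate ring.

    p_r   : length-m normalized histogram of the candidate ring (input layer).
    Q     : (R, m) array of template ring histograms q_r, one row per ring.
    theta : associative-layer threshold (the patent's theta).

    Returns z_r = max_k y_k, the competitive output layer's result (eq. 8).
    """
    # Computation layer: m x R neurons h_ij, here histogram intersection
    # of the input ring against every template ring (an assumption).
    H = np.minimum(p_r[np.newaxis, :], Q)     # shape (R, m)
    s = H.sum(axis=1)                         # per-ring match scores
    # Associative layer: neuron k passes its score only above threshold.
    y = np.where(s >= theta, s, 0.0)
    # Competitive output layer: winner-take-all over the R neurons.
    return float(y.max())
```

A candidate ring that closely matches some template ring yields z_r near 1; a ring matching no template ring yields z_r = 0, which is the stopping condition used in Step 5 below.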
The process of target identification and tracking with the cognitive network is as follows:
Step 1. Establish the equal-center-distance histogram model sequence {q_r^u | r = 1, 2, ..., R; u = 1, 2, ..., m} of the tracking template.
Step 2. Use prior knowledge to obtain N target candidate regions in the tracking window; if no prior knowledge is available, N is the number of pixels in the tracking window. The set of the N candidate-region center points is {A_1, A_2, ..., A_N}.
Step 3. Initialize the current candidate region to l=1.
Step 4. Establish the equal-center-distance histogram sequence {p_r = [p_r^1, p_r^2, ..., p_r^m]^T, r = 1, 2, ..., R′} of the region around the current candidate point.
Step 5. Compute the identification result M_l and target size R_l of the current candidate region:
(a) Set M_l = 0, r = 1, R_l = 0.
(b) Input p_r into the cognitive network and compute the output z_r.
(c) If z_r > 0, set M_l = M_l + z_r and r = r + 1; then, if r ≤ R′, return to step (b) and continue; otherwise set R_l = R′ and go to Step 6. If z_r = 0, set R_l = r - 1 and go to Step 6.
Step 6. Set l = l + 1; if l ≤ N, return to Step 4; otherwise go to Step 7.
Step 7. Let M_t = max{M_l | l = 1, 2, ..., N}; then A_t is the identified target position and R_t is the target radius.
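Steps 3 through 7 above assemble into a single search loop over the candidate centers. The sketch below follows the accumulation logic of Step 5 exactly: ring scores are summed outward from the center until some z_r = 0, and the candidate with the largest accumulated M_l wins. The parameter names and the injected `score` callable (standing in for the network forward pass) are assumptions for illustration, not from the patent:

```python
import numpy as np

def track(candidate_hists, Q, score, R_prime):
    """Select the target among candidate regions (Steps 3-7).

    candidate_hists : dict {center: P} where P is an (R', m) array of
                      ring histograms around that candidate center
                      (the Step 4 output).
    Q               : (R, m) template ring histograms (Step 1).
    score           : callable score(p_r, Q) -> z_r, the cognitive
                      network's competitive output for one ring.
    R_prime         : candidate-region radius R' (> template radius R).

    Returns (A_t, M_t, R_t): winning center, its match score, and the
    estimated target radius.
    """
    A_t, M_t, R_t = None, -1.0, 0
    for center, P in candidate_hists.items():      # Steps 3 and 6
        M_l, R_l = 0.0, R_prime
        # Step 5: accumulate ring scores outward until a ring fails.
        for r in range(1, R_prime + 1):
            z = score(P[r - 1], Q)
            if z == 0:                             # step (c): z_r = 0
                R_l = r - 1
                break
            M_l += z
        if M_l > M_t:                              # Step 7: keep the max
            A_t, M_t, R_t = center, M_l, R_l
    return A_t, M_t, R_t
```

Because R_l records where the ring matching first fails, the winning candidate also yields a radius estimate, which is how the method gains scaling invariance.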
FIG. 2 shows a tracking result obtained with the method of the invention; the white box marks the tracked target. The experimental results show that the method has good invariance to target translation, rotation, and scaling, and achieves effective tracking.
The advantage of the method is that building the target model from local histograms keeps the original merits of the histogram model while retaining more detailed information about the target, so the target can be described more accurately. In addition, the computation layer of the cognitive network performs ring-template matching, which gives the algorithm good rotation invariance, and the comparison performed in the third layer gives it scaling invariance. The method therefore has good invariance to target translation, rotation, and scaling.

Claims (1)

1. A target tracking method based on a cognitive network, characterized in that a sequence of local histogram models of a target template is established to describe the target, the local histogram model of each target candidate region is input into the cognitive network for matching calculation, and the center position of the candidate region with the largest matching value is determined as the center position of the target; several local histogram models of the target are established according to distance from the center; let the radius of the target template be R, and let the local color histogram model of the region at distance r from the template center be q′_r = {q′_r^u}, u = 1, 2, ..., m:

q′_r^u = Σ_{i=1..S} δ(b_r(x_i, y_i) - u) (1)
where m is the number of pixel chromaticity classes, u is the chromaticity class index, b_r(x_i, y_i) is the chromaticity class of pixel (x_i, y_i), S is the total number of pixels in the region at distance r from the template center, and the function δ(·) is defined as:

δ(a) = 1 if a = 0, and δ(a) = 0 otherwise (2)
the normalized histogram model is:

q_r^u = q′_r^u / S (3)
suppose the radius of the target candidate region is R′, with R′ > R; the local chromaticity histogram model of the region at distance r from the center of the candidate region is p′_r = {p′_r^u}, u = 1, 2, ..., m, computed analogously to q′_r:

p′_r^u = Σ_{i=1..S} δ(b_r(x_i, y_i) - u) (4)

where S is now the number of pixels at distance r from the candidate-region center; the normalized histogram model is:

p_r^u = p′_r^u / S (5)
from which the histogram vector [p_r^1, p_r^2, ..., p_r^m]^T can be composed;
the cognitive network consists of four layers: an input layer, a computation layer, an associative layer, and a competitive output layer; the first layer is the input layer, with m neurons, corresponding to the pixel chromaticity distribution vector [p_r^1, p_r^2, ..., p_r^m]^T at distance r from the candidate-region center; the second layer is the computation layer, with m×R neurons in total, and the output h_ij of the neuron in row i and column j of this layer is:
the third layer is the associative layer, with R neurons; the output y_k of the k-th neuron is:
where θ is a preset threshold;
the fourth layer is the competitive output layer, whose output z_r is:
z_r = max(y_k | k = 1, 2, ..., R) (8)
the process of target identification and tracking is as follows:
Step 1. Establish the equal-center-distance histogram model sequence {q_r^u | r = 1, 2, ..., R; u = 1, 2, ..., m} of the tracking template;
Step 2. Use prior knowledge to obtain N target candidate regions in the tracking window; if no prior knowledge is available, N is the number of pixels in the tracking window; the set of the N candidate-region center points is {A_1, A_2, ..., A_N};
Step 3. Initialize the current candidate index to l = 1;
Step 4. Establish the equal-center-distance histogram sequence {p_r = [p_r^1, p_r^2, ..., p_r^m]^T, r = 1, 2, ..., R′} of the region around the current candidate point;
Step 5. Compute the identification result M_l and target size R_l of the current candidate region:
(a) Set M_l = 0, r = 1, R_l = 0;
(b) Input p_r into the cognitive network and compute the output z_r;
(c) If z_r > 0, set M_l = M_l + z_r and r = r + 1; then, if r ≤ R′, return to step (b) and continue; otherwise set R_l = R′ and go to Step 6; if z_r = 0, set R_l = r - 1 and go to Step 6;
Step 6. Set l = l + 1; if l ≤ N, return to Step 4; otherwise go to Step 7;
Step 7. Let M_t = max{M_l | l = 1, 2, ..., N}; then A_t is the identified target position and R_t is the target radius.
CN201811438479.1A 2018-11-28 2018-11-28 Target tracking method based on cognitive network Active CN109544604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811438479.1A CN109544604B (en) 2018-11-28 2018-11-28 Target tracking method based on cognitive network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811438479.1A CN109544604B (en) 2018-11-28 2018-11-28 Target tracking method based on cognitive network

Publications (2)

Publication Number Publication Date
CN109544604A CN109544604A (en) 2019-03-29
CN109544604B true CN109544604B (en) 2023-12-01

Family

ID=65852033

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811438479.1A Active CN109544604B (en) 2018-11-28 2018-11-28 Target tracking method based on cognitive network

Country Status (1)

Country Link
CN (1) CN109544604B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381053B (en) * 2020-12-01 2021-11-19 连云港豪瑞生物技术有限公司 Environment-friendly monitoring system with image tracking function

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268902A (en) * 2014-09-30 2015-01-07 Southeast University Multi-target video tracking method for industrial sites
CN104574444A (en) * 2015-01-19 2015-04-29 Tianjin Polytechnic University Camshift tracking method based on target decomposition
CN106428000A (en) * 2016-09-07 2017-02-22 Tsinghua University Vehicle speed control device and method
CN106570893A (en) * 2016-11-02 2017-04-19 National University of Defense Technology Fast and stable visual tracking method based on correlation filtering
CN108198207A (en) * 2017-12-22 2018-06-22 Hunan Yuanxin Optoelectronic Technology Co., Ltd. Multiple-moving-object tracking based on an improved ViBe model and a BP neural network
CN108537170A (en) * 2018-04-09 2018-09-14 University of Electronic Science and Technology of China Method for detecting missing pins on power equipment fittings via UAV inspection


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Associative memory network and its hardware design; Chunbo Xiu et al.; Neurocomputing; 2015-02-07; pp. 204-209 *
Research on CNN-based detection of sea-sky electro-optical targets; Liu Tianhua et al.; Infrared and Laser Engineering; 2008-06-15; pp. 655-658 *
Single-filter target tracking algorithm using neural networks; Ruan Huailin et al.; Fire Control & Command Control; 2011-10-15 (No. 10); Section 2 *

Also Published As

Publication number Publication date
CN109544604A (en) 2019-03-29

Similar Documents

Publication Publication Date Title
Yu et al. Region normalization for image inpainting
CN109949255B (en) Image reconstruction method and device
Bai et al. Infrared ship target segmentation based on spatial information improved FCM
CN111489394B (en) Object posture estimation model training method, system, device and medium
Sanchez-Cuevas et al. A comparison of color models for color face segmentation
Bach et al. Controlling explanatory heatmap resolution and semantics via decomposition depth
Nimbarte et al. Multi-level thresholding algorithm for color image segmentation
Chidananda et al. Entropy-cum-Hough-transform-based ear detection using ellipsoid particle swarm optimization
CN109544604B (en) Target tracking method based on cognitive network
Afshar et al. Lung tumor area recognition in CT images based on Gustafson-Kessel clustering
Cheung et al. Lip segmentation under MAP-MRF framework with automatic selection of local observation scale and number of segments
Hassan et al. A hue preserving uniform illumination image enhancement via triangle similarity criterion in HSI color space
Li et al. Digital image edge detection based on LVQ neural network
Dou et al. Robust visual tracking based on joint multi-feature histogram by integrating particle filter and mean shift
CN108665470B (en) Interactive contour extraction method
CN111145221A (en) Target tracking algorithm based on multi-layer depth feature extraction
Li et al. Fast and robust active contours model for image segmentation
CN115994933A (en) Partial point cloud registration method based on consistency learning
CN113470072B (en) Particle swarm target tracking algorithm based on moving particles
Lee et al. Robust face tracking by integration of two separate trackers: Skin color and facial shape
Ghaleh et al. Lip contour extraction using RGB color space and fuzzy c-means clustering
Korfiatis et al. Automatic local parameterization of the Chan Vese active contour model’s force coefficients using edge information
Chacon-Murguia et al. Segmentation of video background regions based on a DTCNN-clustering approach
CN111160363A (en) Feature descriptor generation method and device, readable storage medium and terminal equipment
CN111247556A (en) Training artificial neural networks on additional tasks while avoiding catastrophic interference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20231108

Address after: 518000, Building 6, Building 1209, Hongchuang Technology Center, Qiankeng Community, Fucheng Street, Longhua District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Topology Vision Technology Co.,Ltd.

Applicant after: Cai Jianlong

Address before: No. 399 Bingshui Road, Xiqing District, Tianjin, Tianjin

Applicant before: TIANJIN POLYTECHNIC University

GR01 Patent grant