CN107066922B - Target tracking method for monitoring homeland resources - Google Patents


Info

Publication number
CN107066922B
CN107066922B (application CN201611259923.4A)
Authority
CN
China
Prior art keywords: target, tracking, frame image, image, classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611259923.4A
Other languages: Chinese (zh)
Other versions: CN107066922A (en)
Inventor
胡锦龙 (Hu Jinlong)
Current Assignee
Xi'an Tianhe Defense Technology Co ltd
Original Assignee
Xi'an Tianhe Defense Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xi'an Tianhe Defense Technology Co ltd filed Critical Xi'an Tianhe Defense Technology Co ltd
Priority application CN201611259923.4A; publication of CN107066922A; application granted; publication of CN107066922B. Legal status: Active.


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415: Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 18/24155: Bayesian classification
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast


Abstract

The disclosure relates to a target tracking method for monitoring homeland resources. The method comprises the following steps: acquiring a current frame image, and processing the previous frame image according to a pre-trained Bayesian classifier to determine the target position of the tracking target in the current frame image; acquiring the next frame image, taking it as the current frame image, and returning to the above steps to repeat the tracking processing until all frame images of all image sequences have been processed; and, during the tracking processing, when the tracking target disappears, predicting its target position in the next frame image according to a preset target position prediction algorithm. The method can stably and reliably track a monitored target over a long period in complex homeland-resource scenes.

Description

Target tracking method for monitoring homeland resources
Technical Field
The disclosure relates to the technical field of information monitoring, in particular to a target tracking method for monitoring homeland resources.
Background
With the rapid development of China's economy, the contradiction between land supply and demand has become increasingly prominent, and phenomena such as illegal construction occupying cultivated land, illegal or irregular construction land in cities, and illicit mining of mineral resources occur frequently. At present, land-use change is monitored mainly by technical means such as satellite remote sensing, and illegal land use in various regions is checked through changes between land remote-sensing images taken at different times. However, satellite monitoring mostly provides macroscopic supervision at the national level; for monitoring the resources of a particular region, a video monitoring system is usually built instead.
In practical engineering applications, in complex scenes (such as wild forests and mountainous terrain), a monitored target (such as a vehicle or a person) may suffer from poor imaging quality, low contrast, a cluttered background, changes in posture, or occlusion (partial or complete). Under these conditions, current video monitoring systems find it difficult to track the monitored target stably over a long period, which creates monitoring blind spots and can lead to unnecessary misjudgments.
Therefore, there is a need to provide a new technical solution to improve one or more of the problems in the above solutions.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
An object of the present disclosure is to provide a target tracking method for monitoring of homeland resources, thereby overcoming, at least to some extent, one or more problems due to limitations and disadvantages of the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to a first aspect of the embodiments of the present disclosure, there is provided a target tracking method for monitoring homeland resources, the method including:
acquiring a current frame image, and processing a previous frame image of the current frame image according to a Bayesian classifier obtained by pre-training to determine a target position of a tracking target in the current frame image;
acquiring a next frame image, taking the next frame image as the current frame image, returning to the steps for repeated tracking processing until all frame images of all image sequences are processed;
in the tracking processing process, when the tracking target disappears, the target position of the tracking target in the next frame image is predicted and obtained according to a preset target position prediction algorithm.
In an exemplary embodiment of the present disclosure, the processing, according to a bayesian classifier obtained through pre-training, a previous frame image of the current frame image to determine a target position of a tracking target in the current frame image includes:
randomly sampling by using a particle filter in a circular range with a preset radius around a target position in the previous frame of image to obtain a first preset number of candidate samples;
classifying each obtained candidate sample according to the Bayesian classifier obtained by pre-training, calculating the classifier response of each candidate sample, and determining the candidate sample with the maximum classifier response as the tracking target in the current frame image so as to determine the target position.
In an exemplary embodiment of the disclosure, the pre-trained bayesian classifier is determined by:
acquiring a first frame image, and selecting a tracking area of the tracking target in the first frame image;
randomly selecting a second preset number of positive and negative templates in the tracking area by using a particle filter;
and training a naive Bayes classifier according to the second predetermined number of positive and negative templates to obtain the Bayes classifier obtained by pre-training.
In an exemplary embodiment of the disclosure, the pre-trained bayesian classifier is as follows:
H(x) = log( [ Π_{i=1..n} p(x_i | y=1) ] · p(y=1) / ( [ Π_{i=1..n} p(x_i | y=0) ] · p(y=0) ) ) = Σ_{i=1..n} log( p(x_i | y=1) / p(x_i | y=0) )
wherein the prior probabilities are assumed uniform, i.e. p(y=1) = p(y=0) = 0.5;
y ∈ {0,1} is a binary variable representing the sample label; x = (x_1, ..., x_n) is the feature vector of the candidate sample to be classified, with n its dimension;
p(x_i | y=1) and p(x_i | y=0) are estimated by Gaussian distributions with the four parameters (μ_i^1, σ_i^1, μ_i^0, σ_i^0):
p(x_i | y=1) ~ N(μ_i^1, σ_i^1),  p(x_i | y=0) ~ N(μ_i^0, σ_i^0),
where μ_i^1 and σ_i^1 are the mean and standard deviation of the positive templates, and μ_i^0 and σ_i^0 are the mean and standard deviation of the negative templates.
In an exemplary embodiment of the present disclosure, the method further includes:
in the tracking processing process, fitting the maximum classifier response corresponding to the images with the preset frame number to form a response curve when the preset frame number is reached; wherein the predetermined number of frames is greater than or equal to 5 frames;
judging whether the tracking target in the current frame image disappears or not according to the variation trend of the response curve;
and if the tracking target disappears, predicting the target position of the tracking target in the next frame of image according to the preset target position prediction algorithm.
In an exemplary embodiment of the present disclosure, the determining whether the tracking target disappears in the current frame image according to the variation trend of the response curve includes:
if the response curve continuously drops for more than five frames and meets the following preset conditions, the tracking target is considered to disappear:
The preset condition is: the first predetermined value is greater than a times the second predetermined value, wherein a = 0.8.
The first predetermined value is the difference between the maximum classifier response at the initial mutation point and that at the last mutation point on the response curve; each mutation point corresponds to one frame of image.
The second predetermined value is the difference between the maximum and minimum classifier responses over the five frames on the response curve preceding the mutation.
In an exemplary embodiment of the present disclosure, the method further includes:
and if the tracking target does not disappear, updating the pre-trained Bayes classifier every five frames so as to determine the target position of the tracking target by processing according to the updated Bayes classifier.
In an exemplary embodiment of the present disclosure, the predicting the target position of the tracking target in the next frame image according to a preset target position prediction algorithm includes:
and calculating and predicting the target position of the tracking target in the next frame of image by adopting a Kalman filtering algorithm according to the position information of the tracking target before disappearance.
In an exemplary embodiment of the present disclosure, the method further includes:
detecting whether the tracking target reappears in the prediction process after the tracking target disappears;
if yes, ending the prediction process, and processing the current frame image with the tracking target reappeared according to the pre-trained Bayes classifier to obtain the tracking target in the corresponding next frame image.
In an exemplary embodiment of the present disclosure, the detecting whether the tracking target reappears includes:
simultaneously calculating a confidence value for each of the candidate samples during the prediction process;
and judging whether the tracking target reappears according to the change trend of the confidence value of each candidate sample in the whole tracking processing process.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in an embodiment of the disclosure, the position of the tracked target is determined by the target tracking method for monitoring the homeland resources in combination with a bayesian classifier algorithm and a trajectory prediction method. Therefore, on one hand, the monitoring target can be stably tracked for a long time under the condition that the target is shielded and the like in a complex scene; on the other hand, the tracking target can be accurately captured by the video monitoring system, so that the reliable operation of the homeland resource monitoring video system is ensured, and the condition of misjudgment or monitoring accidents is avoided.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
FIG. 1 schematically illustrates a flow chart of a target tracking method for homeland resource monitoring in an exemplary embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of a target tracking method for another homeland resource monitoring in an exemplary embodiment of the present disclosure;
FIGS. 3A-3D schematically illustrate target tracking results with the target in a cluttered background in exemplary embodiments of the present disclosure;
FIGS. 4A-4D schematically illustrate target tracking results with the target occluded in an exemplary embodiment of the disclosure;
fig. 5 schematically illustrates a schematic diagram of a target tracking apparatus for monitoring of homeland resources in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The present exemplary embodiment provides a target tracking method for monitoring homeland resources. Referring to fig. 1, the method may include the steps of:
step S101: acquiring a current frame image, and processing a previous frame image of the current frame image according to a Bayesian classifier obtained by pre-training to determine a target position of a tracking target in the current frame image.
Step S102: and acquiring a next frame image, taking the next frame image as the current frame image, returning to the steps for repeated tracking processing until all frame images of all image sequences are processed.
Step S103: in the tracking processing process, when the tracking target disappears, the target position of the tracking target in the next frame image is predicted and obtained according to a preset target position prediction algorithm.
By the target tracking method for monitoring the homeland resources, on one hand, the monitored target can be stably tracked for a long time under the conditions of complex scenes such as the shielded target and the like; on the other hand, the tracking target can be accurately captured by the video monitoring system, so that the reliable operation of the homeland resource monitoring video system is ensured, and the condition of misjudgment or monitoring accidents is avoided.
Next, the respective steps of the above-described method in the present exemplary embodiment are described in more detail with reference to fig. 1 to 2.
In step S101, a current frame image is acquired, and a previous frame image of the current frame image is processed according to a bayesian classifier obtained through pre-training to determine a target position of a tracking target in the current frame image.
In an exemplary embodiment, the current frame image and the previous frame image of the current frame image may be obtained from a monitoring video system, specifically, for example, the current frame image of the territorial resource area to be monitored and the previous frame image of the current frame image may be obtained. The territorial resource area to be monitored may be a mining area, a cultural relic preservation area and the like, and the exemplary embodiment is not particularly limited in this regard. The step of processing the previous frame image of the current frame image according to the previously trained bayesian classifier to determine the target position of the tracking target in the current frame image may include the following steps 201 to 202; wherein:
step 201: in a circular range with a preset radius R around the position of the target (such as a vehicle) in the previous frame of image, randomly sampling by using a particle filter to obtain a first preset number (such as 60) of candidate samples. To improve processing efficiency, all candidate samples may also be normalized to the same size, e.g., 16 × 16 pixel size.
For example, the candidate samples may be selected randomly with a predetermined standard deviation according to a gaussian distribution by taking the position of the target in the previous frame of image as a central mean. The selection by adopting the Gaussian distribution instead of the random distribution utilizes the attention mechanism of the human visual system, namely, more attention is paid to objects which are closer to the target, the attention is reduced to objects which are far away from the target, and the Gaussian distribution accords with the mechanism of the human visual system and also accords with the inter-frame continuity and the time correlation in the continuous video sequence.
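As a sketch of this sampling step (the sample count, standard deviation, and radius below are illustrative assumptions, not values fixed by the method), the Gaussian candidate sampling described above might look like:

```python
import numpy as np

def sample_candidates(prev_pos, num_samples=60, sigma=8.0, radius=20.0, rng=None):
    """Draw candidate positions from a Gaussian centred on the previous
    target position, keeping only those inside the circular search region.

    The Gaussian spread concentrates candidates near the old position,
    mirroring the visual-attention argument in the text."""
    rng = rng if rng is not None else np.random.default_rng(0)
    prev = np.asarray(prev_pos, dtype=float)
    samples = []
    while len(samples) < num_samples:
        cand = rng.normal(loc=prev, scale=sigma)
        if np.linalg.norm(cand - prev) <= radius:  # reject samples outside the circle
            samples.append(cand)
    return np.array(samples)
```

Rejection sampling keeps the Gaussian shape inside the circular range; a uniform draw over the circle would lose the inter-frame continuity argument made above.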
Step 202: classifying each obtained candidate sample according to the Bayesian classifier obtained by pre-training, calculating the classifier response of each candidate sample, and determining the candidate sample with the maximum classifier response as the tracking target in the current frame image so as to determine the target position.
Illustratively, the most likely candidate target position may be calculated, for example, according to the maximum a posteriori probability MAP criterion, i.e., the candidate sample with the largest classifier response is taken as the target to be tracked in the current frame. It should be noted that, the existing algorithm may be referred to for specific calculation according to the maximum a posteriori probability MAP criterion, and details are not described here.
In this exemplary embodiment, the pre-trained bayesian classifier may be determined in the following manner, and specifically may include the following steps 301 to 303:
step 301: acquiring a first frame image, and selecting a tracking area of the tracking target in the first frame image.
For example, a target area to be tracked may be selected from a first frame of a monitoring video image sequence of an obtained territorial resource area to be monitored, and a center position, a width, a high level and other parameters of the initial tracking area may be recorded.
Step 302: and randomly selecting a second preset number of positive and negative templates in the tracking area by using a particle filter.
Illustratively, a number of positive and negative templates (also called positive and negative samples) are randomly selected around the selected initial tracking area using a particle filter and normalized to the same size. The training sample set consists of N_p positive templates and N_n negative templates. First, N_p images are sampled around the selected target tracking area (e.g., within a circle a few pixels in radius). To improve efficiency, each sampled image may then be normalized to the same size, e.g., 16 × 16. The sampled images are then stacked together to form the corresponding positive template vectors. Similarly, the negative training sample set consists of images sampled farther from the marked location (e.g., on concentric circles a few pixels from the target), so the training sample set contains both background and partial target images. Since a sample containing only part of the target's appearance is treated as a negative sample, its confidence value is small, which yields better target localization.
In this exemplary embodiment, the selection of the positive and negative samples may be randomly selected around the target position of the previous frame of image according to a gaussian distribution, the number of the positive and negative samples selected may be 25 and 100, respectively, the normalized size may be 16 × 16, and the method is fixed for all scenes. Of course, this is not particularly limited in the present exemplary embodiment, and those skilled in the art may adjust the number of samples and the normalized size according to actual needs.
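A minimal sketch of this positive/negative template sampling follows. It uses fixed-size crops rather than resizing for normalization, and all radii are illustrative assumptions:

```python
import numpy as np

def crop_patch(frame, cx, cy, size=16):
    """Crop a size x size window centred near (cx, cy), clipped to the frame,
    and flatten it into a template vector."""
    h, w = frame.shape
    x0 = int(np.clip(round(cx) - size // 2, 0, w - size))
    y0 = int(np.clip(round(cy) - size // 2, 0, h - size))
    return frame[y0:y0 + size, x0:x0 + size].astype(float).ravel()

def sample_templates(frame, center, n_pos=25, n_neg=100,
                     pos_radius=4.0, neg_inner=8.0, neg_outer=30.0, rng=None):
    """Positive templates come from a small disc around the target centre;
    negative templates from a surrounding annulus, so the negative set
    contains background and partial-target patches as described above."""
    rng = rng if rng is not None else np.random.default_rng(0)
    cx, cy = center
    pos, neg = [], []
    while len(pos) < n_pos:
        dx, dy = rng.normal(0.0, pos_radius / 2.0, size=2)
        if np.hypot(dx, dy) <= pos_radius:
            pos.append(crop_patch(frame, cx + dx, cy + dy))
    while len(neg) < n_neg:
        dx, dy = rng.uniform(-neg_outer, neg_outer, size=2)
        if neg_inner <= np.hypot(dx, dy) <= neg_outer:
            neg.append(crop_patch(frame, cx + dx, cy + dy))
    return np.array(pos), np.array(neg)
```

The 25/100 split and the 16 × 16 patch size match the counts stated in the text; the annulus bounds are free parameters.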
Step 303: and training a naive Bayes classifier according to the second predetermined number of positive and negative templates to obtain the Bayes classifier obtained by pre-training.
In the present exemplary embodiment, during the processing of each frame, samples are drawn around the target tracked in the previous frame using particle filtering. To better track the target, an affine transformation is used to model the target motion. Assuming the affine parameters are independent of one another, the motion can be modeled with six independent Gaussian distributions, one per parameter.
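The six-parameter affine particle propagation can be sketched as follows; the state layout and the per-parameter standard deviations are illustrative assumptions:

```python
import numpy as np

def propagate_particles(state, n_particles=60,
                        sigmas=(4.0, 4.0, 0.01, 0.005, 0.002, 0.001), rng=None):
    """Perturb an affine motion state (x, y, scale, rotation, aspect, skew)
    with six independent zero-mean Gaussians, one per parameter."""
    rng = rng if rng is not None else np.random.default_rng(0)
    state = np.asarray(state, dtype=float)
    noise = rng.normal(0.0, np.asarray(sigmas), size=(n_particles, 6))
    return state + noise
```

Each particle is then scored by the classifier response of the patch it selects, as in the candidate-sample step above.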
Specifically, the Bayesian classifier is initialized with the selected positive and negative templates, from which the means and standard deviations of the positive and negative templates are obtained. Given a sample with feature vector x, all elements of x are assumed to be mutually independent, and each element is assumed to follow a Gaussian distribution. Thus, the class-conditional distributions p(x_i | y=1) and p(x_i | y=0) in the classifier obey Gaussian distributions with the four parameters (μ_i^1, σ_i^1, μ_i^0, σ_i^0) and can be estimated accordingly.
Illustratively, the pre-trained Bayesian classifier is as follows:
H(x) = log( [ Π_{i=1..n} p(x_i | y=1) ] · p(y=1) / ( [ Π_{i=1..n} p(x_i | y=0) ] · p(y=0) ) ) = Σ_{i=1..n} log( p(x_i | y=1) / p(x_i | y=0) )
wherein the prior probabilities are assumed uniform, i.e. p(y=1) = p(y=0) = 0.5; y ∈ {0,1} is a binary variable representing the sample label; x = (x_1, ..., x_n) is the feature vector of the candidate sample, with n its dimension; and
p(x_i | y=1) ~ N(μ_i^1, σ_i^1),  p(x_i | y=0) ~ N(μ_i^0, σ_i^0),
where μ_i^1 and σ_i^1 are the mean and standard deviation of the positive templates, and μ_i^0 and σ_i^0 those of the negative templates.
Here, to reduce computational complexity and facilitate hardware implementation, the naive Bayes classifier in the present exemplary embodiment is subjected to a Taylor expansion to obtain the Bayes classifier shown in the above formula.
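Under the Gaussian naive Bayes formulation above, a direct (non-Taylor-expanded) implementation of the classifier response and MAP candidate selection might be sketched as follows; class and method names are illustrative:

```python
import numpy as np

class NaiveBayesClassifier:
    """Gaussian naive Bayes with equal priors.  The response is
    H(x) = sum_i log( p(x_i | y=1) / p(x_i | y=0) ),
    so the constant prior terms cancel out."""

    def fit(self, pos_templates, neg_templates, eps=1e-6):
        # Per-feature means and standard deviations of the templates.
        self.mu1 = pos_templates.mean(axis=0)
        self.sig1 = pos_templates.std(axis=0) + eps
        self.mu0 = neg_templates.mean(axis=0)
        self.sig0 = neg_templates.std(axis=0) + eps
        return self

    def response(self, x):
        log_p1 = -0.5 * ((x - self.mu1) / self.sig1) ** 2 - np.log(self.sig1)
        log_p0 = -0.5 * ((x - self.mu0) / self.sig0) ** 2 - np.log(self.sig0)
        return float(np.sum(log_p1 - log_p0))  # the 1/sqrt(2*pi) terms cancel

    def best_candidate(self, candidates):
        """MAP criterion: the candidate with the largest classifier response."""
        responses = [self.response(c) for c in candidates]
        idx = int(np.argmax(responses))
        return idx, responses[idx]
```

`best_candidate` implements the step in which the candidate sample with the maximum classifier response is taken as the tracking target in the current frame.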
In step S102, the next frame image of the homeland resource area to be monitored is acquired, taken as the current frame image, and the above steps are repeated until all frame images of all image sequences of the homeland resource area to be monitored have been processed. That is, the Bayesian-classifier-based processing is repeated over all frame images of all image sequences.
In step S103, in the tracking process, when the tracking target disappears, the target position of the tracking target in the next frame image is predicted according to a preset target position prediction algorithm.
In a homeland resource video surveillance system, the target is usually far from the detector. In the imaging process, due to factors such as atmospheric turbulence, system jitter and aberration of an optical system, the image of the target in the system is very blurred, and the contrast is poor. In addition, due to the remote imaging, the target has no texture and color information, and the shape and the posture are different. On the other hand, the background of the target is complex and chaotic, and the situations of shielding, posture change, blurring and the like can also occur in the motion process, which all bring great challenges to the long-term target tracking in the complex scene.
In order to obtain long-term stable target tracking in a complex scene, the embodiment of the present invention utilizes the advantages of a classification method based on a bayesian classifier, and is combined with a trajectory prediction method, when a tracked target disappears, the target position of the tracked target in a next frame image is predicted according to a preset target position prediction algorithm, so as to realize long-term robust target tracking.
In an exemplary embodiment, the predicting the target position of the tracking target in the next frame of image according to a preset target position prediction algorithm may include: and calculating and predicting the target position of the tracking target in the next frame of image by adopting a Kalman filtering algorithm according to the position information of the tracking target before disappearance. The specific calculation process using the kalman filter algorithm may refer to the prior art, and is not described in detail. Of course, the specific trajectory prediction algorithm is not particularly limited in the present exemplary embodiment.
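A minimal constant-velocity Kalman filter for coasting through occlusion might look like the following; the state layout and noise levels are assumptions, as the patent does not fix them:

```python
import numpy as np

class ConstantVelocityKalman:
    """State [x, y, vx, vy]; position-only measurements.  During occlusion,
    predict() is called without update() to extrapolate the trajectory."""

    def __init__(self, dt=1.0, process_noise=1e-2, meas_noise=1.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = process_noise * np.eye(4)
        self.R = meas_noise * np.eye(2)
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2].copy()

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x  # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

While the target is visible, each frame calls predict() then update() with the tracked position; after disappearance, only predict() is called, which is the coasting behaviour described above.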
How to determine whether the tracking target disappears in step S103 is explained in an exemplary embodiment. The method can further comprise the following steps 401-403. Wherein:
step 401: in the tracking processing process, fitting the maximum classifier response corresponding to the images with the preset frame number to form a response curve when the preset frame number is reached; wherein the predetermined number of frames is greater than or equal to 5 frames.
For example, after tracking is performed for a certain number of frames, the trend of the response curve formed by the maximum classifier response corresponding to each frame is judged. And if the response curve has mutation, the frame corresponding to the mutation point is the frame with tracking failure.
Step 402: and judging whether the tracking target in the current frame image disappears or not according to the change trend of the response curve.
For example, in the present exemplary embodiment, the determining whether the tracking target disappears in the current frame image according to the variation trend of the response curve may include: and if the response curve continuously drops for more than five frames and meets the following preset conditions, the tracking target is considered to disappear.
The preset condition is: the first predetermined value is greater than a times the second predetermined value, where a = 0.8 (an experimentally determined value). The first predetermined value is the difference between the maximum classifier response at the initial mutation point and that at the last mutation point on the response curve; each mutation point corresponds to one frame of image. The second predetermined value is the difference between the maximum and minimum classifier responses over the five frames on the response curve preceding the mutation.
Step 403: and if the tracking target disappears, predicting the target position of the tracking target in the next frame of image according to the preset target position prediction algorithm.
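One illustrative reading of the disappearance criterion above (a monotone drop over more than five frames plus the a = 0.8 amplitude test) is:

```python
def target_lost(responses, a=0.8, drop_frames=5):
    """responses: per-frame maximum classifier responses, oldest first.

    Returns True when the curve has fallen monotonically over the last
    drop_frames transitions and the total drop exceeds a times the
    response spread over the five frames preceding the drop."""
    if len(responses) < 2 * drop_frames + 1:
        return False
    recent = responses[-(drop_frames + 1):]          # the dropping segment
    if any(later >= earlier for earlier, later in zip(recent, recent[1:])):
        return False                                 # not continuously dropping
    before = responses[-(2 * drop_frames + 1):-(drop_frames + 1)]
    drop = recent[0] - recent[-1]                    # first predetermined value
    spread = max(before) - min(before)               # second predetermined value
    return drop > a * spread
```

Exactly how the "mutation points" are located on the fitted curve is not spelled out in the text, so this sketch reads them as the endpoints of the dropping segment.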
In an exemplary embodiment, the method may further include: and if the tracking target does not disappear, updating the pre-trained Bayes classifier every five frames so as to determine the target position of the tracking target by processing according to the updated Bayes classifier. Specifically, the parameters of the bayesian classifier obtained by the pre-training may be updated, and the updating of the specific bayesian classifier may refer to the prior art and is not described in detail. Tracking targets can be captured more accurately by such updating.
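The periodic parameter update can be sketched as a running-average blend of old and newly estimated Gaussian parameters. The learning rate lam and the blended-variance form are assumptions in the style of classifier-based trackers, not values given in the patent:

```python
import numpy as np

def update_gaussian_params(mu_old, sig_old, mu_new, sig_new, lam=0.85):
    """Blend old and new per-feature means/standard deviations.

    The variance update keeps the second moment consistent with the
    blended mean, so repeated updates stay well-defined."""
    mu = lam * mu_old + (1.0 - lam) * mu_new
    var = (lam * sig_old ** 2 + (1.0 - lam) * sig_new ** 2
           + lam * (1.0 - lam) * (mu_old - mu_new) ** 2)
    return mu, np.sqrt(var)
```

Applied every five frames to the positive and negative template statistics, this lets the classifier adapt to gradual appearance change without forgetting the original target model.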
Referring to fig. 2, on the basis of the above embodiment, in an exemplary embodiment, the method may further include the following steps:
step S104: and detecting whether the tracking target reappears in the prediction process after the tracking target disappears.
For example, an object (e.g., a vehicle) may disappear because it is occluded, partially or completely, such as when entering a forest, and may reappear after some time.
Step S105: if so, namely the tracking target reappears, ending the prediction process, and processing the current frame image reappearing the tracking target according to the Bayes classifier obtained by the pre-training to obtain the tracking target in the corresponding next frame image.
For example, after the tracking target disappears, a prediction process can be entered to predict the track of the target and then determine the position of the target at the next moment. And after the target is reproduced, the processing procedure based on the Bayesian classifier is carried out to determine the position of the target. Namely, the conversion from the prediction state to the reacquisition state is realized, and the target tracking algorithm based on the Bayesian classifier is restarted for target tracking. The two modes are combined in this way, so that the tracking target can be stably and reliably captured for a long time.
For example, detecting whether the tracking target reappears may include the following steps 501-502:
Step 501: calculating, during the prediction process, a confidence value for each of the candidate samples.
Step 502: judging whether the tracking target reappears according to the trend of the confidence values of the candidate samples over the whole tracking process. For example, if the confidence values of the candidate samples gradually increase during the prediction process and reach a preset threshold, it may be determined that the tracking target has reappeared.
The invention provides a long-term target tracking method for complex scenes based on the combination of a Bayesian classifier and trajectory prediction. Tracking is treated as a binary classification problem, which addresses the ease with which the target and the background are confused in a complex scene. When the target is completely occluded and the tracking algorithm fails, trajectory prediction carries the tracker through the failed tracking state; when, after some time, the target reappears, it is recaptured and classifier-based tracking resumes. This achieves long-term, robust tracking in complex scenes when the target is occluded (partially or completely), the background is cluttered, or the target's posture changes.
The test results obtained by applying the above-described method in the present exemplary embodiment are described below with reference to fig. 3A to 3D and fig. 4A to 4D to verify the applicability of the method.
To verify the adaptability of the method to a target in a cluttered background, a ground complex-scene image sequence of 824 frames acquired in a field test was used; figs. 3A to 3D show the tracking results captured at the 2nd, 116th, 128th and 248th frames. These four frames respectively show target tracking at initialization, on entering the cluttered background, within the cluttered background, and on reappearing from the cluttered background. The gray rectangle in figs. 3A to 3D indicates the tracking box, and the cross at its center indicates the center point of the tracking box. As can be seen from the figures, when the target encounters the cluttered background, the Bayesian-classifier-based tracking fails, and the trajectory prediction mechanism is adopted to predict the position of the target in the next frame. After a certain number of frames the target reappears, tracking with the Bayesian classifier resumes, and long-term stable tracking in the complex scene is obtained.
To verify the adaptability of the method to occlusion of the target (partial or complete), a ground complex-scene image sequence of 359 frames acquired in a field test was used, as shown in figs. 4A to 4D, which capture the tracking results at the 90th, 120th, 183rd and 210th frames. The four images respectively show the target partially occluded (by a tree trunk), reappearing after occlusion, partially occluded again (entering a tree cluster), and reappearing after occlusion (leaving the tree cluster). The gray rectangle in each figure represents the tracking box, and the cross at its center represents the center point of the tracking box. As can be seen from the figures, the target tracking method based on the combination of the Bayesian classifier and trajectory prediction adapts to partial or complete occlusion of the target and achieves long-term stable tracking.
The target tracking method for homeland resource monitoring has the following advantages. Compared with tracking using only a naive Bayes classifier, the method combines the Bayesian classifier with trajectory prediction and adds a mechanism for predicting the trajectory and recapturing the target from the moment it temporarily disappears (e.g., is completely occluded) to the moment it reappears; this solves the problem of continuous, stable tracking when the target is completely occluded or in a cluttered background, and realizes long-term, robust target tracking in complex scenes. In addition, the naive Bayes classifier is approximated by a Taylor expansion, which lowers the computational complexity compared with the exact classifier and eases implementation on hardware. Finally, the positive and negative templates and the candidate samples are sampled by particle filtering, and the target motion is modeled by an affine transformation, so the method adapts to changes in the target's scale, rotation, translation and shear angle, giving good adaptability.
It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc. Additionally, it will also be readily appreciated that the steps may be performed synchronously or asynchronously, e.g., among multiple modules/processes/threads.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash disk, a removable hard disk, etc.) or on a network, and includes several instructions to make a computing device (which may be a personal computer, a server, or a network device, etc.) execute the method according to the embodiments of the present disclosure.
Fig. 5 shows a schematic diagram of a target tracking apparatus 400 for homeland resource monitoring according to an example embodiment of the present disclosure. For example, the apparatus 400 may be provided as a server. Referring to fig. 5, apparatus 400 includes a processing component 422, which further includes one or more processors, and memory resources, represented by memory 432, for storing instructions, such as applications, that are executable by processing component 422. The application programs stored in memory 432 may include one or more modules that each correspond to a set of instructions. Further, the processing component 422 is configured to execute instructions to perform the above-described methods.
The apparatus 400 may also include a power component 426 configured to perform power management of the apparatus 400, a wired or wireless network interface 450 configured to connect the apparatus 400 to a network (e.g., a video surveillance network), and an input/output (I/O) interface 458. The apparatus 400 may operate based on an operating system stored in the memory 432, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (8)

1. A target tracking method for monitoring homeland resources is characterized by comprising the following steps:
acquiring a current frame image, and processing a previous frame image of the current frame image according to a Bayesian classifier obtained by pre-training to determine a target position of a tracking target in the current frame image;
acquiring a next frame image, taking the next frame image as the current frame image, and returning to the above steps to repeat the tracking processing until all frame images of the image sequence have been processed;
during the tracking processing, when a predetermined number of frames is reached, fitting the maximum classifier responses corresponding to the images of the predetermined number of frames into a response curve, wherein the predetermined number of frames is greater than or equal to 5; and judging whether the tracking target in the current frame image disappears according to the variation trend of the response curve;
the judging whether the tracking target in the current frame image disappears according to the variation trend of the response curve comprises the following steps: if the response curve continuously drops for more than five frames and meets the following preset conditions, the tracking target is considered to disappear: the preset conditions are as follows: the first predetermined value is greater than a times the second predetermined value; wherein, a is 0.8; the first preset value is a difference value between a maximum classifier response corresponding to the initial mutation point and a maximum classifier response corresponding to the last mutation point on the response curve; each mutation point corresponds to one frame of image; the second preset value is the difference value between the maximum classifier response and the minimum classifier response corresponding to the fifth frame on the response curve before mutation;
and during the tracking processing, when the tracking target disappears, predicting the target position of the tracking target in the next frame image according to a preset target position prediction algorithm.
2. The method of claim 1, wherein the processing the previous frame of image of the current frame of image according to the pre-trained bayesian classifier to determine the target position of the tracking target in the current frame of image comprises:
randomly sampling by using a particle filter in a circular range with a preset radius around a target position in the previous frame of image to obtain a first preset number of candidate samples;
classifying each obtained candidate sample according to the Bayesian classifier obtained by pre-training, calculating the classifier response of each candidate sample, and determining the candidate sample with the maximum classifier response as the tracking target in the current frame image so as to determine the target position.
3. The method of claim 2, wherein the pre-trained bayesian classifier is determined by:
acquiring a first frame image, and selecting a tracking area of the tracking target in the first frame image;
randomly selecting a second preset number of positive and negative templates in the tracking area by using a particle filter;
and training a naive Bayes classifier according to the second predetermined number of positive and negative templates to obtain the Bayes classifier obtained by pre-training.
4. The method of claim 3, wherein the pre-trained Bayesian classifier is as follows:

H(\mathbf{x}) = \sum_{i=1}^{n} \log\frac{p(x_i \mid y=1)}{p(x_i \mid y=0)}

wherein the prior probability is uniformly distributed, i.e. p(y=1) = p(y=0); y \in \{0,1\} is a binary variable representing the sample label; n is the number of candidate samples to be classified, and x_i is the feature vector of each candidate sample to be classified;

p(x_i \mid y=1) and p(x_i \mid y=0) are estimated by Gaussian distributions with the four parameters (\mu_i^1, \sigma_i^1, \mu_i^0, \sigma_i^0):

p(x_i \mid y=1) \sim N(\mu_i^1, \sigma_i^1), \qquad p(x_i \mid y=0) \sim N(\mu_i^0, \sigma_i^0)

wherein \mu_i^1 and \sigma_i^1 are the mean and standard deviation of the positive templates, and \mu_i^0 and \sigma_i^0 are the mean and standard deviation of the negative templates.
5. The method of claim 1, further comprising:
and if the tracking target does not disappear, updating the pre-trained Bayes classifier every five frames so as to determine the target position of the tracking target by processing according to the updated Bayes classifier.
6. The method of claim 2, wherein the predicting the target position of the tracking target in the next frame of image according to a preset target position prediction algorithm comprises:
and calculating and predicting the target position of the tracking target in the next frame of image by adopting a Kalman filtering algorithm according to the position information of the tracking target before disappearance.
7. The method of claim 6, further comprising:
detecting whether the tracking target reappears in the prediction process after the tracking target disappears;
if yes, ending the prediction process, and processing the current frame image with the tracking target reappeared according to the pre-trained Bayes classifier to obtain the tracking target in the corresponding next frame image.
8. The method of claim 7, wherein the detecting whether the tracking target reappears comprises:
simultaneously calculating a confidence value for each of the candidate samples during the prediction process;
and judging whether the tracking target reappears according to the change trend of the confidence value of each candidate sample in the whole tracking processing process.
CN201611259923.4A 2016-12-30 2016-12-30 Target tracking method for monitoring homeland resources Active CN107066922B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611259923.4A CN107066922B (en) 2016-12-30 2016-12-30 Target tracking method for monitoring homeland resources


Publications (2)

Publication Number Publication Date
CN107066922A CN107066922A (en) 2017-08-18
CN107066922B true CN107066922B (en) 2021-05-07

Family

ID=59624217


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108230358A (en) * 2017-10-27 2018-06-29 北京市商汤科技开发有限公司 Target following and neural network training method, device, storage medium, electronic equipment
CN107784291A (en) * 2017-11-03 2018-03-09 北京清瑞维航技术发展有限公司 target detection tracking method and device based on infrared video
CN111488776B (en) * 2019-01-25 2023-08-08 北京地平线机器人技术研发有限公司 Object detection method, object detection device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339240A (en) * 2008-08-26 2009-01-07 中国人民解放军海军工程大学 Wireless sensor network object tracking method based on double layer forecast mechanism
CN102053247A (en) * 2009-10-28 2011-05-11 中国科学院电子学研究所 Phase correction method for three-dimensional imaging of multi-base line synthetic aperture radar
CN104392467A (en) * 2014-11-18 2015-03-04 西北工业大学 Video target tracking method based on compressive sensing
CN104933733A (en) * 2015-06-12 2015-09-23 西北工业大学 Target tracking method based on sparse feature selection
CN105096345A (en) * 2015-09-15 2015-11-25 电子科技大学 Target tracking method based on dynamic measurement matrix and target tracking system based on dynamic measurement matrix
CN105389546A (en) * 2015-10-22 2016-03-09 四川膨旭科技有限公司 System for identifying person at night during vehicle driving process

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106250878B (en) * 2016-08-19 2019-12-31 中山大学 Multi-modal target tracking method combining visible light and infrared images




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant