CN107832683A - Target tracking method and system - Google Patents

Target tracking method and system

Info

Publication number
CN107832683A
CN107832683A (application CN201711003044.XA)
Authority
CN
China
Prior art keywords
target area
frame
video
target
tracked
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711003044.XA
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bright Wind Taiwan (shanghai) Mdt Infotech Ltd
Original Assignee
Bright Wind Taiwan (shanghai) Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bright Wind Taiwan (shanghai) Mdt Infotech Ltd
Priority to CN201711003044.XA
Publication of CN107832683A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/759: Region-based matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The purpose of this application is to provide a target tracking method and system. The method includes: tracking a preset target region in each acquired video frame; verifying the tracked target region in the video frames at intervals; and, when verification shows that tracking of the target region has failed, adjusting the tracked target region starting from the verified video frame. Compared with the prior art, verifying the tracked target region at intervals both ensures that tracking stays timely and improves tracking accuracy.

Description

Target tracking method and system
Technical field
This application relates to the field of computers, and in particular to a target tracking method and system.
Background technology
Target tracking is a challenging task in the fields of computer vision and augmented reality. Its difficulty lies in handling diverse visual information and motion information. Visual information includes, but is not limited to, the target's contour, color, and texture. Motion information includes, but is not limited to, motion changes, illumination changes, scene changes, occlusion, and motion blur. The diversity and variability of visual information make existing tracking techniques prone to losing the target in complex scenes.
In addition, through long development, target tracking technology has improved considerably in both speed and precision. However, fast tracking methods are usually less precise, while high-precision methods often cannot track online.
Summary of the invention
The purpose of this application is to provide a target tracking method and system.
According to one aspect of the application, a target tracking method is provided. The method includes: tracking a preset target region in each acquired video frame; verifying the tracked target region in the video frames at intervals; and, when verification shows that tracking of the target region has failed, adjusting the tracked target region starting from the verified video frame.
In some embodiments, tracking a preset target region in each acquired video frame includes: matching a corresponding target region in the current video frame based on the target region tracked in the previous video frame.
In some embodiments, this matching includes: screening the target-region features of multiple candidate positions in the current video frame to obtain the position of the target region in the current frame, where each candidate position is set according to the position of the target region tracked in the previous video frame.
In some embodiments, this matching includes: screening the target-region features of regions of multiple candidate sizes in the current video frame to obtain the size of the target region in the current frame, where each candidate size is set according to the size of the target region tracked in the previous video frame.
In some embodiments, the verification interval is at least one of: a preset duration, a preset number of video frames, or the interval between verification instructions.
In some embodiments, verifying the target-region tracking includes: separately extracting feature information of the preset target region and feature information of at least one candidate target region, including the tracked target region, in the video frame to be verified; and verifying the target region in that frame by matching the feature-similarity confidence of each candidate target region against the preset target region, thereby obtaining a verified target region.
In some embodiments, when verification shows that tracking of the target region has failed, the tracked target region is adjusted, starting from the verified video frame, based on the verified and corrected target region.
In some embodiments, adjusting the target region tracked in each subsequent video frame, based on the corrected target region and starting from the selected video frame, includes: re-tracking the target region in the corresponding video frame based on the target region corrected in the previous frame.
In some embodiments, this re-tracking includes: tracking the target in the corresponding video frame within a preset adjustable window centered on the target region corrected in the previous frame.
According to another aspect of the application, a computer program product is also provided; when the computer program product is executed by a computer device, the method of any of the above is performed.
According to a further aspect of the application, a computer device is also provided. The computer device includes: one or more processors; and memory for storing one or more computer programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any of the above.
According to yet another aspect of the application, a target tracking system is also provided. The system includes: a first module for tracking a preset target region in each acquired video frame; and a second module for verifying, at intervals, the target region tracked in the video frames and, when verification shows that tracking has failed, controlling the first module to adjust the tracked target region starting from the verified video frame.
In some embodiments, the first module matches a corresponding target region in the current video frame based on the target region tracked in the previous video frame.
In some embodiments, the first module screens the target-region features of multiple candidate positions in the current video frame to obtain the position of the target region in the current frame, where each candidate position is set according to the position of the target region tracked in the previous video frame.
In some embodiments, the first module screens the target-region features of regions of multiple candidate sizes in the current video frame to obtain the size of the target region in the current frame, where each candidate size is set according to the size of the target region tracked in the previous video frame.
In some embodiments, the verification interval is at least one of: a preset duration, a preset number of video frames, or the interval between verification instructions.
In some embodiments, the second module verifies the target-region tracking by: separately extracting feature information of the preset target region and feature information of at least one candidate target region, including the tracked target region, in the video frame to be verified; and verifying the target region in that frame by matching the feature-similarity confidence of each candidate target region against the preset target region, thereby obtaining a verified target region.
In some embodiments, when verification shows that tracking has failed, the first module adjusts the tracked target region, starting from the verified video frame, based on the verified and corrected target region.
In some embodiments, the first module adjusts the target region tracked in each subsequent frame, starting from the selected video frame, by re-tracking the target region in the corresponding video frame based on the target region corrected in the previous frame.
In some embodiments, the first module tracks the target in the corresponding video frame within a preset adjustable window centered on the target region corrected in the previous frame.
Compared with the prior art, verifying the tracked target region at intervals both ensures that tracking stays timely and improves tracking accuracy.
Brief description of the drawings
Other features, objects and advantages of the application will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the drawings:
Fig. 1 shows a schematic diagram of a target tracking system according to one aspect of the application;
Fig. 2 shows an example of adjusting candidate target-region sizes according to the application;
Fig. 3 shows a flowchart of a target tracking method according to another aspect of the application;
Fig. 4a and Fig. 4b are precision and success-rate comparison charts for one-pass evaluation (OPE) tests on the OTB2013 data;
Fig. 4c and Fig. 4d are precision and success-rate comparison charts for OPE tests on the OTB2015 data.
In the drawings, the same or similar reference signs denote the same or similar parts.
Detailed description of embodiments
The application is described in further detail below with reference to the drawings.
In a typical configuration of the application, a terminal, a device of the service network, and a trusted party each include one or more processors (CPUs), an input/output interface, a network interface, and memory.
The memory may include computer-readable media in the form of volatile memory, random-access memory (RAM), and/or nonvolatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media exclude transitory media, such as modulated data signals and carrier waves.
Fig. 1 shows a target tracking system according to one aspect of the application. The target tracking system includes software and hardware installed in a computer device. The computer device includes, but is not limited to, a personal computer, a smart terminal, or a server, where the server includes, but is not limited to, a single server, a server cluster, or a cloud-based service platform. The computer device may be placed in a control room, a remote monitoring machine room, a third-party machine room, or the like, where target tracking is carried out.
The target tracking system 1 includes a first module 11 and a second module 12. The first module 11 and the second module 12 are divided according to the data interaction between program modules, the program functions, and the execution of the program modules. Each of the first module 11 and the second module 12 comprises the hardware of the computer device that is invoked, in sequence, when the program corresponding to the respective step is executed, together with the corresponding program module.
The first module 11 tracks a preset target region in each acquired video frame. Here, to determine the target region to be tracked in a video file, target-region features obtained in advance are matched against the first frame image of the video file to find the target region in the first frame, and target-region tracking is then performed on the chronologically ordered video frames of the file. Alternatively, the first frame image of the video file is shown to the user, who is prompted to annotate the target region to be tracked; the target region in the first frame is thereby determined, and tracking of the target region then proceeds. Here, the video file includes, but is not limited to, a local or remote file saved in nonvolatile memory in a video file format, or a continuously growing video cache file stored in volatile memory in a video format.
The first module 11 may start from the first frame image of the whole video file, or may start target-region tracking from the video frame in which the target tracking region has been determined by annotation. Here, the first module 11 may use a known target tracking algorithm to track the target region in the video frames following the first frame in which the target region has been determined. For example, the first module 11 may use a tracking algorithm based on a discriminative model or on a generative model. Discriminative-model target tracking uses a learned classifier to distinguish the target region from the background, treating tracking as a binary classification problem, i.e., separating target from background. Classifier-learning methods include, but are not limited to: multiple-instance learning, compressive sensing, P-N learning, structured support vector machines, online boosting, or classifier-learning algorithms that combine and improve on the above. Generative-model target tracking treats tracking as the process of finding, against the background, the region most similar to the target; examples include incremental subspace learning and sparse representation.
In some embodiments, to improve tracking efficiency and reduce computation, the first module 11 matches a corresponding target region in the current video frame based on the target region tracked in the previous video frame. For example, based on the position (x_{n-1}, y_{n-1}, w, h) of the target region determined in video frame (n-1), image blocks are taken starting from the same or a nearby position in video frame n and classified by the learned classifier, which lets the classifier's results converge as early as possible. As another example, based on the position (x_{n-1}, y_{n-1}, w, h) of the target region determined in video frame (n-1), candidate target regions are divided starting from the same or a nearby position in video frame n and feature matching is performed on each candidate target region, which likewise improves the efficiency of tracking the target region in frame n.
In one embodiment, the first module 11 first selects multiple candidate positions in the current video frame based on the position of the target region tracked in the previous frame, extracts target-region features from the image at each candidate position, and then matches the features of each candidate position against those from the previous frame; the candidate target region corresponding to the candidate position with the highest feature-similarity confidence is taken as the target region tracked in the current frame. The candidate positions are obtained by taking the pixel position of the target region in the previous frame, and the width and height of the target region in pixels, as the reference, and offsetting the corresponding pixel position and width/height in the current frame with a preset offset step as the offset unit. Each candidate position in the current frame yields an image block that serves as a candidate target region. The following example describes how the first module 11 screens candidate target regions for target tracking: an optimal filter (also called a template) is learned on the predetermined target region, such that convolving the template with each candidate target region yields output responses, and the candidate target region corresponding to the maximum output response (the candidate target region most similar to the true region of the tracked target) is the target region tracked in the current frame. In this tracking scheme, for each input frame, the filter template only needs to be continuously corrected so that the output response is maximized.
The input to this tracking scheme is the labeled training image block (a rectangular box) containing the tracked target in the first frame; g denotes the ideal output corresponding to that training image block.
The first module 11 designs the feature f of the input image block as a d-dimensional feature map, where f^l denotes the feature on the l-th channel. The objective is then to learn an optimal correlation filter h consisting of one filter h^l per channel, by minimizing the cost function:

    ε = ‖ Σ_{l=1}^{d} h^l * f^l − g ‖² + λ Σ_{l=1}^{d} ‖h^l‖²   (1)

where g denotes the ideal output corresponding to the training image block, λ (λ ≥ 0) is the penalty coefficient of the regularization term, * denotes circular convolution, and h^l is the filter for the l-th channel.
Using the fast Fourier transform (FFT), the objective function (1) can be solved quickly, as follows:

    H^l = (Ḡ F^l) / (Σ_{k=1}^{d} F̄^k F^k + λ)   (2)

where the capital letters in formula (2) denote the discrete Fourier transform of the corresponding variables, and a bar denotes the complex conjugate.
To improve efficiency, a simple linear update strategy is generally used to update the numerator A_t^l and denominator B_{t,trans} of the position filter H_t (the subscript t denotes frame t), as follows:

    A_t^l = (1 − η) A_{t−1}^l + η Ḡ_t F_t^l
    B_{t,trans} = (1 − η) B_{t−1,trans} + η Σ_{k=1}^{d} F̄_t^k F_t^k   (3)

where η is the learning rate of the update.
Given a new input image block z, with z^l denoting the feature of the l-th channel and Z^l the DFT of z^l, the output response y can be computed as

    y = 𝔉⁻¹ { Σ_{l=1}^{d} Ā^l Z^l / (B + λ) }   (4)

where 𝔉⁻¹ denotes the inverse Fourier transform. The tracking result (the position of the tracked target in the current frame) is determined by the position of the maximum value in y.
In another embodiment, the first module 11 screens the target-region features of regions of multiple candidate sizes in the current video frame to obtain the size of the target region in the current frame, where each candidate size is set according to the size of the target region tracked in the previous video frame. Similar to the candidate target regions extracted from candidate positions above, the first module 11 may also extract multiple candidate target regions in the current frame according to the width and height of the target region tracked in the previous frame. For example, the current frame is traversed with the size of the previously tracked target region to obtain multiple candidate target regions; or it is traversed with region sizes obtained by expanding or shrinking the previously tracked size. Feature matching is performed on each candidate target region as described above, and the candidate target region with the largest feature-similarity confidence is taken as the target region tracked in the current frame.
It should be noted that improvements that those skilled in the art make on the basis of the above candidate-position or candidate-size tracking schemes, by combining them with existing target tracking algorithms, should be regarded as technical extensions based on the technical ideas provided herein, and fall within the scope of protection described herein.
In yet another embodiment, the first module 11 combines the candidate-position and candidate-size tracking schemes to track the target region in the current video frame, which achieves fast target tracking over as small a range as possible. For example, on top of the candidate-position tracking algorithm, in order to realize a scale filter, the first module 11 constructs a three-dimensional scale-space filter in advance and estimates the position and scale of the candidate target region simultaneously. As shown in Fig. 2, a scale pyramid of the target is constructed, containing M different scales, where f_t(1), f_t(2), and f_t(M) are the features of the image blocks at scale 1, scale 2, and scale M, respectively. For each scale, the numerator A_{t,scale} and denominator B_{t,scale} of its scale filter are computed by formula (3); the output response of each scale is computed by formula (4); and the scale corresponding to the maximum of the M output responses is used to estimate the size of the target region in the current frame (frame t).
The first module 11 can perform target tracking on the video file in real time. To guard against tracking errors in the target region it tracks, the first module 11 submits, at intervals, video frames with a determined target region to the second module 12 for verification. The first module 11 may provide such frames to the second module 12 at intervals of a preset duration, at intervals of a preset number of video frames, or upon verification instructions issued by the second module 12. To keep the first module 11 fast, it continues tracking the target region while the second module 12 performs verification; if the second module 12 reports success, the first module 11 simply continues, whereas on failure the first module 11 rolls back to the video frame that failed verification and tracks again. The verification process of the second module 12 is described first below, followed by the re-tracking process of the first module 11.
The second module 12 verifies, at intervals, the target region tracked in the video frames and, when verification shows that tracking of the target region has failed, controls the first module 11 to adjust the tracked target region starting from the verified video frame.
Here, to avoid the problem that the same algorithm cannot check the correctness of the target region it itself tracked, the second module 12 verifies using a tracking algorithm different from that of the first module 11. For example, if the first module 11 tracks the target region using candidate positions and sizes, the second module 12 may redetermine the target region using a classifier-learning algorithm.
In one embodiment, the second module 12 separately extracts feature information of the preset target region and feature information of at least one candidate target region, including the tracked target region, in the video frame to be verified; it then verifies the target region in that frame by matching the feature-similarity confidence of each candidate target region against the preset target region, thereby obtaining a verified target region.
Specifically, the second module 12 may traverse the frame to be verified on its own to obtain multiple candidate target regions, among which some may overlap the target region tracked by the first module 11; alternatively, the target region tracked by the first module 11 is added back into the candidate target regions. In some embodiments, the second module 12 chooses candidate target regions in the corresponding frame within a preset adjustable window centered on the target region tracked in the frame to be verified. For example, the second module 12 takes a square centered on the target region tracked in the frame to be verified, whose side length is determined by the width w and height h of the tracking-result rectangle together with a parameter β used to adjust the search-region size, and then takes several equally sized candidate target rectangles within the square local search region in a sliding-window fashion.
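The sliding-window sampling of candidate rectangles inside the square search region can be sketched as follows. The side length of the search region is passed in as a parameter, since the patent's exact expression in w, h and β is given by a formula not reproduced here; the stride is likewise an assumption:

```python
def candidate_boxes(cx, cy, w, h, search_side, stride):
    """Slide a w x h window over a square search region of side `search_side`
    centred at (cx, cy); return (x, y, w, h) boxes fully inside the region."""
    half = search_side / 2.0
    boxes = []
    x = cx - half
    while x + w <= cx + half:
        y = cy - half
        while y + h <= cy + half:
            boxes.append((x, y, w, h))
            y += stride
        x += stride
    return boxes
```

Each returned box is one equally sized candidate target rectangle that the verifier scores against the preset target region.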
Then, the second module 12 extracts features from each candidate target region using convolutional-neural-network feature processing. For example, consider a validator built on a Siamese network with deep learning. A Siamese network is a network for judging similarity: it processes two inputs at the same time and extracts descriptors with two convolutional neural network (CNN) branches to obtain feature vectors. The feature-similarity confidence between the preset target region and each candidate target region is then judged from these feature vectors, and a loss function is constructed from the feature vectors for network training, so that the preset target region and the tracked target region provide more feature information.
In this application, the two CNN branches used by the second module 12 borrow the network structure of the VGGNet deep convolutional neural network. Unlike VGGNet, however, an extra region pooling layer (RPL) is added to each CNN branch. The RPL is added because the validator ν must not only verify tracking results but also perform target detection. Target detection can be regarded as a multiple-verification process: each verification checks one candidate region, and the region closest in appearance to the target is finally chosen as the detection result. The RPL allows the Siamese network to process multiple candidate regions simultaneously, greatly reducing the computation required for detection.
After detection by the second module 12, if the feature-similarity confidence of the target region tracked by the first module 11 is below a preset threshold, verification is deemed to have failed. In this case, the second module 12 feeds back to the first module 11, as the verified and corrected target region, the candidate target region with the highest feature-similarity confidence, or one whose feature-similarity confidence exceeds the preset threshold.
In yet another embodiment, the second module 12 may first extract, using convolutional-neural-network feature processing, the feature information of the preset target region and of the target region tracked in the frame to be verified, and then compute the feature-similarity confidence between the two. If the resulting confidence is below the preset threshold, candidate target regions are chosen anew, and feature extraction and similarity-confidence computation are repeated until the candidate target region with the highest feature-similarity confidence, or one whose confidence exceeds the preset threshold, is found and fed back to the first module 11 as the verified and corrected target region.
In each of the verification and tracking modes above, multiple candidate target regions may need to be tracked and computed. By using a region pooling layer, the second module 12 can process multiple candidate targets within the search region simultaneously, greatly reducing the computation required for detection.
Here, the second module 12 may adjust, according to the ratio of verification successes to failures, the interval at which it issues verification instructions to the first module 11, or the interval at which the first module 11 submits video frames to be verified, so as to improve tracking efficiency and reduce the time spent on verification and rollback re-tracking.
The first module 11 rolls back to the video frame where verification indicated a tracking failure and, based on the verified and corrected target area, adjusts the tracked target area starting from that frame. Here, the first module 11 takes the corrected video frame as the first frame and applies the tracking algorithms described above to the subsequent frames being re-tracked. In some embodiments, the first module 11 re-tracks the target area in the corresponding video frame based on the corrected target area in the previous frame; for example, the position and size of the corrected target area in the previous frame are used to adjust the parameters in formulas (3) and (4) above, after which the target area in the current frame is tracked.
Fig. 3 shows a target tracking method according to one aspect of the application. The method is performed mainly by a target tracking system.
In step S110, a preset target area is tracked in each acquired video frame. Here, to identify the target area to be tracked in a video file, feature matching is performed in the first frame image of the video file using target-area features obtained in advance, so as to locate the target area in the first frame, after which target-area tracking is carried out on the chronologically ordered video frames. Alternatively, the first frame image of the video file is displayed to the user, who is prompted to annotate the target area to be tracked, thereby determining the target area in the first frame before tracking proceeds. Here, the video file includes, but is not limited to: a local or remote file stored in non-volatile memory in a video file format, or an ever-growing video cache file stored in volatile memory in a video format.
Rather than starting from the first frame of the whole video file, the target tracking system may also begin target-area tracking from an annotated video frame in which the target tracking region has been determined. Here, the target tracking system may use a known target tracking algorithm to track the target area in the frames following the first video frame in which the target area has been determined, for example a tracking algorithm based on a discriminative model or on a generative model. A discriminative target tracking method separates the target area from the background with a learning classifier, treating tracking as a binary classification problem of distinguishing the target from the background. Such classifier learning methods include, but are not limited to: multiple-instance learning, compressed sensing, P-N learning, structured support vector machines, online boosting, or classifier learning algorithms that combine and improve upon the above. A generative target tracking algorithm treats tracking as the process of finding the region most similar to the target against the background, e.g. incremental subspace learning, sparse representation, and so on.
In some embodiments, to improve tracking efficiency and reduce computation, the target tracking system matches the consistent target area in the current video frame based on the target area tracked in the previous video frame. For example, based on the position (x_{n-1}, y_{n-1}, w, h) of the target area determined in frame (n-1), image blocks are taken, and classified by the learning classifier, starting at or near the same position in frame n, which lets the classification result of the learning classifier converge as early as possible. As another example, based on the same position, candidate target regions are divided starting at or near that position in frame n and feature matching is performed on each candidate, which likewise improves the efficiency of locating the target area in frame n.
In one embodiment, based on the position of the target area tracked in the previous video frame, the target tracking system first selects multiple candidate positions in the current frame and extracts target-area features from the image at each candidate position; it then matches the features of each candidate position against those of the previous frame and takes the candidate target region corresponding to the candidate position with the highest feature-similarity confidence as the target area tracked in the current frame. The candidate positions are obtained by offsetting, with a preset offset step, the corresponding pixel position and the width/height pixels in the current frame, taking as reference the pixel position of the target area in the previous frame and the width/height pixels it occupies. Each candidate position in the current frame yields one image block serving as a candidate target region. The following example describes how the target tracking system screens the candidate target regions: an optimal filter (also called a template) is learned on the preset target area, the template is convolved with each candidate target region, and the candidate target region producing the maximum output response (i.e. the greatest degree of similarity between the candidate region and the tracked target) is the target area tracked in the current frame. In this tracking mode, for each input frame it is only necessary to continually correct the filter template so that the output response is maximized.
The input to this tracking mode is a training image block (a rectangular frame) containing the tracked target, annotated in the first frame of the image sequence, with g denoting the ideal output corresponding to the training image block.
The target tracking system designs the feature f of the input image block as a d-dimensional feature vector, with f^l denoting the feature vector on the l-th channel. The objective can then be regarded as learning an optimal correlation filter h comprising d components, i.e. h is composed of the filter h^l corresponding to each channel, expressed mathematically by minimizing the cost function:
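The typeset form of formula (1) did not survive extraction. Based on the surrounding definitions (per-channel filters h^l, features f^l, ideal output g, penalty coefficient λ and circular convolution *), it is plausibly the standard multi-channel correlation-filter objective of DSST-style trackers; a reconstruction:

```latex
\varepsilon = \Bigl\| \sum_{l=1}^{d} h^{l} * f^{l} - g \Bigr\|^{2}
            + \lambda \sum_{l=1}^{d} \bigl\| h^{l} \bigr\|^{2} \tag{1}
```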
where g denotes the ideal output corresponding to the training image block, λ (λ ≥ 0) is the penalty coefficient of the regularization term, * denotes circular convolution, and h^l is the filter corresponding to the l-th channel.
The objective function (1) can be solved rapidly using the fast Fourier transform (FFT), as follows:
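Formula (2) itself is missing from the extracted text. Consistent with the note that capital letters denote discrete Fourier transforms and the bar denotes complex conjugation, the closed-form frequency-domain solution is plausibly:

```latex
H^{l} = \frac{\bar{G}\,F^{l}}{\sum_{k=1}^{d} \bar{F}^{k} F^{k} + \lambda} \tag{2}
```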
where the capital letters in formula (2) denote the discrete Fourier transforms of the corresponding variables, and the bar denotes the complex conjugate.
To improve efficiency, a simple linear update strategy is generally used in this tracking mode to update, respectively, the numerator part A_t^l and the denominator part B_{t,trans} of the position filter H_t^l (the subscript t denotes frame t), as follows:
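The update equations labelled (3) are also missing from the extracted text. A plausible reconstruction of the linear update of the numerator A and denominator B with learning rate η, matching DSST-style formulations and the solution in formula (2):

```latex
A_{t}^{l} = (1-\eta)\,A_{t-1}^{l} + \eta\,\bar{G}_{t} F_{t}^{l},
\qquad
B_{t,\mathrm{trans}} = (1-\eta)\,B_{t-1,\mathrm{trans}}
  + \eta \sum_{k=1}^{d} \bar{F}_{t}^{k} F_{t}^{k} \tag{3}
```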
where η is the learning rate of the update.
Given a new input image block z, with z^l denoting the feature of the l-th channel and Z^l the discrete Fourier transform of z^l, the output response y can be computed as follows:
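Formula (4) is likewise missing; given the inverse-transform note that follows, the response is plausibly:

```latex
y = \mathcal{F}^{-1}\!\left\{ \frac{\sum_{l=1}^{d} \bar{A}^{l} Z^{l}}
                                   {B_{\mathrm{trans}} + \lambda} \right\} \tag{4}
```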
where F^{-1} denotes the inverse Fourier transform; the tracking result (the position of the tracked target in the current frame) is determined by the position of the maximum value in y.
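As a concrete illustration, the filter pipeline of formulas (1)-(4) can be sketched in a few lines of NumPy. This is a minimal single-channel sketch: the Gaussian ideal output and the λ and η defaults are assumptions for illustration, not values stated in the application.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Ideal output g: a Gaussian peak centered in the patch."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))

class CorrelationFilterTracker:
    """Single-channel sketch of the filter defined by formulas (1)-(4)."""

    def __init__(self, lam=0.01, eta=0.025):
        self.lam, self.eta = lam, eta   # regularization and learning rate
        self.A = self.B = None          # numerator / denominator (Fourier domain)

    def init(self, patch):
        F = np.fft.fft2(patch)
        G = np.fft.fft2(gaussian_response(patch.shape))
        self.A, self.B = np.conj(G) * F, np.conj(F) * F

    def update(self, patch):
        # linear update of numerator and denominator, as in formula (3)
        F = np.fft.fft2(patch)
        G = np.fft.fft2(gaussian_response(patch.shape))
        self.A = (1 - self.eta) * self.A + self.eta * np.conj(G) * F
        self.B = (1 - self.eta) * self.B + self.eta * np.conj(F) * F

    def respond(self, patch):
        # output response y, as in formula (4); the peak gives the new position
        Z = np.fft.fft2(patch)
        y = np.fft.ifft2(np.conj(self.A) * Z / (self.B + self.lam)).real
        return np.unravel_index(int(np.argmax(y)), y.shape)
```

When the test patch matches the template, the response peaks at the patch center, mirroring the "maximum output response" screening described above.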
In another embodiment, the target tracking system performs target-area feature screening on regions of multiple candidate sizes in the current video frame, so as to obtain the size of the target area in the current frame; each candidate-size region is set according to the size of the target area tracked in the previous frame. Similarly to the candidate-position extraction described above, the target tracking system may also extract multiple candidate target regions in the current frame according to the width and height of the target area tracked in the previous frame. For example, the current frame is traversed with the size of the previously tracked target area, or with region sizes obtained by enlarging or shrinking that size, yielding multiple candidate target regions. The feature matching described above is performed on each candidate target region, and the candidate with the maximum feature-similarity confidence is taken as the target area tracked in the current frame.
It should be noted that improvements made by those skilled in the art, on the basis of the candidate-position or candidate-size tracking modes described here and in combination with existing target tracking algorithms, should be regarded as technical extensions based on the technical ideas provided herein and fall within the scope of protection described herein.
In yet another embodiment, the target tracking system combines the candidate-position and candidate-size tracking modes to track the target area in the current video frame, so that fast target tracking can be achieved within as small a range as possible. For example, on the basis of the candidate-position tracking algorithm, in order to realize a scale filter, the target tracking system constructs a 3-dimensional scale-space filter that estimates the position and the scale of the candidate target region at the same time. As shown in Fig. 2, a scale pyramid of the target is constructed, containing M different scales, where f_t(1), f_t(2) and f_t(M) are the features of the image blocks at scale 1, scale 2 and scale M respectively. For each scale, the numerator A_{t,scale} and denominator B_{t,scale} of its scale filter are computed using formula (3). The output response of each scale is computed via formula (4), and the scale corresponding to the maximum of the M output responses is used to estimate the size of the target area in the current frame (frame t).
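A minimal sketch of how the M candidate sizes of the scale pyramid can be generated from the previous frame's target size. The scale step a and the default M are assumed DSST-style values, not values stated in the application.

```python
import numpy as np

def scale_pyramid_sizes(w, h, M=33, a=1.02):
    """Candidate (width, height) pairs for an M-level scale pyramid.

    Scale factors a**n for n = -(M-1)/2 ... (M-1)/2 around the previous
    frame's target size (w, h); M = 33 and a = 1.02 are assumed defaults.
    """
    exponents = np.arange(M) - (M - 1) / 2.0
    return [(int(round(w * s)), int(round(h * s))) for s in a ** exponents]
```

Each size in the list defines one image block of the pyramid; its features are then scored against the scale filter, and the best-responding size estimates the target size in frame t.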
The target tracking system can perform target tracking on a video file in real time. During tracking, to prevent tracking errors in the tracked target area, the system may at intervals submit video frames whose target areas have been determined to step S120 for verification. The interval may be a preset duration, a preset number of video frames, or a verification instruction issued by step S120. To improve the tracking speed of the target tracking system, the system continues tracking the target area while step S120 performs verification; if step S120 feeds back a success, the system continues tracking, otherwise it rolls back to the video frame where verification failed and tracks again. The verification process of step S120 is described first below, followed by the re-tracking process of the target tracking system.
In step S120, the target area tracked in the video frames is verified at intervals, and when verification indicates that tracking of the target area has failed, the tracked target area is adjusted starting from the verified video frame.
Here, to avoid the problem that the correctness of the tracked target area cannot be examined with the same algorithm that produced it, the target tracking system performs verification using a tracking algorithm different from that of step S110. For example, where step S110 tracks the target area using candidate positions and sizes, the target tracking system may redetermine the target area using a classifier learning algorithm.
In one embodiment, the feature information of a preset target area and the feature information of at least one candidate target region containing the tracked target area in the video frame to be verified are extracted respectively; the target area in the frame to be verified is then verified, and the verified target area obtained, by matching the feature-similarity confidence of each candidate target region against the preset target area.
Specifically, the target tracking system may separately traverse the video frame to be verified to obtain multiple candidate target regions; the obtained candidates may overlap with the tracked target area, or the target area tracked by step S110 may itself be added to the candidate set. In some embodiments, the target tracking system selects the candidate target regions in the corresponding video frame within a preset adjustable window range centered on the target area tracked in the frame to be verified. For example, the system takes a square centered on the tracked target area, whose side length is set from w and h using the scaling factor β, where w and h are the width and height of the tracking-result rectangle and β is used to control the search-area size; several equally sized candidate target rectangles are then taken within this local search region of the square in sliding-window fashion.
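The sliding-window candidate extraction can be sketched as follows. The side-length formula β·sqrt(w·h) for the square search region, and the defaults for β and the stride, are assumptions filling in the formula lost from the extracted text.

```python
import math

def candidate_windows(cx, cy, w, h, beta=2.5, stride=8):
    """Equal-size candidate rectangles taken in sliding-window fashion.

    The search region is a square of side beta * sqrt(w * h) centered on
    the tracked box (cx, cy); each candidate keeps the tracked size (w, h).
    Returns (x, y, w, h) tuples for the top-left corner of each candidate.
    """
    side = beta * math.sqrt(w * h)
    x0, y0 = cx - side / 2.0, cy - side / 2.0
    boxes = []
    x = x0
    while x + w <= x0 + side:
        y = y0
        while y + h <= y0 + side:
            boxes.append((int(x), int(y), w, h))
            y += stride
        x += stride
    return boxes
```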
The target tracking system then performs feature extraction on each candidate target region using the feature-processing method of a convolutional neural network. Take as an example a verifier built on a Siamese network with deep learning. A Siamese network is a network for judging similarity: it can process two inputs simultaneously and uses two convolutional neural network (CNN) branches to extract descriptors, obtaining feature vectors. The feature-similarity confidence between the preset target area and each candidate target region is then judged from the feature vectors, and a loss function is constructed from the feature vectors for network training, with the preset target area and the tracked target area providing more feature information.
In this application, the two CNN branches used by the target tracking system borrow the network structure of the VGGNet deep convolutional neural network. Unlike VGGNet, however, an extra region pooling layer (RPL) is added to each CNN branch. The RPL is added because the verifier ν must not only verify tracking results but also undertake the task of target detection. Target detection can be regarded as multiple verification passes: each pass verifies one candidate region, and the candidate closest in appearance to the target is finally selected as the detection result. The role of the RPL is to let the Siamese network process multiple candidate regions simultaneously, greatly reducing the computation required for detection.
If, after this check, the feature-similarity confidence of the target area tracked by step S110 is below a certain threshold, the target tracking system deems verification to have failed. In this case, the target tracking system takes the candidate target region with the highest feature-similarity confidence, or one whose feature-similarity confidence exceeds the preset threshold, as the verified and corrected target area.
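The verification decision can be sketched as follows. The stand-in feature extractor replaces the two VGG-style CNN branches, which are beyond a short example, and the threshold default is an assumption.

```python
import numpy as np

def extract_features(patch):
    """Stand-in for one Siamese branch: flatten and L2-normalize.

    In the described verifier each branch is a VGG-style CNN with a
    region pooling layer; a real descriptor extractor would go here.
    """
    v = np.asarray(patch, dtype=float).ravel()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def verify(template_patch, candidate_patches, threshold=0.8):
    """Score each candidate against the preset target's features.

    Returns (passed, best_index, best_score): verification fails when
    even the best feature-similarity confidence is below the threshold;
    otherwise the best candidate is the verified and corrected target area.
    """
    t = extract_features(template_patch)
    scores = [float(extract_features(c) @ t) for c in candidate_patches]
    best = int(np.argmax(scores))
    return scores[best] >= threshold, best, scores[best]
```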
In yet another embodiment, the target tracking system may first use the feature-processing method of a convolutional neural network to extract, respectively, the feature information of the preset target area and of the target area tracked in the video frame to be verified, and then compute the feature-similarity confidence between the two sets of feature information. If the resulting confidence is below the preset threshold, candidate target regions are selected and the feature extraction and similarity computation are repeated until a candidate target region with the highest feature-similarity confidence, or one whose confidence exceeds the preset threshold, is found and fed back, as the verified and corrected target area, for use by the tracking of step S110.
In each of the verification and tracking modes above, multiple candidate target regions may need to be tracked and computed. By using a region pooling layer, the target tracking system can process multiple candidate targets within the search region simultaneously, greatly reducing the computation required for detection.
Here, the target tracking system may adjust, according to the ratio of verification successes to failures, the interval at which verification instructions are produced or the interval between video frames to be verified, so as to improve tracking efficiency and reduce the time spent on verification and rollback re-tracking.
In step S130, the system rolls back to the video frame where verification indicated a tracking failure and, based on the verified and corrected target area, adjusts the tracked target area starting from that frame. Here, the target tracking system takes the corrected video frame as the first frame and applies the tracking algorithms described above to the subsequent frames being re-tracked. In some embodiments, the target tracking system re-tracks the target area in the corresponding video frame based on the corrected target area in the previous frame; for example, the position and size of the corrected target area in the previous frame are used to adjust the parameters in formulas (3) and (4) above, after which the target area in the current frame is tracked.
The application also provides a computer device, comprising: one or more processors; and a memory for storing one or more computer programs; when the one or more computer programs are executed by the one or more processors, the one or more processors are caused to implement the method described in steps S110-S130 above.
To verify the validity of the model, experiments were carried out on the two popular data sets OTB2013 and OTB2015, containing 50 and 100 videos respectively, all manually annotated. On these two databases we compared the target tracking scheme (PTAV) designed on the technical ideas herein against 11 other target tracking schemes, which fall into 3 classes: 1. trackers based on deep features, including SINT, HCF, SiamFC and DLT; 2. trackers based on correlation filtering, including fDSST, LCT, SRDCF, KCF and Staple; 3. other representative tracking algorithms, including TGPR, MEEM and Struck.
1. Overall performance verification
We report the comparative results under the OPE (One-Pass Evaluation) protocol, as shown in Figs. 4a-4d. The OPE evaluation includes two indices: Distance Precision Rate (DPR) and Overlap Success Rate (OSR); Figs. 4a and 4b compare accuracy and correctness under OPE on the OTB2013 data, and Figs. 4c and 4d on the OTB2015 data. Overall, compared with the other target tracking schemes, the scheme described herein achieves favorable results in both DPR and OSR. In addition, we report the DPR at a distance of 20 pixels, the OSR at an overlap rate of 0.5, and the tracking speed, as shown in Table 1. The experiments show that the PTAV tracking algorithm achieves the best results among all compared methods. On the OTB2015 data set, PTAV achieves 84.9% DPR and 77.6% OSR. Although the HCF tracking algorithm, which models target appearance with deep features, achieves 83.7% DPR and 65.6% OSR, the scheme described herein achieves better results; moreover, thanks to the parallel framework, the tracking speed of the application is 27 fps whereas HCF runs at only about 10 fps. Compared with the SINT tracking algorithm, PTAV raises DPR from 77.3% to 84.9% and OSR from 70.3% to 77.6%; furthermore, PTAV can run in real time, while SINT cannot. Compared with the baseline method fDSST, PTAV significantly improves DPR (by 12.9%) and OSR (by 10.0%).
Table 1
2. Performance evaluation based on attributes
We further analyze the performance of the application's tracking algorithm on OTB2015 under different attributes. Table 2 shows the comparison of the application's tracking algorithm (PTAV) with the other 5 best trackers on 11 attributes. For DPR, the application's tracking algorithm achieves the best result on 8 of the 11 attributes, and also achieves good results on the remaining 3 (FM, IPR and SV). Compared with trackers based on deep features, the application's tracking algorithm can better locate the position of the target in the video. On the other hand, in the OSR-based assessment, the application's tracking algorithm achieves the best result on all 11 attributes. Compared with trackers based on correlation filtering and with the typical tracking algorithm MEEM, the application's tracking algorithm is more robust when the target is occluded or when background clutter or low resolution occurs.
Table 2
Here, the attributes compared between the target tracking algorithm herein and other advanced methods are: background clutter (BC), deformation (DEF), fast motion (FM), in-plane rotation (IPR), illumination variation (IV), low resolution (LR), motion blur (MB), occlusion (OCC), out-of-plane rotation (OPR), out-of-view (OV), and scale variation (SV) (bold indicates the best result).
3. Analysis of different validation intervals V
In the application's tracking scheme (PTAV), different validation intervals V may affect speed and accuracy. With a smaller validation interval V, the tracker sends verification requests more frequently, requiring the verifier to perform a large amount of computation and thereby reducing efficiency. Conversely, a large validation interval may cause PTAV to lose the target because the appearance of the tracked object changes too much. Once the target is lost, the tracker in PTAV may be updated by the context into the tracking model until the next verification; even if the verifier can relocate the target, the tracker may still lose the tracked object because the target appearance has changed too much. Table 3 illustrates the influence of different validation intervals on PTAV on OTB2013. Considering both precision and speed, we set the validation interval V to 10.
Table 3
where V denotes different validation intervals.
4. Comparison of dual-thread and single-thread execution
In PTAV, the tracker does not rely on the verifier most of the time. To improve the efficiency of the whole system, we handle tracking and verification in two independent threads, rather than executing tracking and verification serially in one thread. The advantage is that the tracker can start processing the next frame without waiting for the verifier to return a result, and only needs to roll back and adjust itself when it receives negative feedback from the verifier. Because all intermediate states are kept during tracking, the tracker's rollback can be performed quickly without recomputation. Table 4 shows the speed comparison between dual-thread and single-thread execution. As can be seen from Table 4, parallel processing of tracking and verification with two threads can effectively improve the operational efficiency of the whole system.
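The two-thread division of labor can be sketched with Python's threading and queue modules. Actual frame tracking and verification are stubbed out, so this only illustrates the non-blocking hand-off, the kept intermediate states, and the rollback feedback described above.

```python
import queue
import threading

def run_ptav(num_frames, V=10, verify_fn=lambda frame: True):
    """Toy skeleton of PTAV's two-thread tracker/verifier split.

    The tracker (here, the main loop) never blocks on the verifier; it
    keeps every intermediate state so that a negative verification
    result only triggers a cheap rollback. V is the validation interval.
    """
    requests, results = queue.Queue(), queue.Queue()
    states = {}  # intermediate states kept so rollback needs no recompute

    def verifier():
        while True:
            item = requests.get()
            if item is None:          # shutdown signal
                break
            frame_id, frame = item
            results.put((frame_id, verify_fn(frame)))

    worker = threading.Thread(target=verifier)
    worker.start()

    processed, rollbacks = 0, 0
    for frame_id in range(num_frames):
        states[frame_id] = ("tracker-state", frame_id)  # track + keep state
        processed += 1
        if frame_id % V == 0:                           # submit every V-th frame
            requests.put((frame_id, "frame-%d" % frame_id))
        while not results.empty():                      # non-blocking feedback
            fid, ok = results.get()
            if not ok:
                rollbacks += 1                          # roll back from states[fid]
    requests.put(None)
    worker.join()
    while not results.empty():                          # drain late results
        fid, ok = results.get()
        if not ok:
            rollbacks += 1
    return processed, rollbacks
```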
Table 4
5. Analysis of different base trackers
In PTAV, the tracker must be efficient most of the time and as accurate as possible. To analyze the influence of different trackers on the whole system, we compare two different trackers, fDSST and KCF. Compared with fDSST, the KCF tracking algorithm is more efficient, but its precision over short intervals is indeed lower than fDSST's. The comparative results on OTB2013 and OTB2015 are shown in Table 5. As can be seen from the table, using fDSST as the tracker is substantially better than using KCF. Although KCF is more efficient than fDSST, its lower precision over short intervals requires a large amount of verification and detection, which ultimately severely affects the performance of the whole system. Conversely, fDSST is more accurate over short intervals, greatly reducing the demand for verification and improving system performance.
Table 5
Table 5 compares different trackers at validation interval V = 10 (bold indicates the best result).
In summary, by verifying the tracked target area at intervals, the target tracking method and system of the application can guarantee tracking timeliness while improving tracking accuracy. The application thus effectively overcomes various shortcomings of the prior art and has high industrial value.
It is obvious to those skilled in the art that the application is not limited to the details of the above exemplary embodiments, and that the application can be realized in other specific forms without departing from the spirit or essential characteristics herein. The embodiments should therefore be regarded in all respects as exemplary and non-restrictive; the scope of the application is defined by the appended claims rather than by the above description, and all changes falling within the meaning and scope of equivalency of the claims are intended to be included in the application. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or devices stated in a device claim may also be realized by one unit or device through software or hardware. Words such as "first" and "second" are used to denote names and do not indicate any specific order.

Claims (20)

1. A target tracking method, wherein the method comprises:
tracking a preset target area in each acquired video frame;
verifying at intervals the target area tracked in the video frames; and
when verification indicates that tracking of the target area has failed, adjusting the tracked target area starting from the verified video frame.
2. The method according to claim 1, wherein the step of tracking a preset target area in each acquired video frame comprises: matching a consistent target area in the current video frame based on the target area tracked in the previous video frame.
3. The method according to claim 2, wherein the step of matching a consistent target area in the current video frame based on the target area tracked in the previous video frame comprises:
performing target-area feature screening on multiple candidate positions in the current video frame to obtain the position of the target area in the current video frame; wherein each candidate position is set according to the position of the target area tracked in the previous video frame.
4. The method according to claim 2, wherein the step of matching a consistent target area in the current video frame based on the target area tracked in the previous video frame comprises:
performing target-area feature screening on regions of multiple candidate sizes in the current video frame to obtain the size of the target area in the current video frame; wherein each candidate-size region is set according to the size of the target area tracked in the previous video frame.
5. The method according to claim 1, wherein the interval for verification comprises at least one of: a preset duration, a preset number of video frames, or the interval of a verification instruction.
6. The method according to claim 1, wherein the manner of verifying target-area tracking comprises:
extracting respectively the feature information of a preset target area and the feature information of at least one candidate target region, containing the tracked target area, in the video frame to be verified; and
verifying the target area in the video frame to be verified, and obtaining the verified target area, by matching the feature-similarity confidence of each candidate target region against the preset target area.
7. The method according to claim 6, wherein when verification indicates that tracking of the target area has failed, the tracked target area is adjusted, starting from the verified video frame, based on the verified and corrected target area.
8. The method according to claim 7, wherein the manner of adjusting, based on the corrected target area and starting from the selected video frame, the target area tracked in each subsequent video frame comprises: re-tracking the target area in the corresponding video frame based on the corrected target area in the previous video frame.
9. The method according to claim 6, wherein the candidate target regions are acquired by: obtaining candidate target regions within a preset adjustable window centered on the target area tracked in the video frame to be verified.
10. A computer program product, wherein when the computer program product is executed by a computer device, the method according to any one of claims 1 to 9 is performed.
11. a kind of Target Tracking System, wherein, the system includes:
First module, for tracking default target area in acquired each frame of video;
Second module, the target area tracked in the video frame for compartment of terrain checking, and for when checking target area During tracking failure, the tracked target area of first module adjustment from the frame of video verified is controlled.
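The division of labour between the two modules (the first tracks every frame; the second verifies at an interval and, on failure, the corrected area replaces the tracked one from the verified frame onward) can be sketched as a loop. `track` and `verify` stand in for the unspecified tracker and verifier, and the frame-count interval is just one of the interval types the claims allow.

```python
def run_tracker(frames, preset_box, track, verify, interval=10):
    """Skeleton of the two-module design: frame-by-frame tracking, with
    interval verification that can roll the track back onto a corrected box."""
    box = preset_box
    history = []
    for i, frame in enumerate(frames):
        box = track(frame, box)                  # first module: frame-to-frame tracking
        if i % interval == 0:
            ok, corrected = verify(frame, box)   # second module: interval verification
            if not ok:
                box = corrected                  # adjust from the verified frame onward
        history.append(box)
    return history
```

Because verification runs only every `interval` frames, the per-frame cost stays close to that of the tracker alone, which is the practical point of separating the two modules.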
12. The system according to claim 11, wherein the first module is configured to match a corresponding target area in the current video frame based on the target area tracked in the previous video frame.
13. The system according to claim 12, wherein the first module is configured to perform feature-based selection of the target area over multiple candidate positions in the current video frame, to obtain the position of the target area in the current video frame; wherein each candidate position is set according to the position of the target area tracked in the previous video frame.
14. The system according to claim 12, wherein the first module is configured to perform feature-based selection of the target area over regions of multiple candidate sizes in the current video frame, to obtain the size of the target area in the current video frame; wherein each candidate-size region is set according to the size of the target area tracked in the previous video frame.
15. The system according to claim 11, wherein the interval used for verification comprises at least one of the following: a preset interval duration, a preset number of video frames, or the interval between verification instructions.
16. The system according to claim 11, wherein the manner in which the second module verifies the target area tracking comprises:
extracting feature information of a preset target region, and feature information of at least one candidate target region in the video frame to be verified, the at least one candidate target region including the tracked target area;
verifying the tracked target area, and obtaining a verified target area, by matching the confidence that the features of each candidate target region in the video frame to be verified are similar to the features of the preset target region.
17. The system according to claim 16, wherein, when verification of the target area tracking fails, the first module adjusts the tracked target area, starting from the verified video frame, based on the verified and corrected target area.
18. The system according to claim 17, wherein the manner in which the first module, based on the corrected target area, adjusts the tracked target area in each subsequent video frame starting from the selected video frame comprises: re-tracking the target area in the corresponding video frame based on the target area corrected in the previous video frame.
19. The system according to claim 16, wherein the second module obtains the candidate target regions within a preset adjustable window centered on the target area tracked in the video frame to be verified.
20. a kind of computer equipment, wherein, the computer equipment includes:
One or more processors;
Memory, for storing one or more computer programs;
When one or more of computer programs are by one or more of computing devices so that one or more of Processor realizes method as claimed in any one of claims 1-9 wherein.
CN201711003044.XA 2017-10-24 2017-10-24 Target tracking method and system Pending CN107832683A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711003044.XA CN107832683A (en) 2017-10-24 2017-10-24 Target tracking method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711003044.XA CN107832683A (en) 2017-10-24 2017-10-24 Target tracking method and system

Publications (1)

Publication Number Publication Date
CN107832683A true CN107832683A (en) 2018-03-23

Family

ID=61649180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711003044.XA Pending CN107832683A (en) 2017-10-24 2017-10-24 Target tracking method and system

Country Status (1)

Country Link
CN (1) CN107832683A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596957A (en) * 2018-04-26 2018-09-28 Beijing Xiaomi Mobile Software Co., Ltd. Object tracking method and device
CN109086648A (en) * 2018-05-24 2018-12-25 Tongji University Target tracking method fusing target detection and feature matching
CN109242882A (en) * 2018-08-06 2019-01-18 Beijing SenseTime Technology Development Co., Ltd. Visual tracking method, device, medium and equipment
CN109271927A (en) * 2018-09-14 2019-01-25 Beihang University Cooperative monitoring method for space-based multiple platforms
CN109597431A (en) * 2018-11-05 2019-04-09 VisionVera Information Technology Co., Ltd. Target tracking method and device
CN110147768A (en) * 2019-05-22 2019-08-20 Yunnan University Target tracking method and device
CN110163066A (en) * 2018-12-07 2019-08-23 Tencent Technology (Shenzhen) Co., Ltd. Multimedia data recommendation method, device and storage medium
CN110717575A (en) * 2018-07-13 2020-01-21 Himax Technologies, Inc. Frame-buffer-free convolutional neural network system and method
CN110781778A (en) * 2019-10-11 2020-02-11 Gree Electric Appliances, Inc. of Zhuhai Access control method and device, storage medium and home system
CN110796062A (en) * 2019-10-24 2020-02-14 Zhejiang Dahua Technology Co., Ltd. Method and device for precisely matching and displaying object frame, and storage device
CN110827319A (en) * 2018-08-13 2020-02-21 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences Improved Staple target tracking method based on locality-sensitive histograms
CN111428539A (en) * 2019-01-09 2020-07-17 Chengdu Tongjia Youbo Technology Co., Ltd. Target tracking method and device
CN111507999A (en) * 2019-01-30 2020-08-07 Beijing NavInfo Technology Co., Ltd. Target tracking method and device based on FDSST algorithm
CN114554300A (en) * 2022-02-28 2022-05-27 Hefei High-Dimensional Data Technology Co., Ltd. Video watermark embedding method based on a specific target

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9239965B2 (en) * 2012-06-12 2016-01-19 Electronics And Telecommunications Research Institute Method and system of tracking object
CN107066990A (en) * 2017-05-04 2017-08-18 Xiamen Meitu Technology Co., Ltd. Target tracking method and mobile device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9239965B2 (en) * 2012-06-12 2016-01-19 Electronics And Telecommunications Research Institute Method and system of tracking object
CN107066990A (en) * 2017-05-04 2017-08-18 Xiamen Meitu Technology Co., Ltd. Target tracking method and mobile device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HENG FAN ET AL.: "Parallel Tracking and Verifying: A Framework for Real-Time and High Accuracy Visual Tracking", International Conference on Computer Vision *
MARTIN DANELLJAN ET AL.: "Discriminative Scale Space Tracking", IEEE Transactions on Pattern Analysis and Machine Intelligence *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596957A (en) * 2018-04-26 2018-09-28 Beijing Xiaomi Mobile Software Co., Ltd. Object tracking method and device
CN109086648A (en) * 2018-05-24 2018-12-25 Tongji University Target tracking method fusing target detection and feature matching
CN110717575B (en) * 2018-07-13 2022-07-26 Himax Technologies, Inc. Frame-buffer-free convolutional neural network system and method
CN110717575A (en) * 2018-07-13 2020-01-21 Himax Technologies, Inc. Frame-buffer-free convolutional neural network system and method
CN109242882A (en) * 2018-08-06 2019-01-18 Beijing SenseTime Technology Development Co., Ltd. Visual tracking method, device, medium and equipment
CN109242882B (en) * 2018-08-06 2020-11-27 Beijing SenseTime Technology Development Co., Ltd. Visual tracking method, device, medium and equipment
CN110827319A (en) * 2018-08-13 2020-02-21 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences Improved Staple target tracking method based on locality-sensitive histograms
CN110827319B (en) * 2018-08-13 2022-10-28 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences Improved Staple target tracking method based on locality-sensitive histograms
CN109271927A (en) * 2018-09-14 2019-01-25 Beihang University Cooperative monitoring method for space-based multiple platforms
CN109271927B (en) * 2018-09-14 2020-03-27 Beihang University Cooperative monitoring method for space-based multiple platforms
CN109597431A (en) * 2018-11-05 2019-04-09 VisionVera Information Technology Co., Ltd. Target tracking method and device
CN109597431B (en) * 2018-11-05 2020-08-04 VisionVera Information Technology Co., Ltd. Target tracking method and device
CN110163066A (en) * 2018-12-07 2019-08-23 Tencent Technology (Shenzhen) Co., Ltd. Multimedia data recommendation method, device and storage medium
CN111428539A (en) * 2019-01-09 2020-07-17 Chengdu Tongjia Youbo Technology Co., Ltd. Target tracking method and device
CN111507999A (en) * 2019-01-30 2020-08-07 Beijing NavInfo Technology Co., Ltd. Target tracking method and device based on FDSST algorithm
CN111507999B (en) * 2019-01-30 2023-07-18 Beijing NavInfo Technology Co., Ltd. Target tracking method and device based on FDSST algorithm
CN110147768B (en) * 2019-05-22 2021-05-28 Yunnan University Target tracking method and device
CN110147768A (en) * 2019-05-22 2019-08-20 Yunnan University Target tracking method and device
CN110781778B (en) * 2019-10-11 2021-04-20 Gree Electric Appliances, Inc. of Zhuhai Access control method and device, storage medium and home system
CN110781778A (en) * 2019-10-11 2020-02-11 Gree Electric Appliances, Inc. of Zhuhai Access control method and device, storage medium and home system
CN110796062A (en) * 2019-10-24 2020-02-14 Zhejiang Dahua Technology Co., Ltd. Method and device for precisely matching and displaying object frame, and storage device
CN110796062B (en) * 2019-10-24 2022-08-09 Zhejiang Huashi Zhijian Technology Co., Ltd. Method and device for precisely matching and displaying object frame, and storage device
CN114554300A (en) * 2022-02-28 2022-05-27 Hefei High-Dimensional Data Technology Co., Ltd. Video watermark embedding method based on a specific target
CN114554300B (en) * 2022-02-28 2024-05-07 Hefei High-Dimensional Data Technology Co., Ltd. Video watermark embedding method based on a specific target

Similar Documents

Publication Publication Date Title
CN107832683A (en) Target tracking method and system
CN110120065B (en) Target tracking method and system based on hierarchical convolutional features and scale-adaptive kernel correlation filtering
CN104268539B (en) High-performance face recognition method and system
CN104574445B (en) Target tracking method
CN109087337B (en) Long-term target tracking method and system based on hierarchical convolutional features
CN107424177A (en) Long-term tracking algorithm with localization correction based on cascaded correlation filters
CN111696128A (en) High-speed multi-target detection, tracking and target image optimization method and storage medium
CN104573731A (en) Rapid target detection method based on convolutional neural network
CN105844669A (en) Real-time video target tracking method based on partial hash features
CN108765470A (en) Improved KCF tracking algorithm for target occlusion
CN112348849A (en) Siamese network video target tracking method and device
CN107067410B (en) Manifold-regularized correlation filter target tracking method based on augmented samples
Zhang et al. Sparse learning-based correlation filter for robust tracking
CN110245587B (en) Optical remote sensing image target detection method based on Bayesian transfer learning
CN106780552A (en) Anti-occlusion target tracking based on joint local-region tracking-detection learning
CN110135318A (en) Method, apparatus, device and storage medium for cross-determination of vehicle records
CN104881640A (en) Method and device for acquiring vectors
CN111429481B (en) Target tracking method, device and terminal based on adaptive representation
CN104463240A (en) Method and device for controlling list interface
CN104680190A (en) Target detection method and device
JP2020098455A (en) Object identification system, object identification method, and image identification program
CN109584267A (en) Scale-adaptive correlation filter tracking method combining background information
CN104268565A (en) Scene matching region selecting method based on regression learning
CN104091315B (en) Method and system for deblurring license plate image
CN105447887A (en) Historical-route-based target tracking method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180323)