CN110503663A - Random multi-target automatic detection and tracking method based on frame-extraction detection - Google Patents

Random multi-target automatic detection and tracking method based on frame-extraction detection

Info

Publication number
CN110503663A
CN110503663A (application CN201910659013.2A)
Authority
CN
China
Prior art keywords
frame
target
detection
tracking
collection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910659013.2A
Other languages
Chinese (zh)
Other versions
CN110503663B (en)
Inventor
刘娟秀
傅小明
于腾
李佼
杜晓辉
郝如茜
张静
倪光明
刘霖
刘永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201910659013.2A
Publication of CN110503663A
Application granted
Publication of CN110503663B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a random multi-target automatic detection and tracking method based on frame-extraction detection, belonging to the fields of digital image processing and machine learning and in particular relating to a random multi-target automatic detection and tracking method that combines target detection with target tracking. The invention fuses target detection and target tracking into a single system and thereby combines the advantages of detection and tracking. The proposed initial-frame search can determine when every target first appears in the video sequence, so that targets of different classes appearing in arbitrary frames of the sequence are detected automatically and tracked. An updater that considers the current detection and tracking states is used to update the target states and correct errors in time.

Description

Random multi-target automatic detection and tracking method based on frame-extraction detection
Technical field
The invention belongs to the fields of digital image processing and machine learning, and in particular relates to a random multi-target automatic detection and tracking method that combines target detection with target tracking.
Background technique
In both military and civilian settings, the detection and tracking of targets has a wide range of application scenarios. As an important part of image processing, detection and tracking comprises two subtasks: target detection and target tracking. Target detection is the process of locating target objects in an image and classifying them. Target tracking starts from a certain frame of a video sequence and, using targets selected manually or provided by a detector, continuously recovers the motion states of those targets in the subsequent frames.
Although detection alone can obtain the positions of all targets and label their classes well, its processing speed is rather slow. Tracking alone first requires the initial position of every target to be set manually and, moreover, cannot handle newly appearing targets; although it is fast, it cannot cope with real scenes on its own. A method that combines detection with tracking is therefore needed, so that the advantages of both can be exploited in complex tasks.
Many patents study detection-and-tracking methods. "An intelligent multi-target detection and tracking method" (CN108664930A), "A target detection and tracking method in video" (CN108986143A) and "A multi-target detection and tracking method, electronic device and storage medium" (CN108121945A) all rely on single-frame detection with matching-based tracking; no real tracker is used, inter-frame information is wasted, and detection is therefore slow. "An integrated water-surface target detection and tracking method for unmanned-boat applications" (CN106960446A) combines detection and tracking, but detects only on frames at fixed intervals and therefore cannot guarantee that targets appearing in arbitrary frames are detected.
Summary of the invention
To overcome the deficiencies of the prior art, the object of the invention is to propose a random multi-target automatic detection and tracking method based on frame-extraction detection.
The technical solution adopted by the invention for these problems is as follows: a random multi-target automatic detection and tracking method based on frame-extraction detection, the method comprising:
Step 1: divide the video evenly into n segments, randomly sample one frame from each segment, and obtain the sample frame sequence f_1, f_2, …, f_k, …, f_n;
Step 2: perform target detection on each sample frame with the pre-trained target detection neural network model, record the position and class of every detected target in each sample frame, and collect the target set T_{f_k} of every frame;
Step 3: starting from the first sample frame, compare the target set of the current sample frame f_k with the target set of the previous sample frame f_{k-1}; if a new target appears in the frame, use the initial-frame search to find, between this sample frame and the previous one, the first frame in which the new target appears; following this procedure, successively find and record the first frame of appearance of every target in the video sequence;
Step 4: starting from the first frame on which target detection has been performed, initialize a tracker for every detected target in the current frame and use these trackers to track the targets until the next frame on which target detection has been performed; feed the tracking results and that frame's detection results into the updater, output the states of all targets in the current frame, and re-initialize the trackers to continue tracking; repeat this procedure until the last frame of the video, completing the tracking of the entire video sequence.
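A minimal Python sketch of the four steps above follows. The interfaces detect(frame), make_tracker(frame, target), tracker.track(frame), find_new_targets(...), find_first_frame(...) and update(...) are illustrative assumptions rather than names fixed by the patent; they are passed in as callables so that any detector and tracker can be plugged in.

```python
import random

def sample_frames(num_frames, n_segments):
    """Step 1: split the video into n equal segments and pick one random frame per segment."""
    seg_len = num_frames // n_segments
    return [random.randrange(i * seg_len, (i + 1) * seg_len) for i in range(n_segments)]

def detect_track_pipeline(video, n_segments, detect, find_new_targets,
                          find_first_frame, make_tracker, update):
    """Steps 2-4: detect on the sampled frames, locate the first frame in which each
    new target appears, then track between detected frames and fuse the tracker
    output with the detections through the updater."""
    sampled = sample_frames(len(video), n_segments)
    detections = {idx: detect(video[idx]) for idx in sampled}              # step 2

    for prev, cur in zip(sampled, sampled[1:]):                            # step 3
        for q in find_new_targets(detections[prev], detections[cur]):
            first = find_first_frame(video, prev, cur, q, detect)
            detections.setdefault(first, detect(video[first]))

    states = {}
    detected = sorted(detections)
    for start, end in zip(detected, detected[1:]):                         # step 4
        trackers = [make_tracker(video[start], t) for t in detections[start]]
        for f in range(start + 1, end + 1):
            states[f] = [trk.track(video[f]) for trk in trackers]
        detections[end] = update(states[end], detections[end])             # re-initialise from the fused state
    return detections, states
```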
Further, the pre-trained target detection neural network model described in step 2 is built as follows:
Step 2.1: collect a large number of images containing the targets to be tracked, annotate all targets in the images to build a data set, and divide the data set into a training set, a validation set and a test set;
Step 2.2: select a target detection neural network structure suited to detecting the selected targets, and train it on the prepared training and validation sets;
Step 2.3: evaluate the trained network on the test set to obtain a detection network whose accuracy meets the requirements, to be used for the detection before the subsequent tracking.
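As one possible reading of step 2.1, the sketch below splits an annotated image list into training, validation and test sets; the 70/15/15 ratio and the function name are assumptions, since the patent does not prescribe a particular split.

```python
import random

def split_dataset(samples, train_frac=0.7, val_frac=0.15, seed=0):
    """Step 2.1 (illustrative): shuffle the annotated images and divide them into
    training, validation and test sets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],                      # training set (step 2.2)
            shuffled[n_train:n_train + n_val],       # validation set (step 2.2)
            shuffled[n_train + n_val:])              # test set (step 2.3)
```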
Further, the method described in step 3 for determining whether a new target appears between frame f_k and frame f_{k+1} is as follows:
Assume that the complete target sets detected in frames f_k and f_{k+1} are T_{f_k} = {p_1, p_2, …, p_i, …, p_m} and T_{f_{k+1}} = {q_1, q_2, …, q_j, …, q_n}, where the elements of each set are ordered by the coordinates of the targets. For every element q_j of the current frame f_{k+1}, look for a corresponding element p_i in the previous-frame set T_{f_k}; if no such element exists, q_j is a newly appearing target.
Further, the method for checking, for each element q_j of the current frame f_{k+1}, whether a corresponding element p_i exists in the previous-frame set T_{f_k} is as follows:
Denote an element q_j of the current frame f_{k+1} by a and the previous-frame set T_{f_k} by T; whether T contains an element corresponding to a is then decided as follows:
Assume the set T = {t_1, t_2, …, t_i, …, t_m}. For an element t_i of the set, its image coordinates (x_{t_i}, y_{t_i}) are obtained from the detection result, and the coordinates of the target corresponding to element a are (x_a, y_a); the similarity distance s(t_i) between a and t_i is then defined from the distance between (x_a, y_a) and (x_{t_i}, y_{t_i}) together with the class labels, where label(·) denotes the class to which a target belongs. The smaller the value of s, the more similar the two elements. If there exists an element t_i in T such that s(t_i) < S_a, then T is considered to contain the element corresponding to a; S_a is a preset threshold equal to 3.2 times the target's area.
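A minimal sketch of this correspondence test follows. Because the exact similarity formula is not reproduced here, the sketch assumes a centre-to-centre Euclidean distance that is infinite when the class labels differ; targets are represented as dictionaries with keys x, y, w, h, label, which is an illustrative choice rather than one fixed by the patent.

```python
import math

def similarity(a, t):
    """Similarity distance s(t) between a current-frame target a and a previous-frame
    target t: infinite if the class labels differ, otherwise the distance between
    the two target centres (an assumed form of the patent's formula)."""
    if a["label"] != t["label"]:
        return math.inf
    return math.hypot(a["x"] - t["x"], a["y"] - t["y"])

def has_corresponding_element(a, prev_targets):
    """Element a has a counterpart in the previous-frame set if some t_i satisfies
    s(t_i) < S_a, with S_a set to 3.2 times the target's area as stated in the text."""
    s_a = 3.2 * a["w"] * a["h"]
    return any(similarity(a, t) < s_a for t in prev_targets)

def find_new_targets(prev_targets, cur_targets):
    """Targets of the current sampled frame that have no counterpart in the previous one."""
    return [q for q in cur_targets if not has_corresponding_element(q, prev_targets)]
```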
Further, the specific procedure of the initial-frame search described in step 3 is as follows:
Suppose the target set obtained after target detection on frame f_k is T_{f_k} and the target set obtained after target detection on frame f_{k+1} is T_{f_{k+1}}, and suppose frame f_{k+1} contains a target q_n that is new relative to frame f_k; the first frame f_m in which q_n appears must then be found. Take the midpoint of f_k and f_{k+1}, i.e. the frame f_a = (f_k + f_{k+1}) / 2, and detect it with the target detection network to obtain the target set T_{f_a}. In the same way as above, check whether T_{f_a} contains an element corresponding to target q_n. If it does, the frame to be found satisfies f_k < f_m ≤ f_a; otherwise f_a < f_m ≤ f_{k+1}. Suppose no corresponding element exists; then take the midpoint of f_a and f_{k+1}, i.e. f_b = (f_a + f_{k+1}) / 2, and let T_{f_b} be the result of detecting frame f_b. In the same way, check whether T_{f_b} contains an element corresponding to q_n: if it does, then f_a < f_m ≤ f_b, otherwise f_b < f_m ≤ f_{k+1}. Continue taking midpoints and detecting them in this way until two adjacent examined frames are found such that the corresponding target is absent from the earlier one and present in the later one; the later frame is then f_m, the first frame in which target q_n appears.
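This search is essentially a bisection between two detected frames. The sketch below assumes the detect(frame) callable and the dictionary target representation used above; the inner corresponds check is a stand-in for the similarity test of the previous paragraphs.

```python
def find_first_frame(video, f_k, f_k1, target, detect):
    """Initial-frame search: bisect between sampled frames f_k and f_k1 to locate
    the first frame in which the new target appears."""
    def corresponds(detections, tgt):
        # illustrative counterpart test: same class and centre within the S_a threshold
        s_a = 3.2 * tgt["w"] * tgt["h"]
        return any(d["label"] == tgt["label"]
                   and abs(d["x"] - tgt["x"]) + abs(d["y"] - tgt["y"]) < s_a
                   for d in detections)

    lo, hi = f_k, f_k1                     # target absent at lo, present at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if corresponds(detect(video[mid]), target):
            hi = mid                       # already present: first appearance is at or before mid
        else:
            lo = mid                       # not yet present: first appearance is after mid
    return hi                              # first frame in which the new target is detected
```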
Further, the tracker described in step 4 is established as follows:
The tracker may be a conventional method or a deep learning method; a suitable method is selected according to the specific requirements on speed and accuracy. A template is cropped from the image according to the currently input target state, which completes the initialization of the tracker.
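The template crop can be sketched as follows, assuming the frame is a NumPy array and the target state carries a centre position and size; the extra context margin is an assumption rather than a value fixed by the patent.

```python
import numpy as np

def init_template(frame: np.ndarray, target: dict, context: float = 0.5) -> np.ndarray:
    """Tracker initialisation: crop a template patch around the target box given by
    centre (x, y) and size (w, h), enlarged by a context margin."""
    h_img, w_img = frame.shape[:2]
    w = int(target["w"] * (1 + context))
    h = int(target["h"] * (1 + context))
    x0 = max(0, int(target["x"] - w / 2))
    y0 = max(0, int(target["y"] - h / 2))
    return frame[y0:min(y0 + h, h_img), x0:min(x0 + w, w_img)]
```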
Further, the method for updating the tracking state described in step 4 is as follows:
Assume that target detection has been performed on the current frame k in the preceding steps and has produced a set of detections, and that the tracking results of all trackers on the current frame form the tracking result set. For each element d_i of the detection set, look for the corresponding element t_j in the tracking result set; if no such element exists, add d_i directly to the result set T_r. If t_j is the element corresponding to d_i (found via the similarity distance s), compute the selection coefficient b according to the following formula:
b = con(t_j) × r - con(d_i) × (1 - r)
where con(·) is the confidence of the corresponding detection or tracking result, and r ∈ (0, 1) is a preset coefficient that expresses whether the detection result or the tracking result is trusted more;
If b > 0, the tracking result is more reliable, and the updated result is the tracker output t_j;
If b < 0, the detection result is more reliable, and the updated result is the detection d_i;
Finally, add the updated result to the set T_r; continue in this way until the whole detection set has been traversed, completing the update of the states of all targets in the current frame.
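A minimal sketch of this updater follows, assuming each detection and tracking result is a dictionary carrying a conf confidence value; the match test and the default r = 0.5 are illustrative assumptions.

```python
def update_states(track_results, det_results, r=0.5, match=None):
    """Updater of step 4: for each detection d, look for a corresponding tracker
    result t; keep d if there is none, otherwise keep whichever of the two wins
    under b = con(t) * r - con(d) * (1 - r)."""
    if match is None:
        match = lambda d, t: d["label"] == t["label"]  # illustrative correspondence test
    updated = []                                       # the result set T_r
    for d in det_results:
        t = next((t for t in track_results if match(d, t)), None)
        if t is None:
            updated.append(d)                          # no tracker counterpart: keep the detection
            continue
        b = t["conf"] * r - d["conf"] * (1 - r)
        updated.append(t if b > 0 else d)              # b > 0: trust the tracker; otherwise the detector
    return updated
```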
The technical effects of the invention are as follows:
Target detection and target tracking are fused into a single system, combining the advantages of detection and tracking. The proposed initial-frame search can determine when every target first appears in the video sequence, so that targets of different classes appearing in arbitrary frames of the sequence are detected automatically and tracked. The updater considers the current detection and tracking states, updates the target states and corrects errors in time.
Detailed description of the invention
Fig. 1 is a flow chart of the automatic multi-target detection and tracking method,
Fig. 2 is a detailed flow chart of the automatic multi-target detection and tracking method,
Fig. 3 is a schematic diagram of frame-extraction detection,
Fig. 4 is a schematic diagram of the initial-frame search,
Fig. 5 is a flow diagram of tracking and updating.
Specific embodiment
To illustrate the technical flow of the invention more clearly, the invention is further elaborated below with reference to the accompanying drawings.
As shown in Figures 1 and 2, the method is divided into four steps:
Step 1: sample the video in segments to obtain the sample frames f_1, f_2, …, f_k, …, f_n;
Step 2: as shown in Fig. 3, perform target detection on each sample frame with the pre-trained target detection neural network model, record the position and class of every detected target in each sample frame, and collect the target set T_{f_k} of every frame;
Step 3: starting from the first sample frame, compare the target set of the current sample frame f_k with the target set of the previous sample frame f_{k-1}; if a new target appears in the frame, use the initial-frame search to find, between this sample frame and the previous one, the first frame in which the new target appears. Following this procedure, successively find and record the first frame of appearance of every target in the video sequence;
Step 4: as shown in Fig. 5, starting from the first frame on which target detection has been performed, initialize a tracker for every detected target in the current frame and use these trackers to track the targets until the next frame on which target detection has been performed. Feed the tracking results and that frame's detection results into the updater, output the states of all targets in the current frame, and re-initialize the trackers to continue tracking. Repeat this procedure until the last frame of the video, completing the tracking of the entire video sequence.
The pre-trained target detection neural network model described in step 2 is built as follows:
Step 1: collect a large number of images containing the targets to be tracked; the images should be diverse, i.e. cover the various states of the targets to be tracked. Annotate all targets in the images and build a data set;
Step 2: select a target detection network structure suited to detecting the selected targets, for example SSD or YOLO, which detect well. Train the network on the prepared training and validation sets;
Step 3: evaluate the trained network on the test set to obtain a detection network whose accuracy meets the requirements; in general the accuracy should be no less than 80%. This detector is used for the detection before the subsequent tracking.
The procedure described in step 3 for determining whether a new target appears between frame f_k and frame f_{k+1} is as follows:
Assume that the complete target sets detected in frames f_k and f_{k+1} are T_{f_k} = {p_1, p_2, …, p_i, …, p_m} and T_{f_{k+1}} = {q_1, q_2, …, q_j, …, q_n}, where the elements of each set are ordered by the coordinates of the targets. For every element q_j of the current frame f_{k+1}, look for a corresponding element p_i in the previous-frame set T_{f_k}; if no such element exists, q_j is a newly appearing target.
The initial-frame search described in step 3 is similar to a binary (midpoint) search, as shown in Fig. 4. Its specific procedure is as follows:
Suppose the target set obtained after target detection on frame f_k is T_{f_k} and the target set obtained after target detection on frame f_{k+1} is T_{f_{k+1}}. By the preceding method, frame f_{k+1} contains a target q_n that is new relative to frame f_k; the first frame f_m in which q_n appears must then be found. Take the midpoint of f_k and f_{k+1}, i.e. the frame f_a = (f_k + f_{k+1}) / 2, and detect it with the target detection network to obtain the target set T_{f_a}. In the same way as above, check whether T_{f_a} contains an element corresponding to target q_n. If it does, the frame to be found satisfies f_k < f_m ≤ f_a; otherwise f_a < f_m ≤ f_{k+1}. Suppose no corresponding element exists; then take the midpoint of f_a and f_{k+1}, i.e. f_b = (f_a + f_{k+1}) / 2, and let T_{f_b} be the result of detecting frame f_b. In the same way, check whether T_{f_b} contains an element corresponding to q_n: if it does, then f_a < f_m ≤ f_b, otherwise f_b < f_m ≤ f_{k+1}. Continue taking midpoints and detecting them in this way until two adjacent examined frames are found such that the corresponding target is absent from the earlier one and present in the later one; the later frame is then f_m, the first frame in which target q_n appears.
The tracker described in step 4 is established as follows:
The tracker may be a conventional method or a deep learning method, chosen in view of speed and accuracy; for example, a correlation-filter tracker or a SiamFC-class tracker may be used.
Taking the SiamFC tracker as an example, first construct the network structure according to the SiamFC tracking principle. Train the network on a self-made tracking data set, or directly use a tracking network trained by others. Input the initial frame and the initial target state into the network; tracking then starts as soon as the next frame is input.
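The core of a SiamFC-class tracker is the cross-correlation of a template feature map with a search-region feature map. The sketch below shows only this step in PyTorch; the feature shapes and the omission of the backbone network are simplifying assumptions, not details taken from the patent.

```python
import torch
import torch.nn.functional as F

def siamfc_response(template_feat: torch.Tensor, search_feat: torch.Tensor) -> torch.Tensor:
    """SiamFC-style similarity: cross-correlate the template features with the
    search-region features; the peak of the response map indicates the target."""
    return F.conv2d(search_feat, template_feat)        # (1, 1, H-h+1, W-w+1)

# usage with random feature maps, just to show the shapes involved
template = torch.randn(1, 256, 6, 6)                   # features of the cropped template
search = torch.randn(1, 256, 22, 22)                   # features of the search region
resp = siamfc_response(template, search)               # -> torch.Size([1, 1, 17, 17])
peak = (resp[0, 0] == resp.max()).nonzero()            # coarse target position in the response map
```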

Claims (6)

1. A random multi-target automatic detection and tracking method based on frame-extraction detection, the method comprising:
Step 1: divide the video evenly into n segments, randomly sample one frame from each segment, and obtain the sample frame sequence f_1, f_2, …, f_k, …, f_n;
Step 2: perform target detection on each sample frame with the pre-trained target detection neural network model, record the position and class of every detected target in each sample frame, and collect the target set T_{f_k} of every frame;
Step 3: starting from the first sample frame, compare the target set of the current sample frame f_k with the target set of the previous sample frame f_{k-1}; if a new target appears in the frame, use the initial-frame search to find, between this sample frame and the previous one, the first frame in which the new target appears; following this procedure, successively find and record the first frame of appearance of every target in the video sequence;
Step 4: starting from the first frame on which target detection has been performed, initialize a tracker for every detected target in the current frame and use these trackers to track the targets until the next frame on which target detection has been performed; feed the tracking results and that frame's detection results into the updater, output the states of all targets in the current frame, and re-initialize the trackers to continue tracking; repeat this procedure until the last frame of the video, completing the tracking of the entire video sequence.
2. The random multi-target automatic detection and tracking method based on frame-extraction detection according to claim 1, characterized in that the pre-trained target detection neural network model described in step 2 is built as follows:
Step 2.1: collect a large number of images containing the targets to be tracked, annotate all targets in the images to build a data set, and divide the data set into a training set, a validation set and a test set;
Step 2.2: select a target detection neural network structure suited to detecting the selected targets, and train it on the prepared training and validation sets;
Step 2.3: evaluate the trained network on the test set to obtain a detection network whose accuracy meets the requirements, to be used for the detection before the subsequent tracking.
3. The random multi-target automatic detection and tracking method based on frame-extraction detection according to claim 1, characterized in that the method described in step 3 for determining whether a new target appears between frame f_k and frame f_{k+1} is as follows:
Assume that the complete target sets detected in frames f_k and f_{k+1} are T_{f_k} = {p_1, p_2, …, p_i, …, p_m} and T_{f_{k+1}} = {q_1, q_2, …, q_j, …, q_n}, where the elements of each set are ordered by the coordinates of the targets. For every element q_j of the current frame f_{k+1}, look for a corresponding element p_i in the previous-frame set T_{f_k}; if no such element exists, q_j is a newly appearing target.
4. The random multi-target automatic detection and tracking method based on frame-extraction detection according to claim 1, characterized in that the specific procedure of the initial-frame search described in step 3 is as follows:
Suppose the target set obtained after target detection on frame f_k is T_{f_k} and the target set obtained after target detection on frame f_{k+1} is T_{f_{k+1}}, and suppose frame f_{k+1} contains a target q_n that is new relative to frame f_k; the first frame f_m in which q_n appears must then be found. Take the midpoint of f_k and f_{k+1}, i.e. the frame f_a = (f_k + f_{k+1}) / 2, and detect it with the target detection network to obtain the target set T_{f_a}. In the same way as above, check whether T_{f_a} contains an element corresponding to target q_n. If it does, the frame to be found satisfies f_k < f_m ≤ f_a; otherwise f_a < f_m ≤ f_{k+1}. Suppose no corresponding element exists; then take the midpoint of f_a and f_{k+1}, i.e. f_b = (f_a + f_{k+1}) / 2, and let T_{f_b} be the result of detecting frame f_b. In the same way, check whether T_{f_b} contains an element corresponding to q_n: if it does, then f_a < f_m ≤ f_b, otherwise f_b < f_m ≤ f_{k+1}. Continue taking midpoints and detecting them in this way until two adjacent examined frames are found such that the corresponding target is absent from the earlier one and present in the later one; the later frame is then f_m, the first frame in which target q_n appears.
5. The random multi-target automatic detection and tracking method based on frame-extraction detection according to claim 1, characterized in that the method for updating the tracking state described in step 4 is as follows:
Assume that target detection has been performed on the current frame k in the preceding steps and has produced a set of detections, and that the tracking results of all trackers on the current frame form the tracking result set. For each element d_i of the detection set, look for the corresponding element t_j in the tracking result set; if no such element exists, add d_i directly to the result set T_r. If t_j is the element corresponding to d_i (found via the similarity distance s), compute the selection coefficient b according to the following formula:
b = con(t_j) × r - con(d_i) × (1 - r)
where con(·) is the confidence of the corresponding detection or tracking result, and r ∈ (0, 1) is a preset coefficient that expresses whether the detection result or the tracking result is trusted more;
If b > 0, the tracking result is more reliable, and the updated result is the tracker output t_j;
If b < 0, the detection result is more reliable, and the updated result is the detection d_i;
Finally, add the updated result to the set T_r; continue in this way until the whole detection set has been traversed, completing the update of the states of all targets in the current frame.
6. The random multi-target automatic detection and tracking method based on frame-extraction detection according to claim 1, characterized in that the method for checking, for each element q_j of the current frame f_{k+1}, whether a corresponding element p_i exists in the previous-frame set T_{f_k} is as follows:
Denote an element q_j of the current frame f_{k+1} by a and the previous-frame set T_{f_k} by T; whether T contains an element corresponding to a is then decided as follows:
Assume the set T = {t_1, t_2, …, t_i, …, t_m}. For an element t_i of the set, its image coordinates (x_{t_i}, y_{t_i}) are obtained from the detection result, and the coordinates of the target corresponding to element a are (x_a, y_a); the similarity distance s(t_i) between a and t_i is then defined from the distance between (x_a, y_a) and (x_{t_i}, y_{t_i}) together with the class labels, where label(·) denotes the class to which a target belongs. The smaller the value of s, the more similar the two elements. If there exists an element t_i in T such that s(t_i) < S_a, then T is considered to contain the element corresponding to a; S_a is a preset threshold equal to 3.2 times the target's area.
CN201910659013.2A 2019-07-22 2019-07-22 Random multi-target automatic detection tracking method based on frame extraction detection Active CN110503663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910659013.2A CN110503663B (en) 2019-07-22 2019-07-22 Random multi-target automatic detection tracking method based on frame extraction detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910659013.2A CN110503663B (en) 2019-07-22 2019-07-22 Random multi-target automatic detection tracking method based on frame extraction detection

Publications (2)

Publication Number Publication Date
CN110503663A (en) 2019-11-26
CN110503663B CN110503663B (en) 2022-10-14

Family

ID=68586679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910659013.2A Active CN110503663B (en) 2019-07-22 2019-07-22 Random multi-target automatic detection tracking method based on frame extraction detection

Country Status (1)

Country Link
CN (1) CN110503663B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114882211A (en) * 2022-03-01 2022-08-09 广州文远知行科技有限公司 Time sequence data automatic labeling method and device, electronic equipment, medium and product

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103347167A (en) * 2013-06-20 2013-10-09 上海交通大学 Surveillance video content description method based on fragments
CN106778503A (en) * 2016-11-11 2017-05-31 深圳云天励飞技术有限公司 A kind of detection based on circulation frame buffer zone and the method and system for tracking
CN106960446A (en) * 2017-04-01 2017-07-18 广东华中科技大学工业技术研究院 A kind of waterborne target detecting and tracking integral method applied towards unmanned boat
CN108108697A (en) * 2017-12-25 2018-06-01 中国电子科技集团公司第五十四研究所 A kind of real-time UAV Video object detecting and tracking method
CN108564069A (en) * 2018-05-04 2018-09-21 中国石油大学(华东) A kind of industry safe wearing cap video detecting method
CN108986143A (en) * 2018-08-17 2018-12-11 浙江捷尚视觉科技股份有限公司 Target detection tracking method in a kind of video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SZU, H: "Neo-Angiogenesis Metabolic Biomarker of Tumor-genesis Tracking By Infrared Joystick Contact Imaging in Personalized Homecare System", 《百链》 *
冯成龙: "复杂环境下的实时目标跟踪算法研究" (Research on Real-Time Target Tracking Algorithms in Complex Environments), 《CNKI》 *


Also Published As

Publication number Publication date
CN110503663B (en) 2022-10-14

Similar Documents

Publication Publication Date Title
Miller et al. Dropout sampling for robust object detection in open-set conditions
CN109919974B (en) Online multi-target tracking method based on R-FCN frame multi-candidate association
CN111259786B (en) Pedestrian re-identification method based on synchronous enhancement of appearance and motion information of video
CN111460968B (en) Unmanned aerial vehicle identification and tracking method and device based on video
CN111126360A (en) Cross-domain pedestrian re-identification method based on unsupervised combined multi-loss model
CN112395957B (en) Online learning method for video target detection
CN109800624A (en) A kind of multi-object tracking method identified again based on pedestrian
CN110796679B (en) Target tracking method for aerial image
CN112766218B (en) Cross-domain pedestrian re-recognition method and device based on asymmetric combined teaching network
CN105184229A (en) Online learning based real-time pedestrian detection method in dynamic scene
CN109271927A (en) A kind of collaboration that space base is multi-platform monitoring method
CN116363694A (en) Multi-target tracking method of unmanned system crossing cameras matched with multiple pieces of information
CN107578424A (en) A kind of dynamic background difference detecting method, system and device based on space-time classification
CN111241987B (en) Multi-target model visual tracking method based on cost-sensitive three-branch decision
CN116311063A (en) Personnel fine granularity tracking method and system based on face recognition under monitoring video
CN113129336A (en) End-to-end multi-vehicle tracking method, system and computer readable medium
CN115147644A (en) Method, system, device and storage medium for training and describing image description model
CN109727268A (en) Method for tracking target, device, computer equipment and storage medium
CN110503663A (en) A kind of random multi-target automatic detection tracking based on pumping frame detection
CN110688895B (en) Underground cross-vision field target detection tracking method based on multi-template learning
CN117218382A (en) Unmanned system large-span shuttle multi-camera track tracking and identifying method
Zhang et al. Fused confidence for scene text detection via intersection-over-union
CN115082854A (en) Pedestrian searching method oriented to security monitoring video
Sun et al. Multiple object tracking for yellow feather broilers based on foreground detection and deep learning.
Wu et al. Flow guided short-term trackers with cascade detection for long-term tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant