CN110309729A - Tracking and re-detection method based on anomaly peak detection and twin network - Google Patents


Info

Publication number
CN110309729A
CN110309729A
Authority
CN
China
Prior art keywords: network, target, ship, target ship, detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910506402.1A
Other languages
Chinese (zh)
Inventor
陈姚节
冯春东
徐进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Engineering WUSE
Wuhan University of Science and Technology WHUST
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE filed Critical Wuhan University of Science and Engineering WUSE
Priority to CN201910506402.1A
Publication of CN110309729A
Legal status: Pending

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING › G06F 18/00 Pattern recognition › G06F 18/20 Analysing › G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 10/00 Arrangements for image or video recognition or understanding › G06V 10/20 Image preprocessing › G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 20/00 Scenes; Scene-specific elements › G06V 20/40 Scenes; Scene-specific elements in video content › G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items › G06V 20/42 of sport video content
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 20/00 Scenes; Scene-specific elements › G06V 20/40 Scenes; Scene-specific elements in video content › G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 2201/00 Indexing scheme relating to image or video recognition or understanding › G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a tracking and re-detection method based on anomaly peak detection and a twin (Siamese) network, comprising the steps of: S1, for the first video frame, initializing the size and center position of the target ship and obtaining the feature map of the target ship; S2, judging from the change of the response peak in each frame whether the target ship has been lost: if the target ship is lost, going to step S3, otherwise going to step S7; S3, saving the image with the highest response peak as the template image; S4, extracting target candidate regions with an RPN network; S5, taking the template image and the target candidate regions as inputs to the twin network and outputting the region with the highest similarity as the target region; S6, re-entering the information of the target ship; S7, continuing to track the target ship. When tracking a target ship, the method achieves a tracking accuracy above 82.6%; at the same time the re-detection network retrieves a specific ship effectively, with a ship retrieval rate above 86.5% and a fast retrieval speed.

Description

Tracking and re-detection method based on anomaly peak detection and twin network
Technical field
The present invention relates to the technical field of ships, and in particular to a tracking and re-detection method based on anomaly peak detection and a twin network.
Background art
Target tracking is one of the hot topics in computer vision research. During motion a target is not always in an ideal state: appearance changes such as variations in posture, scale or illumination, occlusion, or deformation may occur, and all of these make target tracking difficult. How to handle target occlusion and achieve long-term target tracking is a current research focus in target tracking technology. In the prior art, after a target ship has been occluded, the tracking accuracy and the retrieval rate of the ship are relatively low, so long-term tracking of the target ship cannot be accomplished.
Summary of the invention
To overcome the above deficiencies, the invention provides a tracking and re-detection method based on anomaly peak detection and a twin (Siamese) network.
The technical solution adopted by the present invention to overcome its technical problem is as follows:
A tracking and re-detection method based on anomaly peak detection and a twin network, comprising the following steps:
S1, for the first video frame, initialize the size and center position of the target ship, and obtain the feature map of the target ship;
S2, judge from the change of the response peak in each frame whether the target ship has been lost: if the target ship is lost, go to step S3, otherwise go to step S7;
S3, save the image with the highest response peak as the template image;
S4, extract target candidate regions with an RPN network;
S5, take the template image and the target candidate regions as inputs to the twin network, and select the region with the highest similarity as the target region output;
S6, re-enter the information of the target ship;
S7, continue to track the target ship.
Further, in step S1, the feature map of the target ship is obtained with a Faster R-CNN network; the specific method is:
S1.1, compute the convolutional feature map of the target ship image;
S1.2, process the convolutional feature map with the RPN network to obtain target proposal boxes;
S1.3, extract the feature map for each target proposal box with RoI Pooling.
Further, in step S2, the specific method of judging from the change of the response peak in each frame whether the target ship has been lost is:
Let V_pre denote the response peak of the current frame, and let the set {V_i | i = 1, 2, …, n} denote the response peaks collected over the preceding period of the current frame, with u denoting the mean of those peaks and σ their standard deviation.
If the distance of the current peak from the mean u exceeds λσ, that is, |V_pre − u| > λσ, the peak is flagged as an outlier, and the ship can then be judged to have been lost.
Further, in step S4, the specific method of extracting target candidate regions with the RPN network is:
One part of the collected ship image data set is used as the training set and the other part as the test set; the RPN network is trained end-to-end in the training stage;
S4.1, initialize the RPN network parameters with a pre-trained network model, and fine-tune the RPN network parameters with stochastic gradient descent and back-propagation;
S4.2, initialize the Faster R-CNN target detection network parameters with the pre-trained network model, extract candidate regions with the RPN network from step S4.1, and train the target detection network;
S4.3, re-initialize and fine-tune the RPN network parameters with the target detection network from step S4.2;
S4.4, extract the target candidate regions with the RPN network from step S4.3.
Further, in step S5, the specific method of taking the template image and the target candidate regions as twin network inputs is:
Resize the template image and each target candidate region to 107 pixels × 107 pixels and feed them into the two twin sub-networks respectively; the Euclidean distance between the features output by the two sub-networks characterizes the similarity between the input image pair, and the twin network finally selects, from the set of target candidate regions, the region with the highest similarity as the target region output.
The beneficial effects of the present invention are:
With the tracking and re-detection method based on anomaly peak detection and the twin network, the tracking accuracy when tracking a target ship is above 82.6%; at the same time the re-detection network retrieves a specific ship effectively, with a ship retrieval rate above 86.5% and a fast retrieval speed.
Brief description of the drawings
Fig. 1 shows the tracking process of the target ship over part of the video sequence in the embodiment of the present invention, where Fig. 1(a) is the tracking image with the target ship unoccluded, Fig. 1(b) is the tracking image with the target ship partially occluded, and Fig. 1(c) is the tracking image with the target ship fully occluded.
Fig. 2 shows the response distribution maps corresponding to Fig. 1, where Fig. 2(a) corresponds to Fig. 1(a), Fig. 2(b) corresponds to Fig. 1(b), and Fig. 2(c) corresponds to Fig. 1(c).
Fig. 3 is a schematic diagram of the output of the RPN network.
Fig. 4 shows the output of the twin network.
In the figures: 1, target ship; 2, occluder.
Specific embodiment
For a better understanding by those skilled in the art, the present invention is further described below with reference to the drawings and specific embodiments; the following is merely exemplary and does not limit the scope of protection of the present invention.
The tracking and re-detection method based on anomaly peak detection and a twin network according to the present invention comprises the following steps:
Step S1, for the first video frame, initialize the size and center position of the target ship, as shown in Fig. 1; the center point is the center point of the ship in the video image, and the feature map of the target ship is obtained with a Faster R-CNN network.
The specific method of obtaining the feature map of the target ship with the Faster R-CNN network is as follows, with an illustrative sketch after the sub-steps:
S1.1, compute the convolutional feature map of the target ship image;
S1.2, process the convolutional feature map with the RPN network to obtain target proposal boxes;
S1.3, extract the feature map for each target proposal box with RoI Pooling.
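Purely by way of illustration (the patent does not prescribe an implementation), the following Python sketch uses torchvision's pre-trained Faster R-CNN to carry out sub-steps S1.1 to S1.3; the random frame tensor is a stand-in for the first video frame containing the target ship.

import torch
import torchvision

# Pre-trained Faster R-CNN; the "weights=..." argument requires torchvision >= 0.13.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = torch.rand(3, 480, 640)  # stand-in for the first video frame (C, H, W)

with torch.no_grad():
    images, _ = model.transform([frame])               # resize and normalize the frame
    features = model.backbone(images.tensors)          # S1.1 convolutional feature maps
    proposals, _ = model.rpn(images, features)         # S1.2 target proposal boxes from the RPN
    roi_feats = model.roi_heads.box_roi_pool(          # S1.3 RoI pooling over the proposals
        features, proposals, images.image_sizes)

print(roi_feats.shape)  # (number of proposals, channels, 7, 7) pooled feature maps

Here model.backbone, model.rpn and model.roi_heads.box_roi_pool correspond to the convolutional feature computation, the RPN proposal step and the RoI Pooling step, respectively.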
Step S2, judge from the change of the response peak in each frame whether the target ship has been lost: if the target ship is lost, go to step S3, otherwise go to step S7.
The specific method of judging from the change of the response peak in each frame whether the target ship has been lost is:
Let V_pre denote the response peak of the current frame, and let the set {V_i | i = 1, 2, …, n} denote the response peaks collected over the preceding period of the current frame, with u denoting the mean of those peaks and σ their standard deviation.
If the distance of the current peak from the mean u exceeds λσ, that is, |V_pre − u| > λσ, the peak is flagged as an outlier, and the ship can then be judged to have been lost (see the sketch below).
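A minimal numerical sketch of this outlier test (Python with NumPy); the threshold λ and the example peak values are illustrative assumptions, since the patent does not fix them.

import numpy as np

def ship_lost(v_pre, recent_peaks, lam=3.0):
    # Flag the current response peak v_pre as an anomaly (target lost) when its
    # distance from the mean of the peaks over the preceding period exceeds lam * sigma.
    u = np.mean(recent_peaks)      # mean of the peaks {V_i} in the preceding period
    sigma = np.std(recent_peaks)   # their standard deviation
    return abs(v_pre - u) > lam * sigma

history = [0.92, 0.90, 0.93, 0.91, 0.89, 0.92]   # peaks while the ship is visible
print(ship_lost(0.90, history))   # False: normal fluctuation, keep tracking (step S7)
print(ship_lost(0.35, history))   # True: anomalous peak, ship judged lost (step S3)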
Step S3, save the image with the highest response peak as the template image.
Step S4, extract target candidate regions with the RPN network; the specific method is as follows, with a training-schedule sketch after the sub-steps:
One part of the collected ship image data set is used as the training set and the other part as the test set; the pre-trained model used in this embodiment is ResNet50, and the RPN network is trained end-to-end in the training stage;
S4.1, initialize the RPN network parameters with a pre-trained network model, and fine-tune the RPN network parameters with stochastic gradient descent and back-propagation;
S4.2, initialize the Faster R-CNN target detection network parameters with the pre-trained network model, extract candidate regions with the RPN network from step S4.1, and train the target detection network;
S4.3, re-initialize and fine-tune the RPN network parameters with the target detection network from step S4.2;
S4.4, extract the target candidate regions with the RPN network from step S4.3.
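The four sub-steps amount to an alternating training schedule in the spirit of the original Faster R-CNN. The sketch below is only an approximation under stated assumptions: torchvision's Faster R-CNN is normally trained jointly, so the alternation of S4.1 to S4.3 is imitated by switching which parameter group the SGD optimizer updates, and the stand-in data loader replaces the real ship training set.

import torch
import torchvision

def ship_loader():
    # Stand-in loader with random data so the sketch runs; replace with the real ship training set.
    for _ in range(2):
        images = [torch.rand(3, 300, 400)]
        targets = [{"boxes": torch.tensor([[50., 60., 200., 180.]]),
                    "labels": torch.tensor([1])}]
        yield images, targets

def run_stage(model, loader, params, lr=1e-3):
    # One training stage: stochastic gradient descent with back-propagation,
    # updating only the selected parameter group.
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9)
    model.train()
    for images, targets in loader:
        loss_dict = model(images, targets)   # RPN and detection losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# ResNet50 backbone pre-trained on ImageNet, as in this embodiment; two classes (background, ship).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights_backbone="DEFAULT", num_classes=2)

rpn_params = list(model.rpn.parameters())
det_params = list(model.roi_heads.parameters())

run_stage(model, ship_loader(), rpn_params)   # S4.1: fine-tune the RPN
run_stage(model, ship_loader(), det_params)   # S4.2: train the detection head on RPN proposals
run_stage(model, ship_loader(), rpn_params)   # S4.3: re-fine-tune the RPN
# S4.4: at inference time model.rpn supplies the target candidate regions.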
Through the above operations, the RPN network (also called the region proposal network) can output more accurate target candidate regions. The target candidate regions generated by the RPN network are shown in Fig. 3, where the boxed regions represent the target candidate regions.
Step S5, take the template image and the target candidate regions as inputs to the twin network, and select the region with the highest similarity as the target region output.
The specific method of taking the template image and the target candidate regions as twin network inputs is as follows (an illustrative sketch of the matching step is given below):
Resize the template image and each target candidate region to 107 pixels × 107 pixels and feed them into the two twin sub-networks respectively; the Euclidean distance between the features output by the two sub-networks characterizes the similarity between the input image pair, and the twin network finally selects, from the set of target candidate regions, the region with the highest similarity as the target region output. The output of the twin network is shown in Fig. 4, where the part enclosed by the box is the output target region.
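A minimal PyTorch sketch of this matching step; the patent does not specify the twin sub-network architecture, so the small weight-shared embedding branch below is an assumed stand-in (in practice it would be trained so that a small Euclidean distance corresponds to high similarity), and random tensors replace the template image and the RPN candidate regions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwinBranch(nn.Module):
    # One of the two twin sub-networks; because the weights are shared, a single
    # module embeds both the template and the candidates.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 128))
    def forward(self, x):
        return self.features(x)

def select_target(template, candidates, branch):
    # Resize template and candidates to 107 x 107, embed them with the shared branch,
    # and return the candidate whose Euclidean distance to the template is smallest
    # (i.e. whose similarity is highest).
    size = (107, 107)
    t = branch(F.interpolate(template.unsqueeze(0), size=size))
    c = branch(F.interpolate(candidates, size=size))
    dists = torch.cdist(t, c).squeeze(0)   # Euclidean distances, template vs candidates
    return torch.argmin(dists), dists

template = torch.rand(3, 120, 80)        # saved highest-peak template image (stand-in)
candidates = torch.rand(6, 3, 90, 140)   # candidate regions from the RPN (stand-in)
branch = TwinBranch().eval()
with torch.no_grad():
    best, dists = select_target(template, candidates, branch)
print("selected candidate:", int(best))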
Step S6, re-enter the information of the target ship.
Step S7, continue to track the target ship.
In the above steps, the target ship sails from right to left. The response distribution maps for this process are shown in Fig. 2, where Fig. 2(a) is the response distribution map of the tracking image in Fig. 1(a) with the target ship unoccluded, Fig. 2(b) is that of Fig. 1(b) with the target ship partially occluded, and Fig. 2(c) is that of Fig. 1(c) with the target ship fully occluded.
The above describes only the basic principle and preferred embodiments of the invention. Those skilled in the art can make many changes and improvements based on the foregoing description, and such changes and improvements shall fall within the scope of protection of the invention.

Claims (5)

1. A tracking and re-detection method based on anomaly peak detection and a twin network, characterized by comprising the following steps:
S1, for the first video frame, initializing the size and center position of the target ship, and obtaining the feature map of the target ship;
S2, judging from the change of the response peak in each frame whether the target ship has been lost: if the target ship is lost, going to step S3, otherwise going to step S7;
S3, saving the image with the highest response peak as the template image;
S4, extracting target candidate regions with an RPN network;
S5, taking the template image and the target candidate regions as inputs to the twin network, and selecting the region with the highest similarity as the target region output;
S6, re-entering the information of the target ship;
S7, continuing to track the target ship.
2. The tracking and re-detection method based on anomaly peak detection and a twin network according to claim 1, characterized in that, in step S1, the feature map of the target ship is obtained with a Faster R-CNN network, the specific method being:
S1.1, computing the convolutional feature map of the target ship image;
S1.2, processing the convolutional feature map with the RPN network to obtain target proposal boxes;
S1.3, extracting the feature map for each target proposal box with RoI Pooling.
3. The tracking and re-detection method based on anomaly peak detection and a twin network according to claim 1, characterized in that, in step S2, the specific method of judging from the change of the response peak in each frame whether the target ship has been lost is:
letting V_pre denote the response peak of the current frame, and letting the set {V_i | i = 1, 2, …, n} denote the response peaks collected over the preceding period of the current frame, with u denoting the mean of those peaks and σ their standard deviation;
if the distance of the current peak from the mean u exceeds λσ, that is, |V_pre − u| > λσ, the peak is flagged as an outlier, and it can then be judged that the ship has been lost.
4. The tracking and re-detection method based on anomaly peak detection and a twin network according to claim 1, characterized in that, in step S4, the specific method of extracting target candidate regions with the RPN network is:
using one part of the collected ship image data set as the training set and the other part as the test set, the RPN network being trained end-to-end in the training stage;
S4.1, initializing the RPN network parameters with a pre-trained network model, and fine-tuning the RPN network parameters with stochastic gradient descent and back-propagation;
S4.2, initializing the Faster R-CNN target detection network parameters with the pre-trained network model, extracting candidate regions with the RPN network from step S4.1, and training the target detection network;
S4.3, re-initializing and fine-tuning the RPN network parameters with the target detection network from step S4.2;
S4.4, extracting the target candidate regions with the RPN network from step S4.3.
5. The tracking and re-detection method based on anomaly peak detection and a twin network according to claim 1, characterized in that, in step S5, the specific method of taking the template image and the target candidate regions as twin network inputs is:
resizing the template image and each target candidate region to 107 pixels × 107 pixels and feeding them into the two twin sub-networks respectively, the Euclidean distance between the features output by the two sub-networks characterizing the similarity between the input image pair, and the twin network finally selecting, from the set of target candidate regions, the region with the highest similarity as the target region output.
CN201910506402.1A 2019-06-12 2019-06-12 Tracking and re-detection method based on anomaly peak detection and twin network Pending CN110309729A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910506402.1A CN110309729A (en) 2019-06-12 2019-06-12 Tracking and re-detection method based on anomaly peak detection and twin network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910506402.1A CN110309729A (en) 2019-06-12 2019-06-12 Tracking and re-detection method based on anomaly peak detection and twin network

Publications (1)

Publication Number Publication Date
CN110309729A true CN110309729A (en) 2019-10-08

Family

ID=68077391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910506402.1A Pending CN110309729A (en) 2019-06-12 2019-06-12 Tracking and re-detection method based on anomaly peak detection and twin network

Country Status (1)

Country Link
CN (1) CN110309729A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160019683A1 (en) * 2014-07-17 2016-01-21 Ricoh Company, Ltd. Object detection method and device
WO2018122459A1 (en) * 2017-01-02 2018-07-05 Tampereen Yliopisto Calcium level analysis in cardiomyocyte
CN108898620A (en) * 2018-06-14 2018-11-27 厦门大学 Method for tracking target based on multiple twin neural network and regional nerve network
CN109508655A (en) * 2018-10-28 2019-03-22 北京化工大学 The SAR target identification method of incomplete training set based on twin network
CN109766780A (en) * 2018-12-20 2019-05-17 武汉理工大学 A kind of ship smog emission on-line checking and method for tracing based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BO LI et al.: "High performance visual tracking with siamese region proposal network", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
刘延飞 et al.: "Research on a KCF target-loss early-warning method based on outlier detection", Computer Engineering and Applications (《计算机工程与应用》) *
方萍 et al.: "Design of Experiments and Statistics" (《试验设计与统计》), Zhejiang University Press, 30 June 2003 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111524162A (en) * 2020-04-15 2020-08-11 上海摩象网络科技有限公司 Method and device for retrieving tracking target and handheld camera
WO2021208261A1 (en) * 2020-04-15 2021-10-21 上海摩象网络科技有限公司 Tracking target retrieving method and device, and handheld camera
CN111524162B (en) * 2020-04-15 2022-04-01 上海摩象网络科技有限公司 Method and device for retrieving tracking target and handheld camera
CN111724409A (en) * 2020-05-18 2020-09-29 浙江工业大学 Target tracking method based on densely connected twin neural network
CN111986517A (en) * 2020-08-26 2020-11-24 珠海大横琴科技发展有限公司 Ship anomaly detection method and device and storage medium

Similar Documents

Publication Publication Date Title
CN107832672B (en) Pedestrian re-identification method for designing multi-loss function by utilizing attitude information
CN108052896B (en) Human body behavior identification method based on convolutional neural network and support vector machine
CN109657631B (en) Human body posture recognition method and device
CN111814661B (en) Human body behavior recognition method based on residual error-circulating neural network
CN109685013B (en) Method and device for detecting head key points in human body posture recognition
CN105069434B (en) A kind of human action Activity recognition method in video
CN110674785A (en) Multi-person posture analysis method based on human body key point tracking
CN111563452B (en) Multi-human-body gesture detection and state discrimination method based on instance segmentation
CN104881029B (en) Mobile Robotics Navigation method based on a point RANSAC and FAST algorithms
CN107424161B (en) Coarse-to-fine indoor scene image layout estimation method
CN114187665B (en) Multi-person gait recognition method based on human skeleton heat map
CN110309729A (en) Tracking and re-detection method based on anomaly peak detection and twin network
CN105718882A (en) Resolution adaptive feature extracting and fusing for pedestrian re-identification method
CN104821010A (en) Binocular-vision-based real-time extraction method and system for three-dimensional hand information
Paral et al. Vision sensor-based shoe detection for human tracking in a human–robot coexisting environment: A photometric invariant approach using DBSCAN algorithm
CN111126494B (en) Image classification method and system based on anisotropic convolution
CN113608663B (en) Fingertip tracking method based on deep learning and K-curvature method
CN112287906B (en) Template matching tracking method and system based on depth feature fusion
CN107798691A (en) A kind of unmanned plane independent landing terrestrial reference real-time detecting and tracking method of view-based access control model
CN106529441B (en) Depth motion figure Human bodys' response method based on smeared out boundary fragment
CN106407978B (en) Method for detecting salient object in unconstrained video by combining similarity degree
CN102663777A (en) Target tracking method and system based on multi-view video
CN101320477B (en) Human body tracing method and equipment thereof
CN113989604A (en) Tire DOT information identification method based on end-to-end deep learning
CN113076891B (en) Human body posture prediction method and system based on improved high-resolution network

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20191008)