CN108520218A - Ship sample collection method based on a target tracking algorithm - Google Patents

Ship sample collection method based on a target tracking algorithm

Info

Publication number
CN108520218A
CN108520218A (application CN201810272277.8A)
Authority
CN
China
Prior art keywords
naval vessel
frame
rcnn
fast
frames
Prior art date
Legal status (assumed; not a legal conclusion): Pending
Application number
CN201810272277.8A
Other languages
Chinese (zh)
Inventor
庄祐存
Current Assignee (the listed assignees may be inaccurate)
Shenzhen Xinhan Sensing Technology Co Ltd
Original Assignee
Shenzhen Xinhan Sensing Technology Co Ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Shenzhen Xinhan Sensing Technology Co Ltd
Priority to CN201810272277.8A
Publication of CN108520218A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a ship sample collection method based on a target tracking algorithm, which comprises the following steps: manually outlining the ship region in the first frame of a video; inputting the size-normalized image frames into a Fast-RCNN network; tracking the ship region in the frames after the first frame with the Fast-RCNN target tracking framework to obtain the ship location in each frame; every 400-800 frames, visually inspecting the ship location in the current frame and judging whether the annotation is qualified; if it is not qualified, manually correcting the ship annotation and continuing the Fast-RCNN target tracking algorithm until the ship annotation of the whole video is completed; and collecting the ship annotations from all videos to form the annotated data set. By using deep-learning-based visual tracking to automatically annotate the ship region throughout a video sequence, the invention reduces the manual workload and improves the efficiency of data annotation.

Description

Ship sample collection method based on a target tracking algorithm
Technical field
The present invention relates to ship sample collection methods, and more particularly to a ship sample collection method based on a target tracking algorithm.
Background technology
In today's big-data era the importance of data is self-evident, but many problems arise at the same time, for example how to extract the useful part from massive data. Data annotation is a typical case. At present most data annotation is done manually, which not only involves a huge workload but is also inefficient, for example classifying thousands of pictures or marking specific parts of them.
With the rapid development of deep learning, it is also increasingly applied in the field of visual tracking. For deep learning the data set is a very important part, and so is its annotation: the quality of the annotated data set largely determines the accuracy of deep learning, yet annotating a data set is very tedious.
In the field of ship tracking, data sets are scarce. Most of the data sets that can be obtained are videos containing a moving ship; for each frame the ship region has to be outlined so that the frame becomes one annotated sample. Manually annotating the ship region in every frame of a video is labor-intensive, tedious and inefficient.
Summary of the invention
The main purpose of the present invention is to provide a ship sample collection method based on a target tracking algorithm, aiming to solve the problems of heavy workload and low efficiency of the current manual data annotation.
To achieve the above purpose, the technical solution adopted by the present invention is a ship sample collection method based on a target tracking algorithm, comprising the following steps:
manually outlining the ship region in the first frame of a video;
inputting the size-normalized image frames into the Fast-RCNN network;
tracking the ship region in the frames after the first frame with the Fast-RCNN target tracking framework to obtain the ship location in each frame;
every 400-800 frames, visually inspecting the ship location in the current frame and judging whether the annotation result is qualified;
if it is not qualified, manually correcting the ship annotation and continuing the Fast-RCNN target tracking algorithm until the ship annotation of the whole video is completed;
collecting the ship annotations from all videos to form the annotated data set.
Optionally, every 500 frames the ship location in the current frame is inspected visually.
Optionally, the visual inspection means: for the current frame, if the box in the annotation result output by Fast-RCNN does not enclose 95% or more of the ship region, the annotated location is considered unqualified.
Optionally, manually correcting the ship annotation comprises the steps of:
denoting the number of the current unqualified frame as Pi, clicking once with the mouse at the center of the ship region in the current frame to complete the manual intervention, and then continuing to predict the ship location with Fast-RCNN in the frames after the current frame;
discarding the 5-15 frames before that frame.
Optionally, the 10 frames before frame Pi are discarded.
Optionally, before tracking with Fast-RCNN, the Fast-RCNN network also needs to be trained. The training steps are: normalizing pictures containing ship targets and annotating the location coordinates of the ship region in each picture;
letting Fast-RCNN predict the location coordinates of the ship region in a picture;
updating the weights and biases of the neurons with the BP algorithm so that the neural network reaches a convergent state;
after training, obtaining the annotated data set with the trained Fast-RCNN.
The beneficial effects of the invention are as follows. For the annotation of a ship data set, deep-learning-based visual tracking is used to obtain the ship region in each picture of the data set automatically, thereby completing the annotation and avoiding the heavy workload and low efficiency of purely manual annotation. Manual supervision is used: at regular intervals, manual intervention resets the tracking region (the ship region), which basically guarantees the accuracy of the ship region detected in the subsequent data and therefore the qualification rate of the annotated data set, while the added workload remains small. Because the Fast-RCNN framework is used to track the ship region, the ship location does not need to be re-annotated precisely during manual intervention; a single mouse click inside the ship region is enough for the Fast-RCNN framework to output a relatively accurate ship region, which saves manual effort.
Description of the drawings
Fig. 1 is a flow chart provided by the invention;
Fig. 2 shows a single-neuron structure provided by the invention;
Fig. 3 is an overall flow chart provided by the invention.
The realization of the purpose, the functional characteristics and the advantages of the present invention will be further described with reference to the accompanying drawings and the embodiments.
Detailed description of the embodiments
The present invention is further described below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
Fast-RCNN is a deep convolutional neural network that is currently widely used for moving-target detection and has relatively high accuracy. Based on this, the present invention proposes an intelligent data annotation method with manual intervention: by using the Fast-RCNN network framework, the data set is annotated intelligently, which improves annotation efficiency and reduces the manual workload.
As shown in Fig. 1, the invention first initializes the tracking region. The object of the method is a video; within a video the target is continuous, i.e. two adjacent frames are strongly correlated. Based on this, we only need to outline the ship region manually in the first frame, i.e. initialize the tracking region, and then track it with the deep-learning-based Fast-RCNN target tracking framework. For every frame after the first one, the ship region predicted by Fast-RCNN is the annotation of that frame (in each frame the annotation is simply the ship region), and finally all qualified ship region annotations form the ship data set.
Specifically, before predicting ship regions with Fast-RCNN we first need to train the Fast-RCNN network; afterwards the network can be used to obtain the annotated data. The specific method is as follows:
(1) The input of the network consists of two parts: the normalized picture containing a ship target, and the location coordinates of the ship region in that picture.
In this part, our training set consists of 2000 normalized pictures containing ship targets, with a normalized size of 1024 × 1024. In each picture we annotate the location of the ship region, i.e. a box that just encloses the ship region in the normalized picture; the coordinates of the top-left vertex of this box together with its length and width form the location coordinates input to the network.
(2) Output: the network predicts by itself the location coordinates of the ship region in an image (a pair of coordinates, a length value and a width value; the coordinate pair is the top-left vertex of a box whose length and width are the predicted values, and the region enclosed by that box is the ship region predicted by the network).
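As an illustration of this input/output format (not part of the patent text), the following minimal sketch normalizes a ship picture to 1024 × 1024 and rescales its annotated box accordingly; the use of OpenCV, the file name and the box values are assumptions made only for the example.

```python
import cv2  # assumption: OpenCV is used here purely for illustration

def normalize_sample(image_path, box, size=1024):
    """Resize a ship picture to size x size and rescale its annotated box.

    box is (x, y, length, width): the top-left vertex of the box plus its
    horizontal length and vertical width, in pixels of the original picture,
    matching the coordinate description above.
    """
    img = cv2.imread(image_path)
    h0, w0 = img.shape[:2]
    img = cv2.resize(img, (size, size))
    sx, sy = size / w0, size / h0              # horizontal / vertical scale factors
    x, y, length, width = box
    scaled_box = (x * sx, y * sy, length * sx, width * sy)
    return img, scaled_box

# hypothetical usage: one training pair (normalized picture, ship box)
# img, box = normalize_sample("ship_0001.jpg", (412, 230, 380, 120))
```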
(3) Training strategy: the conventional BP training method. The weights and biases of the neurons are updated with the BP algorithm until the neural network reaches a convergent state; the details are as follows.
The structure of a simple small neural network is shown in Fig. 2, where each circle represents a neuron, w1 and w2 are the weights between neurons, b is the bias, g(z) is the activation function that makes the output non-linear, a is the output, and x1 and x2 are the inputs. For this structure the output can be expressed by formula (1). It follows from formula (1) that, with the input data and the activation function fixed, the output value a of the neural network depends on the weights and the bias; adjusting the weights and the bias changes the output of the neural network.
A=g (x1*w1+x2*w2+1*b) (1)
Let the value output by the neural network (the predicted value) be a and assume that the corresponding actual value is a'.
For Fig. 2 the BP algorithm proceeds as follows:
A. The BP algorithm first randomly initializes the weight of each connection (w1 and w2) and the bias b;
B. For the input data x1, x2, the BP algorithm first performs forward propagation to obtain the predicted value a;
C. Then, according to the error E between the actual value a' and the predicted value a (for example a squared error E = ½(a' - a)²), the weight of each connection and the bias of each layer of the neural network are updated by backward propagation.
The update rules for the weights and the bias are given by formulas (2)-(4), i.e. gradient steps along the partial derivatives of E with respect to w1, w2 and b respectively (of the form w1 ← w1 - η·∂E/∂w1, w2 ← w2 - η·∂E/∂w2, b ← b - η·∂E/∂b), where η denotes the learning rate, a parameter set in these formulas.
D. Steps A-C are repeated until the network converges, i.e. the value of E is minimal or essentially unchanged; at that point the network is trained.
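A minimal numerical sketch of steps A-D for the two-input neuron of Fig. 2, for illustration only; the sigmoid activation, the squared-error form E = ½(a' - a)² and the fixed learning rate η = 0.5 are assumptions, since the patent does not fix them.

```python
import math
import random

def g(z):
    """Assumed activation function: sigmoid."""
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(samples, eta=0.5, epochs=2000):
    """BP steps A-D for the single neuron of Fig. 2: a = g(x1*w1 + x2*w2 + b)."""
    w1, w2, b = (random.uniform(-1.0, 1.0) for _ in range(3))   # step A: random init
    for _ in range(epochs):                                     # step D: repeat A-C
        for (x1, x2), target in samples:
            a = g(x1 * w1 + x2 * w2 + b)                        # step B: forward pass
            # step C: with E = 0.5 * (target - a) ** 2, dE/dz = (a - target) * a * (1 - a)
            delta = (a - target) * a * (1.0 - a)
            w1 -= eta * delta * x1                              # w1 <- w1 - eta * dE/dw1
            w2 -= eta * delta * x2                              # w2 <- w2 - eta * dE/dw2
            b -= eta * delta                                    # b  <- b  - eta * dE/db
    return w1, w2, b

# hypothetical usage: fit the neuron to the logical AND of two binary inputs
# w1, w2, b = train_neuron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
```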
2.2 Obtaining the annotated data set with the trained Fast-RCNN.
(1) Each frame image is first size-normalized; since different video frames have different sizes, they are all normalized to the same specific size.
(2) The size-normalized image frame from step (1) is input into the Fast-RCNN network for prediction; the network output is then the predicted location of the ship region in that frame. This both updates the tracking region (the ship region) for that frame and yields the annotated data set we need.
However, when the annotated region is determined in a video entirely by a computer method, the precision is limited. A deep-learning method cannot reach 100% accuracy, so the ship region obtained through the Fast-RCNN framework may contain a certain error. If the error is too large, the annotated data are unqualified, and using such unqualified annotations would affect later experimental results. We therefore need to better guarantee the qualification of the data, and for this purpose we introduce manual intervention.
In this step we use manual supervision to audit the annotated data set and correct the annotation information. The process is shown in Fig. 3 and the specific operations are as follows:
1. Assume the video has M frames. We manually outline the ship region (the annotation) in the first frame and, starting from the second frame, execute the Fast-RCNN target tracking method.
2. Every 400-800 frames (500 frames in the present invention), the network output (the annotation) of the current frame is inspected once by eye.
3. For the current frame, if the result output by Fast-RCNN (the location that Fast-RCNN considers to be the ship region in the current frame, enclosed by a box) does not enclose 95% or more of the actual ship region, the annotation is considered unqualified and the following correction steps are executed; otherwise the annotation is considered qualified.
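A minimal sketch of this qualification check, for illustration only; it assumes both the predicted box and the reference ship region are given as (x, y, length, width) tuples, a representation the patent does not prescribe.

```python
def covered_fraction(pred_box, ship_box):
    """Fraction of the reference ship region enclosed by the predicted box."""
    px, py, pl, pw = pred_box
    sx, sy, sl, sw = ship_box
    ix = max(0.0, min(px + pl, sx + sl) - max(px, sx))   # intersection length
    iy = max(0.0, min(py + pw, sy + sw) - max(py, sy))   # intersection width
    return (ix * iy) / (sl * sw)

def annotation_qualified(pred_box, ship_box, threshold=0.95):
    """Qualified if the predicted box encloses >= 95% of the ship region."""
    return covered_fraction(pred_box, ship_box) >= threshold
```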
4. At this point the result output by Fast-RCNN already contains a certain error, so we need to correct it. The specific method is as follows:
(1) Denote the number of the current frame as Pi. In the current frame, click once with the mouse at the center of the ship region; this completes the manual intervention. Then continue to use Fast-RCNN in the following frames to predict the ship location (the annotation).
(2) Discard several frames before that frame, in the range of 5-15; in the present invention 10 frames are discarded, i.e. the frames numbered Pi-10 to Pi-1. The annotations in these frames may also contain errors and would affect the accuracy of the overall annotation; moreover, only 10 frames are discarded each time, which has little effect on the whole video data.
(3) Likewise, a tracking-region correction is performed every 500 frames.
Performing a tracking-region correction every 500 frames has two advantages. First, the correction frequency is relatively high, so the influence of an error can be reduced in time; if the correction were performed only every 2000 frames, the influence of an error would be much larger. Second, compared with annotating every frame manually, performing one check every 500 frames reduces the work considerably.
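The supervised annotation loop described in steps 1-4 can be sketched as follows; this is an illustration only, and track_ship, inspect and reinitialize_at_center are hypothetical placeholders for the Fast-RCNN tracker, the human check and the mouse-click re-initialization, none of which are defined by the patent.

```python
def annotate_video(frames, first_frame_box, track_ship, inspect,
                   reinitialize_at_center, check_every=500, discard=10):
    """Annotate ship boxes in a video with periodic manual supervision.

    frames                 : list of size-normalized video frames
    first_frame_box        : manually outlined ship box in the first frame
    track_ship             : (frame, previous_box) -> predicted box (tracker stand-in)
    inspect                : (frame, box) -> True if the box is qualified (human check)
    reinitialize_at_center : frame -> box after a single mouse click in the ship region
    """
    annotations = {0: first_frame_box}
    box = first_frame_box
    for i in range(1, len(frames)):
        box = track_ship(frames[i], box)                   # per-frame tracking
        annotations[i] = box
        if i % check_every == 0 and not inspect(frames[i], box):
            # manual intervention at frame P_i: drop frames P_i-10 .. P_i-1
            for j in range(max(1, i - discard), i):
                annotations.pop(j, None)
            box = reinitialize_at_center(frames[i])        # one mouse click
            annotations[i] = box
    return annotations                                     # the annotated data set
```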
Finally, the annotated data set is obtained.
After the manual intervention step, the outputs of the Fast-RCNN network are relatively qualified annotated data that can be used normally, which greatly reduces the manual burden and improves the working efficiency.
In the present invention, a deep-learning-based visual tracking method is used to automatically annotate the ship region throughout a video sequence, thereby reducing the manual workload and improving the efficiency of data annotation. In addition, we only need to supervise manually a few frames or a dozen frames of a video sequence and outline the ship region in them to guarantee that the annotated data set is qualified.
The above is a further detailed description of the present invention in combination with specific preferred embodiments, and it cannot be concluded that the specific implementation of the present invention is limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, a number of simple deductions or substitutions can be made without departing from the concept of the present invention, and all of these shall be regarded as falling within the protection scope of the present invention.

Claims (5)

1. A ship sample collection method based on a target tracking algorithm, characterized by comprising the following steps:
manually outlining the ship region in the first frame of a video;
inputting the size-normalized image frames into the Fast-RCNN network;
tracking the ship region in the frames after the first frame with the Fast-RCNN target tracking framework to obtain the ship location in each frame;
every 400-800 frames, visually inspecting the ship location in the current frame and judging whether the annotation result is qualified;
if it is not qualified, manually correcting the ship annotation and continuing the Fast-RCNN target tracking algorithm until the ship annotation of the whole video is completed;
collecting the ship annotations from all videos to form the annotated data set.
2. The method according to claim 1, characterized in that the ship location in the current frame is inspected visually every 500 frames.
3. The method according to claim 1, characterized in that the visual inspection means: for the current frame, if the box in the annotation result output by Fast-RCNN does not enclose 95% or more of the ship region, the annotated location is considered unqualified.
4. The method according to claim 3, characterized in that manually correcting the ship annotation comprises the steps of:
denoting the number of the current unqualified frame as Pi, clicking once with the mouse at the center of the ship region in the current frame to complete the manual intervention, and then continuing to predict the ship location with Fast-RCNN in the frames after the current frame;
discarding the 5-15 frames before frame Pi.
The method according to claim 4, characterized in that the 10 frames before frame Pi are discarded.
5. The method according to claim 1, characterized in that, before tracking with Fast-RCNN, the Fast-RCNN network also needs to be trained, and the training steps are: normalizing pictures containing ship targets and annotating the location coordinates of the ship region in each picture;
letting Fast-RCNN predict the location coordinates of the ship region in a picture;
updating the weights and biases of the neurons with the BP algorithm so that the neural network reaches a convergent state;
after training, obtaining the annotated data set with the trained Fast-RCNN.
CN201810272277.8A 2018-03-29 2018-03-29 Ship sample collection method based on a target tracking algorithm Pending CN108520218A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810272277.8A CN108520218A (en) Ship sample collection method based on a target tracking algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810272277.8A CN108520218A (en) Ship sample collection method based on a target tracking algorithm

Publications (1)

Publication Number Publication Date
CN108520218A true CN108520218A (en) 2018-09-11

Family

ID=63431293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810272277.8A Pending CN108520218A (en) Ship sample collection method based on a target tracking algorithm

Country Status (1)

Country Link
CN (1) CN108520218A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170031437A1 (en) * 2015-07-30 2017-02-02 Boe Technology Group Co., Ltd. Sight tracking method and device
CN106934332A (en) * 2015-12-31 2017-07-07 中国科学院深圳先进技术研究院 A kind of method of multiple target tracking
CN106875425A (en) * 2017-01-22 2017-06-20 北京飞搜科技有限公司 A kind of multi-target tracking system and implementation method based on deep learning
CN107564004A (en) * 2017-09-21 2018-01-09 杭州电子科技大学 It is a kind of that video labeling method is distorted based on computer auxiliary tracking
CN107609601A (en) * 2017-09-28 2018-01-19 北京计算机技术及应用研究所 A kind of ship seakeeping method based on multilayer convolutional neural networks

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KAI KANG ET AL: "T-CNN: Tubelets With Convolutional Neural Networks for Object Detection From Videos", IEEE Transactions on Circuits and Systems for Video Technology *
XIONG YUN ET AL: "Big Data Mining", 30 April 2016 *
ZHAO YAN: "Research on Mining Methods for Big Data", 31 July 2016 *
MIN ZHAOYANG ET AL: "Single-lens multi-target tracking algorithm based on convolutional neural network detection", Ship Electronic Engineering *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109766830A (en) * 2019-01-09 2019-05-17 深圳市芯鹏智能信息有限公司 A kind of ship seakeeping system and method based on artificial intelligence image procossing
CN109934088A (en) * 2019-01-10 2019-06-25 海南大学 Sea ship discrimination method based on deep learning
CN109753975A (en) * 2019-02-02 2019-05-14 杭州睿琪软件有限公司 Training sample obtaining method and device, electronic equipment and storage medium
CN112528609A (en) * 2019-08-29 2021-03-19 北京声智科技有限公司 Method, system, equipment and medium for quality inspection of labeled data
CN110782005A (en) * 2019-09-27 2020-02-11 山东大学 Image annotation method and system for tracking based on weak annotation data
CN110782005B (en) * 2019-09-27 2023-02-17 山东大学 Image annotation method and system for tracking based on weak annotation data
CN112053323A (en) * 2020-07-31 2020-12-08 上海图森未来人工智能科技有限公司 Single-lens multi-frame image data object tracking and labeling method and device and storage medium
CN112164097A (en) * 2020-10-20 2021-01-01 南京莱斯网信技术研究院有限公司 Ship video detection sample acquisition method
CN112164097B (en) * 2020-10-20 2024-03-29 南京莱斯网信技术研究院有限公司 Ship video detection sample collection method
CN116246332A (en) * 2023-05-11 2023-06-09 广东工业大学 Eyeball tracking-based data labeling quality detection method, device and medium

Similar Documents

Publication Publication Date Title
CN108520218A (en) Ship sample collection method based on a target tracking algorithm
CN110097131B (en) Semi-supervised medical image segmentation method based on countermeasure cooperative training
CN111914727B (en) Small target human body detection method based on balance sampling and nonlinear feature fusion
US9928874B2 (en) Method for real-time video processing involving changing features of an object in the video
JP2019075121A (en) Learning method and learning device for adjusting parameter of cnn by using multi-scale feature map, and testing method and testing device using the same
CN111259940A (en) Target detection method based on space attention map
KR20210116923A (en) Method for Training a Denoising Network, Method and Device for Operating Image Processor
CN109472193A (en) Method for detecting human face and device
CN107945210A (en) Target tracking algorism based on deep learning and environment self-adaption
CN108537826A (en) A kind of Ship Target tracking based on manual intervention
CN111931581A (en) Agricultural pest identification method based on convolutional neural network, terminal and readable storage medium
CN106651917A (en) Image target tracking algorithm based on neural network
CN113223055B (en) Image target tracking model establishing method and image target tracking method
CN109712128A (en) Feature point detecting method, device, computer equipment and storage medium
CN108154235A (en) A kind of image question and answer inference method, system and device
EP4235511A1 (en) Vector quantized auto-encoder codebook learning for manufacturing display extreme minor defects detection
CN108898603A (en) Plot segmenting system and method on satellite image
CN107977658A (en) Recognition methods, television set and the readable storage medium storing program for executing in pictograph region
WO2024055530A1 (en) Target detection method, system and device for image, and storage medium
CN112861855A (en) Group-raising pig instance segmentation method based on confrontation network model
CN115797846A (en) Wind power generation blade block defect comparison method and device and electronic equipment
Chen et al. Synthetic data augmentation rules for maritime object detection
CN113762049B (en) Content identification method, content identification device, storage medium and terminal equipment
KR102648270B1 (en) Method and apparatus for coordinate and uncertainty estimation in images
CN110211117A (en) Processing system for identifying linear tubular objects in medical images and method for optimizing segmentation

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180911)