CN103559237B - Semi-automatic image annotation sample generating method based on target tracking - Google Patents

Semi-automatic image annotation sample generating method based on target tracking

Info

Publication number: CN103559237B (other versions: CN103559237A)
Application number: CN201310511762.3A
Inventors: Li Ning (李宁), Guo Qiaojin (郭乔进)
Assignee (original and current): Nanjing University
Original language: Chinese (zh)
Legal status: Active (granted)
Application filed by Nanjing University; published as CN103559237A, granted as CN103559237B

Classifications

    • G06F18/21: Pattern recognition; design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F16/58: Information retrieval of still image data; retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a semi-automatic image annotation sample generating method based on target tracking. The method comprises a target tracking process and a semi-automatic annotation process. A series of samples is generated by a target tracking mechanism; a template learning mechanism is designed to track and detect target regions; the learned templates are used to detect videos or images; and manual annotation assists the final judgment, thereby producing annotated samples. The method can obtain a large number of image annotation samples with little manual effort.

Description

Semi-automatic image annotation sample generating method based on target tracking
Technical field
The present invention relates to a semi-automatic image annotation sample generating method based on target tracking, and belongs to the field of image processing technology.
Background technology
The goal of image annotation is to establish the correspondence between image regions and annotation keywords. By building a mapping between low-level visual features and high-level semantics, image annotation can, to a certain extent, alleviate the "semantic gap" problem in image retrieval. Image annotation can be divided into two classes: manual annotation and automatic annotation. Manual annotation is the most direct and effective approach, but it is also very time-consuming and labor-intensive. With the development of the Internet and digital imaging technology, image data has grown massively, while traditional manual annotation can only label the object region in one image at a time, making it ever more laborious. Therefore, a growing number of researchers study automatic image annotation based on machine learning; however, statistical learning methods also require a large number of annotated samples as a training set, and the annotated data currently available is relatively scarce. The present invention therefore proposes a semi-automatic image annotation sample generating method based on target tracking, so as to obtain more annotated image samples with less manual effort.
Content of the invention
Goal of the invention:Object area in piece image can only be labeled every time for traditional artificial mask method Defect, the invention provides a kind of semi-automatic image labeling sample generating method based on target following, thus by less Human intervention obtain more marked image pattern.
Technical scheme: A semi-automatic image annotation sample generating method based on target tracking comprises two processes, whose steps are as follows:
Object tracking process:
(11) Manually annotate a region of interest in the initial frame as the region to be tracked;
(12) Generate initial positive and negative samples from the annotated region;
(13) Generate the initial template from the positive and negative samples;
(14) Search the next frame for the region most similar to the template;
(15) Update the template according to the tracking result;
(16) Return to step (14) and iterate the tracking.
Annotation process:
(21) Detect video frames using the learned templates;
(22) Track the detected candidate regions;
(23) Classify or manually annotate the tracked sequences, retaining positive sample sequences and removing erroneous tracking sequences;
(24) Save the positive sample sequences and their templates;
(25) Using the templates obtained from tracking, return to step (21) and iterate;
(26) Manually confirm all image sequences produced by tracking.
Beneficial effects: Compared with the prior art, the semi-automatic image annotation sample generating method based on target tracking provided by the present invention performs automatic tracking from a manually selected region of interest in a video, generates sample sequences, and, combined with semi-automatic annotation techniques, obtains a large number of annotated image samples with little manual effort.
Brief description
Fig. 1 is a schematic diagram of the offset windows in an embodiment of the present invention, where the solid rectangle is the positive sample and the dashed rectangles are negative samples;
Fig. 2 is a schematic diagram of the sample-extension convolution operator in an embodiment of the present invention;
Fig. 3 is a schematic diagram of a tracked sample sequence in an embodiment of the present invention, where the first half contains positive samples (bicycle) and the second half negative samples;
Fig. 4 is a schematic diagram of the overall flow in an embodiment of the present invention.
Specific embodiment
The present invention is further illustrated below with reference to specific embodiments. It should be understood that these embodiments are intended only to illustrate the invention, not to limit its scope; after reading the present invention, modifications of various equivalent forms by those skilled in the art all fall within the scope defined by the appended claims.
The detailed process of the semi-automatic image annotation sample generating method based on target tracking is as follows:
Object tracking process:
(1) Given a video with resolution N × N, compute the gradient of the initial frame and manually mark an H × W rectangular object region, whose center coordinate is (m0, n0), as the initial positive sample x0; H and W are respectively the height and width of the image window to be tracked.
(2) Select negative samples based on the initial positive sample: offset the rectangular object region to generate negative samples x_{ΔiΔj}, where Δi ∈ [-H/2, 0) ∪ (0, H/2] and Δj ∈ [-W/2, 0) ∪ (0, W/2] denote the offsets of the abscissa and ordinate, as shown in Fig. 1. Define α_{ΔiΔj} as the weight of negative sample x_{ΔiΔj}; a Gaussian function with standard deviations δ_i and δ_j is used to define α_{ΔiΔj}:

$$\alpha_{\Delta i\Delta j} = g(\Delta i,\Delta j) \propto \exp\!\left(-\left(\frac{\Delta i^2}{2\delta_i^2}+\frac{\Delta j^2}{2\delta_j^2}\right)\right) \qquad (1)$$
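As a minimal numpy sketch of the weight computation in step (2) (the function name, the inclusion of both endpoints ±H/2, and the normalization used to resolve the proportionality in Eq. (1) are our own illustrative choices, not prescribed by the patent):

```python
import numpy as np

def offset_weights(H, W, delta_i, delta_j):
    """Gaussian weights alpha_{di,dj} for the offset negative samples (Eq. 1).

    Offsets di in [-H/2, H/2] and dj in [-W/2, W/2]; the zero offset is the
    positive sample itself, so its weight is set to 0 here.
    """
    di = np.arange(-(H // 2), H // 2 + 1)
    dj = np.arange(-(W // 2), W // 2 + 1)
    DI, DJ = np.meshgrid(di, dj, indexing="ij")
    alpha = np.exp(-(DI**2 / (2 * delta_i**2) + DJ**2 / (2 * delta_j**2)))
    alpha[H // 2, W // 2] = 0.0   # exclude the unshifted (positive) window
    return alpha / alpha.sum()    # one way to resolve the proportionality in Eq. 1
```

The resulting grid weights nearby offsets most heavily, matching the intuition that windows slightly off-target are the most informative negatives.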
(3) Generate the template w from the initial positive sample and the negative samples:

$$w = \sum_{\Delta i=-H/2}^{H/2}\sum_{\Delta j=-W/2}^{W/2}\alpha_{\Delta i\Delta j}\left(x - x_{\Delta i\Delta j}\right) \qquad (2)$$

The complexity of computing this template directly is very high, so the convolution operator h is introduced:

$$h_{\Delta i\Delta j} = \begin{cases}\displaystyle\sum_{\Delta i=-H/2}^{H/2}\sum_{\Delta j=-W/2}^{W/2}\alpha_{\Delta i\Delta j}-\alpha_{00} & \text{if } \Delta i = 0,\ \Delta j = 0\\ -\alpha_{\Delta i\Delta j} & \text{otherwise}\end{cases} \qquad (3)$$

A schematic of h is shown in Fig. 2. Consider the 2H × 2W rectangular region centered at (m0, n0), which contains the regions corresponding to all positive and negative samples; convolving this region with h conveniently yields the template w. The computational complexity can be reduced with the FFT and IFFT: first transform h and the region to the frequency domain with the FFT, then take the pointwise product, and finally transform back to the time domain with the IFFT to obtain w. When the hardware supports it, GPU acceleration can also be used.
(4) Compute the gradient of video frame t and use template w_{t-1} to search the current frame for the most similar region among all H × W regions; the distance function is defined as

$$d(w,y)=\sum_{i=1}^{H}\sum_{j=1}^{W}\left(w_{ij}-y_{ij}\right)^2 \qquad (4)$$

Directly computing the distance between the template and all H × W rectangular regions in frame t has complexity O(N²HW). To accelerate the computation, the distance can be decomposed as follows:

$$d(w,y)=\sum_{i=1}^{H}\sum_{j=1}^{W}\left(w_{ij}^2+y_{ij}^2-2w_{ij}y_{ij}\right)=\sum_{i=1}^{H}\sum_{j=1}^{W}w_{ij}^2+\sum_{i=1}^{H}\sum_{j=1}^{W}y_{ij}^2-2\sum_{i=1}^{H}\sum_{j=1}^{W}w_{ij}y_{ij} \qquad (5)$$

Here the first term is constant for the current frame; the second term can be obtained by convolving the current frame with an H × W unit matrix (the two-dimensional unit matrix can be split into one-dimensional vectors for fast convolution); and the third term can be obtained by convolving the template with the current frame. Using the FFT and IFFT greatly reduces the computational complexity: first transform w and the current frame to the frequency domain with the FFT, then take the pointwise product, and finally transform back to the time domain with the IFFT to obtain the distance at every pixel of the current frame. To further improve efficiency, the distance can be computed only within the H × W region centered at (m_{t-1}, n_{t-1}). When the hardware supports it, GPU acceleration can also be used.
(5) From the computed distances, obtain the positive and negative samples x_t^+ and x_t^-, where x_t^+ denotes the tracking result and x_t^- denotes the target region with the smallest distance other than x_t^+; use them to update the template w_t by the following formula:

$$w_t=(1-\beta)\,w_{t-1}+\beta\,h\otimes x_t^{+}-\beta\,h\otimes x_t^{-} \qquad (6)$$

where β ∈ (0,1) denotes the learning rate.

Add x_t^+ to the positive sample data set, set t = t + 1, and repeat steps (4) and (5) to iterate the tracking until (m_t, n_t) goes beyond the image range or the tracking is stopped manually.
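The online update of Eq. (6) is a simple blend; a one-line sketch (names ours), where `tpl_pos` and `tpl_neg` are assumed to be the templates already produced, via the h-convolution of step (3), from the new best and second-best windows:

```python
import numpy as np

def update_template(w_prev, tpl_pos, tpl_neg, beta=0.2):
    """Online template update (Eq. 6):
    w_t = (1 - beta) * w_{t-1} + beta * (h conv x_t^+) - beta * (h conv x_t^-).

    tpl_pos / tpl_neg: h-convolved templates of the new positive and negative
    samples; beta in (0, 1) is the learning rate.
    """
    return (1.0 - beta) * w_prev + beta * tpl_pos - beta * tpl_neg
```

Because only the newest pair of samples enters the update, no earlier samples need to be stored or retrained on, which is the efficiency point made in advantage (4) below.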
Sample annotation process:
(1) Detect the video using the learned template w to obtain several candidate regions;
(2) For each candidate region:
a) Generate a tracking template;
b) Compute the distance between each region in frame t and the template, as in step (4);
c) Generate positive and negative samples and update the template, as in step (5);
d) At any time, execute the following operations:
i. Manual judgment: remove erroneous tracking samples by manual annotation and retain correct samples;
ii. Automatic judgment: compute the distance between the sample sequence and the template, and remove samples whose distance is unstable;
e) If a sample is retained, continue tracking: set t = t + 1 and repeat (b), (c), (d) until the tracked region goes beyond the image range or the tracking is stopped manually;
(3) Obtain a group of templates {w_i, i = 1, …, K′} from the tracking results, where K′ is the number of retained positive sample sequences, and at the same time obtain a group of positive sample sequences;
(4) Randomly select one template from the template list W = {w} ∪ {w_i, i = 1, …, K′}, return to step (1), and repeat until stopped manually;
(5) Manually confirm the obtained positive sample sequences, as shown in Fig. 3, further removing false positive samples.
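A structural sketch of the annotation loop in steps (1) to (5), with the detector, tracker, and manual/automatic check abstracted as callables. All names and signatures here are illustrative assumptions, not the patent's API:

```python
import random

def annotate(videos, w0, detect, track, confirm):
    """Semi-automatic annotation loop (structural sketch).

    detect(video, template) -> candidate regions            (step 1)
    track(video, region)    -> (sample_seq, template)       (step 2)
    confirm(sample_seq)     -> bool: manual/automatic check (step d)
    Kept sequences grow both the positive set and the template pool (step 3),
    and a randomly chosen template from the pool seeds the next pass (step 4).
    """
    templates = [w0]
    positives = []
    for video in videos:
        w = random.choice(templates)          # step (4): pick a template at random
        for region in detect(video, w):       # step (1): detect candidate regions
            seq, wt = track(video, region)    # step (2): track each candidate
            if confirm(seq):                  # step (d): keep only correct tracks
                positives.append(seq)
                templates.append(wt)          # step (3): grow the template pool
    return positives, templates
```

The final manual confirmation of step (5) would run over the returned `positives` afterward; it is kept outside this sketch.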
Many tests have proved that, compared with the prior art, the method of the present invention has the following remarkable advantages:
(1) The method directly uses pixel-level features rather than extracted features such as histograms, which avoids the time cost of feature extraction while describing more discriminative information in the image region, such as shape and spatial structure. The gradient is used as the feature here, but the mechanism described is applicable to any pixel feature; for example, grayscale or RGB three-channel color features can be used directly.
(2) For each positive sample, the method generates negative samples by offsetting, which makes the computed template more robust. Meanwhile, by designing the convolution operator h, as shown in step (3), a series of negative samples can be generated quickly from the extended positive and negative samples, with high computational efficiency.
(3) The method uses a distance decomposition, as shown in step (4), that converts the computationally expensive Euclidean distance into two convolutions, which can then be accelerated with the FFT and IFFT; when the hardware supports it, GPU acceleration can also be used, giving higher computation speed and efficiency.
(4) The method designs an online template update mechanism, as shown in step (5): the positive and negative samples generated at each tracking step directly update the tracking template, without retraining on all previously produced positive and negative samples, so training is simpler and more efficient.
(5) With a single annotation, tracking can produce a large number of image samples. A video typically contains 30 frames per second, so 30 seconds of tracking can produce 900 samples; compared with the prior art, a large number of annotated samples can be obtained with little manual effort.
As for step (5) of the annotation process, since the image samples are obtained by tracking, they are continuous, as shown in Fig. 3. Therefore, each sequence only needs to be divided into two segments, with the front segment as positive samples and the back segment as negative samples, so each sequence requires only one manual annotation and the cost of manual annotation is small.

Claims (4)

1. A semi-automatic image annotation sample generating method based on target tracking, characterized in that it comprises two processes whose steps are as follows:
Object tracking process:
(11) manually annotating a region of interest in the initial frame as the region to be tracked;
(12) generating initial positive and negative samples from the annotated region;
(13) generating an initial template from the positive and negative samples;
(14) searching the next frame for the region most similar to the template;
(15) updating the template according to the tracking result;
(16) returning to step (14) and iterating the tracking;
Annotation process:
(21) detecting video frames using the learned templates;
(22) tracking the detected candidate regions;
(23) classifying or manually annotating the tracked sequences, retaining positive sample sequences and removing erroneous tracking sequences;
(24) saving the positive sample sequences and their templates;
(25) using the templates obtained from tracking, returning to step (21) and iterating;
(26) manually confirming all image sequences produced by tracking;
In step (11), a region of interest is manually annotated in the initial frame as the region to be tracked: given a video with resolution N × N, the gradient of the initial frame is computed and an H × W rectangular object region with center coordinate (m0, n0) is manually marked as the initial positive sample x0;
In step (12), negative samples are selected based on the initial positive sample x0: the rectangular object region is offset to generate negative samples x_{ΔiΔj}, where Δi ∈ [-H/2, 0) ∪ (0, H/2] and Δj ∈ [-W/2, 0) ∪ (0, W/2] denote the offsets of the abscissa and ordinate; α_{ΔiΔj} is defined as the weight of negative sample x_{ΔiΔj}, using a Gaussian function:
$$\alpha_{\Delta i\Delta j} = g(\Delta i,\Delta j) \propto \exp\!\left(-\left(\frac{\Delta i^2}{2\delta_i^2}+\frac{\Delta j^2}{2\delta_j^2}\right)\right) \qquad (1)$$
In step (13), the tracking template w is generated from the initial positive sample and the negative samples:
$$w = \sum_{\Delta i=-H/2}^{H/2}\sum_{\Delta j=-W/2}^{W/2}\alpha_{\Delta i\Delta j}\left(x - x_{\Delta i\Delta j}\right) \qquad (2)$$
The complexity of computing this template directly is very high, so h is introduced:
$$h_{\Delta i\Delta j} = \begin{cases}\displaystyle\sum_{\Delta i=-H/2}^{H/2}\sum_{\Delta j=-W/2}^{W/2}\alpha_{\Delta i\Delta j}-\alpha_{00} & \text{if } \Delta i = 0,\ \Delta j = 0\\ -\alpha_{\Delta i\Delta j} & \text{otherwise}\end{cases} \qquad (3)$$
The 2H × 2W rectangular region centered at (m0, n0) contains the regions corresponding to all positive and negative samples; convolving this region with h yields the tracking template w. h and the region are transformed to the frequency domain with the FFT, the pointwise product is taken, and the result is transformed back to the time domain with the IFFT to obtain w; when the hardware supports it, GPU acceleration is used.
2. The semi-automatic image annotation sample generating method based on target tracking as claimed in claim 1, characterized in that:
in step (14), the gradient of video frame t is computed, and template w_{t-1} is used to search the current frame for the most similar region among all H × W regions, with the distance function defined as
$$d(w,y)=\sum_{i=1}^{H}\sum_{j=1}^{W}\left(w_{ij}-y_{ij}\right)^2 \qquad (4)$$
Instead of computing the Euclidean distance directly, a split-convolution method is used:
$$d(w,y)=\sum_{i=1}^{H}\sum_{j=1}^{W}\left(w_{ij}^2+y_{ij}^2-2w_{ij}y_{ij}\right)=\sum_{i=1}^{H}\sum_{j=1}^{W}w_{ij}^2+\sum_{i=1}^{H}\sum_{j=1}^{W}y_{ij}^2-2\sum_{i=1}^{H}\sum_{j=1}^{W}w_{ij}y_{ij} \qquad (5)$$
Here the first term is constant for the current frame; the second term is obtained by convolving the current frame with an H × W unit matrix (the two-dimensional unit matrix is split into one-dimensional vectors for fast convolution); and the third term is obtained by convolving the template with the current frame. First w and the current frame are transformed to the frequency domain with the FFT, then the pointwise product is taken, and finally the result is transformed back to the time domain with the IFFT to obtain the distance at every pixel of the current frame; to further improve efficiency, the distance is computed only within the H × W region centered at (m_{t-1}, n_{t-1}); when the hardware supports it, GPU acceleration is further used.
3. The semi-automatic image annotation sample generating method based on target tracking as claimed in claim 2, characterized in that: in step (15), the positive and negative samples x_t^+ and x_t^- are obtained from the computed distances, where x_t^+ denotes the tracking result and x_t^- denotes the target region with the smallest distance other than x_t^+; they are used to update the template w_t by the following formula:
$$w_t=(1-\beta)\,w_{t-1}+\beta\,h\otimes x_t^{+}-\beta\,h\otimes x_t^{-} \qquad (6)$$
where β ∈ (0,1) denotes the learning rate;
x_t^+ is added to the positive sample data set, t = t + 1, and steps (14) and (15) are repeated to iterate the tracking until (m_t, n_t) goes beyond the image range or the tracking is stopped manually.
4. The semi-automatic image annotation sample generating method based on target tracking as claimed in claim 3, characterized in that the annotation process comprises:
(1) detecting the video using the learned template w to obtain several candidate regions;
(2) for each candidate region:
a) generating a tracking template;
b) computing the distance between each region in frame t and the template, as in step (14);
c) generating positive and negative samples and updating the template, as in step (15);
d) at any time, executing the following operations:
i. manual judgment: removing erroneous tracking samples by manual annotation and retaining correct samples;
ii. automatic judgment: computing the distance between the sample sequence and the template, and removing samples whose distance is unstable;
e) if a sample is retained, continuing tracking: t = t + 1, repeating (b), (c), (d) until the tracked region goes beyond the image range or the tracking is stopped manually;
(3) obtaining a group of templates {w_i, i = 1, …, K′} from the tracking results, K′ being the number of retained positive sample sequences, and at the same time obtaining a group of positive sample sequences;
(4) randomly selecting one template from the template list W = {w} ∪ {w_i, i = 1, …, K′}, returning to step (1), and repeating until stopped manually;
(5) manually confirming the obtained positive sample sequences to further remove false positive samples.
CN201310511762.3A 2013-10-25 2013-10-25 Semi-automatic image annotation sample generating method based on target tracking Active CN103559237B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310511762.3A CN103559237B (en) 2013-10-25 2013-10-25 Semi-automatic image annotation sample generating method based on target tracking


Publications (2)

Publication Number Publication Date
CN103559237A CN103559237A (en) 2014-02-05
CN103559237B true CN103559237B (en) 2017-02-15

Family

ID=50013484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310511762.3A Active CN103559237B (en) 2013-10-25 2013-10-25 Semi-automatic image annotation sample generating method based on target tracking

Country Status (1)

Country Link
CN (1) CN103559237B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11776292B2 (en) 2020-12-17 2023-10-03 Wistron Corp Object identification device and object identification method

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850832B (en) * 2015-05-06 2018-10-30 中国科学院信息工程研究所 A kind of large-scale image sample mask method and system based on classification iteration
CN106650565A (en) * 2016-08-31 2017-05-10 刘杰杰 Mobile Internet intelligent-terminal electronic evidence obtaining platform
CN107203755B (en) * 2017-05-31 2021-08-03 中国科学院遥感与数字地球研究所 Method, device and system for automatically adding new time sequence mark samples of remote sensing images
US10592786B2 (en) * 2017-08-14 2020-03-17 Huawei Technologies Co., Ltd. Generating labeled data for deep object tracking
CN108288037B (en) * 2018-01-19 2021-08-06 深圳禾思众成科技有限公司 Tire code identification system
TWI666595B (en) 2018-02-26 2019-07-21 財團法人工業技術研究院 System and method for object labeling
CN108596958B (en) * 2018-05-10 2021-06-04 安徽大学 Target tracking method based on difficult positive sample generation
CN109034247B (en) * 2018-07-27 2021-04-23 北京以萨技术股份有限公司 Tracking algorithm-based higher-purity face recognition sample extraction method
CN108986134B (en) * 2018-08-17 2021-06-18 浙江捷尚视觉科技股份有限公司 Video target semi-automatic labeling method based on related filtering tracking
CN109359558B (en) * 2018-09-26 2020-12-25 腾讯科技(深圳)有限公司 Image labeling method, target detection method, device and storage medium
CN109376621A (en) * 2018-09-30 2019-02-22 北京七鑫易维信息技术有限公司 A kind of sample data generation method, device and robot
CN109766830B (en) * 2019-01-09 2022-12-27 深圳市芯鹏智能信息有限公司 Ship target identification system and method based on artificial intelligence image processing
CN111444746B (en) * 2019-01-16 2024-01-30 北京亮亮视野科技有限公司 Information labeling method based on neural network model
CN109753975B (en) * 2019-02-02 2021-03-09 杭州睿琪软件有限公司 Training sample obtaining method and device, electronic equipment and storage medium
CN110210328B (en) * 2019-05-13 2020-08-07 北京三快在线科技有限公司 Method and device for marking object in image sequence and electronic equipment
CN110189333B (en) * 2019-05-22 2022-03-15 湖北亿咖通科技有限公司 Semi-automatic marking method and device for semantic segmentation of picture
CN110782005B (en) * 2019-09-27 2023-02-17 山东大学 Image annotation method and system for tracking based on weak annotation data
CN111738353B (en) * 2020-07-17 2020-12-04 平安国际智慧城市科技股份有限公司 Sample screening method and device and computer equipment
CN112383734B (en) * 2020-10-29 2023-06-23 岭东核电有限公司 Video processing method, device, computer equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722724A (en) * 2012-05-30 2012-10-10 广东好帮手电子科技股份有限公司 Vehicle-mounted night view system having target identification function and target identification method thereof
CN102945554A (en) * 2012-10-25 2013-02-27 西安电子科技大学 Target tracking method based on learning and speeded-up robust features (SURFs)
CN103325125A (en) * 2013-07-03 2013-09-25 北京工业大学 Moving target tracking method based on improved multi-example learning algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4618098B2 (en) * 2005-11-02 2011-01-26 ソニー株式会社 Image processing system



Also Published As

Publication number Publication date
CN103559237A (en) 2014-02-05


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant