CN116152606A - Sample generation method, target detection model training, target detection method and system - Google Patents

Sample generation method, target detection model training, target detection method and system

Info

Publication number
CN116152606A
Authority
CN
China
Prior art keywords
sample
target
picture
target detection
detection model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310210125.6A
Other languages
Chinese (zh)
Inventor
Name withheld at inventor's request
项载蔚
冀春锟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Shibite Robot Co Ltd
Original Assignee
Hunan Shibite Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Shibite Robot Co Ltd filed Critical Hunan Shibite Robot Co Ltd
Publication of CN116152606A publication Critical patent/CN116152606A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a sample generation method, a target detection model training method, a target detection method, and a target detection system, wherein the sample generation method comprises the following steps: 1, training based on an original sample set to obtain a target detection model; 2, acquiring a candidate sample set comprising N candidate sample pictures to be detected; 3, outputting the target information of each place on each candidate sample picture based on the candidate sample set and the target detection model; 4, screening target sample pictures out of the candidate sample set based on the output information of step 3, and determining the target positions and category information of all positions on each target sample picture; and 5, outputting the target sample pictures, together with the target position and category information of each place on the pictures, as training samples. According to the invention, samples for training the target detection model are automatically generated from the unlabeled candidate sample set, so that labor cost is greatly reduced; through continuous iterative learning, the data adaptability of the target detection model to the online workflow is continuously improved, and the accuracy of the target detection results is improved.

Description

Sample generation method, target detection model training, target detection method and system
Technical Field
The present invention relates to the field of machine learning technologies, and in particular, to a sample generation method, a target detection model training method, a target detection method and a target detection system.
Background
In real-world situations, it is often necessary to perform target detection on various objects, such as steel plates and component detection in a factory pipeline, continuous vehicle detection on a highway, pedestrian detection on a sidewalk, and the like.
Existing target detection methods mainly study optimization on a static data set: a target detection model is obtained by training on the static data set. As a result, existing target detection models have low detection precision on continuously changing data, and the collected images still need to be continuously identified and labeled manually to generate training samples, after which the target detection model is retrained and redeployed to improve its detection precision.
Generating samples by manual identification and labeling consumes a large amount of manpower and material resources, and both the sample labeling efficiency and the model training efficiency are low. In addition, because omissions and subjectivity are unavoidable in the manual labeling process, the accuracy of the target detection results is affected.
Disclosure of Invention
In order to solve the problem in the prior art that a large number of samples need to be manually labeled to train a new target detection model, the invention provides a sample generation method, a target detection model training method, a target detection method, and a target detection system, which can continuously and automatically generate training samples for a target detection model, greatly reduce labor cost, and continuously improve the accuracy of target detection results.
In order to solve the technical problems, the invention adopts the following technical scheme:
the sample generation method is characterized by comprising the following steps of:
step 1, training based on an original sample set to obtain a target detection model, wherein the original sample set comprises a plurality of original sample pictures, and the output of the target detection model is target detection information of each place on each original sample picture;
step 2, acquiring a candidate sample set, wherein the candidate sample set comprises N candidate sample pictures to be detected;
step 3, outputting target information of each candidate sample picture based on the candidate sample set and the target detection model;
step 4, screening out target sample pictures from the candidate sample set based on the output information of the step 3, and determining target positions and category information of all positions on each target sample picture;
and 5, outputting the target sample picture and the target position and category information of each part on the picture as training samples.
By means of the method, the unlabeled candidate sample set is utilized to automatically generate the sample for training the target detection model, so that labor cost is greatly reduced, and accuracy of a target detection result is continuously improved.
As a preferred mode, the target detection information of each place on the picture comprises whether an object to be detected exists in each place on the picture and the object class probability; in the step 3, for each candidate sample picture, the process of outputting the target information of each place on the candidate sample picture includes:
step 301, performing multiple reversible transformation processes (such as adjusting illumination, rotation, scaling, flipping, etc.) on the candidate sample picture to obtain m different pictures P;
step 302, inputting the m different pictures P into a target detection model, and outputting whether targets to be detected and target class probabilities exist everywhere on each picture P;
step 303, calculating target positions and target category probabilities of all positions on the candidate sample pictures corresponding to the m pictures P based on the output result of the step 302;
step 304, based on the calculation result of step 303, obtaining final target positions and target category probability information of each place on the candidate sample picture.
By means of the method, in the sample labeling process, the candidate sample pictures are subjected to augmentation processing to obtain a plurality of pictures, the target detection information corresponding to each picture is obtained through inference with the existing detection model, and the target detection information on the original candidate sample picture is finally determined by comprehensively considering the target detection information of each picture, so that the sample labeling accuracy is improved.
As a preferred mode, the step 4 includes:
step 401, taking the probability of each object category on the candidate sample picture output in step 303 as a confidence score;
step 402, setting a first preset threshold value and a second preset threshold value, wherein the first threshold value is smaller than the second threshold value; regarding candidate sample pictures whose confidence falls between the first preset threshold and the second preset threshold as useless samples and temporarily discarding them; reserving candidate sample pictures whose confidence is lower than the first preset threshold or higher than the second preset threshold as target sample pictures to be output; setting each position on the candidate sample picture whose confidence is lower than the first preset threshold as background, setting each position whose confidence is higher than the second preset threshold as a target object, and setting an object bounding box there;
step 403, obtaining object class probabilities of positions corresponding to object boundary boxes on candidate sample pictures corresponding to the pictures P, and taking the maximum value as the object class probability output at the object boundary boxes.
In a preferred manner, in the step 303, the probability of each object class on each picture P is averaged to be used as the probability of each object class on the candidate sample picture.
In a preferred manner, in the step 2, N candidate sample pictures of the region to be detected are obtained through continuous real-time image acquisition of the region to be detected. As another preferred mode, N candidate sample pictures of the region to be detected are obtained from the historical stored image set of the region to be detected.
As a preferable mode, the processing includes a position flipping process, a picture brightness/contrast change process, a reduction process, or an enlargement process.
Based on the same inventive concept, the invention also provides a target detection model training method, which is characterized in that the target detection model is trained by using the training sample generated by the sample generation method, and an updated target detection model is obtained. According to the method, through continuous iterative learning, the data adaptability of the target detection model to the online workflow can be continuously improved, and the target detection accuracy is improved.
Preferably, the invention also provides another target detection model training method, which is characterized in that the original sample set and the training sample generated by the sample generating method are utilized to train the target detection model, and the updated target detection model is obtained.
Based on the same inventive concept, the invention also provides a target detection model, which is characterized in that the target detection model is continuously self-learned and updated by the target detection model training method.
Based on the same inventive concept, the invention also provides a target detection method, which is characterized in that the target detection model is utilized to carry out target detection on the picture to be detected.
Based on the same inventive concept, the invention also provides a target detection system, which is characterized by comprising an image acquisition unit, a model training unit and the target detection model, wherein:
an image acquisition unit: the method comprises the steps that a picture to be detected is collected, one part of the picture to be detected is used as a candidate sample picture to generate a training sample set to train and update a target detection model, and the other part of the picture to be detected is used for being identified by the target detection model to output a target detection result;
model training unit: for training the target detection model based on the generated training sample set to update the target detection model.
Compared with the prior art, the method utilizes the unmarked candidate sample set to automatically generate the sample for training the target detection model, thereby greatly reducing the labor cost; meanwhile, through continuous iterative learning, the data adaptability of the target detection model to the online workflow is continuously improved, and the accuracy of the target detection result is improved.
Drawings
Fig. 1 is a layout diagram of an image acquisition unit according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a training method for a target detection model according to an embodiment of the invention.
In fig. 1, 1 is an image acquisition unit, 101 is an online acquisition camera, 102 is a local memory, 103 is a data storage center, and 104 is a data transmission module.
Detailed Description
In order to make the person skilled in the art better understand the solution of the present invention, the technical solution of the embodiment of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiment. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
The terms "first", "second" and the like in the description, in the claims, and in the above-described figures are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprising" and "having", and any variations thereof, as used in the description and claims, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to the invention, detection target frames are automatically generated and updated for continuously input picture data, thereby achieving continuous self-learning on a continuous data stream.
In one embodiment, the sample generation method provided by the invention comprises the following steps:
Step 1, based on the original sample set D_A, training to obtain the target detection model M_A and deploying it to the production environment. The original sample set comprises a plurality of original sample pictures, and the output of the target detection model is the target detection information of each place on each original sample picture. The original sample set may be an initial manually annotated sample set; it may also be obtained by other means, which is not limited herein.
Step 2, obtaining a candidate sample set D_B, wherein the candidate sample set comprises N candidate sample pictures P_i to be detected, where i is an integer, 1 ≤ i ≤ N, and P_i represents the i-th candidate sample picture to be detected;
Step 3, predicting and outputting the target information of each place on each candidate sample picture P_i based on the candidate sample set and the target detection model;
Step 4, screening target sample pictures out of the candidate sample set D_B based on the prediction output information of step 3, and determining the target positions and category information of all positions on each target sample picture;
and 5, outputting the target sample picture and the target position and category information of each part on the picture as training samples.
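The five steps above can be sketched as a single self-labeling pass (a minimal illustration only; the detection interface, the threshold values, and all function names are assumptions, not from the patent):

```python
def generate_training_samples(model, candidate_pictures, beta=0.3, alpha=0.9):
    """Steps 2-5: screen unlabeled candidate pictures into self-labeled samples.

    `model(picture)` is assumed to return a list of (box, class_probs)
    detections; the names and default thresholds are illustrative.
    """
    training_samples = []
    for picture in candidate_pictures:                  # step 2: candidate set D_B
        detections = model(picture)                     # step 3: predict everywhere
        confidences = [max(probs) for _, probs in detections]
        # step 4: discard confusing pictures (any confidence strictly between
        # the two thresholds); keep only unambiguous ones
        if any(beta < c < alpha for c in confidences):
            continue
        labels = [(box, probs.index(max(probs)))        # high-confidence foreground
                  for box, probs in detections if max(probs) >= alpha]
        training_samples.append((picture, labels))      # step 5: output sample
    return training_samples
```

In the loop of the invention, the output of such a pass would be fed back into training and the pass repeated with the updated model.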
According to the invention, samples for training the target detection model are automatically generated from the unlabeled candidate sample set, so that labor cost is greatly reduced and the accuracy of the target detection results is continuously improved.
In some embodiments, the target detection information of each place on the picture comprises whether an object to be detected exists at each place on the picture and the object class probability; in the step 3, for each candidate sample picture P_i, the process of outputting the target information of each place on P_i includes:
step 301, performing augmentation processing on the candidate sample picture P_i to obtain m different pictures P_i^j, where j is an integer, 1 ≤ j ≤ m, and P_i^j represents the j-th transformed picture corresponding to the i-th candidate sample picture to be detected;
step 302, inputting the m differently transformed pictures P_i^j into the target detection model M_A; the model M_A predicts each picture P_i^j and outputs, for each picture P_i^j, whether an object to be detected exists at each place together with the object class probability, yielding the object class probability matrix Tcls_i^j and the object position matrix Tbox_i^j of each place on P_i^j;
step 303, based on the output result of step 302, applying the inverse of the processing operation in step 301 to the m differently transformed pictures P_i^j to restore the predictions to the original image positions, averaging the m prediction results, and outputting whether an object to be detected exists at each place on the candidate sample picture P_i, the object class probability Tcls_i, and the object bounding-box position Tbox_i, thereby obtaining the most accurate prediction result.
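Steps 301 to 303 amount to test-time augmentation with reversible transforms: predict on each transformed copy, invert the transform on the predictions, and average. A minimal sketch with horizontal flip as the single transform (the per-pixel probability-map model interface is an assumption, not the patent's exact representation):

```python
import numpy as np

def tta_class_probs(model, image):
    """Average class-probability maps over the identity and a horizontal flip.

    `model(img)` is assumed to return an (H, W, num_classes) probability map
    aligned with `img`; flipping the output undoes the input flip (step 303).
    """
    p_orig = model(image)                    # prediction on the original picture
    p_flip = model(image[:, ::-1])           # prediction on the flipped copy
    p_flip_restored = p_flip[:, ::-1]        # inverse transform: back to original coords
    return (p_orig + p_flip_restored) / 2.0  # average of m = 2 predictions
```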
In the sample labeling process, the candidate sample pictures are processed to obtain the pictures of different transformations of the candidate sample pictures, the target detection information of each place on the different transformations of the candidate sample pictures is obtained, and the target detection information of each place on each transformation is comprehensively considered to finally determine the target detection information of the original candidate sample pictures, so that the sample labeling accuracy is improved.
In some embodiments, the step 4 includes:
step 401, taking the object class probabilities Tcls of each place on the candidate sample picture output in step 303 as confidence scores, and setting a first preset threshold β and a second preset threshold α, where α > β;
step 402, reserving candidate sample pictures whose confidences are all either lower than the first preset threshold β or higher than the second preset threshold α as target sample pictures to be output; setting each position on the candidate sample picture whose confidence is lower than the first preset threshold as background, setting each position whose confidence is higher than the second preset threshold as a target object and placing an object bounding box there, and discarding candidate sample pictures whose confidence falls between the first and second preset thresholds. Specifically, for Tcls, if any position has a confidence c with β < c < α, the picture is rejected as a confusing sample. In the remaining candidate samples, each position with confidence lower than the first preset threshold β is set as background, each position with confidence higher than the second preset threshold α is set as foreground and the corresponding position is set as an object bounding box; finally, the prediction results of all samples are fused to generate the self-labeling data set D_LB;
Step 403, obtaining object class probabilities of positions corresponding to object boundary boxes on candidate sample pictures corresponding to the pictures P, and taking the maximum value as the object class probability output at the object boundary boxes.
In some embodiments, in step 303, the object class probabilities of each picture P are averaged to serve as object class probabilities of each candidate sample picture.
In some embodiments, in the step 2, N candidate sample pictures of the region to be detected are obtained by continuous real-time image acquisition of the region to be detected. In other embodiments, N candidate sample pictures of the region to be detected are obtained from a historical stored image set of the region to be detected.
In some embodiments, the processing includes a position flipping process, a picture brightness/contrast change process, a reduction process, or an enlargement process. For example, each candidate sample picture P_i is subjected to multiple augmentations, such as left-right flipping, up-down flipping, shrinking to 0.5 times the original picture, and enlarging to 1.5 times the original picture, to obtain a plurality of pictures including the original candidate sample picture; the model M_A then predicts the target detection results of these pictures in parallel to obtain whether an object to be detected exists at the corresponding positions on each picture and the object class probability. In such an embodiment, it is then necessary to obtain, for the plurality of pictures, whether an object to be detected exists at each place on the corresponding original picture together with the object class probability, and then take the average of the object class probabilities at the corresponding positions of the plurality of pictures to obtain the most accurate average prediction probability.
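Before the averaging described above, each box predicted on a transformed picture must be mapped back through the inverse transform. A sketch of the inverse mapping for a horizontal flip and a uniform scale (the (x1, y1, x2, y2) coordinate convention and the function names are assumptions):

```python
def invert_hflip_box(box, image_width):
    """Map a box predicted on a horizontally flipped picture back to
    original-image coordinates. box = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return (image_width - x2, y1, image_width - x1, y2)

def invert_scale_box(box, scale):
    """Map a box predicted on a picture scaled by `scale` (e.g. 0.5 or 1.5)
    back to original-image coordinates."""
    return tuple(v / scale for v in box)
```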
In some embodiments, the present invention provides a training method for a target detection model, which trains the target detection model by using training samples generated by the sample generation method, and obtains an updated target detection model. According to the method, through continuous iterative learning, the data adaptability of the target detection model to the online workflow can be continuously improved, and the target detection accuracy is improved.
In a more preferred embodiment, as shown in FIG. 2, the present invention also provides another object detection model training method that utilizes a manually labeled dataset (the original sample set D_A) and the self-labeling training samples D_LB generated by the sample generation method to train the target detection model M_A and obtain an updated target detection model. In some embodiments, the manually labeled dataset and the self-labeling training sample set are sampled at a 1:1 ratio and used together to continuously train the target detection model. Meanwhile, methods such as cropping, zooming, color change, brightness change, mosaic, and picture mixing can be adopted to enrich the variety of pictures, strengthen the fusion of the original sample dataset and the new self-labeling sample dataset, and improve the comprehensive learning capacity over the two datasets. The fused sample set is used to continue training the target detection model M_A until the model converges, yielding a new model M_B. After the original model M_A is updated with the new model M_B, the model M_A is deployed to the production environment, and steps 2 to 5 are executed continuously, so that automatically labeled samples are obtained again and the model is trained and updated again; the model thus keeps self-learning until the model loss converges.
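The 1:1 fusion sampling of the manually labeled set D_A and the self-labeled set D_LB can be sketched as drawing half of each batch from each set (batch size, batch count, and the shuffling policy are assumptions, not from the patent):

```python
import random

def mixed_batches(manual_set, self_labeled_set, batch_size=8, num_batches=100):
    """Yield training batches sampled 1:1 from the manually labeled
    dataset D_A and the self-labeled dataset D_LB."""
    half = batch_size // 2
    for _ in range(num_batches):
        batch = random.sample(manual_set, half) + random.sample(self_labeled_set, half)
        random.shuffle(batch)  # interleave the two sources within the batch
        yield batch
```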
The methods of cropping, zooming, color change, brightness change, mosaic, and picture mixing used to fuse the pictures belong to the prior art and are not described in detail herein; this does not affect the understanding and implementation of the present invention by those skilled in the art.
In some embodiments, the invention further provides a target detection model, and the target detection model is continuously self-learned and updated through the target detection model training method. After the target detection model is trained and updated, deploying the updated target detection model into a production environment.
In some embodiments, the invention further provides a target detection method, which uses the target detection model to perform target detection on the picture to be detected.
In some embodiments, the present invention further provides an object detection system, which includes an image acquisition unit, a model training unit, and the object detection model, wherein:
an image acquisition unit: the method comprises the steps of acquiring pictures to be detected, wherein one part of the pictures to be detected is used as candidate sample pictures to generate a training sample set to train and update a target detection model, and the other part of the pictures to be detected is used for being identified by the target detection model to output a target detection result.
Model training unit: for training the target detection model based on the generated training sample set to update the target detection model.
In some embodiments, the object detection system further comprises an online model deployment module for deploying the image acquisition unit, the model training unit, and the object detection model at desired locations.
In some embodiments, the image acquisition unit employs a data collection unit that is responsible for collecting picture data from the continuous data stream. As shown in fig. 1, in such an embodiment, the image acquisition unit 1 includes an online acquisition camera 101, a local memory 102, a data storage center 103, and a data transmission module 104. The direction indicated by the hollow arrow in fig. 1 is the continuous feeding direction of the workpieces during production, and the diamonds and crosses represent different kinds of target workpieces to be identified.
The image acquisition unit 1 works as follows:
first, the online acquisition camera 101 photographs an area to be detected, and after photographing is completed, the output picture is stored in the local memory 102 (such as a local disk).
Then, the data transmission module 104 reads the picture from the local memory 102, and transmits the picture back to the data storage center 103 through a network or the like, and the data storage center 103 stores the received picture data according to the receiving time, denoted by D.
The picture data D stored in the data storage center 103 can be used as the candidate sample pictures D_B and, after automatic labeling, finally serve as training samples; it can also be used as the initial manually annotated data set D_A.
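The acquisition pipeline described above (camera, local memory, data transmission module, storage center keyed by receive time) can be sketched as follows (the class and method names are illustrative, not from the patent):

```python
import time
from collections import OrderedDict

class DataStorageCenter:
    """Minimal stand-in for the data storage center 103: stores received
    picture data keyed by receive time, as described above."""

    def __init__(self):
        self.pictures = OrderedDict()  # receive_time -> picture data D

    def receive(self, picture, receive_time=None):
        t = time.time() if receive_time is None else receive_time
        self.pictures[t] = picture
        return t

    def since(self, t0):
        """Pictures received at or after t0 (e.g. to build a candidate set D_B)."""
        return [p for t, p in self.pictures.items() if t >= t0]
```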
The method can be used for object detection scenes with a continuous stream of samples. Label generation can make full use of the unlabeled samples on a production line to automatically generate target detection samples, and the sample set and the target detection model can be updated continuously in a loop, which greatly reduces labor cost, continuously improves the data adaptability of the target detection model to the online workflow, continuously optimizes the model to adapt to new data, and improves the accuracy of the target detection results. The invention is particularly suitable for scenes with continuous data, such as workpiece detection on a factory production line (e.g. steel plate and component detection), vehicle detection on a road, and pedestrian detection on a sidewalk.
As shown in table 1, the experimental comparative effects are as follows:
the production line data of a certain factory is used as an experimental object, 5000 pieces of manual labeling data of a history are used as an original sample set for starting, and in the online data of 3, 4 and 5 months, recovered partial data is used as a test set, and for an initial target detection model (Baseline model), the average accuracy rate (MAP) of the whole class is only 94.1. And after the manually marked data is added, the average accuracy (MAP) of the whole class is improved to 95. After the target detection model is continuously trained by adopting the method, the average accuracy (MAP) of the whole class is improved to 97.87, the effect is far better than that of the original basic target detection model, and the effect is also better than that of the detection model obtained by training a large number of manual labeling samples, and the result shows that the method has obvious effect of improving the average accuracy (MAP) of the whole class.
Table 1 Experimental comparison results (mAP denotes the all-class mean average precision)
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are all within the scope of the present invention.

Claims (10)

1. A sample generation method, comprising the steps of:
step 1, training a target detection model based on an original sample set, wherein the original sample set comprises a plurality of original sample pictures, and the target detection model outputs target detection information for each location on each original sample picture;
step 2, acquiring a candidate sample set, wherein the candidate sample set comprises N candidate sample pictures to be detected;
step 3, outputting target information for each candidate sample picture based on the candidate sample set and the target detection model;
step 4, screening target sample pictures out of the candidate sample set based on the output information of step 3, and determining the target position and category information for each location on each target sample picture;
and step 5, outputting the target sample pictures, together with the target position and category information for each location on each picture, as training samples.
2. The sample generation method according to claim 1, wherein the target detection information for each location on a picture comprises whether an object to be detected is present and the object class probability at that location;
in step 3, for each candidate sample picture, the process of outputting the target information for each location on the candidate sample picture comprises:
step 301, applying a plurality of reversible transformations to the candidate sample picture to obtain m different pictures P;
step 302, inputting the m different pictures P into the target detection model, and outputting, for each picture P, whether an object to be detected is present and the object class probability at each location;
step 303, calculating, based on the output result of step 302, the target positions and object class probabilities at each location on the candidate sample picture corresponding to the m pictures P;
step 304, obtaining, based on the calculation result of step 303, the final target position and object class probability information for each location on the candidate sample picture.
3. The sample generation method according to claim 2, wherein step 4 comprises:
step 401, taking the object class probabilities at each location on the candidate sample picture output in step 303 as confidence scores;
step 402, retaining, as target sample pictures to be output, all candidate sample pictures whose confidence is lower than a first preset threshold or higher than a second preset threshold; setting locations on a candidate sample picture whose confidence is lower than the first preset threshold as background, setting locations whose confidence is higher than the second preset threshold as target objects with object bounding boxes, and discarding candidate sample pictures whose confidence lies between the first and the second preset thresholds;
step 403, obtaining the object class probabilities at the locations corresponding to each object bounding box on the candidate sample pictures corresponding to the pictures P, and taking the maximum value as the object class probability output at that object bounding box.
4. The sample generation method according to claim 2 or 3, wherein in step 303, the object class probabilities on the m pictures P are averaged to obtain the object class probabilities at each location on the candidate sample picture.
5. The sample generation method according to any one of claims 1 to 3, wherein in step 2, the N candidate sample pictures of the region to be detected are obtained by continuous real-time image acquisition of the region to be detected, or are obtained from a historical image store of the region to be detected.
6. The sample generation method according to claim 2 or 3, wherein in step 301, the reversible transformations comprise position flipping, picture brightness/contrast adjustment, reduction, or enlargement.
7. A method for training a target detection model, characterized in that
the target detection model is trained with the training samples generated by the sample generation method of any one of claims 1 to 6 to obtain an updated target detection model;
or, the target detection model is trained with both the original sample set and the training samples generated by the sample generation method of any one of claims 1 to 6 to obtain an updated target detection model.
8. A target detection model, characterized in that the target detection model is continuously updated through self-learning by the target detection model training method according to claim 7.
9. A target detection method, characterized in that target detection is performed on a picture to be detected using the target detection model according to claim 8.
10. A target detection system, comprising an image acquisition unit, a model training unit, and the target detection model according to claim 8, wherein:
the image acquisition unit is configured to acquire pictures to be detected, one part of which is used as candidate sample pictures to generate a training sample set for training and updating the target detection model, and the other part of which is recognized by the target detection model to output target detection results; and
the model training unit is configured to train the target detection model based on the generated training sample set, so as to update the target detection model.
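Taken together, claims 2–4 and 6 describe test-time augmentation over reversible transforms followed by the dual-threshold screening of claim 3. The sketch below illustrates the idea in plain Python under stated assumptions: a horizontal flip as the reversible transform, per-class probabilities averaged across the m views, and illustrative threshold values; all function names and numbers are hypothetical, not the patent's implementation:

```python
def flip_box_back(box, image_width):
    """Map a bounding box predicted on a horizontally flipped picture back
    to the original picture's coordinates (the 'reversible' property that
    makes the transforms of claim 6 usable for pseudo-labeling)."""
    x1, y1, x2, y2 = box
    return (image_width - x2, y1, image_width - x1, y2)

def screen_location(per_view_probs, low=0.3, high=0.8):
    """Average the class probabilities predicted for one location across the
    m transformed views, then apply a dual-threshold screen. Returns
    'background', 'discard', or ('target', class_id, confidence)."""
    m = len(per_view_probs)
    n_classes = len(per_view_probs[0])
    avg = [sum(view[c] for view in per_view_probs) / m for c in range(n_classes)]
    confidence = max(avg)                        # max class probability as score
    if confidence < low:
        return 'background'                      # below the first threshold
    if confidence > high:
        best = max(range(n_classes), key=avg.__getitem__)
        return ('target', best, confidence)      # above the second threshold
    return 'discard'                             # ambiguous band: drop picture
```

With the illustrative thresholds (0.3, 0.8), a location whose averaged confidence is 0.55 falls in the ambiguous band, so its picture would be discarded rather than risk a noisy pseudo-label.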
CN202310210125.6A 2022-09-09 2023-03-07 Sample generation method, target detection model training, target detection method and system Pending CN116152606A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2022110991289 2022-09-09
CN202211099128.9A CN115496960A (en) 2022-09-09 2022-09-09 Sample generation method, target detection model training method, target detection method and system

Publications (1)

Publication Number Publication Date
CN116152606A true CN116152606A (en) 2023-05-23

Family

ID=84467545

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202211099128.9A Withdrawn CN115496960A (en) 2022-09-09 2022-09-09 Sample generation method, target detection model training method, target detection method and system
CN202310210125.6A Pending CN116152606A (en) 2022-09-09 2023-03-07 Sample generation method, target detection model training, target detection method and system


Country Status (1)

Country Link
CN (2) CN115496960A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117198551B (en) * 2023-11-08 2024-01-30 天津医科大学第二医院 Kidney function deterioration pre-judging system based on big data analysis

Also Published As

Publication number Publication date
CN115496960A (en) 2022-12-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination