CN111814867B - Training method of defect detection model, defect detection method and related device - Google Patents

Training method of defect detection model, defect detection method and related device

Info

Publication number
CN111814867B
CN111814867B
Authority
CN
China
Prior art keywords
detection
defect
training
frame
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010635033.9A
Other languages
Chinese (zh)
Other versions
CN111814867A (en)
Inventor
黄积晟
任宇鹏
卢维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202010635033.9A
Publication of CN111814867A
Application granted
Publication of CN111814867B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a training method for a defect detection model, a defect detection method, and a related device. The training method of the defect detection model comprises the following steps: obtaining a training image, wherein the training image is annotated with real information of a defect and a mask region, the mask region is the region formed by the pixel points representing the defect in the training image, and the real information comprises a real frame of the defect; detecting the training image with the defect detection model to obtain detection information of the defect, wherein the detection information comprises a final detection frame of the defect, and the defect detection model performs positive and negative sample classification on a plurality of initial detection frames of the training image using the mask region and determines the final detection frame based on the classification result; and adjusting network parameters of the defect detection model according to the difference between the real information and the detection information of the defect. In this way, the detection accuracy of the defect detection model can be improved.

Description

Training method of defect detection model, defect detection method and related device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a training method for a defect detection model, a defect detection method, and a related device.
Background
In manufacturing, many products develop defects during production due to human or non-human factors, and some of these defects can seriously affect product quality. For example, during the manufacture of aluminum-based materials, defects such as scuffing, bottom leakage, and dirty spots can arise from human or non-human factors.
At present, to ensure production quality, most products are spot-checked by human inspectors. Manual sampling, however, involves too many unreliable factors, is prone to missed and false detections, and makes product quality difficult to guarantee.
Disclosure of Invention
The application mainly addresses the technical problem of providing a training method for a defect detection model, a defect detection method, and a related device that can improve the detection accuracy of the defect detection model.
A technical solution adopted by the application is to provide a training method for a defect detection model, comprising: obtaining a training image, wherein the training image is annotated with real information of a defect and a mask region, the mask region is the region formed by the pixel points representing the defect in the training image, and the real information comprises a real frame of the defect; detecting the training image with the defect detection model to obtain detection information of the defect, wherein the detection information comprises a final detection frame of the defect, and the defect detection model performs positive and negative sample classification on a plurality of initial detection frames of the training image using the mask region and determines the final detection frame based on the classification result; and adjusting network parameters of the defect detection model according to the difference between the real information and the detection information of the defect.
The defect detection model comprises a feature extraction network, a region generation network, a feature aggregation layer and a classification layer; detecting the training image by using a defect detection model to obtain defect detection information, wherein the method comprises the following steps: inputting the training image into a feature extraction network to obtain a multi-dimensional feature map; inputting the multidimensional feature map into a region generation network to obtain a plurality of initial detection frames, classifying positive and negative samples of the initial detection frames by using a mask region, and obtaining a plurality of candidate frames based on classification results; inputting the multidimensional feature map and the candidate frames into a feature aggregation layer to obtain a target feature map corresponding to the candidate frames; and inputting the target feature images corresponding to the candidate frames into a classification layer to obtain the detection information of the defects in the training images.
Wherein, positive and negative sample classification is carried out on a plurality of initial detection frames by using a mask area, and the method comprises the following steps: acquiring a first pixel number occupied by a mask area in a real frame and a second pixel number occupied by the mask area in an initial detection frame; if the first ratio between the second pixel number and the first pixel number is smaller than the first reference threshold value, determining that the initial detection frame is a negative sample; and if the first ratio is not smaller than the first reference threshold, determining that the initial detection frame is a positive sample.
If the first ratio is not less than the first reference threshold, determining that the initial detection frame is a positive sample includes: if the first ratio is not smaller than the first reference threshold, obtaining a second ratio between the first pixel number and the third pixel number of the real frame, and obtaining a third ratio between the second pixel number and the fourth pixel number of the initial detection frame; and if the fourth ratio between the third ratio and the second ratio is greater than the second reference threshold, determining that the initial detection frame is a positive sample.
The detection information of the defect further comprises a detection category of the defect and a confidence of the detection category. Before classifying positive and negative samples of the plurality of initial detection frames using the mask region, the method further comprises: acquiring the intersection-over-union (IoU) between the final detection frame and the real frame in the previous training; performing a weighted summation of the IoU and the confidence from the previous training to obtain a first control value; and taking the product of the first control value and a preset parameter value as the first reference threshold for the current training.
The feature extraction network is a feature pyramid network (FPN), the region generation network is a region proposal network (RPN), and the feature aggregation layer is an RoI Align layer in which convolution processing is performed using deformable convolution.
Inputting the training image into the feature extraction network to obtain a multi-dimensional feature map comprises the following steps: sequentially performing N downsamplings of the training image with the feature extraction network, and acquiring the feature maps of the 2nd to the Nth downsamplings to obtain an (N-1)-dimensional initial feature map, wherein the convolution processing in the Nth downsampling uses dilated (hole) convolution and N > 2; and, for the (N-1)-dimensional initial feature map, performing the ith upsampling based on the (N-i)th-dimensional initial feature map to obtain the ith final feature map, wherein i is an integer from 1 to N-1. Obtaining a plurality of initial detection frames comprises: traversing the multi-dimensional feature map to obtain a plurality of initial detection frames corresponding to each pixel point.
The detection information of the defect further comprises a detection category of the defect and a confidence of the detection category. Inputting the target feature map corresponding to the candidate frame into the classification layer to obtain the detection information of the defect in the training image comprises: classifying the target feature maps corresponding to the candidate frames to obtain the detection category of each candidate frame and the confidence of the detection category; processing the confidence using soft non-maximum suppression to obtain a processed confidence; and taking the candidate frames whose processed confidence is higher than a preset confidence as final detection frames, and outputting the position information of the final detection frames together with their detection categories and the confidences of the detection categories.
The real information of the defect further comprises the real category of the defect, and the detection information of the defect further comprises the detection category of the defect and the confidence of the detection category. Adjusting the network parameters of the defect detection model according to the difference between the real information and the detection information of the defect comprises: acquiring the intersection-over-union between the final detection frame and the real frame in the previous training, and obtaining a second control value based on the IoU, wherein the second control value is positively correlated with the IoU; weighting the second control value and the confidence from the previous training to obtain a third control value; obtaining a first loss value from the difference between the final detection frames belonging to positive samples obtained in the current training and the real frame, and weighting the first loss value with the third control value to obtain a second loss value; obtaining a third loss value from the difference between the detection categories belonging to positive samples obtained in the current training and the real category, weighting the third loss value with the third control value to obtain a fourth loss value, and obtaining a fifth loss value from the difference between the detection categories belonging to negative samples obtained in the current training and the real category; and adjusting the network parameters of the defect detection model using the second loss value, the fourth loss value, and the fifth loss value.
Another technical solution adopted by the present application is to provide a defect detection method, which includes: acquiring an image to be processed; detecting an image to be processed by using a defect detection model to obtain detection information of a corresponding defect in the image to be processed, wherein the defect detection model is obtained by training the defect detection model training method in the scheme provided by the application.
Another technical solution adopted by the present application is to provide an image processing apparatus, which includes a processor and a memory coupled to the processor; the memory is used for storing program data, and the processor is used for executing the program data to realize the method in any scheme provided by the application.
Another aspect of the present application is to provide a computer readable storage medium storing program data, which when executed by a processor, is configured to implement the method according to any one of the aspects provided by the present application.
The beneficial effects of the application are as follows: in contrast with the prior art, during training of the defect detection model, positive and negative sample classification is performed on a plurality of initial detection frames of the training image using the mask region, and the final detection frame of the defect is determined based on the classification result, so that the network parameters of the defect detection model are adjusted according to the difference between the real information and the detection information of the defect. Because the mask region is the region formed by the pixel points representing the defect in the training image, the quality of positive and negative sample classification is improved, which in turn improves the detection accuracy of the defect detection model.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art. Wherein:
FIG. 1 is a flowchart of a first embodiment of a training method for a defect detection model according to the present application;
FIG. 2 is a schematic illustration of a training image provided by the present application;
FIG. 3 is a flowchart of a second embodiment of a training method for a defect detection model according to the present application;
FIG. 4 is a schematic diagram illustrating an embodiment of a feature extraction network according to the present application;
FIG. 5 is a schematic flow chart of step 33 in FIG. 3 according to the present application;
FIG. 6 is a schematic diagram of a specific flow chart of step 35 in FIG. 3 provided in the present application;
FIG. 7 is a flowchart of a third embodiment of a training method for a defect detection model according to the present application;
FIG. 8 is a flow chart illustrating an embodiment of a defect detection method according to the present application;
fig. 9 is a schematic view of an embodiment of an image processing apparatus according to the present application;
fig. 10 is a schematic structural diagram of an embodiment of a computer readable storage medium provided by the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
Referring to fig. 1, fig. 1 is a flowchart of a first embodiment of a training method for a defect detection model according to the present application, where the method includes:
Step 11: a training image is acquired.
It will be appreciated that the training images are acquired manually or automatically. The training images contain product defects, such as the scuffing, bottom leakage, dirty spots, bubbles, roller marks, and scratches generated during the manufacture of aluminum profiles. Plastic materials may exhibit defects such as deformation, breakage, bubbles, and scratches during manufacturing. Common defects in metal welding include undercut, weld flash, dents, and welding deformation, and sometimes surface pores and surface cracks; in single-sided welding, the root may be incompletely penetrated or contain pores and cracks. All of these can be presented in images for category differentiation.
In some embodiments, information corresponding to the defect in the training image, such as the defect type, is labeled by means of manual labeling.
In some embodiments, the training image is labeled with the real information of the defect and a mask region, the mask region being the region formed by the pixel points representing the defect in the training image, and the real information including a real frame of the defect. As shown in fig. 2, if a defect a exists in training image A, the number of pixels of defect a and the region formed by those pixels, such as the position of defect a relative to the training image, are obtained; defect a is framed by a real frame a1, and the defect category of defect a is recorded.
In some embodiments, after the training images are obtained, they are preprocessed with operations such as random rotation, mirror flipping, random blurring, random cropping, and brightness changes, yielding multiple training images per operation whose labels change correspondingly with the actual operation. This greatly expands the number of training images, so training can be completed without collecting additional related training images.
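As an illustrative sketch only (the disclosure itself provides no code), the following Python fragment shows how one such operation, a mirror flip, can update the image, the mask region, and the real frames together so the labels stay consistent; the array layouts and function name are assumptions:

```python
import numpy as np

def mirror_flip(image, mask, boxes):
    """Horizontally flip a training sample, keeping labels consistent.

    image: (H, W, 3) array; mask: (H, W) boolean defect-pixel map;
    boxes: (N, 4) array of real frames as (x1, y1, x2, y2).
    """
    h, w = mask.shape
    flipped_image = image[:, ::-1].copy()
    flipped_mask = mask[:, ::-1].copy()
    # x-coordinates are reflected about the image width; y is unchanged
    flipped_boxes = boxes.copy()
    flipped_boxes[:, 0] = w - boxes[:, 2]
    flipped_boxes[:, 2] = w - boxes[:, 0]
    return flipped_image, flipped_mask, flipped_boxes
```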
Step 12: and detecting the training image by using the defect detection model to obtain defect detection information.
In some embodiments, the detection information of the defect includes a final detection frame of the defect; the defect detection model classifies a plurality of initial detection frames of the training image into positive and negative samples using the mask region, and determines the final detection frame of the defect based on the classification result. For example, the number of mask-region pixels contained in an initial detection frame serves as the judgment basis: frames above a first set threshold are positive samples and frames below a second set threshold are negative samples, where the first set threshold is greater than the second set threshold.
In some embodiments, the defect detection model includes a feature extraction network for feature extraction of the input training image, such as using a convolutional neural network as the feature extraction network, through which a corresponding feature map is obtained. It will be appreciated that the feature map contains important information about the defect.
In some embodiments, networks such as FCN (Fully Convolutional Networks) or SegNet may be employed to build the model. For example, the input picture is trained with the encoder-decoder structure of SegNet to learn the distribution characteristics of the data. The encoder portion of SegNet uses the first 13 convolutional layers of VGG16. Each encoder layer corresponds to one decoder layer, and the output of each decoder is finally sent to the next layer for positive and negative sample classification, with the final detection frame of the defect determined based on the classification result.
Step 13: and adjusting network parameters of the defect detection model according to the difference between the real information and the detection information of the defect.
In some embodiments, the number of training iterations of the defect detection model can be adjusted according to the difference between the real information and the detection information of the defect, thereby adjusting the network parameters of the defect detection model. For example, if the real information is A but the detection information is B, the number of training iterations can be increased and the network parameters adjusted accordingly; likewise, if the detection information matches the real information but its confidence is lower than the set threshold, the number of training iterations is adjusted and the network parameters with it.
In some embodiments, network parameters of the defect detection model may be adjusted according to differences between real information and detection information of the defect, for example, if a convolutional neural network is present in the defect detection model, the number of convolutional kernels, step size, filling may be set, an excitation function may be adjusted, parameters of the pooling layer may be adjusted, and the like.
In some embodiments, the loss value may also be calculated by the data of the real information and the detection information of the defect, and if the loss value is different from the preset loss threshold, the network parameter of the defect detection model is adjusted.
In one application scenario, an image of an aluminum material annotated with the bottom-leakage defect category and a mask region is input into the defect detection model for training. First, the training image is preprocessed, for example with random rotation, mirror flipping, random blurring, random cropping, and brightness changes, to obtain multiple corresponding images, thereby adding training images for the bottom-leakage defect category. After the training images have been generated, a plurality of initial detection frames is obtained based on each pixel point in the training image; positive and negative sample classification is then performed on the initial detection frames using the mask region, the final detection frame of the defect is determined based on the classification result, and the detection information is obtained. The network parameters of the defect detection model are adjusted according to the difference between the real information and the detection information of the defect.
It will be appreciated that the initial detection frame is referenced to the pixel points and an area corresponding to the size of the initial detection frame is acquired on the training image.
In this embodiment, positive and negative sample classification is performed on a plurality of initial detection frames of the training image using the mask region, and the final detection frame of the defect is determined based on the classification result, so that the network parameters of the defect detection model are adjusted according to the difference between the real information and the detection information of the defect. Because the mask region is the region formed by the pixel points representing the defect in the training image, the quality of positive and negative sample classification is improved, thereby improving the accuracy of the defect detection model and hence its detection accuracy.
Referring to fig. 3, fig. 3 is a flowchart of a second embodiment of a training method of a defect detection model according to the present application, where the defect detection model includes a feature extraction network, a region generation network, a feature aggregation layer, and a classification layer. The method comprises the following steps:
Step 31: a training image is acquired.
Step 31 has the same or similar technical scheme as the above embodiment, and will not be described here again.
Step 32: and inputting the training image into a feature extraction network to obtain a multi-dimensional feature map.
Step 32 is described in conjunction with fig. 4:
the feature extraction network comprises a C1 layer, a C2 layer, a C3 layer, a C4 layer, a C5 layer, a P2 layer, a P3 layer, a P4 layer and a P5 layer. Wherein, the C1 layer, the C2 layer, the C3 layer, the C4 layer and the C5 layer are a down sampling process, and the P5 layer, the P4 layer, the P3 layer and the P2 layer are an up sampling process. The corresponding C1 layer, C2 layer, C3 layer, C4 layer, C5 layer each include a convolution layer, a pooling layer, and RELU layers.
After the training image is input, it is converted into the color-value channels corresponding to its type, such as grayscale or color. A color image, for example, has three RGB color-value channels representing red, green, and blue; the pixels in each channel can be represented by a two-dimensional array whose values are pixel values between 0 and 255. A 900 x 600 color picture can thus be represented in the computer as a (900 x 600 x 3) array matrix. After conversion, the training image is downsampled sequentially at the C1, C2, C3, C4, and C5 layers, and multiple feature maps are obtained at the C5 layer. Upsampling is then performed in the order P5, P4, P3, P2. Each of the P5, P4, P3, and P2 layers comprises an upsampling layer and a deconvolution layer: the upsampling layer enlarges the feature maps of the layer above, but the enlarged feature maps contain only the pooled data, so the weights at other positions are 0, and the missing content is filled in by the deconvolution layer.
There are correspondences among the layers: the C2 layer corresponds to the P2 layer, C3 to P3, C4 to P4, and C5 to P5. The pooling indices generated by the pooling layers in the C2, C3, C4, and C5 layers are passed to the upsampling layers in the corresponding P2, P3, P4, and P5 layers. In practice, when the pooling layers in the C2 through C5 layers generate feature maps, they also generate pooling indices, i.e., the positions of the retained elements within the feature map of the layer above. When the upsampling layers in the P2 through P5 layers enlarge a feature map, the elements of the feature map are placed at the corresponding positions in the enlarged feature map according to those pooling indices.
Corresponding multiple feature maps are generated in the P2 layer, the P3 layer, the P4 layer and the P5 layer, and the multiple feature maps of each of the P2 layer, the P3 layer, the P4 layer and the P5 layer are defined as a dimension feature map, so that the P2 layer, the P3 layer, the P4 layer and the P5 layer generate multidimensional feature maps.
Thus, the outputs of the C2, C3, C4, and C5 layers are denoted {C2, C3, C4, C5}, and the outputs of the P2, P3, P4, and P5 layers are denoted {P2, P3, P4, P5}. Because of its large memory footprint, the output of the C1 layer is not represented here. {P2, P3, P4, P5} is the multi-dimensional feature map obtained from the feature extraction network.
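As a rough sketch of such a down-then-up pathway, the following PyTorch fragment builds {P2, P3, P4, P5} from {C2, C3, C4, C5}. It uses a standard FPN-style top-down merge rather than the pooling-index mechanism described above, which is a simplifying assumption, and the channel counts are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFeatures(nn.Module):
    """Sketch of the C2-C5 -> P2-P5 top-down pathway (FPN-style)."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        # 1x1 lateral convolutions align the channel counts of C2..C5
        self.lateral = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels])
        # 3x3 convolutions smooth each merged map
        self.smooth = nn.ModuleList(
            [nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
             for _ in in_channels])

    def forward(self, c2, c3, c4, c5):
        laterals = [l(c) for l, c in zip(self.lateral, (c2, c3, c4, c5))]
        p5 = laterals[3]
        # enlarge the coarser map and add the lateral connection
        p4 = laterals[2] + F.interpolate(p5, size=laterals[2].shape[-2:], mode='nearest')
        p3 = laterals[1] + F.interpolate(p4, size=laterals[1].shape[-2:], mode='nearest')
        p2 = laterals[0] + F.interpolate(p3, size=laterals[0].shape[-2:], mode='nearest')
        return [s(p) for s, p in zip(self.smooth, (p2, p3, p4, p5))]
```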
Step 33: and inputting the multidimensional feature map into a region generation network to obtain a plurality of initial detection frames, classifying positive and negative samples of the initial detection frames by using a mask region, and obtaining a plurality of candidate frames based on classification results.
In some embodiments, the multi-dimensional feature map is input into the region generation network, where a 3x3 convolution is applied to it and the result is passed through a RELU layer to increase its nonlinearity. The resulting feature map is then traversed, and a plurality of initial detection frames is obtained for each pixel point.
For example, if 9 initial detection frames are set for each pixel point of each feature map in the multi-dimensional feature map, the number of initial detection frames is 9 times the number of pixel points. It can be understood that an initial detection frame takes a pixel point as its reference, and a region of the frame's size is taken on the feature map.
Positive and negative sample classification is then performed for the plurality of initial detection frames using the mask region. The specific reference to fig. 5 is as follows:
Step 331: the first number of pixels occupied by the mask area in the real frame is obtained, and the second number of pixels occupied by the mask area in the initial detection frame is obtained.
It will be appreciated that after the initial detection frame is generated, some pixels in the area where it is located may be defective pixels. Thus, the second number of pixels belonging to the mask area in the initial detection frame and the first number of pixels occupied by the mask area in the real frame are acquired. The real box is the area where the defect mask is located in the input training image.
Step 332: if the first ratio between the second pixel number and the first pixel number is smaller than the first reference threshold, determining that the initial detection frame is a negative sample.
It can be understood that if the first ratio between the second number of pixels and the first number of pixels is smaller than the first reference threshold, it is indicated that the number of defect masks in the initial detection frame corresponding to the second number of pixels is smaller, and meets the requirement of the negative sample.
Step 333: if the first ratio between the second pixel number and the first pixel number is not smaller than the first reference threshold, determining that the initial detection frame is a positive sample.
It can be understood that if the first ratio between the second number of pixels and the first number of pixels is not less than the first reference threshold, it is indicated that the number of defect masks in the initial detection frame corresponding to the second number of pixels is greater, and meets the requirement of the positive sample.
For example, let the first number of pixels occupied by the mask region in the real frame be M_g and the second number of pixels occupied by the mask region in the initial detection frame be M_r; the first ratio between the second number and the first number of pixels is then M_r/M_g. If M_r/M_g < t_1, the initial detection frame is determined to be a negative sample; otherwise, the initial detection frame is determined to be a positive sample. Here t_1 is a variable threshold.
Specifically, if the first ratio is not smaller than the first reference threshold, a second ratio between the first number of pixels and a third number of pixels of the real frame is obtained, and a third ratio between the second number of pixels and a fourth number of pixels of the initial detection frame is obtained. If the fourth ratio between the third ratio and the second ratio is greater than the second reference threshold, the initial detection frame is determined to be a positive sample.
For example, let the third number of pixels of the real frame be B_g; with the first number of pixels occupied by the mask region in the real frame being M_g, the second ratio is P_g = M_g / B_g. Let the fourth number of pixels of the initial detection frame be B_r; with the second number of pixels occupied by the mask region in the initial detection frame being M_r, the third ratio is P_r = M_r / B_r. If M_r/M_g >= t_1 and the fourth ratio between the third ratio and the second ratio is greater than the second reference threshold, i.e. P_r/P_g > t_2, the initial detection frame is determined to be a positive sample, where t_1 is the first reference threshold and t_2 is the second reference threshold. Detection frames that meet neither condition are not classified as positive or negative samples and do not participate in the subsequent flow. In this way, the amount of noise introduced when sampling positive and negative samples is reduced, improving the accuracy of the defect detection model's positive and negative sample classification.
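A minimal sketch of this mask-based classification rule, assuming axis-aligned boxes with integer pixel coordinates and a boolean defect mask; the function names and the 'ignore' label for frames meeting neither condition are illustrative:

```python
import numpy as np

def classify_initial_frame(mask, gt_box, det_box, t1, t2):
    """Classify one initial detection frame against the mask region.

    mask: (H, W) boolean array of defect pixels; boxes are (x1, y1, x2, y2).
    Returns 'positive', 'negative', or 'ignore'.
    """
    def mask_pixels_in(box):
        x1, y1, x2, y2 = box
        return int(mask[y1:y2, x1:x2].sum())

    def area(box):
        return max(box[2] - box[0], 1) * max(box[3] - box[1], 1)

    m_g = mask_pixels_in(gt_box)    # first pixel number
    m_r = mask_pixels_in(det_box)   # second pixel number
    if m_g == 0 or m_r / m_g < t1:  # first ratio below first reference threshold
        return 'negative'
    p_g = m_g / area(gt_box)        # second ratio
    p_r = m_r / area(det_box)       # third ratio
    if p_r / p_g > t2:              # fourth ratio above second reference threshold
        return 'positive'
    return 'ignore'                 # excluded from the subsequent flow
```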
Step 34: and inputting the multidimensional feature map and the candidate frames into a feature aggregation layer to obtain a target feature map corresponding to the candidate frames.
It is understood that the candidate boxes are those after the positive and negative samples have been classified correspondingly in the above steps. In some embodiments, due to the large number, a portion of the candidate boxes for positive and negative samples may be selected for input to the feature aggregation layer.
In some embodiments, the feature aggregation layer is an ROI Pooling layer. Mapping is performed in the multi-dimensional feature map according to the position coordinates of the candidate frames, the position corresponding to each candidate frame is obtained on each feature map, and that region is then pooled and resized into a target feature map of fixed size for subsequent operations. First, the coordinates of the candidate frames are scaled by the ratio between the input image and each feature map in the multi-dimensional feature map, giving the corresponding coordinates of the candidate frames in the feature map and hence the region within it. The region is divided into a grid, and max pooling or average pooling is applied to each grid cell. After this processing, feature maps of different sizes all produce outputs of a fixed size, realizing fixed-length output.
In some embodiments, the feature aggregation layer is an RoI Align layer, in which convolution processing is performed using deformable convolution. As above, mapping is performed in the multi-dimensional feature map according to the position coordinates of the candidate frames, the position corresponding to each candidate frame is obtained on each feature map, the candidate-frame coordinates are scaled by the ratio between the input image and each feature map, the resulting region is divided into a grid, and max or average pooling is applied to each grid cell to yield a fixed-size target feature map, so that feature maps of different sizes all produce fixed-length outputs. Unlike ROI Pooling, the RoI Align layer retains floating-point coordinates when sizes change during the pooling operation, which improves detection precision for small targets and reduces precision error when regressing frames.
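As a hedged illustration, the standard torchvision RoI Align operator realizes the mapping-and-pooling step described above; the feature level, the 1/16 scale, and the 7x7 output size are assumptions, and the deformable convolution mentioned in the text would be an additional component not shown here:

```python
import torch
from torchvision.ops import roi_align

# One level of the multi-dimensional feature map: (batch, channels, H, W)
feature_map = torch.randn(1, 256, 64, 64)
# Candidate frames as (batch_index, x1, y1, x2, y2) in input-image coordinates
candidates = torch.tensor([[0.0, 32.0, 48.0, 160.0, 112.0]])
# spatial_scale maps image coordinates onto this feature level; bilinear
# sampling keeps floating-point coordinates instead of quantizing them
target_features = roi_align(feature_map, candidates, output_size=(7, 7),
                            spatial_scale=1.0 / 16, sampling_ratio=2)
print(target_features.shape)  # torch.Size([1, 256, 7, 7])
```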
Step 35: and inputting the target feature images corresponding to the candidate frames into a classification layer to obtain the detection information of the defects in the training images.
Specifically, description is made with reference to fig. 6:
step 351: and classifying the target feature images corresponding to the candidate frames to obtain the detection categories of the candidate frames and the confidence degrees of the detection categories.
In some embodiments, the classification layer converts the target feature map into a 1 x 1 x n vector for classification, so as to obtain the detection category of the candidate frame and the confidence of the detection category.
It will be appreciated that since the detection categories of defects are numerous, there is a corresponding confidence in the candidate box for each detection category.
Step 352: the confidence is processed using soft non-maxima to obtain a processed confidence.
In some embodiments, this is accomplished using the following:
First, for a given detection category, the confidences obtained in step 351 are sorted, e.g., from largest to smallest. The first candidate frame is then selected, and the overlap ratios of the 2nd through the last candidate frames with respect to it are computed in turn. If the overlap ratio is less than a first threshold, the corresponding confidence does not change; if the overlap ratio is greater than the first threshold, the confidence is updated using the following formula:
s = s_i * (1 - IoU(M, b))
where s represents the updated confidence, s_i represents the current confidence, M represents the candidate frame with the highest confidence during non-maximum suppression, b is the candidate frame whose overlap ratio with M is being compared, and IoU(M, b) is that overlap ratio. By using soft non-maximum suppression, valid detection frames are less likely to be filtered out, improving the detection rate.
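A self-contained sketch of this linear soft-NMS update; the thresholds and the IoU helper are illustrative assumptions:

```python
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(ix2 - ix1, 0) * max(iy2 - iy1, 0)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def soft_nms(boxes, scores, iou_thresh=0.5, score_thresh=0.001):
    """Linear soft-NMS: decay, rather than discard, overlapping confidences.

    boxes: (N, 4) array of (x1, y1, x2, y2); scores: (N,) confidences.
    Returns indices of boxes whose decayed score stays above score_thresh.
    """
    scores = scores.astype(float).copy()
    keep, remaining = [], list(range(len(scores)))
    while remaining:
        # M: remaining candidate with the highest current confidence
        m = max(remaining, key=lambda i: scores[i])
        remaining.remove(m)
        if scores[m] < score_thresh:
            break
        keep.append(m)
        for i in remaining:
            iou = box_iou(boxes[m], boxes[i])
            if iou > iou_thresh:
                scores[i] *= 1.0 - iou  # s_i <- s_i * (1 - IoU(M, b_i))
    return keep
```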
Step 353: and taking the candidate frame with the processed confidence coefficient higher than the preset confidence coefficient as a final detection frame, and outputting the position information representing the final detection frame and the detection type of the final detection frame and the confidence coefficient of the detection type.
In some embodiments, after the final detection frame is determined, the final detection frame is regressed into the training image by means of coordinate regression, so as to obtain corresponding position information.
In combination with the above, the detection information of the defect includes a detection category of the defect and a confidence of the detection category.
In this way, using soft non-maximum suppression effectively avoids filtering out valid candidate frames, improving the detection rate.
Step 36: and adjusting network parameters of the defect detection model according to the difference between the real information and the detection information of the defect.
By the mode, the mask area is used, the introduction of a noisy anchor frame can be reduced, and the classification accuracy of positive and negative samples is improved.
Referring to fig. 7, fig. 7 is a flowchart illustrating a third embodiment of a training method of a defect detection model according to the present application. The defect detection model comprises a feature extraction network, a region generation network, a feature aggregation layer and a classification layer. The method comprises the following steps:
step 701: a training image is acquired.
Step 702: and inputting the training image into a feature extraction network to obtain a multi-dimensional feature map.
In some embodiments, the feature extraction network is an FPN (Feature Pyramid Network), which addresses the multi-scale problem in object detection: through simple changes to the network connections, it greatly improves small-object detection with essentially no increase in the original model's computation.
For example, the training image is sequentially downsampled N times by the feature extraction network, and the feature maps of the 2nd through Nth downsamplings are acquired to obtain an (N-1)-dimensional initial feature map, where the convolution processing in the Nth downsampling uses dilated (hole) convolution and N > 2. For the (N-1)-dimensional initial feature map, the ith upsampling is performed based on the (N-i)th-dimensional initial feature map to obtain the ith final feature map, where i is an integer from 1 to N-1.
Step 703: and inputting the multidimensional feature map into a region generation network to obtain a plurality of initial detection frames, classifying positive and negative samples of the initial detection frames by using a mask region, and obtaining a plurality of candidate frames based on classification results.
In some embodiments, the multi-dimensional feature map is traversed to obtain a plurality of initial detection boxes for each pixel. It can be understood that the area division is performed according to a plurality of initial detection frames corresponding to each pixel point, and the defect areas are included in the areas.
In some embodiments, the region generation network is a region proposal network (RPN). For example, one of the feature maps has size n x 16 x 16; after entering the region generation network, a 3x3 convolution first produces a 256 x 16 x 16 feature map, and two 1x1 convolutions then produce an 18 x 16 x 16 feature map and a 36 x 16 x 16 feature map respectively. The 18 x 16 x 16 feature map scores the plurality of initial detection frames; positive and negative sample classification is performed on them using the mask region, and a plurality of candidate frames is obtained based on the classification result. The 36 x 16 x 16 feature map is used to compute bounding-box regression offsets for the initial detection frames to obtain accurate frame regions. Finally, the classified candidate frames are combined with the offsets to obtain a set of more accurate candidate frames, while initial candidate frames that are too small or exceed the boundary are eliminated.
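A sketch of such an RPN head in PyTorch, assuming the 9-anchor setup above (2 scores and 4 offsets per anchor give the 18- and 36-channel maps); the class name and channel counts are illustrative:

```python
import torch
import torch.nn as nn

class RegionGenerationHead(nn.Module):
    """Sketch of the region generation network head described above."""
    def __init__(self, in_channels=256, num_anchors=9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 256, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        # 2 scores per anchor -> 18 channels for a 9-anchor setup
        self.cls_head = nn.Conv2d(256, num_anchors * 2, kernel_size=1)
        # 4 regression offsets per anchor -> 36 channels
        self.reg_head = nn.Conv2d(256, num_anchors * 4, kernel_size=1)

    def forward(self, x):
        t = self.relu(self.conv(x))
        return self.cls_head(t), self.reg_head(t)

# A 256-channel 16x16 feature map yields 18x16x16 scores and 36x16x16 offsets
head = RegionGenerationHead()
scores, offsets = head(torch.randn(1, 256, 16, 16))
print(scores.shape, offsets.shape)  # [1, 18, 16, 16] [1, 36, 16, 16]
```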
In some embodiments, before positive and negative sample classification is performed on the initial detection frames, the intersection-over-union between the final detection frame and the real frame in the previous training is acquired; the IoU and the confidence from the previous training are weighted and summed to obtain a first control value; and the product of the first control value and a preset parameter value is taken as the first reference threshold for the current training.
Specifically, this is expressed by the following formulas:
t_1 = β·c
c = α·loc_a + (1 − α)·cls_c
where t_1 represents the first reference threshold, β represents a preset parameter, c represents the first control value, loc_a represents the intersection-over-union between the final detection frame and the real frame in the previous training, cls_c represents the confidence of the final detection frame in the previous training, and α represents a second preset parameter. With each round of training of the defect detection model, the confidence of the final detection frame rises correspondingly and the IoU between the final detection frame and the real frame improves, so the first reference threshold rises over the course of cyclic training, raising the standard for positive and negative sample classification and improving the precision of the whole defect detection model.
By screening positive and negative samples with this variable first reference threshold, excessive generation of one type of sample data during training is effectively avoided, preventing positive and negative sample imbalance.
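A one-function sketch of this variable threshold; the example parameter values are assumptions:

```python
def first_reference_threshold(loc_a, cls_c, alpha, beta):
    """Variable first reference threshold t_1 = beta * c, with
    c = alpha * loc_a + (1 - alpha) * cls_c.

    loc_a: IoU between final detection frame and real frame (previous training);
    cls_c: confidence of the final detection frame (previous training);
    alpha, beta: preset parameters.
    """
    c = alpha * loc_a + (1.0 - alpha) * cls_c  # first control value
    return beta * c

# As loc_a and cls_c rise over training rounds, t_1 rises with them,
# tightening the positive/negative classification standard.
print(first_reference_threshold(loc_a=0.6, cls_c=0.8, alpha=0.5, beta=0.9))
```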
Step 704: and inputting the multidimensional feature map and the candidate frames into a feature aggregation layer to obtain a target feature map corresponding to the candidate frames.
In some embodiments, the feature aggregation layer is an RoI Align layer in which convolution processing is performed using deformable convolution. This improves detection precision for small targets and reduces precision error when regressing frames.
Step 705: and inputting the target feature images corresponding to the candidate frames into a classification layer to obtain the detection information of the defects in the training images.
Step 706: and acquiring the cross-over ratio between the final detection frame and the real frame in the last training, and obtaining a second control value based on the cross-over ratio, wherein the second control value and the cross-over ratio are positively correlated.
In some embodiments, the second control value is computed as f(x), where x represents the intersection-over-union between the final detection frame and the real frame in the previous training, and f is a function positively correlated with x.
Step 707: and carrying out weighting treatment on the second control value and the confidence coefficient in the last training to obtain a third control value.
In some embodiments, the following formula is employed:
r = (α·f(loc_a) + (1 − α)·f(cls_c))^γ
where r represents the third control value, loc_a represents the intersection-over-union between the final detection frame and the real frame in the previous training, γ represents a preset coefficient, α represents a preset parameter, cls_c represents the confidence of the final detection frame in the previous training, and f is the mapping used for the second control value.
Step 708: and obtaining a first loss value by utilizing the difference between the final detection frame belonging to the positive sample and the real frame obtained in the training, and weighting the first loss value by utilizing a third control value to obtain a second loss value.
In some embodiments, the operation of step 708 can be expressed as:
L_box = r · Σ_{i ∈ pos} smooth_L1_i
where L_box denotes the second loss value, r denotes the third control value, i indexes the candidate frames of the positive samples input to the feature aggregation layer, pos denotes the positive samples, and smooth_L1 denotes the smoothed L1 loss function between the final detection frame and the real frame, i.e., the first loss value.
Step 709: and obtaining a third loss value by utilizing the difference between the detection category belonging to the positive sample and the real category obtained by the training, weighting the third loss value by utilizing a third control value to obtain a fourth loss value, and obtaining a fifth loss value by utilizing the difference between the detection category belonging to the negative sample and the real category obtained by the training.
In some embodiments, the operation of step 709 can be expressed as:
L_cls = L_cls^pos + L_cls^neg, with L_cls^pos = r · Σ_{i ∈ pos} BCE_i and L_cls^neg = Σ_{i ∈ neg} BCE_i
where L_cls represents the sum of the fourth and fifth loss values, L_cls^pos represents the fourth loss value, L_cls^neg represents the fifth loss value, r represents the third control value, i indexes the candidate frames of the positive or negative samples input to the feature aggregation layer, pos denotes the positive samples, neg denotes the negative samples, and BCE denotes binary cross entropy.
It is understood that BCE yields the third loss value when computing the difference between a detection category belonging to a positive sample and the real category.
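A hedged PyTorch sketch of the weighting in steps 706 through 710; the exact form of f is not given in the text, so it is passed in as a parameter, and the tensor shapes and reductions are assumptions:

```python
import torch.nn.functional as F

def third_control_value(loc_a, cls_c, f, alpha, gamma):
    """r = (alpha * f(loc_a) + (1 - alpha) * f(cls_c)) ** gamma.

    f is the positively correlated mapping of step 706; its exact form is
    not recoverable from this text, so it is supplied by the caller.
    """
    return (alpha * f(loc_a) + (1.0 - alpha) * f(cls_c)) ** gamma

def detection_losses(r, pos_box_pred, pos_box_gt,
                     pos_cls_pred, pos_cls_gt, neg_cls_pred, neg_cls_gt):
    """Combine the weighted box and category losses of steps 708-710.

    Class predictions are assumed to be probabilities in [0, 1].
    """
    # First loss value: smoothed L1 over positive-sample boxes
    first_loss = F.smooth_l1_loss(pos_box_pred, pos_box_gt, reduction='sum')
    second_loss = r * first_loss            # weighted by the third control value
    # Third loss value: BCE over positive-sample categories
    third_loss = F.binary_cross_entropy(pos_cls_pred, pos_cls_gt, reduction='sum')
    fourth_loss = r * third_loss
    # Fifth loss value: BCE over negative-sample categories (unweighted)
    fifth_loss = F.binary_cross_entropy(neg_cls_pred, neg_cls_gt, reduction='sum')
    return second_loss + fourth_loss + fifth_loss  # total used in step 710
```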
Step 710: and adjusting network parameters of the defect detection model by using the second loss value, the fourth loss value and the fifth loss value.
In this way, the loss values are computed using the third control value, which is positively correlated with the second control value, so the effective utilization of the training image's features improves as the second control value grows. Corresponding calculations are performed for different training images, improving the defect detection model's recognition accuracy for different detection categories.
Referring to fig. 8, fig. 8 is a flowchart illustrating an embodiment of a defect detection method according to the present application. The method comprises the following steps:
Step 81: and acquiring an image to be processed.
In some implementations, the image to be processed may be a color image or a black-and-white image.
Step 82: and detecting the image to be processed by using the defect detection model to obtain detection information of the corresponding defects in the image to be processed.
The defect detection model is obtained by training by the training method of any embodiment.
It can be appreciated that the defect detection method provided by the embodiment can realize detection of the surface defects of the product based on the defect detection model, so that the quality inspection efficiency of the product is improved.
In this embodiment, the defect detection model trained according to the above embodiment is used to perform defect detection processing, so that defects on the surface of a product can be effectively distinguished, the production process of the product is improved, and the production efficiency is improved.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of an image processing apparatus provided by the present application, and the image processing apparatus 90 includes a processor 91 and a memory 92 connected to the processor 91; the memory 92 is used for storing program data, and the processor 91 is used for executing the program data, so as to implement the following method:
obtaining a training image, wherein the training image is annotated with real information of a defect and a mask region, the mask region is the region formed by the pixel points representing the defect in the training image, and the real information comprises a real frame of the defect; detecting the training image with the defect detection model to obtain detection information of the defect, wherein the detection information comprises a final detection frame of the defect, and the defect detection model performs positive and negative sample classification on a plurality of initial detection frames of the training image using the mask region and determines the final detection frame based on the classification result; and adjusting network parameters of the defect detection model according to the difference between the real information and the detection information of the defect.
Or, acquiring an image to be processed; and detecting the image to be processed by using the defect detection model to obtain detection information of the corresponding defects in the image to be processed.
It will be appreciated that, when the processor 91 is configured to execute program data, it is also configured to implement any of the methods of the above embodiments, and specific implementation steps thereof may refer to the above embodiments, which are not repeated herein.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of a computer readable storage medium provided by the present application, where the computer readable storage medium 100 is used to store program data 101, and the program data 101, when executed by a processor, is used to implement the following method steps:
obtaining a training image, wherein the training image is annotated with real information of a defect and a mask region, the mask region is the region formed by the pixel points representing the defect in the training image, and the real information comprises a real frame of the defect; detecting the training image with the defect detection model to obtain detection information of the defect, wherein the detection information comprises a final detection frame of the defect, and the defect detection model performs positive and negative sample classification on a plurality of initial detection frames of the training image using the mask region and determines the final detection frame based on the classification result; and adjusting network parameters of the defect detection model according to the difference between the real information and the detection information of the defect.
Or, acquiring an image to be processed; and detecting the image to be processed by using the defect detection model to obtain detection information of the corresponding defects in the image to be processed.
It will be appreciated that the program data 101, when executed by a processor, may be used to implement any of the methods of the above embodiments, and specific implementation steps thereof may refer to the above embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The foregoing description covers only embodiments of the present application and is not intended to limit its scope; any equivalent structure or equivalent process transformation made using the description and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present application.

Claims (10)

1. A method of training a defect detection model, the defect detection model comprising a feature extraction network, a region generation network, a feature aggregation layer, and a classification layer, the method comprising:
acquiring a training image, wherein the training image is annotated with real information of a defect and a mask region, the mask region is a region formed by pixel points representing the defect in the training image, and the real information comprises a real frame of the defect;
inputting the training image into the feature extraction network to obtain a multi-dimensional feature map;
inputting the multi-dimensional feature map into the region generation network to obtain a plurality of initial detection frames, wherein classifying the plurality of initial detection frames into positive and negative samples using the mask region comprises: acquiring a first pixel number occupied by the mask region in the real frame and a second pixel number occupied by the mask region in the initial detection frame; if a first ratio of the second pixel number to the first pixel number is smaller than a first reference threshold, determining that the initial detection frame is a negative sample; and if the first ratio is not smaller than the first reference threshold, determining that the initial detection frame is a positive sample;
obtaining a plurality of candidate frames based on the classification results of the plurality of initial detection frames;
inputting the multi-dimensional feature map and the candidate frames into the feature aggregation layer to obtain a target feature map corresponding to each candidate frame;
inputting the target feature map corresponding to the candidate frame into the classification layer to obtain detection information of the defect in the training image, wherein the detection information of the defect comprises a final detection frame of the defect;
and adjusting network parameters of the defect detection model according to the difference between the real information of the defect and the detection information.
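
For illustration only, the following is a minimal NumPy sketch of the positive/negative sample classification recited in claim 1. The function names, the boolean-mask representation of the mask region, and the (x1, y1, x2, y2) box convention are assumptions of this sketch, not part of the claims.

```python
import numpy as np

def mask_pixels_in(mask, box):
    """Count mask (defect) pixels falling inside an axis-aligned box."""
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    return int(mask[y1:y2, x1:x2].sum())

def classify_initial_frame(mask, real_frame, initial_frame, first_ref_threshold):
    """Classify one initial detection frame as positive or negative.

    mask: HxW boolean array, True where a pixel belongs to the defect.
    real_frame, initial_frame: (x1, y1, x2, y2) boxes in pixel coordinates.
    """
    first_pixel_number = mask_pixels_in(mask, real_frame)      # mask pixels in the real frame
    second_pixel_number = mask_pixels_in(mask, initial_frame)  # mask pixels in the initial frame
    if first_pixel_number == 0:
        return "negative"  # degenerate annotation: no defect pixels in the real frame
    first_ratio = second_pixel_number / first_pixel_number
    return "negative" if first_ratio < first_ref_threshold else "positive"
```

Unlike a plain IoU test, this criterion measures how much of the annotated defect itself the initial frame captures, which matters for thin or irregular defects that occupy a small fraction of their bounding box.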
2. The training method of claim 1, wherein, if the first ratio is not smaller than the first reference threshold, determining that the initial detection frame is a positive sample comprises:
if the first ratio is not smaller than the first reference threshold, obtaining a second ratio of the first pixel number to a third pixel number of pixels in the real frame, and a third ratio of the second pixel number to a fourth pixel number of pixels in the initial detection frame;
and if a fourth ratio of the third ratio to the second ratio is greater than a second reference threshold, determining that the initial detection frame is a positive sample.
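
Continuing the sketch above, claim 2's secondary check compares the defect-pixel density of the initial frame against that of the real frame; the helper `box_area` and all names remain illustrative assumptions.

```python
def box_area(box):
    """Total pixel count of an axis-aligned (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def refine_positive(mask, real_frame, initial_frame, second_ref_threshold):
    """Secondary positive check of claim 2, applied when the first ratio
    is not smaller than the first reference threshold."""
    first_pixel_number = mask_pixels_in(mask, real_frame)
    second_pixel_number = mask_pixels_in(mask, initial_frame)
    third_pixel_number = box_area(real_frame)      # pixel count of the real frame
    fourth_pixel_number = box_area(initial_frame)  # pixel count of the initial frame
    second_ratio = first_pixel_number / third_pixel_number   # defect density, real frame
    third_ratio = second_pixel_number / fourth_pixel_number  # defect density, initial frame
    fourth_ratio = third_ratio / second_ratio
    return fourth_ratio > second_ref_threshold     # True -> positive sample
```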
3. The training method of claim 1, wherein the detection information of the defect further comprises a detection category of the defect and a confidence of the detection category;
before classifying the plurality of initial detection frames into positive and negative samples using the mask region, the method further comprises:
acquiring the intersection-over-union (IoU) between the final detection frame and the real frame in the previous training round;
performing a weighted summation of the intersection-over-union and the confidence from the previous training round to obtain a first control value;
and taking the product of the first control value and a preset parameter value as the first reference threshold for the current training round.
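
A one-function sketch of the adaptive threshold in claim 3 follows. The equal weights are an assumption of the sketch, since the claim only requires some weighted summation.

```python
def first_reference_threshold(iou_last, confidence_last, preset_value,
                              w_iou=0.5, w_conf=0.5):
    """Derive the current round's first reference threshold from the
    previous round's IoU and confidence (claim 3)."""
    first_control_value = w_iou * iou_last + w_conf * confidence_last
    return first_control_value * preset_value
```

The effect is a curriculum: early in training, low IoU and low confidence keep the threshold lenient so enough positives exist; as predictions improve, the threshold tightens.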
4. The method of claim 1, wherein the feature extraction network is a feature pyramid network (FPN), the region generation network is a region proposal network (RPN), and the feature aggregation layer is an ROI Align layer, wherein convolution processing in the ROI Align layer is performed using deformable convolution.
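
If claim 4's feature aggregation layer were built from torchvision primitives, it might look like the hedged sketch below. The channel width, pooled size, and the small offset-prediction head are assumptions; `roi_align` and `DeformConv2d` stand in for the patent's ROI Align layer and deformable ("variable") convolution.

```python
import torch
from torchvision.ops import DeformConv2d, roi_align

channels, pooled_size = 256, 7
# offset head: 2 offsets (x, y) per position of a 3x3 deformable kernel
offset_head = torch.nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
deform_conv = DeformConv2d(channels, channels, kernel_size=3, padding=1)

feature_map = torch.randn(1, channels, 50, 50)          # one FPN level (illustrative)
boxes = torch.tensor([[0.0, 10.0, 10.0, 30.0, 30.0]])   # (batch_idx, x1, y1, x2, y2)

pooled = roi_align(feature_map, boxes, output_size=pooled_size, spatial_scale=1.0)
target_feature = deform_conv(pooled, offset_head(pooled))  # (1, 256, 7, 7)
```

Deformable convolution lets the sampling grid bend toward irregular defect shapes instead of sampling a rigid square, which is why it is a natural fit inside the aggregation layer.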
5. The method of claim 1, wherein
inputting the training image into the feature extraction network to obtain the multi-dimensional feature map comprises:
sequentially performing N downsampling operations on the training image using the feature extraction network, and acquiring the feature maps of the 2nd through N-th downsampling operations to obtain N-1 initial feature maps, wherein dilated (hole) convolution is used for convolution processing in the N-th downsampling operation, and N > 2;
for the N-1 initial feature maps, performing the i-th upsampling based on the (N-i)-th initial feature map to obtain the i-th final feature map, wherein i is an integer from 1 to N-1;
and obtaining the plurality of initial detection frames comprises:
traversing the multi-dimensional feature map to obtain a plurality of initial detection frames corresponding to each pixel point.
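
A PyTorch sketch of the feature extraction in claim 5: N stride-2 downsampling stages with a dilated ("hole") convolution in the N-th stage, and a top-down path in which the i-th upsampling fuses the (N-i)-th initial feature map. N, the channel widths, and the nearest-neighbour fusion are assumptions of this sketch, not the patent's exact network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedPyramidBackbone(nn.Module):
    """Illustrative backbone for claim 5."""

    def __init__(self, n_stages=4, channels=64):
        super().__init__()
        assert n_stages > 2  # claim 5 requires N > 2
        self.stages = nn.ModuleList()
        in_ch = 3
        for i in range(n_stages):
            dilation = 2 if i == n_stages - 1 else 1  # hole convolution at stage N
            self.stages.append(nn.Conv2d(in_ch, channels, 3, stride=2,
                                         padding=dilation, dilation=dilation))
            in_ch = channels
        self.smooth = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        # keep the feature maps of the 2nd..N-th downsampling: N-1 initial maps
        initial = []
        for i, stage in enumerate(self.stages):
            x = F.relu(stage(x))
            if i >= 1:
                initial.append(x)
        # i-th upsampling fuses the (N-i)-th initial map (top-down), i = 1..N-1
        finals = [self.smooth(initial[-1])]
        top = initial[-1]
        for lateral in reversed(initial[:-1]):
            top = F.interpolate(top, size=lateral.shape[-2:], mode="nearest") + lateral
            finals.append(self.smooth(top))
        return finals  # N-1 final feature maps, deepest first

feature_maps = DilatedPyramidBackbone()(torch.randn(1, 3, 256, 256))
```

The per-pixel traversal in the last step of claim 5 then corresponds to placing a fixed set of anchor frames at every spatial position of each final feature map, in the usual RPN fashion.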
6. The method of claim 1, wherein the detection information of the defect further comprises a detection category of the defect and a confidence of the detection category;
and inputting the target feature map corresponding to the candidate frame into the classification layer to obtain the detection information of the defect in the training image comprises:
classifying the target feature maps corresponding to the candidate frames to obtain the detection category of each candidate frame and the confidence of the detection category;
processing the confidence using soft non-maximum suppression (soft-NMS) to obtain a processed confidence;
and taking the candidate frames whose processed confidence is higher than a preset confidence as final detection frames, and outputting the position information representing each final detection frame together with its detection category and the confidence of the detection category.
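
Claim 6's confidence processing is a soft-NMS step: overlapping candidate frames have their confidence decayed rather than being discarded outright. Below is a Gaussian-decay sketch (linear decay is equally admissible); `sigma` and the pruning threshold are illustrative defaults.

```python
import numpy as np

def iou_one_to_many(box, boxes):
    """IoU between one (x1, y1, x2, y2) box and an (M, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, prune_below=1e-3):
    """Decay, rather than discard, the confidence of overlapping candidates."""
    boxes = boxes.astype(float).copy()
    scores = scores.astype(float).copy()
    kept_boxes, kept_scores = [], []
    while scores.size > 0:
        i = int(np.argmax(scores))
        kept_boxes.append(boxes[i]); kept_scores.append(scores[i])
        decay = np.exp(-(iou_one_to_many(boxes[i], boxes) ** 2) / sigma)
        scores *= decay
        keep = scores > prune_below
        keep[i] = False  # the selected frame has been emitted
        boxes, scores = boxes[keep], scores[keep]
    return np.array(kept_boxes), np.array(kept_scores)
```

Frames whose processed confidence remains above the preset confidence become final detection frames, per the last step of claim 6; soft-NMS helps retain clustered defects that hard NMS would suppress.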
7. The method of claim 1, wherein the real information of the defect further comprises a real category of the defect, and the detection information of the defect further comprises a detection category of the defect and a confidence of the detection category;
and adjusting the network parameters of the defect detection model according to the difference between the real information and the detection information of the defect comprises:
acquiring the intersection-over-union between the final detection frame and the real frame in the previous training round, and obtaining a second control value based on the intersection-over-union, wherein the second control value is positively correlated with the intersection-over-union;
weighting the second control value and the confidence from the previous training round to obtain a third control value;
obtaining a first loss value from the difference between the final detection frame belonging to the positive samples obtained in the current training round and the real frame, and weighting the first loss value with the third control value to obtain a second loss value;
obtaining a third loss value from the difference between the detection category belonging to the positive samples obtained in the current training round and the real category, weighting the third loss value with the third control value to obtain a fourth loss value, and obtaining a fifth loss value from the difference between the detection category belonging to the negative samples obtained in the current training round and the real category;
and adjusting the network parameters of the defect detection model using the second loss value, the fourth loss value, and the fifth loss value.
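
A sketch of the loss assembly in claim 7. The identity mapping from IoU to the second control value and the equal weights are assumptions of the sketch: the claim only requires a positive correlation and a weighting, respectively.

```python
def assemble_loss(iou_last, confidence_last, box_loss_pos,
                  cls_loss_pos, cls_loss_neg, w_ctrl=0.5, w_conf=0.5):
    """Combine the loss terms of claim 7 into one training objective."""
    second_control_value = iou_last  # any positively correlated mapping qualifies
    third_control_value = w_ctrl * second_control_value + w_conf * confidence_last
    second_loss = third_control_value * box_loss_pos   # weighted regression loss
    fourth_loss = third_control_value * cls_loss_pos   # weighted positive-class loss
    fifth_loss = cls_loss_neg                          # negative-class loss, unweighted
    return second_loss + fourth_loss + fifth_loss
```

Because the third control value grows as the previous round's IoU and confidence grow, well-localized, confidently classified positives contribute more gradient, while the negative-class term is left unscaled.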
8. A method of defect detection, the method comprising:
acquiring an image to be processed;
detecting the image to be processed with a defect detection model to obtain detection information of the corresponding defect in the image to be processed, wherein the defect detection model is trained by the training method of any one of claims 1 to 7.
9. An image processing apparatus comprising a processor and a memory coupled to the processor;
wherein the memory is configured to store program data, and the processor is configured to execute the program data to implement the method of any one of claims 1 to 7 or the method of claim 8.
10. A computer-readable storage medium configured to store program data which, when executed by a processor, implements the method of claim 8 or the method of any one of claims 1 to 7.
CN202010635033.9A 2020-07-03 2020-07-03 Training method of defect detection model, defect detection method and related device Active CN111814867B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010635033.9A CN111814867B (en) 2020-07-03 2020-07-03 Training method of defect detection model, defect detection method and related device

Publications (2)

Publication Number Publication Date
CN111814867A CN111814867A (en) 2020-10-23
CN111814867B true CN111814867B (en) 2024-06-18

Family

ID=72855535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010635033.9A Active CN111814867B (en) 2020-07-03 2020-07-03 Training method of defect detection model, defect detection method and related device

Country Status (1)

Country Link
CN (1) CN111814867B (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348787B (en) * 2020-11-03 2024-07-23 中科创达软件股份有限公司 Training method of object defect detection model, object defect detection method and device
CN113052789A (en) * 2020-11-03 2021-06-29 哈尔滨市科佳通用机电股份有限公司 Vehicle bottom plate foreign body hitting fault detection method based on deep learning
CN112634209A (en) * 2020-12-09 2021-04-09 歌尔股份有限公司 Product defect detection method and device
CN112233119A (en) * 2020-12-16 2021-01-15 常州微亿智造科技有限公司 Workpiece defect quality inspection method, device and system
CN112633496B (en) * 2020-12-18 2023-08-08 杭州海康威视数字技术股份有限公司 Processing method and device for detection model
CN112634254A (en) * 2020-12-29 2021-04-09 北京市商汤科技开发有限公司 Insulator defect detection method and related device
CN112712088B (en) * 2020-12-31 2023-02-14 洛阳语音云创新研究院 Animal fat condition detection method and device and computer readable storage medium
CN112766110A (en) * 2021-01-08 2021-05-07 重庆创通联智物联网有限公司 Training method of object defect recognition model, object defect recognition method and device
CN112884744A (en) * 2021-02-22 2021-06-01 深圳中科飞测科技股份有限公司 Detection method and device, detection equipment and storage medium
CN112950563A (en) * 2021-02-22 2021-06-11 深圳中科飞测科技股份有限公司 Detection method and device, detection equipment and storage medium
CN113160128B (en) * 2021-03-03 2022-11-01 合肥图迅电子科技有限公司 Defect detection method for LED and storage medium
CN112884691A (en) * 2021-03-10 2021-06-01 深圳中科飞测科技股份有限公司 Data enhancement and device, data enhancement equipment and storage medium
CN113204868B (en) * 2021-04-25 2023-02-28 中车青岛四方机车车辆股份有限公司 Defect detection parameter optimization method and optimization system based on POD quantitative analysis
CN113096130B (en) * 2021-06-09 2021-09-14 常州微亿智造科技有限公司 Method and device for detecting object defects
CN113378818B (en) * 2021-06-21 2024-06-07 中国南方电网有限责任公司超高压输电公司柳州局 Electrical equipment defect determining method and device, electronic equipment and storage medium
CN113744199B (en) * 2021-08-10 2023-09-26 南方科技大学 Image breakage detection method, electronic device, and storage medium
CN113695256B (en) * 2021-08-18 2023-05-23 国网江苏省电力有限公司电力科学研究院 Power grid foreign matter detection and identification method and device
CN113808104B (en) * 2021-09-16 2024-04-02 西安交通大学 Metal surface defect detection method and system based on blocking
CN113781485B (en) * 2021-11-12 2022-09-09 成都数联云算科技有限公司 Intelligent detection method and device for PCB defect types, electronic equipment and medium
CN114066900A (en) * 2021-11-12 2022-02-18 北京百度网讯科技有限公司 Image segmentation method and device, electronic equipment and storage medium
CN114611666B (en) * 2022-03-08 2024-05-31 安谋科技(中国)有限公司 Quantification method of NMS function, electronic equipment and medium
CN114782429B (en) * 2022-06-17 2023-04-07 深圳市菲尼基科技有限公司 Image-based lithium battery defect detection method, device, equipment and storage medium
CN114820621B (en) * 2022-06-29 2022-09-06 中冶建筑研究总院(深圳)有限公司 Bolt loss defect detection method, system and device
CN115272249B (en) * 2022-08-01 2024-07-09 腾讯科技(深圳)有限公司 Defect detection method, device, computer equipment and storage medium
CN115457297B (en) * 2022-08-23 2023-09-26 中国航空油料集团有限公司 Oil leakage detection method and device for aviation oil depot and aviation oil safety operation and maintenance system
CN117036227A (en) * 2022-09-21 2023-11-10 腾讯科技(深圳)有限公司 Data processing method, device, electronic equipment, medium and program product

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018000731A1 (en) * 2016-06-28 2018-01-04 华南理工大学 Method for automatically detecting curved surface defect and device thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160379B (en) * 2018-11-07 2023-09-15 北京嘀嘀无限科技发展有限公司 Training method and device of image detection model, and target detection method and device
CN111199175A (en) * 2018-11-20 2020-05-26 株式会社日立制作所 Training method and device for target detection network model
CN109711474B (en) * 2018-12-24 2023-01-17 中山大学 Aluminum product surface defect detection algorithm based on deep learning
CN109859171B (en) * 2019-01-07 2021-09-17 北京工业大学 Automatic floor defect detection method based on computer vision and deep learning
CN110910353B (en) * 2019-11-06 2022-06-10 成都数之联科技股份有限公司 Industrial false failure detection method and system

Also Published As

Publication number Publication date
CN111814867A (en) 2020-10-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant