CN114240822A - Cotton cloth flaw detection method based on YOLOv3 and multi-scale feature fusion - Google Patents

Cotton cloth flaw detection method based on YOLOv3 and multi-scale feature fusion

Info

Publication number
CN114240822A
Authority
CN
China
Prior art keywords
frame, yolov3, image, method based, detection method
Prior art date
2021-12-28
Legal status
Pending
Application number
CN202111243617.2A
Other languages
Chinese (zh)
Inventor
Chi Mingmin (池明旻)
Current Assignee
Zhongshan Fudan Joint Innovation Center
Zhongshan Xiaochi Technology Co., Ltd.
Original Assignee
Zhongshan Fudan Joint Innovation Center
Zhongshan Xiaochi Technology Co., Ltd.
Priority date: 2021-12-28
Filing date: 2021-12-28
Publication date: 2022-03-25
Application filed by Zhongshan Fudan Joint Innovation Center and Zhongshan Xiaochi Technology Co., Ltd.
Priority to CN202111243617.2A
Publication of CN114240822A

Classifications

    • G06T 7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
    • G06F 18/23213: Pattern recognition; non-hierarchical clustering using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06F 18/2415: Pattern recognition; classification techniques based on parametric or probabilistic models, e.g. likelihood ratio or false-acceptance versus false-rejection rate
    • G06F 18/253: Pattern recognition; fusion techniques of extracted features
    • G06N 3/045: Neural networks; architecture; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 7/10: Image analysis; segmentation; edge detection
    • G06T 7/73: Image analysis; determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10004: Image acquisition modality; still image; photographic image
    • G06T 2207/30124: Subject of image; industrial image inspection; fabrics; textile; paper

Abstract

The invention discloses a cotton cloth defect detection method based on a YOLOv3 network and multi-scale feature fusion, and designs a YOLOv3-based defect detection model to detect hidden defects in textiles. First, the textile image data are preprocessed by random cropping, rotation and similar operations to strengthen the robustness of the model. Anchor boxes are then clustered with the K-Means++ algorithm to obtain parameters that progressively approach the ground-truth boxes. A Darknet-53 network then extracts the image features, on which feature fusion and multi-scale classification prediction are performed, including position confidence scoring, classification confidence scoring and prediction-box filtering; the method finally outputs the target class and the predicted Bbox result.

Description

Cotton cloth flaw detection method based on YOLOv3 and multi-scale feature fusion
Technical Field
The invention belongs to the technical field of textile component analysis and material classification, and particularly relates to a cotton cloth flaw detection method based on YOLOv3 and multi-scale feature fusion.
Background
In today's textile enterprises, fabric inspection still relies mainly on traditional manual visual inspection, with a typical inspection speed of 15-20 m/min, which cannot meet the requirement of online inspection; defective fabric leaving a textile mill seriously harms its economic value. Inspection methods based on machine vision and deep learning have therefore become a research hotspot in recent years. Traditional machine-vision approaches can be roughly divided into statistical methods (mathematical morphology, histogram statistics, genetic algorithms, Bayesian statistics), model-based methods (total-variation models) and spectral methods (fast Fourier transform, wavelet transform and Gabor transform). Because a single algorithm can rarely produce an ideal defect profile, many researchers fuse several algorithms, but these traditional algorithms depend too heavily on shallow image features, so low image contrast, uneven illumination and similar factors disturb the detection result and keep the detection accuracy low.
Deep learning, meanwhile, has developed rapidly in object detection. Two-stage pipelines built on frameworks such as Fast R-CNN, Faster R-CNN and Mask R-CNN have become a popular paradigm: candidate boxes (proposals) are generated first and then classified, which achieves good detection results but runs slowly because of the extra stage. Other frameworks such as SSD and YOLO omit proposal generation and, after the whole image is input, regress the target position and class directly in the output layer, obtaining a much higher detection speed. Earlier YOLO-based work studied dataset preprocessing for weld-defect detection and improved network detection speed through pipelined operation, and the quality of anchor-box selection directly affects detection accuracy. The present method therefore improves on the YOLOv3 network and proposes a cotton cloth defect target detection algorithm, FS-YOLOv3 (Four-Scale YOLOv3), which further raises detection accuracy while preserving speed by optimizing three aspects: prior-box selection, scale selection and target-box filtering.
Disclosure of Invention
The cotton cloth defect detection method based on the YOLOv3 network and multi-scale feature fusion provided by the invention designs a YOLOv3-based defect detection model to detect hidden defects in textiles. First, the textile image data are preprocessed by random cropping, rotation and similar operations to strengthen the robustness of the model. Anchor boxes are then clustered with the K-Means++ algorithm to obtain parameters that progressively approach the ground-truth boxes. A Darknet-53 network then extracts the image features, on which feature fusion and multi-scale classification prediction are performed, including position confidence scoring, classification confidence scoring and prediction-box filtering; the method finally outputs the target class and the predicted Bbox result.
The YOLO-based defect detection network provided by the invention comprises the following steps:
s1: image preprocessing, which mainly includes operations such as normalization and standardization of an image matrix. The softmax function is commonly used as a normalization function, and Min-Max Feature scaling or Standard score is commonly used for normalizing data;
s2: in the cotton cloth defect detection method based on YOLOv3 and multi-scale feature fusion described in patent claim 2, the image data input to the image segmentation method is preprocessed by random cropping, rotation, etc. in order to enhance the robustness of the model. Assuming that the height and width of the output feature map are H, W, respectively, it is equivalent to dividing the image into H × W meshes, and each mesh of the image corresponds to one feature point on the plane of the output feature map. These anchor boxes are placed on each grid, one for each prediction box on each grid. In the stage of filtering the prediction frame, setting a classification confidence threshold as threshold1, positioning a confidence threshold of threshold2, highlighting selection of a prior frame, selection of a scale and a filtered part of a target frame by using a red frame;
s3: the K-Means clustering anchor frame in the cotton cloth defect detection method based on YOLOv3 and multi-scale feature fusion described in patent claim 3 utilizes an anchor point mechanism to predict the next boundary frame to reduce the complexity of model training. And gradually correcting the initial candidate frame according to the real frame along with the continuous learning of the sample characteristics to obtain parameters gradually close to the real frame. In order to reduce the influence of random initialization on the result and obtain a better IOU score, a K-Means + + algorithm is used for replacing the K-Means algorithm, and the intersection ratio (IOU) of a candidate frame and a real frame is used as a position similarity measurement, and the calculation formula is as follows:
d(box,centre)=1-IOU(box,centre)
the box is a real frame of the target, the center is a sample clustering center, the IOU represents the intersection ratio of the real frame of the target and an anchor frame obtained by clustering, the function value is reduced along with the increase of the IOU, the closer the anchor frame obtained by clustering is to the real frame, and the better the effect is.
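A sketch of K-Means++ anchor clustering under this 1 − IOU distance follows (Python; box sizes are treated as (w, h) pairs aligned at a common corner, the usual convention for anchor clustering, and the helper names are assumptions rather than the patent's):

```python
import numpy as np

def iou_wh(boxes, centres):
    # IOU for (w, h) pairs aligned at a common corner
    inter = (np.minimum(boxes[:, None, 0], centres[None, :, 0]) *
             np.minimum(boxes[:, None, 1], centres[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None] +
             (centres[:, 0] * centres[:, 1])[None, :] - inter)
    return inter / union

def kmeans_pp_anchors(boxes, k, iters=100, seed=0):
    boxes = np.asarray(boxes, dtype=float)
    rng = np.random.default_rng(seed)
    # K-Means++ seeding: first centre uniform, later centres drawn with
    # probability proportional to the squared distance d = 1 - IOU
    centres = [boxes[rng.integers(len(boxes))]]
    for _ in range(1, k):
        d = 1.0 - iou_wh(boxes, np.array(centres))
        p = d.min(axis=1) ** 2
        centres.append(boxes[rng.choice(len(boxes), p=p / p.sum())])
    centres = np.array(centres)
    for _ in range(iters):  # Lloyd updates under d(box, centre) = 1 - IOU
        assign = (1.0 - iou_wh(boxes, centres)).argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centres[j] = boxes[assign == j].mean(axis=0)
    return centres
```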
In the prediction stage, a three-dimensional tensor is predicted for the input defect image, containing the coordinate information (x, y, w, h) of each prediction box of a cotton defect. The centre of the anchor box is shifted into agreement with the ground-truth box according to
b_x = σ(x) + c_x
b_y = σ(y) + c_y
and the width and height of the prediction box are stretched into agreement with the ground-truth box according to
b_w = p_w · e^w
b_h = p_h · e^h
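The decoding step follows directly from these four formulas; a minimal Python sketch, keeping the patent's notation where (x, y, w, h) are the raw predicted offsets, (c_x, c_y) the grid-cell offsets and (p_w, p_h) the anchor priors:

```python
import numpy as np

def decode_box(x, y, w, h, cx, cy, pw, ph):
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    bx = sigmoid(x) + cx   # shift the anchor centre: b_x = sigma(x) + c_x
    by = sigmoid(y) + cy   # b_y = sigma(y) + c_y
    bw = pw * np.exp(w)    # stretch the width:  b_w = p_w * e^w
    bh = ph * np.exp(h)    # stretch the height: b_h = p_h * e^h
    return bx, by, bw, bh  # in grid units; multiply by the stride for pixels
```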
S4: in the cotton defect detection method based on YOLOv3 and multi-scale feature fusion described in patent claim 4, Darknet-53 is used to extract image features, and 32 convolution kernels are used for an initial convolution layer to filter cotton defect images with the size of 416 × 416; the output of the previous convolutional layer is then taken as the input for the next layer and the downsampling operation is implemented for their filter with a step size of two pixels using 64 convolution kernels, 3 x 3. Through the above operations, a feature map with dimensions of 208 × 208 can be obtained; five sets of networks including 1 ×, 2 ×, 8 ×, 8 ×, 4 × residual blocks are executed to obtain feature maps of 208 × 208, 104 × 104, 52 × 52, 26 × 26, 13 × 13 resolutions, respectively.
S5: in the method for detecting defects of cotton cloth based on YOLOv3 and multi-scale feature fusion described in patent claim 5, a classification confidence threshold is set to be threshold1, and a positioning confidence threshold is set to be threshold2 in the stage of filtering a prediction frame;
s6: in the cotton defect detection method based on YOLOv3 and multi-scale feature fusion described in patent claim 6, the scoring result is compared with a threshold, the previous classification confidence score is compared with the positioning confidence and the threshold set during initialization, if both are greater than the threshold, the prediction box is filtered, otherwise, the Bbox target box is discarded;
s7: in the cotton defect detection method based on YOLOv3 and multi-scale feature fusion described in patent claim 7, the softer-NMS algorithm is introduced to refine the borders, and for a sample image with low contrast, noise is often regarded as a target, and the low-score borders are forcibly removed, which causes accuracy reduction, so the softer-NMS algorithm is introduced to improve the accuracy of the classification confidence borders.
Compared with the prior art, the invention has the following advantage: detection accuracy is improved while detection speed is preserved.
Drawings
The present application will be described in further detail with reference to the following drawings and detailed description.
FIG. 1 is a diagram of a classic YOLOv3 network architecture to which the present invention relates;
FIG. 2 is a flow chart of target detection according to the present invention;
FIG. 3 is a diagram of the improved YOLOv3 network architecture proposed by the present invention;
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
In the following description of the embodiments, for purposes of clearly illustrating the structure and operation of the present invention, directional terms are used, but the terms "front", "rear", "left", "right", "outer", "inner", "outward", "inward", "axial", "radial", and the like are to be construed as words of convenience and are not to be construed as limiting terms.
The invention is described in detail below with reference to the accompanying drawings.
Fig. 1 is the flow chart of the classic YOLOv3, which uses the first 52 layers of Darknet-53 (no fully connected layers). YOLOv3 is a fully convolutional network that makes extensive use of residual skip connections; to reduce the negative gradient effects of pooling, it abandons pooling entirely and implements downsampling with strided convolutions. In this network structure, a convolution with a stride of 2 is used for downsampling. To strengthen the accuracy of small-target detection, YOLOv3 adopts an FPN-like upsample-and-fuse approach (finally fusing 3 scales, the other two being 26 × 26 and 52 × 52) and performs detection on feature maps of multiple scales. The 3 prediction branches are also fully convolutional; the number of convolution kernels in the last convolution layer is 255, which for the 80 classes of the COCO dataset follows from 3 × (80 + 4 + 1) = 255, where 3 means that each grid cell contains 3 bounding boxes, 4 denotes the four coordinates of a box, and 1 denotes the objectness score.
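The 255-channel head follows from simple arithmetic (an illustration, not code from the patent):

```python
num_anchors = 3    # bounding boxes per grid cell
num_classes = 80   # COCO classes
num_coords = 4     # the four box coordinates
num_conf = 1       # objectness score
print(num_anchors * (num_classes + num_coords + num_conf))  # 255
```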
FIG. 2 is the flow chart of target detection, which details the flow of defect target detection; it proceeds as follows:
Step 1: input an image and normalize it;
Step 2: divide the image into H × W grids;
Step 3: slide a window to generate a number of candidate boxes;
Step 4: classify the candidates with the trained classifier;
Step 5: screen the Bboxes: keep the best Bbox and compute the IOU between it and the remaining boxes (see the sketch after this list);
Step 6: compare each IOU value with the threshold; if the IOU is smaller than the threshold, retain the corresponding box index, otherwise discard the Bbox;
Step 7: mark the defect contour.
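Steps 5 and 6 are the classic greedy NMS loop; a minimal Python sketch (helper names are assumptions):

```python
import numpy as np

def iou_one_to_many(a, b):
    # IOU of box a = (x1, y1, x2, y2) against an (N, 4) array b
    ix1 = np.maximum(a[0], b[:, 0]); iy1 = np.maximum(a[1], b[:, 1])
    ix2 = np.minimum(a[2], b[:, 2]); iy2 = np.minimum(a[3], b[:, 3])
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1]) - inter)
    return inter / union

def greedy_nms(boxes, scores, iou_threshold=0.5):
    order = scores.argsort()[::-1]   # highest-scoring Bbox first
    keep = []
    while order.size > 0:
        best, rest = order[0], order[1:]
        keep.append(int(best))                      # step 5: keep the best Bbox
        ious = iou_one_to_many(boxes[best], boxes[rest])
        order = rest[ious < iou_threshold]          # step 6: retain only IOU < threshold
    return keep
```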
FIG. 3 is a diagram of the improved YOLOv3 network architecture proposed by the present invention, in which Darknet-53 is used to extract the image features: the initial convolution layer filters the 416 × 416 cotton-defect image with 32 3 × 3 convolution kernels; the output of the previous convolution layer is then taken as the input of the next layer, and downsampling is implemented with 64 3 × 3 convolution kernels at a stride of two pixels, yielding a 208 × 208 feature map; five groups of residual blocks (1 ×, 2 ×, 8 ×, 8 ×, 4 ×) are then executed, producing feature maps at resolutions of 208 × 208, 104 × 104, 52 × 52, 26 × 26 and 13 × 13 respectively.
The foregoing is only a preferred embodiment of the present application, and it should be noted that, for those skilled in the art, several modifications and substitutions can be made without departing from the technical principle of the present application, and these modifications and substitutions should also be regarded as the protection scope of the present application.

Claims (8)

1. The cotton cloth flaw detection method based on YOLOv3 and multi-scale feature fusion is characterized by comprising the following steps of:
S1: inputting a cotton cloth image, and carrying out normalization and standardization preprocessing on the cotton cloth image;
S2: randomly cropping the image and adding anchor boxes on the grids;
S3: clustering the anchor boxes by using the K-Means++ algorithm;
S4: extracting image features by Darknet-53;
S5: position confidence scoring and classification confidence scoring;
S6: comparing the scoring results with the thresholds;
S7: introducing the softer-NMS algorithm to refine the bounding boxes.
2. The cotton cloth defect detection method based on YOLOv3 and multi-scale feature fusion as claimed in claim 1, wherein the S1 image preprocessing mainly comprises operations such as normalization and standardization of the image matrix.
3. The cotton cloth defect detection method based on YOLOv3 and multi-scale feature fusion as claimed in claim 1, wherein S2 randomly crops the image and adds anchor boxes to the grids: the input image data are preprocessed by random cropping, rotation and similar operations to strengthen the robustness of the model; assuming the height and width of the output feature map are H and W respectively, this is equivalent to dividing the image into H × W grids, each grid of the image corresponding to one feature point on the output feature-map plane; anchor boxes are placed on every grid, and each anchor box on each grid corresponds to one prediction box.
4. The cotton cloth defect detection method based on YOLOv3 and multi-scale feature fusion as claimed in claim 1, wherein S3 clusters the anchor boxes with the K-Means++ algorithm: an anchor mechanism is used to predict the bounding boxes and reduce the complexity of model training; as the sample features are continuously learned, the initial candidate boxes are corrected step by step against the ground-truth boxes to obtain parameters that progressively approach the real boxes; to reduce the influence of random initialization on the result and obtain a better IOU score, the K-Means++ algorithm replaces the K-Means algorithm, and the intersection-over-union (IOU) between a candidate box and a ground-truth box is taken as the position-similarity measure.
5. The cotton cloth defect detection method based on YOLOv3 and multi-scale feature fusion as claimed in claim 1, wherein S4 uses Darknet-53 to extract the image features: the initial convolution layer uses 32 3 × 3 convolution kernels to filter the 416 × 416 cotton-defect image; the output of the previous convolution layer is then taken as the input of the next layer, and downsampling is implemented with 64 3 × 3 convolution kernels at a stride of two pixels, yielding a 208 × 208 feature map; five groups of residual blocks (1 ×, 2 ×, 8 ×, 8 ×, 4 ×) are then executed, producing feature maps at resolutions of 208 × 208, 104 × 104, 52 × 52, 26 × 26 and 13 × 13 respectively.
6. The cotton cloth defect detection method based on YOLOv3 and multi-scale feature fusion as claimed in claim 1, wherein S5 computes the position confidence score and the classification confidence score: in the prediction-box filtering stage, the classification confidence threshold is set to threshold1 and the positioning confidence threshold to threshold2, and the selection of prior boxes, the selection of scales and the filtered target boxes are highlighted with red boxes.
7. The cotton cloth defect detection method based on YOLOv3 and multi-scale feature fusion as claimed in claim 1, wherein S6 compares the scoring results with the thresholds: the classification confidence score and the positioning confidence score are compared with the thresholds set at initialization; if both exceed their thresholds, the prediction box passes the filter, otherwise the Bbox target box is discarded.
8. The cotton cloth defect detection method based on YOLOv3 and multi-scale feature fusion as claimed in claim 1, wherein S7 introduces the softer-NMS algorithm to refine the bounding boxes: in low-contrast sample images, noise is often mistaken for a target, and forcibly removing low-score boxes reduces accuracy, so the softer-NMS algorithm is introduced to improve the accuracy of the classification-confidence bounding boxes.
CN202111243617.2A (priority date 2021-12-28, filing date 2021-12-28) · Cotton cloth flaw detection method based on YOLOv3 and multi-scale feature fusion · CN114240822A (en) · Pending

Priority Applications (1)

Application Number: CN202111243617.2A · Priority Date: 2021-12-28 · Filing Date: 2021-12-28 · Title: Cotton cloth flaw detection method based on YOLOv3 and multi-scale feature fusion

Publications (1)

Publication Number: CN114240822A (en) · Publication Date: 2022-03-25

Family

ID=80743240

Family Applications (1)

Application Number: CN202111243617.2A · Title: Cotton cloth flaw detection method based on YOLOv3 and multi-scale feature fusion · Priority Date: 2021-12-28 · Filing Date: 2021-12-28 · Status: Pending

Country Status (1)

Country Link
CN (1) CN114240822A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110310259A (en) * 2019-06-19 2019-10-08 江南大学 It is a kind of that flaw detection method is tied based on the wood for improving YOLOv3 algorithm
CN110490874A (en) * 2019-09-04 2019-11-22 河海大学常州校区 Weaving cloth surface flaw detecting method based on YOLO neural network
US20200005468A1 (en) * 2019-09-09 2020-01-02 Intel Corporation Method and system of event-driven object segmentation for image processing
CN110827277A (en) * 2019-11-26 2020-02-21 山东浪潮人工智能研究院有限公司 Cloth flaw detection method based on yolo3 network
CN111968076A (en) * 2020-07-24 2020-11-20 西安工程大学 Real-time fabric defect detection method based on S-YOLOV3
CN112613387A (en) * 2020-12-18 2021-04-06 五邑大学 Traffic sign detection method based on YOLOv3
CN113192040A (en) * 2021-05-10 2021-07-30 浙江理工大学 Fabric flaw detection method based on YOLO v4 improved algorithm

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Arthur, D. et al.: "k-means++: The Advantages of Careful Seeding", Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms. *
Joseph Redmon et al.: "YOLOv3: An Incremental Improvement", arXiv. *
Yihui He et al.: "Softer-NMS: Rethinking Bounding Box Regression for Accurate Object Detection", arXiv. *
Liu Lulu et al.: "Cotton fabric defect detection based on FS-YOLOv3 and multi-scale feature fusion", Journal of South-Central University for Nationalities (Natural Science Edition). *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115049639A (en) * 2022-07-21 2022-09-13 浙江理工大学 Fabric flaw detection method based on classification weighted YOLOv5 model
CN115049639B (en) * 2022-07-21 2024-04-26 浙江理工大学 Fabric flaw detection method based on classified re-weighting YOLOv model


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2022-03-25