CN111862092A - Express delivery outer package defect detection method and device based on deep learning - Google Patents

Express delivery outer package defect detection method and device based on deep learning

Info

Publication number
CN111862092A
Authority
CN
China
Prior art keywords
image
defect
model
training
semantic segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010780139.8A
Other languages
Chinese (zh)
Inventor
彭博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN202010780139.8A priority Critical patent/CN111862092A/en
Publication of CN111862092A publication Critical patent/CN111862092A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an express outer package defect detection method and device based on deep learning, used to distinguish and locate defects of an express outer package such as holes, scratches, cracks and deformation, and relates to the technical field of machine vision. The method comprises the following steps: acquiring a close-range image of the express outer package captured by a depth camera; classifying the image to be detected according to a pre-trained defect image classification model and judging whether a defect region exists; and, if a defect region exists, locating the defect region in the image to be detected with a pre-trained image semantic segmentation model. Because the image to be detected captured by the depth camera reflects not only the surface texture and color information of the express package but also its geometric, three-dimensional structural information, accurate defect detection and position identification can be performed on the express package image according to the pre-trained defect image classification model and the pre-trained image semantic segmentation model.

Description

Express delivery outer package defect detection method and device based on deep learning
Technical Field
The invention relates to the technical field of machine vision, in particular to a method and a device for detecting defects of express delivery outer packages based on deep learning.
Background
With the rapid development of the internet and electronic commerce, online shopping has driven rapid growth of the logistics industry and a sharp increase in parcel volume. In the process of delivering an express parcel from sender to receiver, defect detection of the outer package is a key step in ensuring transportation quality. The existing detection mode is mainly manual inspection, but manual inspection takes time, and most logistics companies do not employ dedicated personnel for express outer package inspection, so detection efficiency is low and labor cost is high. In addition, the inspection result depends on the inspector's experience and attentiveness, so the detection effect is poor.
With the further development of science and technology, image processing and recognition technology has been applied to defect detection and has greatly improved detection efficiency; however, conventional image processing methods frequently produce false detections, so defect detection accuracy remains low.
With the rapid development of computer science, artificial intelligence technology is being deployed in many industries and greatly promotes productivity. Computer vision technology based on deep learning has achieved numerous application results in image classification, object recognition, image understanding and the like. However, computer vision technology has not yet been effectively applied in the field of express package defect detection.
Disclosure of Invention
The invention aims to provide an express delivery outer package defect detection method and device based on deep learning, which can improve the accuracy and efficiency of express delivery outer package defect detection.
In a first aspect, an embodiment of the present application provides an express outer package defect detection method based on deep learning, the method comprising: acquiring an image to be detected captured by a depth camera; classifying the image to be detected according to a pre-trained defect image classification model to obtain a classification result, the classification result being a defect class or a non-defect class; and, if a defect region exists in the image to be detected, performing semantic segmentation prediction on the image to be detected with a pre-trained image semantic segmentation model and determining the position and type of the defect from the predicted segmentation result.
In this implementation, the image of the express outer package captured by the depth camera reflects both the surface characteristics of the target parcel and its geometric, three-dimensional structural information, so the image can be accurately detected and identified according to the pre-trained defect image classification model and the pre-trained image semantic segmentation model.
In some embodiments of the present invention, before the step of classifying the image to be detected according to the pre-trained defect image classification model, the method includes: acquiring training samples, where the training samples include positive training samples of express outer packages containing defects and negative training samples containing no defects; preprocessing the training samples, where the preprocessing includes at least one basic operation among cropping, scaling, normalization and standardization; and, after gray-scale processing of the images in the training samples, retaining the feature data of three channels as the input of the defect image classification model. In this implementation, the training samples include both positive and negative samples, which helps ensure that the trained defect image classification model is more accurate; preprocessing the training samples further improves the recognition accuracy of the trained model.
In some embodiments of the present invention, before the step of classifying the image to be detected according to the pre-trained defect image classification model, the method includes: establishing a defect image classification model, where the defect image classification model includes a feature extraction model, a multi-scale feature fusion model and a classifier, the feature extraction model being a six-layer convolutional neural network, the multi-scale feature fusion model being a feature pyramid model, and the classifier being a multi-layer fully-connected network; and training the defect image classification model with the training samples to obtain the trained defect image classification model.
In some embodiments of the present invention, the step of classifying the image to be detected according to the pre-trained defect image classification model to obtain a classification result, the classification result being a defect class or a non-defect class, includes: inputting the image to be detected into the feature extraction model and obtaining six feature maps after processing by the feature extraction model; inputting the six feature maps into the multi-scale feature fusion model for feature processing to obtain five feature matrices; and reducing the five feature matrices to a one-dimensional vector, inputting the vector into the classifier and outputting the binary classification result. At least 6 convolution modules are used to convolve the training data into feature maps that contain semantic information of the image under receptive fields of different sizes: the lower-layer feature maps are sensitive to small defects and flaws in the image, while the upper-layer feature maps are sensitive to large ones. The multi-scale feature fusion module applies deconvolution and one-dimensional convolution to the multi-level feature maps so that the upper-layer and lower-layer features have the same dimensions, and then fuses the multi-scale features by feature addition. The classifier takes the fused features, performs nonlinear computation through a fully-connected network, and finally outputs the binary result of defect image detection.
In some embodiments of the present invention, before the step of performing semantic segmentation prediction on the image to be detected with the pre-trained image semantic segmentation model, the method includes: constructing a deep-learning-based image semantic segmentation model, where the image semantic segmentation model includes at least two basic semantic segmentation submodels, an attention model and a fusion unit; processing the training images in the training set with the at least two basic semantic segmentation submodels to obtain at least two feature maps for each training image, where the training images are labeled with semantic segmentation information in advance and the feature maps contain semantic information; computing, with the attention model, the at least two feature maps of each training image together with the pre-labeled semantic segmentation information to obtain an attention value for each feature map; fusing, with the fusion unit, the at least two feature maps of the training image according to the attention values to obtain a predicted semantic segmentation result for the training image; and iteratively training the at least two basic semantic segmentation submodels and the attention model according to the predicted semantic segmentation result and the pre-labeled semantic segmentation information to obtain the trained image semantic segmentation model.
In some embodiments of the present invention, the step of processing the plurality of training images in the training set with the at least two basic semantic segmentation submodels is preceded by: acquiring a plurality of training images and processing each training image with an image enhancement technique; and labeling each training image with semantic segmentation information to obtain the training set.
In a second aspect, an embodiment of the present application provides an express outer package defect detection device based on deep learning, the device comprising: an image acquisition module, configured to acquire an image to be detected of the express outer package captured by a depth camera; a detection module, configured to classify the image to be detected according to a pre-trained defect image classification model and obtain a classification result, the classification result being a defect class or a non-defect class; and a defect region locating module, configured to, if a defect region exists in the image to be detected, perform semantic segmentation prediction on the image to be detected with a pre-trained image semantic segmentation model and determine the position and type of the defect from the predicted segmentation result.
In some embodiments of the invention, the device includes: a training sample acquisition module, configured to acquire training samples, where the training samples include positive training samples containing defects and negative training samples containing no defects; a preprocessing module, configured to preprocess the training samples, where the preprocessing includes at least one basic operation among cropping, scaling, normalization and standardization; and a feature data retention module, configured to retain, after gray-scale processing of the images in the training samples, the feature data of three channels as the input of the defect image classification model.
In some embodiments of the invention, the apparatus further comprises: the model establishing module is used for establishing a defect image classification model, wherein the defect image classification model comprises a feature extraction model, a multi-scale feature fusion model and a classification model, the feature extraction model is a six-layer convolutional neural network model, the multi-scale feature fusion model is a feature pyramid model, and the classifier is a multi-layer fully-connected network model. And the training module is used for training the defect image classification model by using the training samples to obtain the trained defect image classification model.
In some embodiments of the invention, the detection module includes: a feature map acquisition unit, configured to input the image to be detected into the feature extraction model and obtain six feature maps after processing by the feature extraction model; a feature processing unit, configured to input the six feature maps into the multi-scale feature fusion model for feature processing and obtain five feature matrices; and a classification unit, configured to convert the five feature matrices into a one-dimensional vector, input the vector into the classifier and output a classification result.
In some embodiments of the invention, the device includes: an image semantic segmentation model building module, configured to build a deep-learning-based image semantic segmentation model, where the image semantic segmentation model includes at least two basic semantic segmentation submodels, an attention model and a fusion unit; a feature map acquisition module, configured to process a plurality of training images in the training set with the at least two basic semantic segmentation submodels to obtain at least two feature maps for each training image, where the training images are labeled with semantic segmentation information in advance and the feature maps contain semantic information; an attention value acquisition module, configured to compute, with the attention model, the at least two feature maps of each training image together with the pre-labeled semantic segmentation information to obtain an attention value for each feature map; a fusion module, configured to fuse, with the fusion unit, the at least two feature maps of the training image according to the attention values to obtain a predicted semantic segmentation result for the training image; and an iterative training module, configured to iteratively train the at least two basic semantic segmentation submodels and the attention model according to the predicted semantic segmentation result and the pre-labeled semantic segmentation information to obtain a trained image semantic segmentation model.
In some embodiments of the invention, an apparatus comprises: and the image enhancement processing module is used for acquiring a plurality of training images and processing each training image by adopting an image enhancement technology. And the training set acquisition module is used for labeling semantic segmentation information to each training image so as to acquire a training set.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory for storing one or more programs; and a processor, where the one or more programs, when executed by the processor, implement the method according to any one of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method according to any one of the first aspect described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a flow chart of an express delivery external package defect detection method based on deep learning according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a multi-scale feature fusion defect image detection classification model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an image semantic segmentation model based on attention model fusion according to an embodiment of the present invention;
fig. 4 is a block diagram of a structure of an express delivery external package defect detection device based on deep learning according to an embodiment of the present invention;
fig. 5 is a schematic structural block diagram of an electronic device according to an embodiment of the present invention.
Reference numerals: 100-express delivery external package defect detection device based on deep learning; 110-image acquisition module for the image to be detected; 120-detection module; 130-defect region locating module; 101-memory; 102-processor; 103-communication interface.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the individual features of the embodiments can be combined with one another without conflict.
Referring to fig. 1, fig. 1 is a flow chart of an express delivery external package defect detection method based on deep learning according to an embodiment of the present invention, where the express delivery external package defect detection method based on deep learning includes the following steps:
step S110: and acquiring an image to be detected shot by the depth camera.
A depth image, also known as a range image, is an image whose pixel values are the distances (i.e. depths) from the image capture device to points in the scene; it directly reflects the geometry of the visible surfaces of the scene. Artificial intelligence relies on computer vision to acquire and recognize external information, and the depth camera acquires the image to be detected of the target object, ensuring that the target object can subsequently be identified accurately from that image.
In order to improve the applicability of the deep-learning-based express outer package defect detection method, a depth camera is used to acquire the image to be detected of the target express package. Even if the express package has a relatively complex spatial structure, the depth camera can capture its image to be detected, and defects on the surface or in the structure of the package are then identified from that image.
Step S120: and classifying the images to be detected according to the defect image classification model trained in advance to obtain a classification result, wherein the classification result is a defect type or a non-defect type.
The pre-trained defect image detection model is built by deep learning. To improve the detection accuracy for small flaws, a multi-scale feature fusion module is added to the model and feature maps of different sizes are fused, so that the model can comprehensively evaluate image features under different receptive fields. In one embodiment, before the model is trained on defect and non-defect data, its feature extraction network is pre-trained on the ImageNet dataset, so that the weights of this part of the model extract image features more effectively and converge faster. The overall model can identify images with high precision, which improves the classification accuracy for the express package images to be detected.
A multi-scale feature fusion defect detection and classification model for express outer packages is constructed. Referring to fig. 2, the defect detection model includes a feature extraction module, a multi-scale feature fusion module and a classification module. The feature extraction module is a neural network built from 6 convolution modules. The multi-scale feature fusion module adopts a pyramid-structured network: the deep and shallow feature maps are processed by operations such as convolution and deconvolution to change their dimensions and are then added, giving the model defect recognition capability under different receptive fields. The classification module concatenates the multi-scale features, reduces their dimensionality, performs nonlinear computation through a fully-connected network, and finally outputs the binary result of defect detection.
The construction of the feature extraction module is shown in block 201 of fig. 2. It comprises 6 identical feature learning stages, each containing 3 convolution layers with 3 x 3 kernels and 1 pooling layer using average pooling. The training image is input into the feature extraction model and convolved by the six convolution modules in turn, yielding 6 groups of feature maps of different sizes, denoted [C1, C2, C3, C4, C5, C6].
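For illustration, the following is a minimal PyTorch sketch of the feature extraction module described above: six identical stages, each with three 3 x 3 convolution layers and one average-pooling layer, returning the feature map groups C1 to C6. The channel widths are assumptions, not values given here.

```python
import torch
import torch.nn as nn

def conv_stage(in_ch, out_ch):
    # One feature learning stage: 3 convolution layers (3x3 kernels) + average pooling.
    layers = []
    for i in range(3):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.AvgPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

class FeatureExtractor(nn.Module):
    def __init__(self, channels=(32, 64, 128, 256, 256, 256)):  # assumed widths
        super().__init__()
        in_chs = (3,) + channels[:-1]
        self.stages = nn.ModuleList(conv_stage(i, o) for i, o in zip(in_chs, channels))

    def forward(self, x):
        feats = []  # C1..C6, each half the spatial size of the previous one
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats
```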
The multi-scale feature fusion module is shown in block 202 of fig. 2. A deconvolution operation is applied to each of the 5 feature map groups [C2, C3, C4, C5, C6] so that its receptive field matches that of the feature map one level above, yielding [T1, T2, T3, T4, T5]. A one-dimensional (1 x 1) convolution is then applied to the input feature maps [C1, C2, C3, C4, C5] to obtain [C1', C2', C3', C4', C5'], whose depth equals that of the deconvolved feature maps. The features [T1, T2, T3, T4, T5] are added to [C1', C2', C3', C4', C5'] respectively to obtain the fused feature maps [P1, P2, P3, P4, P5]. Because the 5 fused feature maps differ in size, different convolution kernels are designed to convolve each P feature map into five feature maps [M1, M2, M3, M4, M5] of the same size; the 5 M features are concatenated and converted into a one-dimensional vector f1, which contains image features at different scales and is therefore able to recognize and classify defect images of different sizes.
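A possible implementation of this fusion step, continuing the assumed FeatureExtractor sketch above: C2 to C6 are deconvolved to the size of the level above (T1 to T5), C1 to C5 are projected with 1 x 1 convolutions (C1' to C5'), and the pairs are added to give P1 to P5. The adaptive pooling used here to bring the fused maps to one common size (M1 to M5) stands in for the size-specific convolution kernels mentioned above and is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    def __init__(self, channels=(32, 64, 128, 256, 256, 256), fused_ch=64, out_hw=8):
        super().__init__()
        self.out_hw = out_hw
        # Deconvolutions for C2..C6: upsample x2 to the spatial size of the level above.
        self.deconvs = nn.ModuleList(
            nn.ConvTranspose2d(channels[i], fused_ch, kernel_size=2, stride=2)
            for i in range(1, 6))
        # 1x1 convolutions for C1..C5 so their depth matches the deconvolved maps.
        self.laterals = nn.ModuleList(
            nn.Conv2d(channels[i], fused_ch, kernel_size=1) for i in range(5))

    def forward(self, feats):                                        # feats = [C1..C6]
        t = [d(feats[i + 1]) for i, d in enumerate(self.deconvs)]    # T1..T5
        c = [l(feats[i]) for i, l in enumerate(self.laterals)]       # C1'..C5'
        p = [ti + ci for ti, ci in zip(t, c)]                        # P1..P5 (addition)
        # Bring every fused map to a common spatial size, then concatenate and flatten.
        m = [F.adaptive_avg_pool2d(pi, self.out_hw) for pi in p]     # M1..M5
        f1 = torch.cat([mi.flatten(1) for mi in m], dim=1)           # one-dimensional vector f1
        return f1
```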
The classification module is shown in block 203 of fig. 2. A three-layer fully-connected network is designed, whose final output dimension is 1, representing the binary classification result of the model, i.e. defective or non-defective. In one embodiment, a dropout layer is placed after each of the first two fully-connected layers to reduce the number of intermediate features, reduce redundancy and increase the orthogonality among the features of each layer.
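A sketch of this classification head: three fully-connected layers, dropout after the first two, and a single output unit for the defective / non-defective decision. The hidden sizes and dropout rate are assumptions.

```python
import torch.nn as nn

class DefectClassifier(nn.Module):
    def __init__(self, in_dim, hidden=(512, 128), p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden[0]), nn.ReLU(inplace=True), nn.Dropout(p_drop),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(inplace=True), nn.Dropout(p_drop),
            nn.Linear(hidden[1], 1))  # final output dimension 1: defect score

    def forward(self, f1):
        # Apply a sigmoid and a threshold outside to obtain the class label.
        return self.net(f1)
```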
In one embodiment, before performing defect image detection on the image to be detected with the pre-trained defect image detection model, the model may be trained through the following steps: first, training samples are obtained, including positive training samples containing defects and negative training samples containing no defects; the training samples are then preprocessed, features are extracted from the preprocessed samples with a convolutional network, a defect image detection model is established, and the extracted features are fed into the model for training to obtain the trained defect image detection model.
The proportion between positive and negative training samples can be determined by data balancing, for example cascade learning or resampling, or in other ways according to actual needs so that the model meets the user's requirements. For example, when the deep-learning-based express outer package defect detection method is used for product defect detection, a missed detection can cause large losses in the production process, so a misclassification penalty coefficient can be set for each sample, making the trained defect image detection model safer for the user's application.
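One hedged way to realize such a misclassification penalty, assuming a binary defect/no-defect label and a PyTorch training loop, is to weight the defect class more heavily in the loss; the factor of 10 below is purely illustrative.

```python
import torch
import torch.nn as nn

# Treat a missed defect as 10x more costly than a false alarm (illustrative value).
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([10.0]))
# loss = criterion(classifier_logits, defect_labels)  # defect_labels are 0.0 / 1.0
```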
Preprocessing the training samples may include data augmentation: when the number of training samples is insufficient, the existing pictures can be augmented, which both increases the sample size and improves the generalization ability of the model. The data augmentation may include at least one of a flipping transform, random cropping, color jittering, a translation transform, a scaling transform, a contrast transform, noise perturbation, and a rotation/reflection transform. Preprocessing may also include picture resizing, so that all pictures are adjusted to a uniform size for subsequent machine learning.
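As one example of such preprocessing and augmentation, the torchvision pipeline below combines several of the listed operations; the sizes and parameter values are assumptions.

```python
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize((256, 256)),            # uniform picture size
    transforms.RandomHorizontalFlip(),        # flipping transform
    transforms.RandomCrop(224),               # random cropping
    transforms.ColorJitter(0.2, 0.2, 0.2),    # color jittering / contrast transform
    transforms.RandomRotation(15),            # rotation transform
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # normalization / standardization
                         std=[0.229, 0.224, 0.225]),
])
```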
In the implementation process, the obtained training samples comprise positive training samples and negative training samples, the training samples can be used for ensuring that the trained defect image detection model is more accurate, the training samples are further preprocessed, and the accuracy of identification of the trained defect image detection model can be further improved.
Step S130: if the defect area exists in the image to be detected, performing semantic segmentation prediction on the image to be detected by using a pre-trained image semantic segmentation model, and judging the position and the type of the defect according to the segmentation result obtained by prediction.
Image semantic segmentation is an important part of image understanding in image processing and machine vision, and an important branch of the AI field. Semantic segmentation classifies every pixel in the image, determining the category of each point (for example background, person or vehicle) and thereby dividing the image into regions. Therefore, once a defect region is detected in the image to be detected, the pre-trained image semantic segmentation model can locate that region accurately, achieving precise defect detection of the target express outer package and making it convenient for workers to recycle or otherwise handle the defective parcel.
In some embodiments of the present invention, before the step of classifying the defect image of the image to be detected according to the defect image classification model trained in advance, the method includes: and establishing a defect image classification model, wherein the defect image classification model comprises a feature extraction model, a multi-scale feature fusion model and a classification model, the feature extraction model is a six-layer convolutional neural network model, the multi-scale feature fusion model is a feature pyramid model, and the classification model is a multi-layer fully-connected network model. And training the defect image classification model by using the training sample to obtain the trained defect image classification model.
In some embodiments of the present invention, the step of classifying the image to be detected according to the pre-trained defect image classification model to obtain a classification result, wherein the classification result is a defect class or a non-defect class, includes: and inputting the image to be detected into a feature extraction model, and processing the image to be detected by the feature extraction model to obtain six feature maps. And inputting the six feature maps into a multi-scale feature fusion model for feature processing, and obtaining five feature matrixes. Converting the five characteristic matrixes into one-dimensional vectors, inputting the one-dimensional vectors into a classifier, and outputting a classification result.
In detail, the defect image classification model is composed of a feature extraction model, a multi-scale feature fusion model and a classification model. The feature extraction model is improved and designed on the basis of a convolutional neural network, the multi-scale feature fusion model is designed by adopting a feature pyramid model, and the classifier is designed by adopting a multilayer full-connection network.
In the defect image detection stage, the acquired image to be detected is first input into the feature extraction model of the defect image classification model; for example, 6 groups of feature maps numbered C1 to C6 are obtained after passing through the 6 convolution network modules.
The 6 feature maps are then input into the multi-scale feature fusion model for further processing: deconvolution is applied to the C2, C3, C4, C5 and C6 feature maps to obtain five feature maps P1 to P5 with the same dimensions as C1 to C5. A 1 x 1 convolution kernel is applied to C1 to C5 and P1 to P5 to reduce the feature map depth, the C1 to C5 and P1 to P5 features are added respectively to obtain five feature matrices R1 to R5, and the 5 feature matrices are converted into a one-dimensional vector and sent to the classifier.
The one-dimensional vector is input into the classification network, which is a three-layer fully-connected network whose last layer has dimension 1 and which outputs the binary result of defect image detection.
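Putting the assumed components from the sketches above together, inference on a single image to be detected could look as follows; the input size, feature dimensions and 0.5 decision threshold are illustrative.

```python
import torch

backbone = FeatureExtractor()
fusion = MultiScaleFusion()
classifier = DefectClassifier(in_dim=5 * 64 * 8 * 8)   # 5 maps x 64 channels x 8 x 8

def detect_defect(image: torch.Tensor) -> bool:
    # image: (1, 3, 256, 256) tensor of the express outer package to be detected
    with torch.no_grad():
        feats = backbone(image)                 # C1..C6
        f1 = fusion(feats)                      # multi-scale one-dimensional vector
        score = torch.sigmoid(classifier(f1))   # defect probability
    return bool(score.item() > 0.5)             # True: defect class, run segmentation next
```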
As another embodiment, before the step of locating the defect region in the image to be detected by using the pre-trained semantic segmentation model, the construction and training of the semantic segmentation model may be completed by the following procedures.
First, a deep-learning-based image semantic segmentation model is constructed; referring to fig. 3, the image semantic segmentation model includes at least two basic semantic segmentation submodels, an attention model and a fusion unit. The at least two basic semantic segmentation submodels may be, for example, Fully Convolutional Network (FCN) models, dilated-convolution DilatedNet models, DeepLab semantic segmentation models, and the like.
Then, a plurality of training images in the training set are processed with the at least two basic semantic segmentation submodels to obtain at least two feature maps for each training image, where the training images are labeled with semantic segmentation information in advance and the feature maps contain semantic information. The training set contains a large number of training images whose semantic segmentation information can be labeled in advance with appropriate software; a training image is then input into the at least two basic semantic segmentation submodels simultaneously, and at least two feature maps are obtained correspondingly. For example, if the two basic semantic segmentation submodels are a DilatedNet model and a DeepLab model, then after the training image P is input into both models, a first feature map is obtained from the DilatedNet model and a second feature map from the DeepLab model. It is understood that if there are more basic semantic segmentation submodels, the training image can be input into all of them and yields as many feature maps as there are submodels.
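As a sketch of two such basic submodels, torchvision's FCN and DeepLabV3 implementations can stand in for the FCN and DeepLab models named above (DilatedNet is not provided by torchvision); the five-class label set is an assumption.

```python
import torch
from torchvision.models import segmentation as seg

num_classes = 5                                   # assumed: background + 4 defect types
fcn = seg.fcn_resnet50(num_classes=num_classes)
deeplab = seg.deeplabv3_resnet50(num_classes=num_classes)

image = torch.randn(1, 3, 256, 256)               # stand-in for one training image
feat_map_1 = fcn(image)["out"]                    # first feature map, (1, 5, 256, 256)
feat_map_2 = deeplab(image)["out"]                # second feature map, same shape
```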
Next, the attention model computes the at least two feature maps of each training image together with the pre-labeled semantic segmentation information to obtain an attention value for each feature map, such as w1 and w2 shown in fig. 3, which are the attention values of the feature maps produced by the two basic semantic segmentation submodels.
Then, the fusion unit fuses the at least two feature maps of the training image according to the attention values to obtain the predicted semantic segmentation result of the training image. The fusion may proceed as follows: first, each feature map of the training image (a three-dimensional matrix) is multiplied along its semantic segmentation dimension by the corresponding attention value; the multiplication results are then summed over the corresponding elements of all feature maps of the training image, and at each position the label with the maximum value in the summed result is selected as the predicted semantic segmentation result of the training image.
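A minimal sketch of this fusion rule: each submodel's per-pixel class-score map is scaled by its attention value, the scaled maps are summed element-wise, and the label with the maximum summed score is taken at every position. Shapes are assumptions.

```python
import torch

def fuse_with_attention(feature_maps, attention_values):
    # feature_maps: list of (num_classes, H, W) tensors, one per basic submodel
    # attention_values: list of scalars (e.g. w1, w2) produced by the attention model
    fused = sum(w * fm for w, fm in zip(attention_values, feature_maps))
    return fused.argmax(dim=0)   # (H, W) map of predicted labels per pixel
```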
Finally, the at least two basic semantic segmentation submodels and the attention model are iteratively trained according to the predicted semantic segmentation result and the pre-labeled semantic segmentation information to obtain the trained image semantic segmentation model. Because the predicted result may differ from the pre-labeled information, a mismatch indicates that the recognition accuracy of the image semantic segmentation model is still too low to locate the defect region in the image to be detected accurately; training of the at least two basic semantic segmentation submodels and the attention model therefore continues until the degree of match between the predicted semantic segmentation result and the pre-labeled semantic segmentation information reaches a certain threshold, at which point the trained image semantic segmentation model is obtained.
As a specific implementation manner in the foregoing embodiment, before processing a plurality of training images in a training set by using at least two basic semantic segmentation submodels, a plurality of training images may be obtained, and image processing may be performed on each training image by using an image enhancement technique, and then semantic segmentation information is labeled on each training image to obtain the training set.
As a specific implementation manner in the foregoing embodiment, parameters of at least two basic semantic segmentation submodels and an attention model may be modified according to a predicted semantic segmentation result of a training image and pre-labeled semantic segmentation information. The parameter correction method can adopt a cross entropy loss function to calculate the error between the predicted semantic segmentation result and the pre-labeled semantic segmentation information, and update the parameters of the basic semantic segmentation submodel and the attention model according to the error by using a back propagation algorithm.
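A minimal sketch of this parameter-correction step, under the assumption that the submodels and attention model are PyTorch modules returning plain score tensors; none of the names below come from the patent.

```python
import torch
import torch.nn as nn

def train_step(submodels, attention_model, optimizer, image, target):
    # image: (B, 3, H, W) training images; target: (B, H, W) pre-labeled class indices
    criterion = nn.CrossEntropyLoss()
    score_maps = [m(image) for m in submodels]             # per-submodel (B, C, H, W) maps
    weights = attention_model(score_maps)                  # one attention value per submodel
    fused = sum(w * s for w, s in zip(weights, score_maps))
    loss = criterion(fused, target)                        # error vs. pre-labeled segmentation
    optimizer.zero_grad()
    loss.backward()                                        # back-propagation of the error
    optimizer.step()                                       # update submodels + attention model
    return loss.item()
```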
Referring to fig. 4, fig. 4 is a block diagram illustrating a structure of an express delivery package defect detecting apparatus 100 based on deep learning according to an embodiment of the present invention, where the express delivery package defect detecting apparatus 100 based on deep learning includes:
and the image acquisition module 110 to be detected is used for acquiring an image to be detected, which is shot by the depth camera.
The detection module 120 is configured to classify the image to be detected according to the pre-trained defect image classification model and obtain a classification result, where the classification result is a defect type or a non-defect type.
And the defective area positioning module 130 is configured to, if a defective area is detected in the image to be detected, perform semantic segmentation prediction on the image to be detected by using a pre-trained image semantic segmentation model, and determine the position and the type of the defect according to a segmentation result obtained through the prediction.
In some embodiments of the invention, the device includes: a training sample acquisition module, configured to acquire training samples, where the training samples include positive training samples containing defects and negative training samples containing no defects; a preprocessing module, configured to preprocess the training samples, where the preprocessing includes at least one basic operation among cropping, scaling, normalization and standardization; and a feature data retention module, configured to retain, after gray-scale processing of the images in the training samples, the feature data of three channels as the input of the defect image classification model.
In some embodiments of the invention, the apparatus further comprises:
the model establishing module is used for establishing a defect image classification model, wherein the defect image classification model comprises a feature extraction model, a multi-scale feature fusion model and a classification model, the feature extraction model is a six-layer convolutional neural network model, the multi-scale feature fusion model is a feature pyramid model, and the classification model is a multi-layer fully-connected network model.
And the training module is used for training the defect image classification model by using the training samples to obtain the trained defect image classification model.
In some embodiments of the invention, the detection module comprises:
and the characteristic diagram acquisition unit is used for inputting the image to be detected into the characteristic extraction model and obtaining six characteristic diagrams after the processing of the characteristic extraction model.
And the characteristic processing unit is used for inputting the six characteristic graphs into the multi-scale characteristic fusion model for characteristic processing and obtaining five characteristic matrixes.
And the classification unit is used for converting the five characteristic matrixes into one-dimensional vectors, inputting the one-dimensional vectors into the classifier and outputting a classification result.
In some embodiments of the invention, an apparatus comprises:
the image semantic segmentation model building module is used for building an image semantic segmentation model based on deep learning, wherein the image semantic segmentation model comprises at least two basic semantic segmentation submodels, an attention model and a fusion unit.
The feature map acquisition module is used for processing a plurality of training images in the training set by utilizing at least two basic semantic segmentation submodels to obtain at least two feature maps corresponding to each training image, wherein the training images are labeled with semantic segmentation information in advance, and the feature maps comprise semantic information.
And the attention value acquisition module is used for calculating at least two feature maps corresponding to each training image and pre-labeled semantic segmentation information by using an attention model to obtain the attention value of each feature map.
And the fusion module is used for fusing at least two characteristic graphs of the training image according to the attention value by using the fusion unit to obtain a predicted semantic segmentation result of the training image.
And the iterative training module is used for performing iterative training on at least two basic semantic segmentation submodels and the attention model according to the predicted semantic segmentation result and the pre-labeled semantic segmentation information so as to obtain a trained image semantic segmentation model.
In some embodiments of the invention, an apparatus comprises:
and the image enhancement processing module is used for acquiring a plurality of training images and processing each training image by adopting an image enhancement technology.
And the training set acquisition module is used for labeling semantic segmentation information to each training image so as to acquire a training set.
Referring to fig. 5, fig. 5 is a schematic structural block diagram of an electronic device according to an embodiment of the present disclosure. The electronic device comprises a memory 101, a processor 102 and a communication interface 103, wherein the memory 101, the processor 102 and the communication interface 103 are electrically connected to each other directly or indirectly to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory 101 may be used to store software programs and modules, such as program instructions/modules corresponding to the deep learning-based express package defect detection apparatus 100 provided in the embodiments of the present application, and the processor 102 executes the software programs and modules stored in the memory 101, so as to execute various functional applications and data processing. The communication interface 103 may be used for communicating signaling or data with other node devices.
The Memory 101 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 102 may be an integrated circuit chip having signal processing capabilities. The processor 102 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
It will be appreciated that the configuration shown in fig. 5 is merely illustrative and that the electronic device may include more or fewer components than shown in fig. 5 or have a different configuration than shown in fig. 5. The components shown in fig. 5 may be implemented in hardware, software, or a combination thereof.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
To sum up, the express outer package defect detection method and device based on deep learning provided by the embodiments of the application comprise: acquiring an image to be detected captured by a depth camera; classifying the image to be detected according to a pre-trained defect image classification model; and, if a defect region exists in the image to be detected, locating the defect region with a pre-trained image semantic segmentation model. Because the image of the express outer package is captured by a depth camera, it reflects the geometric shape and three-dimensional structural information of the target package, so the image can be accurately detected and identified according to the pre-trained defect image classification model and the pre-trained image semantic segmentation model.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.

Claims (10)

1. An express delivery outer package defect detection method based on deep learning, characterized by comprising the following steps:
acquiring an image to be detected of the express delivery outer package captured by a depth camera;
performing defect image classification on the image to be detected according to a pre-trained defect image classification model to obtain a classification result, wherein the classification result is a defect class or a non-defect class;
if a defect region exists in the image to be detected, performing semantic segmentation prediction on the image to be detected by using a pre-trained image semantic segmentation model, and determining the position and type of the defect from the predicted segmentation result.
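For illustration only, the following is a minimal PyTorch sketch of the two-stage pipeline described in claim 1 (classification first, segmentation only when a defect is found). The function `detect_defects`, the `classifier`/`segmenter` arguments, and the class-index convention (0 = non-defect) are placeholders assumed for this sketch; they are not part of the patent disclosure.

```python
import torch

@torch.no_grad()
def detect_defects(image, classifier, segmenter):
    """image: float tensor of shape (1, C, H, W) captured by the depth camera."""
    classifier.eval()
    segmenter.eval()

    # Stage 1: defect / non-defect classification.
    logits = classifier(image)                  # (1, num_classes)
    if logits.argmax(dim=1).item() == 0:        # 0 = non-defect (assumed convention)
        return {"defective": False}

    # Stage 2: semantic segmentation to locate the defect region and its type.
    seg_logits = segmenter(image)               # (1, num_defect_types + 1, H, W)
    mask = seg_logits.argmax(dim=1)             # per-pixel defect type, 0 = background
    return {"defective": True, "mask": mask}
```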
2. The method of claim 1, wherein the step of performing defect image classification on the image to be detected according to the pre-trained defect image classification model comprises:
obtaining training samples, wherein the training samples comprise positive training samples containing defects and negative training samples containing no defects;
preprocessing the training samples, wherein the preprocessing comprises at least one basic operation of cropping, scaling, normalization and standardization;
and after performing grayscale processing on the images in the training samples, retaining the feature data of the three channels as the input of a defect image classification model.
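As a hedged example of the preprocessing in claim 2 (cropping, scaling, normalization, standardization, and grayscale processing while keeping three channels), one possible torchvision pipeline is sketched below; the crop size, target resolution, and normalization statistics are assumptions, not values given in the patent.

```python
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.CenterCrop(512),                   # cropping
    transforms.Resize((224, 224)),                # scaling
    transforms.Grayscale(num_output_channels=3),  # grayscale, but keep three channels
    transforms.ToTensor(),                        # normalization to [0, 1]
    transforms.Normalize(mean=[0.5, 0.5, 0.5],    # standardization (assumed statistics)
                         std=[0.5, 0.5, 0.5]),
])
# tensor = preprocess(pil_image)  # -> torch.FloatTensor of shape (3, 224, 224)
```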
3. The method of claim 2, wherein the step of performing defect image classification on the image to be detected according to the pre-trained defect image classification model comprises:
establishing a defect image classification model, wherein the defect image classification model comprises a feature extraction model, a multi-scale feature fusion model and a classifier, the feature extraction model is a six-layer convolutional neural network model, the multi-scale feature fusion model is a feature pyramid model, and the classifier is a multi-layer fully-connected network model;
and training the defect image classification model by using the training samples to obtain the trained defect image classification model.
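To make the structure of claim 3 concrete, here is a sketch of a six-layer convolutional backbone, a feature-pyramid-style fusion model, and a fully-connected classifier in PyTorch. The channel widths, pooling choices, the exact top-down wiring, and the use of global average pooling are assumptions; the claim only fixes the three-part structure and the 6-map/5-matrix data flow.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SixLayerBackbone(nn.Module):
    """Six conv layers; returns all six intermediate feature maps."""
    def __init__(self, in_ch=3, widths=(16, 32, 64, 128, 256, 256)):
        super().__init__()
        self.blocks = nn.ModuleList()
        prev = in_ch
        for w in widths:
            self.blocks.append(nn.Sequential(
                nn.Conv2d(prev, w, kernel_size=3, padding=1),
                nn.BatchNorm2d(w),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            ))
            prev = w

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        return feats  # six feature maps at decreasing resolution


class FeaturePyramidFusion(nn.Module):
    """Fuses the five deepest maps top-down, FPN-style, into five outputs."""
    def __init__(self, widths=(32, 64, 128, 256, 256), out_ch=64):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(w, out_ch, 1) for w in widths)
        self.smooth = nn.ModuleList(nn.Conv2d(out_ch, out_ch, 3, padding=1)
                                    for _ in widths)

    def forward(self, feats):
        feats = feats[-5:]                            # five deepest of the six maps
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        for i in range(len(laterals) - 2, -1, -1):    # top-down pathway
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        return [s(l) for s, l in zip(self.smooth, laterals)]  # five feature matrices


class DefectClassifier(nn.Module):
    """Flattens the five fused maps and classifies defect vs. non-defect."""
    def __init__(self, out_ch=64, num_classes=2):
        super().__init__()
        self.backbone = SixLayerBackbone()
        self.fpn = FeaturePyramidFusion(out_ch=out_ch)
        self.head = nn.Sequential(
            nn.Linear(5 * out_ch, 128), nn.ReLU(inplace=True),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        fused = self.fpn(self.backbone(x))
        # Global average pooling turns each fused map into a one-dimensional vector.
        vecs = [f.mean(dim=(2, 3)) for f in fused]
        return self.head(torch.cat(vecs, dim=1))
```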
4. The method of claim 3, wherein the step of performing defect image classification on the image to be detected according to the pre-trained defect image classification model to obtain a classification result, the classification result being a defect class or a non-defect class, comprises:
inputting the image to be detected into the feature extraction model, and obtaining six feature maps after processing by the feature extraction model;
inputting the six feature maps into the multi-scale feature fusion model for feature processing, and obtaining five feature matrices;
and converting the five feature matrices into one-dimensional vectors, inputting the one-dimensional vectors into the classifier, and outputting the classification result.
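Using the hypothetical `DefectClassifier` sketch above, the data flow recited in claim 4 (six feature maps, five feature matrices, one-dimensional vectors, classification result) can be traced as follows; the shapes are illustrative only.

```python
import torch

model = DefectClassifier()                     # from the sketch above (assumed)
image = torch.randn(1, 3, 224, 224)            # a preprocessed image to be detected

feature_maps = model.backbone(image)           # six feature maps
fused = model.fpn(feature_maps)                # five feature matrices
vectors = [f.mean(dim=(2, 3)) for f in fused]  # one-dimensional vectors
result = model.head(torch.cat(vectors, dim=1)) # classification result
print(len(feature_maps), len(fused), result.shape)  # 6 5 torch.Size([1, 2])
```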
5. The method of claim 1, wherein the step of performing semantic segmentation prediction on the image to be detected by using the pre-trained image semantic segmentation model comprises:
constructing an image semantic segmentation model based on deep learning, wherein the image semantic segmentation model comprises at least two basic semantic segmentation submodels, an attention model and a fusion unit;
processing a plurality of training images in a training set by using the at least two basic semantic segmentation submodels to obtain at least two feature maps corresponding to each training image, wherein the training images are labeled with semantic segmentation information in advance, and the feature maps comprise semantic information;
processing the at least two feature maps corresponding to each training image together with the pre-labeled semantic segmentation information by using the attention model, to obtain an attention value of each feature map;
fusing at least two feature maps of the training image according to the attention value by using the fusion unit to obtain a predicted semantic segmentation result of the training image;
and performing iterative training on the at least two basic semantic segmentation submodels and the attention model according to the predicted semantic segmentation result and the pre-labeled semantic segmentation information to obtain the trained image semantic segmentation model.
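The sketch below illustrates one possible reading of the model in claim 5: at least two base segmentation submodels, an attention model that scores each submodel's output, and a fusion unit that combines the maps according to their attention values. The attention network here scores each prediction map directly rather than also consuming the labeled segmentation information, and the submodels are left abstract; these simplifications, together with the class `AttentionFusionSegmenter` and its parameters, are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusionSegmenter(nn.Module):
    def __init__(self, submodels, num_classes):
        super().__init__()
        # Base semantic segmentation submodels, e.g. two encoder-decoder networks
        # that each output a (B, num_classes, H, W) map.
        self.submodels = nn.ModuleList(submodels)
        # Attention model: maps each submodel's output to a scalar score.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(num_classes, 1))

    def forward(self, x):
        maps = [m(x) for m in self.submodels]                           # K maps
        scores = torch.stack([self.attention(f) for f in maps], dim=1)  # (B, K, 1)
        weights = F.softmax(scores, dim=1)[..., None, None]             # attention values
        stacked = torch.stack(maps, dim=1)                              # (B, K, C, H, W)
        return (weights * stacked).sum(dim=1)  # fusion unit: weighted sum -> (B, C, H, W)

# Training would compare the fused prediction against the pre-labeled segmentation
# (e.g. with a cross-entropy loss) and update the submodels and the attention
# model jointly, matching the iterative training described in the claim.
```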
6. The method of claim 5, wherein the step of processing the plurality of training images in the training set using the at least two basic semantic segmentation submodels is preceded by:
acquiring a plurality of training images, and performing image processing on each training image by using an image enhancement technique;
and labeling semantic segmentation information to each training image to obtain a training set.
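If the "image enhancement" in claim 6 is read as standard augmentation-style transforms applied to each training image, one torchvision example is sketched below; the specific transforms and their parameters are assumptions rather than choices stated in the patent.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])
# augmented = augment(pil_image)  # applied to each training image before labeling
```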
7. An express delivery outer package defect detection device based on deep learning, characterized in that the device comprises:
a to-be-detected image acquisition module, configured to acquire an image to be detected captured by a depth camera;
a detection module, configured to perform defect image classification on the image to be detected according to a pre-trained defect image classification model and obtain a classification result, wherein the classification result is a defect class or a non-defect class;
and a defect region positioning module, configured to, if a defect region exists in the image to be detected, perform semantic segmentation prediction on the image to be detected by using a pre-trained image semantic segmentation model, and determine the position and type of the defect from the predicted segmentation result.
8. The apparatus of claim 7, wherein the apparatus comprises:
a training sample acquisition module, configured to obtain training samples, wherein the training samples comprise positive training samples containing defects and negative training samples containing no defects;
a preprocessing module, configured to preprocess the training samples, wherein the preprocessing comprises at least one basic operation of cropping, scaling, normalization and standardization;
and a feature data retention module, configured to, after performing grayscale processing on the images in the training samples, retain the feature data of the three channels as the input of the defect image classification model.
9. An electronic device, comprising:
a memory for storing one or more programs;
a processor;
the one or more programs, when executed by the processor, implement the method of any of claims 1-6.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN202010780139.8A 2020-08-05 2020-08-05 Express delivery outer package defect detection method and device based on deep learning Pending CN111862092A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010780139.8A CN111862092A (en) 2020-08-05 2020-08-05 Express delivery outer package defect detection method and device based on deep learning

Publications (1)

Publication Number Publication Date
CN111862092A true CN111862092A (en) 2020-10-30

Family

ID=72971459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010780139.8A Pending CN111862092A (en) 2020-08-05 2020-08-05 Express delivery outer package defect detection method and device based on deep learning

Country Status (1)

Country Link
CN (1) CN111862092A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019233166A1 (en) * 2018-06-04 2019-12-12 杭州海康威视数字技术股份有限公司 Surface defect detection method and apparatus, and electronic device
CN111415027A (en) * 2019-01-08 2020-07-14 顺丰科技有限公司 Method and device for constructing component prediction model
CN110097544A (en) * 2019-04-25 2019-08-06 武汉精立电子技术有限公司 A kind of display panel open defect detection method
CN110473173A (en) * 2019-07-24 2019-11-19 熵智科技(深圳)有限公司 A kind of defect inspection method based on deep learning semantic segmentation
CN110889560A (en) * 2019-12-06 2020-03-17 西北工业大学 Express delivery sequence prediction method with deep interpretability

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Zhixin et al.: "Research on fast recognition methods for complex sorting images based on deep learning", Application of Electronic Technique (电子技术应用) *

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489037A (en) * 2020-12-15 2021-03-12 科大讯飞华南人工智能研究院(广州)有限公司 Defect detection and related model training method, electronic equipment and storage device
CN112614101A (en) * 2020-12-17 2021-04-06 广东道氏技术股份有限公司 Polished tile flaw detection method based on multilayer feature extraction and related equipment
CN112614101B (en) * 2020-12-17 2024-02-20 广东道氏技术股份有限公司 Polished tile flaw detection method based on multilayer feature extraction and related equipment
CN112836724A (en) * 2021-01-08 2021-05-25 重庆创通联智物联网有限公司 Object defect recognition model training method and device, electronic equipment and storage medium
CN113205110A (en) * 2021-03-19 2021-08-03 哈工大机器人(中山)无人装备与人工智能研究院 Panel defect classification model establishing method and panel defect classification method
CN113205110B (en) * 2021-03-19 2024-03-19 哈工大机器人(中山)无人装备与人工智能研究院 Method for establishing panel defect classification model and panel defect classification method
CN113192021A (en) * 2021-04-26 2021-07-30 深圳中科飞测科技股份有限公司 Detection method and device, detection equipment and storage medium
CN113450311A (en) * 2021-06-01 2021-09-28 国网河南省电力公司漯河供电公司 Pin screw defect detection method and system based on semantic segmentation and spatial relationship
CN114398818A (en) * 2021-06-02 2022-04-26 江苏盛邦纺织品有限公司 Textile jacquard detection method and system based on deep learning
CN114398818B (en) * 2021-06-02 2024-05-24 中科维卡(苏州)自动化科技有限公司 Textile jacquard detection method and system based on deep learning
CN113344888A (en) * 2021-06-17 2021-09-03 四川启睿克科技有限公司 Surface defect detection method and device based on combined model
CN113420664A (en) * 2021-06-23 2021-09-21 国网电子商务有限公司 Image-based potential safety hazard detection method, device, equipment and storage medium
CN113592832A (en) * 2021-08-05 2021-11-02 深圳职业技术学院 Industrial product defect detection method and device
CN113781430A (en) * 2021-09-09 2021-12-10 北京云屿科技有限公司 Glove surface defect detection method and system based on deep learning
CN113781430B (en) * 2021-09-09 2023-08-25 北京云屿科技有限公司 Glove surface defect detection method and system based on deep learning
CN113822869B (en) * 2021-09-27 2024-02-27 望知科技(深圳)有限公司 Machine vision-based transparent soft packaging bag static detection method and system
CN113822869A (en) * 2021-09-27 2021-12-21 望知科技(深圳)有限公司 Transparent soft packaging bag static detection method and system based on machine vision
CN114066810A (en) * 2021-10-11 2022-02-18 安庆师范大学 Method and device for detecting concave-convex point defects of packaging box
CN114240926A (en) * 2021-12-28 2022-03-25 湖南云箭智能科技有限公司 Board card defect type identification method, device and equipment and readable storage medium
CN114240928B (en) * 2021-12-29 2024-03-01 湖南云箭智能科技有限公司 Partition detection method, device and equipment for board quality and readable storage medium
CN114240928A (en) * 2021-12-29 2022-03-25 湖南云箭智能科技有限公司 Board quality partition detection method, device and equipment and readable storage medium
CN114647762A (en) * 2022-03-23 2022-06-21 中国水利水电科学研究院 Dam detection method based on map comparison
CN115018154A (en) * 2022-06-06 2022-09-06 邢台路桥建设总公司 Loess collapsibility prediction method
CN114842275A (en) * 2022-07-06 2022-08-02 成都数之联科技股份有限公司 Circuit board defect judging method, training method, device, equipment and storage medium
CN114842275B (en) * 2022-07-06 2023-04-07 成都数之联科技股份有限公司 Circuit board defect judging method, training method, device, equipment and storage medium
CN115170804A (en) * 2022-07-26 2022-10-11 无锡九霄科技有限公司 Surface defect detection method, device, system and medium based on deep learning
CN115170804B (en) * 2022-07-26 2024-01-26 无锡九霄科技有限公司 Surface defect detection method, device, system and medium based on deep learning
CN115358981A (en) * 2022-08-16 2022-11-18 腾讯科技(深圳)有限公司 Glue defect determining method, device, equipment and storage medium
CN115496749A (en) * 2022-11-14 2022-12-20 江苏智云天工科技有限公司 Product defect detection method and system based on target detection training preprocessing
CN115496749B (en) * 2022-11-14 2023-01-31 江苏智云天工科技有限公司 Product defect detection method and system based on target detection training preprocessing
CN116129221B (en) * 2023-01-16 2024-02-20 五邑大学 Lithium battery defect detection method, system and storage medium
CN116129221A (en) * 2023-01-16 2023-05-16 五邑大学 Lithium battery defect detection method, system and storage medium
CN116206111A (en) * 2023-03-07 2023-06-02 广州市易鸿智能装备有限公司 Defect identification method and device, electronic equipment and storage medium
CN116206111B (en) * 2023-03-07 2024-02-02 广州市易鸿智能装备有限公司 Defect identification method and device, electronic equipment and storage medium
CN116245846A (en) * 2023-03-08 2023-06-09 华院计算技术(上海)股份有限公司 Defect detection method and device for strip steel, storage medium and computing equipment
CN116245846B (en) * 2023-03-08 2023-11-21 华院计算技术(上海)股份有限公司 Defect detection method and device for strip steel, storage medium and computing equipment
CN116843625A (en) * 2023-06-05 2023-10-03 广东粤桨产业科技有限公司 Defect detection model deployment method, system and equipment for industrial quality inspection scene
CN117408967A (en) * 2023-10-24 2024-01-16 欧派家居集团股份有限公司 Board defect detection method and system based on 3D visual recognition
CN117408967B (en) * 2023-10-24 2024-03-19 欧派家居集团股份有限公司 Board defect detection method and system based on 3D visual recognition

Similar Documents

Publication Publication Date Title
CN111862092A (en) Express delivery outer package defect detection method and device based on deep learning
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
CN111179251B (en) Defect detection system and method based on twin neural network and by utilizing template comparison
CN109190752B (en) Image semantic segmentation method based on global features and local features of deep learning
CN109840556B (en) Image classification and identification method based on twin network
CN111257341B (en) Underwater building crack detection method based on multi-scale features and stacked full convolution network
WO2022033095A1 (en) Text region positioning method and apparatus
CN111611874B (en) Face mask wearing detection method based on ResNet and Canny
WO2023193401A1 (en) Point cloud detection model training method and apparatus, electronic device, and storage medium
CN114969405A (en) Cross-modal image-text mutual inspection method
CN113378976B (en) Target detection method based on characteristic vertex combination and readable storage medium
CN117011274A (en) Automatic glass bottle detection system and method thereof
CN117392042A (en) Defect detection method, defect detection apparatus, and storage medium
CN116128839A (en) Wafer defect identification method, device, electronic equipment and storage medium
CN111914902A (en) Traditional Chinese medicine identification and surface defect detection method based on deep neural network
CN115984662A (en) Multi-mode data pre-training and recognition method, device, equipment and medium
CN114861842A (en) Few-sample target detection method and device and electronic equipment
CN114972268A (en) Defect image generation method and device, electronic equipment and storage medium
CN117636045A (en) Wood defect detection system based on image processing
CN117274689A (en) Detection method and system for detecting defects of packaging box
CN117036243A (en) Method, device, equipment and storage medium for detecting surface defects of shaving board
CN116188361A (en) Deep learning-based aluminum profile surface defect classification method and device
CN115830385A (en) Image detection method and device, electronic equipment and computer readable storage medium
CN117011219A (en) Method, apparatus, device, storage medium and program product for detecting quality of article
CN115358981A (en) Glue defect determining method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination