CN117274689A - Detection method and system for detecting defects of packaging box - Google Patents


Info

Publication number
CN117274689A
Authority
CN
China
Prior art keywords: feature, feature map, neural network, map, convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311216001.5A
Other languages
Chinese (zh)
Inventor
杭守冬
仝丽霞
吴帆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Yonghua Packaging Co ltd
Original Assignee
Anhui Yonghua Packaging Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Yonghua Packaging Co ltd filed Critical Anhui Yonghua Packaging Co ltd
Priority to CN202311216001.5A priority Critical patent/CN117274689A/en
Publication of CN117274689A publication Critical patent/CN117274689A/en
Pending legal-status Critical Current

Classifications

    • G06V 10/764: Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/454: Local feature extraction; filters integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 10/82: Image or video recognition using neural networks
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/08: Neural networks; learning methods
    • G01N 21/8851: Scan or image signal processing for detecting flaws or contamination
    • G01N 21/90: Investigating the presence of flaws or contamination in a container or its contents
    • G01N 2021/8854: Grading and classifying of flaws
    • G01N 2021/8887: Flaw detection based on image processing techniques
    • G01N 2201/1296: Signal processing using chemometrical methods; neural networks
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Pathology (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The application relates to the field of intelligent detection, and particularly discloses a detection method and system for detecting defects of a packaging box. Based on a machine-vision artificial-intelligence detection technique, multi-scale features of the packaging box image are extracted and fused to judge whether a defect exists. This effectively addresses the low efficiency of manual inspection while improving detection accuracy.

Description

Detection method and system for detecting defects of packaging box
Technical Field
The present application relates to the field of intelligent detection, and more particularly, to a detection method for detecting defects of a packaging box and a system thereof.
Background
A packaging box is a box used to package articles; it protects the goods and makes them convenient to transport and store. During manufacturing, manufacturers inspect each batch of packaging boxes for defects to ensure that the products meet quality requirements, including checks of material, size, and appearance. Appearance inspection is usually performed manually to check whether a box is damaged, deformed, scratched, stained, and so on, but such manual inspection is inefficient.
Therefore, an optimized detection scheme for packaging box defect inspection is needed.
Disclosure of Invention
The present application has been made to solve the above technical problem. Embodiments of the application provide a detection method and system for detecting packaging box defects. Based on a machine-vision artificial-intelligence detection technique, multi-scale features of the packaging box image are extracted and fused to judge whether a defect exists. This effectively addresses the low efficiency of manual inspection while improving detection accuracy.
According to one aspect of the present application, there is provided a detection method for detecting defects of a package, comprising:
acquiring an image of a packaging box whose defects are to be detected;
passing the packaging box image through a first convolutional neural network model using a spatial attention mechanism to obtain an image feature matrix;
passing the image feature matrix through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, wherein the second convolutional neural network uses a three-dimensional convolution kernel with a first scale, and the third convolutional neural network uses a three-dimensional convolution kernel with a second scale;
performing geometric complexity constraint based on feature manifold on the first feature map and the second feature map to obtain a fused feature map; and
passing the fused feature map through a classifier to obtain a classification result, wherein the classification result indicates whether the packaging box has defects.
In the above detection method for detecting packaging box defects, passing the packaging box image through the first convolutional neural network model using a spatial attention mechanism to obtain the image feature matrix includes performing, in the forward pass of each layer of the model on the input data: convolving the input data to obtain a convolution feature map; pooling the convolution feature map to obtain a pooled feature map; applying a nonlinear activation to the pooled feature map to obtain an activation feature map; passing the activation feature map through a spatial attention module to obtain a spatial attention score map; and multiplying the spatial attention score map with the activation feature map position-wise to obtain a spatial attention feature map. The input of the first layer of the first convolutional neural network model is the packaging box image, and the output of the last layer is the image feature matrix.
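As a rough illustration of the spatial attention step described above, the following NumPy sketch derives a score map from channel-wise average and max pooling (a common CBAM-style choice; the patent does not specify how the score map is computed) and multiplies it with the activation feature map position by position:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat):
    """Apply a simple spatial attention step to a (C, H, W) feature map.

    The score map here is derived from channel-wise average and max
    pooling (an assumption borrowed from CBAM-style modules), squashed
    with a sigmoid, and multiplied with the input position-wise.
    """
    avg_pool = feat.mean(axis=0, keepdims=True)  # (1, H, W)
    max_pool = feat.max(axis=0, keepdims=True)   # (1, H, W)
    score = sigmoid(avg_pool + max_pool)         # spatial attention score map
    return feat * score                          # position-wise multiplication

feat = np.random.rand(8, 16, 16)  # toy activation feature map
out = spatial_attention(feat)
```

Since the score map lies in (0, 1), the multiplication suppresses less relevant positions rather than amplifying them.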
In the above detection method for detecting packaging box defects, passing the image feature matrix through the dual-flow network model comprising the second convolutional neural network and the third convolutional neural network to obtain the first feature map and the second feature map includes: using the second convolutional neural network with the three-dimensional convolution kernel of the first scale to perform, in the forward pass of each layer, three-dimensional convolution, mean pooling, and nonlinear activation on the input data to obtain the first feature map; and using the third convolutional neural network with the three-dimensional convolution kernel of the second scale to perform, in the forward pass of each layer, three-dimensional convolution, mean pooling, and nonlinear activation on the input data to obtain the second feature map.
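The two-scale idea can be illustrated in two dimensions (the patent uses three-dimensional kernels; the 3×3 and 7×7 kernel sizes below are assumptions chosen only for illustration):

```python
import numpy as np

def conv2d_valid(img, k):
    """Naive 'valid' 2-D convolution (illustration only, not optimized)."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

img = np.random.rand(32, 32)
small_k = np.ones((3, 3)) / 9.0    # first (small) scale: fine details, tiny defects
large_k = np.ones((7, 7)) / 49.0   # second (large) scale: coarse structure, texture
feat_small = conv2d_valid(img, small_k)  # analogue of the first feature map
feat_large = conv2d_valid(img, large_k)  # analogue of the second feature map
```

The small kernel responds to local detail while the large kernel averages over a wider neighborhood, which is the motivation for running both streams in parallel.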
In the above detection method for detecting packaging box defects, performing the feature-manifold-based geometric complexity constraint on the first feature map and the second feature map to obtain the fused feature map includes: passing the first feature map and the second feature map through a fully-connected-layer-based full-perception module to obtain a first full-perception feature vector and a second full-perception feature vector, respectively; performing association coding on the first full-perception feature vector and the second full-perception feature vector to obtain a fused full-perception feature matrix; calculating the transfer matrix of each feature matrix of the first feature map along the channel dimension relative to the fused full-perception feature matrix, and taking the global mean of each transfer matrix to obtain a plurality of first transfer feature values; calculating the transfer matrix of each feature matrix of the second feature map along the channel dimension relative to the fused full-perception feature matrix, and taking the global mean of each transfer matrix to obtain a plurality of second transfer feature values; performing maximum-based normalization on the first transfer feature values and the second transfer feature values to obtain a first and a second feature-manifold geometric complexity constraint feature vector; weighting each feature matrix of the first feature map along the channel dimension and each feature matrix of the second feature map along the channel dimension, using the feature values at each position of the first and second feature-manifold geometric complexity constraint feature vectors as weights, to obtain an optimized first feature map and an optimized second feature map; and aggregating the optimized first feature map and the optimized second feature map along the channel dimension to obtain the fused feature map.
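The transfer-value, normalization, weighting, and aggregation steps can be sketched as follows. The patent does not define the transfer matrix precisely; this sketch reads it as T = pinv(F_c) @ P, i.e. the matrix mapping a channel matrix F_c onto the pivot P, which is one plausible interpretation:

```python
import numpy as np

def manifold_constrained_fusion(f1, f2, pivot):
    """Hedged sketch of the constraint-and-fuse procedure.

    Each (H, W) channel matrix is weighted by the global mean of its
    transfer matrix w.r.t. a fused 'pivot' matrix, the weights are
    normalized by their maximum, and the two re-weighted maps are
    concatenated along the channel dimension.
    """
    def transfer_values(fmap):
        vals = []
        for ch in fmap:                     # each (H, W) channel matrix
            T = np.linalg.pinv(ch) @ pivot  # assumed transfer matrix w.r.t. pivot
            vals.append(T.mean())           # global mean -> one scalar per channel
        return np.array(vals)

    w1 = transfer_values(f1)
    w2 = transfer_values(f2)
    m = max(np.abs(w1).max(), np.abs(w2).max())  # maximum-based normalization
    w1, w2 = w1 / m, w2 / m
    f1_opt = f1 * w1[:, None, None]              # channel-wise re-weighting
    f2_opt = f2 * w2[:, None, None]
    return np.concatenate([f1_opt, f2_opt], axis=0)  # aggregate along channels

f1 = np.random.rand(4, 8, 8)
f2 = np.random.rand(4, 8, 8)
pivot = np.random.rand(8, 8)  # stands in for the fused full-perception matrix
fused = manifold_constrained_fusion(f1, f2, pivot)
```

The full-perception module and association coding that produce the pivot matrix are omitted here; a random matrix stands in for it.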
In the above detection method for detecting packaging box defects, passing the fused feature map through the classifier to obtain the classification result, where the classification result indicates whether the packaging box has defects, includes: processing the fused feature map with the classifier according to the following classification formula to generate the classification result:
O = softmax{(W_n, B_n) : … : (W_1, B_1) | Project(F)}
where O is the classification result, Project(F) denotes projecting the fused feature map F into a vector, W_1 to W_n are the weight matrices of the fully connected layers, B_1 to B_n are the bias vectors of the fully connected layers, and softmax is the normalized exponential function.
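A minimal NumPy reading of the formula: Project() flattens the fused feature map, the stacked fully connected layers (W_i, B_i) are applied in turn, and softmax normalizes the output. The layer sizes and the two-class setup below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def classify(fused_map, layers):
    """Project() flattens the fused feature map to a vector, then the
    stacked fully connected layers (W_i, B_i) and a final softmax apply."""
    x = fused_map.reshape(-1)  # Project(): feature map -> vector
    for W, B in layers:
        x = W @ x + B
    return softmax(x)

rng = np.random.default_rng(0)
fused = rng.random((2, 4, 4))  # toy fused feature map (32 values)
layers = [(rng.random((8, 32)), rng.random(8)),
          (rng.random((2, 8)), rng.random(2))]  # two classes: defect / no defect
probs = classify(fused, layers)
```

The argmax of `probs` then gives the defect / no-defect decision.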
According to another aspect of the present application, there is provided a detection system for detecting defects of a package, comprising:
the image acquisition module is used for acquiring an image of the packaging box with the defect to be detected;
the spatial attention coding module is used for passing the packaging box image through a first convolutional neural network model using a spatial attention mechanism to obtain an image feature matrix;
the multi-scale associated feature extraction module is used for passing the image feature matrix through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, wherein the second convolutional neural network uses a three-dimensional convolution kernel with a first scale, and the third convolutional neural network uses a three-dimensional convolution kernel with a second scale;
the feature fusion module is used for performing geometric complexity constraint based on feature manifold on the first feature map and the second feature map to obtain a fused feature map; and
the detection result generation module is used for passing the fused feature map through a classifier to obtain a classification result, wherein the classification result indicates whether the packaging box has defects.
Compared with the prior art, the detection method and system for detecting packaging box defects provided by the application are based on a machine-vision artificial-intelligence detection technique and judge whether a defect exists by extracting and fusing multi-scale features of the packaging box image. This effectively addresses the low efficiency of manual inspection while improving detection accuracy.
Drawings
The foregoing and other objects, features, and advantages of the present application will become more apparent from the following detailed description of its embodiments, as illustrated in the accompanying drawings. The drawings are included to provide a further understanding of the embodiments, are incorporated in and constitute a part of this specification, and illustrate the application without limiting it. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a flowchart of a detection method for package defect detection according to an embodiment of the present application.
Fig. 2 is a block diagram of a detection method for detecting defects of a package according to an embodiment of the present application.
Fig. 3 is a flowchart of passing the image feature matrix through the dual-flow network model comprising the second and third convolutional neural networks, which use three-dimensional convolution kernels of a first and a second scale respectively, to obtain the first and second feature maps, in the detection method according to an embodiment of the present application.
Fig. 4 is a flowchart of performing geometric complexity constraint based on feature manifold on the first feature map and the second feature map to obtain a fused feature map in a detection method for detecting a package box defect according to an embodiment of the present application.
Fig. 5 is a system block diagram of a detection system for package defect detection according to an embodiment of the present application.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application and not all of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Summary of the application
As described in the background above, a packaging box protects articles and makes them convenient to transport and store. During manufacturing, manufacturers inspect each batch of packaging boxes for defects to ensure that the products meet quality requirements, including checks of material, size, and appearance. Appearance inspection is usually performed manually, checking for breakage, deformation, scratches, stains, and the like, but this is inefficient. Therefore, an optimized packaging box defect detection scheme is desired.
In recent years, deep learning and neural networks have been widely used in the fields of computer vision, natural language processing, text signal processing, and the like. In addition, deep learning and neural networks have also shown levels approaching and even exceeding humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like. The development of deep learning and neural networks provides a new solution idea and scheme for detecting defects of packaging boxes.
Specifically, in the technical scheme of the application, firstly, a packaging box image of a defect to be detected is obtained. This step is to obtain relevant data of the package, and provide input data for the subsequent steps, thereby performing defect detection of the package.
The packaging box image is then passed through a first convolutional neural network model using a spatial attention mechanism to obtain an image feature matrix. A spatial attention mechanism is a technique that helps a model focus on the most important regions of an image. A packaging box image may contain a large amount of irrelevant information or redundant regions; with spatial attention, the model concentrates on the regions that matter most for defect detection, improving detection accuracy and efficiency. In particular, the first convolutional neural network model weights the image with spatial attention before extracting image features, which focuses the model on the regions most important for defect detection and reduces interference from irrelevant information. In this way, the model's sensitivity to packaging box defects increases, making defective regions easier to detect. In summary, the first convolutional neural network model with a spatial attention mechanism extracts the most important features in the image, improving the accuracy and efficiency of packaging box defect detection.
The image feature matrix is then passed through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map. Notably, the second convolutional neural network uses a three-dimensional convolution kernel of a first scale, and the third convolutional neural network uses a three-dimensional convolution kernel of a second scale. Defects of different sizes require features of different scales to represent them. The second convolutional neural network, with its smaller first-scale kernel, captures fine-scale features: in packaging box defect detection, tiny defects may need a small convolution kernel to be captured. The third convolutional neural network, with its larger second-scale kernel, captures coarse-scale features: larger defects or overall texture may need a larger convolution kernel.
Further, the feature-manifold-based geometric complexity constraint is performed on the first feature map and the second feature map to obtain a fused feature map. Fusing the first and second feature maps combines feature information at different scales into a more comprehensive and richer representation, so that the fused feature map better captures defect features at different scales, improving detection accuracy and robustness. Finally, the fused feature map is passed through a classifier to obtain a classification result indicating whether the packaging box has defects. In this way, the accuracy and efficiency of packaging box defect detection can be improved.
In particular, it is considered that in actual image data, there may be some noise or outliers, which may interfere with the structure of the feature manifold. While the first and second feature maps typically contain a large amount of redundant information that may not significantly contribute to the task of detecting a defect, some key feature points in the first and second feature maps may be very important for the detection of a defect. Therefore, the geometric complexity constraint of using the sparse global feature manifold expression as a pivot is carried out on the high-dimensional feature manifolds of the first feature map and the second feature map, so that the redundancy of the features can be reduced, the key features are strengthened, the influence of noise is reduced, and the quality of the fused feature map and the defect detection performance are improved. This helps to enhance the characteristics of the defect, improve the differentiation of the defect, and reduce the redundancy of the characteristics.
Specifically, performing the feature-manifold-based geometric complexity constraint on the first feature map and the second feature map to obtain the fused feature map includes: S1: passing the first feature map and the second feature map through a fully-connected-layer-based full-perception module to obtain a first full-perception feature vector and a second full-perception feature vector, respectively; S2: performing association coding on the first full-perception feature vector and the second full-perception feature vector to obtain a fused full-perception feature matrix; S3: calculating the transfer matrix of each feature matrix of the first feature map along the channel dimension relative to the fused full-perception feature matrix, and taking the global mean of each transfer matrix to obtain a plurality of first transfer feature values; S4: calculating the transfer matrix of each feature matrix of the second feature map along the channel dimension relative to the fused full-perception feature matrix, and taking the global mean of each transfer matrix to obtain a plurality of second transfer feature values; S5: performing maximum-based normalization on the first transfer feature values and the second transfer feature values to obtain a first and a second feature-manifold geometric complexity constraint feature vector; S6: weighting each feature matrix of the first feature map along the channel dimension and each feature matrix of the second feature map along the channel dimension, using the feature values at each position of the first and second feature-manifold geometric complexity constraint feature vectors as weights, to obtain an optimized first feature map and an optimized second feature map; and S7: aggregating the optimized first feature map and the optimized second feature map along the channel dimension to obtain the fused feature map.
In the technical scheme of the application, element-by-element global perception is first performed on the high-dimensional feature manifolds of the first feature map and the second feature map through the fully-connected-layer-based full-perception module to obtain the first and second full-perception feature vectors, and the fused full-perception feature matrix between them serves as the sparse global feature manifold expression of the two maps. With the fused full-perception feature matrix as a pivot, the global mean of the transfer matrix of each channel-wise feature matrix of the first and second feature maps relative to the fused full-perception feature matrix serves as a quantitative measure of the feature-manifold geometric complexity of that feature matrix relative to the sparse global feature manifold expression. These quantitative measures are then used as weights to modulate the high-dimensional feature manifolds of the first and second feature maps, and feature aggregation along the channel dimension yields the fused feature map.
In this way, by applying the geometric complexity constraint, with the sparse global feature manifold expression as the pivot, to the high-dimensional feature manifolds of the first feature map and the second feature map, the manifold features of each channel-wise feature matrix of the two maps carry feature-manifold complexity information. The fused feature map obtained through fusion therefore has well-defined class boundaries, and the classification model becomes more robust to changes in the input data, adapting and classifying correctly even when facing noise, outliers, or unseen samples.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Exemplary method
Fig. 1 is a flowchart of a detection method for package defect detection according to an embodiment of the present application. As shown in fig. 1, the detection method for detecting defects of a package according to an embodiment of the present application includes: S110, acquiring a packaging box image of a defect to be detected; S120, passing the packaging box image through a first convolutional neural network model using a spatial attention mechanism to obtain an image feature matrix; S130, passing the image feature matrix through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, wherein the second convolutional neural network uses a three-dimensional convolution kernel with a first scale and the third convolutional neural network uses a three-dimensional convolution kernel with a second scale; S140, performing geometric complexity constraint based on feature manifold on the first feature map and the second feature map to obtain a fusion feature map; and S150, passing the fusion feature map through a classifier to obtain a classification result, wherein the classification result is used to indicate whether the packing box has defects.
Fig. 2 is a block diagram of a detection method for detecting defects of a package according to an embodiment of the present application. In this architecture, as shown in fig. 2, first, a package image of a defect to be detected is acquired. The package image is then passed through a first convolutional neural network model using a spatial attention mechanism to obtain an image feature matrix. Then, the image feature matrix is passed through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, wherein the second convolutional neural network uses a three-dimensional convolutional kernel with a first scale, and the third convolutional neural network uses a three-dimensional convolutional kernel with a second scale. Further, feature manifold-based geometric complexity constraints are performed on the first feature map and the second feature map to obtain a fused feature map. And finally, the fusion feature map passes through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the packing box has defects or not.
In step S110, a package image of a defect to be detected is acquired. It will be appreciated that this step obtains the relevant data of the package and provides the input for the subsequent defect detection steps. The image of the packaging box to be detected is captured by a camera.
In step S120, the package image is passed through a first convolutional neural network model using a spatial attention mechanism to obtain an image feature matrix. It should be appreciated that a spatial attention mechanism is a technique that helps the model focus on the most important areas of an image. The package image may contain a large amount of irrelevant information or redundant areas; by using a spatial attention mechanism, the model concentrates on the areas most important for defect detection, thereby improving the accuracy and efficiency of detection. In particular, with the first convolutional neural network model using the spatial attention mechanism, the image is weighted with spatial attention before image features are extracted, which makes the model more focused on the areas most important for defect detection and reduces interference from irrelevant information. In this way, the sensitivity of the model to package defects increases, making defective areas easier to detect. In summary, by using the first convolutional neural network model with the spatial attention mechanism, the most important features in the image can be extracted, improving the accuracy and efficiency of package defect detection.
Specifically, passing the package image through the first convolutional neural network model using the spatial attention mechanism to obtain the image feature matrix includes: processing the input data in the forward pass of each layer of the first convolutional neural network model as follows: performing convolution processing on the input data to obtain a convolution feature map; pooling the convolution feature map to obtain a pooled feature map; performing nonlinear activation on the pooled feature map to obtain an activation feature map; passing the activation feature map through a spatial attention module to obtain a spatial attention score map; and multiplying the spatial attention score map and the activation feature map by position points to obtain a spatial attention feature map; wherein the input of the first layer of the first convolutional neural network model is the packaging box image, and the output of the last layer is the image feature matrix.
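The per-layer processing just described can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the 2×2 mean pooling, ReLU activation, and the sigmoid-over-channel-mean score map are all simplifying assumptions standing in for the unspecified pooling, activation, and spatial attention module.

```python
import numpy as np

def spatial_attention_layer(x, kernel):
    """One illustrative layer: convolution -> pooling -> activation ->
    spatial attention score map -> position-wise multiplication.

    x: input feature map of shape (C, H, W); kernel: conv weights of
    shape (C_out, C, k, k). All design choices here are assumptions."""
    c_out, c_in, k, _ = kernel.shape
    C, H, W = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    # convolution processing -> convolution feature map
    conv = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                conv[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * kernel[o])
    # 2x2 mean pooling -> pooled feature map
    Hp, Wp = H // 2, W // 2
    pooled = conv[:, :Hp * 2, :Wp * 2].reshape(c_out, Hp, 2, Wp, 2).mean(axis=(2, 4))
    # nonlinear activation (ReLU assumed) -> activation feature map
    act = np.maximum(pooled, 0.0)
    # spatial attention score map: sigmoid over the channel-wise mean (assumed form)
    score = 1.0 / (1.0 + np.exp(-act.mean(axis=0)))
    # multiply score map and activation map by position points
    return act * score[None, :, :]
```

Stacking several such layers, with the package image as the first input, would yield the image feature matrix.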
In step S130, the image feature matrix is passed through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, wherein the second convolutional neural network uses a three-dimensional convolution kernel with a first scale, and the third convolutional neural network uses a three-dimensional convolution kernel with a second scale. It should be appreciated that defects of different sizes require features of different scales to represent them. The second convolutional neural network uses a three-dimensional convolution kernel with a first scale, which aims to capture smaller-scale features; in package defect detection, some tiny defects may need a smaller convolution kernel to capture. The third convolutional neural network uses a three-dimensional convolution kernel with a second scale, which aims to capture larger-scale features; some larger defects or overall texture features may require a larger convolution kernel to capture.
Fig. 3 is a flowchart of passing the image feature matrix through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map in the detection method for package defect detection according to an embodiment of the present application, where the second convolutional neural network uses a three-dimensional convolution kernel with a first scale and the third convolutional neural network uses a three-dimensional convolution kernel with a second scale. As shown in fig. 3, this step includes: S131, in the forward pass of its layers, the second convolutional neural network performs three-dimensional convolution processing based on the three-dimensional convolution kernel with the first scale, mean pooling processing, and nonlinear activation processing on the input data to obtain the first feature map; and S132, in the forward pass of its layers, the third convolutional neural network performs three-dimensional convolution processing based on the three-dimensional convolution kernel with the second scale, mean pooling processing, and nonlinear activation processing on the input data to obtain the second feature map.
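The two branches S131 and S132 can be sketched in NumPy as below. This is an illustrative reduction under stated assumptions: a single-channel input volume, a fixed averaging kernel in place of learned weights, kernel sizes 3 and 5 standing in for the unspecified first and second scales, and 2× mean pooling with ReLU activation.

```python
import numpy as np

def conv3d_branch(x, k):
    """One branch of the dual-flow model: 3-D convolution with a k*k*k kernel
    ('same' zero padding), then 2x mean pooling along H and W, then ReLU.
    The single channel and the averaging kernel are simplifying assumptions."""
    D, H, W = x.shape
    p = k // 2
    xp = np.pad(x, p)
    kernel = np.full((k, k, k), 1.0 / k**3)  # illustrative fixed 3-D kernel
    out = np.zeros_like(x)
    for d in range(D):
        for i in range(H):
            for j in range(W):
                out[d, i, j] = np.sum(xp[d:d + k, i:i + k, j:j + k] * kernel)
    # mean pooling processing, then nonlinear activation processing
    out = out[:, :H // 2 * 2, :W // 2 * 2].reshape(D, H // 2, 2, W // 2, 2).mean(axis=(2, 4))
    return np.maximum(out, 0.0)

def dual_flow(x, k1=3, k2=5):
    """Two branches at different kernel scales yield the first and second feature maps."""
    return conv3d_branch(x, k1), conv3d_branch(x, k2)
```

Because the two branches smooth the input over neighborhoods of different size, the first feature map responds to fine detail while the second captures coarser structure, which is the multi-scale intent of the dual-flow design.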
In particular, it is considered that actual image data may contain noise or outliers that interfere with the structure of the feature manifold. The first feature map and the second feature map also typically contain a large amount of redundant information that may contribute little to the defect detection task, while some key feature points in the two maps may be very important for detecting defects. Therefore, imposing on the high-dimensional feature manifolds of the first and second feature maps a geometric complexity constraint that uses the sparse global feature manifold expression as a pivot reduces feature redundancy, strengthens key features, and suppresses the influence of noise, thereby improving the quality of the fused feature map and the defect detection performance. This helps to enhance defect characteristics, improve the distinguishability of defects, and reduce feature redundancy.
In step S140, geometric complexity constraints based on feature manifolds are performed on the first feature map and the second feature map to obtain a fused feature map. It should be understood that the feature information of different scales can be combined by fusing the first feature map and the second feature map to form a more comprehensive and rich feature representation, so that the obtained fused feature map can better capture defect features of different scales, and the accuracy and the robustness of detection are improved.
Fig. 4 is a flowchart of performing geometric complexity constraint based on feature manifold on the first feature map and the second feature map to obtain a fused feature map in a detection method for detecting a package box defect according to an embodiment of the present application. As shown in fig. 4, the performing geometric complexity constraint based on feature manifold on the first feature map and the second feature map to obtain a fused feature map includes: s141, the first feature map and the second feature map are respectively passed through a full-perception module based on a full-connection layer to obtain a first full-perception feature vector and a second full-perception feature vector; s142, performing association coding on the first full-perception feature vector and the second full-perception feature vector to obtain a fused full-perception feature matrix; s143, calculating transfer matrixes of each feature matrix of the first feature map along the channel dimension relative to the fused full-perception feature matrix, and calculating global average values of the transfer matrixes to obtain a plurality of first transfer feature values; s144, calculating transfer matrixes of each feature matrix of the second feature map along the channel dimension relative to the fused full-perception feature matrix, and calculating global average values of the transfer matrixes to obtain a plurality of second transfer feature values; s145, carrying out maximum value-based normalization processing on the first transfer characteristic values and the second transfer characteristic values to obtain a first characteristic manifold geometric complexity constraint characteristic vector and a second characteristic manifold geometric complexity constraint characteristic vector; s146, weighting each feature matrix of the first feature map along the channel dimension and each feature matrix of the second feature map along the channel dimension by taking the feature values 
of each position in the first feature manifold geometric complexity constraint feature vector and the second feature manifold geometric complexity constraint feature vector as weights to obtain an optimized first feature map and an optimized second feature map; and S147, aggregating the optimized first feature map and the optimized second feature map along a channel dimension to obtain the fusion feature map.
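Steps S141 through S147 can be sketched end to end as follows. This is a hedged NumPy reading of the text, with several assumptions made explicit: the full-perception module is reduced to one random fully-connected layer with tanh, "association coding" is taken as an outer product, and the transfer matrix T of a channel slice A relative to the fused matrix M is taken to solve T·M ≈ A via the Moore-Penrose pseudo-inverse; the patent does not pin down these forms.

```python
import numpy as np

def manifold_complexity_fusion(f1, f2, w_fc1, w_fc2):
    """Illustrative sketch of S141-S147.

    f1, f2: feature maps of shape (C, H, W).
    w_fc1, w_fc2: assumed fully-connected weights of shape (W, C*H*W),
    chosen so the transfer matrices are well-shaped."""
    C, H, W = f1.shape
    # S141: full-perception feature vectors (single FC layer assumed)
    v1 = np.tanh(w_fc1 @ f1.reshape(-1))
    v2 = np.tanh(w_fc2 @ f2.reshape(-1))
    # S142: association coding -> fused full-perception feature matrix (outer product assumed)
    M = np.outer(v1, v2)
    M_pinv = np.linalg.pinv(M)
    # S143/S144: transfer matrix of each channel slice w.r.t. M, then its global mean
    t1 = np.array([(f1[c] @ M_pinv).mean() for c in range(C)])
    t2 = np.array([(f2[c] @ M_pinv).mean() for c in range(C)])
    # S145: maximum-based normalization -> complexity constraint vectors
    m = max(np.abs(t1).max(), np.abs(t2).max())
    w1, w2 = t1 / m, t2 / m
    # S146: weight each channel-dimension feature matrix by its complexity value
    o1 = f1 * w1[:, None, None]
    o2 = f2 * w2[:, None, None]
    # S147: aggregate the optimized maps along the channel dimension
    return np.concatenate([o1, o2], axis=0)  # shape (2C, H, W)
```

Whether the maximum in S145 is taken jointly over both maps, as here, or separately per map is another point the text leaves open.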
In step S150, the fused feature map is passed through a classifier to obtain a classification result, where the classification result is used to indicate whether the package box has a defect. The classifier is operative to compare the extracted features with known defect features to determine if the input package matches a known defect. In this way, the accuracy and efficiency of detecting defects of the packaging box can be improved.
Specifically, passing the fused feature map through the classifier to obtain the classification result, where the classification result is used to indicate whether the packing box has a defect, includes: processing the fused feature map with the classifier according to the following classification formula to generate the classification result; wherein the classification formula is:
O = softmax{(W_n, B_n) : ⋯ : (W_1, B_1) | Project(F)}
where F is the fused feature map, O is the classification result, Project(F) represents projecting the fused feature map as a vector, W_1 to W_n are the weight matrices of the fully-connected layers of each layer, B_1 to B_n are the bias vectors of the fully-connected layers of each layer, and softmax is the normalized exponential function.
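The classification formula can be sketched directly: flatten the fused feature map (the projection), pass it through a chain of fully-connected layers, and normalize with softmax. The layer sizes, the tanh on hidden layers, and the two-class output are illustrative assumptions, not values from the patent.

```python
import numpy as np

def softmax(z):
    """Normalized exponential function, shifted for numerical stability."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(fused, layers):
    """Sketch of O = softmax{(W_n,B_n):...:(W_1,B_1)|Project(F)}.

    fused: the fused feature map F; layers: list of (W_i, B_i) pairs
    for the fully-connected layers, in application order."""
    v = fused.reshape(-1)          # Project(F): flatten the map to a vector
    for Wm, b in layers[:-1]:      # hidden fully-connected layers (tanh assumed)
        v = np.tanh(Wm @ v + b)
    Wn, bn = layers[-1]            # final layer feeds the softmax
    return softmax(Wn @ v + bn)    # class probabilities, e.g. [defective, non-defective]
```

The returned vector sums to one, and the larger component indicates whether the packing box is judged defective.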
In summary, the detection method for detecting defects of the packaging box according to the embodiment of the application has been described. Based on a machine-vision artificial intelligence detection technique, it judges whether defects exist by extracting and fusing multi-scale features of the packaging box images. In this way, the low efficiency of manual inspection can be effectively overcome while the detection accuracy is improved.
Exemplary System
Fig. 5 is a system block diagram of a detection system for package defect detection according to an embodiment of the present application. As shown in fig. 5, a detection system 100 for detecting defects of a package according to an embodiment of the present application includes: an image acquisition module 110, configured to acquire an image of a package box with a defect to be detected; a spatial attention encoding module 120, configured to obtain an image feature matrix by using a first convolutional neural network model of a spatial attention mechanism for the package box image; a multi-scale associated feature extraction module 130, configured to pass the image feature matrix through a dual-flow network model including a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, where the second convolutional neural network uses a three-dimensional convolution kernel having a first scale, and the third convolutional neural network uses a three-dimensional convolution kernel having a second scale; a feature fusion module 140, configured to perform geometric complexity constraint based on feature manifold on the first feature map and the second feature map to obtain a fused feature map; and the detection result generation module 150 is configured to pass the fusion feature map through a classifier to obtain a classification result, where the classification result is used to indicate whether the packing box has a defect.
In one example, in the above detection system 100 for detecting defects of a package, the spatial attention encoding module 120 is configured to: process the input data in the forward pass of each layer of the first convolutional neural network model as follows: performing convolution processing on the input data to obtain a convolution feature map; pooling the convolution feature map to obtain a pooled feature map; performing nonlinear activation on the pooled feature map to obtain an activation feature map; passing the activation feature map through a spatial attention module to obtain a spatial attention score map; and multiplying the spatial attention score map and the activation feature map by position points to obtain a spatial attention feature map; wherein the input of the first layer of the first convolutional neural network model is the packaging box image, and the output of the last layer is the image feature matrix.
In one example, in the above detection system 100 for detecting defects of a package, the multi-scale associated feature extraction module 130 includes: a first-scale feature extraction unit configured to, in the forward pass of the layers of the second convolutional neural network, perform three-dimensional convolution processing based on the three-dimensional convolution kernel with the first scale, mean pooling processing, and nonlinear activation processing on the input data to obtain the first feature map; and a second-scale feature extraction unit configured to, in the forward pass of the layers of the third convolutional neural network, perform three-dimensional convolution processing based on the three-dimensional convolution kernel with the second scale, mean pooling processing, and nonlinear activation processing on the input data to obtain the second feature map.
In one example, in the above detection system 100 for detecting defects of a package, the feature fusion module 140 includes: the full-perception feature vector generation unit is used for respectively passing the first feature map and the second feature map through a full-perception module based on a full-connection layer to obtain a first full-perception feature vector and a second full-perception feature vector; the association coding unit is used for carrying out association coding on the first full-perception feature vector and the second full-perception feature vector to obtain a fusion full-perception feature matrix; the first transfer eigenvalue generation unit is used for calculating transfer matrixes of all eigenvalues of the first eigenvalue along the channel dimension relative to the fusion full-perception eigenvalue, and calculating the global average value of all transfer matrixes to obtain a plurality of first transfer eigenvalues; the second transfer characteristic value generation unit is used for calculating transfer matrixes of all characteristic matrixes of the second characteristic diagram along the channel dimension relative to the fused full-perception characteristic matrix, and calculating global average values of all the transfer matrixes to obtain a plurality of second transfer characteristic values; the normalization processing unit is used for performing maximum value-based normalization processing on the first transfer characteristic values and the second transfer characteristic values to obtain a first characteristic manifold geometric complexity constraint characteristic vector and a second characteristic manifold geometric complexity constraint characteristic vector; the weighting unit is used for respectively weighting each feature matrix of the first feature map along the channel dimension and each feature matrix of the second feature map along the channel dimension by taking the feature value of each position in the first feature 
manifold geometric complexity constraint feature vector and the second feature manifold geometric complexity constraint feature vector as a weight so as to obtain an optimized first feature map and an optimized second feature map; and the aggregation unit is used for aggregating the optimized first feature map and the optimized second feature map along the channel dimension to obtain the fusion feature map.
In one example, in the above detection system 100 for detecting defects of a package, the detection result generating module 150 is configured to: processing the fused feature map using the classifier in a classification formula to generate the classification result; wherein, the classification formula is:
O = softmax{(W_n, B_n) : ⋯ : (W_1, B_1) | Project(F)}
where F is the fused feature map, O is the classification result, Project(F) represents projecting the fused feature map as a vector, W_1 to W_n are the weight matrices of the fully-connected layers of each layer, B_1 to B_n are the bias vectors of the fully-connected layers of each layer, and softmax is the normalized exponential function.
In summary, the inspection system 100 for inspecting defects of a package according to the embodiment of the present application has been described. Based on a machine-vision artificial intelligence detection technique, it judges whether defects exist by extracting and fusing multi-scale features of the packaging box images. In this way, the low efficiency of manual inspection can be effectively overcome while the detection accuracy is improved.
As described above, the inspection system 100 for packing box defect inspection according to the embodiment of the present application may be implemented in various wireless terminals, such as a server or the like for packing box defect inspection. In one example, the detection system 100 for package defect detection according to embodiments of the present application may be integrated into a wireless terminal as one software module and/or hardware module. For example, the detection system 100 for package defect detection may be a software module in the operating system of the wireless terminal, or may be an application developed for the wireless terminal; of course, the detection system 100 for package defect detection may also be one of a plurality of hardware modules of the wireless terminal.
Alternatively, in another example, the detection system 100 for pack defect detection and the wireless terminal may be separate devices, and the detection system 100 for pack defect detection may be connected to the wireless terminal through a wired and/or wireless network and transmit interactive information in an agreed data format.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present application is described with reference to fig. 6.
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 6, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the detection method for package defect detection of the various embodiments of the present application described above and/or other desired functions. Various contents, such as a package image of a defect to be detected, may also be stored on the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
The input means 13 may comprise, for example, a keyboard, a mouse, etc.
The output device 14 may output various information to the outside, including a result of judging whether the package is defective or not, and the like. The output means 14 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, for simplicity, only some of the components of the electronic device 10 that are relevant to the present application are shown in fig. 6; components such as buses and input/output interfaces are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer readable storage Medium
In addition to the methods and apparatus described above, embodiments of the present application may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in the detection method for pack defect detection according to the various embodiments of the present application described in the "exemplary methods" section of the present specification.
The computer program product may write program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps in the detection method for pack defect detection according to the various embodiments of the present application described in the above "exemplary method" section of the present specification.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

Claims (10)

1. A method for detecting defects in a package, comprising:
acquiring a packaging box image of a defect to be detected;
the packaging box image is processed through a first convolution neural network model using a spatial attention mechanism to obtain an image feature matrix;
passing the image feature matrix through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, wherein the second convolutional neural network uses a three-dimensional convolutional kernel with a first scale, and the third convolutional neural network uses a three-dimensional convolutional kernel with a second scale;
performing geometric complexity constraint based on feature manifold on the first feature map and the second feature map to obtain a fusion feature map;
and the fusion feature map passes through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the packing box has defects or not.
2. The method for detecting defects in a package according to claim 1, wherein passing the package image through a first convolutional neural network model using a spatial attention mechanism to obtain an image feature matrix, comprises:
processing input data respectively in the forward pass of each layer of the first convolutional neural network model:
carrying out convolution processing on the input data to obtain a convolution characteristic diagram;
pooling the convolution feature map to obtain a pooled feature map;
non-linear activation is carried out on the pooled feature map so as to obtain an activated feature map;
passing the activation feature map through a spatial attention module to obtain a spatial attention score map;
multiplying the spatial attention score map and the activation feature map by position points to obtain a spatial attention feature map;
the input of the first layer of the first convolutional neural network model is the packaging box image, and the output of the last layer of the first convolutional neural network model is the image feature matrix.
3. The method for detecting defects of a packaging box according to claim 2, wherein passing the image feature matrix through the dual-flow network model comprising the second convolutional neural network and the third convolutional neural network to obtain the first feature map and the second feature map, wherein the second convolutional neural network uses a three-dimensional convolution kernel with a first scale and the third convolutional neural network uses a three-dimensional convolution kernel with a second scale, comprises:
processing input data in the forward pass of each layer of the second convolutional neural network with the three-dimensional convolution kernel of the first scale as follows: performing three-dimensional convolution processing, mean pooling processing and nonlinear activation processing on the input data based on the three-dimensional convolution kernel with the first scale to obtain the first feature map; and
processing input data in the forward pass of each layer of the third convolutional neural network with the three-dimensional convolution kernel of the second scale as follows: performing three-dimensional convolution processing, mean pooling processing and nonlinear activation processing on the input data based on the three-dimensional convolution kernel with the second scale to obtain the second feature map.
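A minimal sketch of the two branches of claim 3, assuming concrete kernel scales of 3 and 5, a fixed mean-filter kernel, 2x2x2 mean pooling, and ReLU activation (none of which the claim specifies):

```python
import numpy as np

def conv3d_valid(x, k):
    """Valid 3-D convolution of a single-channel volume (illustrative loop version)."""
    kd, kh, kw = k.shape
    od, oh, ow = x.shape[0] - kd + 1, x.shape[1] - kh + 1, x.shape[2] - kw + 1
    out = np.empty((od, oh, ow))
    for a in range(od):
        for b in range(oh):
            for c in range(ow):
                out[a, b, c] = np.sum(x[a:a + kd, b:b + kh, c:c + kw] * k)
    return out

def branch(x, scale):
    """One branch of the dual-flow model: 3-D convolution at the given kernel
    scale, 2x2x2 mean pooling, then ReLU (hypothetical mean-filter kernel)."""
    k = np.full((scale, scale, scale), 1.0 / scale ** 3)
    y = conv3d_valid(x, k)
    d, h, w = ((y.shape[0] // 2) * 2, (y.shape[1] // 2) * 2, (y.shape[2] // 2) * 2)
    y = y[:d, :h, :w].reshape(d // 2, 2, h // 2, 2, w // 2, 2).mean(axis=(1, 3, 5))
    return np.maximum(y, 0.0)

# two kernel scales applied to the same input give the two feature maps of claim 3
volume = np.random.default_rng(0).random((6, 6, 6))
first_map, second_map = branch(volume, 3), branch(volume, 5)
```

The smaller kernel preserves finer local detail while the larger kernel captures a wider context, which is the point of running the two scales in parallel.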
4. A method for detecting defects in a package according to claim 3, wherein performing geometric complexity constraint based on feature manifold on the first and second feature maps to obtain a fused feature map comprises:
the first feature map and the second feature map respectively pass through a full-perception module based on a full-connection layer to obtain a first full-perception feature vector and a second full-perception feature vector;
performing association coding on the first full-perception feature vector and the second full-perception feature vector to obtain a fusion full-perception feature matrix;
calculating the transfer matrix of each feature matrix of the first feature map along the channel dimension relative to the fusion full-perception feature matrix, and calculating the global mean of each transfer matrix to obtain a plurality of first transfer feature values;
calculating the transfer matrix of each feature matrix of the second feature map along the channel dimension relative to the fusion full-perception feature matrix, and calculating the global mean of each transfer matrix to obtain a plurality of second transfer feature values;
performing maximum-value-based normalization on the plurality of first transfer feature values and the plurality of second transfer feature values to obtain a first feature manifold geometric complexity constraint feature vector and a second feature manifold geometric complexity constraint feature vector;
weighting each feature matrix of the first feature map along the channel dimension and each feature matrix of the second feature map along the channel dimension, using the feature values at the respective positions of the first feature manifold geometric complexity constraint feature vector and the second feature manifold geometric complexity constraint feature vector as weights, to obtain an optimized first feature map and an optimized second feature map;
and aggregating the optimized first feature map and the optimized second feature map along a channel dimension to obtain the fusion feature map.
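The fusion steps of claim 4 can be sketched end to end under hypothetical shapes: the feature maps are (C, n, n), the full-perception module is a single fully connected layer with tanh, association coding is taken as an outer product, and the transfer matrix T_c is the least-squares solution of T_c F_c ≈ M via the pseudoinverse. All of these concrete choices are assumptions, as the claim leaves them open:

```python
import numpy as np

def manifold_constrained_fusion(f1, f2, w1_fc, w2_fc):
    """Sketch of claim 4: f1, f2 are (C, n, n) feature maps; w1_fc, w2_fc are
    (n, C*n*n) fully connected weights standing in for the full-perception
    module (hypothetical)."""
    C = f1.shape[0]
    v1 = np.tanh(w1_fc @ f1.reshape(-1))             # first full-perception vector
    v2 = np.tanh(w2_fc @ f2.reshape(-1))             # second full-perception vector
    m = np.outer(v1, v2)                             # association coding -> (n, n)
    # transfer matrix T_c with T_c @ f_c ~= m, then its global mean per channel
    t1 = np.array([(m @ np.linalg.pinv(f1[c])).mean() for c in range(C)])
    t2 = np.array([(m @ np.linalg.pinv(f2[c])).mean() for c in range(C)])
    w1 = t1 / np.abs(t1).max()                       # maximum-value normalization
    w2 = t2 / np.abs(t2).max()
    opt1 = f1 * w1[:, None, None]                    # channel-wise weighting
    opt2 = f2 * w2[:, None, None]
    return np.concatenate([opt1, opt2], axis=0)      # aggregate along channels
```

Channels whose feature matrices transfer more strongly toward the fused representation receive larger weights, which is how the constraint re-balances the two branches before aggregation.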
5. The method for detecting defects of a packaging box according to claim 4, wherein passing the fusion feature map through the classifier to obtain the classification result, the classification result being used to indicate whether the packaging box has a defect, comprises: processing the fusion feature map with the classifier according to the following classification formula to generate the classification result;
wherein the classification formula is:
O = softmax{(W_n, B_n) : … : (W_1, B_1) | Project()}
where O is the classification result, Project() represents projecting the fusion feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers, B_1 to B_n are the bias vectors of the fully connected layers, and softmax is the normalized exponential function.
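The classification formula reads straightforwardly as flatten, stacked affine layers, then softmax. A sketch with hypothetical layer widths and random weights (the patent fixes neither):

```python
import numpy as np

def softmax(z):
    """Normalized exponential function (numerically stabilized)."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(fused_map, layers):
    """Claim 5's classifier: Project() flattens the fusion feature map to a
    vector, which then passes through fully connected layers (W_i, B_i)
    and a softmax."""
    x = fused_map.reshape(-1)            # Project(): feature map -> vector
    for w, b in layers:                  # (W_1, B_1) ... (W_n, B_n)
        x = w @ x + b
    return softmax(x)                    # class probabilities

# two classes: defective / non-defective (weights are random placeholders)
rng = np.random.default_rng(1)
fused = rng.random((4, 3, 3))
layers = [(rng.random((8, 36)), rng.random(8)), (rng.random((2, 8)), rng.random(2))]
probs = classify(fused, layers)
```

The index of the larger probability then indicates whether the packaging box has a defect.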
6. A detection system for detecting defects in a package, comprising:
the image acquisition module is used for acquiring an image of the packaging box with the defect to be detected;
the spatial attention encoding module is used for passing the packaging box image through a first convolutional neural network model using a spatial attention mechanism to obtain an image feature matrix;
the multi-scale associated feature extraction module is used for passing the image feature matrix through a dual-flow network model comprising a second convolutional neural network and a third convolutional neural network to obtain a first feature map and a second feature map, wherein the second convolutional neural network uses a three-dimensional convolution kernel with a first scale and the third convolutional neural network uses a three-dimensional convolution kernel with a second scale;
The feature fusion module is used for carrying out geometric complexity constraint based on feature manifold on the first feature map and the second feature map to obtain a fusion feature map;
and the detection result generation module is used for passing the fusion feature map through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the packaging box has a defect.
7. The inspection system for package defect inspection of claim 6, wherein the spatial attention encoding module is configured to:
processing input data in the forward pass of each layer of the first convolutional neural network model as follows:
performing convolution processing on the input data to obtain a convolution feature map;
performing pooling processing on the convolution feature map to obtain a pooled feature map;
performing nonlinear activation on the pooled feature map to obtain an activation feature map;
passing the activation feature map through a spatial attention module to obtain a spatial attention score map;
multiplying the spatial attention score map and the activation feature map position-wise to obtain a spatial attention feature map;
wherein the input of the first layer of the first convolutional neural network model is the packaging box image, and the output of the last layer of the first convolutional neural network model is the image feature matrix.
8. The inspection system for package defect inspection of claim 7, wherein the multi-scale associated feature extraction module comprises:
a first scale feature extraction unit, configured to process input data in the forward pass of each layer of the second convolutional neural network with the three-dimensional convolution kernel of the first scale as follows: performing three-dimensional convolution processing, mean pooling processing and nonlinear activation processing on the input data based on the three-dimensional convolution kernel with the first scale to obtain the first feature map;
a second scale feature extraction unit, configured to process input data in the forward pass of each layer of the third convolutional neural network with the three-dimensional convolution kernel of the second scale as follows: performing three-dimensional convolution processing, mean pooling processing and nonlinear activation processing on the input data based on the three-dimensional convolution kernel with the second scale to obtain the second feature map.
9. The inspection system for package defect inspection of claim 8, wherein the feature fusion module comprises:
the full-perception feature vector generation unit is used for respectively passing the first feature map and the second feature map through a full-perception module based on a full-connection layer to obtain a first full-perception feature vector and a second full-perception feature vector;
The association coding unit is used for carrying out association coding on the first full-perception feature vector and the second full-perception feature vector to obtain a fusion full-perception feature matrix;
a first transfer feature value generation unit, used for calculating the transfer matrix of each feature matrix of the first feature map along the channel dimension relative to the fusion full-perception feature matrix, and calculating the global mean of each transfer matrix to obtain a plurality of first transfer feature values;
a second transfer feature value generation unit, used for calculating the transfer matrix of each feature matrix of the second feature map along the channel dimension relative to the fusion full-perception feature matrix, and calculating the global mean of each transfer matrix to obtain a plurality of second transfer feature values;
a normalization processing unit, used for performing maximum-value-based normalization on the plurality of first transfer feature values and the plurality of second transfer feature values to obtain a first feature manifold geometric complexity constraint feature vector and a second feature manifold geometric complexity constraint feature vector;
a weighting unit, used for weighting each feature matrix of the first feature map along the channel dimension and each feature matrix of the second feature map along the channel dimension, using the feature values at the respective positions of the first feature manifold geometric complexity constraint feature vector and the second feature manifold geometric complexity constraint feature vector as weights, to obtain an optimized first feature map and an optimized second feature map;
And the aggregation unit is used for aggregating the optimized first feature map and the optimized second feature map along the channel dimension to obtain the fusion feature map.
10. The detection system for detecting defects of a packaging box according to claim 9, wherein the detection result generation module is configured to: process the fusion feature map with the classifier according to the following classification formula to generate the classification result;
wherein the classification formula is:
O = softmax{(W_n, B_n) : … : (W_1, B_1) | Project()}
where O is the classification result, Project() represents projecting the fusion feature map as a vector, W_1 to W_n are the weight matrices of the fully connected layers, B_1 to B_n are the bias vectors of the fully connected layers, and softmax is the normalized exponential function.
CN202311216001.5A 2023-09-19 2023-09-19 Detection method and system for detecting defects of packaging box Pending CN117274689A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311216001.5A CN117274689A (en) 2023-09-19 2023-09-19 Detection method and system for detecting defects of packaging box

Publications (1)

Publication Number Publication Date
CN117274689A true CN117274689A (en) 2023-12-22

Family

ID=89203854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311216001.5A Pending CN117274689A (en) 2023-09-19 2023-09-19 Detection method and system for detecting defects of packaging box

Country Status (1)

Country Link
CN (1) CN117274689A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117910073A (en) * 2024-01-18 2024-04-19 和源顺(湖州)工艺品有限公司 Artwork package design optimization system and method based on 3D printing technology

Similar Documents

Publication Publication Date Title
CN115375691B (en) Image-based semiconductor diffusion paper source defect detection system and method thereof
CN117274689A (en) Detection method and system for detecting defects of packaging box
CN115456789B (en) Abnormal transaction detection method and system based on transaction pattern recognition
CN116309580B (en) Oil and gas pipeline corrosion detection method based on magnetic stress
KR20210038303A (en) System and method of classifying manufactured products
CN116858789A (en) Food safety detection system and method thereof
Hu et al. LE–MSFE–DDNet: a defect detection network based on low-light enhancement and multi-scale feature extraction
CN114418980A (en) Deep learning method, system, equipment and medium for printed circuit board defect identification
CN117030129A (en) Paper cup on-line leakage detection method and system thereof
CN117036271A (en) Production line quality monitoring method and system thereof
CN115810005A (en) Corrugated carton defect detection acceleration method, system, equipment and storage medium based on parallel computing
Miao et al. Cost‐Sensitive Siamese Network for PCB Defect Classification
KR101966750B1 (en) Device, system, and method for estimating visibility by machine learning using image data
CN117636045A (en) Wood defect detection system based on image processing
CN111738290B (en) Image detection method, model construction and training method, device, equipment and medium
CN111476144B (en) Pedestrian attribute identification model determining method and device and computer readable storage medium
CN116797586A (en) Automatic paper cup defect detection method and system
CN117131348A (en) Data quality analysis method and system based on differential convolution characteristics
CN117173154A (en) Online image detection system and method for glass bottle
CN111340139A (en) Method and device for judging complexity of image content
CN112960213A (en) Intelligent package quality detection method using characteristic probability distribution representation
Zhang et al. Automatic forgery detection for x-ray non-destructive testing of welding
Wong et al. Automatic target recognition based on cross-plot
CN117250521B (en) Charging pile battery capacity monitoring system and method
CN117474881A (en) System and method for detecting on-line quality of medicinal glass bottle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination