CN117910073A - Artwork package design optimization system and method based on 3D printing technology - Google Patents


Info

Publication number
CN117910073A
CN117910073A (application CN202410077741.3A)
Authority
CN
China
Prior art keywords
artwork
detected
package
image
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410077741.3A
Other languages
Chinese (zh)
Inventor
郑益
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Heyuan Shun Huzhou Crafts Co ltd
Original Assignee
Heyuan Shun Huzhou Crafts Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Heyuan Shun Huzhou Crafts Co ltd filed Critical Heyuan Shun Huzhou Crafts Co ltd
Priority to CN202410077741.3A
Publication of CN117910073A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/10 Geometric CAD
    • G06F 30/12 Geometric CAD characterised by design entry means specially adapted for CAD, e.g. graphical user interfaces [GUI] specially adapted for CAD
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2113/00 Details relating to the application field
    • G06F 2113/10 Additive manufacturing, e.g. 3D printing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2113/00 Details relating to the application field
    • G06F 2113/20 Packaging, e.g. boxes or containers

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Architecture (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of 3D printing, and in particular discloses an artwork package design optimization system and method based on 3D printing technology. The automated detection system can improve the accuracy and efficiency of package quality detection and reduce the need for manual inspection.

Description

Artwork package design optimization system and method based on 3D printing technology
Technical Field
The application relates to the technical field of 3D printing, in particular to an artwork package design optimization system and method based on 3D printing technology.
Background
3D printing (also known as additive manufacturing) is a technology for constructing objects layer by layer from a digital model file using a bondable material such as powdered metal or plastic. The technology was first proposed in the United States in the mid-1980s. 3D printing was initially used to produce models in fields such as mold making and industrial design, and was then gradually applied to the direct manufacture of products. It has had a profound effect on traditional process flows, production lines, factory models, and industrial-chain structures, and is a representative disruptive technology in manufacturing.
With the rapid economic development of China, the production of artworks in China has also developed quickly. An artwork is an artistic product made by processing raw materials or semi-finished products by hand or by machine. Artworks derive from daily life but create value beyond it; they crystallize human wisdom and fully reflect human creativity and artistry. In the manufacture of some artworks, 3D printing equipment is used for printing. Introducing a 3D printing process into the manufacture of artwork packaging, combined with the traditional mold-turnover forming process, makes it possible both to produce miniature and small exquisite artworks and to realize sectional forming of large artworks. In existing artwork production devices, the positioning fixtures used in package production can cause the package to wear during manufacturing, affecting product quality. Moreover, conventional quality inspection of artwork packages usually relies on manual visual inspection, which is time-consuming, laborious, and prone to missed defects. With artificial intelligence techniques, automated package quality detection can be achieved. Automated detection can improve the accuracy and efficiency of detection and reduce the need for manual inspection.
Accordingly, an artwork package design optimization system and method based on 3D printing technology are desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide an artwork package design optimization system and method based on 3D printing technology that realize automated package quality detection and improve accuracy and efficiency through image feature extraction and classifier analysis.
Accordingly, according to one aspect of the present application, there is provided an artwork package design optimization system based on 3D printing technology, comprising:
an artwork package image acquisition module, configured to acquire an external package image of the artwork to be detected and an external package image of a reference artwork;
an artwork package feature extraction module, configured to extract a to-be-detected package feature matrix and a reference package feature matrix from the to-be-detected artwork external package image and the reference artwork external package image, respectively;
a feature matrix fusion module, configured to construct an associated feature matrix between the to-be-detected package feature matrix and the reference package feature matrix, and to optimize the associated feature matrix to obtain an optimized associated feature matrix; and
an artwork package result analysis module, configured to pass the optimized associated feature matrix through a classifier to obtain a classification result, where the classification result indicates whether the artwork package to be detected meets the predetermined requirements of the reference artwork package.
According to another aspect of the present application, there is provided an artwork package design optimization method based on 3D printing technology, comprising:
acquiring an external package image of the artwork to be detected and an external package image of a reference artwork;
extracting a to-be-detected package feature matrix and a reference package feature matrix from the to-be-detected artwork external package image and the reference artwork external package image, respectively;
constructing an associated feature matrix between the to-be-detected package feature matrix and the reference package feature matrix, and optimizing the associated feature matrix to obtain an optimized associated feature matrix; and
passing the optimized associated feature matrix through a classifier to obtain a classification result, where the classification result indicates whether the artwork package to be detected meets the predetermined requirements of the reference artwork package.
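The patent does not specify the fusion rule or the classifier. As a minimal numeric sketch of the last two steps above, the following assumes the associated feature matrix is the element-wise product of the two package feature matrices and stands in a logistic score for the classifier; `association_matrix`, `classify`, the toy matrices, and the threshold are all illustrative choices, not from the source:

```python
import math

def association_matrix(f_test, f_ref):
    """Fuse two equally sized feature matrices. The element-wise product
    used here is one plausible association rule, not the patent's."""
    return [[a * b for a, b in zip(row_t, row_r)]
            for row_t, row_r in zip(f_test, f_ref)]

def classify(assoc, threshold=0.5):
    """Toy stand-in for the classifier: logistic score of the mean entry."""
    values = [v for row in assoc for v in row]
    mean = sum(values) / len(values)
    score = 1.0 / (1.0 + math.exp(-mean))
    return score, score >= threshold

# toy 2x2 feature matrices for the package under test and the reference
f_test = [[0.9, 0.8], [0.7, 0.95]]
f_ref  = [[1.0, 0.9], [0.8, 1.0]]
score, meets_requirements = classify(association_matrix(f_test, f_ref))
```

A real implementation would learn the classifier from labeled package images; the sketch only shows how a fused matrix reduces to a single pass/fail decision.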
Compared with the prior art, the artwork package design optimization system and method based on 3D printing technology provided by the application acquire the external package image of the artwork to be detected and the external package image of the reference artwork, extract the package feature matrices, fuse them to obtain an associated feature matrix, and analyze the associated feature matrix through a classifier to obtain a classification result, thereby judging whether the artwork package to be detected meets the predetermined requirements of the reference artwork package. This automated detection can improve the accuracy and efficiency of package quality detection and reduce the need for manual inspection.
Drawings
The above and other objects, features, and advantages of the present application will become more apparent from the following detailed description of embodiments of the present application with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application and, together with the description of the embodiments, serve to explain it, and do not limit the application. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a schematic block diagram of an artwork package design optimization system based on 3D printing technology according to an embodiment of the present application.
Fig. 2 is a schematic block diagram of an artwork packaging feature extraction module in an artwork packaging design optimization system based on 3D printing technology according to an embodiment of the present application.
Fig. 3 is a schematic block diagram of an image blocking processing unit in an artwork package design optimization system based on a 3D printing technology according to an embodiment of the present application.
Fig. 4 is a flowchart of an artwork package design optimization method based on 3D printing technology according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments, features and aspects of the application will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better illustration of the application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, well known methods, procedures, components, and circuits have not been described in detail so as not to obscure the present application.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Fig. 1 illustrates a schematic block diagram of an artwork package design optimization system based on 3D printing technology according to an embodiment of the present application. As shown in fig. 1, an artwork package design optimization system 100 based on 3D printing technology according to an embodiment of the present application includes: the artwork package image acquisition module 110, configured to acquire an external package image of the artwork to be detected and an external package image of the reference artwork; the artwork package feature extraction module 120, configured to extract a to-be-detected package feature matrix and a reference package feature matrix from the to-be-detected artwork external package image and the reference artwork external package image, respectively; the feature matrix fusion module 130, configured to construct an associated feature matrix between the to-be-detected package feature matrix and the reference package feature matrix, and to optimize the associated feature matrix to obtain an optimized associated feature matrix; and the artwork package result analysis module 140, configured to pass the optimized associated feature matrix through a classifier to obtain a classification result, where the classification result indicates whether the artwork package to be detected meets the predetermined requirements of the reference artwork package.
In an embodiment of the present application, the artwork package image acquisition module 110 is configured to acquire an external package image of the artwork to be detected and an external package image of the reference artwork. It should be appreciated that by comparing the external package images of the artwork to be detected and of the reference artwork, any differences or defects can be detected and it can be determined whether design or manufacturing issues exist. Package quality problems can thus be found and corrected in time, ensuring that the artwork package meets the expected standards and requirements. For comparison and analysis, what is assessed is whether the artwork package to be detected meets the predetermined requirements of the reference artwork package. Differences or defects such as breakage, deformation, or dislocation can be detected by comparing the two images, so that package quality problems are discovered promptly, the artwork is well protected during transport and display, and damage and quality problems are avoided. The reference artwork external package image represents the predetermined packaging requirements and criteria; by comparing against it, whether the package to be detected meets the predetermined requirements can be determined. If differences or defects exist, adjustments and improvements can be made in time to ensure that the package meets the design requirements and to improve the overall quality and image of the artwork. Comparing and analyzing the two external package images can also reveal design problems or defects.
This provides reference and guidance for the optimization of the package design, improving the package structure, material selection, and assembly so as to enhance the package's functionality, reliability, and aesthetics.
Specifically, a camera, scanner, or similar device can be used to photograph or scan the external package of the artwork to be detected to obtain images of it. These images may include photographs or scans from different angles and views. The reference artwork external package image can be a pre-designed standard sample or a finished artwork package that meets the requirements; in the same manner, the external package of the reference artwork can be photographed or scanned to obtain its image. When acquiring images, a high-resolution device may be used to ensure sharpness and accuracy. In addition, different lighting conditions and angles may be used in order to better capture the details and characteristics of the package.
In the embodiment of the present application, the artwork package feature extraction module 120 is configured to extract a to-be-detected package feature matrix and a reference package feature matrix from the to-be-detected artwork external package image and the reference artwork external package image, respectively. It should be appreciated that by extracting these two feature matrices, they can be compared to determine the similarity and difference between them. In this way, the similarity between the artwork package to be detected and the reference artwork package can be evaluated quantitatively, and it can be judged whether the package to be detected meets the predetermined standards and requirements. Extracting the package feature matrices also enables in-depth analysis of both packages: these matrices may encode the shape, size, structure, and details of the package, and comparing them can reveal problems or deficiencies in the package design for optimization and improvement. Extracting the two feature matrices converts the package features into quantifiable indexes; algorithms and models can then analyze and evaluate these features to yield specific numerical results such as similarity scores and geometric parameters, which provide a basis and reference for the optimization of the package design.
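One common quantifiable similarity index of the kind described above is cosine similarity between flattened feature matrices. A minimal sketch, with illustrative toy vectors (the patent does not fix a particular similarity measure):

```python
import math

def cosine_similarity(v1, v2):
    """Similarity score in [-1, 1]; 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return dot / (n1 * n2)

# flattened toy feature matrices of the package to detect and the reference
to_detect = [0.2, 0.8, 0.5, 0.1]
reference = [0.2, 0.8, 0.5, 0.1]
similarity = cosine_similarity(to_detect, reference)  # identical vectors -> 1.0
```

A score near 1.0 would indicate the package to be detected closely matches the reference; lower scores flag potential defects or deviations.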
Specifically, in one embodiment of the present application, fig. 2 illustrates a schematic block diagram of the artwork package feature extraction module in the artwork package design optimization system based on 3D printing technology according to an embodiment of the present application. As shown in fig. 2, in the above artwork package design optimization system 100 based on 3D printing technology, the artwork package feature extraction module 120 includes: the image blocking processing unit 121, configured to block the to-be-detected artwork external package image and the reference artwork external package image respectively, and then obtain a plurality of to-be-detected package image block feature vectors and a plurality of reference package image block feature vectors through convolutional encoding; and the twin network unit 122, configured to arrange the plurality of to-be-detected package image block feature vectors and the plurality of reference package image block feature vectors into matrices respectively, and then obtain the to-be-detected package feature matrix and the reference package feature matrix through a twin network.
Accordingly, in a specific example of the present application, the image blocking processing unit 121 is configured to block the to-be-detected artwork external package image and the reference artwork external package image respectively, and then obtain a plurality of to-be-detected package image block feature vectors and a plurality of reference package image block feature vectors through convolutional encoding. It should be understood that the external package of the artwork to be detected and that of the reference artwork may contain a great deal of detail and feature information. To better capture this information, dividing each image into tiles allows each tile to represent a local feature: by extracting features from each patch, the shape, texture, and color of the package can be described more completely. Extracting features from an entire image can produce high-dimensional feature vectors, which increases computational complexity and may lead to the curse of dimensionality. Dividing the image into small blocks and extracting features from each block instead yields a number of low-dimensional feature vectors, which reduces the feature dimensionality, lowers the computational load, and improves the expressive power of the features. Once both images are divided into blocks, the similarity between corresponding blocks can be compared, giving a finer-grained feature comparison and a more accurate assessment of the degree of difference and similarity between the package to be detected and the reference package.
Dividing the image into blocks also enhances the robustness of the system to external noise, deformation, and interference: even if some regions of the image are damaged or deformed, other regions can still provide useful feature information. Through block processing and convolutional encoding, the system can handle a variety of package image conditions more flexibly, improving robustness and stability.
Specifically, the to-be-detected artwork external package image and the reference artwork external package image are each divided into small blocks. The image may be divided using a fixed-size sliding window or another blocking algorithm; the size of each block can be chosen according to particular needs, with a size large enough to cover important packaging features generally being selected. Each block is then convolutionally encoded. Convolutional encoding is a feature extraction method based on convolutional neural networks that can extract high-level image features. A pre-trained convolutional neural network model (e.g., VGG or ResNet) is used as the feature extractor: each block is input into the model and a feature representation of the block is obtained, with higher-level features progressively extracted through the model's convolution and pooling layers. For each block, a feature vector is extracted from the convolutionally encoded output; the output of a fully connected layer may be used, or further pooling may be applied to reduce the dimensionality. Each block thus yields one feature vector representing its feature information. After block processing and convolutional encoding, the to-be-detected artwork external package image yields a plurality of to-be-detected package image block feature vectors, each representing the feature information of one block; similarly, the reference artwork external package image yields a plurality of reference package image block feature vectors.
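As a toy stand-in for the pre-trained CNN feature extractor described above (a real system would use VGG or ResNet activations, as the text notes), the following sketch encodes one image block with two hand-written 2x2 edge kernels followed by mean-absolute pooling; all names, kernels, and the test pattern are illustrative:

```python
def conv2d_valid(block, kernel):
    """Valid 2D cross-correlation of one image block with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(block) - kh + 1):
        row = []
        for j in range(len(block[0]) - kw + 1):
            row.append(sum(block[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

def encode_block(block, kernels):
    """One feature per kernel: mean absolute response of each feature map."""
    feats = []
    for k in kernels:
        fmap = conv2d_valid(block, k)
        vals = [v for r in fmap for v in r]
        feats.append(sum(abs(v) for v in vals) / len(vals))
    return feats

kernels = [[[1, -1], [1, -1]],   # responds to vertical edges
           [[1, 1], [-1, -1]]]   # responds to horizontal edges
block = [[0, 1, 0],
         [0, 1, 0],
         [0, 1, 0]]              # a vertical stripe
vec = encode_block(block, kernels)  # strong vertical, zero horizontal response
```

The vertical stripe excites only the first kernel, so the block's feature vector separates the two edge orientations, a miniature version of how a CNN's feature maps characterize each package tile.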
Further, fig. 3 illustrates a schematic block diagram of the image blocking processing unit in the artwork package design optimization system based on 3D printing technology according to an embodiment of the present application. As shown in fig. 3, in the artwork package feature extraction module 120 of the artwork package design optimization system 100 based on 3D printing technology, the image blocking processing unit 121 includes: the image block sequence subunit 1211, configured to perform image blocking on the to-be-detected artwork external package image and the reference artwork external package image respectively to obtain a to-be-detected artwork external package image block sequence and a reference artwork external package image block sequence; the to-be-detected convolutional encoding subunit 1212, configured to pass the to-be-detected artwork external package image block sequence through a first convolutional neural network model using spatial attention to obtain the plurality of to-be-detected package image block feature vectors; and the reference convolutional encoding subunit 1213, configured to pass the reference artwork external package image block sequence through a second convolutional neural network model using spatial attention to obtain the plurality of reference package image block feature vectors.
Specifically, the image block sequence subunit 1211 is configured to perform image blocking on the to-be-detected artwork external package image and the reference artwork external package image respectively to obtain the to-be-detected artwork external package image block sequence and the reference artwork external package image block sequence. It should be appreciated that when performing image feature comparison and similarity measurement, the two block sequences need to have the same dimensions to ensure that the comparison between blocks is efficient and accurate; otherwise, comparison between blocks of different scales introduces scale bias and makes the feature comparison unreliable. Both images are therefore uniformly partitioned into small blocks of the same size, using a fixed-size sliding window or another blocking algorithm. Ensuring that the two block sequences have the same scale can be achieved by adjusting the block size during blocking or by scaling the images. Blocking the to-be-detected artwork external package image yields a sequence of small blocks, each with the same dimensions and containing the feature information of that block; blocking the reference artwork external package image likewise yields such a sequence.
By uniformly partitioning the two images and ensuring that the two block sequences have the same scale, the subsequent feature comparison and similarity measurement are guaranteed to be accurate and reliable, so that the similarity and difference between the artwork package to be detected and the reference artwork package can be better evaluated.
Accordingly, in a specific example of the present application, the image block sequence subunit 1211 is configured to perform uniform image blocking on the to-be-detected artwork external package image and the reference artwork external package image respectively to obtain the to-be-detected artwork external package image block sequence and the reference artwork external package image block sequence, where every block in the to-be-detected sequence and every block in the reference sequence have the same dimensions.
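The uniform blocking step can be sketched as follows. `uniform_blocks` is an illustrative helper, and the assumption that the image dimensions divide evenly by the tile size is made explicit; a real pipeline would pad or rescale instead:

```python
def uniform_blocks(image, bh, bw):
    """Split a 2D image (list of rows) into non-overlapping bh x bw tiles,
    in row-major order, so every tile has identical dimensions."""
    h, w = len(image), len(image[0])
    assert h % bh == 0 and w % bw == 0, "image must divide evenly into tiles"
    return [[row[j:j + bw] for row in image[i:i + bh]]
            for i in range(0, h, bh)
            for j in range(0, w, bw)]

image = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy image
tiles = uniform_blocks(image, 2, 2)                        # four 2x2 tiles
```

Applying the same call with the same tile size to both the to-be-detected and the reference image guarantees the matching-scale property the text requires.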
Specifically, the to-be-detected convolutional encoding subunit 1212 is configured to pass the to-be-detected artwork external package image block sequence through the first convolutional neural network model using spatial attention to obtain the plurality of to-be-detected package image block feature vectors. It should be appreciated that the external package of the artwork to be detected can generally be divided into a plurality of image blocks, each corresponding to a small region or portion; organizing the blocks into a sequence allows each block to be independently feature-extracted and represented. Spatial attention is a mechanism in convolutional neural networks for weighting different regions of an image so as to focus more on important regions and features. By using spatial attention, the network can learn importance weights for different areas in the image and thereby capture key features in the to-be-detected package image more accurately. The first convolutional neural network model is the model used to extract features from the to-be-detected image block sequence; it typically consists of convolution layers, pooling layers, and fully connected layers, and can learn the local features of an image block and convert them into a feature vector. Feature extraction is performed on each to-be-detected package image block using the first convolutional neural network model with spatial attention to obtain a corresponding feature vector, which may be regarded as an abstract representation of the image block containing its local feature information.
Accordingly, in a specific example of the present application, the to-be-detected convolutional encoding subunit 1212 is configured to perform, in forward passes through each layer of the first convolutional neural network model, the following operations on the input data: performing convolution processing on the input data with a convolution kernel to obtain a convolution feature map; passing the convolution feature map through a spatial attention module to obtain a spatial attention score matrix; multiplying the spatial attention score matrix position-wise with each feature matrix of the convolution feature map along the channel dimension to obtain a spatial attention feature map; pooling the spatial attention feature map along the channel dimension to obtain a pooled feature map; and applying a nonlinear activation to the pooled feature map to obtain an activated feature map. The input of the first layer of the first convolutional neural network model is the to-be-detected artwork external package image block sequence, and the output of the last layer is the plurality of to-be-detected package image block feature vectors.
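The attention-weight-pool-activate step above can be sketched numerically. This is a minimal illustration, assuming the attention score is the sigmoid of the per-position channel mean, channel pooling is a mean, and the activation is ReLU; none of these specific choices are fixed by the source:

```python
import math

def spatial_attention_step(fmap):
    """fmap: C x H x W feature map as nested lists. Returns the H x W map
    after attention weighting, channel-mean pooling, and ReLU."""
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    # spatial attention score per position: sigmoid of the channel mean
    score = [[1.0 / (1.0 + math.exp(-sum(fmap[c][i][j] for c in range(C)) / C))
              for j in range(W)] for i in range(H)]
    # weight every channel of the feature map position-wise by the score
    weighted = [[[fmap[c][i][j] * score[i][j] for j in range(W)]
                 for i in range(H)] for c in range(C)]
    # pool along the channel dimension (mean), then nonlinear activation (ReLU)
    pooled = [[sum(weighted[c][i][j] for c in range(C)) / C
               for j in range(W)] for i in range(H)]
    return [[max(0.0, v) for v in row] for row in pooled]

fmap = [[[1.0, -1.0]],
        [[3.0, -3.0]]]           # C=2, H=1, W=2
out = spatial_attention_step(fmap)
```

The position with strong positive responses gets a high attention score and survives; the negative position is down-weighted and zeroed by the activation, which is the intended effect of focusing on important regions.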
Specifically, the reference convolutional encoding subunit 1213 is configured to pass the reference artwork external package image block sequence through a second convolutional neural network model using spatial attention to obtain the plurality of reference package image block feature vectors. It should be appreciated that extracting these feature vectors captures important features of the reference package, which are then used for comparison and evaluation against the package to be inspected. The sequence of reference artwork external package image blocks is processed using a second convolutional neural network model with spatial attention, a mechanism that allows the network to focus on important areas or features in an image. By introducing spatial attention, the model learns to assign higher weights to the relevant regions and to capture meaningful information. Specifically, the sequence of reference artwork external package image blocks is input into the second convolutional neural network model, and the feature vectors of the plurality of reference package image blocks are obtained from the model using its spatial attention mechanism. Each feature vector represents the features of a particular reference package image block. These feature vectors encode important visual information that can be used for subsequent classification, matching, or comparison with the package image blocks to be inspected. Thus, by using a second convolutional neural network model with spatial attention, the sequence of reference artwork external package image blocks can be processed to obtain a feature vector for each reference package image block; these feature vectors capture important features of the reference package and can be used for further analysis and comparison with the package to be inspected.
Accordingly, in a specific example of the present application, the twin network unit 122 is configured to arrange the plurality of to-be-detected package image block feature vectors and the plurality of reference package image block feature vectors into matrices respectively, and then obtain the to-be-detected package feature matrix and the reference package feature matrix through a twin network. It should be understood that arranging the to-be-detected and reference image block feature vectors into matrices unifies their feature representations into matrix form, so that matrix operations and deep learning models can conveniently be applied to process and analyze these features. Using the twin network, the to-be-detected and reference package feature matrices can be processed simultaneously, taking full advantage of parallel computation and improving processing speed and efficiency. The twin network can learn the feature representations of the package to be detected and the reference package and encode them into more meaningful and distinguishable feature vectors. Through deep learning on the feature matrices, the network automatically extracts local and global features of the image blocks, better capturing the characteristics and structure of the package. By comparing and matching the to-be-detected package feature matrix with the reference package feature matrix, their similarity and relevance can be found, which helps achieve the related tasks of package identification, matching, and comparison.
Specifically, the twin network unit 122 is configured to: arrange the plurality of to-be-detected package image block feature vectors and the plurality of reference package image block feature vectors into a to-be-detected package global feature matrix and a reference package global feature matrix respectively, and then obtain the to-be-detected package feature matrix and the reference package feature matrix through a twin network model comprising a first image encoder and a second image encoder, wherein the first image encoder and the second image encoder have the same network structure. The first image encoder and the second image encoder are each a third convolutional neural network model comprising a plurality of hybrid convolution layers. Further, the twin network unit is configured to: perform multi-scale depth convolutional encoding on the to-be-detected package global feature matrix using the first image encoder of the twin network model to obtain the to-be-detected package feature matrix; and perform multi-scale depth convolutional encoding on the reference package global feature matrix using the second image encoder of the twin network model to obtain the reference package feature matrix.
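The twin-network idea — two encoders with identical structure applied to the two global feature matrices — can be sketched as below. The linear-plus-ReLU "encoder" is a deliberately simplified stand-in for the multi-scale depth-convolutional encoders described, and the shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))  # shared encoder weights: both branches
                                   # use the same network structure

def encode(global_feature_matrix: np.ndarray) -> np.ndarray:
    """Stand-in for one image encoder of the twin network: a single
    linear map plus ReLU, applied identically by both branches."""
    return np.maximum(global_feature_matrix @ W, 0.0)

to_detect = rng.standard_normal((16, 16))   # to-be-detected global feature matrix
reference = rng.standard_normal((16, 16))   # reference global feature matrix

feat_detect = encode(to_detect)   # "first image encoder"
feat_ref = encode(reference)      # "second image encoder", same structure
print(feat_detect.shape, feat_ref.shape)
```

Because both branches share one structure, the two feature matrices live in the same representation space and can be meaningfully compared afterwards.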
In the embodiment of the present application, the feature map fusion module 130 is configured to construct an association feature matrix between the to-be-detected package feature matrix and the reference package feature matrix, and optimize the association feature matrix to obtain an optimized association feature matrix. It should be appreciated that the feature vectors of the packages to be inspected and the feature vectors of the reference packages are combined to form an associated feature matrix. Typically, the size of this matrix is the number of package features to be detected multiplied by the number of reference package features. Each element in the correlation feature matrix represents a degree of similarity or correlation between the package feature to be detected and the reference package feature. This can help us quantify the relationship between the package to be tested and the reference package and determine if they have similar characteristics. The correlation characteristic matrix may provide an indicator for evaluating the similarity between the package to be tested and the reference package. By comparing the values of the elements in the matrix, one can determine the relative degree of similarity between the package to be tested and the reference package, for sorting, categorizing or other further analysis.
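One plausible way to build such an association matrix — cosine similarity between every to-be-detected feature row and every reference feature row — is sketched below; the application does not commit to this particular construction:

```python
import numpy as np

def association_matrix(F_det: np.ndarray, F_ref: np.ndarray) -> np.ndarray:
    """Entry (i, j) is the cosine similarity between the i-th to-be-detected
    feature row and the j-th reference feature row."""
    a = F_det / (np.linalg.norm(F_det, axis=1, keepdims=True) + 1e-8)
    b = F_ref / (np.linalg.norm(F_ref, axis=1, keepdims=True) + 1e-8)
    return a @ b.T  # (num detected features) x (num reference features)

A = association_matrix(np.eye(3), np.eye(3))
print(A.shape)  # (3, 3), with values near 1.0 on the diagonal
```

The resulting matrix has the size described in the text: number of to-be-detected features times number of reference features.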
In particular, it is considered that in the technical solution of the present application, the attention mechanism is applied to the convolutional neural network models used to extract the feature vectors of the to-be-detected and reference artwork external package image blocks. The attention mechanism makes the network model focus more on important features or areas, thereby improving the performance of the model. However, if the attention mechanism is applied without limitation, the information of the feature distribution in different directions may degrade during parameter updating of the model. This degradation arises mainly because the attention mechanism over-focuses on specific features or areas, so that other features are not sufficiently updated. This can reduce the variability of features in different directions, and thus the model's ability to perceive feature variations in those directions. To solve this problem, a method of topology aggregation between feature nodes based on an objective function can be adopted to optimize the associated feature matrix. This approach constrains the role of the attention mechanism by introducing an objective function, so that features are updated in a balanced way in different directions. In particular, the objective function can be designed to encourage variability of features in different directions, thereby preserving their diversity and richness. By optimizing the objective function, the attention mechanism acts on the feature distribution more uniformly, avoiding the problem of information degradation of features in different directions.
Through topology aggregation between feature nodes based on an objective function, the diversity and richness of features in different directions can be maintained, thereby improving the model's classification performance for the artwork package to be detected. The optimized association feature matrix can better capture the differences between the artwork package to be detected and the reference artwork package, and thus accurately indicate whether the artwork package to be detected meets the predetermined requirements of the reference artwork package. On this basis, in the technical scheme of the application, the association feature matrix is subjected to topology aggregation between feature nodes based on an objective function.
Specifically, in one embodiment of the present application, the feature map fusion module 130 includes: the fusion unit is used for fusing the packaging characteristic matrix to be detected and the reference packaging characteristic matrix to obtain an association characteristic matrix; and the optimization unit is used for carrying out topology aggregation between feature nodes based on the objective function on the associated feature matrix to obtain an optimized associated feature matrix. Accordingly, the optimizing unit includes: a feature node factor calculating subunit, configured to calculate feature node factors based on an objective function for each row vector in the association feature matrix; and the characteristic matrix weighting subunit is used for respectively weighting each row vector in the associated characteristic matrix by using characteristic node factors based on the objective function corresponding to each row vector so as to obtain the optimized associated characteristic matrix.
Further, the feature node factor calculating subunit is configured to: calculate the objective-function-based feature node factor of each row vector in the associated feature matrix according to the following formula;
Wherein, the formula is:
w_i = -log(|softmax(V_i) - τ|) × bool[softmax(V_i) - τ] + α‖V_i‖_F
Wherein V_i represents the i-th row vector in the correlation feature matrix, softmax(V_i) represents the class probability value obtained by a classifier for the i-th row vector alone, α represents a predetermined hyper-parameter, τ is a hyper-parameter representing a shift value, bool represents a Boolean function, log represents the base-2 logarithm, ‖V_i‖_F represents the Frobenius norm of the i-th row vector in the correlation feature matrix, and w_i represents the objective-function-based feature node factor of the i-th row vector.
Further, the bool function is a Boolean function of the sign of its argument, as referenced above.
That is, in order to avoid the problem of information degradation of the feature distribution in different directions during parameter updating of the model due to the attention mechanism, a method of topology aggregation between feature nodes based on an objective function is provided herein, optimizing the associated feature matrix to obtain a more discriminative feature representation. Specifically, the method considers that the class probability value of each row vector in the associated feature matrix, obtained via the Softmax function, shifts under different attention mechanisms; by performing information compensation for this shift of the probability distribution, the probability value of each class is brought closer to the real class distribution, while the bool function and the Frobenius norm maximize the information entropy introduced by the compensation, thereby effectively alleviating the information degradation problem. This improves the accuracy of the classification judgment of the correlation feature matrix through the classifier, while enhancing the generalization capability and robustness of the model.
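The feature node factor formula above can be computed as follows. The indicator interpretation of bool (1 when its argument is non-negative, 0 otherwise) and the hyper-parameter values τ and α are illustrative assumptions, not values stated in the application:

```python
import numpy as np

def feature_node_factors(M, probs, tau=0.5, alpha=0.1):
    """w_i = -log2(|p_i - tau|) * bool(p_i - tau) + alpha * ||V_i||_F,
    where p_i is the classifier's class probability for row V_i.
    bool() is taken here as the indicator 1[x >= 0]; the application
    does not state its exact form."""
    w = np.empty(len(M))
    for i, (row, p) in enumerate(zip(M, probs)):
        d = p - tau
        w[i] = -np.log2(abs(d) + 1e-12) * float(d >= 0) + alpha * np.linalg.norm(row)
    return w

M = np.random.randn(4, 8)            # rows of the association feature matrix
p = np.array([0.9, 0.6, 0.4, 0.2])   # per-row class probabilities (assumed)
w = feature_node_factors(M, p)
M_opt = M * w[:, None]               # weight each row by its node factor
print(M_opt.shape)                   # (4, 8)
```

The final row-wise weighting is exactly the step performed by the feature matrix weighting subunit.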
In the embodiment of the present application, the artwork packaging result analysis module 140 is configured to pass the optimized association feature matrix through a classifier to obtain a classification result, where the classification result is used to indicate whether the artwork package to be detected meets the predetermined requirements of the reference artwork package. It should be appreciated that the association feature matrix is obtained by fusing the to-be-detected package feature matrix with the reference package feature matrix, and therefore contains feature correlation information between the package image to be detected and the reference package image. These feature correlations capture the similarities and differences between the two images, reflecting the relative relationship between the package to be detected and the reference package. The associated feature matrix is input into a classifier, which, through its learning and inference processes, maps the associated features to different categories or labels. The classifier can be a traditional machine learning algorithm; its purpose is to classify the associated features according to the feature representation of the associated feature matrix to obtain a classification result. This classification result indicates whether the artwork package to be detected meets the predetermined requirements of the reference artwork package. In general, the classification result is binary, indicating whether the package to be detected is satisfactory or unsatisfactory.
Accordingly, in one embodiment of the present application, the artwork packaging result analysis module is configured to: expand the optimized association feature matrix into a classification feature vector by row vectors or column vectors; perform full-connection encoding on the classification feature vector using a fully connected layer of the classifier to obtain a fully connected encoding feature vector; pass the fully connected encoding feature vector through a Softmax classification function of the classifier to obtain a first probability that the to-be-detected artwork package meets the predetermined requirements of the reference artwork package and a second probability that it does not; and determine the classification result based on a comparison between the first probability and the second probability.
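The classifier steps just listed (row-wise expansion, full-connection encoding, two-way Softmax, probability comparison) can be sketched as below; the weight matrix here is a random stand-in, not trained parameters:

```python
import numpy as np

def classify(M_opt, W_fc, b_fc):
    """Flatten the optimized association matrix row-wise, apply a
    fully connected layer, then a two-way softmax:
    index 0 = meets requirements, index 1 = does not."""
    v = M_opt.reshape(-1)        # expand by row vectors
    logits = W_fc @ v + b_fc     # full-connection encoding -> 2 logits
    e = np.exp(logits - logits.max())
    p = e / e.sum()              # Softmax classification function
    return p[0], p[1], bool(p[0] > p[1])

rng = np.random.default_rng(1)
M_opt = rng.standard_normal((4, 8))
p_meet, p_fail, meets = classify(M_opt,
                                 rng.standard_normal((2, 32)),  # untrained weights
                                 np.zeros(2))
print(round(p_meet + p_fail, 6), meets)
```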
In summary, according to the artwork package design optimization system and method based on the 3D printing technology provided by the embodiment of the application, the package feature matrix is extracted by acquiring the artwork package image to be detected and the reference artwork package image, and is fused to obtain the associated feature matrix, and the associated feature matrix is analyzed by the classifier to obtain the classification result, so as to judge whether the artwork package to be detected meets the preset requirement of the reference artwork package. The automatic detection system can improve the accuracy and efficiency of the detection of the packaging quality and reduce the requirement of manual detection.
As described above, the artwork package design optimization system 100 based on the 3D printing technology according to the embodiment of the present application may be implemented in various terminal devices, such as a server of the artwork package design optimization system based on the 3D printing technology. In one example, the artwork package design optimization system 100 based on the 3D printing technology may be integrated into the terminal device as a software module and/or hardware module. For example, the artwork package design optimization system 100 based on 3D printing technology may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the artwork packaging design optimization system 100 based on the 3D printing technology may also be one of the numerous hardware modules of the terminal device.
Alternatively, in another example, the 3D printing technology-based artwork package design optimization system 100 and the terminal device may be separate devices, and the 3D printing technology-based artwork package design optimization system 100 may be connected to the terminal device through a wired and/or wireless network and transmit the interactive information according to the agreed data format.
Fig. 4 is a flowchart of a method for optimizing an artwork package design based on a 3D printing technique according to an embodiment of the present application. As shown in fig. 4, the method for optimizing the artwork package design based on the 3D printing technology according to the embodiment of the present application includes the steps of: S110, obtaining an external package image of the artwork to be detected and a reference external package image of the artwork; S120, extracting a packaging feature matrix to be detected and a reference packaging feature matrix from the outer package image of the artwork to be detected and the outer package image of the reference artwork respectively; S130, constructing an association feature matrix between the to-be-detected package feature matrix and the reference package feature matrix, and optimizing the association feature matrix to obtain an optimized association feature matrix; and S140, passing the optimized association feature matrix through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the to-be-detected artwork package meets the predetermined requirements of the reference artwork package.
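Steps S110 through S140 can be tied together in a minimal end-to-end sketch. Every stub below (the image source, the feature encoder, the node factors, the decision rule) is an illustrative placeholder, not the application's actual implementation:

```python
import numpy as np

def acquire_images():                        # S110: to-be-detected + reference
    rng = np.random.default_rng(0)
    return rng.random((224, 224, 3)), rng.random((224, 224, 3))

def extract_features(img):                   # S120: stub feature encoder
    return img.mean(axis=2)[::16, ::16]      # (14, 14) "feature matrix"

def build_and_optimize(F_det, F_ref):        # S130: association + optimization
    A = F_det @ F_ref.T                      # association feature matrix
    w = np.linalg.norm(A, axis=1)            # stand-in feature node factors
    return A * w[:, None]                    # row-weighted, "optimized" matrix

def classify(A_opt):                         # S140: stub binary decision
    score = A_opt.mean()
    return "meets requirements" if score > 0 else "does not meet requirements"

det_img, ref_img = acquire_images()
A_opt = build_and_optimize(extract_features(det_img), extract_features(ref_img))
print(classify(A_opt))
```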
Here, it will be understood by those skilled in the art that the specific operations of the respective steps in the above-described 3D printing technology-based artwork package design optimization method have been described in detail in the above description of the 3D printing technology-based artwork package design optimization system with reference to fig. 1 to 3, and thus, repetitive descriptions thereof will be omitted.
The basic principles of the present disclosure have been described above in connection with specific embodiments, but it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
The block diagrams of the devices, apparatuses, and systems referred to in this disclosure are merely illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, and systems may be connected, arranged, and configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words meaning "including but not limited to," and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
In addition, as used herein, the use of "or" in the recitation of items beginning with "at least one" indicates a separate recitation, such that recitation of "at least one of A, B or C" means a or B or C, or AB or AC or BC, or ABC (i.e., a and B and C), for example. Furthermore, the term "exemplary" does not mean that the described example is preferred or better than other examples.
It is also noted that in the systems and methods of the present disclosure, components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered equivalent to the present disclosure.
Various changes, substitutions, and alterations are possible to the techniques described herein without departing from the teachings of the techniques defined by the appended claims. Furthermore, the scope of the claims of the present disclosure is not limited to the particular aspects of the process, machine, manufacture, composition of matter, means, methods and acts described above. The processes, machines, manufacture, compositions of matter, means, methods, or acts, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding aspects described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or acts.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. An artwork package design optimization system based on 3D printing technology, characterized by comprising:
The artwork package image acquisition module is used for acquiring artwork package images to be detected and reference artwork package images;
The artwork packaging feature extraction module is used for extracting a packaging feature matrix to be detected and a reference packaging feature matrix from the artwork external packaging image to be detected and the reference artwork external packaging image respectively;
The feature matrix fusion module is used for constructing an associated feature matrix between the packaging feature matrix to be detected and the reference packaging feature matrix, and optimizing the associated feature matrix to obtain an optimized associated feature matrix;
And the artware packaging result analysis module is used for enabling the optimized association characteristic matrix to pass through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the artware packaging to be detected meets the preset requirements of the reference artware packaging.
2. The artwork package design optimization system according to claim 1, wherein the artwork package feature extraction module comprises:
The image blocking processing unit is used for respectively blocking the to-be-detected artwork external packing image and the reference artwork external packing image and then obtaining a plurality of to-be-detected packing image block feature vectors and a plurality of reference packing image block feature vectors through convolution coding;
And the twin network unit is used for respectively arranging the plurality of to-be-detected package image block feature vectors and the plurality of reference package image block feature vectors into matrixes and then obtaining the to-be-detected package feature matrix and the reference package feature matrix through a twin network.
3. The artwork package design optimization system according to claim 2, wherein the image blocking processing unit comprises:
The image block sequence subunit is used for respectively carrying out image blocking processing on the to-be-detected artwork external package image and the reference artwork external package image to obtain an external package image block sequence of the to-be-detected artwork and a reference artwork external package image block sequence;
the to-be-detected convolutional coding subunit is used for respectively obtaining the characteristic vectors of the plurality of to-be-detected packaging image blocks by using a first convolutional neural network model of spatial attention through the to-be-detected artwork external packaging image block sequence;
and the reference convolution coding subunit is used for respectively obtaining the characteristic vectors of the plurality of reference packaging image blocks by using a second convolution neural network model of spatial attention through the reference artwork packaging image block sequence.
4. An artwork package design optimization system according to claim 3, wherein the image block sequence subunit is configured to: perform uniform image blocking processing on the to-be-detected artwork external package image and the reference artwork external package image respectively to obtain the to-be-detected artwork external package image block sequence and the reference artwork external package image block sequence, wherein each to-be-detected artwork external package image block in the to-be-detected artwork external package image block sequence and each reference artwork external package image block in the reference artwork external package image block sequence have the same dimensions.
5. The 3D printing technology based artwork package design optimization system of claim 4, wherein the to-be-detected convolutional encoding subunit is configured to: process input data in the forward pass of each layer of the first convolutional neural network model as follows:
performing convolution processing on the input data with a convolution kernel to obtain a convolution feature map;
Passing the convolution feature map through a spatial attention module to obtain the spatial attention score matrix;
multiplying the spatial attention score matrix with each feature matrix of the convolution feature map along the channel dimension, position-wise, to obtain a spatial attention feature map;
pooling the spatial attention feature map along the channel dimension to obtain a pooled feature map;
non-linear activation is carried out on the pooled feature map so as to obtain an activated feature map;
The input of the first layer of the first convolutional neural network model is the image block sequence of the outer package of the artwork to be detected, and the output of the last layer of the first convolutional neural network model is the characteristic vectors of the plurality of image blocks to be detected.
6. The artwork package design optimization system according to claim 5, wherein the twin network unit is configured to: arrange the plurality of to-be-detected package image block feature vectors and the plurality of reference package image block feature vectors into a to-be-detected package global feature matrix and a reference package global feature matrix respectively, and then obtain the to-be-detected package feature matrix and the reference package feature matrix through a twin network model comprising a first image encoder and a second image encoder, wherein the first image encoder and the second image encoder have the same network structure.
7. The artwork package design optimization system according to claim 6, wherein the feature map fusion module comprises:
The fusion unit is used for fusing the packaging characteristic matrix to be detected and the reference packaging characteristic matrix to obtain an association characteristic matrix;
And the optimization unit is used for carrying out topology aggregation between feature nodes based on the objective function on the associated feature matrix to obtain an optimized associated feature matrix.
8. The artwork package design optimization system according to claim 7, wherein the optimization unit comprises:
A feature node factor calculating subunit, configured to calculate feature node factors based on an objective function for each row vector in the association feature matrix;
And the characteristic matrix weighting subunit is used for respectively weighting each row vector in the associated characteristic matrix by using characteristic node factors based on the objective function corresponding to each row vector so as to obtain the optimized associated characteristic matrix.
9. The 3D printing technology based artwork package design optimization system of claim 8, wherein the feature node factor calculating subunit is configured to: calculate the objective-function-based feature node factor of each row vector in the associated feature matrix according to the following formula;
Wherein, the formula is:
w_i = -log(|softmax(V_i) - τ|) × bool[softmax(V_i) - τ] + α‖V_i‖_F
Wherein V_i represents the i-th row vector in the correlation feature matrix, softmax(V_i) represents the class probability value obtained by a classifier for the i-th row vector alone, α represents a predetermined hyper-parameter, τ is a hyper-parameter representing a shift value, bool represents a Boolean function, log represents the base-2 logarithm, ‖V_i‖_F represents the Frobenius norm of the i-th row vector in the correlation feature matrix, and w_i represents the objective-function-based feature node factor of the i-th row vector.
10. An artwork package design optimization method based on 3D printing technology, characterized by comprising:
Acquiring an outer package image of the artwork to be detected and an outer package image of a reference artwork;
extracting a packaging characteristic matrix to be detected and a reference packaging characteristic matrix from the to-be-detected artwork external packaging image and the reference artwork external packaging image respectively;
constructing an association characteristic matrix between the to-be-detected packaging characteristic matrix and the reference packaging characteristic matrix, and optimizing the association characteristic matrix to obtain an optimized association characteristic matrix;
And the optimized association characteristic matrix passes through a classifier to obtain a classification result, wherein the classification result is used for indicating whether the to-be-detected artwork package meets the preset requirements of the reference artwork package.
CN202410077741.3A 2024-01-18 2024-01-18 Artwork package design optimization system and method based on 3D printing technology Pending CN117910073A (en)

Publications (1)

Publication Number Publication Date
CN117910073A true CN117910073A (en) 2024-04-19

Family

ID=90691881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410077741.3A Pending CN117910073A (en) 2024-01-18 2024-01-18 Artwork package design optimization system and method based on 3D printing technology

Country Status (1)

Country Link
CN (1) CN117910073A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117952650A (en) * 2024-01-30 2024-04-30 和源顺(湖州)工艺品有限公司 Handicraft e-commerce sales management system based on big data

Citations (4)

Publication number Priority date Publication date Assignee Title
CN116680987A (en) * 2023-06-15 2023-09-01 宁波同耀新材料科技有限公司 Forming method and system of graphite crucible
CN116703878A (en) * 2023-06-20 2023-09-05 滁州国恒自动化科技有限公司 Automatic detection system and method for household appliance shell production line
CN116894831A (en) * 2023-07-24 2023-10-17 安徽安奇新材料有限公司 Polyurethane material detection method and system
CN117274689A (en) * 2023-09-19 2023-12-22 安徽永桦包装有限公司 Detection method and system for detecting defects of packaging box

Similar Documents

Publication Publication Date Title
CN111223088B (en) Casting surface defect identification method based on deep convolutional neural network
CN108960245B (en) Tire mold character detection and recognition method, device, equipment and storage medium
CN108416266B (en) Method for rapidly identifying video behaviors by extracting moving object through optical flow
CN111768388B (en) Product surface defect detection method and system based on positive sample reference
CN110119753B (en) Lithology recognition method by reconstructed texture
WO2022236876A1 (en) Cellophane defect recognition method, system and apparatus, and storage medium
CN111462120B (en) Defect detection method, device, medium and equipment based on semantic segmentation model
CN112070727B (en) Metal surface defect detection method based on machine learning
CN109544522A (en) A kind of Surface Defects in Steel Plate detection method and system
CN111507357B (en) Defect detection semantic segmentation model modeling method, device, medium and equipment
CN117910073A (en) Artwork package design optimization system and method based on 3D printing technology
CN115439694A (en) High-precision point cloud completion method and device based on deep learning
CN111274895A (en) CNN micro-expression identification method based on cavity convolution
CN115147363A (en) Image defect detection and classification method and system based on deep learning algorithm
CN115147380A (en) Small transparent plastic product defect detection method based on YOLOv5
CN113780423A (en) Single-stage target detection neural network based on multi-scale fusion and industrial product surface defect detection model
CN117636045A (en) Wood defect detection system based on image processing
CN116843615B (en) Lead frame intelligent total inspection method based on flexible light path
CN117079125A (en) Kiwi fruit pollination flower identification method based on improved YOLOv5
CN116912670A (en) Deep sea fish identification method based on improved YOLO model
CN115294430A (en) Machine vision rubbish identification and positioning technology based on sensor coupling
JP7070308B2 (en) Estimator generator, inspection device, estimator generator method, and estimator generator
Sun et al. Intelligent Site Detection Based on Improved YOLO Algorithm
CN113989793A (en) Graphite electrode embossed seal character recognition method
CN114092396A (en) Method and device for detecting corner collision flaw of packaging box

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination