CN117524340A - Textile component quantitative characterization method based on multilayer one-dimensional CNN depth network - Google Patents
Textile component quantitative characterization method based on multilayer one-dimensional CNN depth network
- Publication number
- CN117524340A (application number CN202410010523.8A)
- Authority
- CN
- China
- Prior art keywords
- textile
- data
- layer
- training
- pooling layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G16C20/20 — Identification of molecular entities, parts thereof or of chemical compositions (G16C: Computational chemistry; chemoinformatics; computational materials science)
- G16C20/70 — Machine learning, data mining or chemometrics
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting (G06F18: Pattern recognition)
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/25 — Fusion techniques
- G06N3/045 — Combinations of networks (G06N3: Computing arrangements based on biological models; neural networks)
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/0475 — Generative networks
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06N3/094 — Adversarial learning
- Y02P90/30 — Computing systems specially adapted for manufacturing (Y02P: Climate change mitigation technologies in production)
Abstract
The invention discloses a textile component quantitative characterization method based on a multilayer one-dimensional CNN depth network. The method comprises: selecting, from near infrared spectrum sample data, textile spectrum data composed of the target components, and preprocessing the textile spectrum data; constructing a GAN model and training it with training-set data and random noise samples; constructing a multi-pooling fusion one-dimensional convolution kernel neural network, generating pseudo sample data with the trained generator, training the network on the pseudo sample data, and taking the trained network as a classifier; and performing quantitative characterization of the textile components with the classifier. By combining a multilayer one-dimensional CNN depth network, a GAN model and a multi-pooling fusion one-dimensional convolution kernel neural network, the method achieves efficient, accurate quantitative characterization of textile components and has good application prospects.
Description
Technical Field
The invention relates to a textile fabric component identification technology, in particular to a textile component quantitative characterization method based on a multilayer one-dimensional CNN depth network.
Background
Near infrared spectroscopy is widely used in the identification of textile fabric components. It is a nondestructive analysis method: the spectral information of a sample is obtained by measuring its light absorption and reflection characteristics in the near infrared band (700–2500 nm).
Near infrared spectra have the following characteristics: (1) full spectral coverage: the near infrared spectrum covers a wide wavelength range, capturing the absorption peaks and absorption bands of substances at different wavelengths and providing rich chemical information; (2) rapidity: near infrared spectrum acquisition and analysis are fast, and the spectrum data of a sample can be acquired within a few seconds; (3) non-destructiveness: collection does not require contact with the sample and causes it no damage, making the technique suitable for online or real-time detection.
In the identification of textile fabric components, the near infrared spectrum technology can realize the rapid and accurate identification of textile components according to the absorption characteristics of substances such as different celluloses, proteins, synthetic fibers and the like in the near infrared band. By collecting a series of sample spectra of known components and establishing a statistical model between the spectra and the components, the spectra of the unknown samples can be compared with the model, so that the components can be judged.
To improve identification accuracy, a convolutional neural network (Convolutional Neural Network, CNN) can be used for component identification on the near infrared spectrum data of textile fabrics, enabling automatic, accurate and efficient component classification.
Common textile fabric component identification methods include visual observation, chemical testing and the like, each with certain shortcomings. Visual observation is the simplest and most commonly used method: the composition is judged initially from the appearance characteristics of the textile fabric. However, because fiber materials resemble one another and blends are common, the components of a textile fabric often cannot be determined accurately by visual inspection alone. Chemical tests generally include combustion tests, dissolution tests, dyeing tests and the like. The combustion test judges the components by observing how the fibers behave during combustion, the dissolution test judges by how the fibers dissolve in specific solvents, and the dyeing test identifies the components by exploiting differences in affinity between dyes and different fiber materials. These chemical tests can provide more accurate component determinations, but they require specialized experimental conditions and equipment and are destructive to the fibers.
Disclosure of Invention
The invention aims to: aiming at the problems, the invention aims to provide a textile component quantitative characterization method based on a multilayer one-dimensional CNN depth network.
The technical scheme is as follows: the invention relates to a textile component quantitative characterization method based on a multilayer one-dimensional CNN depth network, which comprises the following steps:
selecting textile spectrum data composed of target components from near infrared spectrum sample data, preprocessing the textile spectrum data, and dividing the preprocessed textile spectrum data into a training set and a testing set;
constructing a GAN model, and training the GAN model by using training set data and random noise samples; wherein the GAN model includes a generator and a discriminator;
constructing a multi-pooling fusion one-dimensional convolution kernel neural network, generating pseudo sample data by using a trained generator, training the multi-pooling fusion one-dimensional convolution kernel neural network by using the pseudo sample data, and taking the trained multi-pooling fusion one-dimensional convolution kernel neural network as a classifier; wherein the output of the classifier is the content of the target component;
evaluating the classifier by using the test set, and calculating the precision index;
quantitative characterization of the composition of the textile is performed using a classifier.
Further, constructing the multi-pooling fusion one-dimensional convolution kernel neural network comprises:
the input layer is connected to the input end of a first convolution kernel; the output end of the first convolution kernel is connected, in sequence, to a first maximum pooling layer, a second convolution kernel, a second maximum pooling layer, a third convolution kernel and a third maximum pooling layer, and is also connected, in sequence, to a first average pooling layer, a fourth convolution kernel, a second average pooling layer, a fifth convolution kernel and a third average pooling layer; the outputs of the third maximum pooling layer and the third average pooling layer are added, the sum is fed into the fully connected layer, and the output end of the fully connected layer is connected to the activation function layer, which outputs values representing the content of each target component in the sample.
Further, training the GAN model using the training set data and the random noise samples includes the steps of:
step 21, defining a loss function and an optimizer; two binary cross-entropy loss functions are used (one for the generator and one for the discriminator), the optimizer is Adam, and the generator and the discriminator are each given their own optimizer;
step 22, training the discriminator:
step 221, for the input real data sample, calculating the output and loss of the real sample by the discriminator;
step 222, the generator generates the same number of dummy data samples as the real data samples, and calculates the output and loss of the dummy data samples by the discriminator;
step 223, adding the real sample loss and the pseudo data sample loss as the total loss of the discriminator, and carrying out back propagation and parameter updating;
the discriminator is trained by alternately repeating steps 221–223;
step 23, training generator:
step 231, the generator generates the same number of dummy data samples as the real data samples again, and calculates the output and loss of the dummy data samples by the discriminator;
step 232, setting the pseudo data loss as the loss of the real data, and then carrying out back propagation and parameter updating;
training of the generator model is achieved by alternately repeating steps 231–232.
Further, training the multi-pooling fusion one-dimensional convolution kernel neural network using the pseudo sample data comprises the following steps:
step 31, setting the number of samples used in each training round to z, and reshaping the pseudo textile fabric spectrum data generated by the generator from z×61 to z×1×61; wherein z is an integer;
step 32, performing feature extraction on the original z×1×61 tensor with 16 first convolution kernels of size 3×1, outputting a z×16×59 tensor;
step 33, pooling the output z×16×59 tensor with the first maximum pooling layer and the first average pooling layer respectively, outputting two z×16×19 tensors; wherein the first maximum pooling layer and the first average pooling layer are each of size 1×3;
step 34, performing feature extraction on the two z×16×19 tensors respectively with 64 convolution kernels of size 3×1, outputting two z×64×17 tensors;
step 35, pooling the two output z×64×17 tensors with the second maximum pooling layer and the second average pooling layer respectively, outputting two z×64×5 tensors; wherein the second maximum pooling layer and the second average pooling layer are each of size 1×3;
step 36, performing feature extraction on the two z×64×5 tensors respectively with 64 convolution kernels of size 3×1, outputting two z×64×3 tensors;
step 37, pooling the two output z×64×3 tensors with the third maximum pooling layer and the third average pooling layer respectively, obtaining two z×64×1 tensors which are added to output a single z×64×1 tensor; wherein the third maximum pooling layer and the third average pooling layer are each of size 1×3;
step 38, inputting the z×64×1 tensor into the fully connected layer to obtain a z×6 output tensor, which is input into the softmax function layer to map each output value into the (0, 1) interval; wherein the number of neurons in the fully connected layer is 6.
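The tensor lengths quoted in steps 32–38 follow from standard convolution and pooling arithmetic. A minimal sketch (assuming, as the stated dimensions imply, stride 1 and no padding for the 3×1 convolutions, and stride equal to kernel size for the 1×3 pooling layers):

```python
def conv1d_len(n, kernel=3, stride=1):
    """Output length of a valid (no-padding) 1-D convolution."""
    return (n - kernel) // stride + 1

def pool1d_len(n, kernel=3, stride=3):
    """Output length of a 1-D pooling layer (stride assumed equal to kernel)."""
    return (n - kernel) // stride + 1

length, trace = 61, [61]
for _ in range(3):               # three conv + pool stages in each branch
    length = conv1d_len(length)  # 61 -> 59, 19 -> 17, 5 -> 3
    trace.append(length)
    length = pool1d_len(length)  # 59 -> 19, 17 -> 5, 3 -> 1
    trace.append(length)

print(trace)  # [61, 59, 19, 17, 5, 3, 1]
```

The final length-1 feature maps from the max-pooling and average-pooling branches are added element-wise before the fully connected layer, so the summed tensor keeps the z×64×1 shape.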
Further, the target components comprise cotton, terylene, wool, viscose fiber, spandex and nylon, textile spectrum data composed of the six components are selected from near infrared spectrum sample data, and linear normalization processing is carried out on the data by using a linear function.
Further, evaluating the classifier using the test set, the calculating the precision index includes:
the selected evaluation indexes comprise the coefficient of determination R-square, the mean absolute error MAE and the root mean square error RMSE, with calculation expressions respectively:

$$R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2}, \qquad MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|, \qquad RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$

wherein $y_i$ is the true fiber content of the textile, $\hat{y}_i$ is the model-predicted fiber content, $\bar{y}$ is the mean textile fiber content, and $n$ is the number of samples.
The beneficial effects are that: compared with the prior art, the invention has the remarkable advantages that:
1. high efficiency and accuracy: according to the invention, by adopting a multilayer one-dimensional CNN depth network and combining with the pretreatment of textile spectrum data, the textile spectrum data formed by target components can be effectively selected from near infrared spectrum sample data, thereby being beneficial to improving the accuracy and efficiency of quantitative characterization;
2. data enhancement: according to the invention, the training set data and the random noise samples are utilized to train the generator by using the generated countermeasure network GAN model, so that the pseudo sample data can be generated, the number of original training sets is expanded, and the diversity and the richness of the samples for training are increased;
3. a multi-pooling fusion one-dimensional convolution kernel neural network: the invention constructs a multi-pooling fusion one-dimensional convolution kernel neural network as a classifier, and the network combines the characteristics of various pooling modes and convolution kernels, so that the characteristics in the textile spectrum data can be better extracted, and the quantitative characterization of textile components can be realized;
4. the manual intervention is reduced: the whole process is automatically processed based on the deep learning model, so that the dependence on manual experts is reduced, the time and the cost are saved, and the reproducibility and the stability of the method are improved.
Drawings
FIG. 1 is a flow chart of a method for quantitatively characterizing textile components based on a multi-layer one-dimensional CNN depth network in an embodiment;
FIG. 2 is a schematic diagram of the structures of the GAN model and the multilayer one-dimensional CNN network in the embodiment;
FIG. 3 is a schematic diagram of the structure of the maximum pooling layer and the average pooling layer in the embodiment;
FIG. 4 is a comparison of the decision coefficient errors of the embodiment and the CNN model that is not optimized using the GAN network;
FIG. 5 is a comparison of the mean absolute error of an embodiment with a CNN model that is not optimized using a GAN network;
FIG. 6 is a root mean square error comparison of an embodiment with a CNN model that is not optimized using a GAN network;
FIG. 7 is a graph showing the predicted component content of six samples in a test set according to an embodiment;
FIG. 8 is the true component content of the corresponding sample in the test set of the example.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples.
Fig. 1 is a flowchart of a method for quantitatively characterizing textile components based on a multi-layer one-dimensional CNN depth network according to the embodiment, which specifically includes the following steps:
Step 1, selecting textile spectrum data formed by target components from near infrared spectrum sample data, preprocessing the textile spectrum data, and dividing the preprocessed textile spectrum data into a training set and a testing set.
In one example, taking cotton (C), terylene (T), wool (W), viscose fiber (V), spandex (P) and nylon (N) as the target components to be identified, textile spectrum data composed of these six components are selected from the near infrared spectrum sample data, linear normalization is applied to the data using a linear function, and the 1400 preprocessed sample data are divided, with 700 samples as the training set and 700 samples as the test set.
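As a sketch of this preprocessing step (the per-spectrum min-max rescaling is an assumption about the "linear function", and the random spectra merely stand in for the real 1400-sample NIR data set):

```python
import random

def minmax_normalize(spectra):
    """Linearly rescale each 61-point spectrum into [0, 1]."""
    normalized = []
    for s in spectra:
        lo, hi = min(s), max(s)
        span = hi - lo if hi != lo else 1.0  # guard against flat spectra
        normalized.append([(v - lo) / span for v in s])
    return normalized

# Toy stand-in for the 1400 near-infrared samples (61 wavelength points each).
random.seed(0)
samples = [[random.uniform(0.1, 2.0) for _ in range(61)] for _ in range(1400)]
samples = minmax_normalize(samples)

# Even 700/700 split into training and test sets, as in the embodiment.
random.shuffle(samples)
train_set, test_set = samples[:700], samples[700:]
```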
Step 2, constructing a GAN model, and training the GAN model by using training set data and random noise samples; wherein the GAN model includes a generator and a discriminator.
In one example, a binary-classification discriminator is constructed to discriminate whether the input data is real data or pseudo data generated by the generator. The shape of the input spectrum data is defined as 61. The discriminator comprises 3 fully connected layers: the 1st has output size 128 with a ReLU activation function; the 2nd has output size 64, again with ReLU; the last has output size 1 with a sigmoid activation function, and its output represents the probability that the input is real spectrum data.
In one example, a generator is constructed for generating pseudo data. The generator comprises 3 fully connected layers: the 1st has input size 61 and output size 128 with a ReLU activation function; the 2nd has output size 256, again with ReLU; the output size of the last fully connected layer must equal the number of textile fabric component types and is set to 6, with a sigmoid activation function keeping each output between 0 and 1.
The generator and the discriminator are combined to construct a GAN model for joint training; the input parameters are the generator and discriminator models defined above.
In one example, training the GAN model using training set data and random noise samples includes the steps of:
step 21, defining a loss function and an optimizer; two binary cross-entropy loss functions are used (one for the generator and one for the discriminator), the optimizer is Adam, and the generator and the discriminator are each given their own optimizer;
step 22, training the discriminator:
step 221, for the input real data sample, calculating the output and loss of the real sample by the discriminator;
step 222, the generator generates the same number of dummy data samples as the real data samples, and calculates the output and loss of the dummy data samples by the discriminator;
step 223, adding the real sample loss and the pseudo data sample loss as the total loss of the discriminator, and carrying out back propagation and parameter updating;
the discriminator is trained by alternately repeating steps 221–223;
step 23, training generator:
step 231, the generator generates the same number of dummy data samples as the real data samples again, and calculates the output and loss of the dummy data samples by the discriminator;
step 232, the goal of the generator is to make the discriminator unable to distinguish real data from pseudo data, so the pseudo data loss is computed against the real-data label, and back propagation and parameter updating are then performed;
training of the generator model is achieved by alternately repeating steps 231–232.
The GAN model trains the generator and the discriminator as adversaries: the generator produces false data samples, and the discriminator distinguishes real data from the samples the generator produces. As training progresses, the two continuously play against each other, each driving the other's improvement, until a dynamic balance is reached in which the generator produces data samples realistic enough that the discriminator cannot reliably distinguish them from real data.
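The loss bookkeeping of steps 22–23 can be sketched with a scalar binary cross-entropy; the discriminator outputs below are hypothetical stand-in probabilities, not values from a trained model:

```python
import math

def bce(prediction, label):
    """Binary cross-entropy for a single probability output."""
    eps = 1e-12  # avoid log(0)
    return -(label * math.log(prediction + eps)
             + (1 - label) * math.log(1 - prediction + eps))

d_real = 0.9  # discriminator output on a real spectrum sample (hypothetical)
d_fake = 0.2  # discriminator output on a generated sample (hypothetical)

# Step 223: discriminator total loss = real-sample loss + pseudo-sample loss.
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)

# Step 232: the generator's loss labels the pseudo output as real (label 1),
# pushing the generator to make the discriminator output 1 on its samples.
g_loss = bce(d_fake, 1.0)
```

In PyTorch and similar frameworks this corresponds to applying `BCELoss` batch-wise, with separate Adam optimizers stepping the discriminator and the generator in alternation.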
Step 3, constructing a multi-pooling fusion one-dimensional convolution kernel neural network, generating pseudo sample data by using a trained generator, training the multi-pooling fusion one-dimensional convolution kernel neural network by using the pseudo sample data, and taking the trained multi-pooling fusion one-dimensional convolution kernel neural network as a classifier; wherein the output of the classifier is the content of the target component.
As shown in fig. 2, constructing the multi-pooling fusion one-dimensional convolution kernel neural network includes:
the input layer is connected to the input end of a first convolution kernel; the output end of the first convolution kernel is connected, in sequence, to a first maximum pooling layer, a second convolution kernel, a second maximum pooling layer, a third convolution kernel and a third maximum pooling layer, and is also connected, in sequence, to a first average pooling layer, a fourth convolution kernel, a second average pooling layer, a fifth convolution kernel and a third average pooling layer; the outputs of the third maximum pooling layer and the third average pooling layer are added, the sum is fed into the fully connected layer, and the output end of the fully connected layer is connected to the activation function layer, which outputs values representing the content of each target component in the sample.
In the multi-pooling fusion one-dimensional convolution kernel neural network, the convolution kernel is used for extracting local features of spectrum data, the pooling layer is used for reducing dimensionality and extracting the most obvious features, the pooling layer comprises a maximum pooling layer and an average pooling layer, and the full connection layer is used for final classification. Fig. 3 is a schematic diagram of the structure of the maximum and average pooling layers.
Specifically, training the multi-pooling fusion one-dimensional convolution kernel neural network by using the pseudo-sample data comprises the following steps:
step 31, setting the number of samples used in each training as z, and changing the shape of the spectrum data of the pseudo textile fabric generated by the generator from z multiplied by 61 to z multiplied by 1 multiplied by 61; where z is an integer, e.g., 256;
step 32, performing feature extraction by using 16 first convolution kernels with the size of 3×1 and the original z× 1×61 tensors, and outputting to obtain z× 16×59 tensors;
step 33, performing pooling operation on the output zx16x59 tensors by using a first maximum pooling layer and a first average pooling layer respectively, and outputting to obtain two zx16x19 tensors; wherein the first maximum pooling layer and the first average pooling layer are each 1 x 3 in size;
step 34, performing feature extraction on the two z×16×19 tensors respectively by using 64 convolution kernels with the size of 3×1, and outputting to obtain two z×64×17 tensors;
step 35, performing pooling operation on the two output z×64×17 tensors by using the second maximum pooling layer and the second average pooling layer respectively, and outputting two z×64×5 tensors; wherein the second maximum pooling layer and the second average pooling layer are each of size 1×3;
step 36, performing feature extraction on the two z×64×5 tensors respectively by using 64 convolution kernels with the size of 3×1, and outputting to obtain two z×64×3 tensors;
step 37, performing pooling operation on the two output z×64×3 tensors by using the third maximum pooling layer and the third average pooling layer respectively, obtaining two z×64×1 tensors, adding them, and outputting one z×64×1 tensor; wherein the third maximum pooling layer and the third average pooling layer are each of size 1×3;
step 38, inputting the output z×64×1 tensor into the fully connected layer, outputting a z×6 tensor as the result, inputting the output tensor into the softmax function layer, and mapping the output tensor values into the (0, 1) interval; each number represents the percentage of one of the six components contained in the sample and is calculated as:

p_i = exp(x_i) / Σ_{j=1}^{6} exp(x_j),

wherein x_i represents the i-th value in the tensor and x_j represents the j-th value in the tensor.
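The layer stack of steps 31-38 can be sketched as a dual-branch network, for example in PyTorch. Only the kernel counts, kernel sizes, pooling sizes and tensor shapes come from the text above; the class, attribute and variable names below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class MultiPoolFusionCNN(nn.Module):
    """Dual-branch 1-D CNN following steps 31-38: a shared first convolution,
    a max-pooling branch and an average-pooling branch, fused by addition.
    Class and attribute names are illustrative assumptions."""

    def __init__(self, n_components: int = 6):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 16, kernel_size=3)    # step 32: z*1*61 -> z*16*59
        self.conv2 = nn.Conv1d(16, 64, kernel_size=3)   # max-pooling branch
        self.conv3 = nn.Conv1d(64, 64, kernel_size=3)
        self.conv4 = nn.Conv1d(16, 64, kernel_size=3)   # average-pooling branch
        self.conv5 = nn.Conv1d(64, 64, kernel_size=3)
        self.maxpool = nn.MaxPool1d(3)                  # all pooling layers have size 3
        self.avgpool = nn.AvgPool1d(3)
        self.fc = nn.Linear(64, n_components)           # step 38: 6 output neurons

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.conv1(x)                        # z*16*59
        a = self.maxpool(x)                      # step 33: z*16*19
        b = self.avgpool(x)
        a = self.maxpool(self.conv2(a))          # steps 34-35: z*64*17 -> z*64*5
        b = self.avgpool(self.conv4(b))
        a = self.maxpool(self.conv3(a))          # steps 36-37: z*64*3 -> z*64*1
        b = self.avgpool(self.conv5(b))
        fused = (a + b).flatten(1)               # step 37: element-wise addition, z*64
        return torch.softmax(self.fc(fused), dim=1)  # step 38: percentages in (0, 1)

model = MultiPoolFusionCNN()
out = model(torch.randn(256, 1, 61))             # z = 256 pseudo spectra
print(out.shape)                                 # torch.Size([256, 6])
```

Each intermediate shape in the forward pass matches the tensor sizes stated in steps 32-38, and the softmax output rows sum to 1, so they can be read directly as component percentages.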
Step 4, evaluating the classifier by using the test set, and calculating the precision indexes.
In one example, evaluating the classifier using the test set, calculating the precision index includes:
the selected evaluation indexes comprise the coefficient of determination (R-squared), the mean absolute error (MAE) and the root mean square error (RMSE), and their calculation expressions are respectively:

R² = 1 − Σ_{i=1}^{n} (y_i − ŷ_i)² / Σ_{i=1}^{n} (y_i − ȳ)²,

MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|,

RMSE = √( (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² ),

wherein y_i is the real content of textile fibers, ŷ_i is the model-predicted content of textile fibers, ȳ is the average value of the textile fiber content, and n is the number of samples.
In one example, the trained classifier is evaluated using the test set and the accuracy indexes are calculated. Fig. 4, fig. 5 and fig. 6 compare the accuracy of the method with one-dimensional CNN models using only one pooling mode, and fig. 7 and fig. 8 show the predicted component content and the real component content of six kinds of samples in a part of the test set. In figs. 4-6, on the test data the coefficient of determination, mean absolute error and root mean square error of the inventive method are 0.94, 2.90 and 6.34 respectively; those of the network using only maximum pooling are 0.92, 4.47 and 7.60 respectively; and those of the network using only average pooling are 0.88, 4.93 and 9.16 respectively. The specific calculation steps are as follows:
step 401: the test set is input into the classifier, and the coefficient of determination (R-squared) between the predicted component content and the true component content is calculated, wherein y_i is the real content of textile fibers, ŷ_i is the model-predicted content of textile fibers, ȳ is the average value of the textile fiber content, and n is the number of samples. The closer the R-squared value is to 1, the better the fitting effect of the model. The calculation expression is:

R² = 1 − Σ_{i=1}^{n} (y_i − ŷ_i)² / Σ_{i=1}^{n} (y_i − ȳ)²,
step 402, after inputting the test set into the classifier, the mean absolute error (MAE) between the predicted component content and the true component content is calculated, and the calculation expression is:

MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|,
step 403, after inputting the test set into the classifier, the root mean square error (RMSE) between the predicted component content and the true component content is calculated. Compared with the MAE, the RMSE amplifies samples with a larger error in the model-predicted fiber content, so a smaller RMSE value is more meaningful. The calculation expression is:

RMSE = √( (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² ).
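The three indexes of steps 401-403 follow directly from the definitions above; a minimal NumPy sketch (function names and the sample values are illustrative, not data from the patent):

```python
import numpy as np

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot (step 401)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error (step 402)."""
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Root mean square error (step 403); penalizes large errors more than MAE."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Illustrative fiber contents in percent (not data from the patent).
y_true = np.array([10.0, 20.0, 30.0, 40.0])
y_pred = np.array([12.0, 18.0, 33.0, 41.0])
print(round(r_squared(y_true, y_pred), 4),
      round(mae(y_true, y_pred), 4),
      round(rmse(y_true, y_pred), 4))       # 0.964 2.0 2.1213
```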
and 5, quantitatively characterizing the components of the textile by using a classifier.
The near infrared spectrum data of the textile, after pretreatment (such as spectrum smoothing, normalization and the like), are input into the classifier to obtain the percentages of the six target components in the textile.
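As an illustration of the pretreatment mentioned above, moving-average smoothing followed by linear (min-max) normalization might look like the following sketch. The window size and the use of a moving average are assumptions; the text only specifies "spectrum smoothing, normalization and the like", and claim 5 names linear normalization by a linear function:

```python
import numpy as np

def preprocess_spectrum(spectrum: np.ndarray, window: int = 5) -> np.ndarray:
    """Moving-average smoothing followed by linear min-max normalization.
    Window size 5 is an assumption, not specified in the patent."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(spectrum, kernel, mode="same")  # same-length output
    lo, hi = smoothed.min(), smoothed.max()
    return (smoothed - lo) / (hi - lo)   # maps values onto [0, 1]

raw = np.sin(np.linspace(0.0, 3.0, 61))   # stand-in for a 61-point NIR spectrum
x = preprocess_spectrum(raw)
print(x.shape, float(x.min()), float(x.max()))   # (61,) 0.0 1.0
```

The resulting 61-point vector is what the classifier consumes after being reshaped to 1×1×61, matching the z×1×61 input shape of step 31.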
Claims (6)
1. The quantitative characterization method of the textile components based on the multilayer one-dimensional CNN depth network is characterized by comprising the following steps:
selecting textile spectrum data composed of target components from near infrared spectrum sample data, preprocessing the textile spectrum data, and dividing the preprocessed textile spectrum data into a training set and a testing set;
constructing a GAN model, and training the GAN model by using training set data and random noise samples; wherein the GAN model includes a generator and a discriminator;
constructing a multi-pooling fusion one-dimensional convolution kernel neural network, generating pseudo sample data by using a trained generator, training the multi-pooling fusion one-dimensional convolution kernel neural network by using the pseudo sample data, and taking the trained multi-pooling fusion one-dimensional convolution kernel neural network as a classifier; wherein the output of the classifier is the content of the target component;
evaluating the classifier by using the test set, and calculating the precision index;
quantitative characterization of the composition of the textile is performed using a classifier.
2. The method for quantitatively characterizing textile components based on a multilayer one-dimensional CNN depth network according to claim 1, wherein constructing a multi-pooling fusion one-dimensional convolutional kernel neural network comprises:
the input layer is connected with the input end of a first convolution kernel; the output end of the first convolution kernel is sequentially connected with the first maximum pooling layer, the second convolution kernel, the second maximum pooling layer, the third convolution kernel and the third maximum pooling layer; the output end of the first convolution kernel is also sequentially connected with the first average pooling layer, the fourth convolution kernel, the second average pooling layer, the fifth convolution kernel and the third average pooling layer; the output end of the third maximum pooling layer and the output end of the third average pooling layer are added, the sum is connected to the input end of the fully connected layer, the output end of the fully connected layer is connected with the activation function layer, and numerical values representing the content of the target components in a sample are output through the activation function layer.
3. The method for quantitatively characterizing textile components based on a multi-layer one-dimensional CNN depth network according to claim 2, wherein training the GAN model using training set data and random noise samples comprises:
step 21, defining the loss functions and optimizers; binary cross-entropy loss functions are used, the optimizers use the Adam optimizer, and the generator and the discriminator are each provided with their own optimizer;
step 22, training the discriminator:
step 221, for the input real data sample, calculating the output and loss of the real sample by the discriminator;
step 222, the generator generates the same number of dummy data samples as the real data samples, and calculates the output and loss of the dummy data samples by the discriminator;
step 223, adding the real sample loss and the pseudo data sample loss as the total loss of the discriminator, and carrying out back propagation and parameter updating;
the discriminator is trained by alternately repeating steps 221-223;
step 23, training generator:
step 231, the generator generates the same number of dummy data samples as the real data samples again, and calculates the output and loss of the dummy data samples by the discriminator;
step 232, setting the pseudo data loss as the loss of the real data, and then carrying out back propagation and parameter updating;
training of the generator model is achieved by alternately repeating steps 231-232.
4. A method for quantitatively characterizing textile components based on a multi-layer one-dimensional CNN depth network according to claim 3, wherein training the multi-pooling fusion one-dimensional convolution kernel neural network using pseudo-sample data comprises the following steps:
step 31, setting the number of samples used in each training iteration to z, and reshaping the spectrum data of the pseudo textile fabric generated by the generator from z×61 to z×1×61; wherein z is an integer;
step 32, performing feature extraction on the original z×1×61 tensor by using 16 first convolution kernels with the size of 3×1, and outputting a z×16×59 tensor;
step 33, performing pooling operation on the output z×16×59 tensor by using the first maximum pooling layer and the first average pooling layer respectively, and outputting two z×16×19 tensors; wherein the first maximum pooling layer and the first average pooling layer are each of size 1×3;
step 34, performing feature extraction on the two z×16×19 tensors respectively by using 64 convolution kernels with the size of 3×1, and outputting to obtain two z×64×17 tensors;
step 35, performing pooling operation on the two output z×64×17 tensors by using the second maximum pooling layer and the second average pooling layer respectively, and outputting two z×64×5 tensors; wherein the second maximum pooling layer and the second average pooling layer are each of size 1×3;
step 36, performing feature extraction on the two z×64×5 tensors respectively by using 64 convolution kernels with the size of 3×1, and outputting to obtain two z×64×3 tensors;
step 37, performing pooling operation on the two output z×64×3 tensors by using the third maximum pooling layer and the third average pooling layer respectively, obtaining two z×64×1 tensors, adding them, and outputting one z×64×1 tensor; wherein the third maximum pooling layer and the third average pooling layer are each of size 1×3;
step 38, inputting the output tensor of z×64×1 into the fully connected layer, outputting the tensor with z×6 as the output result, inputting the output tensor into the softmax function layer, and mapping the tensor value output into the (0, 1) interval; wherein the number of neurons in the fully connected layer is 6.
5. The quantitative characterization method of textile components based on the multilayer one-dimensional CNN depth network according to claim 1, wherein the target components comprise cotton, terylene, wool, viscose fiber, spandex and nylon, the textile spectrum data composed of the six components are selected from the near infrared spectrum sample data, and the data are subjected to linear normalization processing by using a linear function.
6. The method for quantitatively characterizing textile components based on a multi-layer one-dimensional CNN depth network according to claim 1, wherein evaluating the classifier using a test set, calculating the precision index comprises:
the selected evaluation indexes comprise the coefficient of determination (R-squared), the mean absolute error (MAE) and the root mean square error (RMSE), and their calculation expressions are respectively:

R² = 1 − Σ_{i=1}^{n} (y_i − ŷ_i)² / Σ_{i=1}^{n} (y_i − ȳ)²,

MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|,

RMSE = √( (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² ),

wherein y_i is the real content of textile fibers, ŷ_i is the model-predicted content of textile fibers, ȳ is the average value of the textile fiber content, and n is the number of samples.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410010523.8A CN117524340A (en) | 2024-01-04 | 2024-01-04 | Textile component quantitative characterization method based on multilayer one-dimensional CNN depth network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117524340A (en) | 2024-02-06
Family
ID=89751614
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410010523.8A Pending CN117524340A (en) | 2024-01-04 | 2024-01-04 | Textile component quantitative characterization method based on multilayer one-dimensional CNN depth network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117524340A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109635141A (en) * | 2019-01-29 | 2019-04-16 | 京东方科技集团股份有限公司 | For retrieving method, electronic equipment and the computer readable storage medium of image |
CN113970532A (en) * | 2021-10-09 | 2022-01-25 | 池明旻 | Fabric fiber component detection system and prediction method based on near infrared spectrum |
KR20230082701A (en) * | 2021-12-01 | 2023-06-09 | 세종대학교산학협력단 | Method and apparatus for detecting cerebral microbleeds based on transfer learning |
Non-Patent Citations (2)
Title |
---|
Zhu Jie: "Recognition and Analysis of Multi-feature Multimedia Data", 30 June 2021, University of Electronic Science and Technology of China Press, page 12 *
Zhao Xuejun: "Hyperspectral Remote Sensing Data Processing: Compression and Fusion", 31 December 2021, Beijing University of Posts and Telecommunications Press, page 200 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117740727A (en) * | 2024-02-19 | 2024-03-22 | 南京信息工程大学 | Textile component quantitative inversion method based on infrared hyperspectrum |
CN117740727B (en) * | 2024-02-19 | 2024-05-14 | 南京信息工程大学 | Textile component quantitative inversion method based on infrared hyperspectrum |
CN117786617A (en) * | 2024-02-27 | 2024-03-29 | 南京信息工程大学 | Cloth component analysis method and system based on GA-LSTM hyperspectral quantitative inversion |
CN117786617B (en) * | 2024-02-27 | 2024-04-30 | 南京信息工程大学 | Cloth component analysis method and system based on GA-LSTM hyperspectral quantitative inversion |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117524340A (en) | Textile component quantitative characterization method based on multilayer one-dimensional CNN depth network | |
CN109493287B (en) | Deep learning-based quantitative spectral data analysis processing method | |
CN106124449B (en) | A kind of soil near-infrared spectrum analysis prediction technique based on depth learning technology | |
CN107219188B (en) | A method of based on the near-infrared spectrum analysis textile cotton content for improving DBN | |
CN105630743A (en) | Spectrum wave number selection method | |
CN101520412A (en) | Near infrared spectrum analyzing method based on isolated component analysis and genetic neural network | |
CN113686804B (en) | Textile fiber component nondestructive cleaning analysis method based on deep regression network | |
CN104965973B (en) | A kind of Apple Mould Core multiple-factor Non-Destructive Testing discrimination model and method for building up thereof | |
EP3290908B1 (en) | Unknown sample determining method | |
CN110702656A (en) | Vegetable oil pesticide residue detection method based on three-dimensional fluorescence spectrum technology | |
CN114091539A (en) | Multi-mode deep learning rolling bearing fault diagnosis method | |
CN111562235A (en) | Method for rapidly identifying black-leaf outbreak disease and infection degree of tobacco leaves based on near infrared spectrum | |
CN111693487A (en) | Fruit sugar degree detection method and system based on genetic algorithm and extreme learning machine | |
CN113970532B (en) | Fabric fiber component detection system and prediction method based on near infrared spectrum | |
CN105911000A (en) | Characteristic wave band based blood spot egg on-line detection method | |
CN109142251B (en) | LIBS quantitative analysis method of random forest auxiliary artificial neural network | |
CN107247033A (en) | Differentiate the method for Huanghua Pear maturity based on rapid decay formula life cycle algorithm and PLSDA | |
CN110376154A (en) | Fruit online test method and system based on spectrum correction | |
CN108827925A (en) | Edible vegetable oil true and false rapid detection method and detection device based on optical fiber type fluorescence spectroscopy technique | |
CN117538287A (en) | Method and device for nondestructive testing of phosphorus content of Huangguan pear | |
CN108627498A (en) | A kind of flour doping quantitative detecting method of multispectral data fusion | |
CN111912823A (en) | Multi-component pesticide residue fluorescence detection analysis method | |
CN116858822A (en) | Quantitative analysis method for sulfadiazine in water based on machine learning and Raman spectrum | |
CN114354666B (en) | Soil heavy metal spectral feature extraction and optimization method based on wavelength frequency selection | |
CN114414523A (en) | Textile fiber component qualitative method based on automatic waveband selection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||