CN115375600B - Reconstructed image quality measuring method and system based on self-encoder - Google Patents


Info

Publication number
CN115375600B
CN115375600B (application CN202211288588.6A)
Authority
CN
China
Prior art keywords
image set
encoder
feature
original image
reconstructed image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211288588.6A
Other languages
Chinese (zh)
Other versions
CN115375600A (en)
Inventor
***
赵峰
庄莉
梁懿
秦亮
王秋琳
徐杰
吕君玉
刘浩锋
余金沄
何敏
刘开培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
State Grid Information and Telecommunication Co Ltd
Fujian Yirong Information Technology Co Ltd
Original Assignee
Wuhan University WHU
State Grid Information and Telecommunication Co Ltd
Fujian Yirong Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU, State Grid Information and Telecommunication Co Ltd, Fujian Yirong Information Technology Co Ltd filed Critical Wuhan University WHU
Priority to CN202211288588.6A priority Critical patent/CN115375600B/en
Publication of CN115375600A publication Critical patent/CN115375600A/en
Application granted granted Critical
Publication of CN115375600B publication Critical patent/CN115375600B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06N 3/02, G06N 3/08: Neural networks; learning methods
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30168: Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a reconstructed image quality measuring method based on a self-encoder, which comprises the following steps: collecting a plurality of original images to generate an original image set; constructing a self-encoder network comprising an encoder and a decoder; inputting original images in the original image set as training samples into a self-encoder network to perform image reproduction to obtain reproduction images, calculating the reproduction loss between the reproduction images and the corresponding original images, and training the self-encoder network based on the reproduction loss to complete the training of the self-encoder network; taking out the coder in the trained self-coder network as a feature extractor; obtaining a reconstructed image set, respectively inputting the original image set and the reconstructed image set into a feature extractor, and respectively obtaining feature distribution of the original image set and feature distribution of the reconstructed image set; and calculating the Frechet distance of the characteristic distribution of the original image set and the characteristic distribution of the reconstructed image set, and measuring the data quality of the reconstructed image set according to the Frechet distance.

Description

Reconstructed image quality measuring method and system based on self-encoder
Technical Field
The invention relates to a reconstructed image quality measuring method and system based on a self-encoder, and belongs to the technical field of image processing.
Background
In the field of deep learning, collecting image data often consumes a large amount of manpower and material resources. To reduce the amount of data that must be collected, new image data with similar but not identical characteristics are manufactured artificially by methods such as matting. The artificially manufactured image data form a reconstructed image set, which shares a similar background with the original image set but differs in specific characteristics. The image data set strongly influences the model, and making a reconstructed image set inevitably introduces interference such as noise and deformation while also shifting the image distribution to some extent; how to quantify the quality and distribution difference between the reconstructed image set and the original image set is therefore a problem that needs to be solved.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a reconstructed image quality measuring method and system based on a self-encoder.
The technical scheme of the invention is as follows:
in one aspect, the present invention provides a reconstructed image quality measurement method based on an auto-encoder, including the following steps:
collecting a plurality of original images, and preprocessing the original images to generate an original image set;
constructing a self-encoder network comprising an encoder and a decoder;
inputting original images in an original image set as training samples into a self-encoder network for image reproduction to obtain reproduction images, constructing a loss function to calculate the reproduction loss between the reproduction images and the corresponding original images, and performing iterative training on the self-encoder network based on the calculated reproduction loss until an iteration termination condition is reached to finish the training of the self-encoder network;
taking out the encoder in the trained self-encoder network as a feature extractor;
reconstructing images in the original image set to obtain a reconstructed image set, and respectively inputting the original image set and the reconstructed image set into a feature extractor to respectively obtain feature distribution of the original image set and feature distribution of the reconstructed image set;
and calculating the Frechet distance of the characteristic distribution of the original image set and the characteristic distribution of the reconstructed image set, and measuring the data quality of the reconstructed image set according to the calculated Frechet distance.
As a preferred embodiment, the method for constructing the self-encoder network including the encoder and the decoder specifically includes:
constructing network basic modules, including a CBL module and a C3 module, wherein the CBL module consists of a stack of a convolution layer, a BN batch-normalization layer and a LeakyReLU activation layer, and the C3 module consists of a stack of three consecutive convolutional layers;
defining an encoder structure, wherein the encoder comprises a CBL modules, b down-sampling modules and a C3 module, and is used for inputting an original image x and outputting the corresponding feature vector z;
defining a decoder structure, wherein the decoder comprises a CBL modules, b up-sampling modules and a C3 module, and is used for inputting the feature vector z and performing image reproduction according to z to generate a reproduced image x̂.
As a preferred embodiment, the method for iteratively training the self-encoder network based on the calculated reproduction loss includes:
constructing a mean square error loss function:

L(x, x̂) = ‖x − x̂‖² = ‖x − D(E(x))‖²

where x is the original image input to the encoder, x̂ = D(z) is the reproduced image generated by the decoder, z = E(x) is the feature vector output by the encoder, and D(·) is the function by which the decoder restores the feature vector to an image;
and updating the parameters of the self-encoder network by a back-propagation algorithm according to the loss value calculated for each pair of original and reproduced images, repeating this step until the self-encoder network converges or the set number of iterations is reached.
As a preferred embodiment, the specific method for inputting the original image set and the reconstructed image set into the feature extractor to obtain their respective feature distributions includes:
inputting the original image set into the feature extractor and extracting features from each original image to obtain m n-dimensional feature vectors Zx; averaging each dimension of the m feature vectors Zx to obtain an n-dimensional mean vector μx; calculating the n × n covariance matrix Σx of the original image features from the m n-dimensional feature vectors; and taking the mean vector μx together with the covariance matrix Σx as the feature distribution of the original image set;
inputting the reconstructed image set into the feature extractor and extracting features from each reconstructed image to obtain m n-dimensional feature vectors Zg; averaging each dimension of the m feature vectors Zg to obtain an n-dimensional mean vector μg; calculating the n × n covariance matrix Σg of the reconstructed image features from the m n-dimensional feature vectors; and taking the mean vector μg together with the covariance matrix Σg as the feature distribution of the reconstructed image set;
the method for calculating the Frechet distance between the original image set feature distribution and the reconstructed image set feature distribution, and measuring the data quality of the reconstructed image set accordingly, specifically includes:
calculating the Frechet distance between the two feature distributions according to the following formula:

F² = ‖μx − μg‖² + Tr(Σx + Σg − 2(ΣxΣg)^(1/2))

where μx is the n-dimensional mean vector of the original image set, μg is the n-dimensional mean vector of the reconstructed image set, Σx is the original image feature covariance matrix, Σg is the reconstructed image feature covariance matrix, and Tr denotes the trace, i.e. the sum of the elements on the diagonal of the matrix;
and measuring the data quality of the reconstructed image set according to the calculated Frechet distance: the smaller the distance, the closer the reconstructed image set is to the original image set, and the better the data quality of the reconstructed image set.
In another aspect, the present invention further provides a system for measuring quality of reconstructed images based on an auto-encoder, including:
the data set construction module is used for collecting a plurality of original images and preprocessing the original images to generate an original image set;
the self-encoder network construction module is used for constructing a self-encoder network comprising an encoder and a decoder;
the training module is used for inputting the original images in the original image set as training samples into the self-encoder network to carry out image reproduction to obtain reproduction images, constructing a loss function to calculate the reproduction loss between the reproduction images and the corresponding original images, carrying out iterative training on the self-encoder network based on the calculated reproduction loss until an iteration termination condition is reached, and finishing the training of the self-encoder network;
the characteristic extractor acquisition module is used for taking out the trained encoder in the self-encoder network as a characteristic extractor;
the characteristic distribution calculation module is used for reconstructing the images in the original image set to obtain a reconstructed image set, inputting the original image set and the reconstructed image set into the characteristic extractor respectively, and obtaining the characteristic distribution of the original image set and the characteristic distribution of the reconstructed image set respectively;
and the quality measuring module is used for calculating the Frechet distance of the original image set characteristic distribution and the reconstructed image set characteristic distribution and measuring the data quality of the reconstructed image set according to the calculated Frechet distance.
As a preferred embodiment, the self-encoder network building module specifically includes:
the basic module building unit, used for constructing network basic modules including a CBL module and a C3 module, wherein the CBL module consists of a stack of a convolution layer, a BN batch-normalization layer and a LeakyReLU activation layer, and the C3 module consists of a stack of three consecutive convolutional layers;
the encoder structure construction unit, used for defining an encoder structure, wherein the encoder comprises a CBL modules, b down-sampling modules and a C3 module, and is used for inputting an original image x and outputting the corresponding feature vector z;
the decoder structure construction unit, used for defining a decoder structure, wherein the decoder comprises a CBL modules, b up-sampling modules and a C3 module, and is used for inputting the feature vector z and performing image reproduction according to z to generate a reproduced image x̂.
As a preferred embodiment, the training module is specifically configured to:
constructing a mean square error loss function:

L(x, x̂) = ‖x − x̂‖² = ‖x − D(E(x))‖²

where x is the original image input to the encoder, x̂ = D(z) is the reproduced image generated by the decoder, z = E(x) is the feature vector output by the encoder, and D(·) is the function by which the decoder restores the feature vector to an image;
and updating the parameters of the self-encoder network by a back-propagation algorithm according to the loss value calculated for each pair of original and reproduced images, repeating this step until the self-encoder network converges or the set number of iterations is reached.
As a preferred embodiment, the feature distribution calculation module includes:
the original image set feature distribution calculation module, used for inputting the original image set into the feature extractor, extracting features from each original image to obtain m n-dimensional feature vectors Zx, averaging each dimension of the m feature vectors Zx to obtain an n-dimensional mean vector μx, calculating the n × n covariance matrix Σx of the original image features from the m n-dimensional feature vectors, and taking the mean vector μx together with the covariance matrix Σx as the feature distribution of the original image set;
the reconstructed image set feature distribution calculation module, used for inputting the reconstructed image set into the feature extractor, extracting features from each reconstructed image to obtain m n-dimensional feature vectors Zg, averaging each dimension of the m feature vectors Zg to obtain an n-dimensional mean vector μg, calculating the n × n covariance matrix Σg of the reconstructed image features from the m n-dimensional feature vectors, and taking the mean vector μg together with the covariance matrix Σg as the feature distribution of the reconstructed image set;
the quality measurement module is specifically configured to:
calculating the Frechet distance of the original image set characteristic distribution and the reconstructed image set characteristic distribution according to the following formula:
Figure 163729DEST_PATH_IMAGE014
wherein the content of the first and second substances,
Figure 533530DEST_PATH_IMAGE010
is an n-dimensional vector of the original image set, is->
Figure 830650DEST_PATH_IMAGE012
For reconstructing an n-dimensional vector of an image set>
Figure 893284DEST_PATH_IMAGE016
For the original image feature covariance matrix, < >>
Figure 767699DEST_PATH_IMAGE018
For reconstructing the image characteristic covariance matrix, tr represents the sum of elements on the diagonal of the matrix;
and measuring the data quality of the reconstructed image set according to the calculated Frechet distance, wherein the smaller the calculated Frechet distance is, the closer the reconstructed image set is to the original image set, and the better the data quality of the reconstructed image set is.
In yet another aspect, the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the self-encoder-based reconstructed image quality measuring method according to any embodiment of the present invention.
In yet another aspect, the present invention further provides a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the self-encoder-based reconstructed image quality measuring method according to any embodiment of the present invention.
The invention has the following beneficial effects:
the reconstructed image quality weighing method based on the self-encoder trains the self-encoder network by using the original data set, takes the encoder in the trained self-encoder as a feature extractor to extract image features, does not need to additionally add labels, and reduces the workload of data labeling; the original image set characteristic distribution and the reconstructed image set characteristic distribution are respectively obtained through the characteristic extractor, the difference between the data sets is weighed through a statistical method, the limitation caused by only measuring the quality of a single piece of data is avoided, finally, the quality and the distribution difference of the image set are quantified according to the calculated Frechet distance, and the data quality of the reconstructed image set can be rapidly compared.
Drawings
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
FIG. 2 is a diagram illustrating an example of computing an image feature covariance matrix in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
It should be understood that the step numbers used herein are for convenience of description only and are not intended as limitations on the order in which the steps are performed.
It is to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The terms "comprises" and "comprising" indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The term "and/or" refers to any and all possible combinations of one or more of the associated listed items and includes such combinations.
The first embodiment is as follows:
referring to fig. 1, a reconstructed image quality measurement method based on an auto-encoder includes the following steps:
s100, collecting a plurality of original images, and preprocessing the original images to generate an original image set; the preprocessing includes unifying resolutions of all original images, in this embodiment, unifying all the original images into a resolution of 256 × 3;
s200, constructing a self-encoder network comprising an encoder (encoder) and a decoder (decoder); the self-encoder network takes the input information as a learning target and can perform feature learning on the input information;
s300, inputting original images in the original image set as training samples into a self-encoder network for image reproduction to obtain reproduction images, constructing a loss function to calculate reproduction loss between the reproduction images and the corresponding original images, and performing iterative training on the self-encoder network based on the calculated reproduction loss until an iteration termination condition is reached to finish training of the self-encoder network;
s400, taking out the encoder in the trained self-encoder network as a feature extractor;
s500, carrying out reconstruction processing on the images in the original image set to obtain a reconstructed image set, wherein the reconstruction processing is carried out on the original images, such as PS conversion processing, color conversion processing, gray level adjustment processing, brightness adjustment processing and the like; respectively inputting the original image set and the reconstructed image set into a feature extractor, and respectively obtaining feature distribution of the original image set and feature distribution of the reconstructed image set;
s600, calculating the Frechet distance of the original image set characteristic distribution and the reconstructed image set characteristic distribution, and measuring the data quality of the reconstructed image set according to the calculated Frechet distance; the smaller the calculated Frechet distance, the closer the reconstructed image set is to the original image set, and the better the quality is. For example, assuming that there are an original image set a, a reconstructed image set B, and a reconstructed image set C, if the data quality of the data set is better than that of the reconstructed image set B and the reconstructed image set C, the data sets a, B, and C need to be input into an encoder, so as to obtain the feature distribution of the original image set a, the feature distribution of the reconstructed image set B, and the feature distribution of the reconstructed image set C;
respectively calculating the Frechet distance F1 between the feature distribution of the original image set A and the feature distribution of the reconstructed image set B and the Frechet distance F2 between the feature distribution of the original image set A and the feature distribution of the reconstructed image set C; comparing the Frechet distance F1 with the Frechet distance F2 can determine which reconstructed image set has better data quality, for example, if the Frechet distance F2 is smaller than the Frechet distance F1, the data quality of the reconstructed image set C is better than that of the reconstructed image set B.
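As an illustrative sketch (not part of the patent text), the comparison of F1 and F2 can be computed with NumPy once feature vectors have been extracted by the encoder; the function names and the synthetic feature arrays below are assumptions for demonstration, and the trace term Tr((ΣxΣg)^(1/2)) is evaluated through the equivalent symmetric form Tr((Σx^(1/2) Σg Σx^(1/2))^(1/2)):

```python
import numpy as np

def _sqrtm_psd(mat):
    # Matrix square root of a symmetric PSD matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # F^2 = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2)),
    # using Tr((S1 S2)^(1/2)) = Tr((S1^(1/2) S2 S1^(1/2))^(1/2)).
    s1_half = _sqrtm_psd(sigma1)
    covmean = _sqrtm_psd(s1_half @ sigma2 @ s1_half)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

def feature_stats(features):
    # features: (m, n) array of encoder outputs for one image set.
    mu = features.mean(axis=0)
    sigma = np.cov(features, rowvar=False)  # n x n sample covariance
    return mu, sigma

# Synthetic stand-ins for encoder features of sets A, B, C.
rng = np.random.default_rng(0)
feats_a = rng.normal(0.0, 1.0, size=(500, 8))  # original set A
feats_b = rng.normal(0.5, 1.0, size=(500, 8))  # reconstructed set B (shifted far)
feats_c = rng.normal(0.1, 1.0, size=(500, 8))  # reconstructed set C (closer to A)

f1 = frechet_distance(*feature_stats(feats_a), *feature_stats(feats_b))
f2 = frechet_distance(*feature_stats(feats_a), *feature_stats(feats_c))
print(f1, f2)  # F2 should come out smaller: C's features sit closer to A's
```

The symmetric-form trick keeps the matrix square root numerically stable on real covariance estimates, where the raw product ΣxΣg is generally not symmetric.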
In this embodiment, the reconstructed image set quality measuring method based on the Frechet distance uses the encoder of a trained self-encoder as the feature extractor, so no additional labels are needed and the data-annotation workload is reduced; the original and reconstructed image sets are input into the feature extractor to obtain their respective feature distributions, the difference between the data sets is measured statistically, avoiding the limitation of judging the quality of only a single piece of data, and finally the quality and distribution difference of the image sets are quantified by the Frechet distance, allowing the data quality of reconstructed image sets to be compared quickly.
As a preferred implementation manner of this embodiment, in step S200, the method for constructing a self-encoder network including an encoder and a decoder specifically includes:
s201, constructing a network basic module which comprises a CBL (Conv BatchNatchNorm LeakyReLU) module and a C3 module, wherein the CBL module consists of a convolution layer, a BN batch normalization layer and a LeakyReLU activation layer in a stacked mode; the BatchNorm layer can pull back the characteristic value distribution to the standard normal distribution again, so that the gradient is enlarged, the gradient is prevented from disappearing, and convergence is accelerated. The input of a batch at a certain layer of the network is recorded as
Figure 206771DEST_PATH_IMAGE021
Wherein->
Figure 109130DEST_PATH_IMAGE023
Represents a sample, and n is the number of the batch data. So that the mean and variance of the elements in the batch of data are &>
Figure 342665DEST_PATH_IMAGE025
And &>
Figure 704377DEST_PATH_IMAGE027
Standardized for each element>
Figure 415981DEST_PATH_IMAGE029
,/>
Figure 953272DEST_PATH_IMAGE031
Is a smaller number set to prevent division by 0 errors, e.g.
Figure 357709DEST_PATH_IMAGE033
(ii) a In order to compensate the nonlinear expression capability lost by the network due to standardization, scaling and offset operations are carried out to realize identity transformation, namely that the network output->
Figure 206716DEST_PATH_IMAGE035
In which>
Figure 846645DEST_PATH_IMAGE037
,/>
Figure 97498DEST_PATH_IMAGE039
(ii) a The LeakyReLU calculation formula is:
Figure 672835DEST_PATH_IMAGE041
wherein, leak is a very small constant, and the use of the leak ReLU activation function can prevent the negative axis information from being lost completely, thereby avoiding godNecrosis of channel; the C3 module consists of a stack of three continuous convolutional layers.
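The standardization, scale-and-shift, and LeakyReLU formulas above can be sketched in NumPy as follows; `batchnorm_forward` and `leaky_relu` are illustrative names, and the sketch covers only a single forward pass, not the running statistics or gradients of a full BN layer:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # x: (n, d) batch; standardize each feature, then scale and shift.
    mu = x.mean(axis=0)
    var = x.var(axis=0)                    # batch variance, as in BN
    x_hat = (x - mu) / np.sqrt(var + eps)  # standardized activations
    return gamma * x_hat + beta

def leaky_relu(x, leak=0.01):
    # Keeps a small slope on the negative axis to avoid dead neurons.
    return np.where(x >= 0, x, leak * x)

rng = np.random.default_rng(1)
x = rng.normal(5.0, 3.0, size=(64, 4))     # a batch far from zero mean / unit variance
y = batchnorm_forward(x, gamma=np.ones(4), beta=np.zeros(4))
# With gamma = 1, beta = 0, each feature comes out with ~zero mean and ~unit variance.
print(y.mean(axis=0), y.var(axis=0))
```

Setting gamma and beta back to √(σ_B² + ε) and μ_B would undo the standardization, which is the identity-transformation property described above.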
S202, defining the encoder structure; in this embodiment the encoder comprises 6 CBL modules, 6 down-sampling modules and a C3 module, and is used to input a preprocessed original image x and output the corresponding n-dimensional feature vector z; the encoder encodes the input image, achieving dimensionality reduction and feature extraction.
S203, defining the decoder structure; in this embodiment the decoder comprises 6 CBL modules, 6 up-sampling modules and a C3 module, and is used to input the feature vector z and perform image reproduction according to z, generating the reproduced image x̂. The decoder restores the input feature vector into a reproduced image and ensures that the feature vector is not distorted.
As a preferred implementation of this embodiment, in step S300 the loss function is constructed to calculate the reproduction loss between each reproduced image and the corresponding original image, and the method for iteratively training the self-encoder network based on the calculated loss is specifically as follows:
S301, constructing a mean square error loss function:

L(x, x̂) = ‖x − x̂‖² = ‖x − D(E(x))‖²

where x is the original image input to the encoder, x̂ = D(z) is the reproduced image generated by the decoder, z = E(x) is the feature vector output by the encoder, and D(·) is the function by which the decoder restores the feature vector to an image;
S302, calculating the gradient of each weight in the self-encoder network by a back-propagation algorithm according to the loss value calculated for each pair of original and reproduced images, updating the weights of the self-encoder network with a suitable learning rate lr, and repeating this step until the self-encoder network converges or the set number of iterations is reached.
As a preferred implementation of this embodiment, in step S500 the specific method for inputting the original image set and the reconstructed image set into the feature extractor to obtain their respective feature distributions includes:
S501, inputting the original image set into the feature extractor and extracting features from each original image to obtain m n-dimensional feature vectors Zx; averaging each dimension of the m feature vectors Zx to obtain the n-dimensional mean vector μx; calculating the n × n covariance matrix Σx of the original image features from the m n-dimensional feature vectors; and taking the mean vector μx together with the covariance matrix Σx as the feature distribution of the original image set;
with specific reference to fig. 2, for example: inputting the first original image into the feature extractor to obtain the four-dimensional feature vector [1.0, 2.0, 3.0, 4.0], inputting the second original image to obtain [1.1, 2.1, 3.1, 4.1], and inputting the third original image to obtain [1.2, 2.2, 3.2, 4.2]; averaging each dimension of the three four-dimensional feature vectors:
[(1.0+1.1+1.2)/3, (2.0+2.1+2.2)/3, (3.0+3.1+3.2)/3, (4.0+4.1+4.2)/3]
to obtain the four-dimensional vector μx = [1.1, 2.1, 3.1, 4.1];
And continuously calculating the image feature covariance matrix from the three four-dimensional feature vectors, using cov(X, Y) = E[(X − E[X])(Y − E[Y])], where E[X] represents the expectation of the variable X; the unbiased sample estimate below divides by m − 1 = 2:
cov(1,1)=[(1.0-1.1)(1.0-1.1)+(1.1-1.1)(1.1-1.1)+(1.2-1.1)(1.2-1.1)]/2=0.01;
cov(1,2)=[(1.0-1.1)(2.0-2.1)+(1.1-1.1)(2.1-2.1)+(1.2-1.1)(2.2-2.1)]/2=0.01;
cov(1,3)=[(1.0-1.1)(3.0-3.1)+(1.1-1.1)(3.1-3.1)+(1.2-1.1)(3.2-3.1)]/2=0.01;
cov(1,4)=[(1.0-1.1)(4.0-4.1)+(1.1-1.1)(4.1-4.1)+(1.2-1.1)(4.2-4.1)]/2=0.01;
By analogy, the 4 × 4 covariance matrix of the features of the original image is calculated:
Σx =
[0.01 0.01 0.01 0.01
 0.01 0.01 0.01 0.01
 0.01 0.01 0.01 0.01
 0.01 0.01 0.01 0.01]
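The worked example can be checked directly with NumPy; note that `np.cov` uses the unbiased divisor m − 1 = 2 by default, which is what yields the 0.01 entries:

```python
import numpy as np

# Reproduces the worked example of step S501: three 4-dimensional feature
# vectors, their per-dimension mean, and the 4x4 sample covariance matrix.
Zx = np.array([[1.0, 2.0, 3.0, 4.0],
               [1.1, 2.1, 3.1, 4.1],
               [1.2, 2.2, 3.2, 4.2]])
mu_x = Zx.mean(axis=0)                 # per-dimension mean -> [1.1, 2.1, 3.1, 4.1]
sigma_x = np.cov(Zx, rowvar=False)     # 4x4 covariance, every entry 0.01
print(mu_x)
print(sigma_x)
```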
s502, inputting the reconstructed image set into the feature extractor, extracting features of each reconstructed image in the reconstructed image set through the feature extractor to obtain m n-dimensional feature vectors Zg, and averaging each dimension of the m feature vectors Zg to obtain an n-dimensional vector μg; calculating the n × n reconstructed image feature covariance matrix from the m n-dimensional feature vectors, and taking the n-dimensional vector μg and the reconstructed image feature covariance matrix as the feature distribution of the reconstructed image set; the methods for calculating the n-dimensional vector μg and the reconstructed image feature covariance matrix are the same as those for the n-dimensional vector μx and the original image feature covariance matrix.
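Steps S501-S502 apply the same summary to both image sets, which can be sketched as one small helper; the random feature vectors below merely stand in for feature-extractor outputs and are purely illustrative:

```python
import numpy as np

# Sketch of S501/S502: summarize m n-dimensional feature vectors as the pair
# (mu, Sigma) used as the "feature distribution" of an image set.
def feature_distribution(feats):
    """feats: (m, n) array of feature vectors -> ((n,) mean, (n, n) covariance)."""
    mu = feats.mean(axis=0)
    sigma = np.cov(feats, rowvar=False)
    return mu, sigma

rng = np.random.default_rng(1)
feats = rng.normal(size=(100, 8))      # m = 100 vectors, n = 8 dimensions (assumed)
mu, sigma = feature_distribution(feats)
print(mu.shape, sigma.shape)           # (8,) (8, 8)
```

The same function is applied once to the original-set features and once to the reconstructed-set features.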
In step S600, the method for calculating the Fréchet distance between the feature distribution of the original image set and the feature distribution of the reconstructed image set, and for measuring the data quality of the reconstructed image set according to the calculated Fréchet distance, specifically includes:
s601, calculating the Fréchet distance between the original image set feature distribution and the reconstructed image set feature distribution according to the following formula:

d²((μx, Σx), (μg, Σg)) = ||μx − μg||² + Tr(Σx + Σg − 2(ΣxΣg)^(1/2))

wherein μx is the n-dimensional vector of the original image set, μg is the n-dimensional vector of the reconstructed image set, Σx is the original image feature covariance matrix, Σg is the reconstructed image feature covariance matrix, and Tr represents the sum of the elements on the diagonal of the matrix;
s602, measuring the data quality of the reconstructed image set according to the calculated Fréchet distance, wherein the smaller the calculated Fréchet distance is, the closer the reconstructed image set is to the original image set, and the better the data quality of the reconstructed image set.
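The distance of step S601 can be sketched in plain NumPy. The eigendecomposition-based matrix square root, via the identity Tr((ΣxΣg)^(1/2)) = Tr((Σg^(1/2) Σx Σg^(1/2))^(1/2)) for symmetric positive-semidefinite matrices, is one standard implementation choice and is not mandated by the patent:

```python
import numpy as np

# Sketch of S601-S602: Frechet distance between Gaussian feature
# distributions (mu_x, Sigma_x) and (mu_g, Sigma_g),
#   d^2 = ||mu_x - mu_g||^2 + Tr(Sigma_x + Sigma_g - 2 (Sigma_x Sigma_g)^(1/2)).
def _sqrtm_psd(a):
    # matrix square root of a symmetric PSD matrix via eigendecomposition
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(mu_x, sigma_x, mu_g, sigma_g):
    s2h = _sqrtm_psd(sigma_g)
    tr_covmean = np.trace(_sqrtm_psd(s2h @ sigma_x @ s2h))
    d2 = (np.sum((mu_x - mu_g) ** 2)
          + np.trace(sigma_x) + np.trace(sigma_g) - 2.0 * tr_covmean)
    return float(max(d2, 0.0))        # clip tiny negative numerical noise

mu = np.zeros(4)
print(frechet_distance(mu, np.eye(4), mu, np.eye(4)))        # identical sets -> 0.0
print(frechet_distance(mu, 4.0 * np.eye(4), mu, np.eye(4)))  # 4*(4+1-2*2) = 4.0
```

A smaller returned value means the reconstructed set's feature distribution is closer to the original's, matching the quality criterion of S602.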
Example two:
the embodiment provides a reconstructed image quality measuring system based on a self-encoder, which comprises:
the data set construction module is used for collecting a plurality of original images, preprocessing the original images and generating an original image set; this module is used to implement the function of step S100 in the above-mentioned first embodiment, which is not described herein again;
the self-encoder network construction module is used for constructing a self-encoder network comprising an encoder and a decoder; this module is used to implement the function of step S200 in the first embodiment, which is not described herein again;
the training module is used for inputting the original images in the original image set as training samples into the self-encoder network for image reproduction to obtain reproduction images, constructing a loss function to calculate the reproduction loss between the reproduction images and the corresponding original images, and performing iterative training on the self-encoder network based on the calculated reproduction loss until an iteration termination condition is reached to finish the training of the self-encoder network; this module is used to implement the function of step S300 in the first embodiment, which is not described herein again;
the characteristic extractor acquisition module is used for taking out the trained encoder in the self-encoder network as a characteristic extractor; this module is used to implement the function of step S400 in the above-mentioned first embodiment, which is not described herein again;
the characteristic distribution calculation module is used for reconstructing the images in the original image set to obtain a reconstructed image set, inputting the original image set and the reconstructed image set into the characteristic extractor respectively, and obtaining the characteristic distribution of the original image set and the characteristic distribution of the reconstructed image set respectively; this module is used to implement the function of step S500 in the above-mentioned first embodiment, which is not described herein again;
the quality measuring module is used for calculating the Frechet distance of the original image set characteristic distribution and the reconstructed image set characteristic distribution and measuring the data quality of the reconstructed image set according to the calculated Frechet distance; this module is used to implement the function of step S600 in the above embodiment, and is not described herein again.
As a preferred embodiment of this embodiment, the self-encoder network building module specifically includes:
the basic module building unit is used for building network basic modules comprising a CBL module and a C3 module, wherein the CBL module consists of a convolutional layer, a BN batch-normalization layer and a LeakyReLU activation layer stacked in sequence, and the C3 module consists of three successive stacked convolutional layers;
the encoder structure building unit is used for defining an encoder structure, the encoder comprises a CBL modules, b downsampling modules and a C3 module, and the encoder is used for inputting an original image x and outputting a corresponding feature vector z;
a decoder structure construction unit for defining the decoder structure, wherein the decoder comprises a CBL modules, b upsampling modules and a C3 module, and is used for inputting a feature vector z and performing image reproduction according to the feature vector z to generate a reproduced image x̂.
As a preferred embodiment of this embodiment, the training module is specifically configured to:
constructing a mean square error loss function, specifically as follows:

L(x, x̂) = ||x − x̂||² = ||x − D(E(x))||²

wherein x is the original image input to the encoder, x̂ is the reproduced image generated by the decoder, z = E(x) is the feature vector output by the encoder, and D(z) and D(E(x)) are the functions by which the decoder restores the feature vector;
and updating parameters of the self-encoder network by adopting a back propagation algorithm according to the loss value calculated by each group of original images and reproduced images, and repeating the step until the self-encoder network converges or reaches the set iteration times.
As a preferred embodiment of this embodiment, the feature distribution calculating module includes:
an original image set feature distribution calculation module, configured to input the original image set into the feature extractor, extract features from each original image in the original image set to obtain m n-dimensional feature vectors Zx, and average each dimension of the m feature vectors Zx to obtain an n-dimensional vector μx; calculate the n × n original image feature covariance matrix from the m n-dimensional feature vectors, and take the n-dimensional vector μx and the original image feature covariance matrix as the feature distribution of the original image set;
a reconstructed image set feature distribution calculation module, used for inputting the reconstructed image set into the feature extractor, extracting features of each reconstructed image in the reconstructed image set to obtain m n-dimensional feature vectors Zg, and averaging each dimension of the m feature vectors Zg to obtain an n-dimensional vector μg; calculating the n × n reconstructed image feature covariance matrix from the m n-dimensional feature vectors, and taking the n-dimensional vector μg and the reconstructed image feature covariance matrix as the feature distribution of the reconstructed image set;
the quality measurement module is specifically configured to:
calculating the Fréchet distance between the original image set feature distribution and the reconstructed image set feature distribution according to the following formula:

d²((μx, Σx), (μg, Σg)) = ||μx − μg||² + Tr(Σx + Σg − 2(ΣxΣg)^(1/2))

wherein μx is the n-dimensional vector of the original image set, μg is the n-dimensional vector of the reconstructed image set, Σx is the original image feature covariance matrix, Σg is the reconstructed image feature covariance matrix, and Tr represents the sum of the elements on the diagonal of the matrix;
and measuring the data quality of the reconstructed image set according to the calculated Fréchet distance, wherein the smaller the calculated Fréchet distance is, the closer the reconstructed image set is to the original image set, and the better the data quality of the reconstructed image set.
Example three:
this embodiment provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the self-encoder based reconstructed image quality weighing method according to any embodiment of the present invention.
Example four:
the present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method for self-encoder based reconstructed image quality weighing according to any of the embodiments of the present invention.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship of the associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean that A exists alone, A and B exist simultaneously, or B exists alone, wherein A and B may each be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "At least one of the following" and similar expressions refer to any combination of these items, including any combination of singular or plural items. For example, at least one of a, b, and c may represent: a; b; c; a and b; a and c; b and c; or a, b and c, wherein a, b and c may each be singular or plural.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or a combination of the two. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, any function, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (8)

1. A reconstructed image quality weighing method based on an auto-encoder is characterized by comprising the following steps:
collecting a plurality of original images, and preprocessing the original images to generate an original image set;
constructing a self-encoder network comprising an encoder and a decoder;
inputting original images in an original image set as training samples into a self-encoder network for image reproduction to obtain reproduction images, constructing a loss function to calculate the reproduction loss between the reproduction images and the corresponding original images, and performing iterative training on the self-encoder network based on the calculated reproduction loss until an iteration termination condition is reached to finish the training of the self-encoder network;
taking out the encoder in the trained self-encoder network as a feature extractor;
reconstructing images in the original image set to obtain a reconstructed image set, and respectively inputting the original image set and the reconstructed image set into a feature extractor to respectively obtain feature distribution of the original image set and feature distribution of the reconstructed image set;
calculating the Frechet distance of the original image set characteristic distribution and the reconstructed image set characteristic distribution, and measuring the data quality of the reconstructed image set according to the calculated Frechet distance;
the specific method for respectively inputting the original image set and the reconstructed image set into the feature extractor and respectively obtaining the feature distribution of the original image set and the feature distribution of the reconstructed image set comprises the following steps:
inputting the original image set into the feature extractor, extracting features of each original image in the original image set to obtain m n-dimensional feature vectors Zx, and averaging each dimension of the m feature vectors Zx to obtain an n-dimensional vector μx; calculating the n × n original image feature covariance matrix from the m n-dimensional feature vectors, and taking the n-dimensional vector μx and the original image feature covariance matrix as the feature distribution of the original image set;
inputting the reconstructed image set into the feature extractor, extracting features from each reconstructed image in the reconstructed image set to obtain m n-dimensional feature vectors Zg, and averaging each dimension of the m feature vectors Zg to obtain an n-dimensional vector μg; calculating the n × n reconstructed image feature covariance matrix from the m n-dimensional feature vectors, and taking the n-dimensional vector μg and the reconstructed image feature covariance matrix as the feature distribution of the reconstructed image set;
the method for calculating the Frechet distance of the original image set feature distribution and the reconstructed image set feature distribution and measuring the data quality of the reconstructed image set according to the calculated Frechet distance specifically comprises the following steps:
calculating the Fréchet distance between the original image set feature distribution and the reconstructed image set feature distribution according to the following formula:

d²((μx, Σx), (μg, Σg)) = ||μx − μg||² + Tr(Σx + Σg − 2(ΣxΣg)^(1/2))

wherein μx is the n-dimensional vector of the original image set, μg is the n-dimensional vector of the reconstructed image set, Σx is the original image feature covariance matrix, Σg is the reconstructed image feature covariance matrix, and Tr represents the sum of the elements on the diagonal of the matrix;
and measuring the data quality of the reconstructed image set according to the calculated Fréchet distance, wherein the smaller the calculated Fréchet distance is, the closer the reconstructed image set is to the original image set, and the better the data quality of the reconstructed image set.
2. The method according to claim 1, wherein the method for constructing the self-encoder network comprising the encoder and the decoder comprises:
constructing network basic modules comprising a CBL module and a C3 module, wherein the CBL module consists of a convolutional layer, a BN batch-normalization layer and a LeakyReLU activation layer stacked in sequence, and the C3 module consists of three successive stacked convolutional layers;
defining an encoder structure, wherein the encoder comprises a CBL modules, b downsampling modules and a C3 module, and is used for inputting an original image x and outputting a corresponding feature vector z;
defining a decoder structure, wherein the decoder comprises a CBL modules, b upsampling modules and a C3 module, and is used for inputting a feature vector z, performing image reproduction according to the feature vector z, and generating a reproduced image x̂.
3. The method according to claim 2, wherein the constructed loss function calculates a reproduction loss between a reproduced image and the corresponding original image, and the method for iteratively training the self-encoder network based on the calculated reproduction loss specifically comprises:
constructing a mean square error loss function, specifically as follows:

L(x, x̂) = ||x − x̂||² = ||x − D(E(x))||²

wherein x is the original image input to the encoder, x̂ is the reproduced image generated by the decoder, z = E(x) is the feature vector output by the encoder, and D(z) and D(E(x)) are the functions by which the decoder restores the feature vector;
and updating parameters of the self-encoder network by adopting a back propagation algorithm according to the loss value calculated by each group of original images and reproduced images, and repeating the step until the self-encoder network converges or reaches the set iteration times.
4. A reconstructed image quality measurement system based on an auto-encoder, comprising:
the data set construction module is used for collecting a plurality of original images and preprocessing the original images to generate an original image set;
the self-encoder network construction module is used for constructing a self-encoder network comprising an encoder and a decoder;
the training module is used for inputting the original images in the original image set as training samples into the self-encoder network for image reproduction to obtain reproduction images, constructing a loss function to calculate the reproduction loss between the reproduction images and the corresponding original images, and performing iterative training on the self-encoder network based on the calculated reproduction loss until an iteration termination condition is reached to finish the training of the self-encoder network;
the characteristic extractor acquisition module is used for taking out the coder in the trained self-coder network as a characteristic extractor;
the characteristic distribution calculation module is used for reconstructing images in the original image set to obtain a reconstructed image set, and inputting the original image set and the reconstructed image set into the characteristic extractor respectively to obtain characteristic distribution of the original image set and characteristic distribution of the reconstructed image set respectively;
the quality measuring module is used for calculating the Frechet distance of the original image set characteristic distribution and the reconstructed image set characteristic distribution and measuring the data quality of the reconstructed image set according to the calculated Frechet distance;
wherein the feature distribution calculation module comprises:
the original image set feature distribution calculation module is used for inputting the original image set into the feature extractor, extracting features of each original image in the original image set to obtain m n-dimensional feature vectors Zx, and averaging each dimension of the m feature vectors Zx to obtain an n-dimensional vector μx; calculating the n × n original image feature covariance matrix from the m n-dimensional feature vectors, and taking the n-dimensional vector μx and the original image feature covariance matrix as the feature distribution of the original image set;
a reconstructed image set feature distribution calculation module is used for inputting the reconstructed image set into the feature extractor, extracting features of each reconstructed image in the reconstructed image set to obtain m n-dimensional feature vectors Zg, and averaging each dimension of the m feature vectors Zg to obtain an n-dimensional vector μg; calculating the n × n reconstructed image feature covariance matrix from the m n-dimensional feature vectors, and taking the n-dimensional vector μg and the reconstructed image feature covariance matrix as the feature distribution of the reconstructed image set;
the quality measurement module is specifically configured to:
calculating the Fréchet distance between the original image set feature distribution and the reconstructed image set feature distribution according to the following formula:

d²((μx, Σx), (μg, Σg)) = ||μx − μg||² + Tr(Σx + Σg − 2(ΣxΣg)^(1/2))

wherein μx is the n-dimensional vector of the original image set, μg is the n-dimensional vector of the reconstructed image set, Σx is the original image feature covariance matrix, Σg is the reconstructed image feature covariance matrix, and Tr represents the sum of the elements on the diagonal of the matrix;
and measuring the data quality of the reconstructed image set according to the calculated Fréchet distance, wherein the smaller the calculated Fréchet distance is, the closer the reconstructed image set is to the original image set, and the better the data quality of the reconstructed image set.
5. The system according to claim 4, wherein the self-encoder network construction module specifically comprises:
the basic module building unit is used for building network basic modules comprising a CBL module and a C3 module, wherein the CBL module consists of a convolutional layer, a BN batch-normalization layer and a LeakyReLU activation layer stacked in sequence, and the C3 module consists of three successive stacked convolutional layers;
the encoder structure construction unit is used for defining an encoder structure, the encoder comprises a CBL modules, b downsampling modules and a C3 module, and the encoder is used for inputting an original image x and outputting a corresponding feature vector z;
a decoder structure construction unit for defining the decoder structure, wherein the decoder comprises a CBL modules, b upsampling modules and a C3 module, and is used for inputting a feature vector z and performing image reproduction according to the feature vector z to generate a reproduced image x̂.
6. The system of claim 4, wherein the training module is specifically configured to:
constructing a mean square error loss function, specifically as follows:

L(x, x̂) = ||x − x̂||² = ||x − D(E(x))||²

wherein x is the original image input to the encoder, x̂ is the reproduced image generated by the decoder, z = E(x) is the feature vector output by the encoder, and D(z) and D(E(x)) are the functions by which the decoder restores the feature vector;
and updating parameters of the self-encoder network by adopting a back propagation algorithm according to the loss value calculated by each group of original images and reproduced images, and repeating the step until the self-encoder network converges or reaches the set iteration times.
7. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the self-encoder based reconstructed image quality weighing method of any of claims 1 to 3.
8. A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the self-encoder based reconstructed image quality weighing method according to any of claims 1 to 3.
CN202211288588.6A 2022-10-20 2022-10-20 Reconstructed image quality weighing method and system based on self-encoder Active CN115375600B (en)

Publications (2)

Publication Number Publication Date
CN115375600A CN115375600A (en) 2022-11-22
CN115375600B 2023-04-07

Family

ID=84072861



Also Published As

Publication number Publication date
CN115375600A (en) 2022-11-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant