CN106355191A - Deep generating network random training algorithm and device - Google Patents

Deep generating network random training algorithm and device

Info

Publication number
CN106355191A
Authority
CN
China
Prior art keywords
parameters
batch
data set
gradient
maximum moment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610666223.0A
Other languages
Chinese (zh)
Inventor
朱军
任勇
李佳莲
罗宇岑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201610666223.0A priority Critical patent/CN106355191A/en
Publication of CN106355191A publication Critical patent/CN106355191A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155Bayesian classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep generating network random training algorithm and a device. The method comprises the steps of: inputting a data set comprising condition variables and the generated data itself; randomly dividing the data set into several batches, each comprising a number of samples; updating the parameters of the sample data of each batch by gradient back-propagation and outputting the parameters; wherein updating the parameters of the sample data of each batch by gradient back-propagation comprises using a conditional maximum moment matching criterion. The device is used for realizing the method. The deep generating network random training algorithm can extend the application range of deep generation models based on moment matching.

Description

Deep generation network random training algorithm and device
Technical Field
The invention relates to the field of machine learning, in particular to a deep generation network random training algorithm and a device.
Background
The deep generation network uses a multi-layer structure to describe the distribution of data, and each layer of the network applies a nonlinear transformation. Deep generation networks have found wide application in tasks that require stochastic and probabilistic reasoning, such as image generation, data completion, and the like. With the addition of discriminative components, the performance gap of deep generation networks on problems such as classification and prediction has also been markedly narrowed.
Among the many examples of deep generation models, Goodfellow et al. introduced the Generative Adversarial Network (GAN) in 2014, which frames data generation as a two-player adversarial game. However, its optimization objective is a min-max problem, which is generally difficult to train. Around the same time, Li et al. proposed the Generative Moment Matching Network (GMMN), which samples from a simple distribution, such as a uniform distribution, and then obtains samples by propagating them through the network. Unlike GAN, GMMN embeds the target probability distribution into a reproducing kernel Hilbert space, and its optimization objective can be summarized as minimizing the difference (in a norm sense) between two elements of this space, a criterion known as Maximum Mean Discrepancy (MMD). Through the kernel trick, the optimization objective takes a simple form, and training can then be completed through the combination of stochastic gradient descent and back-propagation.
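For concreteness, the MMD objective that GMMN minimizes can be estimated directly from two sample sets. Below is a minimal NumPy sketch of the (biased) empirical squared-MMD estimator with a single-bandwidth Gaussian kernel; the function names are illustrative assumptions, and practical GMMN training typically mixes several bandwidths.

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||A[i] - B[j]||^2 / (2 sigma^2))."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between data X and model samples Y."""
    return (gaussian_gram(X, X, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean()
            - 2.0 * gaussian_gram(X, Y, sigma).mean())
```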
Although GMMN has succeeded in unsupervised data generation, its applicability ends there. For broader problems such as classification, prediction, and generating data from different condition variables, the GMMN training objective cannot be applied because it does not involve condition variables. GAN, in contrast, can easily be extended to a conditional version, so the relatively narrow range of applications of GMMN limits its impact. For the problem of probability embedding in Hilbert space, however, Song et al. studied the embedding of conditional probabilities in 2009. Unlike MMD, where the embedding of a single probability distribution is an element of the Hilbert space, the embedding of a conditional probability can be understood as an operator between Hilbert spaces. This extension provides a reference point for extending GMMN.
These latest results in the field lay a solid foundation for proposing a conditional deep generation network that adopts conditional maximum moment matching as its training criterion. Previous moment-matching deep generation networks, however, cannot be applied to tasks such as conditional data generation and classification.
Therefore, how to extend the application range of deep generation models based on moment matching, so that they can be applied to tasks such as image generation by category, data classification, and knowledge extraction from Bayesian networks, is of great significance.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a deep generation network random training algorithm and a device.
In one aspect, the present invention provides a deep generation network random training algorithm, including:
inputting a data set comprising condition variables and the generated data itself;
randomly partitioning the data set into several batches, each comprising a certain number of samples;
updating parameters of the sample data of each batch through gradient back-propagation and outputting the parameters;
wherein updating the parameters of the sample data of each batch through gradient back-propagation comprises using a conditional maximum moment matching criterion.
According to the deep generation network random training algorithm provided by the invention, since the conditional maximum moment matching criterion, an extension of MMD, is used in processing the sample data, the application range of deep generation models based on moment matching can be expanded.
On the other hand, the invention also provides a deep generation network random training device, which comprises:
an input unit for inputting a data set including condition variables and the generated data itself;
a dividing unit for randomly dividing the data set into a plurality of batches, each including a certain number of samples;
the training unit is used for updating parameters of the sample data of each batch through gradient back propagation and outputting the parameters;
wherein updating the parameters of the sample data of each batch through gradient back-propagation comprises using a conditional maximum moment matching criterion.
According to the deep generation network random training device provided by the invention, since the conditional maximum moment matching criterion, an extension of MMD, is used in processing the sample data, the application range of deep generation models based on moment matching can be expanded.
Drawings
FIG. 1 is a schematic diagram of a deep generation network;
FIG. 2 is a schematic flow chart of an embodiment of a deep generation network random training algorithm of the present invention;
fig. 3 is a schematic structural diagram of an embodiment of a deep generation network random training apparatus according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic structural diagram of a deep generation network. Referring to FIG. 1, it should be noted that, for the deep generation network, training runs in the reverse direction: gradients are back-propagated from the output layer. For the generation network, the input is the vector pair (x, h) of the condition variable and an implicit variable obtained by sampling; the implicit variable encodes the latent representation of the generated data. The condition variable and the implicit variable are joined, for example by concatenation of the vectors, to form the input layer of the network.
The intermediate layers of the network are typically either fully connected layers (MLP) or convolutional layers (CNN). A nonlinear transformation is required between network layers, and ReLU can be selected as the nonlinearity.
The input propagates through the network, ultimately generating data y. The output of the algorithm is the model parameters w, which comprise all the parameters of the network. For a deep generation network, each layer has its own parameters; for example, for a fully connected layer (MLP), the relationship between its input x and output y is y = ReLU(Wx + b), and its parameters are (W, b). The model parameters w are obtained by collecting the parameters of each layer.
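As an illustration of the forward pass just described, the following is a minimal NumPy sketch of a fully connected generator; `params` plays the role of the model parameters w, stored as a list of per-layer (W, b) pairs. The function names are assumptions of this sketch, not the patent's reference implementation.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def generate(x, h, params):
    """Forward pass: splice the condition variable x with the sampled
    implicit variable h, then propagate through ReLU layers to get y."""
    a = np.concatenate([x, h])      # input layer: the vector pair (x, h)
    for W, b in params[:-1]:
        a = relu(W @ a + b)         # hidden layers: ReLU(Wx + b)
    W, b = params[-1]
    return W @ a + b                # output layer emits the sample y
```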
Fig. 2 is a schematic flow chart of an embodiment of the deep generation network random training algorithm of the present invention. Referring to FIG. 2, the embodiment discloses a generation network random training algorithm, which includes:
s1, inputting a data set comprising condition variables and the generated data;
s2, randomly dividing the data set into a plurality of batches comprising a certain number of samples;
s3, updating parameters of the sample data of each batch through back-propagation (back-propagation) and outputting the parameters; wherein, the updating parameters of the sample data of each batch through gradient back propagation comprises using a conditional maximum moment matching criterion.
According to the deep generation network random training algorithm provided by the invention, since the conditional maximum moment matching criterion, an extension of MMD, is used in processing the sample data, the application range of deep generation models based on moment matching can be expanded.
Specifically, in step S1, the input is a data set, e.g. a data set D = {(x_i, y_i)}, where x_i is the condition variable and y_i is the generated data itself. For example, for a picture labeled with category a, a is the condition variable x_i and the picture is the generated data y_i.
In step S2, the entire data set may be approximated by randomly drawing one batch at a time from the data set. The number of samples in a batch may be selected according to the structure of the data set. Specifically, for a data set with a simple structure, the number of samples in each batch may be smaller, for example on the order of 100; for a data set with a relatively complex structure, the number of samples in each batch can be increased appropriately.
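A possible realization of this random batching, as a minimal NumPy sketch (the helper name is an assumption of this sketch):

```python
import numpy as np

def random_batches(X, Y, batch_size, rng=None):
    """Randomly permute the data set D = {(x_i, y_i)} and yield it one
    mini-batch at a time, approximating the full data set per step."""
    rng = rng or np.random.default_rng()
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        sel = idx[start:start + batch_size]
        yield X[sel], Y[sel]
```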
Step S3 includes the following steps:
s31, randomly selecting a batch B from the input data set D, and generating a sample y for any x belonging to B; note B' as the generated data set (x, y);
specifically, for any x ∈ B, it will form a vector set (x, h) with the implicit variable h obtained by sampling. The set of vectors is propagated through the network to generate samples y.
S32, calculating the conditional maximum moment matching criterion according to B and B
S33, acquiring the derivative of the conditional maximum moment matching criterion on the parameterAnd taking the derivative of the conditional maximum moment matching criterion with respect to the parameter as the gradient of the output layer;
s34, obtaining the gradient of each intermediate layer according to a chain type derivation rule;
s35, updating the parameter w through a gradient descent algorithm;
and S36, repeating the steps S31-S35 until the parameter w meets the convergence condition, and then outputting the parameter w.
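One possible realization of steps S31-S36 is sketched below, reusing `generate` from the earlier sketch; `cmmd2_and_grad` is a hypothetical helper that returns the CMMD loss (S32) and its per-layer gradients via back-propagation and the chain rule (S33-S34), and the simple `loss < tol` test stands in for whatever convergence condition is chosen.

```python
import numpy as np

def train(D_x, D_y, params, n_iters, batch_size, latent_dim, lr,
          tol=1e-4, rng=None):
    """Sketch of the stochastic training loop S31-S36."""
    rng = rng or np.random.default_rng()
    for _ in range(n_iters):
        # S31: draw a random batch B and build B' = {(x, y~)} by sampling
        # a latent h for each x and propagating (x, h) through the network
        sel = rng.choice(len(D_x), size=batch_size, replace=False)
        x_b, y_b = D_x[sel], D_y[sel]
        h = rng.standard_normal((batch_size, latent_dim))
        y_gen = np.stack([generate(x, hi, params) for x, hi in zip(x_b, h)])
        # S32-S34: CMMD between B and B', gradients for every layer
        loss, grads = cmmd2_and_grad(x_b, y_b, y_gen, params)
        # S35: plain gradient-descent update of the parameters w
        params = [(W - lr * gW, b - lr * gb)
                  for (W, b), (gW, gb) in zip(params, grads)]
        # S36: stop once the objective satisfies the convergence condition
        if loss < tol:
            break
    return params
```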
Specifically, the conditional maximum moment matching criterion (CMMD) comprises: comparing the operator difference between two reproducing kernel Hilbert spaces to determine the difference between two conditional probabilities.
Specifically, the conditional maximum moment matching criterion is a generalization of MMD: it compares the operator difference between two reproducing kernel Hilbert spaces to determine the difference between two conditional probabilities. For example, let X be a conditioning random variable whose corresponding reproducing kernel Hilbert space is F, and let Y and Z be two further random variables whose corresponding reproducing kernel Hilbert spaces are both G; the two conditional probabilities are then P(Y|X) and P(Z|X). To compare them, the two conditional probabilities are embedded into Hilbert space, which yields operators C_{Y|X} and C_{Z|X} between the two Hilbert spaces; each can also be seen as an element of the tensor-product space F⊗G formed from the two Hilbert spaces. The conditional maximum moment matching criterion compares the difference between the two embeddings under the norm with which this space is equipped:
$$L_{\mathrm{CMMD}}^{2} = \left\| \mathcal{C}_{Y|X} - \mathcal{C}_{Z|X} \right\|_{\mathcal{F}\otimes\mathcal{G}}^{2}$$
In practical application, given two data sets, the training data set and the model-generated data set, the difference between the two empirical probability embeddings can be compared to approximate $L_{\mathrm{CMMD}}$:
$$\hat{L}_{\mathrm{CMMD}}^{2} = \left\| \hat{\mathcal{C}}_{Y|X}^{s} - \hat{\mathcal{C}}_{Y|X}^{d} \right\|_{\mathcal{F}\otimes\mathcal{G}}^{2}$$
where $\hat{\mathcal{C}}_{Y|X}^{s}$ and $\hat{\mathcal{C}}_{Y|X}^{d}$ are the empirical estimates obtained from the two data sets (i.e., the training data set and the model-generated data set), respectively. In general, the embedding of a conditional probability is infinite-dimensional, but through kernel techniques the above objective can be computed efficiently via kernel Gram matrices. For continuous variables, a Gaussian kernel function may be selected; for discrete variables, a delta kernel may be selected. Specifically, the kernel trick evaluates the infinite-dimensional inner product exactly through a kernel function, and thus $\hat{L}_{\mathrm{CMMD}}^{2}$ has a practically computable form:
$$\hat{L}_{\mathrm{CMMD}}^{2} = \mathrm{Tr}(L_{d}\cdot C_{1}) + \mathrm{Tr}(L_{s}\cdot C_{2}) + \mathrm{Tr}(L_{ds}\cdot C_{3})$$
where the matrices $L$ are Gram matrices of the output variables and the matrices $C_i$ are parameter matrices determined by the training data.
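To make the trace form concrete, the empirical conditional-embedding operator of each data set can be written, following the conditional embedding framework of Song et al. and the cited CGMMN paper (arXiv:1606.04218), as

$$\hat{\mathcal{C}}_{Y|X} = \Psi\,(K + \lambda I)^{-1}\,\Phi^{\top}, \qquad K = \Phi^{\top}\Phi,$$

where Φ and Ψ collect the (implicit) feature vectors of the x- and y-samples and λ > 0 is a regularization constant assumed by this sketch. Expanding $\hat{L}_{\mathrm{CMMD}}^{2}$ with one such operator per data set produces exactly three trace terms, with the parameter matrices $C_i$ built from the condition-variable Gram matrices and the factor -2 of the cross term absorbed into $C_3$. A NumPy sketch of that computation, assuming precomputed Gram matrices:

```python
import numpy as np

def cmmd2(Kd, Ks, Ksd, Ld, Ls, Lds, lam=0.1):
    """Empirical squared CMMD from Gram matrices. K*: Gram matrices on the
    condition variables x; L*: Gram matrices on the outputs y; subscripts
    d/s mark the training and generated sets; Ksd is m x n (generated vs.
    data), Lds is n x m (data vs. generated); lam is an assumed
    regularization constant."""
    n, m = Kd.shape[0], Ks.shape[0]
    Kd_inv = np.linalg.inv(Kd + lam * np.eye(n))
    Ks_inv = np.linalg.inv(Ks + lam * np.eye(m))
    C1 = Kd_inv @ Kd @ Kd_inv           # C_1: data-side parameter matrix
    C2 = Ks_inv @ Ks @ Ks_inv           # C_2: sample-side parameter matrix
    C3 = -2.0 * Ks_inv @ Ksd @ Kd_inv   # C_3: cross term, sign absorbed
    return np.trace(Ld @ C1) + np.trace(Ls @ C2) + np.trace(Lds @ C3)
```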
After the random training of the current batch is finished, the above steps are repeated until the random training of all batches in the data set is complete.
The deep generation network random training algorithm provided by the invention combines the deep generation network model of the prior art with the conditional maximum moment matching criterion proposed by the invention, so that the application range of deep generation models based on moment matching can be effectively expanded, allowing them to be applied to tasks such as image generation by category, data classification, and knowledge extraction from Bayesian networks.
Fig. 3 is a schematic structural diagram of an embodiment of a deep generation network random training apparatus according to the present invention. Referring to FIG. 3, the present invention further provides a deep generation network random training apparatus, comprising: an input unit 1, a dividing unit 2, and a training unit 3. The input unit 1 is used for inputting a data set comprising condition variables and the generated data; the dividing unit 2 is used for randomly dividing the data set into a plurality of batches, each including a certain number of samples; the training unit 3 is used for updating parameters of the sample data of each batch through gradient back-propagation and outputting the parameters; wherein updating the parameters of the sample data of each batch through gradient back-propagation comprises using a conditional maximum moment matching criterion.
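As a hedged sketch of how the three units might compose (the class and method names are assumptions, reusing the earlier sketches):

```python
class DeepGenTrainingDevice:
    """Illustrative composition of the input, dividing, and training units."""

    def __init__(self, batch_size, latent_dim, lr):
        self.batch_size, self.latent_dim, self.lr = batch_size, latent_dim, lr

    def input_unit(self, X, Y):
        """Unit 1: accept a data set of condition variables and generated data."""
        self.X, self.Y = X, Y

    def dividing_unit(self):
        """Unit 2: randomly divide the stored data set into batches."""
        return random_batches(self.X, self.Y, self.batch_size)

    def training_unit(self, params, n_iters):
        """Unit 3: drive the CMMD back-propagation loop (which draws its own
        random batches, per S31) and output the updated parameters."""
        return train(self.X, self.Y, params, n_iters,
                     self.batch_size, self.latent_dim, self.lr)
```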
According to the deep generation network random training device provided by the invention, since the conditional maximum moment matching criterion, an extension of MMD, is used in processing the sample data, the application range of deep generation models based on moment matching can be expanded.
Specifically, the input unit 1 inputs a data set, e.g. a data set D = {(x_i, y_i)}, where x_i is the condition variable and y_i is the generated data itself. For example, for a picture labeled with category a, a is the condition variable x_i and the picture is the generated data y_i.
The dividing unit 2 may approximate the entire data set by randomly drawing one batch at a time from the data set. The number of samples in a batch may be selected according to the structure of the data set. Specifically, for a data set with a simple structure, the number of samples in each batch may be smaller, for example on the order of 100; for a data set with a relatively complex structure, the number of samples in each batch can be increased appropriately.
The training unit 3 is specifically configured to:
randomly selecting a batch B from the input data set D, and generating a sample y for each x ∈ B; denote by B' the set of generated pairs (x, y);
Specifically, for any x ∈ B, it is combined with an implicit variable h, obtained by sampling, to form the vector pair (x, h). This vector pair is propagated through the network to generate the sample y.
calculating the conditional maximum moment matching criterion according to B and B';
acquiring the derivative of the conditional maximum moment matching criterion with respect to the parameters, and taking this derivative as the gradient of the output layer;
acquiring the gradient of each intermediate layer according to the chain rule;
updating the parameter w through a gradient descent algorithm;
and repeating the above steps until the parameter w meets the convergence condition, then outputting the parameter w.
Specifically, the conditional maximum moment matching criterion (CMMD) comprises: comparing the operator difference between two reproducing kernel Hilbert spaces to determine the difference between two conditional probabilities.
Specifically, the conditional maximum moment matching criterion is a generalization of MMD: it compares the operator difference between two reproducing kernel Hilbert spaces to determine the difference between two conditional probabilities. For example, let X be a conditioning random variable whose corresponding reproducing kernel Hilbert space is F, and let Y and Z be two further random variables whose corresponding reproducing kernel Hilbert spaces are both G; the two conditional probabilities are then P(Y|X) and P(Z|X). To compare them, the two conditional probabilities are embedded into Hilbert space, which yields operators C_{Y|X} and C_{Z|X} between the two Hilbert spaces; each can also be seen as an element of the tensor-product space F⊗G formed from the two Hilbert spaces. The conditional maximum moment matching criterion compares the difference between the two embeddings under the norm with which this space is equipped:
$$L_{\mathrm{CMMD}}^{2} = \left\| \mathcal{C}_{Y|X} - \mathcal{C}_{Z|X} \right\|_{\mathcal{F}\otimes\mathcal{G}}^{2}$$
In practical application, given two data sets, the training data set and the model-generated data set, the difference between the two empirical probability embeddings can be compared to approximate $L_{\mathrm{CMMD}}$:
$$\hat{L}_{\mathrm{CMMD}}^{2} = \left\| \hat{\mathcal{C}}_{Y|X}^{s} - \hat{\mathcal{C}}_{Y|X}^{d} \right\|_{\mathcal{F}\otimes\mathcal{G}}^{2}$$
where $\hat{\mathcal{C}}_{Y|X}^{s}$ and $\hat{\mathcal{C}}_{Y|X}^{d}$ are the empirical estimates obtained from the two data sets (i.e., the training data set and the model-generated data set), respectively. In general, the embedding of a conditional probability is infinite-dimensional, but through kernel techniques the above objective can be computed efficiently via kernel Gram matrices. For continuous variables, a Gaussian kernel function may be selected; for discrete variables, a delta kernel may be selected. Specifically, the kernel trick evaluates the infinite-dimensional inner product exactly through a kernel function, and thus $\hat{L}_{\mathrm{CMMD}}^{2}$ has a practically computable form:
$$\hat{L}_{\mathrm{CMMD}}^{2} = \mathrm{Tr}(L_{d}\cdot C_{1}) + \mathrm{Tr}(L_{s}\cdot C_{2}) + \mathrm{Tr}(L_{ds}\cdot C_{3})$$
where the matrices $L$ are Gram matrices of the output variables and the matrices $C_i$ are parameter matrices determined by the training data.
After the random training of the current batch is completed, the device repeats the steps until the random training of all batches in the data set is completed.
The deep generation network random training device provided by the invention combines the deep generation network model of the prior art with the conditional maximum moment matching criterion proposed by the invention, so that the application range of deep generation models based on moment matching can be effectively expanded, allowing them to be applied to tasks such as image generation by category, data classification, and knowledge extraction from Bayesian networks.
The present invention achieves excellent results on a number of tasks. On classification, using the above training algorithm, the classification error rate on the handwritten-digit MNIST data set is 0.9%; the MNIST data set comprises 60,000 handwritten digits from 0 to 9, forming a 10-class problem. This result matches current state-of-the-art classifiers, such as Network in Network. A 3.17% error rate was achieved on another 10-class problem, the SVHN data set, which contains 600,000 real-world house-number photographs of the digits 0 to 9. This result is also comparable to current state-of-the-art classifiers.
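As a hedged illustration of how such classification results could be obtained from a conditional generator (the patent does not spell out this procedure, and the function below reuses the earlier `generate` sketch): condition on the input image x, generate several label vectors y, and average them before taking the argmax.

```python
import numpy as np

def predict_class(x, params, latent_dim, n_samples=20, rng=None):
    """Hypothetical classification rule: average several generated label
    vectors y ~ p(y | x) and return the most probable class index."""
    rng = rng or np.random.default_rng()
    ys = [generate(x, rng.standard_normal(latent_dim), params)
          for _ in range(n_samples)]
    return int(np.argmax(np.mean(ys, axis=0)))
```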
On the generation problem, high-quality samples are generated on the MNIST data set using the above training algorithm. On the Yale Face data set, different high-quality pictures are generated for different people. The Yale Face data set contains face images of 38 different people, with a total of 2,414 training pictures.
On the knowledge extraction task for Bayesian networks, the result on the Boston Housing data set using the above training algorithm shows almost no performance loss. Specifically, Boston Housing contains 506 data points of 13 dimensions each, and the goal is to make predictions from them. In the experiment, a Bayesian network (PBP) is first trained for prediction, and its knowledge is then extracted using the training algorithm of the invention. Under the mean squared error metric, the PBP network's predictions average 2.574; after extraction the average is 2.580, with little performance loss.
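A hedged sketch of this knowledge-extraction setup (`teacher_predict` is a hypothetical interface to the trained PBP network's predictive samples; the patent does not fix these names): the teacher's predictions play the role of the data side of the CMMD objective, and the generator is trained against them with the same training loop sketched earlier.

```python
def distill(teacher_predict, X_train, params, n_iters, batch_size,
            latent_dim, lr):
    """Knowledge extraction: train the generator to match the teacher's
    predictive samples using the CMMD training loop sketched above."""
    Y_teacher = teacher_predict(X_train)   # samples from the Bayesian network
    return train(X_train, Y_teacher, params, n_iters,
                 batch_size, latent_dim, lr)
```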
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A deep generation network random training algorithm, comprising:
inputting a data set comprising condition variables and the generated data itself;
randomly partitioning the data set into batches comprising a number of samples;
updating parameters of the sample data of each batch through gradient back-propagation and outputting the parameters;
wherein updating the parameters of the sample data of each batch through gradient back-propagation comprises using a conditional maximum moment matching criterion.
2. The algorithm of claim 1, wherein the conditional maximum moment matching criteria comprises:
the operator differences between the two regenerated kernel hilbert spaces are compared to determine the difference between the two conditional probabilities.
3. The algorithm of claim 1, wherein the updating parameters of the sample data of each batch through gradient back propagation and outputting the parameters comprises:
acquiring a generated data set corresponding to the sample data of any batch;
calculating a conditional maximum moment matching criterion according to the batch of sample data and the corresponding generated data set;
acquiring the derivative of the conditional maximum moment matching criterion with respect to the parameters, and taking this derivative as the gradient of the output layer;
acquiring the gradient of each intermediate layer according to the chain rule;
updating the parameters through a gradient descent algorithm and outputting the parameters;
and repeating the steps until the parameters meet the convergence condition, and outputting the parameters.
4. The algorithm of claim 1, wherein the randomly partitioning the data set into batches comprising a number of samples comprises:
and selecting the number of the samples of the batch according to the structure of the data set.
5. An apparatus for deep generation network random training, comprising:
an input unit for inputting a data set including condition variables and the generated data itself;
a dividing unit for randomly dividing the data set into a plurality of batches including a certain number of samples;
the training unit is used for updating parameters of the sample data of each batch through gradient back propagation and outputting the parameters;
wherein updating the parameters of the sample data of each batch through gradient back-propagation comprises using a conditional maximum moment matching criterion.
6. The apparatus of claim 5, wherein the conditional maximum moment matching criteria comprises:
the operator differences between the two regenerated kernel hilbert spaces are compared to determine the difference between the two conditional probabilities.
7. The apparatus according to claim 5, wherein the training unit is specifically configured to:
acquiring a generated data set corresponding to the sample data of any batch;
calculating a conditional maximum moment matching criterion according to the batch of sample data and the corresponding generated data set;
acquiring the derivative of the conditional maximum moment matching criterion with respect to the parameters, and taking this derivative as the gradient of the output layer;
acquiring the gradient of each intermediate layer according to the chain rule;
updating the parameters through a gradient descent algorithm and outputting the parameters;
and repeating the steps until the parameters meet the convergence condition, and outputting the parameters.
8. The apparatus according to claim 5, wherein the segmentation unit is specifically configured to:
and selecting the number of the samples of the batch according to the structure of the data set.
CN201610666223.0A 2016-08-12 2016-08-12 Deep generating network random training algorithm and device Pending CN106355191A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610666223.0A CN106355191A (en) 2016-08-12 2016-08-12 Deep generating network random training algorithm and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610666223.0A CN106355191A (en) 2016-08-12 2016-08-12 Deep generating network random training algorithm and device

Publications (1)

Publication Number Publication Date
CN106355191A true CN106355191A (en) 2017-01-25

Family

ID=57844017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610666223.0A Pending CN106355191A (en) 2016-08-12 2016-08-12 Deep generating network random training algorithm and device

Country Status (1)

Country Link
CN (1) CN106355191A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108122209A (en) * 2017-12-14 2018-06-05 浙江捷尚视觉科技股份有限公司 A kind of car plate deblurring method based on confrontation generation network
CN108665058A (en) * 2018-04-11 2018-10-16 徐州工程学院 A kind of generation confrontation network method based on segmentation loss
CN110619342A (en) * 2018-06-20 2019-12-27 鲁东大学 Rotary machine fault diagnosis method based on deep migration learning
CN111512344A (en) * 2017-08-08 2020-08-07 西门子股份公司 Generating synthetic depth images from CAD data using enhanced generative antagonistic neural networks
WO2022077343A1 (en) * 2020-10-15 2022-04-21 Robert Bosch Gmbh Method and apparatus for weight-sharing neural network with stochastic architectures

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224948A (en) * 2015-09-22 2016-01-06 清华大学 A kind of generation method of the largest interval degree of depth generation model based on image procossing

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105224948A (en) * 2015-09-22 2016-01-06 清华大学 A kind of generation method of the largest interval degree of depth generation model based on image procossing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YONG REN et al.: "Conditional Generative Moment-Matching Networks", arXiv:1606.04218 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111512344A (en) * 2017-08-08 2020-08-07 西门子股份公司 Generating synthetic depth images from CAD data using enhanced generative antagonistic neural networks
CN108122209A (en) * 2017-12-14 2018-06-05 浙江捷尚视觉科技股份有限公司 A kind of car plate deblurring method based on confrontation generation network
CN108122209B (en) * 2017-12-14 2020-05-15 浙江捷尚视觉科技股份有限公司 License plate deblurring method based on countermeasure generation network
CN108665058A (en) * 2018-04-11 2018-10-16 徐州工程学院 A kind of generation confrontation network method based on segmentation loss
CN108665058B (en) * 2018-04-11 2021-01-05 徐州工程学院 Method for generating countermeasure network based on segment loss
CN110619342A (en) * 2018-06-20 2019-12-27 鲁东大学 Rotary machine fault diagnosis method based on deep migration learning
CN110619342B (en) * 2018-06-20 2023-02-03 鲁东大学 Rotary machine fault diagnosis method based on deep migration learning
WO2022077343A1 (en) * 2020-10-15 2022-04-21 Robert Bosch Gmbh Method and apparatus for weight-sharing neural network with stochastic architectures

Similar Documents

Publication Publication Date Title
US20200410384A1 (en) Hybrid quantum-classical generative models for learning data distributions
Oh et al. A tutorial on quantum convolutional neural networks (QCNN)
Torkzadehmahani et al. Dp-cgan: Differentially private synthetic data and label generation
EP3800586A1 (en) Generative structure-property inverse computational co-design of materials
CN110119467B (en) Project recommendation method, device, equipment and storage medium based on session
Van Den Oord et al. Pixel recurrent neural networks
CN110490128B (en) Handwriting recognition method based on encryption neural network
CN106355191A (en) Deep generating network random training algorithm and device
CN109754078A (en) Method for optimization neural network
US20200293876A1 (en) Compression of deep neural networks
CN112396191B (en) Method, system and device for updating model parameters based on federal learning
CN112418292B (en) Image quality evaluation method, device, computer equipment and storage medium
US11907818B2 (en) Compression of machine-learned models via entropy penalized weight reparameterization
EP3639206A1 (en) Systems and methods for compression and distribution of machine learning models
US20210287067A1 (en) Edge message passing neural network
WO2021042857A1 (en) Processing method and processing apparatus for image segmentation model
Liu et al. A survey on computationally efficient neural architecture search
US20220383127A1 (en) Methods and systems for training a graph neural network using supervised contrastive learning
CN116523079A (en) Reinforced learning-based federal learning optimization method and system
Xu et al. Solving inverse problems in stochastic models using deep neural networks and adversarial training
CN115862751A (en) Quantum chemistry property calculation method for updating polymerization attention mechanism based on edge features
WO2023086198A1 (en) Robustifying nerf model novel view synthesis to sparse data
Luan et al. LRP‐based network pruning and policy distillation of robust and non‐robust DRL agents for embedded systems
US20220083870A1 (en) Training in Communication Systems
Xia et al. VI-DGP: A variational inference method with deep generative prior for solving high-dimensional inverse problems

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170125

RJ01 Rejection of invention patent application after publication