CN110458786B - Priori GAN model medical image generation method - Google Patents


Info

Publication number
CN110458786B
CN110458786B (application CN201910700326.8A)
Authority
CN
China
Prior art keywords
image
network
generated
model
generating
Prior art date
Legal status: Active
Application number
CN201910700326.8A
Other languages
Chinese (zh)
Other versions
CN110458786A (en)
Inventor
郑申海
房斌
李腊全
Current Assignee
Sichuan Kaixiangyuan Software Technology Co.,Ltd.
Original Assignee
Chongqing University of Posts and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Posts and Telecommunications
Priority to CN201910700326.8A
Publication of CN110458786A
Application granted
Publication of CN110458786B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F 18/2135: Feature extraction based on approximation criteria, e.g. principal component analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/003: Reconstruction from projections, e.g. tomography
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a prior GAN model medical image generation method in the field of medical image synthesis, comprising the following steps: step one, constructing a meaningful improved model, namely a generative adversarial network under prior conditions (PCGAN); step two, optimizing the generator network G and the discriminator network D using the idea of DGAN; step three, constructing a two-dimensional or three-dimensional shape space of the organ by principal component analysis (PCA) to generate a label map with a physical-meaning interpretation; step four, generating medical images using the optimized network parameters and the prior label map. The invention uses PCA to generate label maps with physical-meaning interpretations and reduces or eliminates later expert participation in data annotation, thereby establishing a synthetic image database and alleviating, to a certain extent, the scarcity of effective samples for data-driven deep learning methods; overall, the invention effectively reduces expert workload and improves working efficiency.

Description

Priori GAN model medical image generation method
Technical Field
The invention relates to the technical field of medical image synthesis, and in particular to a prior GAN model medical image generation method.
Background
Medical image models are only as good as their annotators, who should themselves be skilled radiologists; it is therefore desirable to reduce the expert labeling workload from the modeling side. Two approaches are commonly used: active learning models to assist the annotation of new data, and generative adversarial models to generate data.
Active learning is an effective way to add labeled samples. It can begin with a small labeled dataset and a few expert annotators: images are annotated by machine learning and scored by a classifier, and images scoring below a certainty threshold are sent to experts for manual labeling. However, when experts disagree on the boundaries of complex images, the accuracy and effectiveness of a dataset built by active learning cannot be guaranteed.
The other approach is data generation. Research on data generation is not a brand-new field; the idea is to let a computer learn from data and generate data similar to the samples. Self-encoding networks generate data through paired encoding and decoding networks, but their generative capability is limited and unsuitable for producing large amounts of data. The variational autoencoder (VAE) can generate relatively realistic images; however, the VAE requires the network to produce images close to those in the original dataset, so it cannot learn to generate genuinely new images: the VAE merely imitates and does not create. The generative adversarial network (GAN) innovatively combines a generative model and a discriminative model, seeking a Nash equilibrium between generation and discrimination to produce more realistic images.
In the prior art, labeling for a medical image GAN model requires later expert participation in data annotation, which increases the workload of medical experts and reduces working efficiency.
Disclosure of Invention
In order to overcome the above defects of the prior art, embodiments of the present invention provide a prior GAN model medical image generation method, in which a simple probability distribution is converted into the true probability distribution of given observation data through a parameterized probability generation model, and new data similar to the observation data are generated from the obtained model, yielding a naive GAN model; an ideal mathematical model of the GAN is constructed; the discriminator model D is fixed, and the generator model G is optimized by the gradient flow
dw_g/dt = -∇_{w_g} L_G;
a meaningful improved model is then generated, namely the conditional generative adversarial network (CGAN), whose objective function is
min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x|y)] + E_{z~p_z(z)}[log(1 - D(G(z|y)|y))];
a two-dimensional or three-dimensional shape space of the organ is constructed by principal component analysis (PCA); using the idea of DGAN, PCA generates a label map with a physical-meaning interpretation and reduces or eliminates later expert participation in data annotation, so that a synthetic image database is established, the scarcity of effective samples for data-driven deep learning methods is alleviated to a certain extent, the expert workload is effectively reduced, and the working efficiency is improved.
In order to achieve the above purpose, the invention provides the following technical scheme: a prior GAN model medical image generation method comprising the following steps:
step one, constructing a meaningful improved model, namely a generative adversarial network under prior conditions (PCGAN);
step two, optimizing the generator network G and the discriminator network D using the idea of DGAN;
step three, constructing a two-dimensional or three-dimensional shape space of the organ by principal component analysis (PCA) to generate a label map with a physical-meaning interpretation;
step four, generating medical images using the optimized network parameters and the prior label map;
step five, comparing unlabeled images with labeled images, finding similar regions between them, labeling the unlabeled images, and correcting the labels.
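The similar-region search of step five can be illustrated with a small sketch. The patent does not specify the similarity measure; normalized cross-correlation over sliding windows, the 8×8 patch size, and the 32×32 image size are all illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches (assumed measure)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(3)
labelled = rng.random((32, 32))                                  # image with known labels
patch = labelled[8:16, 8:16]                                     # a labelled region to transfer
unlabelled = labelled + 0.01 * rng.standard_normal((32, 32))     # similar unlabelled image

# exhaustive search for the most similar 8x8 region in the unlabelled image
best, best_pos = -1.0, None
for i in range(unlabelled.shape[0] - 8 + 1):
    for j in range(unlabelled.shape[1] - 8 + 1):
        score = ncc(patch, unlabelled[i:i + 8, j:j + 8])
        if score > best:
            best, best_pos = score, (i, j)
print(best_pos, round(best, 3))
```

The label of the known patch would then be copied to `best_pos` and corrected afterwards, mirroring the "mark, then correct the mark" step.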
In a preferred embodiment, step one specifically comprises: the naive GAN method converts a simple probability distribution into the true probability distribution of given observation data through a parameterized probability generation model, then generates new data similar to the observation data from the obtained model. Taking T as the probability generation model, T can transform a uniform distribution into a Gaussian distribution, e.g. via
T(u_1, u_2) = ( sqrt(-2 ln u_1) cos(2π u_2), sqrt(-2 ln u_1) sin(2π u_2) ),
where (u_1, u_2) ~ U(0,1);
All n×n images form a space, denoted the image space χ; each image is regarded as a point x ∈ χ, and V(x) denotes the probability that a picture expresses a real object, so V is the target probability measure to be learned by the GAN; such a probability measure is usually expressed as an expectation. Adding a conditional constraint yields the conditional generative adversarial network (CGAN), whose mathematical description is
min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x|y)] + E_{z~p_z(z)}[log(1 - D(G(z|y)|y))],
where x denotes a real picture and z denotes random noise input to the G network; under the constraint of a label map y, the CGAN model learns a mapping G: {y, z} → x from the random noise z to images structurally similar to y.
Considering that the target edges of the generated image and of the label map have large gradient values, the gradient difference is taken as an extra loss of the generator network, i.e. the constraint term is defined as
L_grad = ∫_{∂Ω} f( ∇G(z|y), ∇y ) dA,
where dA is the surface element, Ω is the label target region, ∂Ω is the target boundary of the label map, and f is a distance measure; that is, the gradient-difference term considers only the gradient difference at the target boundary. For network stability, in the generator network a TV-norm constraint keeps the target boundary information of the generated image, while a Tikhonov-norm constraint keeps the grey-level change within the target region smooth, so that a sharp image matching the label map is generated; i.e. the regularization term is defined as
L_reg = λ_1 ∫ |∇G(z|y)|² dA + λ_2 ∫_Ω |∇G(z|y)|² dA + λ_3 ∫ |∇G(z|y)| dA.
The first two terms are Tikhonov norms constraining the smoothness of the generated image and of its target region; the third term constrains the sharpness of the generated image's edges. Keeping the discriminator network unchanged and comprehensively considering the multiple constraints on the generator network, the following generator loss function is proposed:
L_G = E_z[ log(1 - D(G(z|y)|y)) ] + L_grad + L_reg.
in a preferred embodiment, the step two is specifically that the loss function of the step one is minimized by first determining the generation network G and maximizing the discrimination network D:
Figure GDA0003559231280000041
wherein L isbceIs binary cross entropy. Assume that its network parameter is wdThe gradient flow is as follows:
Figure GDA0003559231280000042
then determining a discrimination network D, and generating a network G to the maximum extent:
Figure GDA0003559231280000043
assume that its network parameter is wgThe gradient flow is as follows:
Figure GDA0003559231280000044
in a preferred embodiment, the step three is to construct a shape matrix M ═ phi with dimension d × N from N samples, assuming that each organ is represented by a vector phi with dimension d12,…,φN]Method of using statistical shape model, a new shape
Figure GDA0003559231280000045
Can be given by:
Figure GDA0003559231280000046
wherein
Figure GDA0003559231280000047
Is the average shape, P is a matrix composed of t eigenvectors corresponding to the first t largest eigenvalues of the covariance matrix, biIs a shape parameter vector of dimension t, by randomly sampling biThe value of the element may result in a "new shape" of the shape space.
In a preferred embodiment, step four specifically comprises generating a binary label map from the prior shape formed by the points or mesh of step three; the label map has physical statistical meaning and guides the generation of the original image. Suppose a mesh shape φ̂ has corresponding binary label map y; with the obtained optimized parameters w_g and w_d, a forward computation yields the generated image G(z|y) corresponding to the label map.
The technical effects and advantages of the invention are as follows:
The invention constructs a two-dimensional or three-dimensional shape space of the organ by principal component analysis (PCA) and, using the idea of DGAN, generates label maps with physical-meaning interpretations via PCA. Unlabeled images are compared with labeled images, similar regions are found between them, the unlabeled images are labeled, and the labels are corrected, which reduces or eliminates later expert participation in data annotation. A synthetic image database is thereby established, alleviating to a certain extent the scarcity of effective samples for data-driven deep learning methods; overall, the invention effectively reduces expert workload and improves working efficiency.
Drawings
FIG. 1 is a diagram illustrating the physical interpretation of the GAN model of the present invention.
Fig. 2 is a schematic diagram of CGAN results of the present invention, where the first column is the condition map, the second column is the CGAN output, and the third column is the real picture.
Fig. 3 shows liver results of the PCGAN model of the present invention: the binary images are samples from the PCA shape space, and the grey-scale images are the corresponding generated liver CT images.
Figure 4 is a flow chart of the PCGAN study of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A method for generating a medical image with a prior GAN model, as shown in figs. 1-4, comprises the following steps:
step one, constructing a meaningful improved model, namely a generative adversarial network under prior conditions (PCGAN);
step two, optimizing the generator network G and the discriminator network D using the idea of DGAN;
step three, constructing a two-dimensional or three-dimensional shape space of the organ by principal component analysis (PCA) and generating a label map with a physical-meaning interpretation;
step four, generating medical images using the optimized network parameters and the prior label map;
step five, comparing unlabeled images with labeled images, finding similar regions between them, labeling the unlabeled images, and correcting the labels;
the first step is specifically that a simple probability distribution is converted into a true probability distribution of given observation data through a parameterized probability generation model by a naive GAN method, new data similar to the observation data is generated based on the obtained probability distribution model, T is used as a probability generation model, and the probability generation model T can convert uniform distribution into Gaussian distribution:
Figure GDA0003559231280000061
wherein (u)1,u2)~U(0,1);
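The transform T can be realized concretely by the Box-Muller method, one standard map (assumed here, since the patent's equation image is not reproduced in the text) taking (u_1, u_2) ~ U(0,1) to independent standard Gaussian samples:

```python
import numpy as np

rng = np.random.default_rng(0)
u1 = rng.uniform(size=100_000)
u2 = rng.uniform(size=100_000)

# Box-Muller: T(u1, u2) = (sqrt(-2 ln u1) cos(2*pi*u2), sqrt(-2 ln u1) sin(2*pi*u2))
z1 = np.sqrt(-2.0 * np.log(u1)) * np.cos(2.0 * np.pi * u2)
z2 = np.sqrt(-2.0 * np.log(u1)) * np.sin(2.0 * np.pi * u2)

# empirically, z1 and z2 each follow N(0, 1)
print(round(float(z1.mean()), 3), round(float(z1.std()), 3))
```

The empirical mean and standard deviation come out near 0 and 1, illustrating a uniform distribution being transformed into a Gaussian one.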
All n×n images form a space, denoted the image space χ; each image is regarded as a point x ∈ χ, and V(x) denotes the probability that a picture expresses a real object, so V is the target probability measure to be learned by the GAN; such a probability measure is usually expressed as an expectation. Adding a conditional constraint yields the conditional generative adversarial network (CGAN), whose mathematical description is
min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x|y)] + E_{z~p_z(z)}[log(1 - D(G(z|y)|y))],
where x denotes a real picture and z denotes random noise input to the G network; under the constraint of a label map y, the CGAN model learns a mapping G: {y, z} → x from the random noise z to images structurally similar to y.
Considering that the target edges of the generated image and of the label map have large gradient values, the gradient difference is taken as an extra loss of the generator network, i.e. the constraint term is defined as
L_grad = ∫_{∂Ω} f( ∇G(z|y), ∇y ) dA,
where dA is the surface element, Ω is the label target region, ∂Ω is the target boundary of the label map, and f is a distance measure; that is, the gradient-difference term considers only the gradient difference at the target boundary. For network stability, in the generator network a TV-norm constraint keeps the target boundary information of the generated image, while a Tikhonov-norm constraint keeps the grey-level change within the target region smooth, so that a sharp image matching the label map is generated; i.e. the regularization term is defined as
L_reg = λ_1 ∫ |∇G(z|y)|² dA + λ_2 ∫_Ω |∇G(z|y)|² dA + λ_3 ∫ |∇G(z|y)| dA.
The first two terms are Tikhonov norms constraining the smoothness of the generated image and of its target region; the third term constrains the sharpness of the generated image's edges. Keeping the discriminator network unchanged and comprehensively considering the multiple constraints on the generator network, the following generator loss function is proposed:
L_G = E_z[ log(1 - D(G(z|y)|y)) ] + L_grad + L_reg.
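The two kinds of regularizers named above can be sketched discretely; the forward-difference gradient and the unit weights are illustrative assumptions, not the patent's exact formulation:

```python
import numpy as np

def grads(img):
    # forward differences, replicating the last row/column (zero gradient at the far edge)
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def tikhonov(img):
    """Squared-gradient (Tikhonov) penalty: favours smooth grey-level change."""
    gx, gy = grads(img)
    return float(np.sum(gx ** 2 + gy ** 2))

def tv(img):
    """Total-variation penalty: tolerates sharp edges, preserving boundary information."""
    gx, gy = grads(img)
    return float(np.sum(np.sqrt(gx ** 2 + gy ** 2)))

flat = np.full((8, 8), 0.5)                # constant image: both penalties vanish
step = np.zeros((8, 8)); step[:, 4:] = 1   # one sharp vertical edge
print(tv(flat), tv(step))
```

On the step image the TV penalty grows linearly with edge height while the Tikhonov penalty grows quadratically, which is why TV is the edge-preserving choice.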
the second step is specifically that the loss function of the first step is minimized by the following two steps, first determining a generation network G, and maximizing a discrimination network D:
Figure GDA0003559231280000071
wherein L isbceIs binary cross entropy. Assume that its network parameter is wdThe gradient flow is as follows:
Figure GDA0003559231280000072
then determining a discrimination network D, and maximally generating a network G:
Figure GDA0003559231280000073
assume that its network parameter is wgThe gradient flow is as follows:
Figure GDA0003559231280000074
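The alternating optimization of D and G can be sketched on a deliberately tiny 1-D toy problem. The scalar models D(x) = sigmoid(wd·x) and G(z) = wg + z, the learning rate, and the non-saturating generator update are illustrative assumptions, not the patent's networks:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def bce(p, label):
    """Binary cross entropy L_bce for predicted probability p and label in {0, 1}."""
    return float(-(label * np.log(p) + (1 - label) * np.log(1 - p)))

rng = np.random.default_rng(1)
wd, wg, lr = 0.1, 0.0, 0.05       # discriminator / generator parameters, step size
for _ in range(200):
    x_real = 3.0 + 0.1 * rng.standard_normal()   # real data near 3.0
    z = 0.1 * rng.standard_normal()
    # (1) fix G, gradient-ascend D on log D(x_real) + log(1 - D(G(z)))
    x_fake = wg + z
    d_real, d_fake = sigmoid(wd * x_real), sigmoid(wd * x_fake)
    wd += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
    # (2) fix D, gradient-ascend G on log D(G(z)) (non-saturating variant)
    d_fake = sigmoid(wd * (wg + z))
    wg += lr * (1.0 - d_fake) * wd

print(round(bce(0.5, 1), 4))
```

The two manual gradient updates inside the loop mirror the two gradient flows above: the discriminator ascends its objective with G fixed, then the generator updates with D fixed.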
the third step is to assume that each organ has the dimension ofd is represented by a vector phi, and a shape matrix M with dimension d multiplied by N can be constructed by N samples12,…,φN]Method of using statistical shape model, a new shape
Figure GDA0003559231280000075
Can be given by:
Figure GDA0003559231280000076
wherein
Figure GDA0003559231280000077
Is the average shape, P is a matrix composed of t eigenvectors corresponding to the first t largest eigenvalues of the covariance matrix, biIs a shape parameter vector of dimension t, by randomly sampling biThe value of the element, a "new shape" of the shape space can be obtained;
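The statistical shape model of step three can be sketched numerically. The ellipse training shapes, noise level, and choice of t = 3 modes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# N = 40 noisy d-dimensional training shapes (2-D ellipse contours, flattened; d = 32)
theta = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
base = np.concatenate([2.0 * np.cos(theta), np.sin(theta)])
M = np.stack([base + 0.05 * rng.standard_normal(base.size) for _ in range(40)], axis=1)

phi_bar = M.mean(axis=1)                        # mean shape
U, s, _ = np.linalg.svd(M - phi_bar[:, None], full_matrices=False)
t = 3
P = U[:, :t]                                    # eigenvectors of the covariance matrix
eigvals = s[:t] ** 2 / (M.shape[1] - 1)         # the t largest eigenvalues

# sample the shape-parameter vector b within +/- 3 standard deviations per mode
b = rng.uniform(-3.0, 3.0, size=t) * np.sqrt(eigvals)
new_shape = phi_bar + P @ b                     # phi_hat = phi_bar + P b
print(new_shape.shape)
```

Repeatedly resampling b sweeps out the PCA shape space from which the prior label maps are drawn.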
the fourth step is specifically that a binary label graph is generated by the prior shape formed by the points or the grids in the third step, and the label graph has physical statistical significance and guides the generation of the original image. Hypothetical mesh shape
Figure GDA0003559231280000078
The corresponding binary label graph is y according to the obtained optimization parameter wgAnd wdThe forward calculation can obtain a generated image G (z | y) corresponding to the label map.
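Rasterizing a sampled prior shape into the binary label map y of step four can be sketched as follows; the elliptical organ outline and the 64×64 image size are illustrative assumptions standing in for a sampled PCA shape:

```python
import numpy as np

def ellipse_label_map(cx, cy, a, b, size=64):
    """Binary label map y: 1 inside the prior (elliptical) organ shape, 0 outside."""
    yy, xx = np.mgrid[0:size, 0:size]
    inside = ((xx - cx) / a) ** 2 + ((yy - cy) / b) ** 2 <= 1.0
    return inside.astype(np.uint8)

y = ellipse_label_map(cx=32, cy=32, a=12, b=8)
# y would then condition the trained generator: image = G(z | y)
print(y.shape, int(y.sum()))
```

The resulting mask plays the role of the condition y in G(z|y), so each synthetic image comes with its segmentation label for free.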
The working principle of the invention is as follows:
Referring to figs. 1-4 of the specification, a simple probability distribution is converted into the true probability distribution of given observation data through a parameterized probability generation model, and new data similar to the observation data are generated from the obtained model, e.g. via the transform
T(u_1, u_2) = ( sqrt(-2 ln u_1) cos(2π u_2), sqrt(-2 ln u_1) sin(2π u_2) ),
yielding a naive GAN model. The ideal mathematical model of the GAN is constructed by the formula
min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))].
The discriminator model D is fixed, and the generator model G is optimized by the gradient flow
dw_g/dt = -∇_{w_g} L_G.
A meaningful improved model is generated, namely the conditional generative adversarial network (CGAN), whose objective function is
min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x|y)] + E_{z~p_z(z)}[log(1 - D(G(z|y)|y))].
A two-dimensional or three-dimensional shape space of the organ is constructed by principal component analysis (PCA). Using the idea of DGAN, PCA generates label maps with physical-meaning interpretations and reduces or eliminates later expert participation in data annotation, so that a synthetic image database is established and the scarcity of effective samples for data-driven deep learning methods is alleviated to a certain extent; unlabeled images are compared with labeled images, similar regions between them are found, the unlabeled images are labeled, and the labels are corrected, so that overall the expert workload is effectively reduced and the working efficiency is improved.
And finally: the above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that are within the spirit and principle of the present invention are intended to be included in the scope of the present invention.

Claims (4)

1. A prior GAN model medical image generation method, characterized in that the method comprises the following steps:
step one, constructing a meaningful improved model, namely a generative adversarial network under prior conditions (PCGAN); specifically, the naive GAN method converts a simple probability distribution into the true probability distribution of given observation data through a parameterized probability generation model, then generates new data similar to the observation data from the obtained model; taking T as the probability generation model, T can transform a uniform distribution into a Gaussian distribution:
T(u_1, u_2) = ( sqrt(-2 ln u_1) cos(2π u_2), sqrt(-2 ln u_1) sin(2π u_2) ),
where (u_1, u_2) ~ U(0,1);
all n×n images form a space, denoted the image space χ; each image is regarded as a point x ∈ χ, and V(x) denotes the probability that a picture expresses a real object, so V is the target probability measure to be learned by the GAN; such a probability measure is usually expressed as an expectation; adding a conditional constraint yields the conditional generative adversarial network (CGAN), whose mathematical description is:
min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x|y)] + E_{z~p_z(z)}[log(1 - D(G(z|y)|y))],
where x denotes a real picture and z denotes random noise input to the G network; under the constraint of a conditional label map y, the CGAN model learns a mapping G: {y, z} → x from the random noise z to images structurally similar to y;
step two, optimizing the generator network G and the discriminator network D using the idea of DGAN;
step three, constructing a two-dimensional or three-dimensional shape space of the organ by principal component analysis (PCA) and generating a label map with a physical-meaning interpretation;
step four, generating medical images using the optimized network parameters and the prior label map;
step five, comparing unlabeled images with labeled images, finding similar regions between them, labeling the unlabeled images, and correcting the labels.
2. The method of claim 1, characterized in that in step one, considering that the target edges of the generated image and of the label map have large gradient values, the gradient difference is taken as an extra loss of the generator network, i.e. the constraint term is defined as
L_grad = ∫_{∂Ω} f( ∇G(z|y), ∇y ) dA,
where dA is the surface element, Ω is the label target region, ∂Ω is the target boundary of the label map, and f is a distance measure, i.e. the gradient-difference term considers only the gradient difference at the target boundary; for network stability, in the generator network a TV-norm constraint keeps the target boundary information of the generated image, while a Tikhonov-norm constraint keeps the grey-level change within the target region smooth, so that a sharp image matching the label map is generated, i.e. the regularization term is defined as
L_reg = λ_1 ∫ |∇G(z|y)|² dA + λ_2 ∫_Ω |∇G(z|y)|² dA + λ_3 ∫ |∇G(z|y)| dA,
where the first two terms are Tikhonov norms constraining the smoothness of the generated image and of its target region, and the third term constrains the sharpness of the image edges; keeping the discriminator network unchanged and comprehensively considering the multiple constraints on the generator network, the following generator loss function is established:
L_G = E_z[ log(1 - D(G(z|y)|y)) ] + L_grad + L_reg.
3. The method of claim 1, characterized in that step three specifically comprises: each organ is represented by a vector φ of dimension d, so a shape matrix M = [φ_1, φ_2, …, φ_N] of dimension d × N can be constructed from N samples; using the statistical shape model method, a new shape φ̂ can be given by
φ̂ = φ̄ + P b,
where φ̄ = (1/N) Σ_i φ_i is the mean shape, P is the matrix formed by the t eigenvectors corresponding to the first t largest eigenvalues of the covariance matrix, and b is a shape-parameter vector of dimension t; by randomly sampling the elements of b, a "new shape" in the shape space is obtained.
4. The method of claim 1, characterized in that step four specifically comprises generating a binary label map from the prior shape formed by the points or mesh of step three, the label map having physical statistical meaning and guiding the generation of the original image; suppose a mesh shape φ̂ has corresponding binary label map y; with the obtained optimized parameters w_g and w_d, a forward computation yields the generated image G(z|y) corresponding to the label map.
CN201910700326.8A 2019-07-31 2019-07-31 Priori GAN model medical image generation method Active CN110458786B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910700326.8A CN110458786B (en) 2019-07-31 2019-07-31 Priori GAN model medical image generation method

Publications (2)

Publication Number Publication Date
CN110458786A CN110458786A (en) 2019-11-15
CN110458786B true CN110458786B (en) 2022-05-17

Family

ID=68484171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910700326.8A Active CN110458786B (en) 2019-07-31 2019-07-31 Priori GAN model medical image generation method

Country Status (1)

Country Link
CN (1) CN110458786B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014105246A2 (en) * 2012-10-05 2014-07-03 Massachusetts Institute Of Technology Nanofluidic sorting system for gene synthesis and pcr reaction products
CN108182657A (en) * 2018-01-26 2018-06-19 深圳市唯特视科技有限公司 A kind of face-image conversion method that confrontation network is generated based on cycle
US10043109B1 (en) * 2017-01-23 2018-08-07 A9.Com, Inc. Attribute similarity-based search
CN109522973A (en) * 2019-01-17 2019-03-26 云南大学 Medical big data classification method and system based on production confrontation network and semi-supervised learning
CN109740677A (en) * 2019-01-07 2019-05-10 湖北工业大学 It is a kind of to improve the semisupervised classification method for generating confrontation network based on principal component analysis

Also Published As

Publication number Publication date
CN110458786A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
Micchelli et al. On learning vector-valued functions
CN111754596B (en) Editing model generation method, device, equipment and medium for editing face image
CN111583263B (en) Point cloud segmentation method based on joint dynamic graph convolution
Huang et al. Analysis and synthesis of 3D shape families via deep‐learned generative models of surfaces
Jancsary et al. Regression Tree Fields—An efficient, non-parametric approach to image labeling problems
CN111259979A (en) Deep semi-supervised image clustering method based on label self-adaptive strategy
JP5766620B2 (en) Object region detection apparatus, method, and program
US9449395B2 (en) Methods and systems for image matting and foreground estimation based on hierarchical graphs
Gao The diffusion geometry of fibre bundles: Horizontal diffusion maps
CN107358172B (en) Human face feature point initialization method based on human face orientation classification
Singh Learning Bayesian networks from incomplete data
CN109800768A (en) The hash character representation learning method of semi-supervised GAN
CN111291705A (en) Cross-multi-target-domain pedestrian re-identification method
CN109063725B (en) Multi-view clustering-oriented multi-graph regularization depth matrix decomposition method
CN112380374B (en) Zero sample image classification method based on semantic expansion
López-Rubio Probabilistic self-organizing maps for qualitative data
CN110458786B (en) Priori GAN model medical image generation method
CN110717402B (en) Pedestrian re-identification method based on hierarchical optimization metric learning
CN116310545A (en) Cross-domain tongue image classification method based on depth layering optimal transmission
CN115688234A (en) Building layout generation method, device and medium based on conditional convolution
CN112446345B (en) Low-quality three-dimensional face recognition method, system, equipment and storage medium
Bhandari et al. From Beginning to BEGANing: Role of Adversarial Learning in Reshaping Generative Models
JP7148078B2 (en) Attribute estimation device, attribute estimation method, attribute estimator learning device, and program
Guanglong et al. Correlation Analysis between the Emotion and Aesthetics for Chinese Classical Garden Design Based on Deep Transfer Learning
CN113361530A (en) Image semantic accurate segmentation and optimization method using interaction means

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231011

Address after: No. 17, 14th Floor, Building 8, No. 9, North Section 2, 1st Ring Road, Jinniu District, Chengdu City, Sichuan Province, 610081

Patentee after: Chengdu Shanyu Technology Co.,Ltd.

Address before: 400065 No. 2, Chongwen Road, Nan'an District, Chongqing

Patentee before: Chongqing University of Posts and Telecommunications

TR01 Transfer of patent right

Effective date of registration: 20231222

Address after: Room 509, 5th Floor, Unit 1, Building 2, No. 1537, Jiannan Avenue Middle Section, Chengdu High tech Zone, China (Sichuan) Pilot Free Trade Zone, Chengdu, Sichuan Province, 610095

Patentee after: Sichuan Kaixiangyuan Software Technology Co.,Ltd.

Address before: No. 17, 14th Floor, Building 8, No. 9, North Section 2, 1st Ring Road, Jinniu District, Chengdu City, Sichuan Province, 610081

Patentee before: Chengdu Shanyu Technology Co.,Ltd.