CN113298906A - Paired clothing image generation method based on sketch guidance - Google Patents
- Publication number
- CN113298906A CN113298906A CN202110647233.0A CN202110647233A CN113298906A CN 113298906 A CN113298906 A CN 113298906A CN 202110647233 A CN202110647233 A CN 202110647233A CN 113298906 A CN113298906 A CN 113298906A
- Authority
- CN
- China
- Prior art keywords
- image
- clothing
- garment
- convolution
- sketch
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T 11/00 — 2D [Two Dimensional] image generation
- G06N 3/045 — Neural network architectures; combinations of networks
- G06T 2210/16 — Cloth (indexing scheme for image generation or computer graphics)
Abstract
The invention discloses a paired clothing image generation method based on sketch guidance, which comprises the following steps: (1) acquiring paired garment images; (2) acquiring a draft image representing the personalized preference of a user; (3) constructing a clothing image generation model comprising a generator module, an authenticity discriminator and a compatibility discriminator, the generator module comprising an encoder and a decoder; inputting a reference clothing image and a draft image into the encoder, outputting a high-level image feature, and inputting the high-level image feature into the decoder to obtain a synthesized clothing image; inputting the synthesized clothing image and the real clothing image into the authenticity discriminator and calculating a discrimination loss; inputting the synthesized clothing pair and the real clothing pair into the compatibility discriminator, calculating a compatibility score, and further calculating a compatibility loss; (4) training the clothing image generation model and applying it after training is finished. With the invention, a new garment image that is well matched and compatible with the reference garment can be generated.
Description
Technical Field
The invention belongs to the technical field of image generation, and particularly relates to a paired clothing image generation method based on sketch guidance.
Background
With the rapid development of the fashion industry towards online business, computer vision problems related to fashion are receiving increasing attention. One particularly interesting task is fashion recommendation, the goal of which is, given a garment, to recommend another garment that is compatible with it. The key to fashion recommendation is modeling the compatibility between fashion garments. In the field of computer vision, high compatibility means that fashion garments of different categories match reasonably and aesthetically to form a complete outfit. Currently, many mature methods succeed at predicting fashion garment compatibility, and in a practical application scenario garment recommendation can be roughly divided into 3 steps: first, a reference garment A is input; then, garments are selected from the database in turn and their compatibility with garment A is evaluated; finally, the garment B with the highest score is recommended. However, there are two problems: 1. if the number of garments in stock is too small, no suitable garment may be available to recommend; 2. if there are too many garments in stock, the compatibility of every pair of garments needs to be computed, creating an efficiency problem for recommendation. The main reason is that most compatibility detection methods use deep neural networks, so the computation is heavy and time-consuming.
In recent years, Generative Adversarial Networks (GANs) have achieved great success in the field of computer vision due to their powerful generative capability. In particular, the classical GAN and its variants perform remarkably in image generation, image editing, representation learning and the like, and make up for the shortcomings of conventional methods. Constructing a generative adversarial network to synthesize new garments compatible with a given garment can solve the garment-vacancy problem when the stock is small; when the stock is large, real garment samples similar to the synthesized garment can be retrieved from the database via hash search and other retrieval technologies, which is more efficient than enumerating every single garment for pairwise compatibility prediction.
Existing image generation techniques have some problems. First, the style of the generated garment is too uniform: for the same piece of clothing, different users may prefer garments of different shapes and styles to match it, but a GAN given a source image will inevitably produce the same garment result each time. Second, a randomly generated garment and the input garment image are not guaranteed to be matched and compatible, so satisfactory recommendation cannot be realized.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a paired garment image generation method based on sketch guidance, which is used for generating a new garment image which is more matched and compatible with a reference garment.
A paired clothing image generation method based on sketch guidance comprises the following steps:
(1) acquiring paired garment images to form a garment data set, wherein each pair of garment images comprises an upper garment and a lower garment;
(2) obtaining a draft graph representing the personalized preference of a user to form a draft graph data set;
(3) constructing a clothing image generation model, wherein the clothing image generation model comprises a generator module, an authenticity discriminator D_gan and a compatibility discriminator D_com; the generator module comprises an encoder E and a decoder D;
inputting a reference clothing image and a draft image into the encoder E, which outputs a high-level image feature d_0; inputting the high-level image feature d_0 into the decoder D to obtain a synthesized clothing image;
inputting the synthesized clothing image and the real clothing image into the authenticity discriminator, and calculating the discrimination loss L_adv;
inputting the synthesized clothing pair and the real clothing pair into the compatibility discriminator, calculating a compatibility score, and further calculating the compatibility loss L_com; the synthetic garment pair comprises the synthesized garment image and the reference garment image, and the real garment pair comprises the real garment image and the reference garment image;
(4) training the clothing image generation model by using the clothing data set and the draft image data set; after training is finished, inputting a given reference garment and the user's draft image into the generator to generate the corresponding complementary clothing image.
Further, in the step (2), edge detection is carried out on all the clothing images by using a Canny algorithm, and a draft image is obtained.
Further, after the clothing data set and the draft image data set are obtained, the operations of cutting, scaling, horizontal turning and normalization are further carried out on the image data, and an image with the size of 128 × 128 is obtained.
In the step (3), the encoder E gradually performs feature downsampling on the input reference clothing image and the input draft image through 7 convolution blocks; the first 6 convolution blocks each consist of a convolution layer with convolution kernel size 5 × 5, a batch normalization layer and an LReLU activation function, and the last convolution block consists of a convolution layer with convolution kernel size 4 × 4 and an LReLU activation function; finally, the input is compressed into a high-level image feature representation d_0 ∈ R^(512×1×1).
The decoder D receives the high-level image feature representation d_0 input from the encoder E, then gradually upsamples d_0 using 7 deconvolution layers to obtain the synthesized clothing image.
Further, the generator also comprises an encoder E_s having 6 convolutional layers and a CSC module;
the i-th layer feature E_i of the encoder E_s is skip-connected to the decoder D, and the i-th layer output d_i of D is input into the CSC module to obtain the (i+1)-th layer feature representation d_{i+1};
in the CSC module, d_i and E_i are first concatenated along the channel dimension and then passed sequentially through 3 convolutional layers with kernel sizes 1 × 1, 3 × 3 and 1 × 1 to obtain a new feature with the same dimension as d_i; finally, a residual connection adds the new feature to d_i to obtain the final output d_{i+1}. The formula is defined as:
d_{i+1} = ReLU(C_1(C_3(C_1(f(d_i, E_i)))) + d_i)
where f(·) is the concatenation of feature vectors along the channel dimension, C_1 denotes a 1 × 1 convolution + batch normalization layer + ReLU activation function, and C_3 a 3 × 3 convolution + batch normalization layer + ReLU activation function.
Said authenticity discriminator D_gan comprises 4 convolution blocks; the first three each comprise a convolution layer with convolution kernel size 5 × 5, batch normalization (BN) and an LReLU activation function, and are mainly responsible for downsampling the clothing image; the last convolution block contains only one convolution layer with convolution kernel size 3 × 3 and is mainly responsible for reducing the number of channels of the image features to meet the output requirement.
The compatibility discriminator D_com consists of two parts, F and M, wherein F is a 5-layer convolutional network and M is a measurement network consisting of an adaptive average pooling function and a Sigmoid activation function; F takes the concatenation of a pair of fashion clothing images (I_r, I_g) as input and converts it into a 1-dimensional latent representation after the 5-layer convolutional network:
z_o = F(I_r, I_g)
where I_r denotes the real garment pair and I_g the synthesized garment pair; z_o, the latent spatial representation of the reference garment and the generated garment as a whole, is input into M to obtain the final compatibility score:
S_{r,g} = M(z_o).
compared with the prior art, the invention has the following beneficial effects:
1. The generator of the invention takes the reference clothing image and a user-drawn draft of the complementary garment as input data; personalized clothing design is realized by introducing the user-designed draft, which avoids synthesizing the same garment every time. In the generator, the draft has two uses: first, it is input into the encoder jointly with the reference image to obtain a latent vector, which plays a guiding role in the initial stage of image generation; second, during decoding, the draft is downsampled step by step, and the obtained feature vectors are combined with each decoding layer through Conditional Skip Connections (CSC), providing a shape constraint in the later stage of image generation.
2. The present invention uses two discriminators to guide the garment generation process. First, a classical real/fake discriminator is applied so that the synthesized garment globally resembles a real sample. Second, a compatibility discriminator is adopted to assess the degree of compatibility between the generated garment and the given garment sample.
Drawings
FIG. 1 is a network framework diagram of a garment image generation model according to the present invention;
FIG. 2 is an overall network structure of the generator of the present invention;
FIG. 3 is a network architecture of the arbiter of the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.
As shown in fig. 1, a paired clothing image generation method based on sketch guidance includes the following steps:
1) data preprocessing
(1.1) preparing the paired garment data set: the data set used for garment compatibility detection is the FashionVC data set proposed by Professor Song of Shandong University in a paper published at an ACM conference in 2017. It was built by tracking 248 fashion experts on the Polyvore website and crawling their historical outfit-matching data; a threshold was set so that garment pairs with fewer than 50 clicks were not sampled. The data set contains 20,726 paired outfits, and the visual image of each garment has a clean background.
(1.2) obtaining draft images: user-drawn drafts represent the personalized preference of the user, but such data are difficult to collect, so the edge detection map of the target garment is used instead. Here the Canny algorithm is used to perform edge detection on all clothing images to obtain the "draft images". The Canny edge detector is a multi-stage algorithm that applies Non-Maximum Suppression (NMS) to eliminate spurious edge responses and determines the final edges by double thresholding.
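The gradient and double-threshold stages of the edge detector can be sketched as follows. This minimal NumPy version omits Gaussian smoothing and full non-maximum suppression, and the function name and threshold values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def sketch_from_image(gray, low=50, high=150):
    """Approximate 'draft image' extraction: Sobel gradient magnitude plus
    double thresholding with a one-step hysteresis. A full Canny pipeline
    (as used in the patent) adds Gaussian smoothing and non-maximum
    suppression; this sketch shows only the double-threshold principle."""
    gray = gray.astype(np.float32)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)  # Sobel x
    ky = kx.T                                                        # Sobel y
    pad = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    gx, gy = np.zeros_like(gray), np.zeros_like(gray)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    mag = np.hypot(gx, gy)
    strong = mag >= high            # definite edge pixels
    weak = (mag >= low) & ~strong   # kept only if adjacent to a strong edge
    grown = strong.copy()           # simplified hysteresis: grow strong by 1 px
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            grown |= np.roll(np.roll(strong, di, axis=0), dj, axis=1)
    return ((strong | (weak & grown)) * 255).astype(np.uint8)
```

In practice one would simply call OpenCV's `cv2.Canny(image, low, high)`, which implements the complete pipeline.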
(1.3) an ImageLoader class is defined to process the clothing images and draft images; after reading an image, the image data are cropped, scaled, horizontally flipped and normalized, finally yielding images of size 128 × 128.
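The described preprocessing can be sketched with plain PyTorch tensor operations. The function name, the center-crop strategy and the [-1, 1] normalization range are assumptions for illustration:

```python
import torch

def preprocess(img, train=True):
    """Crop to square, resize to 128x128, randomly flip horizontally during
    training, then normalize pixel values from [0, 255] to [-1, 1].
    img: uint8 tensor of shape (C, H, W)."""
    c, h, w = img.shape
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    img = img[:, top:top + s, left:left + s].float()   # center crop
    img = torch.nn.functional.interpolate(             # bilinear resize
        img.unsqueeze(0), size=(128, 128), mode="bilinear",
        align_corners=False).squeeze(0)
    if train and torch.rand(1).item() < 0.5:
        img = torch.flip(img, dims=[2])                # horizontal flip
    return img / 127.5 - 1.0                           # [0,255] -> [-1,1]
```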
2) Generator module
(2.1) encoder E: as shown in fig. 2, the encoder takes the reference image and the user-drawn sketch as input; after concatenating the two, feature downsampling is performed step by step through 7 convolution blocks. The first 6 convolution blocks each consist of a convolution layer with convolution kernel size 5 × 5, a Batch Normalization (BN) layer and an LReLU activation function, and the last convolution block consists of a convolution layer with convolution kernel size 4 × 4 and an LReLU activation function. Finally, the input is compressed into a high-level image feature representation d_0 ∈ R^(512×1×1), which is input to the decoder D.
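A minimal PyTorch sketch of this encoder follows. The channel widths, strides and sketch channel count are assumptions not given in the text, chosen so that a 128 × 128 input is compressed to 512 × 1 × 1:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Encoder E: reference image and sketch concatenated on channels,
    downsampled by 7 convolution blocks (first 6: 5x5 conv + BN + LReLU,
    stride 2; last: 4x4 conv + LReLU) into a 512x1x1 feature d0."""
    def __init__(self, in_ch=4):  # 3-channel garment + 1-channel sketch (assumed)
        super().__init__()
        widths = [64, 128, 256, 512, 512, 512]
        blocks, prev = [], in_ch
        for w in widths:  # six stride-2 blocks: 128 -> 64 -> ... -> 2
            blocks += [nn.Conv2d(prev, w, 5, stride=2, padding=2),
                       nn.BatchNorm2d(w),
                       nn.LeakyReLU(0.2, inplace=True)]
            prev = w
        # final block: 4x4 conv collapses 2x2 -> 1x1, no BatchNorm
        blocks += [nn.Conv2d(prev, 512, 4, stride=1, padding=1),
                   nn.LeakyReLU(0.2, inplace=True)]
        self.net = nn.Sequential(*blocks)

    def forward(self, ref_img, sketch):
        return self.net(torch.cat([ref_img, sketch], dim=1))
```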
(2.2) decoder D: as shown in fig. 2, the decoder receives the high-level image feature representation d_0 from the encoder and gradually upsamples d_0 through 7 deconvolution layers to synthesize the clothing image.
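The corresponding decoder can be sketched with 7 transposed convolutions, each doubling the spatial size (1 → 128). Channel widths, normalization choices and the final Tanh are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Decoder D: 7 transposed-convolution layers upsample the 512x1x1
    feature d0 back to a 3x128x128 garment image."""
    def __init__(self):
        super().__init__()
        widths = [512, 512, 512, 256, 128, 64, 3]
        layers, prev = [], 512
        for i, w in enumerate(widths):  # each 4x4/s2/p1 layer doubles H and W
            layers.append(nn.ConvTranspose2d(prev, w, 4, stride=2, padding=1))
            if i < len(widths) - 1:
                layers += [nn.BatchNorm2d(w), nn.ReLU(inplace=True)]
            prev = w
        layers.append(nn.Tanh())  # image in [-1, 1], matching the preprocessing
        self.net = nn.Sequential(*layers)

    def forward(self, d0):
        return self.net(d0)
```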
(2.3) conditional jump connection: sketch-guided garment image compositing presents additional challenges to the generator compared to compositing random content. The fine local shape information degrades with increasing depth of the encoder network, resulting in distorted results from the generator. In some extreme cases, the generated content may lose the tiny shape information in the sketch entirely, so that the generated clothing does not match the user's sketch exactly. To ensure that the synthesized garment image is consistent with the sketch, we introduced a Conditional Skip Connection (CSC). The sketch is used as a constraint condition to participate in each decoding process, the edge and detail information of the sketch is strengthened, the generator can be effectively helped to remember the information of the expected shape, and the purpose of completely matching the synthesized clothing and the sketch pattern is achieved.
As shown in FIG. 2, first, an encoder E_s with 6 convolutional layers is constructed. The i-th layer feature E_i of E_s is skip-connected to the decoder D, and the i-th layer output d_i of D is input into the CSC module to obtain the (i+1)-th layer feature representation d_{i+1}. In the CSC module, d_i and E_i are first concatenated along the channel dimension and then passed sequentially through 3 convolutional layers with kernel sizes 1 × 1, 3 × 3 and 1 × 1, yielding a new feature with the same dimension as d_i; finally, a residual connection adds the new feature to d_i to obtain the final output d_{i+1}:
d_{i+1} = ReLU(C_1(C_3(C_1(f(d_i, E_i)))) + d_i)
where f(·) is the concatenation of feature vectors along the channel dimension, C_1 denotes a 1 × 1 convolution + batch normalization layer + ReLU activation function, and C_3 a 3 × 3 convolution + batch normalization layer + ReLU activation function.
3) Discriminator module
The use of multiple discriminators can alleviate the mode collapse problem in GAN training. The task of the invention must satisfy both authenticity and compatibility requirements; for this purpose, two discriminators are designed: an authenticity discriminator (D_gan) and a compatibility discriminator (D_com).
(3.1) authenticity discriminator: the authenticity discriminator (D_gan) is responsible for the visual quality of each synthesized image; in other words, it ensures that each generated image looks like a real image. As shown in fig. 3(a), it consists of 4 convolution blocks: the first three each comprise a convolution layer with convolution kernel size 5 × 5, batch normalization (BN) and an LReLU activation function, and are mainly responsible for downsampling the clothing image; the last block contains only one convolution layer with convolution kernel size 3 × 3 and is mainly responsible for reducing the number of feature channels to meet the output requirement. To ensure image generation quality, the min-max objective function of the authenticity discriminator is defined, in the standard conditional GAN form, as:
L_adv = min_G max_D E_{x,y}[log D_gan(x, y)] + E_{x,s}[log(1 − D_gan(x, G(x, s)))]
where x represents a query garment, s represents the line draft drawn by the user, G(x, s) represents the garment sample generated from the query garment and the line draft, and y represents a real garment sample compatible with the query garment x.
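The adversarial objective can be sketched as follows. The non-saturating variant used here is a common, numerically stabler choice and is our assumption; the patent only names a min-max objective:

```python
import torch
import torch.nn.functional as F

def adversarial_losses(d_real, d_fake):
    """Non-saturating GAN losses from raw discriminator logits:
    d_real = D_gan(x, y), d_fake = D_gan(x, G(x, s))."""
    ones = torch.ones_like(d_real)
    zeros = torch.zeros_like(d_fake)
    # discriminator: push real samples toward 1, generated samples toward 0
    loss_d = (F.binary_cross_entropy_with_logits(d_real, ones) +
              F.binary_cross_entropy_with_logits(d_fake, zeros))
    # generator: push generated samples toward 1
    loss_g = F.binary_cross_entropy_with_logits(d_fake, ones)
    return loss_d, loss_g
```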
(3.2) compatibility discriminator: the compatibility discriminator (D_com) is responsible for identifying whether the generated clothing image and the reference image are highly compatible. D_com evaluates the style preference of the user and guides the training of the generator so that the clothing image synthesized by the generator is compatible with the reference image. As shown in fig. 3(b), D_com comprises two parts, F and M, where F is a 5-layer convolutional network and M is a measurement network consisting of an adaptive average pooling function and a Sigmoid activation function. F takes the concatenation of a pair of fashion clothing images (I_r, I_g) as input and converts it into a 1-dimensional latent representation after the 5-layer convolutional network:
z_o = F(I_r, I_g)
where z_o, the latent spatial representation of the reference garment and the generated garment as a whole, is input into M to obtain the final compatibility score S:
S = M(z_o).
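The two-part D_com = M ∘ F can be sketched as below; the channel widths and kernel sizes of F are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CompatibilityDiscriminator(nn.Module):
    """D_com: F is a 5-layer convolutional network over the channel-wise
    concatenation of a garment pair; M is adaptive average pooling followed
    by Sigmoid, yielding a compatibility score in (0, 1)."""
    def __init__(self):
        super().__init__()
        widths = [32, 64, 128, 256, 1]
        layers, prev = [], 6  # two RGB garments concatenated on channels
        for i, w in enumerate(widths):
            layers.append(nn.Conv2d(prev, w, 4, stride=2, padding=1))
            if i < len(widths) - 1:
                layers.append(nn.LeakyReLU(0.2, inplace=True))
            prev = w
        self.F = nn.Sequential(*layers)       # pair -> 1-channel feature map
        self.pool = nn.AdaptiveAvgPool2d(1)   # M: adaptive average pooling...
        self.act = nn.Sigmoid()               # ...then Sigmoid score

    def forward(self, garment_a, garment_b):
        z = self.F(torch.cat([garment_a, garment_b], dim=1))
        return self.act(self.pool(z)).view(-1)  # one score per pair
```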
The compatibility loss of the discriminator D_com can be calculated by the following formula:
L_com = E[log S_{q,t}] + E[log(1 − S_{q,g})]
where S_{q,t} represents the compatibility discrimination result of a real garment pair (I_q, I_t), and S_{q,g} represents the compatibility discrimination result of a generated garment pair (I_q, I_g).
(3.3) other loss functions: as shown in FIG. 1, for the fashion garment synthesis task, the generated garment I_g should also be as consistent as possible with the real garment I_t. Therefore, an L_1 loss is used to calculate the image reconstruction loss between the generated garment I_g and the real garment I_t:
L_reconstruction = ||I_t − I_g||_1.
In addition, perceptual and style losses are introduced. A pre-trained VGG16 is used to extract feature maps of the generated garment I_g and the real garment I_t respectively, and the L_1 distance between them is computed. The perceptual loss is expressed as:
L_perceptual = Σ_i ||φ_i(I_g) − φ_i(I_t)||_1
where φ_i is the feature map of the i-th layer of the pre-trained VGG16. By optimizing the perceptual loss function, the generated garment I_g and the real garment I_t remain consistent in feature space.
The style loss helps produce a garment consistent with the real garment in terms of color, texture, etc. It measures the statistical error between activated feature maps, commonly computed via their Gram matrices:
L_style = Σ_i ||G(φ_i(I_g)) − G(φ_i(I_t))||_1
where G(·) denotes the Gram matrix of a feature map.
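The Gram-matrix statistic can be sketched as follows; the use of the Gram matrix follows the standard style-transfer formulation (the patent only mentions "statistical error between feature maps", so the exact statistic is an assumption):

```python
import torch

def gram_matrix(feat):
    """Channel-correlation (Gram) matrix of a feature map of shape
    (B, C, H, W), normalized by the number of elements."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(feats_gen, feats_real):
    """L1 distance between Gram matrices, summed over a list of layer
    feature maps (e.g. from a pre-trained VGG16)."""
    return sum((gram_matrix(fg) - gram_matrix(fr)).abs().mean()
               for fg, fr in zip(feats_gen, feats_real))
```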
The embodiments described above are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions and equivalents made within the scope of the principles of the present invention should be included in the scope of the present invention.
Claims (8)
1. A paired clothing image generation method based on sketch guidance is characterized by comprising the following steps:
(1) acquiring paired garment images to form a garment data set, wherein each pair of garment images comprises an upper garment and a lower garment;
(2) obtaining a draft graph representing the personalized preference of a user to form a draft graph data set;
(3) constructing a clothing image generation model, wherein the clothing image generation model comprises a generator module, an authenticity discriminator D_gan and a compatibility discriminator D_com; the generator module comprises an encoder E and a decoder D;
inputting a reference clothing image and a draft image into the encoder E, which outputs a high-level image feature d_0; inputting the high-level image feature d_0 into the decoder D to obtain a synthesized clothing image;
inputting the synthesized clothing image and the real clothing image into the authenticity discriminator, and calculating the discrimination loss L_adv;
inputting the synthesized clothing pair and the real clothing pair into the compatibility discriminator, calculating a compatibility score, and further calculating the compatibility loss L_com; the synthetic garment pair comprises the synthesized garment image and the reference garment image, and the real garment pair comprises the real garment image and the reference garment image;
(4) training the clothing image generation model by using the clothing data set and the draft image data set; after training is finished, inputting a given reference garment and the user's draft image into the generator to generate the corresponding complementary clothing image.
2. The sketch guidance-based paired clothing image generation method according to claim 1, wherein in the step (2), a Canny algorithm is used to perform edge detection on all clothing images to obtain a sketch.
3. The sketch-guidance-based paired clothing image generation method according to claim 1, wherein after obtaining the clothing data set and the sketch data set, the method further comprises performing cropping, scaling, horizontal flipping and normalization operations on the image data to obtain an image with a size of 128 x 128.
4. The paired garment image generation method based on sketch guidance as claimed in claim 1, wherein in step (3), said encoder E performs feature downsampling on the input reference garment image and the draft image step by step through 7 convolution blocks; the first 6 convolution blocks each consist of a convolution layer with convolution kernel size 5 × 5, a batch normalization layer and an LReLU activation function, and the last convolution block consists of a convolution layer with convolution kernel size 4 × 4 and an LReLU activation function; finally, the input is compressed into a high-level image feature representation d_0 ∈ R^(512×1×1).
5. The sketch-based paired garment image generation method as claimed in claim 4, wherein in step (3), the decoder D receives the high-level image feature representation d_0 input from the encoder E and gradually upsamples d_0 using 7 deconvolution layers to obtain the synthesized clothing image.
6. The method of claim 5, wherein the generator further comprises an encoder E_s having 6 convolutional layers and a CSC module;
the i-th layer feature E_i of the encoder E_s is skip-connected to the decoder D, and the i-th layer output d_i of D is input into the CSC module to obtain the (i+1)-th layer feature representation d_{i+1};
in the CSC module, d_i and E_i are first concatenated along the channel dimension and then passed sequentially through 3 convolutional layers with kernel sizes 1 × 1, 3 × 3 and 1 × 1 to obtain a new feature with the same dimension as d_i; finally, a residual connection adds the new feature to d_i to obtain the final output d_{i+1}. The formula is defined as:
d_{i+1} = ReLU(C_1(C_3(C_1(f(d_i, E_i)))) + d_i)
where f(·) is the concatenation of feature vectors along the channel dimension, C_1 denotes a 1 × 1 convolution + batch normalization layer + ReLU activation function, and C_3 a 3 × 3 convolution + batch normalization layer + ReLU activation function.
7. The sketch-guided paired garment image generation method as claimed in claim 1, wherein in step (3), said authenticity discriminator D_gan comprises 4 convolution blocks; the first three each comprise a convolution layer with convolution kernel size 5 × 5, batch normalization (BN) and an LReLU activation function, and are mainly responsible for downsampling the clothing image; the last convolution block contains only one convolution layer with convolution kernel size 3 × 3 and is mainly responsible for reducing the number of channels of the image features to meet the output requirement.
8. The paired clothing image generation method based on sketch guidance as claimed in claim 1, wherein in step (3), the compatibility discriminator D_com consists of two parts, F and M, wherein F is a 5-layer convolutional network and M is a measurement network consisting of an adaptive average pooling function and a Sigmoid activation function; F takes the concatenation of a pair of fashion clothing images (I_r, I_g) as input and converts it into a 1-dimensional latent representation after the 5-layer convolutional network:
z_o = F(I_r, I_g)
where I_r denotes the real garment pair and I_g the synthesized garment pair; z_o, the latent spatial representation of the reference garment and the generated garment as a whole, is input into M to obtain the final compatibility score:
S_{r,g} = M(z_o).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110647233.0A CN113298906A (en) | 2021-06-10 | 2021-06-10 | Paired clothing image generation method based on sketch guidance |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113298906A (en) | 2021-08-24
Family
ID=77327916
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110647233.0A Pending CN113298906A (en) | 2021-06-10 | 2021-06-10 | Paired clothing image generation method based on sketch guidance |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113298906A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105718552A (en) * | 2016-01-19 | 2016-06-29 | 北京服装学院 | Clothing freehand sketch based clothing image retrieval method |
CN110223359A (en) * | 2019-05-27 | 2019-09-10 | 浙江大学 | It is a kind of that color model and its construction method and application on the dual-stage polygamy colo(u)r streak original text of network are fought based on generation |
CN110659958A (en) * | 2019-09-06 | 2020-01-07 | 电子科技大学 | Clothing matching generation method based on generation of countermeasure network |
JP2020013543A (en) * | 2018-07-20 | 2020-01-23 | 哈爾濱工業大学(深セン) | Model clothing recommendation method based upon generative adversarial network |
CN110909754A (en) * | 2018-09-14 | 2020-03-24 | 哈尔滨工业大学(深圳) | Attribute generation countermeasure network and matching clothing generation method based on same |
CN111861672A (en) * | 2020-07-28 | 2020-10-30 | 青岛科技大学 | Multimodal generative compatible garment matching scheme generation method and system |
History
- 2021-06-10: Application CN202110647233.0A filed in China (status: active, Pending)
Non-Patent Citations (1)
Title |
---|
LINLIN LIU et al.: "Toward AI fashion design: An Attribute-GAN model for clothing match", Neurocomputing, vol. 341, 14 May 2019 (2019-05-14), pages 156-167 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110378985B (en) | Animation drawing auxiliary creation method based on GAN | |
Pan et al. | Loss functions of generative adversarial networks (GANs): Opportunities and challenges | |
Kadam et al. | Detection and localization of multiple image splicing using MobileNet V1 | |
CN109685724B (en) | Symmetry-aware face image completion method based on deep learning | |
CN112102303B (en) | Semantic image analogy method based on single-image generative adversarial network | |
Keren | Painter identification using local features and naive bayes | |
CN113762138B (en) | Identification method, device, computer equipment and storage medium for fake face pictures | |
CN111861945B (en) | Text-guided image restoration method and system | |
Li et al. | Globally and locally semantic colorization via exemplar-based broad-GAN | |
Lv et al. | Approximate intrinsic voxel structure for point cloud simplification | |
Zhu et al. | Learning deep patch representation for probabilistic graphical model-based face sketch synthesis | |
CN113724354B (en) | Gray image coloring method based on reference picture color style | |
CN115618452B (en) | Clothing image intelligent generation system with designer style | |
CN113902613A (en) | Image style migration system and method based on three-branch clustering semantic segmentation | |
Karjus et al. | Compression ensembles quantify aesthetic complexity and the evolution of visual art | |
Liang et al. | Depth map guided triplet network for deepfake face detection | |
Gao | A method for face image inpainting based on generative adversarial networks | |
Tan et al. | Text-to-image synthesis with self-supervised bi-stage generative adversarial network | |
El-Gayar et al. | A novel approach for detecting deep fake videos using graph neural network | |
CN109035318B (en) | Image style conversion method | |
CN113658285B (en) | Method for generating artistic sketches from face photos | |
CN113298906A (en) | Paired clothing image generation method based on sketch guidance | |
CN112529081A (en) | Real-time semantic segmentation method based on efficient attention calibration | |
Miao et al. | Chinese font migration combining local and global features learning | |
US20240169701A1 (en) | Affordance-based reposing of an object in a scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||