CN109543742A - Image local information transfer method based on GAN and Self-Attention - Google Patents

Image local information transfer method based on GAN and Self-Attention

Info

Publication number
CN109543742A
Authority
CN
China
Prior art keywords
image
feature
network
self-attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811368715.7A
Other languages
Chinese (zh)
Inventor
杨辉
郑军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jushi Technology (Shanghai) Co Ltd
Original Assignee
Jushi Technology (Shanghai) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jushi Technology (Shanghai) Co Ltd
Priority to CN201811368715.7A
Publication of CN109543742A
Current legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/10 - Pre-processing; Data cleansing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an image local information transfer method based on GAN and Self-Attention, comprising the following steps: 1) obtaining an image migration task; 2) according to the image migration task, performing image local information transfer using a trained neural network and outputting the migrated image. The neural network comprises an image generator and a discriminator, the image generator comprising a convolutional network and a deconvolution network: the convolutional network convolves the pictures into a hidden feature space, part of the hidden features are exchanged in combination with a Self-Attention model, and the exchanged features are deconvolved into new pictures by the deconvolution network to realize migration of local image information features; the discriminator is used to judge whether the generated pictures are real or fake. Compared with the prior art, the present invention avoids interference between the local migration and the overall image pixels, and resolves the disturbance that local image migration causes to the image as a whole.

Description

Image local information transfer method based on GAN and Self-Attention
Technical field
The present invention relates to the field of deep learning and to an image information processing method, and more particularly to an image local information transfer method based on GAN and Self-Attention.
Background technique
With the continuous development of machine learning methods, big data has become a treasure trove of knowledge for academia, industry and other sectors. However, many fields currently suffer from imbalanced data categories or even missing data, which seriously affects the stability and robustness of machine model training. How to generate high-quality missing data from existing data sets has therefore become a pressing problem.
At present, generating new image data with generative adversarial networks (GAN) has become a research hotspot in the field of deep learning. In particular, generating new pictures with image-to-image translation techniques is a common and effective way to remedy missing image data. However, most of the prior art concentrates on image style transfer, where the migrated content is usually a whole-image match of style and content. A small number of image migration techniques address the migration of local features, but these techniques find it difficult to control the conflict between localizing the local feature and keeping the overall image features unchanged, so the robustness of the migrated pictures still needs to be improved. For example, Taihong Xiao et al. proposed the ELEGANT method (ECCV 2018), which mainly replaces traditional label information with exchanged hidden encodings, enriching the annotation information to improve the diversity and quality of face attribute migration. However, ELEGANT does not control the interference between the local information of the image and the pixels of the whole image, so during migration it easily affects the generation of the other pixel information in the image domain.
Summary of the invention
It is an object of the present invention to overcome the above-mentioned drawbacks of the prior art and provide an image local information transfer method based on GAN and Self-Attention.
The purpose of the present invention can be achieved through the following technical solutions:
An image local information transfer method based on GAN and Self-Attention, comprising the following steps:
1) obtaining an image migration task, including the picture pair to be migrated and the features to be migrated;
2) according to the image migration task, performing image local information transfer using a trained neural network and outputting the migrated image.
The neural network comprises an image generator and a discriminator, the image generator comprising a convolutional network and a deconvolution network: the convolutional network convolves the pictures into a hidden feature space, part of the hidden features are exchanged in combination with a Self-Attention model, and the exchanged features are deconvolved into new pictures by the deconvolution network to realize migration of local image information features; the discriminator is used to judge whether the generated pictures are real or fake.
Further, in the image generator, the connection between the convolutional network and the deconvolution network is established through a U-net structure.
Further, the convolutional network includes an input layer, convolutional layers, non-linear normalization layers, network activation layers and Self-Attention layers.
Further, the deconvolution network includes a hidden feature input layer, deconvolution layers, non-linear normalization layers, network activation layers, a Tanh activation layer and Self-Attention layers.
Further, when training the image generator, in each iteration the output of the image generator is fed into the discriminator to compute a loss error, backpropagation is performed, and the network parameters of the image generator are updated with stochastic gradient descent until the training iteration limit is reached.
Further, when training the discriminator, in each iteration one image is taken from the training set and one from the output of the image generator; after scaling, both are input into the discriminator at the same time to obtain a loss error, backpropagation is performed, and the network parameters of the discriminator are updated with stochastic gradient descent until the training iteration limit is reached.
Further, multiple discriminators are provided.
Further, four discriminators are provided: two of them respectively discriminate the generated original images and the migrated images, while the other two respectively discriminate whether the generated original images and migrated images are real or fake after multi-scale scaling.
Further, the discriminator includes an input layer, convolutional layers, non-linear normalization layers, network activation layers, a fully connected layer and a binary classification activation layer.
In order to suppress the disturbance that local image migration causes to the whole image, the present invention mainly applies the Self-Attention model when exchanging the hidden encodings. Self-Attention is an autonomous learning method based on self-attention over the feature map space. The present invention convolves pictures into a hidden feature space through a convolutional neural network, exchanges part of the hidden features in combination with the Self-Attention model, and finally deconvolves the exchanged features into new pictures to realize local image feature migration. The Self-Attention model allows the method to learn the correlations between pixels inside an image, and combined with discriminators at multiple scales it improves the algorithm's ability to locate local features. Therefore, the present invention can effectively transfer local image features while reducing interference with the overall pixel distribution of the image, thus improving the effect of local feature migration.
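For illustration only, a Self-Attention layer of the kind referred to here can be sketched in PyTorch as a non-local block that computes attention over the spatial positions of a feature map. This is a minimal sketch, not the patent's implementation: the class name NonLocalBlock, the 1x1-convolution channel reduction and the learnable residual weight gamma are assumptions borrowed from common self-attention GAN practice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    """Self-attention over the spatial positions of a feature map (illustrative sketch)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        inner = max(channels // reduction, 1)
        self.query = nn.Conv2d(channels, inner, kernel_size=1)      # projects features for queries
        self.key = nn.Conv2d(channels, inner, kernel_size=1)        # projects features for keys
        self.value = nn.Conv2d(channels, channels, kernel_size=1)   # projects features for values
        self.gamma = nn.Parameter(torch.zeros(1))                   # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)     # (B, HW, C')
        k = self.key(x).flatten(2)                       # (B, C', HW)
        v = self.value(x).flatten(2)                     # (B, C, HW)
        attn = F.softmax(torch.bmm(q, k), dim=-1)        # (B, HW, HW): pixel-to-pixel affinities
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                      # residual connection keeps the original features
```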
In order to further enhance the image migration effect, the present invention also adds a generative adversarial mechanism to the network. The algorithm mainly consists of an image generator and discriminators: the generator produces the new migrated images, and the discriminators judge whether the generated images are real or fake. The game mechanism of adversarial generation improves the quality of the generated images. In the generator network, the present invention applies a U-net structure to fuse the features of the encoder and the decoder, improving the quality of the generated images and reducing the interference of local migration information with the overall image pixels. In the discriminator network, the present invention uses multiple discriminators to improve picture quality, which further guides the generator to learn the local information of the image and enhances the migration effect.
In summary, the innovative advantages of the invention are embodied in the following aspects:
First, the present invention combines the exchange of hidden information features with the Self-Attention model to perform local migration of image features. Self-Attention can learn the direct relationships between pixels in an image, effectively avoiding interference between the local migration and the global pixels of the image.
Second, the present invention uses the cross-layer skip connections of the U-net network, which effectively enhances the quality of the global pixels of the pictures produced by the generator and reduces the interference introduced by local information migration.
Third, when applying the generative adversarial network structure (GAN), the present invention uses multiple discriminators to improve picture generation; discriminators at multiple scales guide the network to learn local image features and enhance the migration effect.
Detailed description of the invention
Fig. 1 is a schematic structural diagram of the generator of the present invention;
Fig. 2 is a schematic structural diagram of the discriminators of the present invention;
Fig. 3 is a schematic flow diagram of generator training in the present invention;
Fig. 4 is a schematic flow diagram of discriminator training in the present invention;
Fig. 5 is a schematic diagram of the test process of the present invention;
Fig. 6 is a schematic diagram of the effect of migrating face bangs on the celebA public data set according to the present invention;
Fig. 7 is a schematic diagram of the effect of migrating a face beard on the celebA public data set according to the present invention.
Specific embodiment
The present invention is described in detail below with reference to the drawings and a specific embodiment. The embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation methods and specific operation processes are given, but the protection scope of the present invention is not limited to the following embodiment.
The present invention provides an image local information transfer method based on GAN and Self-Attention. The method runs on a GPU. First, an image migration task is obtained, including the picture pair to be migrated and the features to be migrated; then, according to the image migration task, image local information transfer is performed using a trained neural network; finally, the migrated image is output.
As shown in Fig. 1 and Fig. 2, the neural network includes an image generator and multiple discriminators. The image generator includes a convolutional network and a deconvolution network: the convolutional network convolves the pictures into a hidden feature space, part of the hidden features are exchanged in combination with the Self-Attention model, and the exchanged features are deconvolved into new pictures by the deconvolution network to realize local image information feature migration. The multiple discriminators are used to judge whether the generated pictures are real or fake, further promoting the local information migration and improving the quality of the migrated images.
In the image generator, the connection between the convolutional network and the deconvolution network is established through a U-net structure. The convolutional network includes an input layer, convolutional layers, non-linear normalization layers, network activation layers and Self-Attention layers; the deconvolution network includes a hidden feature input layer, deconvolution layers, non-linear normalization layers, network activation layers and a Tanh activation layer.
In the present embodiment, the image generator is built as follows:
1a) The number of layers of the convolutional network and the number of feature maps per layer are preset according to the scale of the problem. The present embodiment adopts the structure Input1-Conv64-Norm-LeakyReLu-NonLocalBlock-Conv128-Norm-LeakyReLu-NonLocalBlock-Conv256-Norm-LeakyReLu-NonLocalBlock-Conv512-Norm-LeakyReLu-NonLocalBlock-Conv1024-Norm-LeakyReLu, where Input denotes the input layer, the number n after a layer name denotes the number of feature maps of that layer, Conv denotes a convolutional layer, Norm denotes a non-linear normalization layer, LeakyReLu denotes a network activation layer, and NonLocalBlock denotes a Self-Attention layer;
1b) The structure of the deconvolution network is preset. The present embodiment adopts the structure Input1-NonLocalBlock-ConvTranspose512-Norm-ReLu-NonLocalBlock-ConvTranspose256-Norm-ReLu-NonLocalBlock-ConvTranspose128-Norm-ReLu-NonLocalBlock-ConvTranspose64-Norm-ReLu-ConvTranspose3-Tanh, where Input denotes the hidden feature input layer, the number n after a layer name denotes the number of feature maps of that layer, ConvTranspose denotes a deconvolution layer, Norm denotes a non-linear normalization layer, ReLu denotes a network activation layer, Tanh denotes a Tanh activation layer, and NonLocalBlock denotes a Self-Attention layer;
1c) Finally, the U-net structure is built: corresponding Conv and ConvTranspose layers of the convolutional and deconvolution networks are connected to form the U-net model, and the output features of each Conv layer are fed into the ConvTranspose computing module with the same number of feature maps.
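A minimal PyTorch sketch of such an encoder/decoder with U-net skips is given below, using the NonLocalBlock sketched earlier. It is an illustration under stated assumptions rather than the patent's exact model: a three-channel RGB input is assumed (even though the structure string reads Input1), 4x4 stride-2 convolutions and instance normalization stand in for the Conv and Norm layers, and the skip connections of step 1c) are implemented by channel concatenation, which is one possible reading of that step.

```python
import torch
import torch.nn as nn

def down(in_c, out_c):
    """Conv-Norm-LeakyReLu-NonLocalBlock encoder stage (sketch)."""
    return nn.Sequential(
        nn.Conv2d(in_c, out_c, kernel_size=4, stride=2, padding=1),
        nn.InstanceNorm2d(out_c),
        nn.LeakyReLU(0.2, inplace=True),
        NonLocalBlock(out_c),                # Self-Attention layer from the earlier sketch
    )

def up(in_c, out_c):
    """NonLocalBlock-ConvTranspose-Norm-ReLu decoder stage (sketch)."""
    return nn.Sequential(
        NonLocalBlock(in_c),
        nn.ConvTranspose2d(in_c, out_c, kernel_size=4, stride=2, padding=1),
        nn.InstanceNorm2d(out_c),
        nn.ReLU(inplace=True),
    )

class Generator(nn.Module):
    """Encoder/decoder with U-net skips between stages of matching feature-map count."""

    def __init__(self):
        super().__init__()
        self.e1, self.e2, self.e3 = down(3, 64), down(64, 128), down(128, 256)
        self.e4, self.e5 = down(256, 512), down(512, 1024)
        self.d1, self.d2 = up(1024, 512), up(512 * 2, 256)       # *2: skip features are concatenated
        self.d3, self.d4 = up(256 * 2, 128), up(128 * 2, 64)
        self.out = nn.Sequential(
            nn.ConvTranspose2d(64 * 2, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),
        )

    def encode(self, x):
        f1 = self.e1(x); f2 = self.e2(f1); f3 = self.e3(f2)
        f4 = self.e4(f3); f5 = self.e5(f4)                       # f5: 1024-channel hidden features
        return [f1, f2, f3, f4, f5]

    def decode(self, feats):
        f1, f2, f3, f4, f5 = feats
        x = self.d1(f5)
        x = self.d2(torch.cat([x, f4], dim=1))                   # U-net skip: Conv512 -> ConvTranspose512
        x = self.d3(torch.cat([x, f3], dim=1))
        x = self.d4(torch.cat([x, f2], dim=1))
        return self.out(torch.cat([x, f1], dim=1))

    def forward(self, x):
        return self.decode(self.encode(x))
```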
Fig. 3 illustrates the training process of the image generator: two input pictures, labeled A and B, are passed through the convolutional network to obtain two groups of hidden convolutional features with 1024 channels. These features are then divided into 2*n groups, where n is the number of features to be migrated, and each group of hidden features is numbered. Assuming the i-th local feature is to be migrated, the i-th groups of hidden features of the two images are exchanged, producing two new groups of 1024-channel hidden features. After the exchange, four groups of 1024-channel hidden features are available in total, and these four groups are respectively fed into the deconvolution network to generate four new pictures A', B', C, D. Finally, the neural network learns the features of the pictures by backpropagating gradients, optimizing the generation quality of the four pictures A', B', C, D.
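The hidden-feature exchange described above can be sketched on top of the Generator above as follows. Splitting the 1024 bottleneck channels evenly into n attribute groups, and reusing each input's own encoder skip features when decoding the exchanged bottleneck, are illustrative assumptions, not details taken from the patent.

```python
import torch

def exchange_and_generate(gen, img_a, img_b, i, n):
    """Swap the i-th of n hidden-feature groups between two images and decode four outputs (sketch).

    Returns A', B' (decoded from the unmodified features) and C, D (decoded after exchanging group i).
    """
    feats_a, feats_b = gen.encode(img_a), gen.encode(img_b)
    ha, hb = feats_a[-1], feats_b[-1]                 # 1024-channel hidden features of A and B
    group = ha.shape[1] // n                          # channels assigned to each attribute group
    s, e = i * group, (i + 1) * group

    hc, hd = ha.clone(), hb.clone()
    hc[:, s:e], hd[:, s:e] = hb[:, s:e], ha[:, s:e]   # exchange the i-th group of hidden features

    a_rec = gen.decode(feats_a)                       # A'
    b_rec = gen.decode(feats_b)                       # B'
    c_img = gen.decode(feats_a[:-1] + [hc])           # C: A carrying B's i-th feature group
    d_img = gen.decode(feats_b[:-1] + [hd])           # D: B carrying A's i-th feature group
    return a_rec, b_rec, c_img, d_img
```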
As shown in Fig. 2, four discriminators are provided in the present embodiment. Two of them respectively discriminate the generated original images and the migrated images, and the other two respectively discriminate whether the generated original images and migrated images are real or fake after scaling, thereby further promoting local image information migration. The discriminators are built as follows:
2a) The first discriminator, discriminator0, is constructed with the network structure Input1-Conv64-Norm-LeakyReLu-Conv128-Norm-LeakyReLu-Conv256-Norm-LeakyReLu-Conv512-Norm-LeakyReLu-Linear-Sigmoid, where Input denotes the input layer, the number n after a layer name denotes the number of feature maps of that layer, Conv denotes a convolutional layer, Norm denotes a non-linear normalization layer, LeakyReLu denotes a network activation layer, Linear denotes a fully connected layer, and Sigmoid denotes a binary classification activation layer. Discriminator0 is trained as follows: the input pictures A, B and the pictures A', B' generated by the generator, all at the original size, are used as the input of discriminator0, which is trained to learn whether the four pictures A, B, A', B' are real or fake, as shown in Fig. 4.
2b) The second discriminator, discriminator1, is constructed with the same network structure as discriminator0. Discriminator1 is trained as follows: the input pictures A, B and the pictures A', B' generated by the generator are enlarged by a factor of two and used as the input of discriminator1, which is trained to learn whether the four pictures A, B, A', B' are real or fake.
2c) The third discriminator, discriminator2, is constructed with the same network structure as discriminator0 and discriminator1. Discriminator2 is trained as follows: the input pictures A, B and the pictures C, D generated by the generator, all at the original size, are used as the input of discriminator2, which is trained to learn whether the four pictures A, B, C, D are real or fake;
2d) The fourth discriminator, discriminator3, is constructed with the same network structure as discriminator2. Discriminator3 is trained as follows: the input pictures A, B and the pictures C, D generated by the generator are reduced by a factor of two and used as the input of discriminator3, which is trained to learn whether the four pictures A, B, C, D are real or fake.
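A minimal sketch of the discriminator structure of step 2a) and its four-scale arrangement follows. Instance normalization for the Norm layers and a 64x64 working resolution are assumptions for illustration; and because the fully connected layer ties the network to a fixed input resolution, each of the four discriminators is instantiated here for the scale it will judge, which is one way of reading the statement that all four share the same structure.

```python
import torch.nn as nn

class Discriminator(nn.Module):
    """Four Conv-Norm-LeakyReLu stages followed by Linear and Sigmoid, per step 2a) (sketch)."""

    def __init__(self, img_size: int = 64):
        super().__init__()
        layers, in_c = [], 3
        for out_c in (64, 128, 256, 512):
            layers += [
                nn.Conv2d(in_c, out_c, kernel_size=4, stride=2, padding=1),
                nn.InstanceNorm2d(out_c),
                nn.LeakyReLU(0.2, inplace=True),
            ]
            in_c = out_c
        self.features = nn.Sequential(*layers)
        feat_size = img_size // 16                       # four stride-2 convolutions halve the size four times
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(512 * feat_size * feat_size, 1),   # Linear: fully connected layer
            nn.Sigmoid(),                                # binary real/fake score
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One discriminator per judged scale (assumed 64x64 working resolution):
disc0 = Discriminator(img_size=64)    # A, B, A', B' at the original size
disc1 = Discriminator(img_size=128)   # the same pictures enlarged by a factor of two
disc2 = Discriminator(img_size=64)    # A, B, C, D at the original size
disc3 = Discriminator(img_size=32)    # A, B, C, D reduced by a factor of two
```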
The image generator and the discriminators described above are trained separately. When training the image generator, in each iteration the output of the image generator is fed into the discriminators to compute a loss error, backpropagation is performed, and the network parameters of the image generator are updated with stochastic gradient descent until the training iteration limit is reached. When training the discriminators, in each iteration one image is taken from the training set and one from the output of the image generator; after scaling, both are fed into the discriminator at the same time to obtain a loss error, backpropagation is performed, and the network parameters of the discriminator are updated with stochastic gradient descent until the training iteration limit is reached.
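One alternating update might look like the sketch below. It is a simplification under assumptions: a single discriminator and scale are used instead of four, the loss error is taken to be a binary cross-entropy adversarial loss, and an L1 reconstruction term on A'/B' is added for illustration, since the description above specifies only a loss error, backpropagation and stochastic gradient descent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

bce = nn.BCELoss()

def train_step(gen, disc, opt_g, opt_d, img_a, img_b, i, n):
    """One alternating update of a discriminator and the generator (illustrative sketch)."""
    # --- Discriminator update: real training images vs. generator outputs ---
    with torch.no_grad():
        _, _, c_img, d_img = exchange_and_generate(gen, img_a, img_b, i, n)
    real, fake = torch.cat([img_a, img_b]), torch.cat([c_img, d_img])
    d_loss = bce(disc(real), torch.ones(real.size(0), 1)) + \
             bce(disc(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # --- Generator update: fool the discriminator and reconstruct the inputs ---
    a_rec, b_rec, c_img, d_img = exchange_and_generate(gen, img_a, img_b, i, n)
    g_adv = bce(disc(torch.cat([c_img, d_img])), torch.ones(2 * img_a.size(0), 1))
    g_rec = F.l1_loss(a_rec, img_a) + F.l1_loss(b_rec, img_b)   # reconstruction term (added assumption)
    g_loss = g_adv + g_rec
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Stochastic gradient descent as in the embodiment; the learning rate is a placeholder:
# opt_g = torch.optim.SGD(gen.parameters(), lr=1e-3)
# opt_d = torch.optim.SGD(disc0.parameters(), lr=1e-3)
```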
After the trained neural network is obtained, image local information transfer is tested with an image test set. The test process is similar to the training process of the image generator, as shown in Fig. 5. The specific steps are as follows: any image of category 1 and any image of category 2 in the test set are input into the generator, and after the generator runs, four images A', B', C, D are obtained, where C and D are respectively the results of exchanging and migrating the local features of the input pictures A and B.
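At test time the same exchange path is reused. The snippet below is only a usage sketch of the functions defined above: the random tensors stand in for one category-1 and one category-2 test image, and the attribute index and group count are placeholders.

```python
import torch

# Placeholders standing in for one category-1 and one category-2 image from the test set.
test_img_cat1 = torch.randn(1, 3, 64, 64)
test_img_cat2 = torch.randn(1, 3, 64, 64)

gen = Generator()   # in practice, loaded from the trained checkpoint
gen.eval()
with torch.no_grad():
    a_rec, b_rec, c_img, d_img = exchange_and_generate(gen, test_img_cat1, test_img_cat2, i=0, n=4)
# c_img and d_img correspond to C and D: the inputs with the selected local feature exchanged.
```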
In order to verify the performance of the invention, the present embodiment is tested on a public face attribute migration data set (celebA). The data is first split according to the celebA training/test partition: the training set contains 202,000 pictures used to train the model, and the test set contains 599 pictures. After all pictures in the training set have been passed through once, a model is obtained for testing. Fig. 6 and Fig. 7 are schematic diagrams of the test results of migrating face bangs and a beard, respectively, on the test set. The bangs and beard of a face belong to the local information migration of facial features. In each of Fig. 6 and Fig. 7, the two faces on the left are original face pictures taken from the celebA test set, and the two faces on the right are the outputs of the trained model of the present invention. It can be seen that in Fig. 6 the woman's bangs are successfully migrated onto the man's portrait, and in Fig. 7 the young man's moustache is successfully migrated onto the older man's portrait, while the facial features in the other regions of the portraits remain intact and undisturbed.
The above two pictures are the test results of a model trained for one pass over the training set; training for several more passes can yield better test results.
The preferred embodiments of the present invention have been described in detail above. It should be understood that those skilled in the art can make many modifications and variations according to the concept of the present invention without creative work. Therefore, any technical solution that those skilled in the art can obtain, on the basis of the prior art and under the concept of the present invention, through logical analysis, reasoning or limited experiments shall fall within the protection scope determined by the claims.

Claims (9)

1. An image local information transfer method based on GAN and Self-Attention, characterized by comprising the following steps:
1) obtaining an image migration task, including the picture pair to be migrated and the features to be migrated;
2) according to the image migration task, performing image local information transfer using a trained neural network and outputting the migrated image;
wherein the neural network comprises an image generator and a discriminator, the image generator comprising a convolutional network and a deconvolution network; the convolutional network convolves the pictures into a hidden feature space, part of the hidden features are exchanged in combination with a Self-Attention model, and the exchanged features are deconvolved into new pictures by the deconvolution network to realize migration of local image information features; and the discriminator is used to judge whether the generated pictures are real or fake.
2. The image local information transfer method based on GAN and Self-Attention according to claim 1, characterized in that, in the image generator, the connection between the convolutional network and the deconvolution network is established through a U-net structure.
3. The image local information transfer method based on GAN and Self-Attention according to claim 1, characterized in that the convolutional network includes an input layer, convolutional layers, non-linear normalization layers, network activation layers and Self-Attention layers.
4. The image local information transfer method based on GAN and Self-Attention according to claim 1, characterized in that the deconvolution network includes a hidden feature input layer, deconvolution layers, non-linear normalization layers, network activation layers, a Tanh activation layer and Self-Attention layers.
5. The image local information transfer method based on GAN and Self-Attention according to claim 1, characterized in that, when training the image generator, in each iteration the output of the image generator is fed into the discriminator to compute a loss error, backpropagation is performed, and the network parameters of the image generator are updated with stochastic gradient descent until the training iteration limit is reached.
6. The image local information transfer method based on GAN and Self-Attention according to claim 1, characterized in that, when training the discriminator, in each iteration one image is taken from the training set and one from the output of the image generator; after scaling, both are input into the discriminator at the same time to obtain a loss error, backpropagation is performed, and the network parameters of the discriminator are updated with stochastic gradient descent until the training iteration limit is reached.
7. The image local information transfer method based on GAN and Self-Attention according to claim 1, characterized in that multiple discriminators are provided.
8. The image local information transfer method based on GAN and Self-Attention according to claim 7, characterized in that four discriminators are provided, two of which respectively discriminate the generated original images and the migrated images, while the other two respectively discriminate whether the generated original images and migrated images are real or fake after multi-scale scaling.
9. The image local information transfer method based on GAN and Self-Attention according to claim 1, characterized in that the discriminator includes an input layer, convolutional layers, non-linear normalization layers, network activation layers, a fully connected layer and a binary classification activation layer.
CN201811368715.7A 2018-11-16 2018-11-16 A kind of image local information transfer method based on GAN and Self-Attention Pending CN109543742A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811368715.7A CN109543742A (en) 2018-11-16 2018-11-16 A kind of image local information transfer method based on GAN and Self-Attention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811368715.7A CN109543742A (en) 2018-11-16 2018-11-16 A kind of image local information transfer method based on GAN and Self-Attention

Publications (1)

Publication Number Publication Date
CN109543742A true CN109543742A (en) 2019-03-29

Family

ID=65847856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811368715.7A Pending CN109543742A (en) 2018-11-16 2018-11-16 A kind of image local information transfer method based on GAN and Self-Attention

Country Status (1)

Country Link
CN (1) CN109543742A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107577651A (en) * 2017-08-25 2018-01-12 上海交通大学 Chinese character style migratory system based on confrontation network
CN108647560A (en) * 2018-03-22 2018-10-12 中山大学 A kind of face transfer method of the holding expression information based on CNN
CN108564119A (en) * 2018-04-04 2018-09-21 华中科技大学 A kind of any attitude pedestrian Picture Generation Method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WUGUANGBIN1230: "SAGAN—Self-Attention Generative Adversarial Networks", 《HTTPS://BLOG.CSDN.NET/WUGUANGBIN1230/ARTICLE/DETAILS/83829879》 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033034B (en) * 2019-04-01 2023-09-12 深圳大学 Picture processing method and device for non-uniform texture and computer equipment
CN110033034A (en) * 2019-04-01 2019-07-19 深圳大学 A kind of image processing method, device and the computer equipment of non-homogeneous texture
CN110288537A (en) * 2019-05-20 2019-09-27 湖南大学 Facial image complementing method based on the depth production confrontation network from attention
WO2021097429A1 (en) * 2019-11-15 2021-05-20 Waymo Llc Multi object tracking using memory attention
CN111127309A (en) * 2019-12-12 2020-05-08 杭州格像科技有限公司 Portrait style transfer model training method, portrait style transfer method and device
CN111127309B (en) * 2019-12-12 2023-08-11 杭州格像科技有限公司 Portrait style migration model training method, portrait style migration method and device
CN111037365B (en) * 2019-12-26 2021-08-20 大连理工大学 Cutter state monitoring data set enhancing method based on generative countermeasure network
CN111037365A (en) * 2019-12-26 2020-04-21 大连理工大学 Cutter state monitoring data set enhancing method based on generative countermeasure network
CN111198964B (en) * 2020-01-10 2023-04-25 中国科学院自动化研究所 Image retrieval method and system
CN111198964A (en) * 2020-01-10 2020-05-26 中国科学院自动化研究所 Image retrieval method and system
CN112132133A (en) * 2020-06-16 2020-12-25 杭州中科睿鉴科技有限公司 Identification image data enhancement method and authenticity intelligent identification method
CN112132133B (en) * 2020-06-16 2023-11-17 中国科学院计算技术研究所数字经济产业研究院 Identification image data enhancement method and true-false intelligent identification method
CN112699726A (en) * 2020-11-11 2021-04-23 中国科学院计算技术研究所数字经济产业研究院 Image enhancement method, genuine-fake commodity identification method and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190329