CN107705242B - Image stylized migration method combining deep learning and depth perception - Google Patents
- Publication number
- CN107705242B CN107705242B CN201710596250.XA CN201710596250A CN107705242B CN 107705242 B CN107705242 B CN 107705242B CN 201710596250 A CN201710596250 A CN 201710596250A CN 107705242 B CN107705242 B CN 107705242B
- Authority
- CN
- China
- Prior art keywords
- image
- loss
- model
- depth
- style
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses an image stylized migration method combining deep learning and depth perception, which comprises the following steps: 1) preprocess the original image x through an image transformation network to generate a picture y*; input the generated picture y* and the original image x separately into the trained first model, obtain the feature map of each layer of the model, and compute the style and content loss values; 2) input the generated picture y* and the original image x separately into the trained second model, obtain the depth-of-field estimation map of the model output layer, and compute the depth-of-field loss value; 3) combine the content, style, and depth-of-field loss functions into a linear function and compute the total loss value between the generated image and the original image; 4) automatically adjust the model parameters through an optimization algorithm, pass the previously generated stylized image through the model again to produce a new stylized image, compute a new loss value, and update the model parameters until the model converges. The method preserves the depth-of-field information and sense of three-dimensional structure of the original image during style migration, so that different styles blend into the original image more naturally and stereoscopically, improving the quality of image stylization.
Description
Technical Field
The invention belongs to the field of image processing and deep learning, and particularly relates to an image style migration method capable of preserving the depth of field, object contours, and sense of distance of the original image.
Background
Deep neural networks are now widely used for various computer vision problems: low-level tasks such as image denoising and segmentation, mid-level tasks such as keypoint detection, and higher-level tasks such as object recognition. Beyond supervised tasks trained with large amounts of labeled data (such as scene classification), deep neural networks can also address many abstract problems for which no ground-truth training data exist. Image style migration is such a problem: it extracts the style of an artistic painting and applies that style to the content of a target image.
Neural Style Transfer (NST) separates and recombines the content and style of pictures with a convolutional neural network, so that the semantic content of a picture can be combined with different artistic styles. Although techniques for migrating the style of one image to another have existed for nearly 15 years, doing so with neural networks has emerged only recently.
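As a concrete illustration of the style representation used throughout this patent, the Gram matrix of a convolutional feature map can be sketched as follows. This is a minimal NumPy sketch, not the patent's torch7 implementation; the shapes and names are illustrative:

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a feature map with shape (C, H, W).

    Entry (i, j) is the inner product of channels i and j over all
    spatial positions, normalized by C*H*W; it captures which feature
    channels co-activate, i.e. the "style" of the layer.
    """
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)        # flatten the spatial dimensions
    return f @ f.T / (c * h * w)      # (C, C), symmetric

feat = np.random.rand(8, 16, 16)      # stand-in for a VGG16 feature map
g = gram_matrix(feat)
```

Because the spatial dimensions are summed out, the Gram matrix discards where features occur and keeps only how they correlate, which is why it works as a style descriptor.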
For example, in document 1 (A Neural Algorithm of Artistic Style), the researchers Gatys, Ecker, and Bethge introduced a method of iteratively implementing image style migration using a deep convolutional neural network (CNN). In the image style migration technique of document 1, the original content image and the style image are input directly into a trained convolutional neural network (VGG19) to extract the feature maps of specific network layers, and the style loss and content loss are computed; the total loss is then iteratively driven to a minimum to obtain the generated image.
For example, document 2 (Perceptual Losses for Real-Time Style Transfer and Super-Resolution) describes using perceptual loss functions to train a feedforward network for image transformation tasks. The trained feedforward network can solve the optimization problem posed by Gatys et al. in real time. Compared with optimization-based methods, the network gives similar results while being up to three orders of magnitude faster.
Disclosure of Invention
Aiming at the defects of the prior art, the invention adds a depth-of-field loss to the target loss function of style migration, in combination with a depth perception network (Hourglass3), to estimate the depth of field of the original image and of the generated style image. During style migration, the generated image not only fuses the corresponding style and content naturally but also preserves the near-far structural information of the original image. The effect is particularly notable for landscape paintings.
In order to achieve the above object, the proposed style migration method comprises the following steps:
(1) Input the generated picture y and the original picture x separately into a trained VGG16 model (http://cs.stanford.edu/people/jcjohns/fast-neural-style/models/vgg16.t7) and obtain the feature maps of each layer of the model. Passing an image through VGG16 yields a set of feature maps; the feature map of the VGG16 relu2_2 layer is compared with that of the original image to compute the target content loss value. The style of an image is represented by its Gram matrix, and the feature maps of the relu1_2, relu2_2, relu3_3, and relu4_3 layers of VGG16 are compared with those of the original image to compute the target style loss value.
(2) Input the generated picture y and the original picture x separately into a trained Hourglass3 model (https://vllab.eecs.umich.edu/data/nips2016/Hourglass3.tar.gz) and obtain the depth-of-field estimation map of the model output layer; then compare the depth-of-field estimation maps of y and x to obtain the depth-of-field loss value.
(3) Combine the content, style, and depth-of-field loss functions into a linear function and compute the total loss value between the generated image and the original image. Based on the total loss function, compute the descent gradient of the loss through an optimization algorithm and minimize the total loss of the stylized model.
(4) Repeat steps (1) to (3). Each training image is of size 256 × 256, the maximum number of training iterations is set to 40000, the optimization algorithm is L-BFGS, the learning rate is set to 1 × 10⁻³, and batch_size is set to 4.
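The content and style comparisons of step (1) above can be sketched as follows. This is a NumPy sketch with random arrays standing in for VGG16 feature maps; the function names and shapes are illustrative assumptions, not the patent's torch7 implementation:

```python
import numpy as np

def gram(feat):
    # Gram matrix of a (C, H, W) feature map, normalized by C*H*W
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def content_loss(feat_y, feat_x):
    # squared L2 distance between the relu2_2 feature maps,
    # normalized by the number of feature values
    return np.sum((feat_y - feat_x) ** 2) / feat_y.size

def style_loss(feats_y, feats_s):
    # sum of squared Frobenius distances between Gram matrices over
    # the chosen layers (relu1_2, relu2_2, relu3_3, relu4_3)
    return sum(np.sum((gram(a) - gram(b)) ** 2)
               for a, b in zip(feats_y, feats_s))

fy = np.random.rand(8, 16, 16)   # stand-in features of generated image
fx = np.random.rand(8, 16, 16)   # stand-in features of original image
lc = content_loss(fy, fx)
ls = style_loss([fy], [fx])
```

In the actual method these functions would be evaluated on real VGG16 activations, with the style targets taken from the style image rather than the original.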
The invention has the technical characteristics and beneficial effects that:
Based on the latest research results in image style migration, the torch7 deep learning framework is adopted to ensure the reliability of the style migration algorithm; to further improve the rendering of stereoscopic structure in landscape paintings, a depth-of-field loss computed with Hourglass3 is added to the target loss function of style migration. The iteratively converged model can therefore retain, to a certain extent, the depth-of-field information and sense of three-dimensional structure of the original image;
In conclusion, the technical characteristic of the method is that the depth-of-field information and sense of three-dimensional structure of the original image are retained during style migration, so that different styles blend into the original image more naturally and stereoscopically, improving the quality of image stylization.
Drawings
FIG. 1 is a view showing a structure of a model of the present invention
FIG. 2 is an algorithmic flow chart of the method of the present invention
FIG. 3 is a graph comparing stylized effects
FIG. 4 is a contrast diagram of the depth of field
Detailed Description
Example 1
The invention provides an image style migration method that preserves the near-far depth-of-field structure of the original image; the model adopts the torch7 deep learning framework, and the method comprises the following steps:
1) scale the input picture x to 1024 × 512 for convenience of computation;
2) setting parameters of an image transformation network fw as default values;
3) processing an input image x through an image transformation network fw to obtain an initial y;
4) input the generated picture y and the original picture x separately into the trained VGG16 model (http://cs.stanford.edu/people/jcjohns/fast-neural-style/models/vgg16.t7) and obtain the feature maps of each layer of the model;
5) compare the feature map of the VGG16 relu2_2 layer with that of the original image to compute the target content loss value:

$\ell_{content}^{\phi,i}(y,x)=\frac{1}{N_i(\phi)}\left\|\phi_i(y)-\phi_i(x)\right\|_2^2$

where $N_i(\phi)$ denotes the normalization factor of the $i$-th layer of the perceptual loss network $\phi$ (for a convolutional neural network, equal to the product of the layer's dimensions), and $\phi_i(y)$ and $\phi_i(x)$ denote the feature maps of the $i$-th convolution layer of $\phi$ for the input pictures y and x respectively;
6) compare the feature maps of the relu1_2, relu2_2, relu3_3, and relu4_3 layers of the VGG16 network with those of the original image to compute the target style loss value, the image style being represented by the Gram matrix:

$\ell_{style}^{\phi,i}(y,y_s)=\frac{1}{N_i(\phi)}\left\|G_i^{\phi}(y)-G_i^{\phi}(y_s)\right\|_F^2$

where $N_i(\phi)$ denotes the normalization factor of the $i$-th layer of the perceptual loss network $\phi$ (for a convolutional neural network, equal to the product of the layer's dimensions), and $G_i^{\phi}(y)$ and $G_i^{\phi}(y_s)$ denote the Gram-matrix representations of the $i$-th convolution-layer feature maps of the convolutional neural network $\phi$ for the input picture y and the style image $y_s$ respectively;

7) input the generated picture y and the original picture x separately into the trained Hourglass3 model (https://vllab.eecs.umich.edu/data/nips2016/Hourglass3.tar.gz) and obtain the depth-of-field estimation map of the model output layer;
8) compare the depth-of-field estimation maps of y and x to obtain the depth-of-field loss value;
9) combine the content, style, and depth-of-field loss functions into a linear function and compute the total loss value between the generated image and the original image.
Based on the total loss function, the stylized model is converged through the L-BFGS optimization algorithm. The style image finally obtained from the converged model retains the artistic style of the style image, the content of the content image, and the near-far depth-of-field structure information.
10) compute the descent gradient in the image transformation network fw according to the feedback of the loss value, and adjust the fw parameters to bring the total loss close to the minimum;
11) update the style Gram matrix of the original image, with the learning rate set to 1 × 10⁻³;
12) combine the style Gram matrix with the feature map of the original image content to obtain a new generated image;
13) repeat the previous steps for iterative training, with the number of iterations set to 40000;
14) after 40000 iterations of training, the total loss of the model essentially converges to the global minimum, and the generated style image preserves not only the depth of field and stereoscopic feel of the original image but also its details and structure.
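The iterative loop of steps 10)–14) can be sketched schematically as follows. This toy uses plain gradient descent on a quadratic surrogate loss purely to show the loop shape; the patent actually optimizes the transformation-network parameters fw with L-BFGS over the VGG16/Hourglass3 losses, and every variable here is an illustrative stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random(16)      # stand-in for the loss minimizer
params = np.zeros(16)        # stand-in for transformation-network weights
lr = 1e-3                    # learning rate from the patent

for step in range(40000):    # maximum iteration count from the patent
    grad = 2.0 * (params - target)    # gradient of the surrogate loss
    params -= lr * grad               # descend (L-BFGS in the patent)
    if np.sum((params - target) ** 2) < 1e-8:
        break                         # "model converges"
```

With this learning rate the surrogate loss shrinks geometrically, so the loop typically exits on the convergence test well before the 40000-iteration cap.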
Example 2
The image stylization migration method combining deep learning and depth perception in the embodiment comprises the following steps:
1) preprocess the image through an image transformation network to generate y; input the generated picture y and the original picture x separately into the trained VGG16 model, obtain the feature maps of each layer of the model, and compute the style and content loss values;
2) input the generated picture y and the original picture x separately into the trained Hourglass3 model, obtain the depth-of-field estimation map of the model output layer, and compute the depth-of-field loss value;
3) combine the content, style, and depth-of-field loss functions into a linear function and compute the total loss value between the generated image and the original image;
4) repeat steps 1) to 3) and converge the stylized model through the L-BFGS optimization algorithm. The style image finally obtained from the converged model retains the artistic style of the style image, the content of the content image, and the near-far depth-of-field structure information.
Preferably, the step 1) of obtaining the style and content features of the image from the VGG16 perceptual loss network comprises the following steps:
1) compare the feature map of the VGG16 relu2_2 layer with that of the original image to compute the target content loss value:

$\ell_{content}^{\phi,i}(y,x)=\frac{1}{N_i(\phi)}\left\|\phi_i(y)-\phi_i(x)\right\|_2^2$

where $N_i(\phi)$ denotes the normalization factor of the $i$-th layer of the perceptual loss network $\phi$ (for a convolutional neural network, equal to the product of the layer's dimensions), and $\phi_i(y)$ and $\phi_i(x)$ denote the feature maps of the $i$-th convolution layer of $\phi$ for the input pictures y and x respectively;
2) compare the feature maps of the relu1_2, relu2_2, relu3_3, and relu4_3 layers of the VGG16 network with those of the original image to compute the target style loss value, the image style being represented by the Gram matrix:

$\ell_{style}^{\phi,i}(y,y_s)=\frac{1}{N_i(\phi)}\left\|G_i^{\phi}(y)-G_i^{\phi}(y_s)\right\|_F^2$

where $N_i(\phi)$ denotes the normalization factor of the $i$-th layer of the perceptual loss network $\phi$ (for a convolutional neural network, equal to the product of the layer's dimensions), and $G_i^{\phi}(y)$ and $G_i^{\phi}(y_s)$ denote the Gram-matrix representations of the $i$-th convolution-layer feature maps of the convolutional neural network $\phi$ for the input picture y and the style image $y_s$ respectively;
Preferably, the step 2) uses the depth perception network to compute the depth-of-field loss value between the style-generated image and the original image, characterized in that the Hourglass3 model trained by Weifeng Chen et al. at the University of Michigan is introduced, and a depth-of-field loss function is defined to compute the depth-of-field loss between the input image x and the output image of the style transfer model. Ideally, the output image should have the same depth feature values as the input image x. In particular, the depth-of-field loss function can be defined in the same form as the content loss function;
Preferably, the step 3) combines the three-part loss function of content, style, and depth of field into a linear function and computes the total loss value between the generated image and the original image; a linear function is defined to combine the loss values, and the proportions of style, content, and depth-of-field structure can be adjusted by changing the weights;
Preferably, the step 4) comprises:
(1) compute the descent gradient in the image transformation network fw according to the feedback of the loss value, and adjust the fw parameters to bring the total loss close to the minimum;
(2) update the style Gram matrix of the original image, with the learning rate set to 1 × 10⁻³;
(3) combine the style Gram matrix with the feature map of the original image content to obtain a new generated image;
(4) after 40000 iterations of training, the total loss of the model essentially converges to the global minimum, and the generated style image preserves not only the depth of field and stereoscopic feel of the original image but also its details and structure.
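The depth-of-field loss (which, per the above, takes the same form as the content loss) and the weighted linear combination of step 3) can be sketched together. This is a NumPy sketch with random arrays standing in for Hourglass3 depth maps; the λ weight values are illustrative placeholders, since the patent leaves them as tunable parameters:

```python
import numpy as np

def depth_loss(depth_y, depth_x):
    # same squared-difference form as the content loss, but applied
    # to the depth estimation maps output by the depth network
    assert depth_y.shape == depth_x.shape
    return np.sum((depth_y - depth_x) ** 2) / depth_y.size

def total_loss(l_content, l_style, l_depth,
               lambda1=1.0, lambda2=5.0, lambda3=1.0):
    # linear combination; changing the lambdas adjusts the proportions
    # of content, style, and depth-of-field structure
    return lambda1 * l_content + lambda2 * l_style + lambda3 * l_depth

dy = np.random.rand(64, 64)   # stand-in depth map, generated image
dx = np.random.rand(64, 64)   # stand-in depth map, original image
ld = depth_loss(dy, dx)
lt = total_loss(0.5, 0.2, ld)
```

Setting lambda3 to zero recovers an ordinary perceptual style-transfer objective, which is one way to see the depth term as a pluggable extension.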
Claims (1)
1. An image stylized migration method combining deep learning and depth perception is characterized by comprising the following steps:
1) preprocess the original image x through an image transformation network to generate a picture y*; input the generated picture y* and the original image x into the trained VGG16 model and compute the style and content loss values;
2) input the generated picture y* and the original image x separately into the trained Hourglass3 model, obtain the depth-of-field estimation map of the model output layer, and compute the depth-of-field loss value;
3) calculating a total loss value according to the loss values of the content, the style and the depth of field;
4) automatically adjust the model parameters through an optimization algorithm, pass the previously generated stylized image through the model again to produce a new stylized image, compute a new loss value, and update the model parameters until the model converges;
the step 1) of obtaining feature maps of model layers in a trained VGG16 model and calculating loss values of styles and contents specifically comprises the following steps:
1) first, compare the feature maps output by the trained VGG16 model with those of the original image to compute the target content loss value:

$\ell_{content}^{\phi}(y,x)=\sum_{i=1}^{I_\phi}\frac{1}{N_i(\phi)}\left\|\phi_i(y)-\phi_i(x)\right\|_2^2$

where $N_i(\phi)$ denotes the normalized features of the $i$-th layer of the perceptual loss network $\phi$, $\phi_i(y)$ and $\phi_i(x)$ denote the feature maps of the $i$-th convolution layer of $\phi$ for the input pictures y and x respectively, and $I_\phi$ is the total number of layers of the $\phi$ network;
2) then compare the feature maps output by the trained VGG16 model with those of the original image to compute the target style loss value:

$\ell_{style}^{\phi}(y,y_s)=\sum_{i=1}^{I_\phi}\frac{1}{N_i(\phi)}\left\|G_i^{\phi}(y)-G_i^{\phi}(y_s)\right\|_F^2$

where $N_i(\phi)$ denotes the normalized features of the $i$-th layer of the perceptual loss network $\phi$, and $I_\phi$ is the total number of layers of the $\phi$ network; the image style is represented by the Gram matrix: $G_i^{\phi}(y)$ and $G_i^{\phi}(y_s)$ denote the Gram-matrix representations of the $i$-th convolution-layer feature maps of the convolutional neural network $\phi$ for the input picture y and the style image $y_s$ respectively;
The step 2) specifically comprises computing the depth-of-field loss value according to the following formula:

$\ell_{depth}^{\delta}(y,x)=\sum_{i=1}^{I_\delta}\frac{1}{N_i(\delta)}\left\|\delta_i(y)-\delta_i(x)\right\|_2^2$

where $N_i(\delta)$ denotes the normalized features of the $i$-th layer of the depth perception loss network $\delta$, $\delta_i(y)$ and $\delta_i(x)$ denote the feature maps of the $i$-th layer of $\delta$ for the input pictures y and x respectively, and $I_\delta$ is the total number of layers of the $\delta$ network;
The step 3) of computing the total loss value between the generated image and the original image comprises:

$L_{total}=\lambda_1\,\ell_{content}+\lambda_2\,\ell_{style}+\lambda_3\,\ell_{depth}$

where $\lambda_1$, $\lambda_2$, $\lambda_3$ denote the weights of the content loss, style loss, and depth-of-field loss respectively;
the step 4) specifically comprises the following steps:
(1) from the total loss value computed in steps 1) to 3), compute the descent gradient dω in the image transformation network fw by the L-BFGS optimization method, adjust the fw parameters to minimize the total loss, and set the learning rate to 1 × 10⁻³;
(2) combine the style Gram matrix with the feature map of the original image content by averaging to obtain a new generated image; pass the generated image through the perceptual loss network φ and the depth perception network δ again to obtain new loss values;
(3) compute the descent gradient from the new loss values and continue adjusting the fw parameters;
(4) repeat the above steps for 40000 iterations of training, so that the total loss of the model essentially converges to the global minimum.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710596250.XA CN107705242B (en) | 2017-07-20 | 2017-07-20 | Image stylized migration method combining deep learning and depth perception |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710596250.XA CN107705242B (en) | 2017-07-20 | 2017-07-20 | Image stylized migration method combining deep learning and depth perception |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107705242A CN107705242A (en) | 2018-02-16 |
CN107705242B true CN107705242B (en) | 2021-12-17 |
Family
ID=61170732
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710596250.XA Active CN107705242B (en) | 2017-07-20 | 2017-07-20 | Image stylized migration method combining deep learning and depth perception |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107705242B (en) |
Families Citing this family (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108470320B (en) * | 2018-02-24 | 2022-05-20 | 中山大学 | Image stylization method and system based on CNN |
CN110232392B (en) * | 2018-03-05 | 2021-08-17 | 北京大学 | Visual optimization method, optimization system, computer device and readable storage medium |
CN108537776A (en) * | 2018-03-12 | 2018-09-14 | 维沃移动通信有限公司 | A kind of image Style Transfer model generating method and mobile terminal |
CN108596830B (en) * | 2018-04-28 | 2022-04-22 | 国信优易数据股份有限公司 | Image style migration model training method and image style migration method |
CN108846793B (en) * | 2018-05-25 | 2022-04-22 | 深圳市商汤科技有限公司 | Image processing method and terminal equipment based on image style conversion model |
CN110166759B (en) * | 2018-05-28 | 2021-10-15 | 腾讯科技(深圳)有限公司 | Image processing method and device, storage medium and electronic device |
CN108765278B (en) * | 2018-06-05 | 2023-04-07 | Oppo广东移动通信有限公司 | Image processing method, mobile terminal and computer readable storage medium |
CN108769644B (en) * | 2018-06-06 | 2020-09-29 | 浙江大学 | Binocular animation stylized rendering method based on deep learning |
CN108924528B (en) * | 2018-06-06 | 2020-07-28 | 浙江大学 | Binocular stylized real-time rendering method based on deep learning |
CN108961349A (en) * | 2018-06-29 | 2018-12-07 | 广东工业大学 | A kind of generation method, device, equipment and the storage medium of stylization image |
CN109064428B (en) * | 2018-08-01 | 2021-04-13 | Oppo广东移动通信有限公司 | Image denoising processing method, terminal device and computer readable storage medium |
CN109191444A (en) * | 2018-08-29 | 2019-01-11 | 广东工业大学 | Video area based on depth residual error network removes altering detecting method and device |
CN110895795A (en) * | 2018-09-13 | 2020-03-20 | 北京工商大学 | Improved semantic image inpainting model method |
CN110706151B (en) * | 2018-09-13 | 2023-08-08 | 南京大学 | Video-oriented non-uniform style migration method |
CN110660018B (en) * | 2018-09-13 | 2023-10-17 | 南京大学 | Image-oriented non-uniform style migration method |
CN109345446B (en) * | 2018-09-18 | 2022-12-02 | 西华大学 | Image style transfer algorithm based on dual learning |
CN109447137B (en) * | 2018-10-15 | 2022-06-14 | 聚时科技(上海)有限公司 | Image local style migration method based on decomposition factors |
CN111260548B (en) * | 2018-11-30 | 2023-07-21 | 浙江宇视科技有限公司 | Mapping method and device based on deep learning |
CN109447936A (en) * | 2018-12-21 | 2019-03-08 | 江苏师范大学 | A kind of infrared and visible light image fusion method |
CN111383165B (en) * | 2018-12-29 | 2024-04-16 | Tcl科技集团股份有限公司 | Image processing method, system and storage medium |
CN109949214A (en) * | 2019-03-26 | 2019-06-28 | 湖北工业大学 | A kind of image Style Transfer method and system |
CN110084741B (en) * | 2019-04-26 | 2024-06-14 | 衡阳师范学院 | Image wind channel migration method based on saliency detection and depth convolution neural network |
CN110210347B (en) * | 2019-05-21 | 2021-03-23 | 赵森 | Intelligent color jacket paper-cut design method based on deep learning |
CN110335206B (en) * | 2019-05-31 | 2023-06-09 | 平安科技(深圳)有限公司 | Intelligent filter method, device and computer readable storage medium |
CN110310221B (en) * | 2019-06-14 | 2022-09-20 | 大连理工大学 | Multi-domain image style migration method based on generation countermeasure network |
CN110458906B (en) * | 2019-06-26 | 2024-03-15 | 广州大鱼创福科技有限公司 | Medical image coloring method based on depth color migration |
CN110490791B (en) * | 2019-07-10 | 2022-10-18 | 西安理工大学 | Clothing image artistic generation method based on deep learning style migration |
TWI730467B (en) * | 2019-10-22 | 2021-06-11 | 財團法人工業技術研究院 | Method of transforming image and network for transforming image |
CN110930295B (en) * | 2019-10-25 | 2023-12-26 | 广东开放大学(广东理工职业学院) | Image style migration method, system, device and storage medium |
CN111127309B (en) * | 2019-12-12 | 2023-08-11 | 杭州格像科技有限公司 | Portrait style migration model training method, portrait style migration method and device |
CN111325681B (en) * | 2020-01-20 | 2022-10-11 | 南京邮电大学 | Image style migration method combining meta-learning mechanism and feature fusion |
CN111353964B (en) * | 2020-02-26 | 2022-07-08 | 福州大学 | Structure-consistent stereo image style migration method based on convolutional neural network |
CN111476708B (en) * | 2020-04-03 | 2023-07-14 | 广州市百果园信息技术有限公司 | Model generation method, model acquisition method, device, equipment and storage medium |
CN111950608B (en) * | 2020-06-12 | 2021-05-04 | 中国科学院大学 | Domain self-adaptive object detection method based on contrast loss |
CN111768438B (en) * | 2020-07-30 | 2023-11-24 | 腾讯科技(深圳)有限公司 | Image processing method, device, equipment and computer readable storage medium |
CN112508815A (en) * | 2020-12-09 | 2021-03-16 | 中国科学院深圳先进技术研究院 | Model training method and device, electronic equipment and machine-readable storage medium |
CN113240576B (en) * | 2021-05-12 | 2024-04-30 | 北京达佳互联信息技术有限公司 | Training method and device for style migration model, electronic equipment and storage medium |
CN114820908B (en) * | 2022-06-24 | 2022-11-01 | 北京百度网讯科技有限公司 | Virtual image generation method and device, electronic equipment and storage medium |
CN114792355B (en) * | 2022-06-24 | 2023-02-24 | 北京百度网讯科技有限公司 | Virtual image generation method and device, electronic equipment and storage medium |
CN115249221A (en) * | 2022-09-23 | 2022-10-28 | 阿里巴巴(中国)有限公司 | Image processing method and device and cloud equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106228198A (en) * | 2016-08-17 | 2016-12-14 | 广东工业大学 | A kind of super-resolution recognition methods of medical treatment CT image |
CN106709532A (en) * | 2017-01-25 | 2017-05-24 | 京东方科技集团股份有限公司 | Image processing method and device |
CN106780367A (en) * | 2016-11-28 | 2017-05-31 | 上海大学 | HDR photo style transfer methods based on dictionary learning |
CN106886975A (en) * | 2016-11-29 | 2017-06-23 | 华南理工大学 | It is a kind of can real time execution image stylizing method |
CN106952224A (en) * | 2017-03-30 | 2017-07-14 | 电子科技大学 | A kind of image style transfer method based on convolutional neural networks |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090310189A1 (en) * | 2008-06-11 | 2009-12-17 | Gallagher Andrew C | Determining the orientation of scanned hardcopy medium |
US9594977B2 (en) * | 2015-06-10 | 2017-03-14 | Adobe Systems Incorporated | Automatically selecting example stylized images for image stylization operations based on semantic content |
-
2017
- 2017-07-20 CN CN201710596250.XA patent/CN107705242B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106228198A (en) * | 2016-08-17 | 2016-12-14 | 广东工业大学 | A kind of super-resolution recognition methods of medical treatment CT image |
CN106780367A (en) * | 2016-11-28 | 2017-05-31 | 上海大学 | HDR photo style transfer methods based on dictionary learning |
CN106886975A (en) * | 2016-11-29 | 2017-06-23 | 华南理工大学 | It is a kind of can real time execution image stylizing method |
CN106709532A (en) * | 2017-01-25 | 2017-05-24 | 京东方科技集团股份有限公司 | Image processing method and device |
CN106952224A (en) * | 2017-03-30 | 2017-07-14 | 电子科技大学 | A kind of image style transfer method based on convolutional neural networks |
Non-Patent Citations (3)
Title |
---|
Perceptual Losses for Real-Time Style Transfer and Super-Resolution;Justin Johnson;《Computer Vision ECCV 2016: 14th European Conference, Amsterdam, The Netherlands》;20161030;694-711 * |
Image style conversion based on deep learning (基于深度学习的图像风格转换);欠扁的小篮子;《博客园: https://www.cnblogs.com/z941030/p/7056814.html》;20170621;1-7 *
Research on information flow control based on multi-Agent *** in e-commerce *** (电子商务***中基于多Agent***的信息流控制研究);刘怡俊;《China Masters' Theses Full-text Database (Information Science and Technology)》;20020215;I139-198 *
Also Published As
Publication number | Publication date |
---|---|
CN107705242A (en) | 2018-02-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107705242B (en) | Image stylized migration method combining deep learning and depth perception | |
CN110969250B (en) | Neural network training method and device | |
CN106548208B (en) | A kind of quick, intelligent stylizing method of photograph image | |
JP6961727B2 (en) | Generate a copy of interest | |
CN110634170B (en) | Photo-level image generation method based on semantic content and rapid image retrieval | |
CN111325851A (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
CN109345446B (en) | Image style transfer algorithm based on dual learning | |
CN108647723B (en) | Image classification method based on deep learning network | |
CN109410251B (en) | Target tracking method based on dense connection convolution network | |
CN107369147B (en) | Image fusion method based on self-supervision learning | |
CN111695494A (en) | Three-dimensional point cloud data classification method based on multi-view convolution pooling | |
CN113255813A (en) | Multi-style image generation method based on feature fusion | |
CN112307714A (en) | Character style migration method based on double-stage deep network | |
CN111127309A (en) | Portrait style transfer model training method, portrait style transfer method and device | |
CN112686282A (en) | Target detection method based on self-learning data | |
CN108596865B (en) | Feature map enhancement system and method for convolutional neural network | |
CN110288667B (en) | Image texture migration method based on structure guidance | |
US20220398697A1 (en) | Score-based generative modeling in latent space | |
Cui et al. | PortraitNET: Photo-realistic portrait cartoon style transfer with self-supervised semantic supervision | |
Futschik et al. | Real-Time Patch-Based Stylization of Portraits Using Generative Adversarial Network. | |
CN112541856B (en) | Medical image style migration method combining Markov field and Graham matrix characteristics | |
CN111667401B (en) | Multi-level gradient image style migration method and system | |
CN113869503B (en) | Data processing method and storage medium based on depth matrix decomposition completion | |
CN114140667A (en) | Small sample rapid style migration method based on deep convolutional neural network | |
CN113112397A (en) | Image style migration method based on style and content decoupling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |