CN114549332A - Convolutional neural network skin type processing method and device based on human body analysis prior support - Google Patents

Convolutional neural network skin type processing method and device based on human body analysis prior support

Info

Publication number
CN114549332A
Authority
CN
China
Prior art keywords
skin
model
human body
human
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011339189.9A
Other languages
Chinese (zh)
Inventor
郑进 (Zheng Jin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Huoshaoyun Technology Co ltd
Original Assignee
Hangzhou Huoshaoyun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Huoshaoyun Technology Co ltd filed Critical Hangzhou Huoshaoyun Technology Co ltd
Priority to CN202011339189.9A
Publication of CN114549332A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a convolutional neural network skin processing method based on human body analysis prior support, which comprises the following steps: A. constructing a human skin analysis model to obtain a human body analysis map; B. constructing a convolutional neural network model for portrait skin processing; C. introducing human skin analysis information and constructing a human skin analysis information encoding model to guide the portrait skin processing model to process different semantic regions of the portrait differently. The invention also discloses a convolutional neural network skin processing device based on human body analysis prior support. The invention uses a convolutional neural network to model the complex pixel mapping between an original portrait and the portrait produced by a retoucher, introduces human body analysis information as prior information for the convolutional neural network, and guides the model to process each part of the human body accordingly, in particular distinguishing the processing of facial skin and body skin, so that different regions receive different treatment and the overall result is better.

Description

Convolutional neural network skin type processing method and device based on human body analysis prior support
Technical Field
The invention belongs to the field of image processing and relates in particular to human skin processing for digital single-lens-reflex (DSLR) imaging. It is applied to the skin processing of portraits in photos produced by devices such as mobile phones and DSLR cameras, and specifically relates to a convolutional neural network skin processing method and device based on human body analysis prior support.
Background
Portrait skin processing has become a very common image-processing need in daily life. In recent years all kinds of skin-smoothing software have appeared one after another, but in scenarios that require commercial-grade portrait retouching, such as wedding photography and advertising posters, professional retouchers still have to work manually with software such as PhotoShop. According to our investigation, a retoucher needs about 30 minutes on average to process a single portrait photo of roughly twenty million pixels taken with a professional camera; this low efficiency makes it very important to develop computer vision technology that approaches the retoucher's effect. Current portrait skin processing technology mainly includes:
1) Filtering techniques such as Gaussian blur and bilateral filtering are used to achieve a skin-smoothing effect. The smoothing effect is obvious and execution is efficient, but the processed image always looks blurred, the overall texture of the skin is clearly reduced, and the skin appears flat and lacks three-dimensionality. In addition, this method cannot act selectively on a designated area of the image, so the background outside the portrait is also processed and the sharpness of the whole image is greatly reduced;
2) Image inpainting techniques are used to treat dark blemishes such as spots and acne on the face. This method first has to locate the blemish areas on the skin and then fill them in from the surrounding pixels by inpainting. Although it can preserve the sharpness of the whole image, the repaired local pixel regions still lose skin texture to varying degrees. A minimal sketch of both approaches is given below.
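For concreteness, the two conventional approaches can be sketched with OpenCV as follows; the function choices, parameter values and the separate blemish-detection step are illustrative assumptions rather than details taken from this patent.

```python
# Illustrative sketch of the two conventional approaches described above.
# Parameter values and the blemish-detection step are assumptions, not
# taken from the patent.
import cv2
import numpy as np

def smooth_skin_by_filtering(image_bgr: np.ndarray) -> np.ndarray:
    """Approach 1): whole-image skin smoothing with bilateral filtering.

    Because it acts on the entire frame, the background is smoothed as
    well and overall sharpness drops, which is the drawback noted above.
    """
    return cv2.bilateralFilter(image_bgr, d=9, sigmaColor=75, sigmaSpace=75)

def repair_blemishes_by_inpainting(image_bgr: np.ndarray,
                                   blemish_mask: np.ndarray) -> np.ndarray:
    """Approach 2): fill detected blemish pixels from their surroundings.

    `blemish_mask` is a binary uint8 mask marking spots or acne; it has to
    be produced by a separate detection step, and the filled regions tend
    to lose skin texture.
    """
    return cv2.inpaint(image_bgr, blemish_mask, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)
```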
In principle, such methods can hardly handle the many complicated skin problems in a compatible way while keeping a natural, refined, commercial-grade portrait result. In addition, both methods must rely on facial-landmark localization or skin-recognition techniques to avoid wrongly processing non-skin regions. More troublesome still, when a retoucher processes a portrait in a real scenario, the processing style and effect differ greatly across body regions: the face emphasizes preserving skin texture, while body skin emphasizes overall cleanliness.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a convolutional neural network skin processing method and device based on human body analysis prior support, which introduce human body analysis information as prior information for a convolutional neural network and guide the model to process different parts of the human body accordingly, so that the processing result is more realistic and clear.
The technical scheme adopted by the invention to solve the technical problem is as follows: a convolutional neural network skin processing method based on human body analysis prior support comprises the following steps:
A. constructing a human skin analytical model to obtain a human analytical graph;
B. constructing a convolutional neural network model for human image skin treatment;
C. introducing human skin analysis information and constructing a human skin analysis information encoding model to guide the portrait skin processing model constructed in step B to process different semantic regions of the portrait differently.
Further, in step A, a human skin analysis model is constructed based on the CIHP human parsing dataset and the DeepLab-V3 structure.
Further, in step B, a convolutional neural network model for portrait skin processing is constructed based on the UNet structure.
Further, a multi-scale dense convolution module is embedded between each down-sampling and up-sampling stage in step B, in order to enlarge the receptive field of the corresponding layer and make larger-area skin imperfections easier to treat.
Further, in step C, a human body skin analysis information encoding model is constructed based on HRNet.
Furthermore, when human skin analysis information is introduced in step C to guide the model to process different portrait regions differently, human body analysis is introduced as prior information for the constructed convolutional neural network model, guiding the model to identify at least the three regions that must be distinguished: face, body and background.
Further, the method comprises the following steps:
S1. constructing a human skin analysis model based on the CIHP human parsing dataset and the DeepLab-V3 structure;
S2. constructing a training sample set D = {(x_i, y_i, m_i) | x_i ∈ X^N, y_i ∈ Y^N, i = 1, 2, ..., N}, where x_i is an original portrait, y_i is the corresponding retouched image produced manually by a professional retoucher, and m_i is the human body analysis map corresponding to x_i, obtained by inference with the human skin analysis model constructed in S1;
S3. constructing a convolutional neural network model for portrait skin processing, whose inputs are x_i and m_i and whose output is the predicted retouched image ŷ_i;
S4. constructing the model loss as given in equation (1) (shown only as an image in the original publication), where
φ(y_i) = Σ_{layer ∈ {relu2_2, relu3_3, relu4_3}} w_layer · VGG_layer(y_i)   (2)
i.e. φ in equation (2) denotes the weighted outputs of the relu2_2, relu3_3 and relu4_3 layers of a pre-trained VGG-16 model;
S5. training the convolutional neural network model.
Preferably, in step S5 an Adam optimizer is used and the learning rate is set to 1e-4.
The invention also discloses a convolutional neural network skin processing device based on human body analysis prior support, which comprises:
the human skin analysis model, constructed based on the CIHP human parsing dataset and the DeepLab-V3 structure, for obtaining a human skin analysis map;
the convolutional neural network model, constructed based on a UNet structure, for processing the portrait skin;
the human body analysis information guidance model, constructed based on HRNet, for introducing human body skin analysis information and building a human body skin analysis information encoding model that guides the portrait skin processing model to process different semantic regions of the portrait differently.
The invention models all kinds of complex skin imperfections through the convolutional neural network. Thanks to its high-order nonlinear mapping, the network can treat different imperfections in a targeted way according to their differences in color and texture, without tediously designing a separate localization and processing method for each type of imperfection.
The invention introduces a human skin analysis information guidance model so that different portrait regions receive different, targeted processing. Experiments show that when portrait skin is processed with a plain convolutional neural network, 1) the facial skin in the output retouched image is influenced by the body-skin effect, so some facial shadows and textures are smoothed away and the face loses a certain amount of three-dimensionality; 2) unpredictable processing also occurs in background regions whose color or texture is close to skin, for example wood grain is removed from stools and rough pink background walls are smoothed. To solve this problem, the method introduces human body analysis as prior information into the constructed convolutional neural network model, guiding the model to better distinguish the three regions that must be treated differently: face, body and background.
The beneficial effects of the invention are as follows: the invention uses a convolutional neural network to model the complex pixel mapping between the original portrait and the portrait processed by a retoucher, introduces human body analysis information as prior information for the convolutional neural network, and guides the model to process each part of the human body accordingly, in particular distinguishing the processing of facial and body skin, so that different regions receive different treatment and the overall result is better.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a network structure diagram of the method of the present invention.
FIG. 3 is a comparison of effects before and after processing by the method of the present invention.
FIG. 4 is a human body analysis map constructed by the method of the present invention.
Fig. 5-1 is an unprocessed real photograph artwork.
Fig. 5-2 is the image obtained after the original is processed by the filtering-based skin-smoothing method.
Fig. 5-3 is the image obtained after the original is processed by the blemish-repair (inpainting) method.
Fig. 5-4 is the image obtained after the original is processed by the method of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
A convolution neural network skin type processing method based on human body analysis prior support comprises the following steps:
A. constructing a human skin analytical model to obtain a human skin analytical graph;
specifically, a human skin analysis model is constructed based on the CIHP human parsing dataset and the DeepLab-V3 structure to obtain a human skin analysis map;
human body analysis (human parsing) aims to parse the people in an image into a number of semantically consistent regions, such as the various parts of the body and the items of clothing worn on it;
the CIHP dataset is an important public dataset for human parsing; it contains more than 30,000 annotated human samples and defines 20 semantic categories for the human body (background, hat, hair, gloves, sunglasses, upper clothes, dress, coat, socks, pants, torso skin, scarf, skirt, face, left/right arm, left/right leg, left/right shoe);
DeepLab-V3 designs cascaded and parallel atrous (dilated) convolution modules in order to segment objects at multiple scales, capturing multi-scale context by adopting several different atrous rates; it also proposes the Atrous Spatial Pyramid Pooling (ASPP) module, which probes convolutional features at multiple scales and encodes image-level features carrying global context, improving the parsing effect. A compact sketch of the ASPP idea is given below;
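For illustration, the ASPP idea just described (parallel atrous convolutions at several rates plus an image-level pooling branch, concatenated and projected) can be sketched in PyTorch as follows; the channel widths and atrous rates are assumptions, not values disclosed in this patent.

```python
# Illustrative ASPP block in the spirit of DeepLab-V3; channel counts and
# atrous rates are assumptions, not values taken from the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_ch: int = 256, out_ch: int = 256, rates=(6, 12, 18)):
        super().__init__()
        # One 1x1 branch plus one 3x3 atrous branch per rate.
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1, bias=False)] +
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
             for r in rates]
        )
        # Image-level branch encoding global context.
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
        )
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [b(x) for b in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))
```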
B. constructing a convolutional neural network model for portrait skin processing;
specifically, a convolutional neural network model for portrait skin processing is constructed based on the UNet structure (U-Net: Convolutional Networks for Biomedical Image Segmentation);
the basic UNet structure is optimized as follows:
as shown in fig. 2, a multi-scale dense convolution module is embedded between each down-sampling and up-sampling stage. The module combines the highly effective feature reuse encouraged by densely connected networks (Densely Connected Convolutional Networks, 2017) with the ability of the ASPP module to mine multi-scale features: one module is embedded at the original resolution, two modules after the first down-sampling, three modules after the second down-sampling, and one module in the up-sampling part. Experimental results show that injecting this module enlarges the receptive field of the whole network and of single layers at different scales, making larger skin areas, such as large moles or extensive scars, easier to treat. One possible form of the module is sketched below.
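The patent does not disclose the exact layer configuration of the multi-scale dense convolution module, so the following PyTorch sketch is only one plausible reading that combines DenseNet-style feature reuse with ASPP-style parallel dilated convolutions; all layer sizes and dilation rates are assumptions.

```python
# One possible form of the multi-scale dense convolution module: each step
# runs parallel dilated convolutions and densely concatenates its output
# with all previous features. Layer sizes and dilation rates are assumptions.
import torch
import torch.nn as nn

class MultiScaleDenseBlock(nn.Module):
    def __init__(self, channels: int, growth: int = 32, steps: int = 3,
                 dilations=(1, 2, 4)):
        super().__init__()
        self.steps = nn.ModuleList()
        ch = channels
        for _ in range(steps):
            # Parallel dilated 3x3 convolutions over the dense feature stack.
            self.steps.append(nn.ModuleList([
                nn.Sequential(
                    nn.Conv2d(ch, growth, 3, padding=d, dilation=d, bias=False),
                    nn.ReLU(inplace=True),
                ) for d in dilations
            ]))
            ch += growth * len(dilations)
        # Project the dense stack back to the block's input width so the
        # module can be dropped between UNet down-/up-sampling stages.
        self.fuse = nn.Conv2d(ch, channels, 1)

    def forward(self, x):
        features = x
        for branches in self.steps:
            out = torch.cat([b(features) for b in branches], dim=1)
            features = torch.cat([features, out], dim=1)  # dense feature reuse
        return self.fuse(features)
```

Because the block's output width matches its input width, several such blocks can be stacked between a down-sampling and the following up-sampling stage of an otherwise standard UNet, which is how the text above describes their placement.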
C. introducing human body skin analysis information, constructing a human body analysis information encoding model based on HRNet, and guiding the portrait skin processing model constructed in step B to process different semantic regions of the portrait differently;
when the human body analysis information is introduced to guide the model to process different portrait regions differently, human body analysis is introduced as prior information for the constructed convolutional neural network model, guiding the model to identify at least the three regions that must be distinguished: face, body and background;
we found that directly concatenating the semantic analysis map with the original image at the network input makes it difficult for the image edges and the analysis-map edges to blend naturally. Therefore, based on HRNet (Deep High-Resolution Representation Learning for Human Pose Estimation, 2019, a model originally proposed for human pose estimation), a separate information encoding network is constructed to remove or soften the "essential isolation" between the analysis map and the real image. The original HRNet is not changed much, but many experiments were carried out to determine how to use it in our application scenario, specifically into which layer of the main network constructed in step B the encoded information should be injected; in the end the injection point was chosen to be the first up-sampling after the main network has completed all of its down-sampling, as shown in fig. 2. A sketch of this injection scheme is given below.
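The following PyTorch sketch illustrates one way such an injection could be wired: a separate network encodes the analysis map, and its features are concatenated into the main network at the first up-sampling after the bottleneck. The small ParsingEncoder is only a stand-in for HRNet, and all channel widths are assumptions.

```python
# Sketch of the injection scheme described above: a separate network encodes
# the human body analysis map, and its features are concatenated into the
# main network at the first up-sampling after all down-sampling is done.
# ParsingEncoder is a stand-in for HRNet; channel widths are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParsingEncoder(nn.Module):
    """Placeholder for the HRNet-based parsing-information encoding network."""
    def __init__(self, out_ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, parsing_map: torch.Tensor) -> torch.Tensor:
        # parsing_map: (B, 1, H, W) label map (e.g. face / body / background)
        return self.net(parsing_map)

class FirstUpsampleWithInjection(nn.Module):
    """First decoder step of the main network, fused with parsing features."""
    def __init__(self, bottleneck_ch: int = 256, parsing_ch: int = 64,
                 out_ch: int = 128):
        super().__init__()
        self.conv = nn.Conv2d(bottleneck_ch + parsing_ch, out_ch, 3, padding=1)

    def forward(self, bottleneck, parsing_feat):
        up = F.interpolate(bottleneck, scale_factor=2, mode="bilinear",
                           align_corners=False)
        parsing_feat = F.interpolate(parsing_feat, size=up.shape[-2:],
                                     mode="bilinear", align_corners=False)
        return self.conv(torch.cat([up, parsing_feat], dim=1))
```

In this reading the parsing features are fused only once, where the decoder starts up-sampling, which leaves both the HRNet-style encoder and the UNet backbone largely unchanged, in line with the description above.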
More specifically, a convolutional neural network skin processing method based on human body analytic prior support comprises the following steps:
S1. constructing a human skin analysis model based on the CIHP human parsing dataset and the DeepLab-V3 structure;
S2. constructing a training sample set D = {(x_i, y_i, m_i) | x_i ∈ X^N, y_i ∈ Y^N, i = 1, 2, ..., N}, where x_i is an original portrait, y_i is the corresponding retouched image produced manually by a professional retoucher, and m_i is the human body analysis map corresponding to x_i, obtained by inference with the human skin analysis model constructed in S1.
As shown in fig. 4, the human body analysis map consists of a white face region, a gray body region and a black background region. A minimal data-loading sketch for these sample triples is given below.
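A minimal PyTorch Dataset returning the (x_i, y_i, m_i) triples of step S2 might look as follows; the directory layout, file format and preprocessing are assumptions, not details from the patent.

```python
# Minimal Dataset yielding the (x_i, y_i, m_i) triples of step S2.
# Directory layout, file naming and preprocessing are assumptions.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

def _to_tensor(path: Path) -> torch.Tensor:
    arr = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return torch.from_numpy(arr).permute(2, 0, 1)   # (3, H, W)

class RetouchDataset(Dataset):
    def __init__(self, root: str):
        self.root = Path(root)
        self.names = sorted(p.name for p in (self.root / "original").glob("*.png"))

    def __len__(self) -> int:
        return len(self.names)

    def __getitem__(self, i: int):
        name = self.names[i]
        x = _to_tensor(self.root / "original" / name)    # original portrait x_i
        y = _to_tensor(self.root / "retouched" / name)   # retoucher's target y_i
        m = torch.from_numpy(                            # parsing map m_i
            np.asarray(Image.open(self.root / "parsing" / name).convert("L"),
                       dtype=np.int64)
        )
        return x, y, m
```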
S3. constructing a convolutional neural network model for portrait skin processing, whose inputs are x_i and m_i and whose output is the predicted retouched image ŷ_i.
S4. constructing the model loss as given in equation (1) (shown only as an image in the original publication), where
φ(y_i) = Σ_{layer ∈ {relu2_2, relu3_3, relu4_3}} w_layer · VGG_layer(y_i)   (2)
i.e. φ in equation (2) denotes the weighted outputs of the relu2_2, relu3_3 and relu4_3 layers of a pre-trained VGG-16 model. A sketch of this perceptual feature term is given below.
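The feature term of equation (2) can be written directly against torchvision's pre-trained VGG-16. In the sketch below the indices 8, 15 and 22 of vgg16().features correspond to relu2_2, relu3_3 and relu4_3; the per-layer weights w_layer and the use of an L1 comparison between φ(ŷ_i) and φ(y_i) are assumptions, since the patent gives equation (1) only as an image.

```python
# Perceptual feature phi(.) of equation (2): weighted outputs of the
# relu2_2, relu3_3 and relu4_3 layers of a pre-trained VGG-16.
# Per-layer weights and the L1 comparison are assumptions; ImageNet
# normalization of the inputs is omitted for brevity.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class VGGPerceptual(nn.Module):
    # Indices of relu2_2, relu3_3, relu4_3 in torchvision's vgg16().features
    LAYERS = {8: "relu2_2", 15: "relu3_3", 22: "relu4_3"}

    def __init__(self, layer_weights=(1.0, 1.0, 1.0)):
        super().__init__()
        # pretrained=True (newer torchvision: weights="IMAGENET1K_V1")
        self.features = vgg16(pretrained=True).features[:23].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.layer_weights = dict(zip(self.LAYERS.values(), layer_weights))

    def phi(self, x):
        outs = {}
        for idx, layer in enumerate(self.features):
            x = layer(x)
            if idx in self.LAYERS:
                outs[self.LAYERS[idx]] = x
        return outs

    def forward(self, pred, target):
        p, t = self.phi(pred), self.phi(target)
        return sum(self.layer_weights[name] *
                   torch.mean(torch.abs(p[name] - t[name])) for name in p)
```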
S5. training the convolutional neural network model, using an Adam optimizer with the learning rate set to 1e-4. A brief training-step sketch follows.
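A hedged sketch of the training step: the Adam optimizer and the 1e-4 learning rate follow S5, while the model interface (taking x_i and m_i) and the use of the perceptual criterion above as the complete loss are assumptions about the overall wiring.

```python
# Training-step sketch for S5: Adam optimizer with learning rate 1e-4 as
# stated in the text. The model interface and the use of VGGPerceptual as
# the full loss are assumptions.
import torch

def train(model, loader, criterion, device="cuda", epochs=10):
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # as in S5
    for _ in range(epochs):
        for x, y, m in loader:                  # original, target, parsing map
            x, y, m = x.to(device), y.to(device), m.to(device)
            pred = model(x, m)                  # network input is x_i and m_i
            loss = criterion(pred, y)           # e.g. VGGPerceptual().to(device)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```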
As shown in fig. 3, the left image is the input original and the right image is the result produced by the method of the present invention; the face, body and background in the right image are processed separately and receive different results. The method beautifies imperfections of the facial and body skin of the portrait, including moles, spots, acne marks and wrinkles, so that the skin of the portrait in the photo looks natural and fine. Because human body analysis is introduced as prior information, the model can better identify the three regions that must be distinguished: the analysis map uses three colors (white for the face, gray for the body, black for the background), so the network output keeps the background unchanged and only processes the white and gray regions. Without distinguishing background from skin, background areas whose color or texture is close to skin would sometimes be processed by mistake.
Fig. 5-1 is the unprocessed original of a real photo, fig. 5-2 is the image obtained with a prior-art filtering-based skin-smoothing method, fig. 5-3 is the image obtained with a prior-art blemish-repair method, and fig. 5-4 is the image obtained with the method of the present invention.
A convolutional neural network skin processing device based on human body analysis prior support is characterized by comprising the following components:
the human skin analysis model, constructed based on the CIHP human parsing dataset and the DeepLab-V3 structure, for obtaining a human skin analysis map;
the convolutional neural network model, constructed based on a UNet structure, for processing the portrait skin;
the human body analysis information guidance model, constructed based on HRNet, for introducing human body skin analysis information and building a human body skin analysis information encoding model that guides the portrait skin processing model to process different semantic regions of the portrait differently.
The foregoing detailed description is intended to illustrate and not limit the invention, which is intended to be within the spirit and scope of the appended claims, and any changes and modifications that fall within the true spirit and scope of the invention are intended to be covered by the following claims.

Claims (9)

1. A convolution neural network skin type processing method based on human body analysis prior support is characterized by comprising the following steps:
A. constructing a human skin analytical model to obtain a human analytical graph;
B. constructing a convolutional neural network model for human image skin treatment;
C. introducing human skin analysis information, constructing a human skin analysis information encoding model, and guiding the portrait skin processing model constructed in step B to process different semantic regions of the portrait differently.
2. The convolutional neural network skin processing method based on human body analysis prior support as claimed in claim 1, wherein: in step A, a human skin analysis model is constructed based on the CIHP human parsing dataset and the DeepLab-V3 structure.
3. The convolutional neural network skin processing method based on human body analysis prior support as claimed in claim 1 or 2, wherein: in step B, a convolutional neural network model for portrait skin processing is constructed based on the UNet structure.
4. The convolutional neural network skin processing method based on human body analysis prior support as claimed in claim 3, wherein: a multi-scale dense convolution module is embedded between each down-sampling and up-sampling stage in step B.
5. The convolutional neural network skin processing method based on human body analysis prior support as claimed in claim 3, wherein: in step C, a human body skin analysis information encoding model is constructed based on HRNet.
6. The convolutional neural network skin processing method based on human body analysis prior support as claimed in claim 1, wherein: when human body skin analysis information is introduced in step C to guide the model to process different portrait regions differently, human body analysis is introduced as prior information for the constructed convolutional neural network model, guiding the model to identify at least the three regions that must be distinguished: face, body and background.
7. The convolutional neural network skin processing method based on the prior support of human body analysis as claimed in claim 1, comprising the following steps:
S1. constructing a human skin analysis model based on the CIHP human parsing dataset and the DeepLab-V3 structure;
S2. constructing a training sample set D = {(x_i, y_i, m_i) | x_i ∈ X^N, y_i ∈ Y^N, i = 1, 2, ..., N}, where x_i is an original image, y_i is the corresponding retouched image produced manually by a professional retoucher, and m_i is the human body analysis map corresponding to x_i, obtained by inference with the human skin analysis model constructed in S1;
S3. constructing a convolutional neural network model for portrait skin processing, whose inputs are x_i and m_i and whose output is the predicted retouched image ŷ_i;
S4. constructing the model loss as given in equation (1) (shown as an image in the original publication), where
φ(y_i) = Σ_{layer ∈ {relu2_2, relu3_3, relu4_3}} w_layer · VGG_layer(y_i)   (2)
i.e. φ in equation (2) denotes the weighted outputs of the relu2_2, relu3_3 and relu4_3 layers of a pre-trained VGG-16 model;
S5. training the convolutional neural network model.
8. The convolutional neural network skin processing method based on the prior support of human body analysis as claimed in claim 7, wherein: in step S5, an Adam optimizer is used and the learning rate is set to 1e-4.
9. A convolution neural network skin type processing device based on human body analysis prior support is characterized by comprising the following components:
the human skin analytical model is constructed on the basis of a CIHP human analytical data set and a DeepLab-V3 structure and is used for obtaining a human skin analytical graph;
the convolutional neural network model is constructed based on a UNet structure and is used for processing the portrait skin;
the human body analysis information guiding model is constructed based on HRNet and is used for introducing human body skin analysis information and constructing a human body skin analysis information coding model to guide the human body skin processing model to carry out different processing on different semantic regions of the human body.
CN202011339189.9A 2020-11-25 2020-11-25 Convolutional neural network skin type processing method and device based on human body analysis prior support Pending CN114549332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011339189.9A CN114549332A (en) 2020-11-25 2020-11-25 Convolutional neural network skin type processing method and device based on human body analysis prior support

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011339189.9A CN114549332A (en) 2020-11-25 2020-11-25 Convolutional neural network skin type processing method and device based on human body analysis prior support

Publications (1)

Publication Number Publication Date
CN114549332A true CN114549332A (en) 2022-05-27

Family

ID=81659465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011339189.9A Pending CN114549332A (en) 2020-11-25 2020-11-25 Convolutional neural network skin type processing method and device based on human body analysis prior support

Country Status (1)

Country Link
CN (1) CN114549332A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107808137A (en) * 2017-10-31 2018-03-16 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
US20200043213A1 (en) * 2017-04-14 2020-02-06 Shenzhen Sensetime Technology Co., Ltd. Face image processing method and apparatus, and electronic device
CN111275694A (en) * 2020-02-06 2020-06-12 电子科技大学 Attention mechanism guided progressive division human body analytic model and method
CN111445437A (en) * 2020-02-25 2020-07-24 杭州火烧云科技有限公司 Method, system and equipment for processing image by skin processing model constructed based on convolutional neural network
CN111583154A (en) * 2020-05-12 2020-08-25 Oppo广东移动通信有限公司 Image processing method, skin beautifying model training method and related device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200043213A1 (en) * 2017-04-14 2020-02-06 Shenzhen Sensetime Technology Co., Ltd. Face image processing method and apparatus, and electronic device
CN107808137A (en) * 2017-10-31 2018-03-16 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN111275694A (en) * 2020-02-06 2020-06-12 电子科技大学 Attention mechanism guided progressive division human body analytic model and method
CN111445437A (en) * 2020-02-25 2020-07-24 杭州火烧云科技有限公司 Method, system and equipment for processing image by skin processing model constructed based on convolutional neural network
CN111583154A (en) * 2020-05-12 2020-08-25 Oppo广东移动通信有限公司 Image processing method, skin beautifying model training method and related device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SUDHA VELUSAMY et al.: "FabSoften: Face Beautification via Dynamic Skin Smoothing, Guided Feathering, and Texture Restoration", 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 19 June 2020 (2020-06-19) *
QIU Jialiang et al.: "Fast face image beautification combining skin-color segmentation and smoothing" (结合肤色分割与平滑的人脸图像快速美化), Journal of Image and Graphics (中国图象图形学报), vol. 21, no. 7, 31 December 2016 (2016-12-31) *

Similar Documents

Publication Publication Date Title
Li et al. Low-light image enhancement via progressive-recursive network
Lv et al. Attention guided low-light image enhancement with a large scale low-light simulation dataset
Ren et al. Low-light image enhancement via a deep hybrid network
Liang et al. Semantically contrastive learning for low-light image enhancement
Liang et al. Cameranet: A two-stage framework for effective camera isp learning
Jiang et al. Unsupervised decomposition and correction network for low-light image enhancement
Liu et al. LAE-Net: A locally-adaptive embedding network for low-light image enhancement
Pan et al. MIEGAN: Mobile image enhancement via a multi-module cascade neural network
Zhang et al. Better than reference in low-light image enhancement: Conditional re-enhancement network
CN110689525A (en) Method and device for recognizing lymph nodes based on neural network
Ma et al. RetinexGAN: Unsupervised low-light enhancement with two-layer convolutional decomposition networks
Li et al. Flexible piecewise curves estimation for photo enhancement
Chen et al. End-to-end single image enhancement based on a dual network cascade model
CN113781468A (en) Tongue image segmentation method based on lightweight convolutional neural network
CN112330613A (en) Method and system for evaluating quality of cytopathology digital image
Tang et al. DRLIE: Flexible low-light image enhancement via disentangled representations
Meng et al. Gia-net: Global information aware network for low-light imaging
Chen et al. Attention-based broad self-guided network for low-light image enhancement
Zheng et al. Windowing decomposition convolutional neural network for image enhancement
Bugeau et al. Influence of color spaces for deep learning image colorization
Zhu et al. Detail-preserving arbitrary style transfer
Jiang et al. Mutual retinex: Combining transformer and cnn for image enhancement
CN112070686B (en) Backlight image cooperative enhancement method based on deep learning
Wang et al. Single low-light image brightening using learning-based intensity mapping
CN114549332A (en) Convolutional neural network skin type processing method and device based on human body analysis prior support

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination