CN111598799A - Image toning enhancement method and image toning enhancement neural network training method - Google Patents


Info

Publication number
CN111598799A
Authority
CN
China
Prior art keywords
image
color
enhancement
neural network
toning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010362929.4A
Other languages
Chinese (zh)
Inventor
刘翼豪 (Yihao Liu)
董超 (Chao Dong)
乔宇 (Yu Qiao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202010362929.4A priority Critical patent/CN111598799A/en
Publication of CN111598799A publication Critical patent/CN111598799A/en
Priority to PCT/CN2020/129510 priority patent/WO2021218119A1/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T5/00 Image enhancement or restoration
                    • G06T5/90 Dynamic range modification of images or parts thereof
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20081 Training; Learning
                        • G06T2207/20084 Artificial neural networks [ANN]
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N3/00 Computing arrangements based on biological models
                    • G06N3/02 Neural networks
                        • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides an image toning enhancement method and an image toning enhancement neural network training method. The method comprises the following steps: acquiring training data samples and constructing an image toning enhancement neural network, wherein the image toning enhancement neural network comprises an encoder-decoder structure with dense skip connections; training the image toning enhancement neural network with a fixed-step learning rate decay strategy based on the training data samples until a color loss function of the network meets a preset condition. Inputting an image to be toning-enhanced into the trained network yields a toning multiplication factor and a bias used to perform toning enhancement on that image. The method has better interpretability, robustness, generalization and extensibility in image enhancement processing, and improves the effect of image enhancement processing.

Description

Image toning enhancement method and image toning enhancement neural network training method
Technical Field
The application belongs to the technical field of image processing, and particularly relates to an image toning enhancement method and an image toning enhancement neural network training method.
Background
In the age of the mobile internet, more and more people are accustomed to sharing photos they take on social networks. Due to the influence of factors such as illumination, weather, environment and equipment, a photo may suffer from overexposure, underexposure, dull color, low saturation, unbalanced contrast and the like, which degrade subjective visual perception, so the photo needs post-toning processing.
Image toning processing changes the overall or local colors of an image by algorithmically adjusting its contrast, saturation, hue and the like, for example brightening a darker picture, dimming an over-bright picture, or raising the saturation of a low-saturation picture, so that the image looks fuller and more vivid. Traditional image toning methods, however, deliver poor results and poor robustness.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the present application provide an image toning enhancement method, an image toning enhancement neural network training method, an image toning enhancement device, a terminal device, and a storage medium.
The application is realized by the following technical scheme:
In a first aspect, an embodiment of the present application provides an image toning enhancement neural network training method, including:
acquiring a training data sample;
constructing an image toning enhancement neural network, wherein the image toning enhancement neural network comprises an encoder-decoder structure with dense skip connections;
training the image toning enhancement neural network by adopting a fixed-step learning rate decay strategy based on the training data sample until a color loss function of the image toning enhancement neural network meets a preset condition; inputting an image to be toning-enhanced into the trained image toning enhancement neural network yields a toning multiplication factor and a bias for performing toning enhancement on the image.
In an implementation manner of the first aspect, the obtaining of training data samples includes:
acquiring a first image sample with toning defects as the training data sample, performing toning processing on the training data sample, and using the training data sample after toning processing as a truth label for network supervised learning, wherein the toning defects comprise at least one of overexposure/underexposure, unbalanced contrast and unsaturated color; or,
acquiring a second image sample without toning defects as a truth label for network supervised learning, and performing color degradation processing on the second image sample to obtain the training data sample; wherein the color degradation processing comprises at least one of overexposure, underexposure, contrast reduction, and color saturation reduction of the second image sample.
In an implementation manner of the first aspect, the color loss function is:

L_color = ‖ Gau(Ŷ) - Gau(Y) ‖

where Ŷ is the toning-enhanced image, Y is the truth label, and Gau(·) is a Gaussian filter function.
In one implementation form of the first aspect, the encoder-decoder structure comprises a plurality of units, each unit comprising a convolution Conv, a rectified linear unit ReLU, a residual block ResBlock, and a pooled downsampling Pooling; wherein the residual block ResBlock comprises convolution Conv and instance normalization IN, which processes the image according to

IN(x) = γ · (x - μ(x)) / σ(x) + β₁

where x is a feature map, μ(x) is the mean of x, σ(x) is the standard deviation of x, and γ and β₁ are linear affine parameters.
In an implementation manner of the first aspect, training the image toning enhancement neural network with a fixed-step learning rate decay strategy includes:
obtaining parameters of the image toning enhancement neural network through iterative learning with a gradient descent algorithm, and halving the learning rate every preset number of iteration rounds while training the image toning enhancement neural network.
In a second aspect, an embodiment of the present application provides an image toning enhancement method, including:
obtaining an image to be toning-enhanced;
inputting the image to be toning-enhanced into the trained image toning enhancement neural network to obtain a toning multiplication factor and a bias for toning enhancement of the image;
and performing toning enhancement processing on the image based on the toning multiplication factor and the bias.
In an implementable manner of the second aspect, performing toning enhancement processing on the image to be toning-enhanced based on the toning multiplication factor and the bias includes:
performing toning enhancement on the image according to J(x) = α · I(x) + β₂, where I(x) is the image to be toning-enhanced, J(x) is the toning-enhanced image, α is the toning multiplication factor, β₂ is the bias, and x represents the pixel coordinate.
In a third aspect, an embodiment of the present application provides an image toning enhancement neural network training apparatus, including:
a sample acquisition module, used for acquiring training data samples;
a neural network construction module, used for constructing an image toning enhancement neural network comprising an encoder-decoder structure with dense skip connections;
a neural network training module, used for training the image toning enhancement neural network by adopting a fixed-step learning rate decay strategy based on the training data sample until a color loss function of the network meets a preset condition, after which inputting an image to be toning-enhanced into the trained network yields a toning multiplication factor and a bias for performing toning enhancement on the image.
In a fourth aspect, an embodiment of the present application provides an image toning enhancement apparatus, including:
an image acquisition module, used for acquiring an image to be toning-enhanced;
a parameter obtaining module, used for inputting the image to be toning-enhanced into a trained image toning enhancement neural network to obtain a toning multiplication factor and a bias for performing toning enhancement on the image;
and a toning enhancement processing module, used for performing toning enhancement processing on the image based on the toning multiplication factor and the bias.
In a fifth aspect, an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the image toning enhancement neural network training method according to any one of the first aspects, or implements the image toning enhancement method according to any one of the second aspects.
In a sixth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when executed by a processor, the computer program implements the image toning enhancement neural network training method according to any one of the first aspect, or implements the image toning enhancement method according to any one of the second aspect.
In a seventh aspect, an embodiment of the present application provides a computer program product which, when run on a terminal device, causes the terminal device to execute the image toning enhancement neural network training method according to any one of the first aspects, or the image toning enhancement method according to any one of the second aspects.
It is to be understood that, for the beneficial effects of the second to seventh aspects, reference may be made to the relevant description of the first aspect, which is not repeated herein.
Compared with the prior art, the embodiment of the application has the advantages that:
According to the embodiment of the application, training data samples are obtained and an image toning enhancement neural network is constructed, the network comprising an encoder-decoder structure with dense skip connections; the network is trained with a fixed-step learning rate decay strategy based on the training data samples until its color loss function meets a preset condition. The trained network outputs a toning multiplication factor and a bias for performing toning enhancement on an image to be toning-enhanced; that is, the intermediate parameters of the enhancement are estimated by the network. The scheme therefore has better interpretability, robustness, generalization and extensibility in image enhancement processing, and improves the effect of image enhancement processing.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the specification.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic view of an application scenario of an image toning enhancement method according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a comparison between an image before and after color enhancement according to an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of a method for training an image color-matching enhanced neural network according to an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an architecture of an image toning enhanced neural network according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of an image toning enhancement method according to an embodiment of the present application;
FIG. 6 is a schematic diagram illustrating comparison between a plurality of groups of images before and after color enhancement according to an embodiment of the present disclosure;
FIG. 7 is a schematic flow chart of image toning enhancement provided by an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an image toning-enhanced neural network training device provided in an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an image toning enhancement apparatus provided in an embodiment of the present application;
fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining," "in response to determining," "upon detecting [the described condition or event]," or "in response to detecting [the described condition or event]."
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Image toning enhancement changes the overall or local colors of an image by algorithmically adjusting its contrast, saturation, hue and the like, for example brightening a darker picture, dimming an over-bright picture, or raising the saturation of a low-saturation picture, so that the image looks fuller and more vivid. Image toning enhancement technology is widely applied in fields such as film and television production, photography and videography, medical imaging, and remote sensing imaging. Image toning enhancement can also serve as a preprocessing step for image processing algorithms such as target recognition, target tracking, feature point matching, image fusion, and super-resolution reconstruction.
In recent years, image enhancement technologies based on deep learning are rapidly developed, and the algorithms enable a neural network to learn mapping relations before and after image enhancement from a large number of training data pairs in a supervised or semi-supervised learning mode. Conventional learning-based image toning enhancement algorithms can be broadly divided into three categories: a physical modeling based method, an image-to-image conversion method and a reinforcement learning method.
Methods based on physical modeling attempt to estimate intermediate parameters of the proposed physical model or image enhancement hypothesis. However, due to the non-linearity and complexity of the actual data, the physical model-based approach is easily broken when the distribution of the input image does not match the model assumption.
The second category of methods treats image enhancement as an image-to-image conversion problem, which directly learns the end-to-end mapping between the input and the enhanced image without the need to model intermediate parameters. However, this method is difficult to train, easily introduces noise, and changes the original texture of the image.
Reinforcement learning is another enhancement approach, whose purpose is to explicitly simulate the stepwise process of human retouching. This method casts the color enhancement problem as a Markov Decision Process (MDP), defines each step as a global color adjustment operation, and solves it with a deep Q-network. However, this method requires a series of toning operations to be defined in advance, and the effect suffers if the operations are poorly defined. The method also has high computational complexity and cost.
Moreover, the above methods are highly limited and generalize poorly. In particular, most of them can only handle one or a few image color degradation problems, such as only over-exposed or only under-exposed pictures. In practical applications, however, one faces not a single degradation problem but the need to deal with exposure, contrast, saturation, hue, etc. simultaneously. Once an existing method encounters a condition its model cannot handle, the data set must be replaced and the model retrained, or even further modified, so the existing methods are severely limited and have insufficient generalization capability.
In addition, the above method does not work well on real data sets. Specifically, the learning-based method is a data-driven method, and the existing method can have better performance on a certain data set after training on the data set, but the effect is reduced once the data set is replaced. Particularly, the real data set has more degradation types and more complex conditions, and the existing method easily causes the problems of color distortion, noise, artifacts, insufficient recovery strength and the like.
Based on the above problems, the image toning enhancement neural network training method in the embodiment of the present application obtains training data samples and constructs an image toning enhancement neural network comprising an encoder-decoder structure with dense skip connections; the network is trained with a fixed-step learning rate decay strategy based on the training data samples until its color loss function meets a preset condition. The trained network outputs a toning multiplication factor and a bias for performing toning enhancement on an image to be toning-enhanced; in other words, the intermediate parameters of image enhancement are estimated by the network, so the scheme has better interpretability, robustness, generalization and extensibility in image enhancement processing, and improves the effect of image enhancement processing.
For example, the embodiment of the present application can be applied to the exemplary scenario shown in fig. 1. In this scenario, the terminal 10 obtains training data samples; for example, the terminal 10 may obtain a first image sample with toning defects as the training data sample, perform manual labeling and toning on it, and use the toned sample as the truth label for network supervised learning; or it may acquire a second image sample without toning defects as the truth label and apply a predefined automatic color degradation to it to obtain the training data sample.
The server 20 obtains the training data samples, constructs an image toning enhancement neural network comprising an encoder-decoder structure with dense skip connections, and trains the network with a fixed-step learning rate decay strategy based on the training data samples until the color loss function of the network meets a preset condition.
Referring to fig. 2, the image to be toning-enhanced is input into the trained image toning enhancement neural network to obtain the intermediate parameters for image enhancement, and the image is then toning-enhanced with these parameters to obtain the enhanced image. The method thus has better interpretability, robustness, generalization and extensibility in image enhancement processing, and improves the enhancement effect.
The method and the device can be used for carrying out image color mixing enhancement on photographic works and film and television works, so that the visual effect of the image works is more vivid and full. For example, the brightness of the overexposed/underexposed picture is adjusted to be at a normal exposure level; adjusting the image with higher or lower contrast to make the important content in the image be highlighted; and enhancing the picture with unbalanced saturation to ensure that the color of the image is more full and vivid.
The image toning enhancement neural network training method of the present application is described in detail below with reference to fig. 1.
Fig. 3 is a schematic flow chart of the image toning enhancement neural network training method provided in an embodiment of the present application. With reference to fig. 3, the training method is described in detail as follows:
in step 101, training data samples are obtained.
The training data sample can be an image that has not been manually toned, or an image subjected to degradation such as overexposure or underexposure; the truth label for network supervised learning can be an image that has been manually toned or manually selected, is neither over- nor under-exposed, is color-balanced, and has good visual quality.
In some embodiments, a first image sample with toning defects may be obtained as the training data sample; the sample is toned, and the toned sample is used as the truth label for network supervised learning, where the toning defects include at least one of overexposure/underexposure, unbalanced contrast, and unsaturated color.
For example, a large number of images with overexposure/underexposure, unbalanced contrast and unsaturated colors can be collected as the Input of the neural network; these Input images are retouched by professional colorists, and the resulting manual toning results serve as the truth labels GT for network supervised learning.
In other embodiments, a second image sample without toning defects may be obtained as the truth label for network supervised learning, and color degradation is applied to the second image sample to obtain the training data sample, where the color degradation comprises at least one of overexposure, underexposure, contrast reduction, and color saturation reduction of the second image sample.
For example, a large number of high-quality pictures, full and vivid in color and aesthetically pleasing, can be collected as truth labels GT for network supervised learning. These images are randomly color-degraded, for example overexposed, underexposed, or reduced in contrast, and the degraded images are used as the Input of the neural network.
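The random degradation pipeline above can be sketched as follows. The parameter ranges are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def degrade(image, rng):
    """Randomly degrade a clean image (values in [0, 1]) to build an
    (Input, GT) training pair, as described in the text. The parameter
    ranges below are illustrative assumptions, not from the patent.
    """
    out = image.copy()
    # Exposure shift via gamma: gamma < 1 brightens (over-exposes),
    # gamma > 1 darkens (under-exposes).
    gamma = rng.uniform(0.5, 2.0)
    out = out ** gamma
    # Contrast reduction: blend toward the global mean intensity.
    c = rng.uniform(0.5, 1.0)
    out = (out - out.mean()) * c + out.mean()
    # Saturation reduction: blend toward the per-pixel gray value.
    s = rng.uniform(0.5, 1.0)
    gray = out.mean(axis=-1, keepdims=True)
    out = gray + (out - gray) * s
    return np.clip(out, 0.0, 1.0)
```

Each call draws fresh degradation parameters, so one GT image can yield many degraded Input variants during training.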
In step 102, an image toning enhancement neural network is constructed, which contains an encoder-decoder structure with dense skip connections.
In this step, a convolutional neural network for image toning enhancement is constructed. The input of the network is the full low-quality picture to be toning-enhanced, and the network comprises an encoder-decoder structure with dense skip connections.
In some embodiments, dense skip connections are provided between the decoder and the encoder in the encoder-decoder structure;
the encoder comprises a plurality of first units, each first unit comprising a convolution Conv, a rectified linear unit ReLU, a residual block ResBlock, and a pooled downsampling Pooling, wherein the residual block ResBlock comprises convolution Conv and instance normalization IN, which processes the image according to

IN(x) = γ · (x - μ(x)) / σ(x) + β₁

where x is a feature map, μ(x) is the mean of x, σ(x) is the standard deviation of x, and γ and β₁ are linear affine parameters;
the decoder comprises a plurality of second units, each comprising a convolution Conv, a rectified linear unit ReLU, and an upsampling Up.
Referring to fig. 4, the encoder includes a series of convolution Conv, rectified linear unit ReLU, residual block ResBlock, and pooled downsampling Pooling operations to extract the feature map of the image while gradually reducing its spatial resolution. The residual block includes convolution Conv, instance normalization IN, and rectified linear unit ReLU operations.
The instance normalization introduced in the encoder is formulated as follows:

IN(x) = γ · (x - μ(x)) / σ(x) + β₁

where x is a feature map, μ(x) is the mean of x, σ(x) is the standard deviation of x, and γ and β₁ are linear affine parameters.
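The formula above can be implemented in a few lines of NumPy. The small `eps` constant guards against division by zero and is a standard implementation detail, not part of the patent text:

```python
import numpy as np

def instance_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Instance normalization IN(x) = gamma * (x - mu(x)) / sigma(x) + beta.

    x has shape (C, H, W); the mean mu(x) and standard deviation sigma(x)
    are computed per channel over the spatial dimensions, as is usual for
    instance normalization.
    """
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True)
    return gamma * (x - mu) / (sigma + eps) + beta
```

With gamma = 1 and beta = 0 each channel of the output has zero mean and (approximately) unit standard deviation; the learned affine parameters γ and β₁ then rescale and shift the normalized features.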
The decoder part comprises a series of convolution Conv, rectified linear unit ReLU and upsampling Up operations that process the features extracted by the encoder and gradually restore the feature map to its original resolution.
Meanwhile, dense skip connections are also introduced in the application: skip connections are added between the decoder and the encoder, so that low-level feature information can be continually reused deep in the network and the feature information is used more efficiently.
In particular, each first unit may be cross-connected with one second unit. Taking fig. 4 as an example, the first unit 1, the first unit 2, the first unit 3, the second unit 1, the second unit 2 and the second unit 3 are arranged in this order from left to right, and the first units are densely cross-connected with the second units.
Specifically, the first unit 1 is cross-linked with the second unit 2, the first unit 2 is cross-linked with the second unit 3, and the first unit 3 is cross-linked with the parameter estimation integration unit: the data obtained by the pooled downsampling Pooling in the first unit 1 is sent to the convolution Conv of the second unit 2, the data obtained by the pooled downsampling Pooling in the first unit 2 is sent to the convolution Conv of the second unit 3, and the data obtained by the pooled downsampling Pooling in the first unit 3 is sent to the convolution Conv of the parameter estimation integration unit.
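The data flow of these cross links can be sketched in miniature (illustrative only: convolutions and ReLU are omitted, average pooling stands in for Pooling, nearest-neighbour repetition stands in for Up, and channel concatenation stands in for the skip links; none of these function names come from the application):

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling on a (C, H, W) feature map (stand-in for Pooling)."""
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour 2x upsampling (stand-in for Up)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def toy_codec_with_skips(x):
    """Encoder halves the resolution twice; the decoder restores it,
    concatenating the matching encoder output at each stage, so
    low-level features are reused deep in the network (dense skips)."""
    e1 = avg_pool2(x)    # output of a first unit
    e2 = avg_pool2(e1)   # output of a deeper first unit
    d1 = np.concatenate([upsample2(e2), e1], axis=0)  # skip link from e1
    d2 = np.concatenate([upsample2(d1), x], axis=0)   # skip link from the input
    return d2
```

The channel dimension grows at each decoder stage because skip features are concatenated rather than added, which is what makes the low-level information directly available to the later layers.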
In step 103, based on the training data sample, training the image color-matching enhancement neural network by using a fixed step learning rate attenuation strategy until a color loss function of the image color-matching enhancement neural network meets a preset condition.
After the enhanced image to be toned is input into the trained image toning enhancement neural network, the network can output a toning multiplication factor and a bias for performing toning enhancement on the enhanced image to be toned.
In this step, a color loss can be used as the loss function, i.e., an L1 or L2 loss computed after Gaussian smoothing; the Gaussian smoothing makes the neural network focus on the low-frequency color component of the image. For example, the color loss function may be:

Loss = ||Gau(Ĵ) − Gau(Y)||

wherein Ĵ is the toned enhanced image, Y is the ground-truth label, Gau(·) is a Gaussian filter function, and ||·|| is an L1 or L2 norm.
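A minimal sketch of this color loss (illustrative only, not the application's implementation; single-channel NumPy images and an L1 norm are assumed, and a hand-rolled separable Gaussian stands in for Gau(·)):

```python
import numpy as np

def gaussian_blur(img, sigma=3.0):
    """Separable Gaussian smoothing of a (H, W) image (stand-in for Gau(.))."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    # blur rows, then columns (separable kernel)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)

def color_loss(pred, target, sigma=3.0):
    """Mean L1 distance between Gaussian-smoothed prediction and ground
    truth, so the loss penalizes low-frequency color differences rather
    than fine texture."""
    return np.abs(gaussian_blur(pred, sigma) - gaussian_blur(target, sigma)).mean()
```

Because both images are smoothed before comparison, small high-frequency differences are largely averaged away and the gradient concentrates on the overall color and tone.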
In some embodiments, a gradient descent algorithm may be adopted, and the parameters of the image toning enhancement neural network are obtained through iterative learning; the learning rate is halved every preset number of iteration rounds while the image toning enhancement neural network is trained.
Illustratively, the present technique learns the parameters of the network through iteration using a gradient descent algorithm. For example, the initial learning rate may be set to a preset value (e.g., 1e-4), the learning rate is attenuated by half every 50000 iteration rounds, and the constructed neural network is trained using the collected data until the neural network converges.
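The fixed-step decay schedule can be sketched as follows (the function name and defaults are illustrative; 1e-4 and 50000 follow the example values above):

```python
def stepped_lr(iteration, base_lr=1e-4, decay_every=50000):
    """Fixed-step learning rate decay: halve base_lr once every
    decay_every iterations."""
    return base_lr * 0.5 ** (iteration // decay_every)
```

For example, the learning rate stays at 1e-4 through iteration 49999, drops to 5e-5 at iteration 50000, and to 2.5e-5 at iteration 100000.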
In summary, the image toning enhancement neural network training method comprises: obtaining training data samples; constructing an image toning enhancement neural network comprising an encoder-decoder structure with dense skip links; and training the image toning enhancement neural network with a fixed-step learning rate decay strategy based on the training data samples until its color loss function meets a preset condition. The trained network outputs a toning multiplication factor and a bias for performing toning enhancement on the image to be toned. Because the neural network estimates intermediate parameters for the image enhancement processing, the scheme has better interpretability, robustness, generalization and extensibility, which improves the effect of the image enhancement processing.
The image toning enhancement method of the present application is described in detail below with reference to fig. 5.
Fig. 5 is a schematic flow chart of an image toning enhancement method provided in an embodiment of the present application, and with reference to fig. 5, the image toning enhancement method is described in detail as follows:
in step 201, an enhanced image to be toned is obtained.
For example, the enhanced image to be color-mixed may be a dark image, a bright image, a high-saturation image, a low-saturation image, a high-contrast image, a low-contrast image, or the like.
In step 202, the enhanced image to be color-mixed is input into the trained image color-mixing enhancement neural network, so as to obtain a color-mixing multiplication factor and a bias for color-mixing enhancement of the enhanced image to be color-mixed.
After the enhanced image to be color-mixed is input into the trained image color-mixing enhancing neural network, the network can output a color-mixing multiplication factor alpha and an offset beta for color-mixing enhancement of the enhanced image to be color-mixed.
In step 203, based on the toning multiplying factor and the offset, toning enhancement processing is performed on the enhanced image to be toned.
The toned enhanced image may be determined by multiplying the enhanced image to be toned by the toning multiplication factor α and adding the offset β.
Illustratively, the toning enhancement processing may be performed on the enhanced image to be toned by J(x) = α·I(x) + β, wherein I(x) is the enhanced image to be toned, J(x) is the toned enhanced image, α is the toning multiplication factor, β is the bias, and x represents the pixel coordinate.
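That affine mapping is trivial to apply once α and β are known (illustrative sketch only; α and β may be scalars or per-pixel maps broadcastable to the image, and the output is clipped to [0, 1], an assumption not stated in the application):

```python
import numpy as np

def apply_toning(image, alpha, beta):
    """Per-pixel affine toning J(x) = alpha * I(x) + beta on an image
    in [0, 1], clipped back to the valid range."""
    return np.clip(alpha * image + beta, 0.0, 1.0)
```

For example, a pixel value 0.2 with α = 1.5 and β = 0.1 maps to 0.4, while values pushed above 1.0 are clipped.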
Fig. 6 is a schematic diagram comparing images before and after the toning enhancement processing of the present application; as can be seen from fig. 6, the toning enhancement processing of the present application noticeably improves the images.
Fig. 7 is a schematic flowchart of image color-mixing enhancement according to an embodiment of the present application, and referring to fig. 7, the image color-mixing enhancement process is as follows:
in step 301, training data samples are obtained.
In step 302, an image toning enhancement neural network is constructed, which comprises an encoder-decoder structure with dense skip connections.
In step 303, based on the training data sample, training the image color-matching enhancement neural network by using a fixed step learning rate attenuation strategy until a color loss function of the image color-matching enhancement neural network meets a preset condition.
In step 304, an enhanced image to be toned is obtained.
In step 305, the enhanced image to be color-mixed is input into the trained image color-mixing enhancement neural network, and a color-mixing multiplication factor and a bias for color-mixing enhancement of the enhanced image to be color-mixed are obtained.
In step 306, a color-mixing enhancement process is performed on the enhanced image to be color-mixed based on the color-mixing multiplier and the bias.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Corresponding to the image toning enhancement neural network training method described in the foregoing embodiments, fig. 8 shows a structural block diagram of the image toning enhancement neural network training apparatus provided in the embodiments of the present application; for convenience of description, only the portions related to the embodiments of the present application are shown.
Referring to fig. 8, the image toning enhancement neural network training apparatus in the embodiment of the present application may include a sample acquisition module 401, a neural network construction module 402, and a neural network training module 403.
The sample acquiring module 401 is configured to acquire a training data sample;
a neural network construction module 402, configured to construct an image toning enhancement neural network comprising an encoder-decoder structure with dense skip links;
a neural network training module 403, configured to train the image toning enhancement neural network by using a fixed-step learning rate decay strategy based on the training data samples until a color loss function of the image toning enhancement neural network meets a preset condition; wherein the trained neural network, when the enhanced image to be toned is input, obtains a toning multiplication factor and a bias for performing toning enhancement on the enhanced image to be toned.
Optionally, the sample acquiring module 401 may specifically be configured to:
acquiring a first image sample with color matching defects as the training data sample, performing color matching processing on the training data sample, and using the training data sample after the color matching processing as a true value label of network supervised learning, wherein the color matching defects comprise at least one of overexposure/underexposure, unbalanced contrast and unsaturated color; alternatively,
acquiring a second image sample without color matching defects as a true value label of network supervised learning, and performing color degradation processing on the second image sample to obtain the training data sample; wherein the color degradation processing comprises at least one of overexposure, underexposure, contrast reduction, and color saturation reduction of the second image sample.
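The second data-collection route (degrading clean images into training inputs) can be sketched as follows (illustrative only; the degradation ranges are made-up values, not taken from the application):

```python
import numpy as np

def degrade(img, rng):
    """Randomly degrade a clean (H, W, 3) image in [0, 1] into a training
    input: exposure shift, contrast reduction, and color desaturation."""
    gain = rng.uniform(0.5, 1.5)          # over-/under-exposure
    out = img * gain
    c = rng.uniform(0.5, 1.0)             # contrast reduction about the mean
    out = (out - out.mean()) * c + out.mean()
    s = rng.uniform(0.5, 1.0)             # desaturation toward grayscale
    gray = out.mean(axis=2, keepdims=True)
    out = gray + s * (out - gray)
    return np.clip(out, 0.0, 1.0)
```

The degraded image serves as the network input and the original clean image as the supervised-learning label, which is the inverse of the first route where defective images are manually retouched.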
Optionally, the color loss function is:
Loss = ||Gau(Ĵ) − Gau(Y)||

wherein Ĵ is the toned enhanced image, Y is the ground-truth label, Gau(·) is a Gaussian filter function, and ||·|| is an L1 or L2 norm.
Optionally, the encoder-decoder structure comprises a plurality of units, each unit comprising a convolution Conv, a linear correction unit ReLU, a residual block ResBlock, and a pooled downsampling Pooling; wherein the residual block ResBlock comprises a convolution Conv and an instance normalization IN, the instance normalization processing the feature map according to

IN(x) = γ·(x − μ(x))/σ(x) + β

wherein x is a feature map, μ(x) is the mean of x, σ(x) is the standard deviation of x, and γ and β are linear affine parameters.
Optionally, the neural network training module 403 may be specifically configured to:
obtaining the parameters of the image color-mixing enhancement neural network through iterative learning using a gradient descent algorithm, and halving the learning rate every preset number of iteration rounds while training the image color-mixing enhancement neural network.
Corresponding to the image toning enhancement method described in the foregoing embodiments, fig. 9 shows a structural block diagram of the image toning enhancement apparatus provided in the embodiments of the present application; for convenience of explanation, only the portions related to the embodiments of the present application are shown.
Referring to fig. 9, the image toning enhancement apparatus in the embodiment of the present application may include an image obtaining module 501, a parameter obtaining module 502, and a toning enhancement processing module 503.
The image obtaining module 501 is configured to obtain an enhanced image to be color-matched;
a parameter obtaining module 502, configured to input the enhanced image to be color-mixed into the trained image color-mixing enhancing neural network, and obtain a color-mixing multiplication factor and a bias for performing color-mixing enhancement on the enhanced image to be color-mixed;
a color-mixing enhancement processing module 503, configured to perform color-mixing enhancement processing on the enhanced image to be color-mixed based on the color-mixing multiplier and the offset.
Optionally, the color-mixing enhancement processing module 503 may be specifically configured to:
performing toning enhancement processing on the enhanced image to be toned by J(x) = α·I(x) + β, wherein I(x) is the enhanced image to be toned, J(x) is the toned enhanced image, α is the toning multiplication factor, β is the bias, and x represents the pixel coordinate.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a terminal device, referring to fig. 10, where the terminal device 600 may include: at least one processor 610, a memory 620, and a computer program stored in the memory 620 and executable on the at least one processor 610, wherein the processor 610, when executing the computer program, implements the steps of any of the above-mentioned method embodiments, such as the steps S101 to S103 in the embodiment shown in fig. 2, or the steps S201 to S203 in the embodiment shown in fig. 5. Alternatively, the processor 610, when executing the computer program, implements the functions of each module/unit in the above-described device embodiments, such as the functions of the modules 401 to 403 shown in fig. 8 or the functions of the modules 501 to 503 shown in fig. 9.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 620 and executed by the processor 610 to accomplish the present application. The one or more modules/units may be a series of computer program segments capable of performing specific functions, which are used to describe the execution of the computer program in the terminal device 600.
Those skilled in the art will appreciate that fig. 10 is merely an example of a terminal device and is not limiting of terminal devices and may include more or fewer components than shown, or some components may be combined, or different components such as input output devices, network access devices, buses, etc.
The Processor 610 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 620 may be an internal storage unit of the terminal device, or may be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. The memory 620 is used for storing the computer program and other programs and data required by the terminal device. The memory 620 may also be used to temporarily store data that has been output or is to be output.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when being executed by a processor, the computer program may implement the steps in the embodiments of the image color-matching enhancing neural network training method or implement the steps in the embodiments of the image color-matching enhancing method.
An embodiment of the present application further provides a computer program product; when the computer program product runs on a mobile terminal, the steps in each embodiment of the image toning enhancement neural network training method or the steps in each embodiment of the image toning enhancement method can be implemented.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), random-access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An image toning enhancement neural network training method is characterized by comprising the following steps:
obtaining training data samples;
Constructing an image toning enhancement neural network, wherein the image toning enhancement neural network comprises an encoder-decoder structure with dense jump links;
training the image color matching enhancement neural network by adopting a fixed-step learning rate decay strategy based on the training data samples until a color loss function of the image color matching enhancement neural network meets a preset condition; wherein the trained neural network, when the enhanced image to be toned is input, obtains a toning multiplication factor and a bias for performing toning enhancement on the enhanced image to be toned.
2. The method for image tone enhancement neural network training as claimed in claim 1, wherein said obtaining training data samples comprises:
acquiring a first image sample with color matching defects as the training data sample, performing color matching processing on the training data sample, and using the training data sample after the color matching processing as a true value label of network supervised learning, wherein the color matching defects comprise at least one of overexposure/underexposure, unbalanced contrast and unsaturated color; alternatively,
acquiring a second image sample without color matching defects as a true value label of network supervised learning, and performing color degradation processing on the second image sample to obtain the training data sample; wherein the color degradation processing comprises at least one of overexposure, underexposure, contrast reduction, and color saturation reduction of the second image sample.
3. The image toning-enhanced neural network training method of claim 2, wherein the color loss function is:
Loss = ||Gau(Ĵ) − Gau(Y)||

wherein Ĵ is the toned enhanced image, Y is the true value label, Gau(·) is a Gaussian filter function, and ||·|| is an L1 or L2 norm.
4. The image toning enhanced neural network training method as recited in claim 1, wherein a dense spanning link is arranged between a decoder and an encoder in the encoder-decoder structure;
the encoder comprises a plurality of first units, each first unit comprising a convolution Conv, a linear correction unit ReLU, a residual block ResBlock, and a pooled downsampling Pooling; wherein the residual block ResBlock comprises a convolution Conv and an instance normalization IN, the instance normalization processing the feature map according to

IN(x) = γ·(x − μ(x))/σ(x) + β

wherein x is a feature map, μ(x) is the mean of x, σ(x) is the standard deviation of x, and γ and β are linear affine parameters;
the decoder comprises a plurality of second units, each second unit comprising a convolution Conv, a linear modification unit ReLU and an upsampling Up operation.
5. The method for training an image color-mixing enhancement neural network according to claim 1, wherein the training the image color-mixing enhancement neural network by using a fixed step learning rate attenuation strategy comprises:
obtaining parameters of the image color-mixing enhancement neural network by iterative learning by adopting a gradient descent algorithm; and attenuating the learning rate by half every iteration round with preset number, and training the image color-mixing enhancement neural network.
6. An image toning enhancement method, comprising:
obtaining an enhanced image to be toned;
inputting the enhanced image to be toned into the trained image toning enhancement neural network to obtain toning multiplying factor and bias for toning enhancement of the enhanced image to be toned;
and carrying out color matching enhancement processing on the enhanced image to be color matched based on the color matching multiplication factor and the bias.
7. The image toning enhancement method as recited in claim 6, wherein the toning enhancement processing the image to be toned based on the toning multiplier and the offset comprises:
performing the toning enhancement processing on the enhanced image to be toned by J(x) = α·I(x) + β, wherein I(x) is the enhanced image to be toned, J(x) is the toned enhanced image, α is the toning multiplication factor, β is the bias, and x represents the pixel coordinate.
8. An image toning enhancement neural network training device, comprising:
the sample acquisition module is used for acquiring training data samples;
the neural network construction module is used for constructing an image toning enhancement neural network, and the image toning enhancement neural network comprises an encoder-decoder structure with dense jump links;
the neural network training module is used for training the image color-mixing enhancement neural network by adopting a fixed-step learning rate decay strategy based on the training data samples until a color loss function of the image color-mixing enhancement neural network meets a preset condition; wherein the trained neural network, when the enhanced image to be toned is input, obtains a toning multiplication factor and a bias for performing toning enhancement on the enhanced image to be toned.
9. An image toning enhancement apparatus, comprising:
the image acquisition module is used for acquiring an enhanced image to be subjected to color mixing;
a parameter obtaining module, configured to input the enhanced image to be color-mixed into a trained image color-mixing enhancing neural network, and obtain a color-mixing multiplication factor and a bias for performing color-mixing enhancement on the enhanced image to be color-mixed;
and the color matching enhancement processing module is used for performing color matching enhancement processing on the enhanced image to be subjected to color matching based on the color matching multiplication factor and the bias.
10. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 5 or implements the method according to claim 6 or 7 when executing the computer program.
CN202010362929.4A 2020-04-30 2020-04-30 Image toning enhancement method and image toning enhancement neural network training method Pending CN111598799A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010362929.4A CN111598799A (en) 2020-04-30 2020-04-30 Image toning enhancement method and image toning enhancement neural network training method
PCT/CN2020/129510 WO2021218119A1 (en) 2020-04-30 2020-11-17 Image toning enhancement method and method for training image toning enhancement neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010362929.4A CN111598799A (en) 2020-04-30 2020-04-30 Image toning enhancement method and image toning enhancement neural network training method

Publications (1)

Publication Number Publication Date
CN111598799A true CN111598799A (en) 2020-08-28

Family

ID=72182419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010362929.4A Pending CN111598799A (en) 2020-04-30 2020-04-30 Image toning enhancement method and image toning enhancement neural network training method

Country Status (2)

Country Link
CN (1) CN111598799A (en)
WO (1) WO2021218119A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884693A (en) * 2021-03-25 2021-06-01 维沃移动通信(深圳)有限公司 Training method and device of image processing model and white balance processing method and device
CN113570499A (en) * 2021-07-21 2021-10-29 此刻启动(北京)智能科技有限公司 Self-adaptive image toning method, system, storage medium and electronic equipment
WO2021218119A1 (en) * 2020-04-30 2021-11-04 中国科学院深圳先进技术研究院 Image toning enhancement method and method for training image toning enhancement neural network
WO2022060088A1 (en) 2020-09-15 2022-03-24 Samsung Electronics Co., Ltd. A method and an electronic device for detecting and removing artifacts/degradations in media
WO2022078413A1 (en) * 2020-10-13 2022-04-21 影石创新科技股份有限公司 Deep learning-based image toning method, apparatus, electronic device, and computer-readable storage medium
WO2023010750A1 (en) * 2021-08-02 2023-02-09 中国科学院深圳先进技术研究院 Image color mapping method and apparatus, electronic device, and storage medium

Families Citing this family (6)

Publication number Priority date Publication date Assignee Title
CN114967121B (en) * 2022-05-13 2023-02-03 哈尔滨工业大学 Design method of end-to-end single lens imaging system
CN114821376B (en) * 2022-06-27 2022-09-20 中咨数据有限公司 Unmanned aerial vehicle image geological disaster automatic extraction method based on deep learning
CN115661144B (en) * 2022-12-15 2023-06-13 湖南工商大学 Adaptive medical image segmentation method based on deformable U-Net
CN116703744B (en) * 2023-04-18 2024-05-28 二十一世纪空间技术应用股份有限公司 Remote sensing image dodging and color homogenizing method and device based on convolutional neural network
CN116721306B (en) * 2023-05-24 2024-02-02 北京思想天下教育科技有限公司 Online learning content recommendation system based on big data cloud platform
CN116757965B (en) * 2023-08-16 2023-11-21 小米汽车科技有限公司 Image enhancement method, device and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN108830816A (en) * 2018-06-27 2018-11-16 厦门美图之家科技有限公司 Image enchancing method and device
CN108876745A (en) * 2018-06-27 2018-11-23 厦门美图之家科技有限公司 Image processing method and device
CN109584179A (en) * 2018-11-29 2019-04-05 厦门美图之家科技有限公司 A kind of convolutional neural networks model generating method and image quality optimization method
CN109658349A (en) * 2018-11-16 2019-04-19 聚时科技(上海)有限公司 A kind of image enchancing method and its application for supervised learning application
US20190295223A1 (en) * 2018-03-22 2019-09-26 Adobe Inc. Aesthetics-guided image enhancement

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
CN108335306B (en) * 2018-02-28 2021-05-18 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN109685750A (en) * 2018-12-14 2019-04-26 厦门美图之家科技有限公司 Image enchancing method and calculating equipment
CN111047543A (en) * 2019-12-31 2020-04-21 腾讯科技(深圳)有限公司 Image enhancement method, device and storage medium
CN111598799A (en) * 2020-04-30 2020-08-28 中国科学院深圳先进技术研究院 Image toning enhancement method and image toning enhancement neural network training method

Cited By (7)

Publication number Priority date Publication date Assignee Title
WO2021218119A1 (en) * 2020-04-30 2021-11-04 中国科学院深圳先进技术研究院 Image toning enhancement method and method for training image toning enhancement neural network
WO2022060088A1 (en) 2020-09-15 2022-03-24 Samsung Electronics Co., Ltd. A method and an electronic device for detecting and removing artifacts/degradations in media
EP4186223A4 (en) * 2020-09-15 2024-01-24 Samsung Electronics Co., Ltd. A method and an electronic device for detecting and removing artifacts/degradations in media
WO2022078413A1 (en) * 2020-10-13 2022-04-21 影石创新科技股份有限公司 Deep learning-based image toning method, apparatus, electronic device, and computer-readable storage medium
CN112884693A (en) * 2021-03-25 2021-06-01 维沃移动通信(深圳)有限公司 Training method and device of image processing model and white balance processing method and device
CN113570499A (en) * 2021-07-21 2021-10-29 此刻启动(北京)智能科技有限公司 Self-adaptive image toning method, system, storage medium and electronic equipment
WO2023010750A1 (en) * 2021-08-02 2023-02-09 中国科学院深圳先进技术研究院 Image color mapping method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
WO2021218119A1 (en) 2021-11-04

Similar Documents

Publication Publication Date Title
CN111598799A (en) Image toning enhancement method and image toning enhancement neural network training method
Lv et al. Attention guided low-light image enhancement with a large scale low-light simulation dataset
Cai et al. Learning a deep single image contrast enhancer from multi-exposure images
US11055827B2 (en) Image processing apparatus and method
Liang et al. CameraNet: A two-stage framework for effective camera ISP learning
CN110675336A (en) Low-illumination image enhancement method and device
CN111292264A (en) Image high dynamic range reconstruction method based on deep learning
WO2012170462A2 (en) Automatic exposure correction of images
CN110335221B (en) Multi-exposure image fusion method based on unsupervised learning
CN111079764A (en) Low-illumination license plate image recognition method and device based on deep learning
CN111415304A (en) Underwater vision enhancement method and device based on cascade deep network
CN114862698B (en) Channel-guided real overexposure image correction method and device
Li et al. HDRNet: Single-image-based HDR reconstruction using channel attention CNN
US20230267582A1 (en) Permutation invariant high dynamic range imaging
CN110717864B (en) Image enhancement method, device, terminal equipment and computer readable medium
CN111325671B (en) Network training method and device, image processing method and electronic equipment
CN110838088B (en) Multi-frame noise reduction method and device based on deep learning and terminal equipment
US20240013354A1 (en) Deep SDR-HDR Conversion
EP3913572A1 (en) Loss function for image reconstruction
CN117391987A (en) Dim light image processing method based on multi-stage joint enhancement mechanism
CN111953888B (en) Dim light imaging method and device, computer readable storage medium and terminal equipment
CN110766153A (en) Neural network model training method and device and terminal equipment
CN112950509A (en) Image processing method and device and electronic equipment
CN115103118B (en) High dynamic range image generation method, device, equipment and readable storage medium
Li et al. Scale-aware Two-stage High Dynamic Range Imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination