CN116843561A - Image enhancement method and device - Google Patents

Image enhancement method and device

Info

Publication number
CN116843561A
Authority
CN
China
Prior art keywords: image, enhancement, illumination, brightness, network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310639487.7A
Other languages
Chinese (zh)
Inventor
高语函
刘微
曲磊
孙菁
张文超
李广琴
朴艺兰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Holding Co Ltd
Original Assignee
Hisense Group Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Group Holding Co Ltd filed Critical Hisense Group Holding Co Ltd
Priority to CN202310639487.7A
Publication of CN116843561A
Legal status: Pending


Classifications

    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/048 Activation functions
    • G06N 3/088 Non-supervised learning, e.g. competitive learning
    • G06N 3/09 Supervised learning
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide an image enhancement method and device. In the method, a plurality of non-uniform brightness images are determined according to a second illumination image and a plurality of images of different brightness obtained by performing brightness transformation on a first illumination image, and an image enhancement model is then trained according to the first illumination image and the plurality of non-uniform brightness images. The first illumination image is an image acquired in an environment whose illuminance is greater than a set illuminance, and the second illumination image is an image acquired in an environment whose illuminance is not greater than the set illuminance. When image enhancement is performed based on the trained image enhancement model, illumination complexity is taken into account, real illumination is simulated more faithfully, the image enhancement effect is improved, and the robustness of image enhancement is improved. Moreover, the embodiments of the present application do not require shooting paired bright and dark images, so generalization can be improved, and the method is interpretable and meets trustworthiness characteristics.

Description

Image enhancement method and device
Technical Field
The present application relates to the field of computer vision, and in particular, to an image enhancement method and apparatus.
Background
Under low-light conditions at night, the pictures collected by a camera are usually dim, and their definition is far lower than what the naked eye perceives. At present, video quality and definition can be improved through image enhancement schemes, which enhance the quality of images shot in dark conditions so that they become clear or approximate the naked-eye effect.
Common image enhancement schemes include supervised algorithms, which require paired daytime and nighttime images of the same scene. However, illumination in actual scenes is complex and variable, and such pairs cannot simulate real illumination, so the enhancement effect is poor.
Disclosure of Invention
The embodiment of the application provides an image enhancement method and device, which are used for solving the problem of poor enhancement effect in the prior art.
In a first aspect, an embodiment of the present application provides an image enhancement method, including:
acquiring an image acquired by image acquisition equipment;
inputting the image into a pre-trained image enhancement model, wherein the image enhancement model is trained according to a first illumination image and a plurality of non-uniform brightness images, the plurality of non-uniform brightness images are determined according to a second illumination image and a plurality of images of different brightness obtained by performing brightness transformation on the first illumination image a plurality of times, the first illumination image is an image acquired in an environment whose illuminance is greater than a set illuminance, and the second illumination image is an image acquired in an environment whose illuminance is not greater than the set illuminance;
and outputting an enhanced image of the image based on the image enhancement model.
In a second aspect, an embodiment of the present application provides an electronic device, at least comprising a processor and a memory, the processor being configured to implement the steps of the image enhancement method according to any one of the preceding claims when executing a computer program stored in the memory.
In a third aspect, an embodiment of the present application provides an image enhancement apparatus, including:
the acquisition module is used for acquiring the image acquired by the image acquisition equipment;
the input module is used for inputting the image into a pre-trained image enhancement model, wherein the image enhancement model is trained according to a first illumination image and a plurality of non-uniform brightness images, the plurality of non-uniform brightness images are determined according to a second illumination image and a plurality of images of different brightness obtained by performing brightness transformation on the first illumination image a plurality of times, the first illumination image is an image acquired in an environment whose illuminance is greater than a set illuminance, and the second illumination image is an image acquired in an environment whose illuminance is not greater than the set illuminance;
and the output module is used for outputting an enhanced image of the image based on the image enhancement model.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the image enhancement method as described in any of the preceding claims.
In the embodiments of the present application, an image acquired by an image acquisition device is acquired; the image is input into a pre-trained image enhancement model, where the model is trained according to a first illumination image and a plurality of non-uniform brightness images, and the non-uniform brightness images are determined according to a second illumination image and a plurality of images of different brightness obtained by performing brightness transformation on the first illumination image a plurality of times; based on the image enhancement model, an enhanced image of the image is output. In this method, the first illumination image is an image acquired in an environment whose illuminance is greater than a set illuminance, and the second illumination image is an image acquired in an environment whose illuminance is not greater than the set illuminance. A plurality of non-uniform brightness images are determined according to the second illumination image and the brightness-transformed images, and the image enhancement model is then trained according to the first illumination image and the non-uniform brightness images. When image enhancement is performed based on the trained model, illumination complexity is taken into account, real illumination is simulated more faithfully, and the image enhancement effect is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present application; for a person skilled in the art, other drawings can be obtained from them without inventive effort.
FIG. 1 is a schematic diagram of an image enhancement process according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an image enhancement process according to some embodiments of the present application;
FIG. 3 is a flowchart of extracting a daytime background area image according to some embodiments of the present application;
FIG. 4 is a flow chart of luminance analysis of night images according to some embodiments of the present application;
FIG. 5 is a schematic illustration of gray scale analysis of an image according to some embodiments of the present application;
FIG. 6 is a flow chart of a method for synthesizing an image with non-uniform brightness according to some embodiments of the present application;
FIG. 7 is a schematic diagram of an image enhancement model according to some embodiments of the present application;
FIG. 8 is a schematic diagram of an image enhancement process according to some embodiments of the present application;
FIG. 9 is a schematic diagram of a model structure of RepVGG according to some embodiments of the application;
FIG. 10A is a schematic diagram of a conventional CBR module architecture provided by some embodiments of the present application;
FIG. 10B is a schematic diagram of a RepVGG-CBR module according to some embodiments of the application;
FIG. 11 is a schematic diagram of a RepVGG-pooling module according to some embodiments of the present application;
FIG. 12A is a schematic diagram of an existing connection module according to some embodiments of the present application;
FIG. 12B is a schematic diagram of an improved RepVGG-connection module according to some embodiments of the present application;
FIG. 13 is a schematic diagram of an image enhancement model according to some embodiments of the present application;
FIG. 14 is a schematic view of an architecture of a first enhanced network according to some embodiments of the present application;
FIG. 15 is a schematic illustration of an architecture of a second enhanced network according to some embodiments of the present application;
FIG. 16 is a schematic diagram of an image enhancement process according to some embodiments of the present application;
FIG. 17 is a schematic structural diagram of an image enhancement device according to some embodiments of the present application;
FIG. 18 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings. Apparently, the described embodiments are only some, not all, embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
In order to better simulate real illumination and improve the image enhancement effect, the embodiments of the present application provide an image enhancement method and device. As shown in FIG. 1, an acquired image is input into a pre-trained image enhancement model, and an enhanced image of the image is output based on the model. The image enhancement model is trained according to a first illumination image and a plurality of non-uniform brightness images; the non-uniform brightness images are determined according to a second illumination image and a plurality of images of different brightness obtained by performing brightness transformation on the first illumination image. The first illumination image is an image acquired in an environment whose illuminance is greater than a set illuminance, and the second illumination image is an image acquired in an environment whose illuminance is not greater than the set illuminance. When the image enhancement model performs image enhancement, illumination complexity is therefore taken into account, real illumination is simulated more faithfully, and the image enhancement effect is improved.
Some embodiments of the present application provide an image enhancement process as shown in fig. 2, including the following steps:
s201: and acquiring an image acquired by the image acquisition equipment.
The image enhancement method provided by the embodiments of the present application is applied to an electronic device, which may be an image acquisition device, user equipment, a server, or the like. The image acquisition device may include a security monitoring device in a security monitoring scene, and the user equipment may include, but is not limited to, a mobile phone, a computer, a wearable device, or a home device.
The image acquisition device may acquire an image or a video stream. If the electronic device itself is an image acquisition device, it can directly obtain the image or video stream it acquires; otherwise, the electronic device obtains the acquired image or video stream from the image acquisition device. Taking a video stream as an example, the electronic device first decodes the video stream into an image sequence, and may then perform image enhancement on every frame of the sequence or only on part of it. For example, when enhancing only part of the images, the electronic device may apply frame skipping to the sequence, e.g. selecting one frame every other frame (only as an example), to obtain a final image sequence image1, image2, …, which improves algorithm efficiency.
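As a rough sketch of this frame-skipping step, assuming OpenCV is used for decoding (the stride of 2 mirrors the every-other-frame example and is not mandated by the embodiments):

```python
import cv2

def sample_frames(video_path: str, stride: int = 2):
    """Decode a video stream and keep every `stride`-th frame."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % stride == 0:  # frame skipping reduces computation
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```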
In one implementation, the electronic device may also detect whether the acquired image is image enhanced, and if it is determined that image enhancement is performed, the following S202 is performed.
S202: the method comprises the steps of inputting an image into a pre-trained image enhancement model, wherein the image enhancement model is obtained by training according to a first illumination image and a plurality of uneven brightness images, and the plurality of uneven brightness images are determined according to a second illumination image and a plurality of images with different brightness obtained by carrying out brightness conversion on the first illumination image.
The first illumination image refers to an image collected in an environment whose illuminance is greater than the set illuminance, and can be understood as a high-illuminance or normal-illuminance image. Generally, the illuminance of a daytime environment is greater than the set illuminance, so the first illumination image may be a daytime image. The second illumination image refers to an image acquired in an environment whose illuminance is not greater than the set illuminance, and can be understood as a low-illuminance image. Generally, the illuminance of a night environment is not greater than the set illuminance, so the second illumination image may be a night image. The embodiments of the present application do not limit the value of the set illuminance.
The electronic equipment stores a pre-trained image enhancement model, and the image enhancement model can be obtained through training in one of the following implementation modes:
the implementation mode is as follows: the image enhancement model is obtained through supervised learning training. For example, the image enhancement model first performs image enhancement on a plurality of non-uniform luminance images, and then trains the image enhancement model based on a daytime image (one example of a first luminance image) and the enhanced plurality of non-uniform luminance images.
Implementation two: the image enhancement model is obtained through both supervised and unsupervised learning. For example, the model performs image enhancement on the plurality of non-uniform brightness images and is trained with supervision based on a daytime image (an example of a first illumination image) and the enhanced non-uniform brightness images; the model also performs image enhancement on a night image (an example of a second illumination image) and is trained without supervision based on the enhanced night image.
Because the plurality of non-uniform brightness images are determined according to the second illumination image and a plurality of images of different brightness obtained by performing brightness transformation on the first illumination image a plurality of times, they reflect illumination complexity, so the trained image enhancement model can better simulate real illumination.
S203: based on the image enhancement model, an enhanced image of the image is output.
Because the image enhancement model is trained in advance, an enhanced image of the image can be output based on the image enhancement model.
In the embodiments of the present application, a plurality of non-uniform brightness images are determined according to the second illumination image and a plurality of images of different brightness obtained by performing brightness transformation on the first illumination image, and the image enhancement model is then trained according to the first illumination image and the non-uniform brightness images. When this model is used for image enhancement, illumination complexity is taken into account, real illumination is simulated more faithfully, the image enhancement effect is improved, and the robustness of image enhancement is improved. In addition, the embodiments of the present application do not require shooting paired bright and dark images, so generalization can be improved, and the method is interpretable.
On the basis of the foregoing embodiment, for implementation one of S202, the training process of the image enhancement model includes:
acquiring a training set, wherein the training set comprises a first illumination image acquired by image acquisition equipment;
the image enhancement model is trained based on the first illumination image and the plurality of non-uniform luminance images.
Because the plurality of non-uniform brightness images are determined according to the second illumination image and the plurality of images of different brightness, the training set may further include, in addition to the first illumination image, a second illumination image acquired by the same image acquisition device. The set of non-uniform brightness images may be prepared in advance or generated in each training iteration. For example, the electronic device performs brightness transformation on the first illumination image a plurality of times to obtain a plurality of images of different brightness, and then determines the non-uniform brightness images according to the second illumination image and these images. The implementation of the brightness transformation is not limited here; for example, but not limited to, the first illumination image may be transformed a plurality of times by adjusting different transformation coefficients.
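The embodiments leave the transform itself open; a gamma-style adjustment is one common way to realize it. A minimal sketch, where the gamma values stand in for the unspecified transformation coefficients:

```python
import numpy as np

def brightness_transforms(img: np.ndarray, gammas=(1.5, 2.0, 3.0, 4.0)):
    """Darken the first illumination image Io several times by varying a
    transformation coefficient (here: gamma), yielding images Ic of
    different brightness. The gamma values are illustrative only."""
    img01 = img.astype(np.float32) / 255.0
    return [np.uint8(255.0 * img01 ** g) for g in gammas]
```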
One or more enhancement networks may be included in the image enhancement model in this embodiment.
Based on the above embodiments, in the embodiments of the present application, based on the image enhancement model, outputting an enhanced image of an image includes:
determining a first enhancement feature map of the image based on a first enhancement network in the image enhancement model;
determining a second enhancement feature map for the background region image of the image based on a second enhancement network in the image enhancement model;
and outputting the enhanced image according to the first enhancement feature map and the second enhancement feature map.
In this embodiment, the image enhancement model includes a plurality of enhancement networks including a first enhancement network and a second enhancement network. The first enhancement network may enhance the brightness of the image, e.g., the enhancement effect of the first enhancement network on the image may be closer to the illumination in the real scene. The second enhancement network may enhance the brightness of the background area of the image, e.g., the enhancement effect of the second enhancement network on the background area of the image may improve the color shift problem.
And then fusing the first enhancement feature map output by the first enhancement network and the second enhancement feature map output by the second enhancement network, so that a final enhancement image can be obtained for the image.
In the embodiment of the application, the image is enhanced based on the first enhancement network and the second enhancement network in the image enhancement model, so that the enhancement effect of the image can be improved.
Based on the foregoing embodiment, for implementation two of S202, the training process of the image enhancement model includes:
acquiring a training set, wherein the training set comprises a first illumination image and a second illumination image which are acquired by the same image acquisition equipment;
training a first enhancement network in the image enhancement model according to the first illumination image and the plurality of non-uniform brightness images;
and training a second enhancement network in the image enhancement model according to the first background area image in the first illumination image and the second illumination image.
The first enhancement network may be supervised trained based on the first illumination image and the plurality of non-uniform luminance images. For example, the first enhancement network performs image enhancement on the plurality of non-uniform luminance images, and then performs supervised training on the first enhancement network according to the first luminance image and the enhanced plurality of non-uniform luminance images.
Based on the first background area image in the first illumination image (e.g. a daytime background area image) and the second illumination image, the second enhancement network may be trained in an unsupervised manner. For example, the second enhancement network performs image enhancement on the second illumination image, the first background area image is then used to enhance the result again, and the second enhancement network is trained without supervision according to the multiply enhanced second illumination image.
A background area generally refers to the stationary part of an image, such as buildings, trees and lawns, while vehicles, pedestrians, etc. belong to the foreground area. For any fixed-position camera, only the brightness of the background area changes over a period of time while its content does not, so night images can be enhanced using these areas of a normal daytime image. As shown in FIG. 3, the electronic device may use a background extraction algorithm to extract the first background area image from the first illumination image, e.g. a frame-difference method or an image segmentation method (for example, YOLOv8); this is not limited in the embodiments of the present application. The electronic device can also fuse first background area images extracted from multiple frames of first illumination images to determine a final first background area image, improving training accuracy. In addition, since the background content changes little, the electronic device need not extract the first background area image frame by frame in real time, but may extract it every set time period, reducing the amount of calculation. The first background area image may be extracted before training or in each training iteration.
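A minimal sketch of the frame-difference option mentioned above; the motion threshold and the accumulation over frame pairs are assumptions, not the exact procedure of the embodiments:

```python
import cv2
import numpy as np

def background_mask(frames, diff_thresh: int = 15) -> np.ndarray:
    """Mark pixels as background where consecutive frames barely change."""
    motion = np.zeros(frames[0].shape[:2], dtype=np.float32)
    for prev, curr in zip(frames, frames[1:]):
        g0 = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        g1 = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)
        moving = (cv2.absdiff(g0, g1) > diff_thresh).astype(np.float32)
        motion = np.maximum(motion, moving)  # accumulate motion over all pairs
    return motion == 0  # True where nothing ever moved: the background area
```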
In a possible implementation, after extracting the first background area image, the electronic device may further determine a background area template according to the first background area image, for the purpose of the background area template, see description of the subsequent embodiments.
In the embodiment of the application, the enhancement effect of the subsequent image can be improved by training the first enhancement network and the second enhancement network in the image enhancement model.
Based on the foregoing embodiments, in the embodiments of the present application, training a first enhancement network in an image enhancement model according to a first luminance image and a plurality of non-uniform luminance images includes:
inputting a plurality of non-uniform brightness images into a first enhancement network, and outputting a plurality of first feature maps based on the first enhancement network;
determining a first loss value according to the first illumination image, the plurality of first feature maps and the first loss function;
training the first enhancement network based on the first loss value.
When training the first enhancement network, the plurality of first feature maps (which may be enhancement feature maps obtained by enhancing the non-uniform brightness images) and the first illumination image are used for supervised training. The first loss function may be a referenced loss function, such as, but not limited to, a structural similarity index (SSIM) loss function and/or a mean-square error (MSE) loss function.
The electronic device may calculate a plurality of corresponding first loss values from the first illumination image and the plurality of first feature maps, and then train the first enhancement network based on these first loss values.
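A sketch of such a referenced first loss, assuming tensors normalized to [0, 1] and the third-party pytorch-msssim package for SSIM; the equal weighting of the two terms is an assumption:

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # third-party SSIM implementation

def first_loss(pred: torch.Tensor, target: torch.Tensor,
               alpha: float = 0.5) -> torch.Tensor:
    """Referenced loss between an enhanced feature map and the first
    illumination image: a weighted sum of SSIM and MSE terms."""
    ssim_term = 1.0 - ssim(pred, target, data_range=1.0)
    mse_term = F.mse_loss(pred, target)
    return alpha * ssim_term + (1.0 - alpha) * mse_term
```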
Based on the foregoing embodiments, in the embodiments of the present application, the method further includes:
determining a brightness area image according to a brightness area with a brightness value larger than the set brightness in the second illumination image;
and fusing the brightness area image and a plurality of images with different brightness to determine a plurality of non-uniform brightness images.
For example, referring to FIG. 4, the electronic device may perform brightness analysis on the night image and then divide it by brightness according to the analysis result. Taking division into two levels as an example, the electronic device decomposes the night image Id into a bright area image and a dark area image, where the bright area image covers the area whose brightness value is greater than the set brightness, i.e. the highlight area image Ih, and the dark area image covers the area whose brightness value is not greater than the set brightness, i.e. the low-light area image Il.
The electronic device can perform brightness analysis on the night image through gray-histogram statistics to obtain a highlight area (generally an area with lamplight) and a low-light area; alternatively, the electronic device may convert the night image from red-green-blue (RGB) space to hue-saturation-value (HSV) space and extract the brightness data of the V channel to obtain the two areas. Taking gray-histogram statistics as an example, FIG. 5(a) shows the gray analysis chart of a daytime image and FIG. 5(b) that of a night image, where the abscissa is the gray value (0 to 255) and the ordinate is the number of pixels at each gray value. The pixel distribution of the night image is generally concentrated in the low-brightness range, whereas the gray distribution of the daytime image is generally balanced, so brightness analysis of a night image can start from its brightness distribution.
When dividing by brightness according to the analysis result, the electronic device can divide several brightness levels from bright to dark. The more levels are divided, the more illumination-area levels are extracted, the closer the synthesized illumination is to the real scene, and the better the effect. The embodiments of the present application take two levels, high brightness and low brightness, as an example, which after division yield the highlight area image Ih and the low-light area image Il. It can be understood that the number of brightness levels and the division manner are not limited in the embodiments of the present application.
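Both analysis routes can be sketched as follows; the V-channel threshold is an illustrative value, not one fixed by the embodiments:

```python
import cv2
import numpy as np

def split_by_brightness(night_bgr: np.ndarray, thresh: int = 180):
    """Decompose a night image Id into a highlight area image Ih and a
    low-light area image Il using the V channel of HSV space."""
    v = cv2.cvtColor(night_bgr, cv2.COLOR_BGR2HSV)[:, :, 2]
    bright = v > thresh  # highlight area (lamplight etc.)
    Ih = np.where(bright[..., None], night_bgr, 0)
    Il = np.where(bright[..., None], 0, night_bgr)
    return Ih, Il

def gray_histogram(img_bgr: np.ndarray) -> np.ndarray:
    """256-bin gray histogram used for the brightness-distribution analysis."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
```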
In one implementation, the electronic device may also determine a brightness area template, i.e. the highlight area template Imask in FIG. 4, from the highlight area image. The highlight area template Imask can serve as an attention map for the subsequent enhancement network. Imask may take a set value (for example, but not limited to, 0), or the highlight area may be normalized according to its actual pixel values.
Further, taking the second illumination image as a night image, after determining the brightness area image in the night image in the above manner, the electronic device can fuse it with the plurality of images of different brightness to synthesize the non-uniform brightness images. As shown in FIG. 6, with a daytime image as the first illumination image, the electronic device performs brightness transformation on the daytime image Io multiple times to obtain a plurality of images Ic of different brightness, and then fuses the highlight area image Ih with the images Ic to determine the non-uniform brightness images Im; the daytime image and the night image are acquired at the same point location. For example, during fusion the electronic device may traverse every pixel of the highlight area image Ih: if the pixel value is greater than 0 (only as an example), the pixel value is copied to the corresponding position of the image Ic; otherwise the pixel value of Ic is retained. After every pixel has been processed, the non-uniform brightness image Im is obtained. The fusion may select part of the highlight area image Ih or all of it, i.e. the selected area is not limited.
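A vectorized sketch of this fusion rule; the per-pixel loop above is expressed with a mask, and the greater-than-zero test follows the example in the text:

```python
import numpy as np

def synthesize_nonuniform(Ih: np.ndarray, Ic: np.ndarray) -> np.ndarray:
    """Fuse the highlight area image Ih into a darkened daytime image Ic:
    where Ih has content (pixel value > 0), copy it over Ic; elsewhere
    keep the pixel of Ic. Returns a non-uniform brightness image Im."""
    mask = (Ih > 0).any(axis=-1, keepdims=True)
    return np.where(mask, Ih, Ic)
```

Applying this to each darkened image Ic yields the set of non-uniform brightness images Im.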
Because night-scene illumination is complex and illumination conditions vary greatly across scenes, if different low-brightness images were obtained only by exposing the daytime image differently, the synthesized illumination would differ considerably from real illumination and could not simulate its complexity. In the embodiments of the present application, the illumination corresponding to the second illumination image (e.g. night illumination) is fused into the generated low-brightness images to obtain synthesized non-uniform illumination images, which increases the illumination complexity of the generated images.
Based on the foregoing embodiments, in the embodiments of the present application, training the second enhancement network in the image enhancement model according to the first background area image in the first illumination image and the second illumination image includes:
inputting the second illumination image into a second enhancement network, and outputting a second feature map based on the second enhancement network;
fusing the background area template and the second feature map to obtain a fused second feature map;
determining a second background area image in the fused second feature map;
determining a second loss value according to the first background area image, the second background area image and the second loss function;
determining a third loss value according to the fused second feature map and the third loss function;
and training the second enhancement network according to the second loss value and the third loss value.
When training the second enhancement network, a first background area image (e.g. a daytime background area image) and a second background area image (e.g. the enhanced background area image determined in the fused second feature map, where the second feature map may be a night enhanced image obtained by enhancing a night image) are used for supervised training, and the fused second feature map is used for unsupervised training. The second loss function may be a referenced loss function, such as, but not limited to, an SSIM loss function and/or an MSE loss function. The third loss function may be a no-reference loss function, such as, but not limited to, a perceptual loss function and/or a total variation loss function.
The electronic device may calculate the corresponding second loss value from the first background area image and the second background area image, calculate the corresponding third loss value from the fused second feature map, and then train the second enhancement network according to both loss values. The embodiments of the present application thus combine the advantages of supervised and unsupervised learning: unsupervised learning on real-scene images requires no paired samples, giving better generalization in real scenes, while supervised training with reference images improves the accuracy and reliability of image enhancement, gives better robustness, and provides interpretability.
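One way the second and third loss values might be combined, sketched with MSE as the referenced term and a total variation term as the no-reference part; the function names and weights are assumptions:

```python
import torch
import torch.nn.functional as F

def total_variation_loss(x: torch.Tensor) -> torch.Tensor:
    """No-reference smoothness term over the fused enhanced image."""
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw

def second_stage_loss(Ibo: torch.Tensor, Ibe: torch.Tensor,
                      fused: torch.Tensor,
                      w2: float = 1.0, w3: float = 0.1) -> torch.Tensor:
    """Second loss (referenced: daytime background Ibo vs. enhanced night
    background Ibe) plus third loss (no-reference TV term on the fused
    night enhanced image)."""
    loss2 = F.mse_loss(Ibe, Ibo)         # supervised background term
    loss3 = total_variation_loss(fused)  # unsupervised term
    return w2 * loss2 + w3 * loss3
```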
FIG. 7 is a schematic diagram of an image enhancement model according to an embodiment of the present application, in which the first illumination image is a daytime image and the second illumination image is a night image. The training set comprises a daytime image set and a night image set, with images in each set named by their point location. In each training iteration, a daytime image Io and a night image Id of the same point location are taken from the two sets, a plurality of non-uniform brightness images Im are synthesized, and the daytime background area image Ibo of the daytime image Io is obtained as training input. The image enhancement network uses a first, a second and a third loss function, where the first and second loss functions may be referenced loss functions and the third a no-reference loss function. The electronic device inputs the non-uniform brightness images Im into the first enhancement network to obtain a plurality of enhancement feature maps Ime, and then calculates the first loss value of the first loss function from the maps Ime and the daytime image Io. Since the images Im are related to the highlight template, the template can act as an attention map guiding the first enhancement network to notice darker areas in the image during training. The electronic device inputs the night image Id into the second enhancement network to obtain a night enhanced image Ie, enhances Ie again with the background area template to obtain a fused night enhanced image, then calculates the second loss value of the second loss function from the enhanced background area image Ibe within the fused night enhanced image, and calculates the third loss value of the third loss function from the fused night enhanced image itself. Finally, the electronic device may train the image enhancement model according to the first, second and third loss values.
During training or inference, i.e. during image enhancement, the electronic device can fuse the enhancement feature map Ime output by the first enhancement network and the fused night enhanced image output by the second enhancement network to obtain the final enhanced image. One implementation is shown in FIG. 8: the map Ime (an example of the first feature map) and the fused night enhanced image (an example of the second feature map) are convolved twice, i.e. processed by a convolution-BN-ReLU (Conv-BN-ReLU, CBR) module and a 1x1 convolution layer (Conv), to obtain the final enhanced image.
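A sketch of this fusion head, assuming the two feature maps are concatenated along the channel axis before the CBR module (the text does not state how they are combined) and illustrative channel widths:

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Fuse the first and second enhancement feature maps with a
    Conv-BN-ReLU (CBR) block followed by a 1x1 convolution."""
    def __init__(self, channels: int = 3, mid: int = 16):
        super().__init__()
        self.cbr = nn.Sequential(
            nn.Conv2d(2 * channels, mid, kernel_size=3, padding=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
        )
        self.out = nn.Conv2d(mid, channels, kernel_size=1)  # 1x1 Conv

    def forward(self, f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
        return self.out(self.cbr(torch.cat([f1, f2], dim=1)))
```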
In one implementation, the image enhancement model is implemented using a UNet network (a semantic segmentation network structure). UNet is a fully convolutional neural network whose architecture consists of three parts: a compression section, a bottleneck and an expansion section. The compression section is composed of several compression blocks, each consisting of two 3x3 convolution layers and one 2x2 max pooling; after each block the number of kernels or feature maps doubles, so the architecture can learn complex structures efficiently. The bottleneck, the bottommost layer, lies between the compression section and the expansion section; it uses two 3x3 convolutional neural network (CNN) layers followed by a 2x2 up-convolution layer. The expansion section is composed of several expansion blocks, each consisting of two 3x3 convolution layers and one 2x2 upsampling layer; the number of feature maps is halved after each block to maintain symmetry. The upsampling layers turn low-resolution feature maps containing high-level abstract features into high-resolution ones while retaining those features, which are then concatenated (a connection operation) with the high-resolution, low-level surface features from the compression side; the number of expansion blocks equals the number of compression blocks. Finally, the generated feature maps are classified using a 1x1 convolution with two convolution kernels to obtain two final heat maps, one score map per class.
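A minimal UNet-style sketch with one compression block, a bottleneck and one expansion block; channel widths are illustrative and even input sizes are assumed:

```python
import torch
import torch.nn as nn

def double_conv(cin: int, cout: int) -> nn.Sequential:
    """Two 3x3 convolution layers, as in each UNet block."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One compression block, a bottleneck and one expansion block with a
    skip connection (concatenation), ending in a 1x1 classification conv."""
    def __init__(self, cin: int = 3, cout: int = 2, width: int = 32):
        super().__init__()
        self.enc = double_conv(cin, width)
        self.pool = nn.MaxPool2d(2)                                  # 2x2 max pooling
        self.bottleneck = double_conv(width, 2 * width)              # channels double
        self.up = nn.ConvTranspose2d(2 * width, width, 2, stride=2)  # 2x2 up-conv
        self.dec = double_conv(2 * width, width)                     # after concat
        self.head = nn.Conv2d(width, cout, 1)                        # two heat maps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)
        b = self.bottleneck(self.pool(e))
        d = self.dec(torch.cat([e, self.up(b)], dim=1))              # skip connection
        return self.head(d)
```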
Based on the above embodiments, in the embodiments of the present application, the image enhancement model includes a RepVGG network structure. RepVGG is a fusible network: in the training stage several branches run in parallel, each convolving with kernels of a different size, which improves the representation capability of the model; in the inference stage the multi-branch model is converted into a single-branch structure, which speeds up inference, reduces memory and facilitates practical deployment.
One implementation is shown in FIG. 9, where FIG. 9(a) shows the structure of a residual network (ResNet). RepVGG borrows the residual edges of ResNet, with the difference that RepVGG uses a residual connection at every layer: a 3x3 convolution is connected in parallel with an identity branch and a 1x1 convolution. FIG. 9(b) shows the RepVGG model structure during training and FIG. 9(c) during inference; in the training model, a batch normalization (BN) layer is added before each branch output.
For example, the embodiments of the present application improve the UNet network structure by replacing the convolutions of the encoder with RepVGG, improving the model's feature-extraction capability. FIG. 10A shows the conventional CBR module structure, comprising one Conv2d function and a BN layer whose output connects to a rectified linear unit (ReLU); FIG. 10B shows the RepVGG-CBR module structure, comprising three branches, two of which each comprise a Conv2d function and a BN layer while the third comprises only a BN layer, with ReLU at the output. FIG. 11 shows the RepVGG-pooling module structure, comprising two branches: one comprises a Conv2d function (parameters k3, s2, p1) and a BN layer, the other a Conv2d function (parameters k1, s2, p0) and a BN layer, and the outputs of both branches connect to ReLU. FIG. 12A shows the existing connection module structure, comprising two CBR modules and one max-pooling (Maxpooling) module; FIG. 12B shows the improved RepVGG-connection module structure, comprising two RepVGG-CBR modules and one RepVGG-pooling module.
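A training-time sketch of the RepVGG-CBR block of FIG. 10B, assuming stride 1 and equal input/output channels so that the BN-only identity branch is valid; at inference the three branches can be re-parameterized into one 3x3 convolution:

```python
import torch
import torch.nn as nn

class RepVGGCBR(nn.Module):
    """Three parallel branches: 3x3 conv + BN, 1x1 conv + BN, and BN only
    (identity); the branch outputs are summed and passed through ReLU."""
    def __init__(self, channels: int):
        super().__init__()
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))
        self.branch1 = nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels))
        self.branch_id = nn.BatchNorm2d(channels)  # identity branch
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.branch3(x) + self.branch1(x) + self.branch_id(x))
```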
In the embodiment of the application, the RepVGG network structure is introduced into the image enhancement model, so that the reasoning speed can be increased, and the image enhancement efficiency can be improved.
Based on the above embodiments, in the embodiments of the present application, the first enhancement network includes an attention network structure of a brightness area template, the brightness area template being determined according to the brightness area image in the second illumination image; and/or the second enhancement network includes an attention network structure of the background area template.
FIG. 13 is a schematic diagram of an image enhancement model structure including the attention network structures. The encoder of the image enhancement model is formed by the RepVGG-connection modules described above, and upsampling and convolution in the decoder stage use conventional operations. The black arrows in FIG. 13 mark the improvements of the embodiments of the present application: black bold arrows correspond to the RepVGG-pooling module shown in FIG. 11, black non-bold arrows correspond to the RepVGG-CBR module shown in FIG. 10B, grey bold arrows represent up-conv 2x2, grey non-bold arrows represent conv 3x3, and the activation function is ReLU.
The first enhancement network includes the attention network structure of the brightness area template, which guides the first enhancement network to notice darker areas in the image during training. In one possible implementation, as shown in FIG. 14, the first enhancement network includes a channel attention network structure and a highlight-template attention network structure (i.e. the attention network structure of the brightness area template).
The second enhancement network includes the attention network structure of the background area template, which can improve the color-shift problem caused by the absence of a reference image in unsupervised learning by supervising the brightness enhancement of the night background area (an example of the background area in the second illumination image). In one possible implementation, as shown in FIG. 15, the second enhancement network includes a channel attention network structure and a background-template attention network structure (i.e. the attention network structure of the background area template).
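One plausible form of such template attention is sketched below; the sigmoid gating and residual form are assumptions, since the embodiments only name the attention structures:

```python
import torch
import torch.nn as nn

class TemplateAttention(nn.Module):
    """Template-guided spatial attention: a single-channel template mask
    (highlight template for the first network, background template for
    the second) modulates the feature maps."""
    def __init__(self):
        super().__init__()
        self.gate = nn.Sigmoid()

    def forward(self, feat: torch.Tensor, template: torch.Tensor) -> torch.Tensor:
        att = self.gate(template)  # squash template to (0, 1)
        return feat * (1.0 + att)  # residual gating keeps original features
```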
Based on the foregoing embodiments, in an embodiment of the present application, before inputting the image into the pre-trained image enhancement model, the method further includes:
carrying out gray level statistics on the image and determining a brightness distribution result of the image;
judging whether the brightness distribution result accords with the gray level distribution rule of the low-illumination image;
if so, it is determined to perform image enhancement on the image.
After acquiring the image, the electronic device can detect whether to perform image enhancement according to the brightness distribution result of the image, i.e. whether the image is a low-illumination image. Referring to FIG. 16, the electronic device acquires an image and analyses its brightness distribution; if the result indicates a low-illumination image, the image is input into the pre-trained image enhancement model and the enhanced image is output; if not, the flow returns to the start to acquire an image again.
In one implementation, the gray distribution rule of low-illumination images includes the gray distribution rule of night images. According to the embodiment shown in FIG. 5, the gray distributions of daytime and night images follow distinct rules, so the electronic device can judge whether the brightness distribution result of an image matches the gray distribution rule of night images.
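A sketch of such a rule check; the dark-bin cutoff and concentration ratio are illustrative assumptions:

```python
import cv2
import numpy as np

def is_low_illumination(img_bgr: np.ndarray,
                        dark_bin: int = 64, ratio: float = 0.7) -> bool:
    """Judge whether the gray-level statistics match the low-illumination
    rule: most pixels concentrated in the dark gray bins."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    return hist[:dark_bin].sum() / hist.sum() >= ratio
```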
It will be appreciated that in some implementations, the electronic device may also employ a deep learning algorithm to classify the acquired images into categories such as daytime images and nighttime images, where the nighttime images are low-light images.
In the embodiment of the application, the electronic equipment judges whether the image is a low-illumination image or not in a gray level statistics mode, and the image enhancement is carried out on the low-illumination image, so that the image enhancement efficiency can be improved.
The method and device of the embodiments of the present application can automatically analyse the brightness of an image and then invoke the image enhancement model for low-illumination images. This effectively alleviates the overexposure and color-cast problems that follow enhancement under non-uniform illumination. At the same time, analysing the pixel distribution in the image improves its naturalness, effectively improves the quality of low-illumination images, and provides support for subsequent business-scenario analysis.
On the basis of the above embodiments, the present application provides an image enhancement device, and fig. 17 is a schematic structural diagram of an image enhancement device according to some embodiments of the present application, where the device includes:
an acquisition module 1701, configured to acquire an image acquired by an image acquisition device;
the input module 1702 is configured to input the image into a pre-trained image enhancement model, where the image enhancement model is trained according to a first illumination image and a plurality of non-uniform brightness images, the non-uniform brightness images are determined according to a second illumination image and a plurality of images of different brightness obtained by performing brightness transformation on the first illumination image, the first illumination image is an image acquired in an environment whose illuminance is greater than a set illuminance, and the second illumination image is an image acquired in an environment whose illuminance is not greater than the set illuminance;
an output module 1703 for outputting an enhanced image of the image based on the image enhancement model.
In one possible implementation, the output module 1703 is specifically configured to determine a first enhancement feature map of the image based on a first enhancement network in the image enhancement model; determine a second enhancement feature map for the background area image of the image based on a second enhancement network in the image enhancement model; and output the enhanced image according to the first enhancement feature map and the second enhancement feature map.
In a possible implementation manner, the device further comprises a training module, configured to acquire a training set, where the training set includes a first illumination image and a second illumination image acquired by the same image acquisition device; train a first enhancement network in the image enhancement model according to the first illumination image and the plurality of non-uniform brightness images; and train a second enhancement network in the image enhancement model according to the first background area image in the first illumination image and the second illumination image.
In one possible implementation manner, the training module is specifically configured to input a plurality of non-uniform brightness images into a first enhancement network, and output a plurality of first feature maps based on the first enhancement network; determining a first loss value according to the first illumination image, the plurality of first feature maps and the first loss function; the first enhanced network is trained based on the first loss value.
In a possible implementation manner, the training module is further configured to determine a luminance area image according to a luminance area in the second luminance image, where the luminance value is greater than the set luminance; and fusing the brightness area image and a plurality of images with different brightness to determine a plurality of non-uniform brightness images.
In a possible implementation manner, the training module is specifically configured to input the second illumination image into a second enhancement network, and output a second feature map based on the second enhancement network; fusing the background area template and the second feature map to obtain a fused second feature map; determining a second background area image in the fused second feature map; determining a second loss value according to the first background area image, the second background area image and the second loss function; determining a third loss value according to the fused second feature map and the third loss function; and training the second enhancement network according to the second loss value and the third loss value.
In one possible implementation, the image enhancement model includes a RepVGG network architecture.
In one possible implementation, the first enhancement network includes an attention profile network structure of a luminance region template, the luminance region template being determined from a luminance region image in the second luminance image; and/or the second enhancement network comprises an attention network structure of the background region template.
In one possible implementation manner, the device further comprises a determining module, configured to perform gray statistics on the image and determine the brightness distribution result of the image; judge whether the brightness distribution result matches the gray distribution rule of low-illumination images; and if so, determine to perform image enhancement on the image.
On the basis of the above embodiment, the present application further provides an electronic device, and fig. 18 is a schematic structural diagram of an electronic device provided by the embodiment of the present application, as shown in fig. 18, including: a processor 1801, a communication interface 1802, a memory 1803 and a communication bus 1804, wherein the processor 1801, the communication interface 1802, and the memory 1803 perform communication with each other through the communication bus 1804;
the memory 1803 stores a computer program therein, which when executed by the processor 1801 causes the processor 1801 to perform the steps of:
acquiring an image acquired by image acquisition equipment;
inputting the image into a pre-trained image enhancement model, wherein the image enhancement model is trained according to a first illumination image and a plurality of non-uniform brightness images, the non-uniform brightness images are determined according to a second illumination image and a plurality of images of different brightness obtained by performing brightness transformation on the first illumination image, the first illumination image is an image acquired in an environment whose illuminance is greater than a set illuminance, and the second illumination image is an image acquired in an environment whose illuminance is not greater than the set illuminance;
based on the image enhancement model, an enhanced image of the image is output.
In one possible implementation, the processor 1801 is specifically configured to:
determining a first enhancement feature map of the image based on a first enhancement network in the image enhancement model;
determining a second enhancement feature map for the background region image of the image based on a second enhancement network in the image enhancement model;
and outputting the enhanced image according to the first enhancement feature map and the second enhancement feature map (an illustrative sketch follows).
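The inference path described above might be sketched as follows; the simple averaging of the two enhancement feature maps is an assumption of this sketch, since the precise fusion rule is implementation-specific.

```python
import torch

@torch.no_grad()
def enhance(image, first_net, second_net):
    """Inference sketch: combine the first and second enhancement feature
    maps into the enhanced image (averaging is an assumed fusion rule)."""
    first_feat = first_net(image)     # first enhancement feature map
    second_feat = second_net(image)   # second enhancement feature map
    return (first_feat + second_feat) / 2
```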
In one possible implementation, the processor 1801 is further configured to:
acquiring a training set, wherein the training set comprises a first illumination image and a second illumination image which are acquired by the same image acquisition equipment;
training a first enhancement network in the image enhancement model according to the first illumination image and the plurality of non-uniform brightness images;
and training a second enhancement network in the image enhancement model according to a first background area image in the first illumination image and the second illumination image.
In one possible implementation, the processor 1801 is specifically configured to:
inputting a plurality of non-uniform brightness images into a first enhancement network, and outputting a plurality of first feature maps based on the first enhancement network;
determining a first loss value according to the first illumination image, the plurality of first feature maps and the first loss function;
and training the first enhancement network according to the first loss value (see the illustrative sketch below).
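One illustrative training step for the first enhancement network, assuming an L1 reconstruction loss toward the shared first illumination image as the first loss function:

```python
import torch
import torch.nn.functional as F

def train_first_network_step(first_net, nonuniform_images, first_illum,
                             optimizer):
    """One training step for the first enhancement network (sketch):
    every non-uniform brightness image is mapped toward the same first
    illumination image; L1 is an assumed choice of first loss function."""
    optimizer.zero_grad()
    target = first_illum.unsqueeze(0)            # (1, C, H, W) reference
    loss = sum(F.l1_loss(first_net(img.unsqueeze(0)), target)
               for img in nonuniform_images) / len(nonuniform_images)
    loss.backward()
    optimizer.step()
    return loss.item()
```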
In one possible implementation, the processor 1801 is further configured to:
determining a brightness area image according to a brightness area with a brightness value larger than the set brightness in the second illumination image;
and fusing the brightness area image and a plurality of images with different brightness to determine a plurality of non-uniform brightness images.
In one possible implementation, the processor 1801 is specifically configured to:
inputting the second illumination image into a second enhancement network, and outputting a second feature map based on the second enhancement network;
fusing the background area template and the second feature map to obtain a fused second feature map;
determining a second background area image in the fused second feature map;
determining a second loss value according to the first background area image, the second background area image and the second loss function;
determining a third loss value according to the fused second feature map and the third loss function;
and training the second enhancement network according to the second loss value and the third loss value.
In one possible implementation, the image enhancement model includes a RepVGG network architecture.
In one possible implementation, the first enhancement network includes an attention network structure of a luminance region template, the luminance region template being determined from a luminance region image in the second illumination image; and/or the second enhancement network comprises an attention network structure of a background region template.
In one possible implementation, the processor 1801 is further configured to:
carrying out gray level statistics on the image and determining a brightness distribution result of the image;
judging whether the brightness distribution result accords with the gray level distribution rule of the low-illumination image;
if so, it is determined to perform image enhancement on the image.
The communication bus mentioned in connection with the above electronic device may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface 1802 is used for communication between the above-described electronic device and other devices.
The memory 1803 may include a RAM (Random Access Memory) or an NVM (Non-Volatile Memory), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor 1801 may be a general-purpose processor, including a central processing unit, an NP (Network Processor), etc.; it may also be a DSP (Digital Signal Processor), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
On the basis of the above embodiments, an embodiment of the present application provides a computer-readable storage medium storing a computer program executable by an electronic device, and the computer program, when executed on the electronic device, causes the electronic device to perform the steps of:
acquiring an image acquired by image acquisition equipment;
inputting the image into a pre-trained image enhancement model, wherein the image enhancement model is obtained by training according to a first illumination image and a plurality of non-uniform brightness images, the plurality of non-uniform brightness images are determined according to a second illumination image and a plurality of images of different brightness obtained by carrying out brightness conversion on the first illumination image a plurality of times, the first illumination image is an image acquired in an environment with illumination greater than a set illumination, and the second illumination image is an image acquired in an environment with illumination not greater than the set illumination;
based on the image enhancement model, an enhanced image of the image is output.
In one possible implementation, outputting an enhanced image of the image based on the image enhancement model includes:
determining a first enhancement feature map of the image based on a first enhancement network in the image enhancement model;
determining a second enhancement feature map for the background region image of the image based on a second enhancement network in the image enhancement model;
and outputting the enhanced image according to the first enhancement feature map and the second enhancement feature map.
In one possible implementation, the training process of the image enhancement model includes:
acquiring a training set, wherein the training set comprises a first illumination image and a second illumination image which are acquired by the same image acquisition equipment;
training a first enhancement network in the image enhancement model according to the first illumination image and the plurality of non-uniform brightness images;
and training a second enhancement network in the image enhancement model according to a first background area image in the first illumination image and the second illumination image.
In one possible implementation, training a first enhancement network in the image enhancement model according to the first illumination image and the plurality of non-uniform brightness images includes:
inputting a plurality of non-uniform brightness images into a first enhancement network, and outputting a plurality of first feature maps based on the first enhancement network;
determining a first loss value according to the first illumination image, the plurality of first feature maps and the first loss function;
and training the first enhancement network according to the first loss value.
In one possible embodiment, the method further comprises:
determining a brightness area image according to a brightness area with a brightness value larger than the set brightness in the second illumination image;
and fusing the brightness area image and a plurality of images with different brightness to determine a plurality of non-uniform brightness images.
In one possible embodiment, training the second enhancement network in the image enhancement model according to the first background area image in the first illumination image and the second illumination image comprises:
inputting the second illumination image into a second enhancement network, and outputting a second feature map based on the second enhancement network;
fusing the background area template and the second feature map to obtain a fused second feature map;
determining a second background area image in the fused second feature map;
determining a second loss value according to the first background area image, the second background area image and the second loss function;
determining a third loss value according to the fused second feature map and the third loss function;
and training the second enhancement network according to the second loss value and the third loss value.
In one possible implementation, the image enhancement model includes a RepVGG network architecture.
In one possible implementation, the first enhancement network includes an attention network structure of a luminance region template, the luminance region template being determined from a luminance region image in the second illumination image; and/or the second enhancement network comprises an attention network structure of a background region template.
In one possible embodiment, the method further comprises, prior to inputting the image into the pre-trained image enhancement model:
carrying out gray level statistics on the image and determining a brightness distribution result of the image;
judging whether the brightness distribution result accords with the gray level distribution rule of the low-illumination image;
if so, it is determined to perform image enhancement on the image.
Since the principle by which the above computer-readable storage medium solves the problem is similar to that of the image enhancement method, reference may be made to the method embodiments for its implementation, and repeated description is omitted here.
The computer-readable storage medium may be any available medium or data storage device that can be accessed by a processor in an electronic device, including but not limited to magnetic memories such as floppy disks, hard disks, magnetic tapes and MO (magneto-optical) disks; optical memories such as CD, DVD, BD and HVD; and semiconductor memories such as ROM, EPROM, EEPROM, NAND FLASH (non-volatile memory) and SSD (solid-state drive).
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A method of image enhancement, the method comprising:
acquiring an image acquired by image acquisition equipment;
inputting the image into a pre-trained image enhancement model, wherein the image enhancement model is obtained by training according to a first illumination image and a plurality of non-uniform brightness images, the plurality of non-uniform brightness images are determined according to a second illumination image and a plurality of images of different brightness obtained by carrying out brightness conversion on the first illumination image a plurality of times, the first illumination image is an image acquired in an environment with illumination greater than a set illumination, and the second illumination image is an image acquired in an environment with illumination not greater than the set illumination;
and outputting an enhanced image of the image based on the image enhancement model.
2. The method of claim 1, wherein the outputting the enhanced image of the image based on the image enhancement model comprises:
determining a first enhancement feature map for the image based on a first enhancement network in the image enhancement model;
determining a second enhancement feature map for a background region image of the image based on a second enhancement network in the image enhancement model;
and outputting the enhanced image according to the first enhancement feature map and the second enhancement feature map.
3. The method according to claim 1 or 2, wherein the training process of the image enhancement model comprises:
acquiring a training set, wherein the training set comprises a first illumination image and a second illumination image which are acquired by the same image acquisition equipment;
training a first enhancement network in the image enhancement model according to the first illumination image and the plurality of non-uniform brightness images;
and training a second enhancement network in the image enhancement model according to a first background area image in the first illumination image and the second illumination image.
4. The method of claim 3, wherein the training a first enhancement network in the image enhancement model according to the first illumination image and the plurality of non-uniform brightness images comprises:
inputting the plurality of non-uniform brightness images into the first enhancement network, and outputting a plurality of first feature maps based on the first enhancement network;
determining a first loss value according to the first illumination image, the plurality of first feature maps and a first loss function;
and training the first enhancement network according to the first loss value.
5. The method of claim 4, wherein the method further comprises:
determining a brightness area image according to a brightness area with a brightness value larger than a set brightness in the second illumination image;
and fusing the brightness area image and the plurality of images with different brightness to determine the plurality of non-uniform brightness images.
6. The method of claim 3, wherein the training a second enhancement network in the image enhancement model according to the first background area image in the first illumination image and the second illumination image comprises:
inputting the second illumination image into the second enhancement network, and outputting a second feature map based on the second enhancement network;
fusing a background area template and the second feature map to obtain a fused second feature map;
determining a second background area image in the fused second feature map;
determining a second loss value according to the first background area image, the second background area image and a second loss function;
determining a third loss value according to the fused second feature map and a third loss function;
and training the second enhancement network according to the second loss value and the third loss value.
7. The method of claim 3, wherein the image enhancement model comprises a RepVGG network structure.
8. The method of claim 3, wherein the first enhancement network comprises an attention network structure of a luminance region template, the luminance region template being determined from a luminance region image in the second illumination image; and/or
The second enhancement network includes an attention network structure of a background region template.
9. The method of claim 1 or 2, wherein prior to said inputting the image into a pre-trained image enhancement model, the method further comprises:
carrying out gray statistics on the image and determining a brightness distribution result of the image;
judging whether the brightness distribution result accords with the gray level distribution rule of the low-illumination image;
if so, determining to perform image enhancement on the image.
10. An electronic device comprising at least a processor and a memory, the processor being adapted to implement the steps of the image enhancement method according to any of claims 1-9 when executing a computer program stored in the memory.
CN202310639487.7A 2023-05-31 2023-05-31 Image enhancement method and device Pending CN116843561A (en)

Priority Applications (1)

Application Number: CN202310639487.7A; Priority Date: 2023-05-31; Filing Date: 2023-05-31; Title: Image enhancement method and device

Publications (1)

Publication Number: CN116843561A; Publication Date: 2023-10-03

Family ID: 88160826

Country Status (1)

Country: CN; Link: CN116843561A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination