CN117876280A - Video frame image enhancement method, system and storage medium

Info

Publication number
CN117876280A
Authority
CN
China
Prior art keywords
image
channel
video
channel image
brightness
Prior art date
Legal status
Pending
Application number
CN202311807512.4A
Other languages
Chinese (zh)
Inventor
申月
丁霞
张煦
Current Assignee
Tianyi IoT Technology Co Ltd
Original Assignee
Tianyi IoT Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianyi IoT Technology Co Ltd
Priority to CN202311807512.4A
Publication of CN117876280A

Abstract

The invention discloses a video frame image enhancement method, system and storage medium, applied to the technical field of image processing, which can improve the robustness and applicability of image enhancement and improve the image enhancement effect. The method comprises the following steps: performing scene division on the first video image according to a preset division threshold to obtain a second video image; converting the second video image from the three-primary color space to the hexagonal cone model color space to obtain a third video image; calculating a high-brightness dark channel image according to the first video image, and fusing the first brightness channel image with the high-brightness dark channel image to obtain a second brightness channel image; inputting the first hue channel image into a first neural network model to obtain a second hue channel image; inputting the first saturation channel image into a second neural network model to obtain a second saturation channel image; and obtaining a target enhanced image according to the second brightness channel image, the second hue channel image and the second saturation channel image.

Description

Video frame image enhancement method, system and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, a system, and a storage medium for enhancing a video frame image.
Background
In image acquisition, imaging conditions differ across scenes, such as the strong-light environment of a sunny outdoor day and the weak-light environment of overcast, rainy days or nighttime. These varied imaging environments can cause the acquired image to exhibit problems such as overly dark regions, overly bright regions, loss of detail information and insufficient contrast, which hinder subsequent image recognition and reduce its accuracy and reliability. Therefore, the corresponding images need to be enhanced. However, in the related art, images processed by existing image enhancement algorithms suffer from problems such as color distortion and poor contrast, and the image enhancement effect is unstable.
Disclosure of Invention
In order to solve at least one of the above technical problems, the present invention provides a video frame image enhancement method, a system and a storage medium, which can effectively improve the robustness and applicability of image enhancement and effectively improve the image enhancement effect.
In one aspect, an embodiment of the present invention provides a video frame image enhancement method, including the following steps:
performing scene division on a first video image in the video to be processed according to a preset division threshold value to obtain a second video image; the preset dividing threshold value is determined through dark channel high darkness values of sample images of different scenes;
converting the second video image from a three-primary color space to a hexagonal cone model color space to obtain a third video image; the third video image comprises a first hue channel image, a first saturation channel image and a first brightness channel image;
calculating to obtain a high-brightness dark channel image according to the first video image, so as to fuse the first brightness channel image of the third video image with the high-brightness dark channel image to obtain a second brightness channel image;
inputting the first hue channel image into a first neural network model to obtain a second hue channel image;
inputting the first saturation channel image into a second neural network model to obtain a second saturation channel image;
and obtaining a target enhanced image according to the second brightness channel image, the second hue channel image and the second saturation channel image.
According to some embodiments of the invention, performing differential processing on the images in the video to be processed to screen out a first video image includes:
performing differential operation on pixel points corresponding to two continuous frames of images in the video to be processed to obtain a measurement index, and performing binarization processing on the corresponding pixel points according to the measurement index and a preset measurement threshold to obtain a binarized image;
performing an averaging treatment on the binarized image to obtain a prior mean value;
and screening the images with the prior mean value smaller than a preset prior threshold value to obtain the first video image.
According to some embodiments of the present invention, the performing scene division on the first video image in the video to be processed according to the preset division threshold to obtain the second video image includes:
calculating the corresponding dark channel high darkness value under each scene according to a preset image data set; the preset image data set comprises a plurality of strong light scene sample images, clear scene sample images and low-illumination scene sample images;
fusing the dark channel high darkness value with a hexagonal cone model color channel of a corresponding image in the preset image dataset to obtain a channel mean value, so as to determine the preset dividing threshold value according to the channel mean value;
dividing the first video image through the preset dividing threshold value to obtain a second video image; wherein the second video image comprises a low-light scene image.
According to some embodiments of the present invention, the fusing the first luminance channel image and the high luminance dark channel image of the third video image to obtain a second luminance channel image includes:
performing statistical analysis according to the low-illumination scene sample images to determine a cold color gamut space threshold; wherein the cold color gamut space threshold comprises a first threshold and a second threshold, and the first threshold is less than the second threshold;
comparing each pixel of the third video image to the cold gamut space threshold to determine whether the pixel is greater than the first threshold and less than the second threshold;
and when the pixel point is determined to be larger than the first threshold value and smaller than the second threshold value, fusing the brightness channel corresponding to the pixel point with the high-brightness dark channel image to obtain the second brightness channel image.
According to some embodiments of the invention, the inputting the first hue channel image into the first neural network model to obtain a second hue channel image includes:
constructing a first generative adversarial network, and performing adversarial training by taking the clear scene sample image as a positive sample to obtain a first generative adversarial model;
and inputting the first hue channel image into the first generative adversarial model to enhance the hue channel, so as to obtain the second hue channel image.
According to some embodiments of the invention, the inputting the first saturation channel image into a second neural network model, to obtain a second saturation channel image, includes:
constructing a second generative adversarial network, and graying the clear scene sample image to obtain a gray sample image, so as to perform adversarial training on the second generative adversarial network with the gray sample image as a positive sample to obtain a second generative adversarial model;
carrying out graying treatment on the first saturation channel image to obtain a first gray level image;
inputting the first gray level image into the second generative adversarial model to obtain edge information;
and fusing the edge information and the first saturation channel image to obtain the second saturation channel image.
According to some embodiments of the invention, the obtaining the target enhanced image according to the second luminance channel image, the second hue channel image, and the second saturation channel image includes:
constructing a fourth video image according to the second brightness channel image, the second hue channel image and the second saturation channel image;
and converting the fourth video image from the hexagonal cone model color space to the trichromatic color space to obtain the target enhanced image.
On the other hand, the embodiment of the invention also provides a video frame image enhancement system, which comprises:
The first module is used for carrying out scene division on a first video image in the video to be processed according to a preset division threshold value to obtain a second video image; the preset dividing threshold value is determined through dark channel high darkness values of sample images of different scenes;
the second module is used for converting the second video image from a three-primary color space to a hexagonal cone model color space to obtain a third video image; the third video image comprises a first hue channel image, a first saturation channel image and a first brightness channel image;
the third module is used for calculating a high-brightness dark channel image according to the first video image so as to fuse the first brightness channel image and the high-brightness dark channel image of the third video image to obtain a second brightness channel image;
a fourth module, configured to input the first hue channel image into a first neural network model, to obtain a second hue channel image;
a fifth module, configured to input the first saturation channel image into a second neural network model, to obtain a second saturation channel image;
and a sixth module, configured to obtain a target enhanced image according to the second luminance channel image, the second hue channel image, and the second saturation channel image.
On the other hand, the embodiment of the invention also provides a video frame image enhancement system, which comprises:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the video frame image enhancement method as described in the above embodiments.
In another aspect, an embodiment of the present invention further provides a computer storage medium, in which a program executable by a processor is stored, where the program executable by the processor is used to implement the video frame image enhancement method according to the above embodiment.
The video frame image enhancement method, system and storage medium of the embodiments of the invention have at least the following beneficial effects: the embodiment of the invention first performs scene division on the first video image in the video to be processed according to the preset division threshold, thereby realizing scene division of the first video image and obtaining the second video image. The preset division threshold in the embodiment of the invention is determined through the dark channel high darkness values of sample images of different scenes. Then, the embodiment of the invention converts the second video image from the three-primary color space to the hexagonal cone model color space to obtain a third video image comprising a first hue channel image, a first saturation channel image and a first brightness channel image. Meanwhile, the embodiment of the invention calculates the corresponding high-brightness dark channel image according to the first video image, and fuses the first brightness channel image with the high-brightness dark channel image to obtain the second brightness channel image, so that the brightness of the dark regions of the third video image can be enhanced. In addition, the embodiment of the invention inputs the first hue channel image into the first neural network model for processing to obtain the second hue channel image, and inputs the first saturation channel image into the second neural network model for processing to obtain the second saturation channel image. Finally, the embodiment of the invention obtains the target enhanced image according to the second brightness channel image, the second hue channel image and the second saturation channel image, thereby realizing image enhancement of the video frame image, effectively improving the robustness and applicability of image enhancement, and effectively improving the image enhancement effect. It is easy to understand that in the embodiment of the invention, the preset division threshold for scene division is determined through the dark channel high darkness values of sample images of different scenes, and the first video image in the video to be processed is scene-divided according to this threshold, so that video images of different scenes can be divided to obtain the corresponding second video images; the second video image is then converted from the three-primary color space to the hexagonal cone model color space and each channel image is processed correspondingly, which effectively retains image information while effectively improving the robustness, applicability and effect of image enhancement.
Drawings
FIG. 1 is a flowchart of a video frame image enhancement method provided by an embodiment of the present invention;
FIG. 2 is a schematic flowchart of the steps of screening images in a video to be processed to obtain a first video image according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of the steps of performing scene division on a first video image according to a preset division threshold to obtain a second video image according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of the steps of fusing a first brightness channel image of a third video image with a high-brightness dark channel image to obtain a second brightness channel image according to an embodiment of the present invention;
FIG. 5 is a schematic flowchart of the steps of inputting a first hue channel image into a first neural network model to obtain a second hue channel image according to an embodiment of the present invention;
FIG. 6 is a schematic flowchart of the steps of inputting a first saturation channel image into a second neural network model to obtain a second saturation channel image according to an embodiment of the present invention;
FIG. 7 is a schematic flowchart of the steps of obtaining a target enhanced image according to a second brightness channel image, a second hue channel image and a second saturation channel image according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the overall architecture of a video frame image enhancement method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a video frame image enhancement system according to an embodiment of the present invention;
FIG. 10 is a schematic block diagram of a video frame image enhancement system according to an embodiment of the present invention.
Detailed Description
The embodiments described in the present application should not be construed as limiting the present application; all other embodiments that can be obtained by those skilled in the art without inventive effort are intended to fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
Before describing embodiments of the present application, related terms referred to in the present application will be first described.
Generative Adversarial Network (Generative Adversarial Network, GAN): a generative model that learns through two neural networks contesting with each other, and that can learn a generation task without using labeled data. A generative adversarial network consists of a generator and a discriminator. The generator takes random samples from a latent space as input, and its output needs to resemble the real samples in the training set as closely as possible. Correspondingly, the discriminator takes either a real sample or the generator's output as input, and tries to distinguish the generator's output from the real samples as well as possible. Through this constant adversarial learning between generator and discriminator, the discriminator eventually cannot judge whether the generator's output is real.
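For intuition, the adversarial training loop described above can be sketched as follows; this is a minimal illustrative example with hypothetical toy networks (assuming PyTorch), not the models used in the patent:

```python
import torch
import torch.nn as nn

# Hypothetical toy fully-connected generator and discriminator.
latent_dim, data_dim = 64, 784
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1), nn.Sigmoid())
bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def adversarial_step(real):                      # real: (B, data_dim) training samples
    b = real.size(0)
    fake = G(torch.randn(b, latent_dim))         # generator maps latent samples to data
    # Discriminator tries to separate real samples (label 1) from fakes (label 0).
    loss_d = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator tries to make the discriminator output 1 for its samples.
    loss_g = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```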
Three-primary color space: refers to a color space consisting of three basic colors, typically red, green and blue, corresponding respectively to the three channels of the RGB color model. In the RGB color space, each color can be represented by mixing red, green and blue to different extents.
Hexagonal cone model (HSV) color space: a color space created from the intuitive perception of color, in which each color is represented by hue (Hue, H), saturation (Saturation, S) and brightness (Value, V).
In image acquisition, imaging conditions differ across scenes, such as the strong-light environment of a sunny outdoor day and the weak-light environment of overcast, rainy days or nighttime. These varied imaging environments can cause the acquired image to exhibit problems such as overly dark regions, overly bright regions, loss of detail information and insufficient contrast, which hinder subsequent image recognition and reduce its accuracy and reliability. Therefore, the corresponding images need to be enhanced. However, in the related art, images processed by existing image enhancement algorithms suffer from problems such as color distortion and poor contrast, and the image enhancement effect is unstable. Therefore, the above technical problems need to be solved.
Based on the above, an embodiment of the present invention provides a video frame image enhancement method, system and storage medium, which can effectively improve the robustness and applicability of image enhancement and effectively improve the image enhancement effect. Referring to fig. 1, the method of the embodiment of the present invention includes, but is not limited to, step S110, step S120, step S130, step S140, step S150, and step S160.
Specifically, the method application process of the embodiment of the invention includes, but is not limited to, the following steps:
S110: and carrying out scene division on the first video image in the video to be processed according to a preset division threshold value to obtain a second video image. The preset dividing threshold value is determined through dark channel high darkness values of sample images of different scenes.
S120: and converting the second video image from the three-primary color space to the hexagonal cone model color space to obtain a third video image. The third video image comprises a first color channel image, a first saturation channel image and a first brightness channel image.
S130: and calculating a high-brightness dark channel image according to the first video image, so as to fuse the first brightness channel image of the third video image with the high-brightness dark channel image to obtain a second brightness channel image.
S140: and inputting the first hue channel image into a first neural network model to obtain a second hue channel image.
S150: and inputting the first saturation channel image into a second neural network model to obtain a second saturation channel image.
S160: and obtaining a target enhanced image according to the second brightness channel image, the second hue channel image and the second saturation channel image.
In the working process of this embodiment, the embodiment of the invention first performs scene division on the first video image in the video to be processed according to the preset division threshold to obtain the second video image. Specifically, in the embodiment of the invention, the preset division threshold is determined through the dark channel high darkness values of sample images of different scenes. The dark channel high darkness value in the embodiment of the invention is used to describe the characteristics of dark regions in an image and is determined by calculating the pixel values of those dark regions. Correspondingly, a dark channel refers to a darker region of an image with lower pixel values. In addition, the video to be processed in the embodiment of the invention refers to video data that requires video frame image enhancement, such as surveillance video and driving-record video. Correspondingly, in the embodiment of the invention, the first video image refers to an effective image frame in the video to be processed, for example an image containing a dynamic target. It is easy to understand that videos to be processed collected in complex scene environments, such as surveillance images, suffer from blurred details, reduced contrast, and brightness that is too low or overexposed, and carry little effective information, which is unfavorable for subsequent recognition. Therefore, the embodiment of the invention performs scene division on the first video image in the video to be processed by means of the preset division threshold, so that the corresponding images are adaptively enhanced and the image enhancement effect is effectively improved.
Then, the embodiment of the invention converts the second video image from the three-primary color space to the hexagonal cone model color space to obtain the third video image. Specifically, in the embodiment of the present invention, the second video image is an image in the three-primary color space, i.e., an RGB-space image. Correspondingly, by converting the second video image from RGB space to the hexagonal cone model color space, i.e., HSV space, the embodiment of the invention can adaptively select a corresponding algorithm to process each channel of the image, thereby retaining effective information to a greater extent. The third video image in the embodiment of the invention includes a first hue channel image, a first saturation channel image and a first brightness channel image, corresponding respectively to the hue, saturation and brightness channels of HSV space. Illustratively, the calculation formulas for converting the second video image from RGB space to HSV space in the embodiment of the present invention are shown in the following formulas (1) to (3):

$$V(x) = \max_{c} P^{c}(x) \quad (1)$$

$$S(x) = \frac{\max_{c} P^{c}(x) - \min_{c} P^{c}(x)}{\max_{c} P^{c}(x)} \quad (2)$$

$$H(x) = \begin{cases} 60° \cdot \dfrac{P^{g}(x) - P^{b}(x)}{\Delta(x)} \bmod 360°, & \max_{c} P^{c}(x) = P^{r}(x) \\[4pt] 60° \cdot \left(\dfrac{P^{b}(x) - P^{r}(x)}{\Delta(x)} + 2\right), & \max_{c} P^{c}(x) = P^{g}(x) \\[4pt] 60° \cdot \left(\dfrac{P^{r}(x) - P^{g}(x)}{\Delta(x)} + 4\right), & \max_{c} P^{c}(x) = P^{b}(x) \end{cases} \quad (3)$$

where c ∈ {r, g, b} denotes each color channel of the second video image, P^c denotes the single-channel image of channel c, and Δ(x) = max_c P^c(x) − min_c P^c(x).
Meanwhile, the embodiment of the invention calculates the high-brightness dark channel image according to the first video image, so as to fuse the first brightness channel image with the high-brightness dark channel image to obtain the second brightness channel image. Specifically, in the embodiment of the invention, the high-brightness dark channel map is an extension of the dark channel prior, obtained by combining the dark channel map with its brightness information. The calculation formula for the high-brightness dark channel image of the first video image in the embodiment of the invention is shown in the following formula (4):

$$J(x) = \max_{y \in \Omega(x)} \left( \max_{c} P^{c}(y) \right) \quad (4)$$

where c ∈ {r, g, b} denotes each color channel of the first video image, P^c denotes the single-channel image of channel c, and Ω(x) denotes a sliding window in P^c centered at pixel x. In the embodiment of the present invention, the sliding window size is set to 32; the maximum value is filtered and fused with the surrounding ambient light to obtain the high-brightness dark channel map J(x) of the image.
Correspondingly, the embodiment of the invention performs fusion processing on the first brightness channel image of the third video image and the high-brightness dark channel image, for example by multiplying the calculated high-brightness dark channel image J(x) with the first brightness channel image V(x) to enhance the brightness of the corresponding dark regions, thereby obtaining the second brightness channel image. Further, in the embodiment of the invention, the first hue channel image is input into the first neural network model for processing to obtain the second hue channel image, and the first saturation channel image is input into the second neural network model for processing to obtain the second saturation channel image. Specifically, in the embodiment of the present invention, the first neural network model and the second neural network model may be generative adversarial network models, self-supervised learning neural network models, autoencoder adversarial network models, or the like. It is easy to understand that in the embodiment of the present invention, the first hue channel image is input into the first neural network model for processing so as to output an H-channel image effectively restored in the color gamut space, i.e., the second hue channel image. Similarly, in the embodiment of the invention, the first saturation channel image is input into the second neural network model for processing, so that the detail structure information of the first saturation channel image is enhanced and the second saturation channel image is obtained. Finally, the embodiment of the invention obtains the target enhanced image according to the second brightness channel image, the second hue channel image and the second saturation channel image: the corresponding target enhanced image is constructed by combining these three channel images, completing the image enhancement of the video frame image. It is easy to understand that the embodiment of the invention performs image region screening and targeted restoration in multiple stages according to the diversity of acquisition scenes: a threshold is first set adaptively to divide the multi-scene images, and a corresponding enhancement mode is selected to adaptively process the selected images and their sub-channels, so that the robustness and applicability of image enhancement are effectively improved and the image enhancement effect is effectively improved.
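As a concrete illustration of the channel splitting and high-brightness dark channel computation described above, the following Python sketch (assuming OpenCV and NumPy) uses morphological dilation as the sliding-window maximum filter; the function name and the max-filter reading of formula (4) are assumptions, not the patent's verified implementation:

```python
import cv2
import numpy as np

def split_hsv_and_highlight_dark_channel(bgr, win=32):
    # Steps S120/S130 sketch: convert the second video image from the
    # three-primary color space (BGR in OpenCV) to HSV space.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)  # first hue / saturation / brightness channel images

    # High-brightness dark channel map J(x): per-pixel maximum over the color
    # channels, then a 32 x 32 sliding-window maximum filter, implemented here
    # with dilation. This follows formula (4) as reconstructed above.
    per_pixel_max = bgr.max(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (win, win))
    j = cv2.dilate(per_pixel_max, kernel)
    return h, s, v, j
```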
Referring to fig. 2, in some embodiments of the present invention, before performing the step of performing scene division on a first video image in a video to be processed according to a preset division threshold to obtain a second video image, the video frame image enhancement method provided by the embodiment of the present invention further includes, but is not limited to, the following steps:
S210: and carrying out differential operation on pixel points corresponding to two continuous frames of images in the video to be processed to obtain a measurement index, and carrying out binarization processing on the corresponding pixel points according to the measurement index and a preset measurement threshold value to obtain a binarized image.
S220: and carrying out averaging treatment according to the binarized image to obtain a priori average value.
S230: and screening the images with the prior mean value smaller than a preset prior threshold value to obtain a first video image.
In this embodiment, before the first video image is subjected to scene division, the embodiment of the present invention first screens out the available valid images, i.e., the first video images, based on a variability metric. Specifically, a differential operation is first performed on the pixel points corresponding to two consecutive frames in the video to be processed to obtain a measurement index, and the corresponding pixel points are binarized according to the measurement index and a preset measurement threshold to obtain a binarized image. It is readily appreciated that the change in the video image is small when no object enters the scene. Therefore, the embodiment of the invention performs a differential operation on the corresponding pixel points of two consecutive frames P_n(x, y) and P_{n-1}(x, y) in the video to be processed, and takes the absolute value of the gray value difference as the measurement index D_n, as shown in the following formula (5):

$$D_n(x, y) = \left| P_n(x, y) - P_{n-1}(x, y) \right| \quad (5)$$

Correspondingly, the embodiment of the invention sets a threshold K, i.e., the preset measurement threshold, and binarizes the corresponding pixel points according to K and the measurement index D_n to obtain the binarized image R_n, as shown in the following formula (6):

$$R_n(x, y) = \begin{cases} 1, & D_n(x, y) > K \\ 0, & D_n(x, y) \le K \end{cases} \quad (6)$$

The preset measurement threshold is obtained by setting an initial threshold and approximating it through multiple rounds of iterative operations on valid image pairs containing dynamic targets and invalid image pairs.
Further, in the embodiment of the invention, the prior mean value is obtained by averaging the binarized image, and the images whose prior mean value is smaller than the preset prior threshold are screened out to obtain the first video image. That is, the embodiment of the invention averages the binarized image obtained above to get the corresponding prior mean value; when the prior mean value is smaller than the preset prior threshold T, i.e., the image has an effective area of a certain extent, it is judged that a target has entered the video image area, and the valid image, i.e., the first video image, is screened out. It is easy to understand that grabbing valid images for processing based on the variability metric can effectively reduce the amount of computation.
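The screening pipeline of steps S210 to S230 can be sketched as follows (assuming OpenCV/NumPy); the threshold values K and T are hypothetical placeholders rather than values from the patent, and the comparison direction follows the text above:

```python
import cv2
import numpy as np

def screen_valid_frames(frames, K=25, T=0.05):
    # frames: list of BGR video frames. Returns the screened "first video images".
    valid = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        d = cv2.absdiff(gray, prev)            # measurement index D_n, formula (5)
        r = (d > K).astype(np.float32)         # binarized image R_n, formula (6)
        prior_mean = r.mean()                  # averaging treatment of step S220
        if prior_mean < T:                     # comparison direction per the text above
            valid.append(frame)
        prev = gray
    return valid
```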
Referring to fig. 3, in some embodiments of the present invention, a first video image in a video to be processed is subjected to scene division according to a preset division threshold to obtain a second video image, including but not limited to the following steps:
S310: and calculating corresponding dark channel high darkness values under each scene according to the preset image data set. The preset image data set comprises a plurality of strong light scene sample images, clear scene sample images and low-illumination scene sample images.
S320: and fusing the dark channel high darkness value with the hexagonal cone model color channels of the corresponding images in the preset image dataset to obtain a channel mean value, so as to determine a preset dividing threshold value according to the channel mean value.
S330: and dividing the first video image by a preset dividing threshold value to obtain a second video image. Wherein the second video image comprises a low-light scene image.
In this embodiment, the embodiment of the present invention calculates the corresponding dark channel high darkness value for each scene according to a preset image data set, and then fuses the dark channel high darkness values with the hexagonal cone model color channels of the corresponding images in the preset image data set to obtain channel mean values, so as to determine the preset division threshold according to the channel mean values. Specifically, the preset image data set in the embodiment of the invention comprises a plurality of strong-light scene sample images, clear scene sample images and low-illumination scene sample images. The embodiment of the invention acquires a plurality of images from a scene data set containing strong-light, clear and low-illumination images, i.e., the preset image data set, and then calculates the dark channel high darkness value of each image in the different classified scenes. Then, the embodiment of the invention fuses the calculated dark channel high darkness value of each scene image with its HSV space channels to obtain a channel mean value, and determines the corresponding preset division threshold according to the channel mean values. For example, if the dark channel high darkness values of the strong-light scene sample images are A1, A2, A3, …, those of the clear scene sample images are B1, B2, B3, …, and those of the low-illumination scene sample images are C1, C2, C3, …, then taking the channel mean of each class gives the preset division thresholds (A1+A2+A3+…)/n, (B1+B2+B3+…)/n and (C1+C2+C3+…)/n, respectively. Further, in the embodiment of the invention, the first video image is divided by the preset division thresholds to obtain the second video image. Specifically, the embodiment of the invention divides the valid video frame images acquired in the first stage, i.e., the first video images, according to the obtained preset division thresholds, so that the first video images can be divided into corresponding strong-light scene images, clear scene images and low-illumination scene images. Accordingly, in the embodiment of the present invention, the second video image comprises a low-illumination scene image. It should be noted that the channel mean values obtained by fusing the dark channel high darkness values of images acquired in different scenes with their HSV space channels fall in different intervals, and the embodiment of the present invention divides the first video images into strong-light, clear and low-visibility classes by setting the preset division thresholds. Correspondingly, the embodiment of the invention adaptively divides multi-scene images based on these thresholds, so that low-illumination images are processed more effectively, the quality of video images acquired in multiple types of scenes is effectively improved, and the adaptability and robustness of video frame image enhancement are effectively improved.
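The threshold derivation and division of steps S310 to S330 might look like the following sketch; the per-class averaging follows the A/B/C example above, while the nearest-mean assignment rule is an assumption about how the thresholds partition the frames:

```python
import numpy as np

def preset_division_thresholds(dark_vals_bright, dark_vals_clear, dark_vals_low):
    # Channel means per scene class: (A1+A2+...)/n, (B1+...)/n, (C1+...)/n
    return (float(np.mean(dark_vals_bright)),
            float(np.mean(dark_vals_clear)),
            float(np.mean(dark_vals_low)))

def classify_scene(dark_value, t_bright, t_clear, t_low):
    # Assign a first video image to the scene class whose threshold it is
    # closest to; low-illumination frames become the "second video image".
    centers = {"strong_light": t_bright, "clear": t_clear, "low_light": t_low}
    return min(centers, key=lambda name: abs(dark_value - centers[name]))
```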
Referring to fig. 4, in some embodiments of the present invention, the first luminance channel image and the high luminance dark channel image of the third video image are fused to obtain a second luminance channel image, including but not limited to the following steps:
S410: and carrying out statistical analysis according to the low-illumination scene sample images to determine a cold color gamut space threshold. Wherein the cold color gamut space threshold comprises a first threshold and a second threshold, and the first threshold is less than the second threshold.
S420: each pixel of the third video image is compared to the cold gamut space threshold to determine whether the pixel is greater than the first threshold and less than the second threshold.
S430: and when the pixel point is determined to be larger than the first threshold value and smaller than the second threshold value, fusing the brightness channel corresponding to the pixel point with the high-brightness dark channel image to obtain a second brightness channel image.
In this embodiment, the embodiment of the present invention first performs statistical analysis according to the low-illumination scene sample images to determine the cold color gamut space threshold. Specifically, by counting and analyzing the low-illumination scene sample images, the embodiment of the invention determines that the H channel corresponding to the low-illumination areas of low-illumination scene images lies in the cold color gamut space. Therefore, the embodiment of the invention performs statistical analysis according to the low-illumination scene sample images to determine the corresponding cold color gamut space threshold. The cold color gamut space threshold in the embodiment of the invention comprises a first threshold and a second threshold, and the first threshold is smaller than the second threshold. Next, the embodiment of the invention compares each pixel point of the third video image with the cold color gamut space threshold to determine whether the corresponding pixel point is greater than the first threshold and smaller than the second threshold, i.e., lies within the cold color gamut threshold space. Further, when it is determined that a pixel point is greater than the first threshold and smaller than the second threshold, the embodiment of the invention fuses the brightness channel of the corresponding pixel point with the high-brightness dark channel image to obtain the second brightness channel image. Exemplarily, the embodiment of the present invention sets a low threshold k_1 (the first threshold) and a high threshold k_2 (the second threshold) in the H-channel cold color gamut space, compares the third video image with the set cold color gamut space threshold pixel by pixel, and, when the H-channel value of a pixel lies within the cold color gamut threshold space, i.e., is greater than the first threshold and smaller than the second threshold, fuses the pixel's V channel (the first brightness channel image) with the high-brightness dark channel for brightness enhancement, obtaining the second brightness channel image as shown in the following formula (7):

$$V'(x) = \begin{cases} J(x) \cdot V(x), & k_1 < H(x) < k_2 \\ V(x), & \text{otherwise} \end{cases} \quad (7)$$
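A minimal sketch of the cold-gamut-gated fusion of formula (7) follows; the thresholds k1 and k2 are hypothetical placeholders, and the normalization of J(x) to [0, 1] before the multiplication is an assumption:

```python
import numpy as np

def fuse_cold_gamut(h, v, j, k1=90, k2=150):
    # k1/k2 are hypothetical H-channel thresholds; the patent derives them
    # statistically from low-illumination scene samples (step S410).
    mask = (h > k1) & (h < k2)                 # cold color gamut test of step S420
    vf = v.astype(np.float32) / 255.0
    jf = j.astype(np.float32) / 255.0          # assumed normalization of J(x)
    vf[mask] = jf[mask] * vf[mask]             # formula (7): V'(x) = J(x) * V(x)
    return (np.clip(vf, 0.0, 1.0) * 255).astype(np.uint8)
```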
referring to fig. 5, in some embodiments of the present invention, a first hue channel image is input into a first neural network model to obtain a second hue channel image, including, but not limited to, the following steps:
S510: and constructing a first generative adversarial network, and performing adversarial training by taking a clear scene sample image as a positive sample to obtain a first generative adversarial model.
S520: and inputting the first hue channel image into the first generative adversarial model to enhance the hue channel, so as to obtain a second hue channel image.
In this embodiment, the embodiment of the present invention constructs a first generative adversarial network and performs adversarial training with the clear scene sample images as positive samples to obtain a first generative adversarial model, and then inputs the first hue channel image into the first generative adversarial model for hue channel enhancement to obtain the second hue channel image. Specifically, the embodiment of the invention inputs the clear scene sample images into a framework with U-Net as the backbone, so as to learn the mapping curve from brightness-enhanced low-quality images to clear images. The network model constructed by the embodiment of the invention comprises a generative model based on the U-Net structure and a discriminative model. Correspondingly, the embodiment of the invention inputs the H information of the brightness-enhanced HSV image into the low-illumination enhancement network of the model, learns the mapping from low-quality images to clear images, and performs adversarial training with the H information of clear, normally illuminated HSV images as positive samples. The first generative adversarial network mainly focuses on learning tone curves so as to restore the true colors of the low-illumination image. Finally, the embodiment of the invention inputs the first hue channel image into the first generative adversarial model for hue channel enhancement, obtaining an H-channel image effectively restored in the color gamut space, i.e., the second hue channel image.
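As an illustration only, a toy stand-in for the described generator might look like the following (assuming PyTorch); the layer sizes, the residual skip path, and the interface are assumptions, and a real U-Net backbone would use deeper encoder/decoder stacks with skip concatenations:

```python
import torch
import torch.nn as nn

class ToyHueGenerator(nn.Module):
    # Illustrative stand-in for the U-Net-backbone generator described above.
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1)
    def forward(self, h):                          # h: (B, 1, H, W) hue channel in [0, 1]
        out = self.dec(self.enc(h))
        return torch.clamp(h + out, 0.0, 1.0)      # residual path: learn a tone-curve offset

# Adversarial training (sketch): the discriminator sees the H channel of clear,
# normally illuminated HSV images as positive samples.
# h2 = ToyHueGenerator()(h1)  # second hue channel image from the first
```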
Referring to fig. 6, in some embodiments of the present invention, the first saturation channel image is input to a second neural network model to obtain a second saturation channel image, including, but not limited to, the steps of:
S610: and constructing a second generative adversarial network, and graying the clear scene sample image to obtain a gray sample image, so as to perform adversarial training on the second generative adversarial network with the gray sample image as a positive sample to obtain a second generative adversarial model.
S620: and carrying out graying treatment on the first saturation channel image to obtain a first gray level image.
S630: inputting the first gray level image into the second generative adversarial model to obtain edge information.
S640: and fusing the edge information and the first saturation channel image to obtain a second saturation channel image.
In this embodiment, the embodiment of the present invention first constructs a second generative adversarial network and performs adversarial training with the gray-scale versions of the clear scene sample images as positive samples to obtain a second generative adversarial model, so that corresponding edge information can be generated by the second generative adversarial model and then fused with the first saturation channel image to obtain the second saturation channel image. Specifically, in the embodiment of the invention, the clear scene sample images are first grayed to obtain the corresponding gray sample images. Then, the embodiment of the invention performs adversarial training on the second generative adversarial network with the gray sample images as positive samples to construct the second generative adversarial model. Further, in the embodiment of the invention, the first saturation channel image is grayed, and the resulting first gray image is input into the second generative adversarial model to generate clear edge information. Finally, the embodiment of the invention fuses the edge information with the first saturation channel image, thereby restoring the detail structure of the S channel and obtaining the second saturation channel image. It is easy to understand that the embodiment of the invention introduces the edge information of grayed clear scene images to fuse the corresponding detail structure information into the first saturation channel image, thereby effectively enhancing and restoring the saturation channel image and retaining effective information to a greater extent.
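A sketch of the S-channel enhancement of steps S610 to S640 follows (assuming NumPy/PyTorch); treating the graying of the single-channel saturation image as normalization, the additive fusion, and the edge_model interface are all assumptions:

```python
import numpy as np
import torch

def enhance_saturation(s, edge_model):
    # s: (H, W) uint8 first saturation channel image; edge_model: the trained
    # second generative adversarial model's generator (hypothetical interface).
    gray = torch.from_numpy(s.astype(np.float32) / 255.0)[None, None]  # first gray image
    with torch.no_grad():
        edges = edge_model(gray)[0, 0].numpy()        # generated edge information
    # Fuse the edge information with the first saturation channel image.
    s2 = np.clip(s.astype(np.float32) + 255.0 * edges, 0, 255)
    return s2.astype(np.uint8)                        # second saturation channel image
```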
Referring to fig. 7, in some embodiments of the present invention, a target enhanced image is derived from a second luminance channel image, a second hue channel image, and a second saturation channel image, including, but not limited to, the steps of:
S710: and constructing a fourth video image according to the second brightness channel image, the second hue channel image and the second saturation channel image.
S720: and converting the fourth video image from the hexagonal cone model color space to a three-primary color space to obtain the target enhanced image.
In this embodiment, the embodiment of the present invention performs image space conversion according to the second brightness channel image, the second hue channel image and the second saturation channel image to obtain the target enhanced image. Specifically, a fourth video image is first constructed from the second brightness channel image, the second hue channel image and the second saturation channel image; that is, the fourth video image is obtained by combining these three channel images. The fourth video image in the embodiment of the invention is a hexagonal cone model color space (HSV) image. Therefore, the embodiment of the invention converts the fourth video image from the hexagonal cone model color space to the three-primary color space to obtain the target enhanced image, as shown in the following formulas (8) to (12):

$$C = V'(x) \cdot S(x) \quad (8)$$

$$X = C \cdot \left(1 - \left| \left( \tfrac{H(x)}{60°} \right) \bmod 2 - 1 \right| \right) \quad (9)$$

$$m = V'(x) - C \quad (10)$$

$$(R', G', B') = \begin{cases} (C, X, 0), & 0° \le H(x) < 60° \\ (X, C, 0), & 60° \le H(x) < 120° \\ (0, C, X), & 120° \le H(x) < 180° \\ (0, X, C), & 180° \le H(x) < 240° \\ (X, 0, C), & 240° \le H(x) < 300° \\ (C, 0, X), & 300° \le H(x) < 360° \end{cases} \quad (11)$$

$$I(x) = \big( (R' + m) \cdot 255,\ (G' + m) \cdot 255,\ (B' + m) \cdot 255 \big) \quad (12)$$

where C represents the product of saturation and brightness, i.e., the chromaticity; X represents an intermediate variable calculated from the relationship between the chromaticity C and the hue H; m represents the brightness channel minus the chromaticity value; (R′, G′, B′) represent the red, green and blue component values in the three-primary color space; and I(x) represents the target enhanced image.
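In practice, the reconstruction of steps S710 and S720 reduces to a channel merge and a color-space conversion; in this short sketch, OpenCV's built-in conversion stands in for the explicit formulas (8) to (12):

```python
import cv2

def build_target_enhanced_image(h2, s2, v2):
    # Construct the fourth video image from the three enhanced channels and
    # convert it back to the three-primary color space (BGR in OpenCV).
    hsv = cv2.merge([h2, s2, v2])                 # fourth video image (HSV)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)   # target enhanced image I(x)
```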
Referring to fig. 8, fig. 8 is a schematic diagram of an overall architecture of a video frame image enhancement method according to an embodiment of the present invention. Taking a video image enhancement application scenario as an example, the embodiment of the present invention first screens out available images based on a variability metric. Specifically, in the embodiment of the invention, the pixel points corresponding to two continuous frames of images in the video to be processed are subjected to differential operation to obtain the measurement index, the corresponding pixel points are subjected to binarization processing according to the measurement index and the preset measurement threshold to obtain the binarized image, and the prior mean value is obtained by carrying out averaging processing according to the binarized image, so that the image with the prior mean value smaller than the preset prior threshold is screened to obtain the first video image. Then, the embodiment of the invention performs scene division on the first video image in the video to be processed according to the preset division threshold value to obtain the second video image. Specifically, the embodiment of the invention calculates the corresponding dark channel high darkness value under each scene according to the preset image data set, so that the dark channel high darkness value is fused with the hexagonal cone model color channel of the corresponding image in the preset image data set to obtain the channel mean value, the preset dividing threshold value is determined according to the channel mean value, and the first video image is divided through the preset dividing threshold value to obtain the second video image. The second video image in the embodiment of the invention comprises a low-illumination scene image. Correspondingly, the preset image data set in the embodiment of the invention comprises a plurality of strong light scene sample images, clear scene sample images and low-illumination scene sample images.
Meanwhile, the embodiment of the invention calculates the high-brightness dark channel image according to the first video image so as to fuse the first brightness channel image and the high-brightness dark channel image of the third video image to obtain the second brightness channel image. Specifically, the embodiment of the invention firstly performs statistical analysis according to the low-illumination scene sample image to determine a cold color gamut space threshold value, wherein the cold color gamut space threshold value comprises a first threshold value and a second threshold value, and the first threshold value is smaller than the second threshold value. Next, the embodiment of the invention compares each pixel point of the third video image with the cold color gamut space threshold to determine whether the pixel point is greater than the first threshold and less than the second threshold. And if the corresponding pixel point is larger than the first threshold value and smaller than the second threshold value, fusing the brightness channel corresponding to the pixel point with the high-brightness dark channel image to obtain a second brightness channel image.
Further, in the embodiment of the invention, the first hue channel image is input into the first neural network model to obtain the second hue channel image, and the first saturation channel image is input into the second neural network model to obtain the second saturation channel image. Specifically, in the embodiment of the invention, a first generative adversarial network is constructed and adversarial training is conducted with the clear scene sample images as positive samples to obtain a first generative adversarial model, and then the first hue channel image is input into the first generative adversarial model for hue channel enhancement to obtain the second hue channel image. Meanwhile, the embodiment of the invention constructs a second generative adversarial network and grays the clear scene sample images, performing adversarial training on the second generative adversarial network with the resulting gray sample images as positive samples to obtain a second generative adversarial model. Then, the embodiment of the invention grays the first saturation channel image, inputs the resulting first gray image into the second generative adversarial model to generate clear edge information, and fuses the edge information with the first saturation channel image to obtain the second saturation channel image. Finally, according to the embodiment of the invention, the fourth video image constructed from the second brightness channel image, the second hue channel image and the second saturation channel image is converted from the hexagonal cone model color space to the three-primary color space to obtain the target enhanced image, so that the image enhancement of the video frame image is completed, the robustness and applicability of image enhancement are effectively improved, and the image enhancement effect is effectively improved.
Referring to fig. 9, an embodiment of the present invention further provides a video frame image enhancement system, including:
the first module 810 is configured to perform scene division on a first video image in a video to be processed according to a preset division threshold, so as to obtain a second video image. The preset dividing threshold value is determined through dark channel high darkness values of sample images of different scenes.
A second module 820 is configured to convert the second video image from the three-primary color space to the hexagonal cone model color space to obtain a third video image. The third video image comprises a first hue channel image, a first saturation channel image and a first brightness channel image.
And a third module 830, configured to calculate a high-brightness dark channel image according to the first video image, so as to perform fusion processing on the first brightness channel image and the high-brightness dark channel image of the third video image, and obtain a second brightness channel image.
A fourth module 840 is configured to input the first hue channel image into the first neural network model to obtain a second hue channel image.
A fifth module 850 is configured to input the first saturation channel image into the second neural network model, and obtain a second saturation channel image.
A sixth module 860 is configured to obtain a target enhanced image according to the second luminance channel image, the second hue channel image, and the second saturation channel image.
The content of the method embodiment of the invention is suitable for the system embodiment, the specific function of the system embodiment is the same as that of the method embodiment, and the achieved beneficial effects are the same as those of the method.
Referring to fig. 10, an embodiment of the present invention further provides a video frame image enhancement system, including:
at least one processor 910.
At least one memory 920 for storing at least one program.
The at least one program, when executed by the at least one processor 910, causes the at least one processor 910 to implement a video frame image enhancement method as described in the above embodiments.
The content of the method embodiment of the invention is suitable for the system embodiment, the specific function of the system embodiment is the same as that of the method embodiment, and the achieved beneficial effects are the same as those of the method.
An embodiment of the present invention also provides a computer-readable storage medium storing computer-executable instructions for execution by one or more control processors, for example to perform the steps of the video frame image enhancement method described in the above embodiments.
The content of the method embodiment of the invention is suitable for the system embodiment, the specific function of the system embodiment is the same as that of the method embodiment, and the achieved beneficial effects are the same as those of the method.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be capable of operation in sequences other than those illustrated or described herein, for example. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "and/or" for describing the association relationship of the association object, the representation may have three relationships, for example, "a and/or B" may represent: only a, only B and both a and B are present, wherein a, B may be singular or plural. The character "/" generally indicates that the context-dependent object is an "or" relationship. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b or c may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is merely a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or in the form of software functional units.
Those of ordinary skill in the art will appreciate that all or some of the steps and systems in the methods disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, a digital signal processor, or a microprocessor, as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
The step numbers in the above method embodiments are set for convenience of description only and do not limit the order of the steps in any way; the execution order of the steps in the embodiments may be adaptively adjusted as understood by those skilled in the art.
While the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments; those skilled in the art can make various equivalent modifications and substitutions without departing from the spirit of the present invention, and such equivalent modifications and substitutions are intended to fall within the scope of the present invention as defined by the appended claims.

Claims (10)

1. A video frame image enhancement method, comprising the following steps:
performing scene division on a first video image in a video to be processed according to a preset division threshold to obtain a second video image, wherein the preset division threshold is determined from dark channel high-darkness values of sample images of different scenes;
converting the second video image from a three-primary color space to a hexagonal cone model color space to obtain a third video image, wherein the third video image comprises a first hue channel image, a first saturation channel image, and a first brightness channel image;
calculating a high-brightness dark channel image from the first video image, and fusing the first brightness channel image of the third video image with the high-brightness dark channel image to obtain a second brightness channel image;
inputting the first hue channel image into a first neural network model to obtain a second hue channel image;
inputting the first saturation channel image into a second neural network model to obtain a second saturation channel image;
and obtaining a target enhanced image according to the second brightness channel image, the second hue channel image and the second saturation channel image.
2. The video frame image enhancement method according to claim 1, wherein before performing scene division on the first video image in the video to be processed according to the preset division threshold to obtain the second video image, the method further comprises:
performing a difference operation on corresponding pixel points of two consecutive frames in the video to be processed to obtain a measurement index, and performing binarization processing on the corresponding pixel points according to the measurement index and a preset measurement threshold to obtain a binarized image;
averaging the binarized image to obtain a prior mean value;
and screening out the images whose prior mean value is smaller than a preset prior threshold to obtain the first video image.
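By way of illustration only, a minimal sketch of the frame-difference screening above, assuming 8-bit grayscale frames; the measurement threshold of 25 and the prior threshold of 0.1 are hypothetical values chosen for the sketch, not values fixed by the claim.

```python
import cv2
import numpy as np

def prior_mean(prev_gray, curr_gray, measure_thresh=25):
    """Binarize the inter-frame difference and average it: the fraction
    of pixels that changed between two consecutive frames."""
    diff = cv2.absdiff(curr_gray, prev_gray)  # per-pixel measurement index
    _, binary = cv2.threshold(diff, measure_thresh, 1, cv2.THRESH_BINARY)
    return float(binary.mean())               # prior mean in [0, 1]

def select_first_video_images(gray_frames, measure_thresh=25, prior_thresh=0.1):
    """Keep frames whose prior mean stays below the preset prior threshold."""
    return [curr for prev, curr in zip(gray_frames, gray_frames[1:])
            if prior_mean(prev, curr, measure_thresh) < prior_thresh]
```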
3. The video frame image enhancement method according to claim 1, wherein the performing scene division on the first video image in the video to be processed according to the preset division threshold to obtain the second video image comprises:
calculating the dark channel high-darkness value corresponding to each scene according to a preset image data set, wherein the preset image data set comprises a plurality of strong-light scene sample images, clear scene sample images, and low-illumination scene sample images;
fusing the dark channel high-darkness value with the hexagonal cone model color channel of the corresponding image in the preset image data set to obtain a channel mean value, and determining the preset division threshold according to the channel mean value;
and dividing the first video image by the preset division threshold to obtain the second video image, wherein the second video image comprises a low-illumination scene image.
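By way of illustration only, a sketch of a dark-channel statistic that could underlie the scene division above, assuming 8-bit BGR images; the patch size, the darkness cutoff of 30, and the division threshold of 0.5 are hypothetical, and the exact definition of the high-darkness value is not fixed by the claim.

```python
import cv2
import numpy as np

def dark_channel(img_bgr, patch=15):
    """Classical dark channel: per-pixel channel minimum followed by a
    local minimum filter over a patch."""
    min_rgb = np.min(img_bgr, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def high_darkness_value(img_bgr, dark_thresh=30, patch=15):
    """Illustrative 'high-darkness value': the fraction of strongly dark
    pixels in the dark channel."""
    dc = dark_channel(img_bgr, patch)
    return float((dc < dark_thresh).mean())

def is_low_illumination(img_bgr, division_thresh=0.5):
    """Route a frame to the low-illumination branch when its high-darkness
    value exceeds the preset division threshold."""
    return high_darkness_value(img_bgr) > division_thresh
```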
4. The video frame image enhancement method according to claim 3, wherein the fusing the first brightness channel image of the third video image with the high-brightness dark channel image to obtain a second brightness channel image comprises:
performing statistical analysis on the low-illumination scene sample images to determine a cold color gamut space threshold, wherein the cold color gamut space threshold comprises a first threshold and a second threshold, and the first threshold is smaller than the second threshold;
comparing each pixel point of the third video image with the cold color gamut space threshold to determine whether the pixel point is greater than the first threshold and smaller than the second threshold;
and when the pixel point is determined to be greater than the first threshold and smaller than the second threshold, fusing the brightness channel corresponding to the pixel point with the high-brightness dark channel image to obtain the second brightness channel image.
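By way of illustration only, a sketch of the conditional brightness fusion above, assuming the interval test is applied to the hue channel and that fusion is a weighted average; the channel tested, the weight alpha, and the threshold values are all assumptions, as the claim does not fix them.

```python
import numpy as np

def fuse_brightness(h, v, bright_dark, t1, t2, alpha=0.5):
    """Fuse V with the high-brightness dark channel image only on pixels
    whose hue falls inside the cold gamut interval (t1, t2).

    h, v, bright_dark: same-shape uint8 arrays; t1 < t2.
    """
    mask = (h > t1) & (h < t2)
    fused = v.astype(np.float32)
    fused[mask] = (alpha * fused[mask]
                   + (1 - alpha) * bright_dark.astype(np.float32)[mask])
    return np.clip(fused, 0, 255).astype(np.uint8)
```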
5. The video frame image enhancement method according to claim 3, wherein the inputting the first hue channel image into a first neural network model to obtain a second hue channel image comprises:
constructing a first generative adversarial network, and performing adversarial training with the clear scene sample images as positive samples to obtain a first generative adversarial model;
and inputting the first hue channel image into the first generative adversarial model to enhance the hue channel, so as to obtain the second hue channel image.
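By way of illustration only, a toy PyTorch generator standing in for the first generative adversarial model above; the architecture is invented for this sketch, and the adversarial training loop (with the clear scene samples as positives) is omitted.

```python
import torch
import torch.nn as nn

class HueGenerator(nn.Module):
    """Toy fully-convolutional generator; the real architecture is not
    specified in the text."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # hue in [0, 1]
        )

    def forward(self, h):
        return self.net(h)

# Inference only: h1 is the first hue channel image normalized to [0, 1]
# and shaped (1, 1, H, W); random data stands in for it here.
generator = HueGenerator().eval()
with torch.no_grad():
    h1 = torch.rand(1, 1, 64, 64)
    h2 = generator(h1)  # second hue channel image
```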
6. The video frame image enhancement method according to claim 3, wherein the inputting the first saturation channel image into a second neural network model to obtain a second saturation channel image comprises:
constructing a second generative adversarial network, and graying the clear scene sample images to obtain gray sample images, so as to perform adversarial training on the second generative adversarial network with the gray sample images as positive samples to obtain a second generative adversarial model;
performing graying processing on the first saturation channel image to obtain a first gray-level image;
inputting the first gray-level image into the second generative adversarial model to obtain edge information;
and fusing the edge information with the first saturation channel image to obtain the second saturation channel image.
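By way of illustration only, a sketch of the final edge fusion step above, assuming additive fusion with a weight beta; the weight and the random stand-in arrays are hypothetical, and the second generative adversarial model that would actually produce the edge information is not shown.

```python
import numpy as np

def fuse_saturation(s1, edge_info, beta=0.3):
    """Additively fuse edge information into the first saturation channel
    image to obtain the second saturation channel image."""
    fused = s1.astype(np.float32) + beta * edge_info.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

# edge_info would come from the second generative adversarial model applied
# to the grayed saturation channel; random data stands in for both inputs.
s1 = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
edges = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
s2 = fuse_saturation(s1, edges)
```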
7. The video frame image enhancement method according to claim 1, wherein the obtaining a target enhanced image according to the second brightness channel image, the second hue channel image, and the second saturation channel image comprises:
constructing a fourth video image according to the second brightness channel image, the second hue channel image, and the second saturation channel image;
and converting the fourth video image from the hexagonal cone model color space to the three-primary color space to obtain the target enhanced image.
8. A video frame image enhancement system, comprising:
a first module, configured to perform scene division on a first video image in a video to be processed according to a preset division threshold to obtain a second video image, wherein the preset division threshold is determined from dark channel high-darkness values of sample images of different scenes;
a second module, configured to convert the second video image from a three-primary color space to a hexagonal cone model color space to obtain a third video image, wherein the third video image comprises a first hue channel image, a first saturation channel image, and a first brightness channel image;
a third module, configured to calculate a high-brightness dark channel image from the first video image, and to fuse the first brightness channel image of the third video image with the high-brightness dark channel image to obtain a second brightness channel image;
a fourth module, configured to input the first hue channel image into a first neural network model to obtain a second hue channel image;
a fifth module, configured to input the first saturation channel image into a second neural network model to obtain a second saturation channel image;
and a sixth module, configured to obtain a target enhanced image according to the second brightness channel image, the second hue channel image, and the second saturation channel image.
9. A video frame image enhancement system, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the video frame image enhancement method of any one of claims 1 to 7.
10. A computer storage medium storing a processor-executable program, wherein the processor-executable program, when executed by a processor, implements the video frame image enhancement method of any one of claims 1 to 7.