CN114331920A - Image processing method and device, storage medium and electronic device

Info

Publication number
CN114331920A
CN114331920A
Authority
CN
China
Prior art keywords
image
transmittance
processed
determining
filtering
Prior art date
Legal status
Granted
Application number
CN202210221220.1A
Other languages
Chinese (zh)
Other versions
CN114331920B (en)
Inventor
刘硕
俞克强
王松
邵晨
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202210221220.1A
Publication of CN114331920A
Application granted
Publication of CN114331920B
Status: Active

Landscapes

  • Image Processing (AREA)

Abstract

An embodiment of the invention provides an image processing method and device, a storage medium, and an electronic device. The method includes: determining a transmittance weight map of an image to be processed; calculating the transmittance of the image to be processed using the transmittance weight map, and determining a transmittance map; filtering the transmittance map to determine a filtered image; and performing fog-penetration processing on the image to be processed based on the filtered image, and determining a target image. The method and device solve the prior-art problems of over-enhancement of solid-color regions in fog-penetration-processed images and of video flicker, and improve the visual quality of the video.

Description

Image processing method and device, storage medium and electronic device
Technical Field
The present invention relates to the field of image processing, and in particular to an image processing method and device, a storage medium, and an electronic device.
Background
Visible light is absorbed or scattered by airborne particles such as tiny water droplets, so images captured by an image or video acquisition device become unclear, which hampers subsequent image processing and application scenarios. The image therefore needs fog-penetration processing to restore clarity.
Camera fog-penetration techniques fall into physical fog penetration and digital fog penetration. Physical (optical) fog penetration is realized mainly by the camera lens; high-definition fog-penetration lenses are generally built on large motorized zoom lenses, are expensive, and are typically deployed at sites such as ports and forest lookout points.
Digital fog penetration is implemented in the camera or in back-end software. It is a back-end image restoration technique designed around a model of human visual perception, is low-cost and easy to deploy, and is therefore well suited to wide use in city surveillance. The classical digital fog-penetration algorithms share a common problem: solid-color regions in an image (e.g., sky regions or dense-fog regions) are handled poorly, often producing large-area texture and blocking artifacts, and when applied to video fog penetration they suffer from flicker.
Disclosure of Invention
The invention provides an image processing method and device, a storage medium, and an electronic device, aiming to solve the prior-art problems of over-enhancement of solid-color regions in fog-penetration-processed images and of video flicker.
According to an embodiment of the present invention, there is provided an image processing method including: determining a transmittance weight map of an image to be processed; calculating the transmittance of the image to be processed using the transmittance weight map, and determining a transmittance map; filtering the transmittance map to determine a filtered image; and performing fog-penetration processing on the image to be processed based on the filtered image, and determining a target image.
According to another embodiment of the present invention, there is provided an image processing apparatus including: a first determining module, configured to determine a transmittance weight map of an image to be processed; a first calculating module, configured to calculate the transmittance of the image to be processed using the transmittance weight map and determine a transmittance map; a first filtering module, configured to filter the transmittance map and determine a filtered image; and a first processing module, configured to perform fog-penetration processing on the image to be processed based on the filtered image and determine a target image.
In an exemplary embodiment, the first determining module includes: a first determining unit, configured to determine dark-channel pixel values of the pixel channels in the image to be processed; a first conversion unit, configured to convert the image to be processed into a luminance image; a first detection unit, configured to perform edge detection on the luminance image and determine an edge image; and a second determining unit, configured to determine the transmittance weight map using the dark-channel pixel values, the luminance values of the luminance image, and the edge image.
In an exemplary embodiment, the second determining unit includes: a first determining subunit, configured to determine a comparison result between an edge detection value of the edge image and a preset edge threshold; a second determining subunit, configured to determine a ratio between a luminance value of the luminance image and the maximum luminance value in the luminance image; and a first calculating subunit, configured to calculate the transmittance weight value based on the dark-channel pixel value, the comparison result, and the ratio.
In an exemplary embodiment, the first calculating module includes: a first calculating unit, configured to calculate a first transmittance of the image to be processed using the dark-channel pixel values of the pixel channels in the image to be processed and a preset fog-penetration intensity threshold; and a third determining unit, configured to determine the transmittance map using the fixed transmittance of the abnormal image region in the image to be processed, the pixel values in the transmittance weight map, and the first transmittance.
In an exemplary embodiment, the first filtering module includes: a fourth determining unit, configured to determine the transmittance of the frame preceding the image to be processed; a second calculating unit, configured to calculate the absolute value of the difference between the transmittance of the image to be processed and the transmittance of the previous frame; a first filtering unit, configured to filter the absolute value and determine a first filtering result; a fifth determining unit, configured to determine a fusion factor using the first filtering result and preset upper and lower thresholds; and a third calculating unit, configured to calculate the filtered transmittance from the fusion factor, the transmittance of the image to be processed, and the transmittance of the previous frame, so as to determine the filtered image.
In an exemplary embodiment, the first processing module includes: a first processing unit, configured to input the transmittance of the filtered image and the target atmospheric light value of the image to be processed into a preset atmospheric scattering model so as to output the target image.
In an exemplary embodiment, the apparatus further includes: a second determining module, configured to determine the atmospheric light value of the image to be processed using its dark-channel pixel values; and a third determining module, configured to filter the atmospheric light value and determine the target atmospheric light value.
According to a further embodiment of the present invention, there is also provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, a transmittance weight map of the image to be processed is determined and used to calculate the image's transmittance; the resulting transmittance map is filtered to obtain a filtered image, and fog-penetration processing is applied to the image to be processed based on that filtered image. As a result, solid-color regions in the image do not develop texture and blocking artifacts from over-enhancement, and both global and local flicker in the image are suppressed. The prior-art problems of over-enhancement of solid-color regions after fog-penetration processing and of video flicker are thus solved, and the visual quality of the video is improved.
Drawings
FIG. 1 is a block diagram of the hardware structure of a mobile terminal for an image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart of a specific embodiment of the present invention;
FIG. 4 is a schematic diagram of determining the transmittance according to an embodiment of the invention;
FIG. 5 is a block diagram of the structure of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking the mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of the mobile terminal of an image processing method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more (only one shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, wherein the mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program of an application software and a module, such as a computer program corresponding to the image processing method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In the present embodiment, an image processing method is provided, and fig. 2 is a flowchart of an image processing method according to an embodiment of the present invention, and as shown in fig. 2, the flowchart includes the steps of:
Step S202, determining a transmittance weight map of the image to be processed;
Step S204, calculating the transmittance of the image to be processed using the transmittance weight map, and determining a transmittance map;
Step S206, filtering the transmittance map to determine a filtered image;
Step S208, performing fog-penetration processing on the image to be processed based on the filtered image, and determining a target image.
In the present embodiment, the image to be processed includes, but is not limited to, an RGB color-mode image. The executing subject of the above steps may be a terminal (e.g., a computer or a mobile device), but is not limited thereto.
Through the above steps, a transmittance weight map of the image to be processed is determined and used to calculate the image's transmittance; the transmittance map is filtered to obtain a filtered image, and fog-penetration processing is applied to the image to be processed based on the filtered image. Solid-color regions in the image therefore do not develop texture and blocking artifacts from over-enhancement, and global and local flicker in the image are suppressed. This solves the prior-art problems of over-enhancement of solid-color regions after fog-penetration processing and of video flicker, improving the visual quality of the video.
In one exemplary embodiment, determining the transmittance weight map of the image to be processed includes:
s11, determining dark channel pixel values of pixel channels in the image to be processed;
For example, the minimum value over the three RGB channels within a window centered on the current pixel of the image to be processed is calculated, and that minimum is determined as the dark-channel pixel value of the current pixel; the window size can be preset. Traversing all pixels of the image to be processed yields the dark-channel pixel values of the whole image.
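A minimal sketch of this step, assuming Python with NumPy/OpenCV (which the patent does not prescribe); the default window size is an illustrative preset:

```python
import numpy as np
import cv2

def dark_channel(img: np.ndarray, win: int = 15) -> np.ndarray:
    """Per-pixel minimum over the three color channels, then a win x win minimum filter."""
    min_rgb = img.min(axis=2)  # minimum of the three channels at each pixel
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (win, win))
    return cv2.erode(min_rgb, kernel)  # grayscale erosion == windowed minimum
```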
S12, converting the image to be processed into a brightness image;
For example, in the case where the image to be processed is an RGB image, the RGB image is converted into a luminance image Y.
The ways of converting the RGB image into the luminance image Y in the present embodiment include, but are not limited to: Y = 0.299R + 0.587G + 0.114B; Y = 0.3R + 0.5G + 0.1B; Y = 0.2R + 0.3G + 0.125B, where R (red), G (green), and B (blue) denote the values of the three primary color channels.
S13, carrying out edge detection on the brightness image to determine an edge image;
in the present embodiment, the algorithm for edge detection on the luminance image includes, but is not limited to, Sobel algorithm.
S14, a transmittance weight map is determined using the dark channel pixel values, the luminance values of the luminance image, and the edge image.
In this embodiment, determining the transmittance weight map using the dark channel pixel values, the luminance values of the luminance image, and the edge image includes:
s1401, determining a comparison result of an edge detection value of an edge image and a preset edge threshold value;
For example, when edge < edge_th, where edge denotes the edge detection value of the edge image and edge_th the preset edge threshold, the comparison result flag takes 1; otherwise it takes 0.
S1402, a ratio between the luminance value of the luminance image and the maximum luminance value in the luminance image is determined.
For example, ratio = Y / Y_max, where Y denotes a luminance value of the luminance image and Y_max the maximum luminance value in the luminance image.
S1403, a transmittance weight value is calculated based on the dark channel pixel value, the comparison result, and the ratio.
For example, the transmittance weight value is calculated by the following formulas:
weight = (Y / Y_max) × flag × dark;
flag = 1 if edge < edge_th, else 0;
where weight denotes the transmittance weight value and dark denotes the dark-channel pixel value of the image to be processed.
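A sketch of the weight computation under the same assumptions; dark is taken as normalized to [0, 1] so that the weight can later act as a blend factor, a value range the patent does not state explicitly:

```python
def transmittance_weight(Y: np.ndarray, edge: np.ndarray, dark: np.ndarray,
                         edge_th: float) -> np.ndarray:
    """weight = (Y / Y_max) * flag * dark, with flag = 1 where edge < edge_th, else 0."""
    flag = (edge < edge_th).astype(np.float32)
    ratio = Y.astype(np.float32) / max(float(Y.max()), 1e-6)  # Y / Y_max
    return ratio * flag * dark
```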
In one exemplary embodiment, calculating the transmittance of the image to be processed using the transmittance weight map and determining the transmittance map includes:
s31, calculating a first transmittance of the image to be processed by using a dark channel pixel value of a pixel channel in the image to be processed and a preset fog penetration intensity threshold value;
For example, the first transmittance tx1 = 1 - ω × dark / A2, where ω denotes the preset fog-penetration intensity threshold (including a manually set fog-penetration intensity), dark denotes the dark-channel pixel values of the pixel channels in the image to be processed, and A2 denotes the target atmospheric light value of the image to be processed.
In this embodiment, the target atmospheric light value may be determined by:
The pixels whose dark-channel values fall in the top M% are located in the image to be processed, and their mean is determined as the atmospheric light value A1 of the image to be processed. Temporal filtering is then applied to A1 to determine the target atmospheric light value A2. For example, A2 = tmpflt × preA1 + (1 - tmpflt) × A1, where preA1 denotes the atmospheric light mean of the previous K frames, tmpflt denotes the temporal filter coefficient, and M and K are both preset empirical values.
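A sketch of this estimate under the same assumptions; m_percent and tmpflt are illustrative stand-ins for the preset empirical value M and the temporal filter coefficient:

```python
def atmospheric_light(img: np.ndarray, dark: np.ndarray, preA1=None,
                      m_percent: float = 0.1, tmpflt: float = 0.9) -> float:
    """A1 = mean of the input pixels at the brightest m_percent of dark-channel positions;
    A2 = tmpflt * preA1 + (1 - tmpflt) * A1 when a running mean preA1 is available."""
    n = max(1, int(dark.size * m_percent / 100.0))
    idx = np.argsort(dark.ravel())[-n:]  # positions of the top-M% dark-channel values
    A1 = float(img.reshape(-1, img.shape[2])[idx].mean())
    return A1 if preA1 is None else tmpflt * preA1 + (1.0 - tmpflt) * A1
```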
And S32, determining a transmittance map by using the fixed transmittance of the abnormal image area in the image to be processed, the pixel value in the transmittance weight map and the first transmittance.
In the present embodiment, the abnormal image region includes, but is not limited to, a solid color region (e.g., a sky region, a dense fog region, etc.) in the image.
In this embodiment, over-enhancement of the solid-color regions in the image can be avoided by optimizing the first transmittance tx1. For example, optimizing tx1 yields tx2 = fix × weight + tx1 × (1 - weight), where fix denotes the fixed transmittance of the abnormal image region in the image to be processed and may be a predetermined value.
In this embodiment, tx2 is further optimized to avoid over-enhancement of the solid-color regions in the image. For example, optimizing tx2 yields tx3 = tx2 × lum_lut[Y], where lum_lut[Y] denotes a brightness defogging curve whose horizontal axis Y is the pixel luminance and whose vertical axis is the adjustment coefficient.
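A sketch of the tx1 to tx3 chain under the same assumptions; omega and fix are illustrative presets, dark and A2 are taken on the same normalized scale, the identity lum_lut stands in for an unspecified brightness defogging curve, and the final clipping is a common stabilizing choice rather than something the patent states:

```python
def coarse_to_tx3(dark: np.ndarray, A2: float, weight: np.ndarray, Y: np.ndarray,
                  omega: float = 0.95, fix: float = 0.85,
                  lum_lut: np.ndarray = None) -> np.ndarray:
    """tx1 = 1 - omega*dark/A2; tx2 = fix*weight + tx1*(1 - weight); tx3 = tx2*lum_lut[Y]."""
    tx1 = 1.0 - omega * dark / max(A2, 1e-6)
    tx2 = fix * weight + tx1 * (1.0 - weight)     # pull solid-color regions toward fix
    if lum_lut is None:
        lum_lut = np.ones(256, dtype=np.float32)  # identity curve: no luminance adjustment
    tx3 = tx2 * lum_lut[Y.astype(np.uint8)]
    return np.clip(tx3, 0.05, 1.0)                # keep the later division well-behaved
```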
In one exemplary embodiment, filtering the transmittance map to determine a filtered image includes:
s41, determining the transmittance of the image of the previous frame of the image to be processed;
s42, calculating the absolute value of the difference between the transmissivity of the image to be processed and the transmissivity of the previous frame image;
s43, filtering the absolute value to determine a first filtering result;
s44, determining a fusion factor by using the first filtering result and a preset upper and lower limit threshold;
and S45, calculating the transmittance after filtering according to the fusion factor, the transmittance of the image to be processed and the transmittance of the previous frame image to determine a filtering image.
In this embodiment, the image to be processed and the previous frame image are adjacent frames; flicker caused by large differences in local fog-penetration strength between adjacent frames is suppressed according to the dynamic and static changes of the local regions of the two frames.
For example, the absolute value of the difference between the transmittance pre_tx of the previous frame image and the transmittance tx3 of the image to be processed is calculated and maximum-filtered, and the fusion factor λ is obtained by combining the preset upper and lower thresholds L and H with the maximum-filtering result max_diff, as shown in fig. 4.
In this embodiment, tx3 may be further optimized with the fusion factor, yielding tx4 = λ × tx3 + (1 - λ) × pre_tx.
When the image to be processed is the first frame, so that no previous frame exists, this transmittance processing is not performed on it.
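A sketch of this temporal step; because fig. 4 is not reproduced here, the linear ramp of λ from 0 at L to 1 at H is an assumption (static regions with small change lean on pre_tx to suppress flicker, while dynamic regions follow the current frame), as are the threshold and window values:

```python
def temporal_fuse(tx3: np.ndarray, pre_tx, L: float = 0.02, H: float = 0.15,
                  win: int = 7) -> np.ndarray:
    """tx4 = lam * tx3 + (1 - lam) * pre_tx, with lam derived from the
    maximum-filtered absolute difference |tx3 - pre_tx|."""
    if pre_tx is None:
        return tx3                           # first frame: skip the temporal step
    diff = np.abs(tx3 - pre_tx)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (win, win))
    max_diff = cv2.dilate(diff, kernel)      # grayscale dilation == windowed maximum
    lam = np.clip((max_diff - L) / max(H - L, 1e-6), 0.0, 1.0)
    return lam * tx3 + (1.0 - lam) * pre_tx
```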
In an exemplary embodiment, performing fog-penetration processing on the image to be processed based on the filtered image and determining the target image includes the following step:
S51, inputting the transmittance of the filtered image and the target atmospheric light value of the image to be processed into a preset atmospheric scattering model to output the target image.
In the present embodiment, the preset atmospheric scattering model includes, but is not limited to, the atmospheric scattering physical model. For example, the output of the preset atmospheric scattering model is OUT = (IN - A) / tx4 + A, where IN denotes the luminance value of the image to be processed and A denotes the target atmospheric light value.
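A sketch of this per-channel inversion, assuming float inputs on a common normalized scale:

```python
def recover(img_in: np.ndarray, tx4: np.ndarray, A: float) -> np.ndarray:
    """OUT = (IN - A) / tx4 + A, applied to each color channel."""
    out = (img_in.astype(np.float32) - A) / tx4[..., None] + A
    return np.clip(out, 0.0, 1.0)  # assumes inputs normalized to [0, 1]
```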
In one exemplary embodiment, the method further comprises:
s61, determining an atmospheric light value of the image to be processed by using a dark channel pixel value of the image to be processed;
and S62, performing filtering processing on the atmospheric light value to determine a target atmospheric light value.
The specific way of determining the target atmospheric light value has already been described under S31 above and is not repeated here.
The invention is illustrated below with reference to specific examples:
the embodiment aims at the problem that when the sky or the dense fog part in the image is processed in the prior art, the texture and the blocking phenomenon of a larger area can occur, and the embodiment is applied to the video fog penetration and has the flicker problem. An image fog-penetrating method is provided, which takes the fog-penetrating treatment of RGB images as an example, and as shown in fig. 3, includes the following steps:
s301, calculating dark channel pixel values of the RGB image. For example, the minimum value of the RGB three channels in the window centered on the current pixel point is calculated, and the minimum value is determined as the dark channel pixel value of the current pixel point, where the window size can be set. And traversing the pixel points of the whole image, and determining the dark channel pixel value dark of the whole image.
S302, determining the target atmospheric light value A2 of the RGB image, with temporal filtering applied to suppress full-image brightness and color flicker caused by frame-to-frame variation of the atmospheric light value A1. For example, the pixels of the IN map at the positions of the top M% dark-channel values are found and their mean is calculated as A1. Temporal filtering is applied to A1 according to A2 = tmpflt × preA + (1 - tmpflt) × A1 to obtain the final A2 value, where preA denotes the atmospheric light mean of the previous K frames, tmpflt denotes the temporal filter coefficient, and M and K are both set empirical values.
S303, determining the transmittance weight map. The RGB image is converted into a luminance image Y, edge computation is performed on Y, and the luminance, the edge result, and the dark-channel information are combined to obtain the transmittance weight map. For example, the RGB image is converted to Y according to Y = 0.299R + 0.587G + 0.114B; the edge image edge is obtained with the Sobel operator; and the transmittance weight map weight is finally obtained from formula 1 and formula 2 below, where edge_th denotes an empirically set edge threshold (flag is 1 when edge < edge_th and 0 otherwise) and Y_max denotes the maximum value of the luminance image.
weight = (Y / Y_max) × flag × dark    (formula 1);
flag = 1 if edge < edge_th, else 0    (formula 2);
s304, calculating a rough transmittance map tx1 according to the atmospheric scattering physical model, and optimizing the rough transmittance map by combining the transmittance weight map and the set brightness defogging curve. For example, a coarse transmittance map is calculated from tx1=1- ω dark min (a), where ω is used to represent the artificially set fog-penetration intensity. The sky/fog region is optimized according to tx2= fix weight + tx1 (1-weight), tx3= tx2 plus lut [ Y ], wherein fix is used for representing empirical values and representing fixed transmittance of the sky/fog region, lum _ lut [ Y ] is used for representing a brightness defogging curve, the horizontal axis Y represents pixel brightness, and the vertical axis represents an adjusting coefficient.
S305, refining the transmittance map with edge-preserving filtering, and performing 3D transmittance processing that takes the dynamic and static changes of local regions in the preceding and current frames into account, so as to suppress flicker caused by large differences in the local fog-penetration strength of adjacent frames. For example, the transmittance tx3 is refined by guided filtering with the luminance image Y as the guide. The absolute value of the difference between pre_tx and tx3 is then calculated and maximum-filtered, and the fusion factor λ is obtained by combining the set upper and lower thresholds L and H with the maximum-filtering result max_diff, as shown in fig. 4. The final transmittance tx4 is obtained from tx4 = λ × tx3 + (1 - λ) × pre_tx, where pre_tx denotes the transmittance of the previous frame; if the current frame is the first frame image, the 3D transmittance processing is not performed.
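The guided-filtering refinement might look as follows; cv2.ximgproc.guidedFilter ships with opencv-contrib-python, and the radius and eps values are illustrative rather than taken from the patent:

```python
guide = Y.astype(np.float32) / 255.0  # luminance image as the guide, normalized
tx3 = cv2.ximgproc.guidedFilter(guide=guide, src=tx3.astype(np.float32),
                                radius=30, eps=1e-3)
```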
S306, obtaining the image OUT after fog penetration from the atmospheric scattering physical model by combining tx4 and A. For example, the fog-penetrated image is obtained according to OUT = (IN - A) / tx4 + A.
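Chaining the sketches above, one frame of video might be processed as follows; this remains an assumed setting in which state carries preA1 and pre_tx between frames and every parameter value is illustrative:

```python
def defog_frame(frame_u8: np.ndarray, state: dict) -> np.ndarray:
    """One video frame through the sketched pipeline, working in [0, 1]."""
    img = frame_u8.astype(np.float32) / 255.0
    dark = dark_channel(img)                               # S301
    A2 = atmospheric_light(img, dark, state.get('preA1'))  # S302
    Y = cv2.cvtColor(frame_u8, cv2.COLOR_BGR2GRAY)         # S303
    weight = transmittance_weight(Y, edge_map(Y), dark, edge_th=100.0)
    tx3 = coarse_to_tx3(dark, A2, weight, Y)               # S304
    tx3 = cv2.ximgproc.guidedFilter(Y.astype(np.float32) / 255.0,
                                    tx3.astype(np.float32), 30, 1e-3)  # S305
    tx4 = temporal_fuse(tx3, state.get('pre_tx'))
    out = recover(img, tx4, A2)                            # S306
    state.update(preA1=A2, pre_tx=tx4)
    return (out * 255.0).astype(np.uint8)
```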
In summary, the present embodiment combines the luminance, the edge computation result, and the dark-channel information to obtain the transmittance weight map, in which the sky/dense-fog regions can be accurately distinguished from the luminance and edge results; the dark channel is then used to adjust the transmittance of flat highlight regions that are otherwise hard to distinguish, according to the dark-channel magnitude. The sky/dense-fog regions are thereby optimized while the transition to non-sky/non-fog regions remains natural, the fog-block problem does not occur, the texture and blocking artifacts caused by over-enhancing sky and dense-fog regions are avoided, and the fog-penetration effect is preserved.
The temporal filtering technique avoids global flicker caused by errors in the atmospheric light calculation; the 3D transmittance processing, which takes the dynamic and static changes of local regions in adjacent frames into account, avoids flicker caused by large differences in the locally computed fog-penetration strength of adjacent frames, and thus benefits video fog-penetration processing.
For fog-free scenes, the effect of contrast enhancement can be achieved.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, an image processing apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and the description of the apparatus is omitted for brevity. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 5 is a block diagram of the structure of an image processing apparatus according to an embodiment of the present invention, as shown in fig. 5, the apparatus including:
a first determining module 52, configured to determine a transmittance weight map of the image to be processed;
a first calculating module 54, configured to calculate the transmittance of the image to be processed using the transmittance weight map, and determine a transmittance map;
a first filtering module 56, configured to filter the transmittance map to determine a filtered image;
a first processing module 58, configured to perform fog-penetration processing on the image to be processed based on the filtered image, and determine a target image.
In an exemplary embodiment, the first determining module includes:
a first determining unit, configured to determine dark-channel pixel values of the pixel channels in the image to be processed;
a first conversion unit, configured to convert the image to be processed into a luminance image;
a first detection unit, configured to perform edge detection on the luminance image and determine an edge image;
a second determining unit, configured to determine the transmittance weight map using the dark-channel pixel values, the luminance values of the luminance image, and the edge image.
In an exemplary embodiment, the second determining unit includes:
a first determining subunit, configured to determine a comparison result between an edge detection value of the edge image and a preset edge threshold;
a second determining subunit, configured to determine a ratio between a luminance value of the luminance image and the maximum luminance value in the luminance image;
a first calculating subunit, configured to calculate the transmittance weight value based on the dark-channel pixel value, the comparison result, and the ratio.
In an exemplary embodiment, the first calculating module includes:
a first calculating unit, configured to calculate a first transmittance of the image to be processed using the dark-channel pixel values of the pixel channels in the image to be processed and a preset fog-penetration intensity threshold;
a third determining unit, configured to determine the transmittance map using the fixed transmittance of the abnormal image region in the image to be processed, the pixel values in the transmittance weight map, and the first transmittance.
In an exemplary embodiment, the first filtering module includes:
a fourth determining unit, configured to determine the transmittance of the frame preceding the image to be processed;
a second calculating unit, configured to calculate the absolute value of the difference between the transmittance of the image to be processed and the transmittance of the previous frame;
a first filtering unit, configured to filter the absolute value and determine a first filtering result;
a fifth determining unit, configured to determine a fusion factor using the first filtering result and preset upper and lower thresholds;
a third calculating unit, configured to calculate the filtered transmittance from the fusion factor, the transmittance of the image to be processed, and the transmittance of the previous frame, so as to determine the filtered image.
In an exemplary embodiment, the first processing module includes:
a first processing unit, configured to input the transmittance of the filtered image and the target atmospheric light value of the image to be processed into a preset atmospheric scattering model so as to output the target image.
In an exemplary embodiment, the apparatus further includes:
a second determining module, configured to determine the atmospheric light value of the image to be processed using its dark-channel pixel values;
a third determining module, configured to filter the atmospheric light value and determine the target atmospheric light value.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program is arranged to perform the steps of any of the above-mentioned method embodiments when executed.
In the present embodiment, the above-described computer-readable storage medium may be configured to store a computer program for executing the above steps.
In an exemplary embodiment, the computer-readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
In an exemplary embodiment, the processor may be configured to execute the above steps by a computer program.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the various modules or steps of the invention described above may be implemented using a general purpose computing device, they may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented using program code executable by the computing devices, such that they may be stored in a memory device and executed by the computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into various integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An image processing method, comprising:
determining a transmittance weight map of an image to be processed;
calculating the transmittance of the image to be processed using the transmittance weight map, and determining a transmittance map;
filtering the transmittance map to determine a filtered image;
and performing fog-penetration processing on the image to be processed based on the filtered image, and determining a target image.
2. The method of claim 1, wherein determining a transmittance weight map for the image to be processed comprises:
determining dark-channel pixel values of pixel channels in the image to be processed;
converting the image to be processed into a luminance image;
performing edge detection on the luminance image to determine an edge image;
determining the transmittance weight map using the dark-channel pixel values, luminance values of the luminance image, and the edge image.
3. The method of claim 2, wherein determining the transmittance weight map using the dark channel pixel values, luminance values of the luminance image, and the edge image comprises:
determining a comparison result between an edge detection value of the edge image and a preset edge threshold;
determining a ratio between a luminance value of the luminance image and the maximum luminance value in the luminance image;
calculating the transmittance weight value based on the dark-channel pixel value, the comparison result, and the ratio.
4. The method according to claim 1, wherein calculating the transmittance of the image to be processed using the transmittance weight map and determining a transmittance map comprises:
calculating a first transmittance of the image to be processed using dark-channel pixel values of pixel channels in the image to be processed and a preset fog-penetration intensity threshold;
and determining the transmittance map using the fixed transmittance of an abnormal image region in the image to be processed, the pixel values in the transmittance weight map, and the first transmittance.
5. The method of claim 1, wherein filtering the transmittance map to determine a filtered image comprises:
determining the transmittance of the frame preceding the image to be processed;
calculating the absolute value of the difference between the transmittance of the image to be processed and the transmittance of the previous frame;
filtering the absolute value to determine a first filtering result;
determining a fusion factor using the first filtering result and preset upper and lower thresholds;
and calculating the filtered transmittance from the fusion factor, the transmittance of the image to be processed, and the transmittance of the previous frame, to determine the filtered image.
6. The method of claim 1, wherein performing fog-penetration processing on the image to be processed based on the filtered image and determining a target image comprises:
inputting the transmittance of the filtered image and the target atmospheric light value of the image to be processed into a preset atmospheric scattering model so as to output the target image.
7. The method according to any one of claims 1-6, further comprising:
determining an atmospheric light value of the image to be processed using a dark-channel pixel value of the image to be processed;
and filtering the atmospheric light value to determine a target atmospheric light value.
8. An image processing apparatus characterized by comprising:
a first determining module, configured to determine a transmittance weight map of an image to be processed;
a first calculating module, configured to calculate the transmittance of the image to be processed using the transmittance weight map and determine a transmittance map;
a first filtering module, configured to filter the transmittance map and determine a filtered image;
and a first processing module, configured to perform fog-penetration processing on the image to be processed based on the filtered image and determine a target image.
9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program, when executed, is adapted to implement the method of any of claims 1 to 7.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 7.
CN202210221220.1A 2022-03-09 2022-03-09 Image processing method and device, storage medium, and electronic device Active CN114331920B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210221220.1A CN114331920B (en) 2022-03-09 2022-03-09 Image processing method and device, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210221220.1A CN114331920B (en) 2022-03-09 2022-03-09 Image processing method and device, storage medium, and electronic device

Publications (2)

Publication Number Publication Date
CN114331920A 2022-04-12
CN114331920B CN114331920B (en) 2022-06-24

Family ID: 81033059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210221220.1A Active CN114331920B (en) 2022-03-09 2022-03-09 Image processing method and device, storage medium, and electronic device

Country Status (1)

Country Link
CN (1) CN114331920B (en)


Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942758A (en) * 2014-04-04 2014-07-23 中国人民解放军国防科学技术大学 Dark channel prior image dehazing method based on multiscale fusion
CN104091307A (en) * 2014-06-12 2014-10-08 中国人民解放军重庆通信学院 Frog day image rapid restoration method based on feedback mean value filtering
CN104036466A (en) * 2014-06-17 2014-09-10 浙江立元通信技术股份有限公司 Video defogging method and system
CN104616269A (en) * 2015-02-27 2015-05-13 深圳市中兴移动通信有限公司 Image defogging method and shooting device
CN104732496A (en) * 2015-03-23 2015-06-24 青岛海信电器股份有限公司 Defogging processing method and display device for video stream images
CN105631831A (en) * 2016-03-14 2016-06-01 北京理工大学 Video image enhancement method under haze condition
CN107360344A (en) * 2017-06-27 2017-11-17 西安电子科技大学 Monitor video rapid defogging method
CN107403421A (en) * 2017-08-10 2017-11-28 杭州联吉技术有限公司 A kind of image defogging method, storage medium and terminal device
CN108717686A (en) * 2018-04-04 2018-10-30 华南理工大学 A kind of real-time video defogging method based on dark channel prior
CN109118440A (en) * 2018-07-06 2019-01-01 天津大学 Single image to the fog method based on transmissivity fusion with the estimation of adaptive atmosphere light
WO2020020445A1 (en) * 2018-07-24 2020-01-30 Toyota Motor Europe A method and a system for processing images to obtain foggy images
CN109636735A (en) * 2018-11-02 2019-04-16 中国航空工业集团公司洛阳电光设备研究所 A kind of fast video defogging method based on space-time consistency constraint
CN110148093A (en) * 2019-04-17 2019-08-20 中山大学 A kind of image defogging improved method based on dark channel prior
CN110889863A (en) * 2019-09-03 2020-03-17 河南理工大学 Target tracking method based on target perception correlation filtering
CN111127372A (en) * 2020-04-01 2020-05-08 杭州涂鸦信息技术有限公司 Image defogging method, system and equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
WANG SHUO et al.: "Single image dehazing using dark channel fusion and dark channel confidence", 2020 International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE)
叶晓杰 et al.: "Infrared enhancement of weak water-surface targets based on edge weight analysis", Laser & Infrared
向文鼎 et al.: "Image dehazing algorithm based on adaptive threshold segmentation and transmittance fusion", Computer Engineering
江巨浪 et al.: "Fusion enhancement algorithm for foggy images based on transmittance weight factors", Journal of Electronics & Information Technology

Also Published As

Publication number Publication date
CN114331920B (en) 2022-06-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant