CN110930326A - Image and video defogging method and related device - Google Patents


Info

Publication number
CN110930326A
CN110930326A (application CN201911122297.8A)
Authority
CN
China
Prior art keywords
image
processed
value
defogging
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911122297.8A
Other languages
Chinese (zh)
Inventor
曾强 (Zeng Qiang)
陈媛媛 (Chen Yuanyuan)
熊剑平 (Xiong Jianping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN201911122297.8A priority Critical patent/CN110930326A/en
Publication of CN110930326A publication Critical patent/CN110930326A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/73 - Deblurring; Sharpening

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image and video defogging method and a related device. The image defogging method includes: calculating the density value of fog in an image to be processed captured by a camera device; if the density value meets a preset condition, acquiring defogging parameters of the image to be processed; and performing defogging processing on the image to be processed using the defogging parameters. According to this scheme, defogging efficiency can be improved and defogging accuracy enhanced.

Description

Image and video defogging method and related device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image and video defogging method and a related apparatus.
Background
In natural weather such as fog, rain, and snow, the imaging quality of camera devices such as surveillance cameras is greatly reduced by atmospheric particle scattering, which has serious negative effects on outdoor imaging applications such as video surveillance, terrain survey, and automatic driving. Specifically, as the distance between an object and the camera device increases, the influence of atmospheric particle scattering on imaging gradually increases. This influence mainly arises from two scattering processes: first, light reflected from the object surface is attenuated by atmospheric particle scattering on its way to the camera device; second, natural light scattered by atmospheric particles enters the camera device and participates in imaging. Their combined action causes low contrast, reduced saturation, color shift, and similar degradation in the images/videos captured by the camera device, which visually appears as a layer of thick or thin fog over the image/video and impairs the visual effect. In view of this, how to improve defogging efficiency and enhance defogging accuracy has become an urgent problem to be solved.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide an image and video defogging method and a related device, which can improve defogging efficiency and enhance defogging accuracy.
In order to solve the above problem, a first aspect of the present application provides an image defogging method, including: calculating the density value of fog in an image to be processed captured by a camera device; if the density value meets a preset condition, acquiring defogging parameters of the image to be processed; and performing defogging processing on the image to be processed using the defogging parameters.
In order to solve the above problem, a second aspect of the present application provides a video defogging method, including: selecting one frame as an image to be processed every preset number of frames in video data, the video data being captured by a camera device; performing defogging processing on the image to be processed using the image defogging method of the first aspect; and performing defogging processing on the preset number of frames following the image to be processed using the same defogging parameters and image defogging method as the image to be processed.
In order to solve the above problem, a third aspect of the present application provides an image defogging device, including a calculation module, an obtaining module and a processing module, where the calculation module is configured to calculate a fog density value in an image to be processed, the image to be processed being captured by an imaging device; the acquisition module is used for acquiring defogging parameters of the image to be processed if the concentration value meets a preset condition; the processing module is used for carrying out defogging processing on the image to be processed by utilizing the defogging parameters.
In order to solve the above problems, a fourth aspect of the present application provides a video defogging device, including a selection module and a processing module, where the selection module is configured to select one frame as an image to be processed every other preset number of frame images in video data, where the video data is obtained by shooting with a camera device; the processing module is used for carrying out defogging processing on the image to be processed by the image defogging method in the first aspect; the processing module is also used for carrying out defogging processing on a preset number of frame images behind the image to be processed by adopting the same defogging parameters and image defogging method as the image to be processed.
In order to solve the above problem, a fifth aspect of the present application provides a defogging device including a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the image defogging method in the first aspect or implement the video defogging method in the second aspect.
In order to solve the above problem, a sixth aspect of the present application provides a storage device storing program instructions executable by a processor for implementing the image defogging method according to the first aspect described above or the video defogging method according to the second aspect described above.
According to the above scheme, the density value of fog in an image to be processed captured by a camera device is calculated, and whether the density value meets a preset condition is judged; if so, defogging parameters of the image to be processed are acquired and used to perform defogging processing on the image. In other words, before the defogging parameters are acquired, whether the fog density value meets the preset condition is judged. This effectively avoids acquiring defogging parameters for every image captured by the camera device, which helps improve defogging efficiency; it also effectively avoids the adverse effects of performing defogging processing on images that do not meet the preset condition, which helps enhance defogging accuracy.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of an image defogging method according to the present application;
FIG. 2 is a flowchart illustrating an embodiment of step S11 in FIG. 1;
FIG. 3 is a flowchart illustrating an embodiment of step S13 in FIG. 1;
FIG. 4 is a block diagram of an embodiment of an image to be processed and a downsampled image of the image to be processed;
FIG. 5 is a schematic flow chart illustrating another embodiment of step S13 in FIG. 1;
FIG. 6 is a diagram illustrating an embodiment of pre-processing an image to be processed;
FIG. 7 is a schematic diagram of one embodiment of a transmittance map;
FIG. 8 is a schematic diagram of another embodiment of a transmittance map;
FIG. 9 is a schematic flow chart diagram illustrating another embodiment of an image defogging method according to the present application;
FIG. 10 is a schematic flow chart diagram illustrating an embodiment of a video defogging method according to the present application;
FIG. 11 is a schematic block diagram of an embodiment of an image defogging device according to the present application;
FIG. 12 is a block diagram of an embodiment of a video defogging device according to the present application;
FIG. 13 is a schematic block diagram of an embodiment of a defogging device according to the present application;
FIG. 14 is a block diagram of an embodiment of a memory device according to the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects, meaning that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of an image defogging method according to the present application. In this embodiment, the image defogging method includes the following steps:
Step S11: calculating the density value of the fog in the image to be processed, which is captured by the camera device.
In this embodiment, the image pickup device may be a monitoring camera installed in an outdoor scene such as an expressway, an intersection, a commercial pedestrian street, or the like.
The scattering effect of atmospheric particles causes the image to appear "foggy"; in particular, it degrades image contrast and saturation and causes hue shifts. In this embodiment, the density value of the fog in the image to be processed may be evaluated from the image contrast of the image to be processed, from its image color attenuation, or from both at the same time, which is not limited in this embodiment.
Step S12: judging whether the density value meets a preset condition, and if so, executing step S13.
In an implementation scenario, when the density value of the fog in the image to be processed is obtained through evaluation of the image contrast condition of the image to be processed, the preset condition in this embodiment may include a preset threshold corresponding to the image contrast condition, and at this time, if the density value is greater than the preset threshold, it is determined that the density value satisfies the preset condition.
In another implementation scenario, when the density value of the fog in the image to be processed is obtained through evaluation of the image color attenuation condition of the image to be processed, the preset condition in this embodiment may include a preset threshold corresponding to the image color attenuation condition, and at this time, if the density value is greater than the preset threshold, it is indicated that the density value satisfies the preset condition.
In still another implementation scenario, when the density value of the fog in the image to be processed is evaluated from both the image contrast and the image color attenuation of the image to be processed, the preset condition in this embodiment may include a first preset threshold corresponding to the image contrast and a second preset threshold corresponding to the image color attenuation. In this case, to enhance the robustness of the image defogging process and improve the fault tolerance, the preset condition may be satisfied by any one of the following: the density value is greater than the first preset threshold, or the density value is greater than the second preset threshold; that is, as long as either inequality holds, defogging processing may be performed on the image to be processed.
Step S13: acquiring the defogging parameters of the image to be processed.
In this embodiment, in order to facilitate subsequent defogging processing of the image to be processed with a preset atmospheric air model, acquiring the defogging parameters of the image to be processed may include, but is not limited to: obtaining the atmospheric brightness value of the image to be processed, and obtaining the optimal transmittance value of each pixel point of the image to be processed based on the atmospheric brightness value.
In this embodiment, the preset atmospheric air model may be expressed as:
I(x)=J(x)t(x)+A(1-t(x))
wherein I(x) is the pixel value of a pixel point of the image to be processed, J(x) is the pixel value of the pixel point after defogging processing, t(x) is the optimal transmittance value of the pixel point, and A is the atmospheric brightness value.
The specific steps of obtaining the atmospheric brightness value of the image to be processed and the optimal transmittance value of each pixel point are described in the embodiments below.
Step S14: performing defogging processing on the image to be processed using the defogging parameters.
In this embodiment, the preset atmospheric air model may be used to perform defogging processing on each pixel point of the image to be processed based on the atmospheric brightness value and the optimal transmittance value of each pixel point. The pixel value J(x) of each pixel point of the image to be processed after defogging can be obtained from the preset atmospheric air model and can be represented as:

J(x) = (I(x) − A)/t(x) + A

wherein I(x) is the pixel value of a pixel point of the image to be processed, J(x) is the pixel value of the pixel point after defogging processing, t(x) is the optimal transmittance value of the pixel point, and A is the atmospheric brightness value.
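The recovery formula above can be sketched in Python as follows. This is a minimal illustration, not part of the present application: the function name and the lower bound on t(x) are assumptions added to keep the division well behaved.

```python
import numpy as np

def defog(i_img, a_val, t_map, t_min=0.1):
    """Apply J(x) = (I(x) - A) / t(x) + A per pixel.

    i_img : hazy image as a float array with values in [0, 255]
    a_val : atmospheric brightness value A
    t_map : per-pixel optimal transmittance t(x) (array or scalar)
    t_min : assumed lower bound on t(x) to avoid division blow-up
    """
    t = np.maximum(np.asarray(t_map, dtype=float), t_min)
    j_img = (np.asarray(i_img, dtype=float) - a_val) / t + a_val
    # Clip the recovered radiance back into the valid pixel range.
    return np.clip(j_img, 0.0, 255.0)
```

Because the forward model is I = J·t + A·(1 − t), applying this function to a pixel synthesized by the forward model recovers the original J exactly (within the clipping bounds).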
In an implementation scenario, in order to further increase the processing speed of the image to be processed and enhance the processing accuracy, when the density value does not satisfy the preset condition as a result of the determination in the step S12, the following step S15 may be further performed:
step S15: and determining that the image to be processed does not need to be subjected to defogging processing.
When the density value of the fog in the image to be processed does not meet the preset condition, the image to be processed does not need defogging processing and, in an implementation scenario, can be output directly. This omits the steps of acquiring defogging parameters and performing defogging processing with them, greatly reducing the amount of calculation, and avoids the adverse effects of applying defogging processing to an image without fog.
According to the above scheme, the density value of fog in an image to be processed captured by a camera device is calculated, and whether the density value meets a preset condition is judged; if so, defogging parameters of the image to be processed are acquired and used to perform defogging processing on the image. In other words, before the defogging parameters are acquired, whether the fog density value meets the preset condition is judged. This effectively avoids acquiring defogging parameters for every image captured by the camera device, which helps improve defogging efficiency; it also effectively avoids the adverse effects of performing defogging processing on images that do not meet the preset condition, which helps enhance defogging accuracy.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating an embodiment of step S11 in fig. 1. Specifically, the present embodiment may include the following steps:
Step S111: acquiring a first density value of the image to be processed based on the image contrast of the image to be processed.
Specifically, the first differences between the pixel value of each pixel point and the pixel values of a plurality of its neighborhood pixel points may be counted to obtain the image contrast of the image to be processed; a candidate density value corresponding to each pixel point is calculated from the first differences using a preset density calculation method, and the largest of the candidate density values is selected as the first density value.
In this embodiment, the first density value may be expressed as follows:

C(x) = max_{y ∈ Ω_r(x)} [ (1/|Ω_S(y)|) Σ_{z ∈ Ω_S(y)} ‖I(z) − I(y)‖ ]

wherein Ω_S(y) represents the set of neighborhood pixel points with pixel point y as the center pixel point; z ∈ Ω_S(y) is one of the neighborhood pixel points of pixel point y; I(z) is the pixel value of the neighborhood pixel point and I(y) is the pixel value of the center pixel point, so that within the set Ω_S(y), I(z) − I(y) represents the first difference between the pixel values of a pixel point of the image to be processed and one of its neighborhood pixel points; |Ω_S(y)| represents the total number of neighborhood pixel points with pixel point y as the center pixel point; y ∈ Ω_r(x) means that each pixel point in the neighborhood of pixel point x of the image to be processed is taken in turn as a center pixel point; and ‖·‖ denotes the norm operation.
The maximum of the candidate density values calculated for the pixel points of the image to be processed can be used as the first density value of the image to be processed.
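The contrast statistic of step S111 can be sketched as follows. This is a hedged illustration: the square neighborhood shape, its radius, and the function name are assumptions, and since the text does not specify the "preset density calculation method", the sketch simply returns the raw contrast statistic.

```python
import numpy as np

def first_density_value(gray, radius=1):
    """Contrast statistic: for each candidate center pixel, average
    |I(z) - I(y)| over its neighborhood, then take the maximum
    candidate value over the image.

    gray   : 2-D float array of grayscale pixel values
    radius : assumed half-width of the square neighborhood
    """
    h, w = gray.shape
    best = 0.0
    for y0 in range(radius, h - radius):
        for x0 in range(radius, w - radius):
            patch = gray[y0 - radius:y0 + radius + 1,
                         x0 - radius:x0 + radius + 1]
            diffs = np.abs(patch - gray[y0, x0])
            # The center contributes 0, so divide by the neighbor count only.
            candidate = diffs.sum() / (patch.size - 1)
            best = max(best, candidate)
    return best
```

On a uniform image the statistic is 0, consistent with the intuition that fog flattens local contrast.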
Step S112: and acquiring a second density value of the image to be processed based on the image color attenuation condition of the image to be processed.
Specifically, a second difference between the brightness value and the saturation value of each pixel point may be counted to obtain an image color attenuation condition of the image to be processed, and the largest second difference is selected as a second density value of the fog in the image to be processed.
In this embodiment, the second concentration value may be calculated by the following equation:
A(x) = I_v(x) − I_s(x)

wherein I_v(x) represents the brightness value of pixel point x, I_s(x) represents the saturation value of pixel point x, and A(x) represents the second difference value of pixel point x.
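The color-attenuation statistic A(x) = I_v(x) − I_s(x) can be sketched as follows, using the standard HSV brightness and saturation definitions; the choice of color-space conversion and the function name are assumptions, since the text does not fix them.

```python
import numpy as np

def second_density_value(rgb):
    """Maximum per-pixel difference between HSV brightness V and saturation S.

    rgb : H x W x 3 float array with channel values in [0, 1]
    """
    v = rgb.max(axis=2)   # HSV value (brightness) = channel maximum
    mn = rgb.min(axis=2)
    # Saturation S = (max - min) / max, with S = 0 where max == 0.
    s = np.divide(v - mn, v, out=np.zeros_like(v), where=v > 0)
    return float((v - s).max())
```

A bright but desaturated pixel (hazy gray) yields a large V − S, while a vivid pixel yields a small one, matching the color-attenuation intuition.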
In this embodiment, the preset condition may include any one of the following conditions: the first concentration value is larger than a first preset threshold value, and the second concentration value is larger than a second preset threshold value.
In this embodiment, the step S111 and the step S112 may be executed in sequence, for example, the step S111 is executed first and then the step S112 is executed, or the step S112 is executed first and then the step S111 is executed, and the step S111 and the step S112 may also be executed simultaneously, which is not limited in this embodiment.
According to the above scheme, the first density value of the image to be processed is obtained from its image contrast and the second density value from its image color attenuation, so that when the first density value is greater than the first preset threshold or the second density value is greater than the second preset threshold, the density value of the fog in the image to be processed can be determined to meet the preset condition. This improves the fault tolerance and enhances the robustness of the image defogging processing.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating an embodiment of step S13 in fig. 1. Specifically, fig. 3 is a flowchart illustrating an embodiment of obtaining the atmospheric brightness value in the defogging parameter. The method specifically comprises the following steps:
Step S31: taking the minimum brightness value among the channels of each pixel point of the image to be processed as the brightness value of the corresponding pixel point of the dark channel image of the image to be processed.
Referring to fig. 4, fig. 4 is a block diagram of an embodiment of a to-be-processed image P and a downsampled image P' of the to-be-processed image P. As shown in fig. 4, the image to be processed P includes a plurality of pixel points p(i, j). In this embodiment, the minimum brightness value among the three channels (such as the R channel, G channel, and B channel) of each pixel point p(i, j) is used as the brightness value of that pixel point, so as to construct the dark channel image P0 of the image to be processed P.
In one implementation scenario, to reduce the amount of computation, the image to be processed P may be downsampled before the dark channel image is constructed. Specifically, the image to be processed P may be downsampled according to a preset sampling rate. In this embodiment, the preset sampling rate may be 1/4, that is, 1 pixel is taken from every 2 pixels in the width direction and 1 pixel from every 2 pixels in the length direction; in other implementation scenarios, the sampling rate may also take other values, which is not specifically limited herein. After downsampling the image to be processed P to obtain the downsampled image P', the minimum brightness value among the channels of each pixel point p'(m, n) of the downsampled image P' may be used as the brightness value of the corresponding pixel point p0'(m, n) of the dark channel image P0' of the downsampled image P'.
In another implementation scenario, in order to reduce the amount of calculation, the already constructed dark channel image P0 may instead be downsampled to obtain the downsampled image P0' corresponding to the dark channel image P0; this embodiment is not particularly limited herein.
Step S32: and sorting each pixel point of the dark channel image from large to small according to the brightness value.
In one implementation scenario, when neither the image to be processed P nor its dark channel image P0 is downsampled, the pixel points of the dark channel image P0 may be sorted by brightness value from large to small.
In another implementation scenario, when the image to be processed P or its dark channel image P0 is downsampled, the pixel points p0'(m, n) of the dark channel image P0' of the downsampled image P' (obtained by downsampling the image to be processed P) may be sorted by brightness value from large to small; or the pixel points p0'(m, n) of the downsampled image P0' (obtained by downsampling the dark channel image P0) may be sorted by brightness value from large to small. This embodiment is not particularly limited herein.
Step S33: selecting pixel points with brightness values within a preset proportion range, and taking the average value of the brightness values of the selected pixel points as the atmospheric brightness value of the image to be processed.
In one implementation scenario, when neither the image to be processed P nor its dark channel image P0 is downsampled, the pixel points whose brightness values are within a preset proportion range may be selected, and the average of the brightness values of the selected pixel points is used as the atmospheric brightness value A of the image to be processed. The preset proportion range in this embodiment may be the top one thousandth.
In another implementation scenario, when the image to be processed P or its dark channel image P0 is downsampled, the preset proportion range may be adjusted based on the preset sampling rate, for example, to two to five thousandths; the pixel points whose brightness values are within the adjusted preset proportion range are then selected, and the average of their brightness values is used as the atmospheric brightness value A of the image to be processed, which is not limited in this embodiment.
In addition, in a specific application scenario, in order to avoid adverse effects on the defogging processing effect caused by scenes such as a large area of white wall, snow, sky, and the like, which may appear in the image to be processed, in this embodiment, it may be further determined whether the calculated atmospheric brightness value is greater than a preset atmospheric brightness threshold value, and if so, the preset atmospheric brightness threshold value is taken as the atmospheric brightness value of the image to be processed.
According to the scheme, the dark channel image of the image to be processed is constructed, the average brightness values of the pixel points with larger brightness values in the preset proportion range in the dark channel image are calculated and serve as the atmospheric brightness values of the image to be processed, and on the basis, the image to be processed is downsampled or the dark channel image is downsampled, so that the calculated amount can be further reduced, the obtaining speed of the atmospheric brightness values is improved, and the defogging processing speed is further improved.
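Steps S31 to S33 can be sketched as follows. This is a hedged illustration: the function name and the cap value standing in for the preset atmospheric brightness threshold are assumptions.

```python
import numpy as np

def atmospheric_brightness(img, top_ratio=0.001, a_max=240.0):
    """Estimate the atmospheric brightness value A.

    Takes the per-pixel channel minimum (dark channel), sorts the values
    from large to small, averages the top `top_ratio` fraction, and caps
    the result at an assumed preset threshold `a_max`.

    img : H x W x 3 float array with values in [0, 255]
    """
    dark = img.min(axis=2).ravel()                 # step S31: dark channel
    k = max(1, int(round(dark.size * top_ratio)))
    top = np.sort(dark)[-k:]                       # steps S32/S33: brightest k
    return min(float(top.mean()), a_max)
```

Downsampling the image (or the dark channel) before this step, as described above, only shrinks the `dark` array and the proportion range; the estimation itself is unchanged.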
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating another embodiment of step S13 in fig. 1. Specifically, fig. 5 is a flowchart illustrating an embodiment of obtaining an optimal transmittance value of the defogging parameters.
Specifically, the method may include the steps of:
step S51: and carrying out defogging treatment on each pixel point of the image to be treated by utilizing the atmospheric brightness value and at least one preset transmittance value.
In this embodiment, the at least one preset transmittance value forms an arithmetic progression defined by a first term, a last term, and a common difference; for example, with first term 0.3, last term 0.9, and common difference 0.1, the preset transmittance values are the set {0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}. This limits the upper and lower bounds of the transmittance value and avoids abnormal effects. In this embodiment, the preset atmospheric air model may be adopted to perform defogging processing on each pixel point of the image to be processed based on the atmospheric brightness value and the at least one preset transmittance value, specifically according to the following formula:
J(x) = (I(x) − A)/t(x) + A

wherein I(x) is the pixel value of a pixel point of the image to be processed, J(x) is the pixel value of the pixel point after defogging processing, t(x) is the preset transmittance value adopted for the pixel point, and A is the atmospheric brightness value.
By the above formula, the pixel values obtained after defogging each pixel point p(i, j) of the image to be processed with each of the at least one preset transmittance value can be calculated.
In an implementation scenario, in order to further reduce the amount of calculation, before step S51, the image to be processed may be converted to grayscale and the converted image downsampled to obtain a preprocessed image of the image to be processed; the preprocessed image is then divided into a plurality of image sub-blocks. Referring to fig. 6, fig. 6 is a schematic diagram of an embodiment of preprocessing the image to be processed. As shown in fig. 6, the image to be processed P is converted to grayscale and the converted image is downsampled to obtain the preprocessed image P' of the image to be processed. The thick black frames in the preprocessed image P' indicate the divided image sub-blocks; the image sub-blocks shown in fig. 6 have size 2 × 2, and in other implementation scenarios the sub-block size may take other values, for example 20 × 20, which is not limited in detail herein.
After the image to be processed is preprocessed in the above manner to obtain the preprocessed image and the preprocessed image is divided into a plurality of image sub-blocks, each image sub-block can be defogged using the atmospheric brightness value and the at least one preset transmittance value, with every pixel point in a given sub-block using the same preset transmittance value. For example, when a sub-block is defogged using the atmospheric brightness value A and a preset transmittance value t1, the preset transmittance value used by every pixel point in that sub-block is t1. This greatly reduces the amount of calculation for defogging with the atmospheric brightness value and preset transmittance values, improves the speed of obtaining the optimal transmittance value, and thus further improves the speed of the defogging processing.
Step S52: counting the image loss of each pixel point of the image to be processed after defogging processing.
In this embodiment, the image loss is a weighted sum of the information loss and the contrast loss. Specifically, it can be expressed as follows:

E = E_contrast + λ_L · E_loss

wherein E represents the image loss of the image to be processed after defogging, E_contrast represents the contrast loss after defogging, E_loss represents the information loss after defogging, and λ_L represents the weight.
When the image to be processed is preprocessed and divided into a plurality of image sub-blocks, the image loss of each image sub-block after defogging processing can be counted.
Specifically, the information loss E_loss in the above formula can be calculated by the following formula:

E_loss = Σ_p { [min(0, J(p))]² + [max(0, J(p) - 255)]² }

wherein the summation is performed over the pixel points p of the image to be processed, so that pixel values truncated outside the displayable range [0, 255] after the defogging processing are penalized.
In this embodiment, the calculation process of J(p) may specifically refer to the steps in the above embodiments, and is not repeated herein.
In particular, the contrast loss E_contrast in the above formula can be calculated by the following formula:

E_contrast = -Σ_{p∈B} (J(p) - J̄_B)² / N_B

wherein J(p) is the pixel value of a pixel point of the image to be processed after the defogging processing, J̄_B is the average of the pixel values of all the pixel points in the image sub-block B after the defogging processing, t is the preset transmittance value adopted by the image sub-block B for the defogging processing, and N_B represents the number of pixel points in the image sub-block B. Therefore, the contrast loss of a certain image sub-block B defogged with a preset transmittance value t can be calculated.
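As a sketch, assuming the contrast loss is the negative mean squared deviation of the defogged pixel values from the sub-block mean (a common formulation consistent with the variables defined above; the function name is illustrative):

```python
import numpy as np

def contrast_loss(defogged_block):
    """Negative per-block variance: a lower (more negative) loss means
    the defogged sub-block has higher contrast."""
    J = defogged_block.astype(np.float64)
    return -np.mean((J - J.mean()) ** 2)

flat = np.full((2, 2), 100.0)                          # no contrast at all
varied = np.array([[0.0, 200.0], [0.0, 200.0]])        # strong contrast
```

A perfectly flat sub-block has zero loss, while a high-contrast sub-block has a strongly negative loss, so minimizing the total image loss favors transmittance values that restore contrast.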
Step S53: and taking the preset transmittance value corresponding to the minimum image loss as the optimal transmittance value of the corresponding pixel point of the image to be processed.
After the image loss after defogging each pixel point of the image to be processed with each of the at least one preset transmittance value is obtained through calculation, the preset transmittance value corresponding to the minimum image loss of each pixel point can be used as the optimal transmittance value of the corresponding pixel point of the image to be processed. For example, suppose a certain pixel point p(i, j) in the image to be processed has an image loss of E_1 when a preset transmittance value t_1 is adopted, an image loss of E_2 when a preset transmittance value t_2 is adopted, and an image loss of E_3 when a preset transmittance value t_3 is adopted, and the minimum image loss among them is E_2; then the preset transmittance value t_2 corresponding to the image loss E_2 is used as the optimal transmittance value of the corresponding pixel point p(i, j).
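The selection of the optimal transmittance value can be sketched as follows. The candidate list, the weight, and the helper names are illustrative, and the information-loss term is assumed here to penalize defogged values falling outside [0, 255]:

```python
import numpy as np

def best_transmittance(block, A, candidates, weight=5.0):
    """Pick, from a list of preset transmittance values, the one whose
    defogged result minimises E = E_contrast + weight * E_loss (sketch)."""
    def defog(t):
        # invert I = J*t + A*(1 - t) with a block-constant transmittance t
        return (block.astype(np.float64) - A) / t + A

    def loss(J):
        contrast = -np.mean((J - J.mean()) ** 2)              # contrast loss
        clipped = np.minimum(J, 0) ** 2 + np.maximum(J - 255, 0) ** 2
        info = np.mean(clipped)                               # truncation loss
        return contrast + weight * info

    return min(candidates, key=lambda t: loss(defog(t)))

A = 255.0
block = np.array([[80.0, 160.0], [90.0, 170.0]])
t_best = best_transmittance(block, A, candidates=[0.3, 0.6, 0.9])
```

Here the smallest candidate stretches contrast the most but truncates many pixel values below zero, so the intermediate value wins the trade-off.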
In an implementation scenario, when the image to be processed is preprocessed and divided into a plurality of image sub-blocks, the preset transmittance value corresponding to the minimum image loss may be used as the optimal transmittance value of the corresponding image sub-block of the preprocessed image, and the optimal transmittance value of each image sub-block is used as the pixel value of every pixel point in that sub-block, so as to obtain a transmittance map corresponding to the preprocessed image. The transmittance map corresponding to the preprocessed image is then smoothed by guided filtering, and finally the smoothed transmittance map is up-sampled to obtain a transmittance map corresponding to the image to be processed, wherein the pixel value of each pixel point in the transmittance map corresponding to the image to be processed is the optimal transmittance value of the corresponding pixel point in the image to be processed. The specific technical details of guided filtering are prior art in the field and are not described again herein. Referring to fig. 7 and 8 in combination, fig. 7 is a schematic diagram of an embodiment of a transmittance map, and fig. 8 is a schematic diagram of another embodiment of the transmittance map. Specifically, fig. 7 shows the transmittance map corresponding to the preprocessed image, and fig. 8 shows the transmittance map obtained by performing guided filtering on the transmittance map shown in fig. 7 and then up-sampling it. As shown in fig. 7, the transmittance values of all the pixel points in each image sub-block are the same, whereas fig. 8, after guided-filter smoothing and up-sampling, is clearly smoother.
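A minimal sketch of expanding the per-block transmittance values back to a per-pixel map at the original image size. For brevity it replaces the guided-filter smoothing with plain nearest-neighbour repetition, so it reproduces only the resampling structure of the step above, not the smoothing:

```python
import numpy as np

def block_map_to_pixel_map(block_t, block=2, scale=2):
    """Give every pixel in a sub-block the block's transmittance value,
    then undo the earlier downsampling by nearest-neighbour repetition."""
    per_pixel = np.kron(block_t, np.ones((block, block)))   # fill each block
    return np.kron(per_pixel, np.ones((scale, scale)))      # back to full size

block_t = np.array([[0.4, 0.8]])          # one row of two sub-block values
t_map = block_map_to_pixel_map(block_t, block=2, scale=2)
```

The resulting map is piecewise constant per block, exactly the blocky appearance of fig. 7; guided filtering would then smooth these plateaus before use.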
According to the scheme, the loss caused by different preset transmittance values is calculated by combining the information loss and the contrast loss, and the block processing is performed on the down-sampling image, so that the processing load can be reduced, and the processing speed can be increased.
Referring to fig. 9, fig. 9 is a schematic flowchart illustrating an image defogging method according to another embodiment of the present application. Specifically, after the step S14 "performing the defogging process on the image to be processed by using the defogging parameters" in the above embodiment, the method may further include the following steps:
step S91: and acquiring a contrast optimization parameter by using a preset contrast optimization value and a first optimization function.
In this embodiment, the preset contrast optimization value is greater than or equal to-1 and less than or equal to 1, and the first optimization function is represented as:
k=tan((45+44*c)/180*π)
wherein k is the contrast optimization parameter, and c is the preset contrast optimization value.
Step S92: and optimizing the image to be processed after the defogging processing by using the contrast optimization parameter, the preset brightness optimization value and the second optimization function.
In this embodiment, the preset brightness optimization value is greater than or equal to -1 and less than or equal to 1, and the second optimization function is represented as:
y=(x-127.5*(1-b))*k+127.5*(1+b)
wherein y is the pixel value of the pixel point of the image to be processed after optimization, x is the pixel value of the pixel point of the image to be processed after defogging, b is a preset brightness optimization value, and k is a contrast optimization parameter.
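The two optimization functions above can be applied directly as written; a sketch (with c = 0 and b = 0 the mapping reduces to the identity, since tan(45°) = 1):

```python
import math
import numpy as np

def optimize(x, c=0.0, b=0.0):
    """Apply k = tan((45 + 44*c)/180 * pi) and
    y = (x - 127.5*(1 - b)) * k + 127.5*(1 + b), with c, b in [-1, 1]."""
    k = math.tan((45 + 44 * c) / 180 * math.pi)             # contrast parameter
    return (np.asarray(x, dtype=np.float64) - 127.5 * (1 - b)) * k + 127.5 * (1 + b)

identity = optimize(np.array([0.0, 127.5, 255.0]))          # c = b = 0
brighter = optimize(np.array([127.5]), b=0.2)               # raise brightness
```

Positive c steepens the mapping around the mid-gray 127.5 (stretching contrast), and positive b shifts the output upward (raising brightness).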
By means of the scheme, the stretching of the brightness and the contrast can be achieved under the condition that the image contrast and the image brightness are low after the defogging treatment, and therefore the visual effect is improved.
Referring to fig. 10, fig. 10 is a schematic flowchart illustrating a video defogging method according to an embodiment of the present application. Specifically, the method may include the steps of:
step S1010: and selecting one frame as an image to be processed every other preset number of frames in the video data.
In this embodiment, the video data may be captured by an imaging device. The video data may comprise a plurality of frames of images, for example: P1, P2, P3, P4, P5, P6, P7, P8 and P9. The preset number can be set by a user according to the actual application scenario, for example, 3, 4, 5, etc. Taking the preset number of 2 as an example, one frame is selected as the image to be processed every 2 frames of images in the video data, so the selected images to be processed are: P1, P4 and P7.
In addition, the video data may also be a video frame image transmitted in real time, at this time, the received first frame image may be used as an image to be processed, and then the received video frame image after every preset number of frame images is used as a new image to be processed, and so on.
Step S1020: and carrying out defogging treatment on the image to be treated.
In this embodiment, the defogging process on the image to be processed is implemented by using the steps in any of the image defogging method embodiments. When the image to be processed is defogged through the steps in any of the above image defogging method embodiments, the defogging parameters corresponding to the image to be processed, such as the atmospheric brightness value of the image to be processed and the optimal transmittance value of each pixel point in the image to be processed, can be obtained.
Taking the above video data as an example, the selected images to be processed are respectively: P1, P4 and P7; the images to be processed P1, P4 and P7 may each be defogged through the steps in any of the above embodiments of the image defogging method, and during the defogging processing, the defogging parameters of the images to be processed P1, P4 and P7 can be obtained.
Specifically, the step of performing the defogging process on the image to be processed may refer to the step in any of the image defogging method embodiments described above, and details of this embodiment are not repeated herein.
Step S1030: and carrying out defogging treatment on the preset number of frame images behind the image to be treated by adopting the same defogging parameters and image defogging method as the image to be treated.
After the image to be processed is subjected to defogging processing, a preset number of frame images behind the image to be processed can be subjected to defogging processing by adopting the same defogging parameters and image defogging method as those of the image to be processed.
Taking the above video data as an example, the selected images to be processed are respectively: P1, P4 and P7. After the images to be processed are defogged, the 2 frames of images following the image to be processed P1, namely the images P2 and P3, can be defogged using the same defogging parameters and image defogging method as the image to be processed P1; the 2 frames following the image to be processed P4, namely the images P5 and P6, are defogged using the same defogging parameters and image defogging method as the image to be processed P4; and the 2 frames following the image to be processed P7, namely the images P8 and P9, are defogged using the same defogging parameters and image defogging method as the image to be processed P7.
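The reuse of defogging parameters across frames can be sketched as a mapping from every frame to the earlier selected frame whose parameters it borrows (frame names follow the example above; the helper is illustrative):

```python
def parameter_source(frames, preset=2):
    """Map every frame to the selected frame whose defogging parameters
    it reuses: one frame is selected every `preset` frames, and the
    `preset` frames after it inherit its parameters."""
    step = preset + 1
    return {f: frames[(i // step) * step] for i, f in enumerate(frames)}

frames = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8", "P9"]
sources = parameter_source(frames, preset=2)
```

With a preset number of 2, frames P2 and P3 reuse the parameters of P1, P5 and P6 those of P4, and P8 and P9 those of P7, so the costly parameter estimation runs only once per group.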
According to the scheme, the defogging process is carried out on the images of the preset number of frames after the images to be processed by adopting the defogging parameters and the image defogging method which are the same as those of the images to be processed, so that the memory consumption caused by repeated calculation of the defogging parameters is avoided, the processing speed is accelerated, and the processing time is saved.
Referring to fig. 11, fig. 11 is a schematic diagram of a frame of an image defogging device 1100 according to an embodiment of the present application. In this embodiment, the image defogging device 1100 includes a calculation module 1110, an acquisition module 1120 and a processing module 1130, where the calculation module 1110 is configured to calculate a fog density value in an image to be processed captured by an imaging device; the obtaining module 1120 is configured to obtain a defogging parameter of the image to be processed if the density value meets a preset condition; the processing module 1130 is configured to perform a defogging process on the image to be processed by using the defogging parameters.
According to the above scheme, the density value of fog in the image to be processed captured by the imaging device is calculated, and whether the density value meets a preset condition is judged; if the preset condition is met, the defogging parameters of the image to be processed are acquired, and the image to be processed is defogged by using the defogging parameters. That is, before the defogging parameters of the image to be processed are acquired, it is judged whether the density value of fog in the image to be processed meets the preset condition. This effectively avoids acquiring defogging parameters for every image captured by the imaging device, which is beneficial to improving the defogging efficiency; in addition, it effectively avoids the adverse effects of defogging images that do not meet the preset condition, which is beneficial to enhancing the defogging accuracy.
In some embodiments, the obtaining module 1120 further includes a first obtaining sub-module configured to obtain an atmospheric brightness value of the image to be processed, the obtaining module 1120 further includes a second obtaining sub-module configured to obtain an optimal transmittance value of each pixel of the image to be processed based on the atmospheric brightness value, and the processing module 1130 is specifically configured to perform defogging processing on each pixel of the image to be processed by using a preset atmospheric air model based on the atmospheric brightness value and the optimal transmittance value of each pixel of the image to be processed.
In some embodiments, the second obtaining submodule includes a defogging processing unit, configured to perform defogging processing on each pixel point of the image to be processed, respectively, by using the atmospheric brightness value and the at least one preset transmittance value; the second obtaining submodule also comprises a loss statistical unit which is used for counting the image loss of each pixel point of the image to be processed after the defogging processing is carried out on each pixel point; the second obtaining submodule further comprises a transmittance screening unit, and the transmittance screening unit is used for taking the preset transmittance value corresponding to the minimum image loss as the optimal transmittance value of the corresponding pixel point of the image to be processed.
In some embodiments, the second obtaining sub-module further includes a preprocessing unit, configured to perform gray scale conversion on the image to be processed, down-sample the converted image to obtain a preprocessed image of the image to be processed, and divide the preprocessed image into a plurality of image sub-blocks; the defogging processing unit is further configured to defog the image sub-blocks by using the atmospheric brightness value and the at least one preset transmittance value; the loss statistical unit is further configured to count the image loss of each image sub-block after the defogging processing; the transmittance screening unit is further configured to take the optimal transmittance value of each image sub-block as the pixel value of every pixel point in that sub-block to obtain a transmittance map corresponding to the preprocessed image, smooth the transmittance map corresponding to the preprocessed image by guided filtering, and up-sample the smoothed transmittance map to obtain a transmittance map corresponding to the image to be processed. In one implementation scenario, the at least one preset transmittance value is an arithmetic progression defined by a first term, a last term and a common difference. In one implementation scenario, the image loss is a weighted sum of information loss and contrast loss.
Unlike the foregoing embodiment, the present embodiment jointly calculates the loss caused by different preset transmittance values by using the information loss and the contrast loss, and performs block processing on the downsampled image, thereby reducing the processing load and increasing the processing speed.
In some embodiments, the preset atmospheric air model is:
I(x)=J(x)t(x)+A(1-t(x))
wherein I(x) is the pixel value of a pixel point of the image to be processed, J(x) is the pixel value of the pixel point after the defogging processing, t(x) is the optimal transmittance value of the pixel point, and A is the atmospheric brightness value.
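Solving the preset atmospheric model for J(x) gives the defogged pixel value directly; a sketch (the lower bound on t(x) is an illustrative safeguard against division by values near zero):

```python
import numpy as np

def recover(I, t, A, t_min=0.1):
    """Solve I(x) = J(x)*t(x) + A*(1 - t(x)) for the defogged value J(x)."""
    t = np.maximum(np.asarray(t, dtype=np.float64), t_min)  # avoid division by ~0
    return (np.asarray(I, dtype=np.float64) - A) / t + A

# a pixel darker than the atmospheric light is pushed further from A;
# a pixel equal to A stays at A regardless of the transmittance
J = recover(I=np.array([180.0, 230.0]), t=np.array([0.5, 0.5]), A=230.0)
```

Low transmittance amplifies the difference between the observed pixel value and the atmospheric brightness, which is what restores the contrast the fog removed.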
In some embodiments, the first obtaining submodule includes a dark channel image constructing unit configured to use a minimum brightness value of each channel in each pixel point of the image to be processed as a brightness value of a pixel point corresponding to the dark channel image of the image to be processed, the first obtaining submodule further includes a sorting unit configured to sort each pixel point of the dark channel image from large to small according to the brightness value, the first obtaining submodule further includes a screening unit configured to select pixel points having brightness values within a preset proportion range, and use an average value of the brightness values of the selected pixel points as an atmospheric brightness value of the image to be processed.
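A sketch of the atmospheric-brightness estimation described above: build the dark-channel image from the per-pixel channel minima, sort its brightness values from large to small, and average the top fraction (the fraction value is an illustrative choice for the preset proportion range):

```python
import numpy as np

def atmospheric_light(image, top_fraction=0.001):
    """Estimate A from the brightest fraction of the dark-channel image."""
    dark = image.astype(np.float64).min(axis=2)        # dark channel image
    flat = np.sort(dark.ravel())[::-1]                 # brightness, descending
    n = max(1, int(len(flat) * top_fraction))          # pixels within the range
    return flat[:n].mean()

img = np.zeros((4, 4, 3))
img[0, 0] = [200.0, 210.0, 220.0]                      # one bright, hazy pixel
A = atmospheric_light(img, top_fraction=1 / 16)
```

Using the dark channel rather than raw brightness keeps bright but saturated objects (for example white text or lamps with one dominant channel) from being mistaken for atmospheric light.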
In some embodiments, the first obtaining sub-module further includes a down-sampling unit configured to down-sample the image to be processed according to a preset sampling rate to obtain a down-sampled image of the image to be processed, the dark channel image constructing unit is further configured to use a minimum brightness value of each channel in each pixel of the down-sampled image as a brightness value of a pixel corresponding to the dark channel image of the down-sampled image, the screening unit is further configured to adjust a preset proportion range based on the preset sampling rate, select a pixel having a brightness value within the adjusted preset proportion range, and use an average value of the brightness values of the selected pixels as an atmospheric brightness value of the image to be processed. In an implementation scenario, the first obtaining sub-module further includes a determining unit, configured to use the preset atmospheric brightness threshold as an atmospheric brightness value of the image to be processed if the atmospheric brightness value is greater than the preset atmospheric brightness threshold.
Different from the embodiment, the calculation amount can be further reduced, the speed of acquiring the atmospheric brightness value can be increased, and the speed of defogging processing can be further increased by constructing the dark channel image of the image to be processed, calculating the average brightness value of a plurality of pixel points with larger brightness values in the dark channel image within the preset proportion range, and taking the average brightness value as the atmospheric brightness value of the image to be processed.
In some embodiments, the calculation module 1110 includes a first calculation sub-module configured to obtain a first density value of the image to be processed based on an image contrast condition of the image to be processed, the calculation module 1110 further includes a second calculation sub-module configured to obtain a second density value of the image to be processed based on an image color attenuation condition of the image to be processed, and the preset condition includes any one of: the first concentration value is larger than a first preset threshold value, and the second concentration value is larger than a second preset threshold value.
Different from the foregoing embodiment, the first density value of the image to be processed is obtained according to the image contrast condition of the image to be processed, and the second density value of the image to be processed is obtained according to the image color attenuation condition of the image to be processed, so that when the first density value is greater than the first preset threshold value, or when the second density value is greater than the second preset threshold value, it can be determined that the density value of the fog in the image to be processed meets the preset condition, and thus the fault tolerance rate can be improved, and the robustness of the image defogging process can be enhanced.
In some embodiments, the first calculation submodule includes a first difference statistical unit configured to calculate a first difference between pixel values of each pixel and a plurality of neighboring pixels of each pixel to obtain an image contrast condition of the image to be processed, the first calculation submodule further includes a candidate concentration value calculation unit configured to calculate a candidate concentration value corresponding to each pixel by using a preset concentration calculation method and the first difference, and the first calculation submodule further includes a first concentration value screening unit configured to select a maximum candidate concentration value as the first concentration value.
In some embodiments, the second calculation submodule includes a second difference statistical unit configured to count a second difference between the brightness value and the saturation value of each pixel point to obtain an image color attenuation condition of the image to be processed, and the second calculation submodule further includes a second density value screening unit configured to select a largest second difference as the second density value.
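A sketch of the colour-attenuation cue used for the second density value, assuming HSV-style brightness (value) and saturation computed from RGB; the exact colour-space definitions are illustrative assumptions:

```python
import numpy as np

def second_density(rgb):
    """Per pixel, brightness minus saturation; the maximum difference over
    the image serves as the second fog-density value (sketch)."""
    x = rgb.astype(np.float64) / 255.0
    v = x.max(axis=2)                                  # brightness (value)
    mn = x.min(axis=2)
    s = np.where(v > 0, (v - mn) / np.maximum(v, 1e-9), 0.0)  # saturation
    return float((v - s).max())

hazy = np.full((2, 2, 3), 230, dtype=np.uint8)         # bright, desaturated
vivid = np.zeros((2, 2, 3), dtype=np.uint8)
vivid[..., 0] = 230                                    # bright, fully saturated
d_hazy = second_density(hazy)
d_vivid = second_density(vivid)
```

Haze makes pixels bright but desaturated, so the bright gray patch scores much higher than the equally bright saturated one, which is the attenuation behaviour the second density value measures.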
In some embodiments, the image defogging device 1100 further includes an optimization parameter obtaining module configured to obtain the contrast optimization parameter by using the preset contrast optimization value and the first optimization function, and the image defogging device 1100 further includes an optimization processing module configured to optimize the image to be processed after the defogging processing by using the contrast optimization parameter, the preset brightness optimization value and the second optimization function. In one implementation scenario, the preset contrast optimization value and the preset brightness optimization value are greater than or equal to-1 and less than or equal to 1. In one implementation scenario, the first optimization function is represented as:
k=tan((45+44*c)/180*π)
wherein k is the contrast optimization parameter, and c is the preset contrast optimization value.
In one implementation scenario, the second optimization function is represented as:
y=(x-127.5*(1-b))*k+127.5*(1+b)
wherein y is the pixel value of the pixel point of the image to be processed after optimization, x is the pixel value of the pixel point of the image to be processed after defogging, b is a preset brightness optimization value, and k is a contrast optimization parameter.
Different from the foregoing embodiment, in the embodiment, for the case that the image contrast and the image brightness are low after the defogging processing, the stretching of the brightness and the contrast is realized, so that the visual effect is improved.
Referring to fig. 12, fig. 12 is a schematic block diagram of a video defogging device 1200 according to an embodiment of the present application. In this embodiment, the video defogging device 1200 includes a selecting module 1210 and a processing module 1220, where the selecting module 1210 is configured to select one frame as an image to be processed every other preset number of frames of images in video data, where the video data is obtained by shooting with a camera device; the processing module 1220 is configured to perform defogging processing on an image to be processed through the steps in any of the above image defogging method embodiments; the processing module 1220 is further configured to perform defogging processing on a preset number of frame images after the image to be processed by using the same defogging parameters and image defogging method as those of the image to be processed.
According to the scheme, the defogging process is carried out on the images of the preset number of frames after the images to be processed by adopting the defogging parameters and the image defogging method which are the same as those of the images to be processed, so that the memory consumption caused by repeated calculation of the defogging parameters is avoided, the processing speed is accelerated, and the processing time is saved.
Referring to fig. 13, fig. 13 is a schematic diagram of a frame of a defogging device 1300 according to an embodiment of the present application. In this embodiment, the defogging device 1300 includes a memory 1310 and a processor 1320 coupled to each other, and the processor 1320 is configured to execute program instructions stored in the memory 1310 to implement the steps in any of the above-described image defogging method embodiments, so as to improve the defogging efficiency and enhance the defogging accuracy, or to implement the steps in any of the above-described video defogging method embodiments, so as to avoid memory consumption caused by repeatedly calculating the defogging parameters, accelerate the processing speed, and save the processing time.
The defogging device 1300 in the present embodiment may be a processing device, such as a server, a microcomputer, or the like, communicatively connected to the imaging device, or may be the imaging device itself, so as to implement the defogging process on the captured image while the imaging device captures the image, or to perform the defogging process on the image in the captured video data while capturing the video data.
Referring to fig. 14, fig. 14 is a schematic diagram of a memory device 1400 according to an embodiment of the present application. The storage device 1400 stores a program instruction 1410 that can be executed by the processor, and the program instruction 1410 is used to implement the steps in any of the above-described embodiments of the image defogging method, thereby improving the defogging efficiency and enhancing the defogging accuracy, or implement the steps in any of the above-described embodiments of the video defogging method, thereby avoiding the memory consumption caused by repeatedly calculating the defogging parameters, increasing the processing speed, and saving the processing time.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.

Claims (18)

1. An image defogging method, comprising:
calculating the density value of fog in the image to be processed, which is shot by the camera device;
if the density value meets a preset condition, acquiring a defogging parameter of the image to be processed;
and carrying out defogging treatment on the image to be treated by utilizing the defogging parameters.
2. The image defogging method according to claim 1, wherein the acquiring defogging parameters of the image to be processed comprises:
acquiring an atmospheric brightness value of the image to be processed;
acquiring the optimal transmittance value of each pixel point of the image to be processed based on the atmospheric brightness value;
the defogging processing on the image to be processed by utilizing the defogging parameters comprises the following steps:
and based on the atmospheric brightness value and the optimal transmittance value of each pixel point of the image to be processed, carrying out defogging treatment on each pixel point of the image to be processed by adopting a preset atmospheric air model.
3. The image defogging method according to claim 2, wherein said obtaining an optimal transmittance value for each pixel point of said image to be processed based on said atmospheric brightness value comprises:
carrying out defogging treatment on each pixel point of the image to be processed respectively by utilizing the atmospheric brightness value and at least one preset transmittance value;
counting the image loss of each pixel point of the image to be processed after defogging processing;
and taking the preset transmittance value corresponding to the minimum image loss as the optimal transmittance value of the corresponding pixel point of the image to be processed.
4. The image defogging method according to claim 3, wherein before said defogging process is respectively performed on each pixel point of the image to be processed by using the atmospheric brightness value and at least one preset transmittance value, the method further comprises:
carrying out gray level conversion on the image to be processed, and carrying out down-sampling on the converted image to be processed to obtain a preprocessed image of the image to be processed;
dividing the pre-processed image into a plurality of image sub-blocks;
the defogging treatment of each pixel point of the image to be processed by utilizing the atmospheric brightness value and at least one preset transmittance value comprises the following steps:
utilizing the atmospheric brightness value and the at least one preset transmittance value to respectively carry out defogging treatment on the plurality of image subblocks;
the image loss after the defogging processing of each pixel point of the image to be processed is counted comprises the following steps:
counting the image loss of each image sub-block after defogging treatment;
the taking the preset transmittance value corresponding to the minimum image loss as the optimal transmittance value of the corresponding pixel point of the image to be processed includes:
taking a preset transmittance value corresponding to the minimum image loss as an optimal transmittance value of a corresponding image sub-block of the preprocessed image;
taking the optimal transmittance value of the corresponding image subblock as the pixel value of each pixel point in the corresponding image subblock to obtain a transmittance map corresponding to the preprocessed image;
smoothing the transmittance graph corresponding to the preprocessed image by adopting guide filtering;
performing up-sampling on the smoothed transmittance graph to obtain a transmittance graph corresponding to the image to be processed;
and the pixel value of each pixel point in the transmissivity graph corresponding to the image to be processed is the optimal transmissivity value of the corresponding pixel point in the image to be processed.
5. The image defogging method according to claim 3, wherein the at least one preset transmittance value is an arithmetic progression defined by a first term, a last term and a common difference; and/or,
the image loss is a weighted sum of information loss and contrast loss.
6. The image defogging method according to claim 2, wherein the preset atmospheric scattering model is:
I(x) = J(x)t(x) + A(1 - t(x))
wherein I(x) is the pixel value of a pixel point of the image to be processed, J(x) is the pixel value of that pixel point after the defogging processing, t(x) is the optimal transmittance value of that pixel point, and A is the atmospheric brightness value.
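Given A and t(x), the model of claim 6 inverts directly to recover the defogged pixel value J(x). A minimal sketch (the lower bound `t_min` on the transmittance is an assumption added to keep the division stable, not part of the claim):

```python
import numpy as np

def dehaze_pixelwise(I, t, A, t_min=0.1):
    """Invert I(x) = J(x)t(x) + A(1 - t(x)) for J(x).
    t is clamped below by t_min (assumed) so small transmittances do not
    blow up the division; the result is clipped to the displayable range."""
    t = np.maximum(t, t_min)
    J = (I.astype(np.float64) - A) / t + A
    return np.clip(J, 0.0, 255.0)
```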
7. The image defogging method according to claim 2, wherein the acquiring an atmospheric brightness value of the image to be processed comprises:
taking the minimum brightness value among the channels of each pixel point of the image to be processed as the brightness value of the corresponding pixel point of a dark channel image of the image to be processed;
sorting the pixel points of the dark channel image in descending order of brightness value;
and selecting the pixel points whose brightness values fall within a preset proportion range, and taking the average of the brightness values of the selected pixel points as the atmospheric brightness value of the image to be processed.
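The three steps of claim 7 (dark channel, descending sort, average of the top proportion) can be sketched with NumPy. The 0.1% proportion below is an assumption for illustration; the claim only says the range is preset:

```python
import numpy as np

def atmospheric_brightness(img_rgb, top_fraction=0.001):
    """Estimate the atmospheric brightness value A per claim 7:
    the per-pixel channel minimum forms the dark channel image, and A is
    the average of the brightest top_fraction of its pixels."""
    dark = img_rgb.min(axis=2).ravel()           # min over channels per pixel
    n = max(1, int(round(top_fraction * dark.size)))
    brightest = np.sort(dark)[::-1][:n]          # descending sort, keep top n
    return float(brightest.mean())
```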
8. The image defogging method according to claim 7, wherein before taking the minimum brightness value among the channels of each pixel point of the image to be processed as the brightness value of the corresponding pixel point of the dark channel image, the method comprises:
down-sampling the image to be processed at a preset sampling rate to obtain a down-sampled image of the image to be processed;
the taking the minimum brightness value among the channels of each pixel point of the image to be processed as the brightness value of the corresponding pixel point of the dark channel image comprises:
taking the minimum brightness value among the channels of each pixel point of the down-sampled image as the brightness value of the corresponding pixel point of the dark channel image of the down-sampled image;
the selecting the pixel points whose brightness values fall within a preset proportion range, and taking the average of the brightness values of the selected pixel points as the atmospheric brightness value of the image to be processed comprises:
adjusting the preset proportion range based on the preset sampling rate;
and selecting the pixel points whose brightness values fall within the adjusted preset proportion range, and taking the average of the brightness values of the selected pixel points as the atmospheric brightness value of the image to be processed.
9. The image defogging method according to claim 7, wherein after selecting the pixel points whose brightness values fall within the preset proportion range and taking the average of their brightness values as the atmospheric brightness value of the image to be processed, the method further comprises:
if the atmospheric brightness value is greater than a preset atmospheric brightness threshold, taking the preset atmospheric brightness threshold as the atmospheric brightness value of the image to be processed.
10. The image defogging method according to claim 1, wherein the calculating the fog density value in the image to be processed captured by the imaging device comprises:
acquiring a first density value of the image to be processed based on the image contrast of the image to be processed; and,
acquiring a second density value of the image to be processed based on the image color attenuation of the image to be processed;
wherein the preset condition includes any one of: the first density value being greater than a first preset threshold, and the second density value being greater than a second preset threshold.
11. The image defogging method according to claim 10, wherein the acquiring a first density value of the image to be processed based on the image contrast of the image to be processed comprises:
counting first difference values between the pixel value of each pixel point and the pixel values of a plurality of its neighborhood pixel points to obtain the image contrast of the image to be processed;
calculating a candidate density value for each pixel point by using a preset density calculation mode and the first difference values;
and selecting the maximum candidate density value as the first density value.
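Claim 11 leaves the "preset density calculation mode" open; the sketch below assumes one plausible choice, namely that fog density rises as local contrast falls (mapping `1 - diff/255` is an illustration, not the patented mode), using a 4-neighbourhood for the first difference values:

```python
import numpy as np

def first_density_value(gray):
    """Claim 11 sketch: for each pixel, average the absolute differences
    with its 4-neighbourhood (first difference values), map low local
    contrast to a high candidate fog density via 1 - diff/255 (assumed
    mapping), then take the maximum candidate as the first density value."""
    g = gray.astype(np.float64)
    pad = np.pad(g, 1, mode="edge")
    diff = (np.abs(pad[1:-1, 1:-1] - pad[:-2, 1:-1])    # up
          + np.abs(pad[1:-1, 1:-1] - pad[2:, 1:-1])     # down
          + np.abs(pad[1:-1, 1:-1] - pad[1:-1, :-2])    # left
          + np.abs(pad[1:-1, 1:-1] - pad[1:-1, 2:])) / 4.0  # right
    candidates = 1.0 - diff / 255.0
    return float(candidates.max())
```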
12. The image defogging method according to claim 10, wherein the acquiring a second density value of the image to be processed based on the image color attenuation of the image to be processed comprises:
counting a second difference value between the brightness value and the saturation value of each pixel point to obtain the image color attenuation of the image to be processed;
and selecting the largest second difference value as the second density value.
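Claim 12 follows the color attenuation idea: fog raises brightness while lowering saturation, so a large gap between the two suggests dense fog. A sketch using HSV-style brightness (maximum channel) and saturation, both normalized to [0, 1] (the normalization is an assumption):

```python
import numpy as np

def second_density_value(img_rgb):
    """Per-pixel brightness minus saturation (second difference value);
    the maximum gap over the image is taken as the second density value."""
    rgb = img_rgb.astype(np.float64) / 255.0
    v = rgb.max(axis=2)                      # HSV value (brightness)
    s = np.where(v > 0, (v - rgb.min(axis=2)) / np.maximum(v, 1e-9), 0.0)
    return float((v - s).max())
```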
13. The image defogging method according to claim 1, wherein after the defogging processing of the image to be processed by using the defogging parameters, the method further comprises:
acquiring a contrast optimization parameter by using a preset contrast optimization value and a first optimization function;
optimizing the defogged image to be processed by using the contrast optimization parameter, a preset brightness optimization value and a second optimization function;
wherein the preset contrast optimization value and the preset brightness optimization value are greater than or equal to -1 and less than or equal to 1;
and/or, the first optimization function is expressed as:
k=tan((45+44*c)/180*π)
wherein k is the contrast optimization parameter and c is the preset contrast optimization value;
and/or, the second optimization function is expressed as:
y=(x-127.5*(1-b))*k+127.5*(1+b)
wherein y is the pixel value of a pixel point of the image to be processed after the optimization, x is the pixel value of that pixel point after the defogging processing, b is the preset brightness optimization value, and k is the contrast optimization parameter.
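The two optimization functions of claim 13 translate directly to code. Note that c = 0 gives k = tan(45°) = 1 and b = 0 leaves the pivot at mid-gray 127.5, so (c, b) = (0, 0) is the identity mapping; the final clipping to [0, 255] is an assumption for display:

```python
import math
import numpy as np

def contrast_brightness_optimize(J, c=0.0, b=0.0):
    """Claim 13: k = tan((45 + 44*c)/180*pi) scales contrast about a
    mid-gray pivot shifted by the brightness optimization value b,
    with c and b each constrained to [-1, 1]."""
    assert -1.0 <= c <= 1.0 and -1.0 <= b <= 1.0
    k = math.tan((45.0 + 44.0 * c) / 180.0 * math.pi)
    y = (J.astype(np.float64) - 127.5 * (1.0 - b)) * k + 127.5 * (1.0 + b)
    return np.clip(y, 0.0, 255.0)
```

The 44 (rather than 45) in the first function keeps the angle strictly inside (1°, 89°), so k stays finite and positive for any valid c.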
14. A video defogging method, comprising:
selecting one frame as an image to be processed every preset number of frames in video data, wherein the video data is captured by an imaging device;
performing defogging processing on the image to be processed by the image defogging method according to any one of claims 1 to 13;
and performing defogging processing on the preset number of frames following the image to be processed by using the same defogging parameters and image defogging method as those of the image to be processed.
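Claim 14's frame-interval strategy amortizes the cost of parameter estimation: parameters are recomputed on one key frame and reused for the following frames. A sketch with hypothetical callables `estimate_params` and `apply_defog` standing in for the image defogging method:

```python
def defog_video(frames, interval, estimate_params, apply_defog):
    """Every `interval` frames, re-estimate defogging parameters on one
    key frame; reuse those parameters for the frames that follow.
    estimate_params and apply_defog are hypothetical callables standing
    in for the parameter-acquisition and defogging steps of claims 1-13."""
    out, params = [], None
    for i, frame in enumerate(frames):
        if i % (interval + 1) == 0:        # key frame: refresh parameters
            params = estimate_params(frame)
        out.append(apply_defog(frame, params))
    return out
```

Since fog density changes slowly relative to the frame rate, reusing parameters over a short window trades negligible accuracy for a large reduction in per-frame work.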
15. An image defogging device, comprising:
a calculation module, configured to calculate the fog density value in an image to be processed captured by an imaging device;
an acquisition module, configured to acquire defogging parameters of the image to be processed if the density value meets a preset condition;
and a processing module, configured to perform defogging processing on the image to be processed by using the defogging parameters.
16. A video defogging device, comprising:
a selection module, configured to select one frame as an image to be processed every preset number of frames in video data, wherein the video data is captured by an imaging device;
a processing module, configured to perform defogging processing on the image to be processed by the image defogging method according to any one of claims 1 to 13;
wherein the processing module is further configured to perform defogging processing on the preset number of frames following the image to be processed by using the same defogging parameters and image defogging method as those of the image to be processed.
17. A defogging device comprising a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the image defogging method according to any one of claims 1 to 13 or the video defogging method according to claim 14.
18. A storage device storing program instructions executable by a processor to implement the image defogging method of any one of claims 1 to 13 or the video defogging method of claim 14.
CN201911122297.8A 2019-11-15 2019-11-15 Image and video defogging method and related device Pending CN110930326A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911122297.8A CN110930326A (en) 2019-11-15 2019-11-15 Image and video defogging method and related device

Publications (1)

Publication Number Publication Date
CN110930326A true CN110930326A (en) 2020-03-27

Family

ID=69853104

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911122297.8A Pending CN110930326A (en) 2019-11-15 2019-11-15 Image and video defogging method and related device

Country Status (1)

Country Link
CN (1) CN110930326A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628133A (en) * 2021-07-28 2021-11-09 武汉三江中电科技有限责任公司 Rain and fog removing method and device based on video image

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120328205A1 (en) * 2011-06-27 2012-12-27 Jiangtao Wen Image enhancement for challenging lighting conditions
CN104272347A (en) * 2012-05-03 2015-01-07 Sk电信有限公司 Image processing apparatus for removing haze contained in still image and method thereof
CN106780380A (en) * 2016-12-09 2017-05-31 电子科技大学 A kind of image defogging method and system
CN107424198A (en) * 2017-07-27 2017-12-01 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN107481199A (en) * 2017-07-27 2017-12-15 广东欧珀移动通信有限公司 Image defogging processing method, device, storage medium and mobile terminal
CN107993198A (en) * 2017-10-24 2018-05-04 中国科学院长春光学精密机械与物理研究所 Optimize the image defogging method and system of contrast enhancing
CN109961070A (en) * 2019-03-22 2019-07-02 国网河北省电力有限公司电力科学研究院 The method of mist body concentration is distinguished in a kind of power transmission line intelligent image monitoring
CN110381259A (en) * 2019-08-13 2019-10-25 广州欧科信息技术股份有限公司 Mural painting image collecting method, device, computer equipment and storage medium
CN110428371A (en) * 2019-07-03 2019-11-08 深圳大学 Image defogging method, system, storage medium and electronic equipment based on super-pixel segmentation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KAIMING HE等: "Single Image Haze Removal Using Dark Channel Prior", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *

Similar Documents

Publication Publication Date Title
CN110276767B (en) Image processing method and device, electronic equipment and computer readable storage medium
Park et al. Single image dehazing with image entropy and information fidelity
CN107424133B (en) Image defogging method and device, computer storage medium and mobile terminal
CN108419028B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN100562894C (en) A kind of image combining method and device
CN111292264A (en) Image high dynamic range reconstruction method based on deep learning
CN110544213A (en) Image defogging method based on global and local feature fusion
KR101664123B1 (en) Apparatus and method of creating high dynamic range image empty ghost image by using filtering
CN107395991B (en) Image synthesis method, image synthesis device, computer-readable storage medium and computer equipment
JP2012168936A (en) Animation processing device and animation processing method
CN107277299A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN113313626A (en) Image processing method, image processing device, electronic equipment and storage medium
CN107194901B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN107341782B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN115393216A (en) Image defogging method and device based on polarization characteristics and atmospheric transmission model
CN110175967B (en) Image defogging processing method, system, computer device and storage medium
CN111192213A (en) Image defogging adaptive parameter calculation method, image defogging method and system
CN113888509A (en) Method, device and equipment for evaluating image definition and storage medium
CN110930326A (en) Image and video defogging method and related device
CN110738624B (en) Area-adaptive image defogging system and method
CN109903253B (en) Road traffic video defogging algorithm based on depth-of-field prior
CN114648467B (en) Image defogging method and device, terminal equipment and computer readable storage medium
CN103595933A (en) Method for image noise reduction
CN107481199B (en) Image defogging method and device, storage medium and mobile terminal
CN114418874A (en) Low-illumination image enhancement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200327
