CN114663311A - Image processing method, image processing apparatus, electronic device, and storage medium - Google Patents

Image processing method, image processing apparatus, electronic device, and storage medium

Info

Publication number
CN114663311A
Authority
CN
China
Prior art keywords
image
infrared
visible light
level
component
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210302825.3A
Other languages
Chinese (zh)
Inventor
徐梦婕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210302825.3A priority Critical patent/CN114663311A/en
Publication of CN114663311A publication Critical patent/CN114663311A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method comprises the following steps: acquiring an infrared image and a visible light image corresponding to the same scene; acquiring a brightness image corresponding to a brightness channel of the visible light image; obtaining a fog weight map according to the visible light image; fusing the infrared image and the brightness image according to the fog weight map to obtain a defogged brightness image; and generating a defogged image according to the defogged brightness image and the color values of the visible light image. The image processing method, the image processing apparatus, the electronic device, and the storage medium fuse the infrared image and the visible light image of the same scene by exploiting the fact that infrared light penetrates a haze environment better than visible light, which improves the clarity of the target-scene image, reduces distortion in the defogged image, and improves its imaging quality.

Description

Image processing method, image processing apparatus, electronic device, and storage medium
Technical Field
The present disclosure relates to image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
The light received by an image sensor is a mixture of scattered scene light and atmospheric light; fog, haze, smoke, water vapor, water droplets, and the like in the air absorb and scatter the scene light, which degrades image quality.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, an electronic device and a computer readable storage medium.
The image processing method of the embodiment of the application comprises the following steps: acquiring an infrared image and a visible light image corresponding to the same scene; acquiring a brightness image corresponding to a brightness channel of the visible light image; obtaining a fog weight map according to the visible light image; fusing the infrared image and the brightness image according to the fog weight map to obtain a defogged brightness image; and generating a defogged image according to the defogged brightness image and the color value of the visible light image.
The image processing device of the embodiment of the application comprises an image acquisition module, a first processing module, a second processing module, a third processing module and a generation module. The image acquisition module is used for acquiring an infrared image and a visible light image corresponding to the same scene. The first processing module is used for acquiring a brightness image corresponding to a brightness channel of the visible light image. And the second processing module is used for obtaining a fog weight map according to the visible light image. And the third processing module is used for fusing the infrared image and the brightness image according to the fog weight map to obtain a defogged brightness image. The generation module is used for generating a defogged image according to the defogged brightness image and the color value of the visible light image.
The electronic device of embodiments of the present application includes one or more processors and memory. The memory stores a computer program. The steps of the image processing method according to the above-described embodiment are realized when the computer program is executed by the processor.
The computer-readable storage medium of the present embodiment stores thereon a computer program that, when executed by a processor, implements the steps of the image processing method described in the above embodiment.
According to the image processing method, the image processing apparatus, the electronic device, and the computer-readable storage medium, the infrared image and the visible light image of the same scene are fused by exploiting the fact that infrared light penetrates a haze environment better than visible light, which improves the clarity of the target-scene image, reduces distortion in the defogged image, and improves its imaging quality.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 2 is a schematic view of an electronic device of some embodiments of the present application;
FIG. 3 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 4 is a schematic illustration of a scene of an image processing method according to some embodiments of the present application;
FIG. 5 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 6 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 7 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 8 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 9 is a schematic diagram of a Haar quadtree according to certain embodiments of the present application;
FIG. 10 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 11 is a schematic diagram of a third processing module of certain embodiments of the present application;
FIG. 12 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 13 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 14 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 15 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 16 is a schematic diagram of a second acquisition unit of certain embodiments of the present application;
FIG. 17 is a schematic illustration of a normalized initial fog weight map versus fog weight map according to certain embodiments of the present application;
FIG. 18 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative and are only for the purpose of explaining the present application and are not to be construed as limiting the present application.
In the description of the embodiments of the present application, the terms "first", "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first", "second", may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present application, "a plurality" means two or more unless specifically defined otherwise.
Referring to fig. 1 and fig. 2, an image processing method according to an embodiment of the present application includes:
01: acquiring an infrared image and a visible light image corresponding to the same scene;
02: acquiring a brightness image corresponding to a brightness channel of the visible light image;
03: obtaining a fog weight map according to the visible light image;
04: fusing the infrared image and the brightness image according to the fog weight image to obtain a defogged brightness image;
05: and generating a defogged image according to the defogged brightness image and the color value of the visible light image.
Referring to fig. 2 and 3, an image processing apparatus 30 according to an embodiment of the present disclosure includes an image obtaining module 31, a first processing module 32, a second processing module 33, a third processing module 34, and a generating module 35.
The image processing method of the present application can be implemented by the image processing apparatus 30 of the present embodiment, wherein step 01 can be implemented by the image obtaining module 31, step 02 can be implemented by the first processing module 32, step 03 can be implemented by the second processing module 33, step 04 can be implemented by the third processing module 34, and step 05 can be implemented by the generating module 35, that is, the image obtaining module 31 can be used to obtain the infrared image and the visible light image corresponding to the same scene. The first processing module 32 may be configured to obtain a luminance image corresponding to a luminance channel of the visible light image. The second processing module 33 may be configured to obtain a fog weight map according to the visible light image. The third processing module 34 may be configured to fuse the infrared image and the luminance image according to the fog weight map to obtain a defogged luminance image. The generating module 35 may be configured to generate a defogged image according to the defogged luminance image and the color value of the visible light image.
According to the image processing method, the infrared image and the visible light image of the same scene are fused by exploiting the fact that infrared light penetrates a haze environment better than visible light, which improves the clarity of the target-scene image, reduces distortion in the defogged image, and improves its imaging quality.
In some embodiments, the electronic device 100 may include a smart phone, a tablet computer, a smart watch, a smart bracelet, and the like, which are not limited herein. The electronic device 100 according to the embodiment of the present application is illustrated as a smart phone, and is not to be construed as a limitation to the present application.
The infrared image is an image acquired by an infrared imaging technique, and the infrared rays include near infrared rays and far infrared rays. The infrared image includes only luminance data. The visible light image is a color image including luminance data and color data. It is understood that the luminance image corresponding to the luminance channel of the visible light image includes luminance data of the visible light image, and the color value of the visible light image includes color data of visible light.
There are many ways to acquire an infrared image and a visible light image corresponding to the same scene; for example, the two images may be captured by one camera at the same time, or captured separately by different cameras.
The fog weight map is used to represent the concentration of fog in the image. For convenience of description, the concentration of fog here refers to the concentration of suspended matter such as fog, haze, smoke, water vapor, and water droplets, and in the following description "fog" is used as shorthand for such suspended matter.
It can be understood that, because infrared light has better penetrating ability, the edge features of the infrared image are more prominent than those of the visible light image at the same position in a foggy area. Therefore, the infrared image should be given more weight where the fog density is higher, and the visible light image should be given more weight where the fog density is lower or zero, which yields a better defogging effect. Specifically, referring to fig. 4, the left image is an infrared image, the middle image is a visible light image, and the right image is the fused defogged image.
Referring to fig. 5 and 6, in some embodiments, the image processing method further includes:
06: decomposing the infrared image to obtain an infrared approximate image and an infrared edge image of the infrared image;
07: decomposing the brightness image to obtain a visible light approximate image and a visible light edge image of the brightness image;
step 04 (fusing the infrared image and the brightness image according to the fog weight map to obtain a defogged brightness image) further includes:
41: according to the fog weight map, fusing the infrared approximate image and the visible light approximate image to obtain an approximate image;
42: fusing the infrared edge image and the visible light edge image to obtain an edge image;
43: the approximate image and the edge image are fused to obtain a defogged luminance image.
Referring to fig. 7, in some embodiments, the image processing apparatus 30 further includes a fourth processing module 36 and a fifth processing module 37. Step 06 can be implemented by the fourth processing module 36, and step 07 can be implemented by the fifth processing module 37, that is, the fourth processing module 36 can be configured to decompose the infrared image to obtain an infrared approximation image and an infrared edge image of the infrared image. The fifth processing module 37 may be configured to decompose the luminance image to obtain a visible light approximation image and a visible light edge image of the luminance image. The third processing module 34 may include a first fusion unit 341, a second fusion unit 342, and a third fusion unit 343, step 41 may be implemented by the first fusion unit 341, step 42 may be implemented by the second fusion unit 342, and step 43 may be implemented by the third fusion unit 343, that is, the first fusion unit 341 may be configured to fuse the infrared approximate image and the visible light approximate image according to the fog weight map to obtain the approximate image. The second fusing unit 342 may be configured to fuse the infrared edge image and the visible edge image to obtain an edge image. The third fusing unit 343 may be configured to fuse the approximate image and the edge image to obtain a defogged luminance image.
In this way, the infrared approximate image and the visible light approximate image are fused according to the fog weight map, which enhances the image contrast; the infrared edge image and the visible light edge image are fused, which preserves more texture detail; and finally the approximate image and the edge image are fused to obtain a defogged brightness image with stronger contrast and richer texture detail. In addition, because infrared light penetrates a haze environment better, the edge information is taken from the infrared image, which reduces the probability of artifacts and gives the defogged image a more realistic appearance.
Specifically, the infrared approximate image includes the approximate values of the infrared image, representing the low-frequency components of the infrared image. The infrared edge image includes the detail values of the infrared image, representing the high-frequency components of the infrared image. The visible light approximate image includes the approximate values of the brightness image, representing the low-frequency components of the brightness image. The visible light edge image includes the detail values of the brightness image, representing the high-frequency components of the brightness image.
Further, step 06 (decomposing the infrared image to obtain an infrared approximate image and an infrared edge image of the infrared image) includes:
performing N-level two-dimensional haar wavelet decomposition on the infrared image to obtain N-level infrared approximate components, infrared horizontal components, infrared vertical components and infrared diagonal components, wherein the infrared approximate image comprises the N-level infrared approximate components, the infrared edge image comprises the N-level infrared horizontal components, infrared vertical components and infrared diagonal components, and N is greater than or equal to 1;
step 07 (decomposing the luminance image to obtain a visible light approximation image and a visible light edge image of the luminance image), including:
and performing N-level two-dimensional haar wavelet decomposition on the brightness image to obtain N-level visible light approximate components, visible light horizontal components, visible light vertical components and visible light diagonal components, wherein the visible light approximate image comprises the N-level visible light approximate components, the visible light edge image comprises the N-level visible light horizontal components, the visible light vertical components and the visible light diagonal components, and N is greater than or equal to 1.
Referring to fig. 7, in some embodiments, the fourth processing module 36 may be configured to perform N-level two-dimensional haar wavelet decomposition on the infrared image to obtain N levels of infrared approximate components, an infrared horizontal component, an infrared vertical component, and an infrared diagonal component, where the infrared approximate image includes N levels of infrared approximate components, the infrared edge image includes N levels of infrared horizontal components, infrared vertical components, and infrared diagonal components, and N is greater than or equal to 1. The fifth processing module 37 may be configured to perform N-level two-dimensional haar wavelet decomposition on the luminance image, and obtain N levels of visible light approximate components, a visible light horizontal component, a visible light vertical component, and a visible light diagonal component, where the visible light approximate image includes N levels of visible light approximate components, the visible light edge image includes N levels of visible light horizontal components, visible light vertical components, and visible light diagonal components, and N is greater than or equal to 1.
Therefore, the infrared image and the brightness image can be decomposed, and the infrared approximate image, the infrared edge image, the visible light approximate image and the visible light edge image can be obtained.
Specifically, the haar wavelet is defined by the scaling function φ(t) = 1 for 0 ≤ t < 1 (and 0 otherwise) and the mother wavelet ψ(t) = 1 for 0 ≤ t < 1/2, ψ(t) = −1 for 1/2 ≤ t < 1, and ψ(t) = 0 otherwise. The two-dimensional haar wavelet transform is obtained by applying the one-dimensional transform first along the rows and then along the columns of the image.
Take the N-level two-dimensional haar wavelet decomposition of the infrared image as an example. In the first-level decomposition, haar wavelet decomposition is first performed on each row of the infrared image to obtain the low-frequency and high-frequency components of the infrared image in the horizontal direction, and haar wavelet decomposition is then performed on each column of the resulting data. This yields four components: the component that is low-frequency in both the horizontal and vertical directions, i.e., the first-level infrared approximate component; the component that is low-frequency horizontally and high-frequency vertically, i.e., the first-level infrared vertical component; the component that is high-frequency horizontally and low-frequency vertically, i.e., the first-level infrared horizontal component; and the component that is high-frequency in both directions, i.e., the first-level infrared diagonal component. The second-level haar wavelet decomposition decomposes the first-level infrared approximate component to obtain the second-level infrared approximate, vertical, horizontal, and diagonal components, and so on; the details are not repeated here.
It should be noted that the horizontal direction here means the row direction and the vertical direction means the column direction; these terms refer only to the row and column directions of the image, not to the horizontal and vertical directions in the physical sense. It will be appreciated that the resolution decreases with each level of decomposition, so the higher the level, the lower the resolution.
N may be 1, 2, 3, 4, or another value, adjusted according to factors such as data storage space, image size, and image restoration requirements. In certain embodiments, N is 2 or 3. In this way, the practical engineering requirements are essentially met while storage space is saved and processing speed is increased.
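To make the row-then-column procedure concrete, the following is a minimal numpy sketch of one level of the two-dimensional haar decomposition described above; the function name, the averaging/differencing (non-normalized) form, and the assumption of even image dimensions are illustrative choices, not taken from the patent.

```python
import numpy as np

def haar_decompose_level(img):
    """One level of 2-D haar wavelet decomposition (rows first, then columns).

    Returns the approximate, horizontal, vertical and diagonal components
    described in the text. Assumes img has even height and width.
    """
    img = img.astype(np.float32)
    # Row pass: low/high-frequency components in the horizontal direction.
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Column pass: split each half in the vertical direction.
    approx = (lo[0::2, :] + lo[1::2, :]) / 2.0   # low-H, low-V: approximate component
    vert   = (lo[0::2, :] - lo[1::2, :]) / 2.0   # low-H, high-V: vertical component
    horiz  = (hi[0::2, :] + hi[1::2, :]) / 2.0   # high-H, low-V: horizontal component
    diag   = (hi[0::2, :] - hi[1::2, :]) / 2.0   # high-H, high-V: diagonal component
    return approx, horiz, vert, diag
```

Applying the function again to the returned approximate component gives the second-level components, and so on for N levels.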
Further, referring to fig. 8 and 9, in some embodiments, the image processing method includes:
81: storing the infrared approximate component, the infrared horizontal component, the infrared vertical component, and the infrared diagonal component of each level to an infrared haar quadtree;
82: storing the visible light approximate component, the visible light horizontal component, the visible light vertical component, and the visible light diagonal component of each level to a visible light haar quadtree.
Referring to fig. 7, in some embodiments, step 81 may be implemented by the fourth processing module 36, and step 82 may be implemented by the fifth processing module 37, that is, the fourth processing module 36 may be configured to store the infrared approximation component, the infrared horizontal component, the infrared vertical component, and the infrared diagonal component of each level in the infrared haar quadtree. The fifth processing module 37 may be configured to store the visible light approximation component, the visible light horizontal component, the visible light vertical component, and the visible light diagonal component of each level into a visible light haar quad-tree.
Therefore, the storage space can be saved, and the data can be conveniently called.
Specifically, fig. 9 is a schematic diagram of a haar quadtree. The infrared haar quadtree and the visible light haar quadtree have the same structure and correspond to each other, so that the components of the same level can subsequently be retrieved from the two quadtrees for fusion, and the inverse haar wavelet transform can be performed on the fused components.
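As a rough illustration of how the per-level components might be stored, the sketch below keeps each level's four components in a linked node structure; the class name and fields are assumptions made for this example, and it relies on the `haar_decompose_level` helper sketched earlier.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class HaarQuadTreeNode:
    """One decomposition level of a haar quadtree."""
    approx: np.ndarray                            # approximate component of this level
    horiz: np.ndarray                             # horizontal component
    vert: np.ndarray                              # vertical component
    diag: np.ndarray                              # diagonal component
    child: Optional["HaarQuadTreeNode"] = None    # next (coarser) level, if any

def build_haar_quadtree(img: np.ndarray, levels: int) -> HaarQuadTreeNode:
    """Decompose img for `levels` levels; each level decomposes the previous
    level's approximate component, as described in the text."""
    approx, horiz, vert, diag = haar_decompose_level(img)
    node = HaarQuadTreeNode(approx, horiz, vert, diag)
    if levels > 1:
        node.child = build_haar_quadtree(approx, levels - 1)
    return node
```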
Referring to fig. 10 and 12, in some embodiments, step 42 (fusing the infrared edge image and the visible edge image to obtain the edge image) includes:
421, taking the infrared horizontal component or the visible horizontal component with larger coefficient in each level as the fused horizontal component of the level;
422, taking the infrared vertical component or the visible light vertical component with larger coefficient in each stage as the fused vertical component of the stage;
423, taking the infrared diagonal component or the visible light diagonal component with a larger coefficient in each level as the fused diagonal component of the level;
step 41 (fusing the infrared approximation image and the visible light approximation image to obtain an approximation image according to the fog weight map) includes:
411: weighting the coefficient of the infrared approximate component and the coefficient of the visible light approximate component of each level according to the fog weight map to obtain a fused approximate component of the level;
step 43 (fusing the approximate image and the edge image to obtain a defogged luminance image) includes:
431: when the level does not have the next level, performing inverse haar wavelet on the fused horizontal component of the level, the fused vertical component of the level, the fused diagonal component of the level and the fused approximate component of the level to obtain an output component of the level;
432: when the level has a next level, performing histogram matching on the fused approximate component of the level and the output component of the next level, and performing inverse haar wavelet on the fused approximate component of the level, the fused horizontal component of the level, the fused vertical component of the level and the fused diagonal component of the level after histogram matching to obtain the output component of the level;
433: and obtaining a defogging brightness channel according to the output component of the first stage.
Referring to fig. 11, in some embodiments, the first fusion unit 341 includes a first fusion subunit 3411, the second fusion unit 342 includes a second fusion subunit 3421, a third fusion subunit 3422, and a fourth fusion subunit 3423, and the third fusion unit 343 includes a fifth fusion subunit 3431, a sixth fusion subunit 3432, and a seventh fusion subunit 3433. Step 411 may be implemented by the first fusion subunit 3411, step 421 by the second fusion subunit 3421, step 422 by the third fusion subunit 3422, step 423 by the fourth fusion subunit 3423, step 431 by the fifth fusion subunit 3431, step 432 by the sixth fusion subunit 3432, and step 433 by the seventh fusion subunit 3433. That is, the first fusion subunit 3411 may be configured to weight the coefficient of the infrared approximate component and the coefficient of the visible light approximate component of each level according to the fog weight map to obtain the fused approximate component of the level. The second fusion subunit 3421 may be configured to use the infrared horizontal component or the visible light horizontal component with the larger coefficient in each level as the fused horizontal component of the level. The third fusion subunit 3422 may be configured to use the infrared vertical component or the visible light vertical component with the larger coefficient in each level as the fused vertical component of the level. The fourth fusion subunit 3423 may be configured to use the infrared diagonal component or the visible light diagonal component with the larger coefficient in each level as the fused diagonal component of the level. The fifth fusion subunit 3431 may be configured to, when the level does not have a next level, perform the inverse haar wavelet on the fused horizontal component, the fused vertical component, the fused diagonal component, and the fused approximate component of the level to obtain the output component of the level. The sixth fusion subunit 3432 may be configured to, when the level has a next level, perform histogram matching on the fused approximate component of the level and the output component of the next level, and perform the inverse haar wavelet on the histogram-matched fused approximate component, the fused horizontal component, the fused vertical component, and the fused diagonal component of the level to obtain the output component of the level. The seventh fusion subunit 3433 may be configured to obtain the defogged brightness channel according to the output component of the first level.
In addition, when the level has a next level, i.e., when the level is not the last level, histogram matching is performed on the output component of the next level and the fused approximate component of the level, which effectively eliminates artifacts and increases the realism of the image.
Specifically, fusion starts from the last level of the haar wavelet decomposition, i.e., the Nth level, and the output component of the Nth level is produced once the Nth-level fusion is completed. When N is 1, the defogged brightness channel is obtained directly from this output component. When N is greater than or equal to 2, histogram matching is performed between the output component of the Nth level and the approximate component of the (N-1)th level, the histogram-matched approximate component is fused with the horizontal component, the vertical component, and the diagonal component, and the output component of the (N-1)th level is produced, and so on, until the output component of the first level is obtained, from which the defogged brightness channel is derived.
It will be appreciated that as the number of stages increases, the resolution gradually decreases, so that the last stage is the lowest resolution stage.
In one embodiment, referring to fig. 12, fig. 12 is a schematic diagram illustrating a fusion process at the nth stage. Specifically, the nth level infrared approximate image and the visible light approximate image are fused through the following formula:
fused approximate component = H × NIR + (1 − H) × Y, where H is the coefficient obtained from the fog weight map, NIR is the approximate component of the infrared approximate image, and Y is the approximate component of the visible light approximate image.
Fusing the nth level infrared edge image with the visible edge image may be performed by screening larger components. Specifically, the nth level infrared edge image and the visible edge image are fused through the following formula:
fused detail component = max(Y, NIR), where Y is the corresponding detail component of the visible light edge image (the visible light horizontal, vertical, or diagonal component) and NIR is the corresponding detail component of the infrared edge image (the infrared horizontal, vertical, or diagonal component).
And then, performing inverse haar wavelet transform according to the fused approximate component and the fused other components (horizontal component, vertical component and diagonal component) to obtain a fused output component.
In this way, combining the infrared approximate image and the visible light approximate image according to the fog weight map enhances the image contrast, and selecting the component with the larger coefficient when obtaining the fused horizontal, vertical, and diagonal components preserves more texture detail, achieving a better defogging effect. It will be appreciated that the larger the coefficient of a component, the more texture detail it carries.
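The per-level fusion of fig. 12 can be sketched as follows; the dictionary layout, the use of max() exactly as in the formulas above, and the non-normalized inverse transform matching the earlier decomposition sketch are all assumptions made for illustration.

```python
import numpy as np

def fuse_level(nir, vis, H):
    """Fuse one decomposition level.

    nir and vis are dicts with keys 'approx', 'horiz', 'vert', 'diag' holding
    the infrared and visible-light components of this level; H is the target
    fog weight map resized to this level (values in [0, 1]).
    """
    # Approximate component: weighted blend, more infrared where fog is denser.
    fused = {"approx": H * nir["approx"] + (1.0 - H) * vis["approx"]}
    for key in ("horiz", "vert", "diag"):
        # Detail components: keep the larger coefficient, max(Y, NIR).
        fused[key] = np.maximum(vis[key], nir[key])
    return fused

def inverse_haar_level(approx, horiz, vert, diag):
    """Invert one level of the averaging/differencing haar transform above."""
    h, w = approx.shape
    lo = np.empty((2 * h, w), dtype=np.float32)
    hi = np.empty((2 * h, w), dtype=np.float32)
    lo[0::2, :], lo[1::2, :] = approx + vert, approx - vert
    hi[0::2, :], hi[1::2, :] = horiz + diag, horiz - diag
    out = np.empty((2 * h, 2 * w), dtype=np.float32)
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out
```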
Further, step 433 (obtaining a defogged luminance channel according to the output component of the first stage) includes:
and performing histogram matching on the output component of the first stage and the brightness image to obtain a defogged brightness channel.
In some embodiments, the above steps may be implemented by the seventh fusion subunit 3433, that is, the seventh fusion subunit 3433 may be configured to perform histogram matching on the output component of the first stage and the luminance image to obtain a defogged luminance channel.
Therefore, histogram matching is carried out on the output component of the first level and the brightness image, artifacts in the image can be further eliminated, and a relatively real defogged image is obtained.
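The patent does not spell out a histogram-matching formula, so the sketch below uses standard cumulative-histogram matching as an assumed implementation.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source so its intensity distribution matches reference's.

    Used here to pull the fused output back toward the statistics of the
    original luminance image, suppressing artifacts.
    """
    src_vals, src_idx, src_counts = np.unique(source.ravel(),
                                              return_inverse=True,
                                              return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference value at that quantile.
    matched = np.interp(src_cdf, ref_cdf, ref_vals)
    return matched[src_idx].reshape(source.shape)
```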
In some embodiments, step 433 (deriving the defogged luminance channels based on the output components of the first stage) includes:
and according to the fog weight map, calculating a weighted average of the output component of the first level and the brightness image to obtain a defogging brightness channel.
In some embodiments, the above steps may be implemented by the seventh fusing sub-unit 3433, that is, the seventh fusing sub-unit 3433 may be configured to calculate a weighted average of the output components of the first stage and the luminance image according to the fog weight map to obtain the fog-removed luminance channel.
In this way, compensation is introduced selectively and part of the haze is retained, which avoids the unnatural appearance that results from removing the haze completely.
In some embodiments, the image processing method further comprises:
and carrying out corresponding size scaling on the fog weight graph according to the series to obtain a target fog weight graph corresponding to each stage.
In some embodiments, the image processing apparatus 30 further includes a sixth processing module, and the foregoing steps may be implemented by the sixth processing module, that is, the sixth processing module may be configured to perform corresponding size scaling on the fog weight map according to the number of stages to obtain a target fog weight map corresponding to each stage.
In this way, the weight corresponding to the fusion of the infrared approximate component and the visible light approximate component of each level can be obtained.
Specifically, referring to fig. 12, H shown in fig. 12 is a numerical value of the target fog weight map corresponding to the nth level.
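Because each haar level halves the resolution, obtaining the target fog weight map for each level can be as simple as downscaling the fog weight map by the matching factor; the use of OpenCV's resize and bilinear interpolation here is an assumption.

```python
import cv2

def fog_weights_per_level(fog_weight, levels):
    """Return the target fog weight map for each level, level 1 first."""
    h, w = fog_weight.shape
    maps = []
    for level in range(1, levels + 1):
        size = (w >> level, h >> level)  # (width, height) at this level
        maps.append(cv2.resize(fog_weight, size, interpolation=cv2.INTER_LINEAR))
    return maps
```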
Referring to fig. 13, in some embodiments, the visible light image is an RGB image, and the image processing method includes:
converting the visible light image into a YUV image, wherein the brightness image is the image corresponding to the Y channel;
step 03 (obtaining a fog weight map from the visible light image) includes:
31: acquiring a blue channel image of the visible light image;
32: and obtaining a fog weight map according to the blue channel image and the brightness image.
Referring to fig. 14, in some embodiments, the image processing apparatus 30 includes a conversion module 39, and the second processing module 33 includes a first obtaining unit 331 and a second obtaining unit 332. The conversion module 39 can be used to convert the visible light image into a YUV image, and the luminance image is an image corresponding to the Y channel. The first acquisition unit 331 may be used to acquire a blue channel image of the visible light image. The second obtaining unit 332 may be configured to obtain a fog weight map according to the blue channel image and the luminance image.
In this way, the fog weight map can be obtained from the blue channel image and the luminance image.
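A short sketch of this channel-preparation step, assuming the visible-light frame is an 8-bit array in RGB channel order and using OpenCV for the color-space conversion:

```python
import cv2
import numpy as np

def prepare_channels(rgb: np.ndarray):
    """Split an RGB visible-light image into the luminance image (Y),
    the chroma planes (U, V) and the blue channel image."""
    yuv = cv2.cvtColor(rgb, cv2.COLOR_RGB2YUV)
    y, u, v = cv2.split(yuv)
    blue = rgb[:, :, 2]            # blue plane of the RGB array
    return y, (u, v), blue
```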
Specifically, referring to fig. 15, the step 32 (obtaining the fog weight map according to the blue channel image and the luminance image) includes:
321: obtaining an initial fog weight map according to the blue light channel image and the brightness image;
322: and performing nonlinear stretching on the normalized initial fog weight map to obtain a fog weight map.
Referring to fig. 16, in some embodiments, the second obtaining unit 332 includes an initial sub-unit 3321 and a stretching sub-unit 3322, where step 321 is implemented by the initial sub-unit 3321 and step 322 is implemented by the stretching sub-unit 3322, that is, the initial sub-unit 3321 is configured to obtain an initial fog weight map according to the blue light channel image and the luminance image, and the stretching sub-unit 3322 is configured to non-linearly stretch the normalized initial fog weight map to obtain the fog weight map.
Thus, the defogged image can be made to look more natural.
Specifically, from the blue channel image and the brightness image, the initial fog weight map is obtained by the following formula:
H0 = min(B, |Y − B|)
where H0 represents the initial fog weight map, B represents the blue channel image, and Y represents the brightness image.
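This formula translates directly into code; in the sketch below both planes are assumed to be normalized to [0, 1] before the computation.

```python
import numpy as np

def initial_fog_weight(blue, luma):
    """H0 = min(B, |Y - B|): bright, weakly coloured (hazy) regions receive
    large weights, dark or strongly coloured regions receive small ones."""
    blue = blue.astype(np.float32)
    luma = luma.astype(np.float32)
    return np.minimum(blue, np.abs(luma - blue))
```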
It should be noted that there are many ways to perform the non-linear stretching on the normalized initial fog weight map, which can be adjusted according to factors such as the characteristics of the image capturing device and the experimental results, and the invention is not limited in this respect.
Further, in certain embodiments, step 322 (non-linearly stretching the normalized initial fog weight map to obtain a fog weight map) comprises:
when the value of the normalized initial fog weight map is smaller than a first threshold value, the value of the corresponding fog weight map is zero;
when the value of the normalized initial fog weight map is greater than a first threshold value, the value of the corresponding fog weight map increases non-linearly with increasing value of the normalized initial fog weight map.
In this way, the value of the fog weight map smaller than the first threshold value is stretched to zero, the visual effect of the non-fog area can be reserved, and the fog weight map larger than the first threshold value is stretched in a non-linear mode, so that the image is more natural.
Specifically, the fog weight map indicates the fog density. It can be understood that positions with higher fog density require more defogging processing, while fog-free regions or positions with low fog density may not need to be processed. Therefore, when the value of the normalized initial fog weight map is smaller than the first threshold, the value of the fog weight map is set to zero, which reduces processing at positions with little fog and preserves the visual effect of the fog-free regions.
It should be noted that the first threshold may be a value such as 0.1, 0.2, 0.3, 0.4, etc., and the specific value of the first threshold is adjusted according to the requirement for the defogging effect, the simulation experiment, etc., and is not limited herein.
In one embodiment, referring to FIG. 17, the normalized initial fog weight map is non-linearly stretched according to the equation illustrated in FIG. 17. In FIG. 17, the abscissa represents the normalized initial fog weight map and the ordinate represents the resulting fog weight map H.
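The exact stretching curve is given in the original document only as FIG. 17, so the sketch below substitutes an illustrative power-law stretch with the same qualitative behaviour (zero below the first threshold, nonlinear growth above it); the threshold and exponent values are assumptions.

```python
import numpy as np

def stretch_fog_weight(h0_norm, threshold=0.2, gamma=0.5):
    """Nonlinearly stretch the normalized initial fog weight map h0_norm.

    Values below `threshold` become zero (clear regions are left untouched);
    values above it grow nonlinearly toward 1.
    """
    out = np.zeros_like(h0_norm, dtype=np.float32)
    above = h0_norm > threshold
    out[above] = ((h0_norm[above] - threshold) / (1.0 - threshold)) ** gamma
    return out
```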
In summary, the image processing method of the present application fuses the color information of the visible light image with the edge information of the infrared image, which improves the clarity of the fused defogged image, ensures that the output defogged image has good color and edge characteristics, further reduces the probability of artifacts, and gives the recovered defogged image a more realistic appearance.
In some embodiments, the defogged image may be generated according to the fog weight map, the defogging brightness image, and the color value of the visible light image. It can be understood that, at a position where the fog density is large, the color of the image and the edge information of the image are also affected to some extent, so that when the defogged image is generated, the color value and the defogging brightness image can be weighted according to the fog weight map to better balance the color distribution and the scene details.
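Recombining the defogged luminance with the original chroma planes and converting back to RGB might look as follows; keeping the chroma unchanged (rather than reweighting it by the fog weight map) is a simplification of the option described above.

```python
import cv2
import numpy as np

def compose_defogged(defogged_y, uv):
    """Merge the defogged luminance with the original U and V planes and
    convert the result back to an RGB defogged image."""
    u, v = uv
    y8 = np.clip(defogged_y, 0, 255).astype(np.uint8)
    yuv = cv2.merge([y8, u, v])
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB)
```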
It is worth noting that the present application can also perform noise reduction and fusion on multiple frames of infrared images and visible light images, introducing more texture and further improving the imaging quality in scenes such as night and haze.
In an embodiment, referring to fig. 18, fig. 18 is a flowchart of an image processing method according to the present invention, in which NIR represents an infrared image, RGB represents a visible light image, YUV represents a YUV image, Y represents an image corresponding to a Y channel, i.e., a luminance image, UV represents color information in the YUV image, and B represents a blue channel image of the visible light image.
The image processing method comprises the following steps:
acquiring an infrared image and a visible light image, wherein the visible light image is an RGB image;
obtaining a blue channel image according to the RGB image;
converting the RGB image into a YUV image, acquiring a brightness image according to an image corresponding to a Y channel and acquiring a color value of a corresponding visible light image according to a UV channel;
obtaining haar quadtrees according to the infrared image and the brightness image, including an infrared haar quadtree in which the infrared approximate component, the infrared horizontal component, the infrared vertical component, and the infrared diagonal component of each level are stored, and a visible light haar quadtree in which the visible light approximate component, the visible light horizontal component, the visible light vertical component, and the visible light diagonal component of each level are stored;
obtaining a fog weight map according to the blue channel image and the brightness image;
fusing the infrared haar quadtree and the visible light haar quadtree according to the fog weight map, and computing a weighted average of the fused result and the brightness image according to the fog weight map to generate a defogged brightness image;
and combining the defogged brightness image with the color value channels of the visible light image and converting the result to obtain the defogged image in RGB format (an end-to-end code sketch of this flow is given below).
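Putting the helper sketches above together, an end-to-end version of this flow could look roughly like the following; the two-level default, the normalization steps, and the direction of the histogram matching are assumptions, `nir` is assumed to be a single-channel infrared frame the same size as the RGB frame, and image dimensions are assumed divisible by 2 to the power of the number of levels.

```python
import numpy as np

def build_pyramid(img, levels):
    """Flat-list equivalent of the haar quadtree sketched earlier."""
    bands, approx = [], img.astype(np.float32)
    for _ in range(levels):
        approx, horiz, vert, diag = haar_decompose_level(approx)
        bands.append({"approx": approx, "horiz": horiz, "vert": vert, "diag": diag})
    return bands

def defog(rgb, nir, levels=2, threshold=0.2):
    """End-to-end defogging sketch following the flow of fig. 18."""
    y, uv, blue = prepare_channels(rgb)

    # Fog weight map from the blue channel image and the luminance image.
    h0 = initial_fog_weight(blue / 255.0, y / 255.0)
    fog = stretch_fog_weight(h0 / max(h0.max(), 1e-6), threshold)
    fog_levels = fog_weights_per_level(fog, levels)

    nir_tree = build_pyramid(nir, levels)
    vis_tree = build_pyramid(y, levels)

    # Fuse from the coarsest (last) level back up to the first level.
    out = None
    for i in range(levels - 1, -1, -1):
        fused = fuse_level(nir_tree[i], vis_tree[i], fog_levels[i])
        approx = fused["approx"] if out is None else match_histogram(out, fused["approx"])
        out = inverse_haar_level(approx, fused["horiz"], fused["vert"], fused["diag"])

    # First-level output matched against the original luminance image,
    # then recombined with the chroma planes.
    defogged_y = match_histogram(out, y.astype(np.float32))
    return compose_defogged(defogged_y, uv)
```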
Referring to fig. 2, the image processing method according to the embodiment of the present disclosure can be implemented by the electronic device 100 according to the embodiment of the present disclosure. In particular, the electronic device 100 includes one or more processors 50 and memory 40. The memory 40 stores a computer program. The steps of the image processing method according to any of the above embodiments are implemented when the computer program is executed by the processor 50.
For example, in the case where the computer program is executed by the processor 50, the steps of the following image processing method are implemented:
01: acquiring an infrared image and a visible light image corresponding to the same scene;
02: acquiring a brightness image corresponding to a brightness channel of the visible light image;
03: obtaining a fog weight map according to the visible light image;
04: fusing the infrared image and the brightness image according to the fog weight image to obtain a defogged brightness image;
05: and generating a defogged image according to the defogged brightness image and the color value of the visible light image.
The computer-readable storage medium of the embodiments of the present application stores thereon a computer program that, when executed by a processor, implements the steps of the image processing method of any of the embodiments described above.
For example, 01: acquiring an infrared image and a visible light image corresponding to the same scene;
02: acquiring a brightness image corresponding to a brightness channel of the visible light image;
03: obtaining a fog weight map according to the visible light image;
04: fusing the infrared image and the brightness image according to the fog weight image to obtain a defogged brightness image;
05: and generating a defogged image according to the defogged brightness image and the color value of the visible light image.
It will be appreciated that the computer program comprises computer program code. The computer program code may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), a software distribution medium, and the like. The processor may be a central processing unit, another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (14)

1. An image processing method, characterized in that the image processing method comprises:
acquiring an infrared image and a visible light image corresponding to the same scene;
acquiring a brightness image corresponding to a brightness channel of the visible light image;
obtaining a fog weight map according to the visible light image;
fusing the infrared image and the brightness image according to the fog weight map to obtain a defogged brightness image;
and generating a defogged image according to the defogged brightness image and the color value of the visible light image.
2. The image processing method according to claim 1, characterized in that the image processing method further comprises:
decomposing the infrared image to obtain an infrared approximate image and an infrared edge image of the infrared image;
decomposing the brightness image to obtain a visible light approximate image and a visible light edge image of the brightness image;
and the fusing the infrared image and the brightness image according to the fog weight map to obtain a defogged brightness image comprises:
according to the fog weight map, fusing the infrared approximate image and the visible light approximate image to obtain an approximate image;
fusing the infrared edge image and the visible edge image to obtain an edge image;
and fusing the approximate image and the edge image to obtain the defogged brightness image.
3. The image processing method according to claim 2, wherein the decomposing the infrared image to obtain an infrared approximate image and an infrared edge image of the infrared image comprises:
performing N-level haar wavelet decomposition on the infrared image to acquire N-level infrared approximate components, infrared horizontal components, infrared vertical components and infrared diagonal components, wherein the infrared approximate image comprises the N-level infrared approximate components, the infrared edge image comprises the N-level infrared horizontal components, infrared vertical components and infrared diagonal components, and N is greater than or equal to 1;
decomposing the brightness image to obtain a visible light approximate image and a visible light edge image of the brightness image, including:
and carrying out N-level haar wavelet decomposition on the brightness image to obtain N-level visible light approximate components, visible light horizontal components, visible light vertical components and visible light diagonal components, wherein the visible light approximate image comprises the N-level visible light approximate components, the visible light edge image comprises the N-level visible light horizontal components, the N-level visible light vertical components and the N-level visible light diagonal components, and N is greater than or equal to 1.
4. The image processing method according to claim 3, characterized in that the image processing method further comprises:
storing the infrared approximation component, the infrared horizontal component, the infrared vertical component, and the infrared diagonal component of each level to an infrared haar quadtree;
storing the visible light approximate component, the visible light horizontal component, the visible light vertical component, and the visible light diagonal component for each level to a visible light haar quadtree.
5. The image processing method according to claim 2, wherein the fusing the infrared edge image and the visible edge image to obtain an edge image comprises:
taking the infrared horizontal component or the visible light horizontal component with larger coefficient in each level as the fused horizontal component of the level;
taking the infrared vertical component or the visible light vertical component with a larger coefficient in each level as a fused vertical component of the level;
taking the infrared diagonal component or the visible light diagonal component with a larger coefficient in each level as a fused diagonal component of the level;
the fusing the infrared approximate image and the visible light approximate image according to the fog weight map to obtain an approximate image, comprising:
weighting the coefficient of the infrared approximate component and the coefficient of the visible light approximate component of each level according to the fog weight map to obtain a fused approximate component of the level;
the fusing the approximate image and the edge image to obtain the defogged brightness image comprises:
when the level does not have the next level, performing inverse haar wavelet on the fused horizontal component of the level, the fused vertical component of the level, the fused diagonal component of the level and the fused approximate component of the level to obtain an output component of the level;
when the level has a next level, performing histogram matching on the fused approximate component of the level and the output component of the next level, and performing inverse haar wavelet on the fused approximate component of the level, the fused horizontal component of the level, the fused vertical component of the level and the fused diagonal component of the level after histogram matching to obtain the output component of the level;
and obtaining the defogging brightness channel according to the output component of the first stage.
6. The image processing method according to claim 5, wherein the obtaining the defogging brightness channel according to the output component of the first stage comprises:
and performing histogram matching on the output component of the first stage and the brightness image to obtain the defogging brightness channel.
7. The image processing method according to claim 5 or 6, wherein the obtaining the defogging brightness channel according to the output component of the first stage comprises:
and according to the fog weight map, calculating a weighted average of the output component of the first stage and the brightness image to obtain a defogging brightness channel.
8. The image processing method according to claim 5, characterized in that the image processing method further comprises:
and scaling the fog weight map to the corresponding size according to the number of levels to obtain a target fog weight map corresponding to each level.
9. The image processing method according to claim 1, wherein the visible light image is an RGB image, the image processing method comprising:
converting the visible light image into a YUV image, wherein the brightness image is an image corresponding to a Y channel,
and the obtaining a fog weight map according to the visible light image comprises:
acquiring a blue channel image of the visible light image;
and obtaining the fog weight map according to the blue channel image and the brightness image.
10. The image processing method according to claim 9, wherein the obtaining the fog weight map according to the blue channel image and the brightness image comprises:
obtaining an initial fog weight map according to the blue light channel image and the brightness image;
and performing nonlinear stretching on the normalized initial fog weight map to obtain the fog weight map.
11. The image processing method according to claim 10, wherein the performing nonlinear stretching on the normalized initial fog weight map to obtain the fog weight map comprises:
when a value of the normalized initial fog weight map is smaller than a first threshold, the corresponding value of the fog weight map is zero;
when the value of the normalized initial fog weight map is greater than the first threshold, the corresponding value of the fog weight map increases nonlinearly as the value of the normalized initial fog weight map increases.
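Claims 10 and 11 together describe a normalization followed by a thresholded, nonlinear stretch. The sketch below is one way to realise that behaviour; the first threshold value and the power curve are illustrative choices, not values taken from the patent.

```python
import numpy as np

def stretch_fog_weight(initial_w, threshold=0.2, gamma=2.0):
    # normalize the initial fog weight map to [0, 1]
    norm = (initial_w - initial_w.min()) / (initial_w.max() - initial_w.min() + 1e-6)
    fog_w = np.zeros_like(norm)          # values below the first threshold stay zero
    above = norm > threshold
    # above the threshold, grow nonlinearly with the normalized value
    fog_w[above] = ((norm[above] - threshold) / (1.0 - threshold)) ** gamma
    return fog_w
```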
12. An image processing apparatus, characterized in that the apparatus comprises:
an image acquisition module configured to acquire an infrared image and a visible light image corresponding to the same scene;
a first processing module configured to acquire a brightness image corresponding to a brightness channel of the visible light image;
a second processing module configured to obtain a fog weight map according to the visible light image;
a third processing module configured to fuse the infrared image and the brightness image according to the fog weight map to obtain a defogged brightness image;
and a generating module configured to generate a defogged image according to the defogged brightness image and a color value of the visible light image.
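For orientation only, the modules of this claim can be wired together as below, reusing the functions sketched after the earlier claims (fog_weight_inputs, stretch_fog_weight, fog_weights_per_level, fuse_luma_ir, match_histograms); every name and the three-level setting are assumptions, not part of the claimed apparatus.

```python
import cv2
import numpy as np

class ImageProcessingApparatus:
    def process(self, ir_img, rgb_img):
        # ir_img: float32 infrared image in [0, 1], registered to rgb_img (uint8 RGB)
        luma, blue, initial_w = fog_weight_inputs(rgb_img)        # first + second processing modules
        fog_w = stretch_fog_weight(initial_w)
        levels = 3
        fused = fuse_luma_ir(luma, ir_img,
                             fog_weights_per_level(fog_w, levels), levels)
        defogged_luma = match_histograms(fused, luma)             # third processing module
        # generating module: keep the visible-light colour, swap in the defogged brightness
        yuv = cv2.cvtColor(rgb_img, cv2.COLOR_RGB2YUV)
        yuv[..., 0] = np.clip(defogged_luma * 255.0, 0, 255).astype(np.uint8)
        return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB)
```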
13. An electronic device, characterized in that the electronic device comprises one or more processors and a memory, the memory storing a computer program which, when executed by the processors, implements the steps of the image processing method of any one of claims 1-11.
14. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 11.
CN202210302825.3A 2022-03-24 2022-03-24 Image processing method, image processing apparatus, electronic device, and storage medium Pending CN114663311A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210302825.3A CN114663311A (en) 2022-03-24 2022-03-24 Image processing method, image processing apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210302825.3A CN114663311A (en) 2022-03-24 2022-03-24 Image processing method, image processing apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
CN114663311A true CN114663311A (en) 2022-06-24

Family

ID=82031716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210302825.3A Pending CN114663311A (en) 2022-03-24 2022-03-24 Image processing method, image processing apparatus, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN114663311A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09326024A (en) * 1996-06-06 1997-12-16 Matsushita Electric Ind Co Ltd Picture coding and decoding method and its device
CN107438170A (en) * 2016-05-25 2017-12-05 杭州海康威视数字技术股份有限公司 A kind of image Penetrating Fog method and the image capture device for realizing image Penetrating Fog
CN107277369A (en) * 2017-07-27 2017-10-20 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and computer equipment
KR20190076188A (en) * 2017-12-22 2019-07-02 에이스웨이브텍(주) Fusion dual IR camera and image fusion algorithm using LWIR and SWIR
CN110163804A (en) * 2018-06-05 2019-08-23 腾讯科技(深圳)有限公司 Image defogging method, device, computer equipment and storage medium
CN112419162A (en) * 2019-08-20 2021-02-26 浙江宇视科技有限公司 Image defogging method and device, electronic equipment and readable storage medium
CN111080568A (en) * 2019-12-13 2020-04-28 兰州交通大学 Tetrolet transform-based near-infrared and color visible light image fusion algorithm
WO2021184028A1 (en) * 2020-11-12 2021-09-16 Innopeak Technology, Inc. Dehazing using localized auto white balance
CN112991218A (en) * 2021-03-23 2021-06-18 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN114119436A (en) * 2021-10-08 2022-03-01 中国安全生产科学研究院 Infrared image and visible light image fusion method and device, electronic equipment and medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI YANG: "Fusion of visible light and infrared images based on IHS and wavelet transform", 《智能***学报》, vol. 7, no. 6, 31 December 2012 (2012-12-31), pages 554-559 *
CHENG PENG: "Image dehazing method based on near infrared", 《工程科学与技术》, vol. 45, no. 02, 1 July 2013 (2013-07-01), pages 155-159 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination