WO2020051897A1 - Image fusion method, system, electronic device and computer-readable medium - Google Patents


Info

Publication number
WO2020051897A1
WO2020051897A1 · PCT/CN2018/105792
Authority
WO
WIPO (PCT)
Prior art keywords
image
visible light
frequency component
light image
component
Prior art date
Application number
PCT/CN2018/105792
Other languages
English (en)
French (fr)
Inventor
孙岳
张娅楠
Original Assignee
浙江宇视科技有限公司
Priority date
Filing date
Publication date
Application filed by 浙江宇视科技有限公司
Priority to PCT/CN2018/105792
Publication of WO2020051897A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Definitions

  • The present application relates to the technical field of image processing, and in particular, to an image fusion method, system, electronic device, and computer-readable medium.
  • The existing image fusion technology simply performs a weighted or layered fusion of the visible light image and the infrared image, resulting in problems such as color distortion and blurred edge details in the fused image.
  • Moreover, the existing fusion technology does not consider the problem that the structural information of the visible light image and the infrared image may be inconsistent (for example, in shadows and highlights), which causes the fused image to introduce redundant edge information and degrades the visual effect.
  • In short, the quality of the fused image obtained by the existing image fusion method is poor, and the visual effect is not good.
  • The purpose of this application is to provide an image fusion method, system, electronic device, and computer-readable medium, to alleviate the technical problems of poor fused-image quality and poor visual effect in the existing image fusion methods.
  • According to a first aspect, an embodiment of the present application provides an image fusion method, which is applied to an image processor, and includes: extracting, by the image processor, a chrominance component, a low-frequency component, and a high-frequency component of a visible light image, and extracting a low-frequency component and a high-frequency component of an infrared image, wherein the visible light image and the infrared image correspond to the same real scene; performing, by the image processor, enhancement processing on the chrominance component of the visible light image and the low-frequency component of the visible light image by using the low-frequency component of the infrared image, to obtain an enhanced chrominance component and an enhanced low-frequency component of the visible light image; fusing, by the image processor, the high-frequency component of the infrared image and the high-frequency component of the visible light image to obtain a target high-frequency component; and combining, by the image processor, the enhanced chrominance component and low-frequency component of the visible light image with the target high-frequency component to obtain a fused image.
  • The embodiment of the present application provides a first possible implementation manner of the first aspect, wherein extracting the chrominance component, the low-frequency component, and the high-frequency component of the visible light image by the image processor includes: performing color gamut conversion on the visible light image to separate the initial chrominance component and the initial luminance component of the visible light image; performing noise reduction processing on the initial chrominance component to obtain the chrominance component of the visible light image; and performing layered noise reduction processing on the initial luminance component of the visible light image to obtain the low-frequency component and the high-frequency component of the visible light image.
  • The embodiment of the present application provides a second possible implementation manner of the first aspect, wherein the image processor performing layered noise reduction processing on the initial luminance component of the visible light image to obtain the low-frequency component and the high-frequency component of the visible light image includes: performing layered noise reduction processing on the initial luminance component of the visible light image through a target filtering algorithm, wherein the target filtering algorithm includes a linear filtering algorithm or a non-linear filtering algorithm.
  • The embodiment of the present application provides a third possible implementation manner of the first aspect, wherein, if the target filtering algorithm is a non-linear filtering algorithm, performing layered noise reduction processing on the initial luminance component of the visible light image through the target filtering algorithm includes: performing layered noise reduction processing on the initial luminance component of the visible light image by using a first formula and a second formula, to obtain the low-frequency component and the high-frequency component of the visible light image.
  • The first formula computes the low-frequency component as a weighted average, visY_low(u) = Σ_v w(u, v) · visY(v).
  • The second formula gives w(u, v), the filtering weight of the non-linear filtering algorithm: w(u, v) = exp(−‖N(u) − N(v)‖² / h²), normalized so that the weights sum to 1, where N(u) is the current data block centered on the target point u, N(v) is the reference data block centered on the target point v, the target points u and v are any pixel points in the luminance component of the visible light image, and h is a preset filter intensity control parameter.
  • Extracting the low-frequency component and the high-frequency component of the infrared image includes: performing layered processing on the infrared image through a target filtering algorithm, to extract the low-frequency component and the high-frequency component of the infrared image from the infrared image.
  • The embodiment of the present application provides a fifth possible implementation manner of the first aspect, wherein the image processor fusing the high-frequency component of the infrared image and the high-frequency component of the visible light image to obtain the target high-frequency component includes: using a fusion algorithm based on the structural similarity between the visible light image and the infrared image to fuse the high-frequency component of the infrared image and the high-frequency component of the visible light image, to obtain the target high-frequency component.
  • The embodiment of the present application provides a sixth possible implementation manner of the first aspect, wherein using a fusion algorithm based on the structural similarity between the visible light image and the infrared image to fuse the high-frequency component of the infrared image and the high-frequency component of the visible light image includes: calculating a structural similarity measurement value between corresponding pixel points of the visible light image and the infrared image; and using the structural similarity measurement value to fuse the high-frequency component of the infrared image and the high-frequency component of the visible light image.
  • The embodiment of the present application provides a seventh possible implementation manner of the first aspect, wherein calculating the structural similarity measurement value between corresponding pixel points of the visible light image and the infrared image includes: determining a target sliding window; traversing each pixel point of the visible light image and the infrared image through the target sliding window, and obtaining the center position of the window when the target sliding window is traversed to any position; taking the pixel points at the center of the window in the visible light image and in the infrared image as corresponding pixel points; and calculating the structural similarity measurement value between the corresponding pixel points of the visible light image and the infrared image, where μ_vis, μ_ir, σ_vis, and σ_ir are respectively the means and standard deviations of the pixels within the target sliding window in the visible light image and in the infrared image, and C1 and C2 are preset constants.
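The similarity formula itself survives in this text only as its symbol definitions, so the sketch below computes the per-window means and standard deviations named above and combines them in a standard SSIM-style form; that form, along with the function name, window size, and the constants C1 and C2, is our assumption for illustration, not the patent's literal formula.

```python
import numpy as np

def structural_similarity_map(vis, ir, win=7, C1=1e-4, C2=9e-4):
    """Slide a win x win window over both images; at each center position,
    compute the means and standard deviations of the two windows and combine
    them into a similarity value in a standard SSIM-like form."""
    h, w = vis.shape
    r = win // 2
    out = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            a = vis[i - r:i + r + 1, j - r:j + r + 1]
            b = ir[i - r:i + r + 1, j - r:j + r + 1]
            mu_v, mu_i = a.mean(), b.mean()
            sd_v, sd_i = a.std(), b.std()
            out[i, j] = ((2 * mu_v * mu_i + C1) * (2 * sd_v * sd_i + C2)) / \
                        ((mu_v**2 + mu_i**2 + C1) * (sd_v**2 + sd_i**2 + C2))
    return out
```

Identical windows yield a similarity of 1, and the value decreases as the window statistics diverge, which matches the role the measurement value plays in the fusion step.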
  • The embodiment of the present application provides an eighth possible implementation manner of the first aspect, wherein fusing the high-frequency component of the infrared image and the high-frequency component of the visible light image by using the structural similarity measurement value includes: fusing the high-frequency component of the infrared image and the high-frequency component of the visible light image according to a piecewise fusion formula, where H_comb, H_vis, and H_ir are the high-frequency components of the fused image, the visible light image, and the infrared image, respectively, th1 and th2 are preset structural similarity thresholds with th1 < th2, and ω is the fusion weight of the high-frequency component of the infrared image.
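The fusion formula is not reproduced in this text, only its symbols. One plausible form consistent with those symbols and with the stated goals (keep H_vis where similarity S(u) is low, so no false edges are introduced; blend with weight ω where similarity is high; transition between th1 and th2) is the following reconstruction, offered as an assumption rather than the patent's literal formula:

```latex
H_{comb}(u) =
\begin{cases}
H_{vis}(u), & S(u) < th_1,\\[4pt]
\lambda(u)\,\omega\, H_{ir}(u) + \bigl(1-\lambda(u)\,\omega\bigr)\, H_{vis}(u), & th_1 \le S(u) < th_2,\\[4pt]
\omega\, H_{ir}(u) + (1-\omega)\, H_{vis}(u), & S(u) \ge th_2,
\end{cases}
\qquad \lambda(u) = \frac{S(u)-th_1}{th_2-th_1}.
```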
  • The embodiment of the present application provides a ninth possible implementation manner of the first aspect, wherein the image processor performing enhancement processing on the chrominance component of the visible light image and the low-frequency component of the visible light image by using the low-frequency component of the infrared image includes: performing color gamut conversion on the visible light image to convert the visible light image to an RGB color gamut space; combining the visible light image converted to the RGB color gamut space and the infrared image to construct a target image matrix, and calculating the Jacobian matrix of each element in the target image matrix; performing data dimensionality reduction processing on the Jacobian matrix of each element to obtain a gradient vector of the converted visible light image at each element, thereby obtaining a gradient matrix; performing gradient-domain image reconstruction on the gradient matrix to obtain an enhanced visible light image; and obtaining the enhanced chrominance component and low-frequency component based on the enhanced visible light image.
  • The embodiment of the present application provides a tenth possible implementation manner of the first aspect, wherein obtaining the enhanced chrominance component and low-frequency component based on the enhanced visible light image includes: performing color gamut transformation on the enhanced visible light image, and separating the enhanced chrominance component and the enhanced low-frequency component.
  • According to a second aspect, an embodiment of the present application further provides an image fusion system, including a camera device, an image processor, and a data communication device, wherein the camera device is connected to the image processor through the data communication device.
  • The camera device is configured to transmit a visible light image and an infrared image to the image processor through the data communication device.
  • The image processor is configured to: extract a chrominance component, a low-frequency component, and a high-frequency component of the visible light image, and extract a low-frequency component and a high-frequency component of the infrared image, wherein the visible light image and the infrared image correspond to the same real scene; perform enhancement processing on the chrominance component of the visible light image and the low-frequency component of the visible light image through the low-frequency component of the infrared image to obtain an enhanced chrominance component and low-frequency component of the visible light image; fuse the high-frequency component of the infrared image and the high-frequency component of the visible light image to obtain a target high-frequency component; and combine the enhanced chrominance component and low-frequency component of the visible light image with the target high-frequency component to obtain a fused image.
  • The embodiment of the present application provides a first possible implementation manner of the second aspect, wherein the data communication device includes a wired communication device and/or a wireless communication device.
  • According to a third aspect, an embodiment of the present application provides an electronic device, including a memory, an image processor, and a computer program stored in the memory and executable on the image processor, wherein the image processor, when executing the computer program, implements the method according to any one of the first aspect.
  • According to a fourth aspect, an embodiment of the present application provides a computer storage medium on which a computer program is stored, wherein the computer program, when run, performs the steps of the method according to any one of the first aspect.
  • In the embodiments of the present application, the chrominance component, the low-frequency component, and the high-frequency component of the visible light image are first extracted by the image processor, and the low-frequency component and the high-frequency component of the infrared image are extracted. Then, the chrominance component and the low-frequency component of the visible light image are enhanced through the low-frequency component of the infrared image, to obtain the enhanced chrominance component and low-frequency component of the visible light image. Further, the high-frequency component of the infrared image and the high-frequency component of the visible light image are fused to obtain the target high-frequency component. Finally, the enhanced chrominance component and low-frequency component of the visible light image and the target high-frequency component are combined to obtain a fused image.
  • Using the low-frequency component of the infrared image to enhance the chrominance component and the low-frequency component of the visible light image can effectively improve the visual effect of the fused image; fusing the high-frequency component of the infrared image with the high-frequency component of the visible light image can enhance the details of the visible light image without introducing extra edge information. This makes the final fused image of good quality and alleviates the problems of poor fused-image quality and poor visual effect in the existing image fusion methods.
  • FIG. 1 is a flowchart of an image fusion method according to an embodiment of the present application
  • FIG. 2 is a flowchart of a method for extracting a chroma component, a low frequency component, and a high frequency component of a visible light image according to an embodiment of the present application;
  • FIG. 3 is a flowchart of a method for performing enhancement processing on a chroma component of a visible light image and a low frequency component of a visible light image by using a low frequency component of an infrared image according to an embodiment of the present application;
  • FIG. 4 is a flowchart of a method for fusing a high-frequency component of an infrared image and a high-frequency component of a visible light image according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of an image fusion apparatus according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an image fusion system according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an electronic device according to an embodiment of the present application.
  • An embodiment of an image fusion method is provided. It should be noted that the steps shown in the flowcharts of the accompanying drawings can be executed in a computer system, such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.
  • FIG. 1 is a flowchart of an image fusion method according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
  • Step S102: The image processor extracts a chrominance component, a low-frequency component, and a high-frequency component of the visible light image, and extracts a low-frequency component and a high-frequency component of the infrared image, where the visible light image and the infrared image correspond to the same real scene.
  • A visible light image may be acquired through a visible light camera, and an infrared image may be acquired through an infrared camera.
  • The visible light camera and the infrared camera can be set in the same camera device, or in different camera devices; in the latter case, the camera device where the visible light camera is located and the camera device where the infrared camera is located are installed at the same position.
  • the image fusion method is applied to an image processor.
  • Image fusion specifically means that image data about the same target, collected through multiple source channels, is processed by image processing and computer technology to maximally extract the favorable information in the respective channels and finally integrate it into a high-quality image. This improves the utilization of image information, improves the accuracy and reliability of computer interpretation, and improves the spatial and spectral resolution of the original images, which is conducive to monitoring.
  • the favorable information includes: a chroma component, a low frequency component, and a high frequency component of the visible light image, and also includes a low frequency component and a high frequency component of the infrared image.
  • the extraction process is described in detail below.
  • Step S104: The image processor performs enhancement processing on the chrominance component of the visible light image and the low-frequency component of the visible light image through the low-frequency component of the infrared image, to obtain the enhanced chrominance component and low-frequency component of the visible light image.
  • That is, after the components are extracted, the chrominance component and the low-frequency component of the visible light image are further enhanced through the low-frequency component of the infrared image; obtaining the enhanced chrominance component and low-frequency component of the visible light image in this way can effectively improve the visual effect of the fused image.
  • Step S106: The image processor fuses the high-frequency component of the infrared image and the high-frequency component of the visible light image to obtain a target high-frequency component.
  • That is, the high-frequency component of the infrared image and the high-frequency component of the visible light image are further fused to obtain the target high-frequency component. This process can improve the signal-to-noise ratio and edge-detail clarity of the fused image in areas with high structural similarity, while not introducing extra false edges in areas with low structural similarity, such as shadows and highlights.
  • Step S108: The image processor combines the enhanced chrominance component and low-frequency component of the visible light image with the target high-frequency component to obtain a fused image.
  • After the enhanced chrominance component and low-frequency component of the visible light image and the target high-frequency component are obtained, they are combined to get the fused image. That is, image reconstruction is performed according to the enhanced chrominance component and low-frequency component of the visible light image and the target high-frequency component, to obtain the fused image.
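As an illustration only, steps S102 to S108 can be sketched in Python as follows. This is not the patented implementation: Gaussian filtering stands in for the layered noise reduction, fixed global weights stand in for the enhancement and fusion rules, the chrominance component is passed through unchanged, and all function names and parameters are our own.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    # Separable Gaussian low-pass filter (stand-in for the layering filter).
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)

def fuse(vis_y, vis_uv, ir_y, w_low=0.3, w_high=0.5):
    # Step S102: split each luminance image into low- and high-frequency layers.
    vis_low, ir_low = gaussian_blur(vis_y), gaussian_blur(ir_y)
    vis_high, ir_high = vis_y - vis_low, ir_y - ir_low
    # Step S104 (simplified): enhance the visible low-frequency layer with the
    # infrared low-frequency layer; chrominance is passed through unchanged here.
    low = (1 - w_low) * vis_low + w_low * ir_low
    # Step S106 (simplified): fuse the two high-frequency layers.
    high = (1 - w_high) * vis_high + w_high * ir_high
    # Step S108: recombine the luminance layers with the chrominance component.
    return low + high, vis_uv
```

The structure mirrors the four steps; the later sections of the description replace the fixed blends with the non-local-mean layering, gradient-domain enhancement, and similarity-driven fusion.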
  • The image processor extracting the chrominance component, the low-frequency component, and the high-frequency component of the visible light image includes the following steps:
  • Step S1021: Perform color gamut conversion on the visible light image to separate and obtain the initial chrominance component and the initial luminance component of the visible light image.
  • Specifically, color gamut conversion is performed on the visible light image to obtain a color space in which luminance and chrominance are separated, such as the standard HSV space, YIQ space, YUV space, or Lab space; the color space can also be customized according to requirements, which is not specifically limited in this embodiment of the present application. The initial chrominance component and the initial luminance component of the visible light image are thereby obtained.
  • Step S1022: Perform noise reduction processing on the initial chrominance component to obtain the chrominance component of the visible light image.
  • After the initial chrominance component is obtained, noise reduction processing is performed on it; a linear filter (such as a mean filter or a Gaussian filter) or a non-linear edge-preserving filter (such as a bilateral filter or a non-local mean filter) can be used, and the image after noise reduction has a higher signal-to-noise ratio.
  • Step S1023: Perform layered noise reduction processing on the initial luminance component of the visible light image to obtain the low-frequency component and the high-frequency component of the visible light image.
  • A filtering algorithm is used to perform layered noise reduction on the initial luminance component of the visible light image. The processed luminance component, that is, the low-frequency component and the high-frequency component of the visible light image, not only has a higher signal-to-noise ratio but also retains more edge-detail information.
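As a minimal sketch of step S1023 with a linear target filtering algorithm (here a mean filter; the function name and kernel size are our own choices), the luminance component splits into a low-frequency layer and a high-frequency residual that sum back to the input:

```python
import numpy as np

def layer_decompose(lum, k=5):
    # Low-frequency layer: k x k mean filter (a simple linear filtering choice).
    pad = k // 2
    padded = np.pad(lum, pad, mode="edge")
    low = np.zeros_like(lum, dtype=float)
    for di in range(k):
        for dj in range(k):
            low += padded[di:di + lum.shape[0], dj:dj + lum.shape[1]]
    low /= k * k
    # High-frequency layer: the residual, so low + high reconstructs the input.
    high = lum - low
    return low, high
```

Because the high-frequency layer is defined as the residual, the decomposition is exactly invertible, which is what allows step S108 to rebuild the image from the processed layers.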
  • The process in which the image processor performs layered noise reduction processing on the initial luminance component of the visible light image includes the following step:
  • Step S10231: The initial luminance component of the visible light image is subjected to layered noise reduction processing through a target filtering algorithm, to obtain the low-frequency component and the high-frequency component of the visible light image, where the target filtering algorithm includes a linear filtering algorithm or a non-linear filtering algorithm.
  • It should be noted that, in addition to filtering algorithms, multi-scale decomposition methods such as wavelet transform and Gaussian pyramid can also be used to achieve layered noise reduction of the visible light image.
  • Linear filtering algorithms include, for example, mean filtering and Gaussian filtering. The linear filtering method is simple in principle, low in computational complexity, and good in performance; it can quickly smooth a noisy image, but the edge details of the filtering result are blurred.
  • Non-linear filtering algorithms include edge-preserving filtering algorithms such as median filtering, non-local mean filtering, and bilateral filtering. Non-linear filtering methods can remove noise while protecting the edge-detail information of the image, so the layered images have both a higher signal-to-noise ratio and higher sharpness.
  • In step S10231, if the target filtering algorithm is a non-linear filtering algorithm, performing layered noise reduction processing on the initial luminance component of the visible light image through the target filtering algorithm includes: performing layered noise reduction processing on the initial luminance component by using the first formula and the second formula.
  • The first formula computes the low-frequency component as a weighted average, visY_low(u) = Σ_v w(u, v) · visY(v).
  • The second formula gives w(u, v), the filtering weight of the non-linear filtering algorithm: w(u, v) = exp(−‖N(u) − N(v)‖² / h²), normalized so that the weights sum to 1, where N(u) is the current data block centered on the target point u, N(v) is the reference data block centered on the target point v, the target points u and v are any pixel points in the luminance component, and h is a preset filter intensity control parameter.
  • Non-local mean filtering utilizes the rich repeated redundant information in the image; the filtering operation is mainly realized by finding blocks in the image that are similar in structure to the current block. Suppose the luminance component of the visible light image (the same applies to the infrared image) is visY, and u is any point in visY.
  • The specific implementation steps of layered noise reduction are as follows: the low-frequency component of the visible light image is obtained with the weighted average given by the first formula, and the high-frequency component is then obtained by visY_high(u) = visY(u) − visY_low(u).
  • In the filtering weight w(u, v), N(u) and N(v) are the blocks centered on the points u and v, respectively. When the two blocks are structurally similar, the filtering weight is large; when they differ, the filtering weight is small.
  • Compared with linear filtering, this method has relatively high computational complexity, but the low-frequency component of the visible light image obtained by it, compared with other existing layering techniques, retains more edge details and a higher signal-to-noise ratio while removing noise, which makes the final fused image look clearer and more accurate.
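A direct, unoptimized sketch of the non-local-mean layering described above follows; the block size, search window, and h are illustrative, and the weight normalization over the search window is our assumption:

```python
import numpy as np

def nlm_low_frequency(visY, block=3, search=5, h=0.1):
    """visY_low(u) = sum_v w(u, v) * visY(v), with
    w(u, v) proportional to exp(-||N(u) - N(v)||^2 / h^2), normalized over
    the search window; visY_high = visY - visY_low."""
    r, s = block // 2, search // 2
    H, W = visY.shape
    padded = np.pad(visY, r, mode="reflect")
    low = np.zeros_like(visY, dtype=float)
    for i in range(H):
        for j in range(W):
            Nu = padded[i:i + block, j:j + block]  # current block N(u)
            weights, values = [], []
            for di in range(max(i - s, 0), min(i + s + 1, H)):
                for dj in range(max(j - s, 0), min(j + s + 1, W)):
                    Nv = padded[di:di + block, dj:dj + block]  # reference N(v)
                    d2 = np.sum((Nu - Nv) ** 2)
                    weights.append(np.exp(-d2 / h**2))
                    values.append(visY[di, dj])
            w = np.array(weights)
            low[i, j] = np.dot(w / w.sum(), values)
    return low, visY - low
```

Production implementations restrict and vectorize the block comparisons; this version only makes the two formulas concrete.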
  • Extracting the low-frequency component and the high-frequency component of the infrared image includes: performing layered processing on the infrared image through a target filtering algorithm, to extract the low-frequency component and the high-frequency component of the infrared image from the infrared image.
  • The target filtering algorithm may be a linear filtering algorithm (such as mean filtering or Gaussian filtering): the linear filtering method is simple in principle, low in computational complexity, and good in performance, and can quickly smooth a noisy image, but the edge details of the filtering result are blurred. It may also be a non-linear filtering algorithm (such as median filtering, non-local mean filtering, bilateral filtering, or another edge-preserving filtering algorithm): the non-linear filtering method can remove noise while protecting the edge details of the image, so the layered images obtained by such a filter have a higher signal-to-noise ratio and sharpness.
  • The above content details the specific process of extracting the chrominance component, the low-frequency component, and the high-frequency component of the visible light image. The following describes in detail the process of enhancing the chrominance component of the visible light image and the low-frequency component of the visible light image through the low-frequency component of the infrared image.
  • The image processor performing enhancement processing on the chrominance component of the visible light image and the low-frequency component of the visible light image through the low-frequency component of the infrared image includes the following steps:
  • Step S1041: Perform color gamut conversion on the visible light image to convert the visible light image to the RGB color gamut space.
  • Visible light images collected in harsh environments, such as low light or haze, are not only noisy but also have low color saturation and contrast. The existing technical solutions mainly use infrared images to enhance the signal-to-noise ratio of visible light images, with little or no consideration of enhancing image contrast and color performance, which results in a lower contrast of the fused image and is not conducive to observation. The present application can add the grayscale and contrast information of the infrared image on the basis of the visible light image, improving the contrast and color performance of the color image.
  • The enhancement algorithm provided in this application is an enhancement method based on luminance and chrominance; this method mainly uses the grayscale information of the infrared image to adjust the luminance component and the chrominance component of the visible light image, improving the contrast of the visible light image while ensuring that color distortion does not occur. The enhancement method based on the gradient domain mainly converts the images into the gradient domain and uses the gradient information of the infrared image to enhance the visible light image. To this end, the visible light image is first subjected to color gamut conversion and converted to the RGB color gamut space.
  • Step S1042: Construct the target image matrix by combining the visible light image converted to the RGB color gamut space and the infrared image, and calculate the Jacobian matrix of each element in the target image matrix.
  • Specifically, a high-dimensional image matrix H (that is, the target image matrix) is constructed based on the visible light image I_vis and the infrared image I_ir, and then the Jacobian matrix J_i,j of each position element A_ij (where A_ij is any element of the target image matrix H) is calculated. Here N (N ≥ 4) is the dimension of the high-dimensional matrix H, and each row of the Jacobian matrix represents the gradients in the x and y directions in one dimension.
  • The construction method of the high-dimensional matrix is not unique. For example, the high-dimensional image matrix H to be constructed may be a 4-dimensional matrix, in which the first three dimensions are respectively the R, G, and B components of the visible light image and the fourth dimension is the luminance component of the infrared image; or the high-dimensional image matrix H may be a 6-dimensional matrix, in which the first three dimensions are respectively the R, G, and B components of the visible light image and the last three dimensions are the R, G, and B components of the infrared image.
  • Step S1043: Perform dimensionality reduction processing on the Jacobian matrix of each element to obtain a gradient vector of the converted visible light image at each element, thereby obtaining a gradient matrix.
  • Specifically, the dimension of the Jacobian matrix is reduced by singular value decomposition or principal component analysis.
  • Step S1044 performing gradient domain image reconstruction on the gradient matrix to obtain an enhanced visible light image
  • gradient-domain image reconstruction of the gradient matrix can be performed by solving a two-dimensional discrete Poisson equation or by constructing a mapping back to the original high-dimensional input, thereby obtaining an enhanced visible light image.
  • Step S1045 Perform color gamut transformation on the enhanced visible light image to separate the enhanced chrominance component and low frequency component.
  • the computational complexity of the above enhancement process is relatively high, but this enhancement method yields the clearest and most accurate image effect.
  • compared with the original image, the enhanced image shows an obvious improvement in contrast and color, and can effectively avoid the color distortion and discontinuity problems caused by traditional enhancement methods.
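The gradient-domain enhancement above (stack the channels, take the per-pixel Jacobian, and collapse it to a single gradient via its dominant singular vector) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the use of `np.gradient`, and the per-pixel SVD loop are assumptions.

```python
import numpy as np

def fused_gradient(channels):
    """Per-pixel gradient of the enhanced image, taken from the dominant
    singular vector of the stacked (N x 2) Jacobian of all channels."""
    gx = np.stack([np.gradient(c.astype(float), axis=1) for c in channels], axis=-1)
    gy = np.stack([np.gradient(c.astype(float), axis=0) for c in channels], axis=-1)
    H, W, _ = gx.shape
    out_x = np.zeros((H, W))
    out_y = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            J = np.stack([gx[i, j], gy[i, j]], axis=1)   # N x 2 Jacobian at (i, j)
            _, S, Vt = np.linalg.svd(J, full_matrices=False)
            out_x[i, j] = S[0] * Vt[0, 0]                # s11 * v1: dominant gradient
            out_y[i, j] = S[0] * Vt[0, 1]
    return out_x, out_y
```

For a single input channel the dominant singular vector simply recovers that channel's gradient (up to sign), which is a quick sanity check on the decomposition.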
  • the embodiment of the present application further provides another enhancement processing method, which is an enhancement method based on only a luminance component.
  • the weight of the weighted average method can be global or based on local pixels.
  • the local weight can be set according to the brightness, saturation, and contrast of the image.
  • histogram matching also known as histogram specification, refers to an image enhancement method in which a histogram of an image is converted into a histogram of a predetermined shape.
  • This enhancement method is simple in principle and low in computational complexity, but it may cause color distortion in the enhanced image.
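The histogram-specification variant mentioned above can be sketched with a CDF-based mapping. The function name and the interpolation approach are illustrative assumptions; the patent only requires that the source histogram be reshaped toward the reference's.

```python
import numpy as np

def hist_match(source, reference):
    """Histogram specification: map source intensities so that their
    distribution matches the reference's, via the two CDFs."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)   # invert the reference CDF
    return matched[s_idx].reshape(source.shape)
```

In the method described here, `reference` would be the infrared luminance and `source` the visible-light luminance (or their gradient histograms, for the gradient-domain variant).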
  • the above describes the specific process of enhancing the chroma component of the visible light image and the low-frequency component of the visible light image through the low-frequency component of the infrared image.
  • the process of fusing the high-frequency component of the infrared image with the high-frequency component of the visible light image is described in detail below.
  • step S106 the image processor fuses a high-frequency component of an infrared image and a high-frequency component of a visible light image to obtain a target high-frequency component including:
  • a fusion algorithm based on the structural similarity between the visible light image and the infrared image is used to fuse the high frequency component of the infrared image and the high frequency component of the visible light image to obtain the target high frequency component.
  • the specific fusion process includes the following steps (referring to FIG. 4): Step S1061, calculating a structural similarity measurement value between corresponding pixels in the visible light image and the infrared image;
  • Step 1 Determine the target sliding window
  • Step 2 Traverse each pixel of the visible light image and the infrared image through the target sliding window, and obtain the center position of the window when the target sliding window is traversed to any position;
  • Step 3 Use the pixel point at the center of the window in the visible image and the infrared image as the corresponding pixel point;
  • Step 4 Use the structural similarity formula to calculate the measure of structural similarity between corresponding pixels in the visible light image and the infrared image, where μ vis , μ ir , σ vis , and σ ir are the means and standard deviations of the visible light image and infrared image pixels within the target sliding window, respectively, and C 1 , C 2 are predetermined constants.
  • step S1062 the high-frequency component of the infrared image and the high-frequency component of the visible light image are fused by using the structural similarity measurement value.
  • H comb , H vis , and H ir are the high-frequency components of the fused image, the visible light image, and the infrared image, respectively; th1 and th2 are preset structural similarity thresholds with th1 < th2, and ω is the fusion weight of the high-frequency component of the infrared image.
  • the fusion weight ⁇ may be a preset global weight, or may be an adaptive local weight obtained according to the similarity of the local pixel points.
  • a global weighted average method or a weighting method based on regional saliency can usually be used, but with such methods false edges in the image are obvious and the visual effect is poor.
  • a fusion method based on the structural similarity of the visible light image and the infrared image can also be used. Taking a weighted fusion strategy based on regional structural similarity as an example, the fused image can retain more edge information and further suppress noise without introducing false edges. The specific implementation steps are as follows:
  • ⁇ vis , ⁇ ir , ⁇ vis , and ⁇ ir are the mean and standard deviation of the pixels of the visible and infrared images in this window, respectively, and C 1 and C 2 are preset smaller constants configured to control the above formula.
  • the stability of the above formula is in the range of [-1,1], which can be used to measure the structural similarity between the visible image and the infrared image. The closer the value is to 1, the higher the structural similarity. On the contrary, The closer to -1, the lower the structural similarity.
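One plausible reading of the windowed similarity measure is the standard SSIM-style form below; because the published formula appears only as an image in the original, the inclusion of the covariance term (which gives the stated [-1, 1] range) is an assumption.

```python
import numpy as np

def struct_sim(a, b, C1=1e-4, C2=9e-4):
    """SSIM-style similarity of two same-sized windows, in [-1, 1].
    C1 and C2 are the small stabilizing constants from the text."""
    a = a.astype(float); b = b.astype(float)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()      # covariance term (assumption)
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / (
        (mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2))
```

Identical windows score exactly 1, and windows with opposite structure score below 0, matching the behavior described in the text.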
  • the high-frequency component of the infrared image and the high-frequency component of the visible light image are fused by using the structural similarity metric value.
  • the method of fusing the high-frequency component is defined as follows:
  • H comb , H vis , and H ir are the high-frequency components of the fused image, the visible light image, and the infrared image, respectively; th1 and th2 are preset structural similarity thresholds with th1 < th2, and ω is the fusion weight of the high-frequency component of the infrared image.
  • This fusion method can improve the signal-to-noise ratio and edge detail clarity of the fused image in areas with high structural similarity, while not introducing unnecessary false edges in areas with low similarity such as shadows and highlights.
  • the method is applicable to various scenes such as low light and haze days.
  • This fusion method can not only effectively improve the signal-to-noise ratio, sharpness, and color performance of the fused image, but also avoid the problem of false edges caused by the inconsistency of the source image structure;
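The threshold-based high-frequency fusion can be sketched as below. The published piecewise formula is an image in the original, so the exact form — keep H_vis where similarity is below th1, blend with weight ω above th2, and ramp linearly in between — is an assumption consistent with the surrounding description.

```python
import numpy as np

def fuse_high(h_vis, h_ir, sim, th1=0.4, th2=0.8, w=0.5):
    """Piecewise fusion of high-frequency layers driven by similarity `sim`."""
    alpha = w * (sim - th1) / (th2 - th1)       # linear ramp of the IR weight
    out = np.where(sim < th1, h_vis,            # dissimilar: keep visible only
          np.where(sim < th2,
                   (1 - alpha) * h_vis + alpha * h_ir,   # transition band
                   (1 - w) * h_vis + w * h_ir))          # similar: full blend
    return out
```

In shadow or highlight regions (low similarity) the infrared detail is suppressed entirely, which is how the method avoids introducing false edges.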
  • the image fusion method provided by the present application uses different filters to perform hierarchical fusion on the image, so that the fused image has a higher signal-to-noise ratio and sharpness, and introduces a similarity constraint when fusing the high-frequency components, which avoids the loss of edge details and the introduction of redundant false edges seen in existing fusion technology. Secondly, enhancing the visible light image also effectively improves the contrast and color performance of the fused image, making the visual effect of the fused image richer and more natural.
  • An embodiment of the present application further provides an image fusion apparatus, which is mainly configured to execute the image fusion method provided by the foregoing content in the embodiment of the present application.
  • the image fusion apparatus provided by the embodiment of the present application is specifically described below.
  • FIG. 5 is a schematic diagram of an image fusion apparatus according to an embodiment of the present application.
  • the image fusion apparatus mainly includes an extraction module 20, an enhancement processing module 21, a fusion module 22, and a merge module 23, in which:
  • An extraction module configured to extract a chroma component, a low frequency component and a high frequency component of a visible light image, and a low frequency component and a high frequency component of an infrared image, wherein the visible light image and the infrared image correspond to the same real scene;
  • the enhancement processing module is configured to perform enhancement processing on the chroma component of the visible light image and the low frequency component of the visible light image through the low frequency component of the infrared image to obtain the enhanced chroma component and low frequency component of the visible light image;
  • a fusion module configured to fuse a high frequency component of an infrared image and a high frequency component of a visible light image to obtain a target high frequency component
  • the merging module is configured to merge the chrominance component and the low-frequency component of the visible light image after enhancement and the target high-frequency component to obtain a fused image.
  • the chrominance component, low-frequency component, and high-frequency component of the visible light image are first extracted by an image processor, together with the low-frequency component and high-frequency component of the infrared image. Then the chrominance component of the visible light image and the low-frequency component of the visible light image are enhanced to obtain the enhanced chrominance component and low-frequency component of the visible light image; further, the high-frequency component of the infrared image and the high-frequency component of the visible light image are fused to obtain the target high-frequency component; finally, the enhanced chrominance component and low-frequency component of the visible light image are merged with the target high-frequency component to obtain a fused image.
  • the low-frequency component of the infrared image is used to enhance the chroma component and the low-frequency component of the visible light image, which can effectively improve the visual effect of the fused image.
  • fusing the high-frequency component of the infrared image with the high-frequency component of the visible light image can enhance the details of the visible light image without introducing extra edge detail information, which makes the final fused image of good quality and alleviates the poor quality and poor visual effect of fused images obtained by existing image fusion methods.
  • the extraction module includes:
  • a first color gamut conversion unit configured to perform color gamut conversion on a visible light image to separate an initial chromaticity component and an initial brightness component of the visible light image;
  • a noise reduction processing unit configured to perform noise reduction processing on an initial chrominance component to obtain a chrominance component of a visible light image
  • a layered noise reduction processing unit is configured to perform layered noise reduction processing on an initial brightness component of a visible light image to obtain a low frequency component and a high frequency component of the visible light image.
  • the hierarchical noise reduction processing unit is further configured to:
  • a target filtering algorithm is used to perform layered noise reduction on the initial brightness component of the visible light image to obtain low-frequency and high-frequency components of the visible light image.
  • the target filtering algorithm includes a linear filtering algorithm or a non-linear filtering algorithm.
  • the hierarchical noise reduction processing unit is further configured to:
  • the first formula is: visY_low(u) = Σ v w(u, v)·visY(v), visY_high(u) = visY(u) − visY_low(u);
  • the second formula is: w(u, v) = (1/Z(u))·exp(−‖visY(N(u)) − visY(N(v))‖ 2 / h 2 ), where w(u, v) is the filtering weight of the non-linear filtering algorithm, Z(u) is a normalizing constant, N(u) is the current data block centered on the target point u, N(v) is the reference data block centered on the target point v, the target points u and v are any pixels in the brightness component, and h is a preset filter strength control parameter.
  • the extraction module further includes:
  • a layered extraction unit is configured to perform layered processing on the infrared image by using a target filtering algorithm to extract low-frequency components and high-frequency components of the infrared image from the infrared image.
  • the fusion module includes:
  • the fusion unit is configured to use a fusion algorithm based on a structural similarity between the visible light image and the infrared image to fuse the high frequency component of the infrared image and the high frequency component of the visible light image to obtain a target high frequency component.
  • the fusion unit is further configured to:
  • the high-frequency component of the infrared image and the high-frequency component of the visible light image are fused by using the structural similarity measure.
  • the fusion unit is further configured to fuse the high-frequency components according to the structural similarity measurement value, where H comb , H vis , and H ir are the high-frequency components of the fused image, the visible light image, and the infrared image, respectively; th1 and th2 are preset structural similarity thresholds with th1 < th2, and ω is the fusion weight of the high-frequency component of the infrared image.
  • the enhanced processing module includes:
  • a second color gamut conversion unit configured to perform color gamut conversion on the visible light image to convert the visible light image to an RGB color gamut space
  • a calculation unit configured to combine a visible light image and an infrared image transferred to the RGB color gamut space to construct a target image matrix, and calculate a Jacobian matrix of each element in the target image matrix;
  • the dimensionality reduction processing unit is configured to perform data dimensionality reduction processing on the Jacobian matrix of each element to obtain a gradient vector of the visible light image transferred to the RGB color gamut space at each element, thereby obtaining a gradient matrix;
  • the gradient domain image reconstruction unit is configured to perform gradient domain image reconstruction on the gradient matrix to obtain an enhanced visible light image
  • the third color gamut transformation unit is configured to perform color gamut transformation on the enhanced visible light image, and separate and obtain an enhanced chroma component and a low frequency component.
  • the system includes a camera device 61, an image processor 62, and a data communication device 63, where the camera device 61 is connected to the image processor 62 through the data communication device 63.
  • the imaging device 61 is configured to transmit the visible light image and the infrared image to the image processor through the data communication device;
  • the image processor 62 is configured to: extract the chroma component, low-frequency component, and high-frequency component of the visible light image, and extract the low-frequency component and high-frequency component of the infrared image, where the visible light image and the infrared image correspond to the same real scene; enhance the chroma component of the visible light image and the low-frequency component of the visible light image through the low-frequency component of the infrared image to obtain the enhanced chroma component and low-frequency component of the visible light image; fuse the high-frequency component of the infrared image with the high-frequency component of the visible light image to obtain a target high-frequency component; and merge the enhanced chrominance component and low-frequency component of the visible light image with the target high-frequency component to obtain a fused image.
  • the camera device includes a visible light camera and an infrared camera; the visible light camera is configured to acquire the visible light image; and the infrared camera is configured to acquire the infrared image.
  • the data communication device includes: a wired communication device and / or a wireless communication device.
  • the electronic device includes: a processor 70, a memory 71, a bus 72, and a communication interface 73, and the processor 70, the communication interface 73, and the memory 71 are connected through the bus 72;
  • the processor 70 is configured to execute executable modules, such as a computer program, stored in the memory 71. When the processor executes the program, the steps of the method as described in the method embodiment are implemented.
  • the memory 71 may include a high-speed random access memory (RAM, Random Access Memory), and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
  • the communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 73 (which may be wired or wireless), and the Internet, a wide area network, a local area network, a metropolitan area network, and the like may be used.
  • the bus 72 may be an ISA bus, a PCI bus, an EISA bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and the like. For ease of representation, only a two-way arrow is used in FIG. 7, but it does not mean that there is only one bus or one type of bus.
  • the memory 71 is configured to store a program, and the processor 70 executes the program after receiving the execution instruction.
  • the method executed by the apparatus defined by the process flow disclosed in any one of the foregoing embodiments of the present application may be applied to the processor 70 or implemented by the processor 70.
  • the processor 70 may be an integrated circuit chip and has a signal processing capability. In the implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 70 or an instruction in a form of software.
  • the above-mentioned processor 70 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in combination with the embodiments of the present application may be directly implemented by a hardware decoding processor, or may be performed by using a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a mature storage medium such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, or an electrically erasable programmable memory, a register, and the like.
  • the storage medium is located in the memory 71, and the processor 70 reads the information in the memory 71 and completes the steps of the foregoing method in combination with its hardware.
  • a computer storage medium is also provided, on which a computer program is stored; when the computer runs the computer program, the steps of the method of any one of the foregoing method embodiments are executed.
  • the terms “installation”, “connected”, and “connection” should be understood in a broad sense unless otherwise specified and limited: for example, they may be fixed connections, detachable connections, or integral connections; they may be mechanical or electrical connections; and they may be direct connections, indirect connections through an intermediate medium, or internal communication between two elements.
  • the specific meanings of the above terms in this application can be understood in specific situations.
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the unit is only a logical function division.
  • multiple units or components may be combined or integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some communication interfaces, devices or units, which may be electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a non-volatile computer-readable storage medium executable by a processor.
  • the technical solution of this application, in essence the part that contributes beyond the existing technology, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions configured to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present application.
  • the foregoing storage media include: a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.
  • the image fusion method, system, electronic device, and computer-readable medium provided in the embodiments of the present application may be applied to an image processor.
  • Using the low-frequency component of the infrared image to enhance the chroma and low-frequency components of the visible light image can effectively improve the visual effect of the fused image.
  • fusing the high-frequency component of the infrared image with the high-frequency component of the visible light image can enhance the details of the visible light image without introducing extra edge detail information, which makes the final fused image of good quality and alleviates the technical problems of poor quality and poor visual effect in fused images obtained by existing image fusion methods.


Abstract

An image fusion method, apparatus, electronic device, and computer-readable medium. The method includes: extracting the chrominance component, low-frequency component, and high-frequency component of a visible light image, and extracting the low-frequency component and high-frequency component of an infrared image (S102); enhancing the chrominance component of the visible light image and the low-frequency component of the visible light image by means of the low-frequency component of the infrared image to obtain an enhanced chrominance component and an enhanced low-frequency component (S104); fusing the high-frequency component of the infrared image with the high-frequency component of the visible light image to obtain a target high-frequency component (S106); and merging the enhanced chrominance component and low-frequency component with the target high-frequency component to obtain a fused image (S108). The method can effectively improve the visual effect of the fused image, so that the final fused image is of good quality, alleviating the poor quality and poor visual effect of fused images obtained by existing image fusion methods.

Description

Image fusion method, system, electronic device, and computer-readable medium — Technical Field
The present application relates to the technical field of image processing, and in particular to an image fusion method, system, electronic device, and computer-readable medium.
Background Art
With the development of science and technology, video capture devices have been widely used in military and security surveillance. However, affected by imaging conditions, environment, and other factors, visible light images captured under low-light conditions may suffer from obvious noise and poor sharpness, whereas infrared images have a high signal-to-noise ratio and contain rich edge detail information but lack color information and cannot reflect the real scene. Image fusion technology can combine the advantages of visible light images and infrared images to obtain a clearer image.
Existing image fusion technology simply performs weighted or layered fusion of the visible light image and the infrared image, causing problems such as color distortion and blurred edge details in the fused image. Moreover, existing fusion technology does not consider the inconsistency of structural information between the visible light image and the infrared image (such as shadows and highlights), so the fused image introduces redundant edge information and the visual effect is poor.
In summary, fused images obtained by existing image fusion methods are of poor quality and have a poor visual effect.
Summary of the Invention
In view of this, the purpose of the present application is to provide an image fusion method, system, electronic device, and computer-readable medium, so as to alleviate the technical problems of poor quality and poor visual effect of fused images obtained by existing image fusion methods.
In a first aspect, an embodiment of the present application provides an image fusion method applied to an image processor, including: extracting, by the image processor, a chrominance component, a low-frequency component, and a high-frequency component of a visible light image, and extracting a low-frequency component and a high-frequency component of an infrared image, where the visible light image and the infrared image correspond to the same real scene; enhancing, by the image processor, the chrominance component of the visible light image and the low-frequency component of the visible light image through the low-frequency component of the infrared image to obtain an enhanced chrominance component and low-frequency component of the visible light image; fusing, by the image processor, the high-frequency component of the infrared image with the high-frequency component of the visible light image to obtain a target high-frequency component; and merging, by the image processor, the enhanced chrominance component and low-frequency component of the visible light image with the target high-frequency component to obtain a fused image.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation of the first aspect, in which extracting, by the image processor, the chrominance component, low-frequency component, and high-frequency component of the visible light image includes: performing color gamut conversion on the visible light image to separate an initial chrominance component and an initial brightness component of the visible light image; performing noise reduction on the initial chrominance component to obtain the chrominance component of the visible light image; and performing layered noise reduction on the initial brightness component of the visible light image to obtain the low-frequency component and high-frequency component of the visible light image.
With reference to the first aspect, an embodiment of the present application provides a second possible implementation of the first aspect, in which performing, by the image processor, layered noise reduction on the initial brightness component of the visible light image to obtain the low-frequency component and high-frequency component of the visible light image includes: performing layered noise reduction on the initial brightness component of the visible light image through a target filtering algorithm, where the target filtering algorithm includes a linear filtering algorithm or a non-linear filtering algorithm.
With reference to the first aspect, an embodiment of the present application provides a third possible implementation of the first aspect, in which, if the target filtering algorithm is a non-linear filtering algorithm, performing layered noise reduction on the initial brightness component of the visible light image through the target filtering algorithm includes: performing layered noise reduction on the initial brightness component through a first formula and a second formula to obtain the low-frequency component and high-frequency component of the visible light image; where the first formula is
visY_low(u) = Σ v w(u, v)·visY(v), visY_high(u) = visY(u) − visY_low(u)
and the second formula is
w(u, v) = (1/Z(u))·exp(−‖visY(N(u)) − visY(N(v))‖ 2 / h 2 )
where w(u, v) is the filtering weight of the non-linear filtering algorithm, Z(u) is a normalizing constant, N(u) is the current data block centered on target point u, N(v) is the reference data block centered on target point v, the target points u and v are any pixels in the brightness component of the visible light image, and h is a preset filter strength control parameter.
With reference to the first aspect, an embodiment of the present application provides a fourth possible implementation of the first aspect, in which extracting the low-frequency component and high-frequency component of the infrared image includes: performing layered processing on the infrared image through the target filtering algorithm to extract the low-frequency component and high-frequency component of the infrared image from the infrared image.
With reference to the first aspect, an embodiment of the present application provides a fifth possible implementation of the first aspect, in which fusing, by the image processor, the high-frequency component of the infrared image with the high-frequency component of the visible light image to obtain the target high-frequency component includes: fusing the high-frequency component of the infrared image with the high-frequency component of the visible light image using a fusion algorithm based on the structural similarity between the visible light image and the infrared image, to obtain the target high-frequency component.
With reference to the first aspect, an embodiment of the present application provides a sixth possible implementation of the first aspect, in which fusing the high-frequency component of the infrared image with the high-frequency component of the visible light image using a fusion algorithm based on the structural similarity between the visible light image and the infrared image includes: calculating a structural similarity measurement value between corresponding pixels of the visible light image and the infrared image; and fusing the high-frequency component of the infrared image with the high-frequency component of the visible light image using the structural similarity measurement value.
With reference to the first aspect, an embodiment of the present application provides a seventh possible implementation of the first aspect, in which calculating the structural similarity measurement value between corresponding pixels of the visible light image and the infrared image includes: determining a target sliding window; traversing each pixel of the visible light image and the infrared image with the target sliding window, and obtaining the window center position when the target sliding window is traversed to any position; taking the pixels at the window center position in the visible light image and the infrared image as the corresponding pixels; and calculating the structural similarity measurement value between the corresponding pixels of the visible light image and the infrared image using the formula
SSIM(u) = (2μ vis μ ir + C 1 )(2σ vis,ir + C 2 ) / ((μ vis 2 + μ ir 2 + C 1 )(σ vis 2 + σ ir 2 + C 2 ))
where μ vis , μ ir , σ vis , and σ ir are the means and standard deviations of the visible light image and infrared image pixels within the target sliding window, respectively, σ vis,ir is their covariance, and C 1 , C 2 are preset constants.
With reference to the first aspect, an embodiment of the present application provides an eighth possible implementation of the first aspect, in which fusing the high-frequency component of the infrared image with the high-frequency component of the visible light image using the structural similarity measurement value includes: performing the fusion through the following formula:
Figure PCTCN2018105792-appb-000004
fusing the high-frequency component of the infrared image with the high-frequency component of the visible light image, where H comb , H vis , and H ir are the high-frequency components of the fused image, the visible light image, and the infrared image, respectively, th1 and th2 are preset structural similarity thresholds with th1 < th2, and ω is the fusion weight of the high-frequency component of the infrared image.
With reference to the first aspect, an embodiment of the present application provides a ninth possible implementation of the first aspect, in which enhancing, by the image processor, the chrominance component of the visible light image and the low-frequency component of the visible light image through the low-frequency component of the infrared image includes: performing color gamut conversion on the visible light image to convert the visible light image into the RGB color gamut space; constructing a target image matrix from the visible light image converted into the RGB color gamut space and the infrared image, and calculating the Jacobian matrix of each element in the target image matrix; performing data dimensionality reduction on the Jacobian matrix of each element to obtain the gradient vector of the converted visible light image at each element, thereby obtaining a gradient matrix; performing gradient-domain image reconstruction on the gradient matrix to obtain an enhanced visible light image; and obtaining the enhanced chrominance component and low-frequency component based on the enhanced visible light image.
With reference to the first aspect, an embodiment of the present application provides a tenth possible implementation of the first aspect, in which obtaining the enhanced chrominance component and low-frequency component based on the enhanced visible light image includes: performing color gamut transformation on the enhanced visible light image and separating the enhanced chrominance component and low-frequency component.
In a second aspect, an embodiment of the present application further provides an image fusion system, including a camera device, an image processor, and a data communication device, where the camera device is connected to the image processor through the data communication device; the camera device is configured to transmit a visible light image and an infrared image to the image processor through the data communication device; and the image processor is configured to: extract the chrominance component, low-frequency component, and high-frequency component of the visible light image, and extract the low-frequency component and high-frequency component of the infrared image, where the visible light image and the infrared image correspond to the same real scene; enhance the chrominance component of the visible light image and the low-frequency component of the visible light image through the low-frequency component of the infrared image to obtain the enhanced chrominance component and low-frequency component of the visible light image; fuse the high-frequency component of the infrared image with the high-frequency component of the visible light image to obtain a target high-frequency component; and merge the enhanced chrominance component and low-frequency component of the visible light image with the target high-frequency component to obtain a fused image.
With reference to the second aspect, an embodiment of the present application provides a first possible implementation of the second aspect, in which the data communication device includes a wired communication device and/or a wireless communication device.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, an image processor, and a computer program stored on the memory and executable on the image processor, where the image processor implements the method of any one of the above first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer storage medium on which a computer program is stored, where the steps of the method of any one of the above first aspect are executed when the computer runs the computer program.
In this embodiment, the image processor first extracts the chrominance component, low-frequency component, and high-frequency component of the visible light image, as well as the low-frequency component and high-frequency component of the infrared image; then the chrominance component of the visible light image and the low-frequency component of the visible light image are enhanced through the low-frequency component of the infrared image to obtain the enhanced chrominance component and low-frequency component of the visible light image; the high-frequency component of the infrared image and the high-frequency component of the visible light image are then fused to obtain a target high-frequency component; finally, the enhanced chrominance component and low-frequency component of the visible light image are merged with the target high-frequency component to obtain a fused image. As described above, in the image fusion method of the present application, using the low-frequency component of the infrared image to enhance the chrominance component and low-frequency component of the visible light image can effectively improve the visual effect of the fused image; in addition, fusing the high-frequency component of the infrared image with the high-frequency component of the visible light image can enhance the details of the visible light image without introducing redundant edge detail information, so that the final fused image is of good quality, alleviating the technical problems of poor quality and poor visual effect of fused images obtained by existing image fusion methods.
Other features and advantages of the present application will be set forth in the following description, and will in part become apparent from the description or be understood by implementing the present application. The objectives and other advantages of the present application are realized and obtained by the structures particularly pointed out in the description, the claims, and the drawings.
To make the above objectives, features, and advantages of the present application more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
To explain the specific embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the specific embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an image fusion method according to an embodiment of the present application;
FIG. 2 is a flowchart of a method for extracting the chrominance component, low-frequency component, and high-frequency component of a visible light image according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for enhancing the chrominance component of a visible light image and the low-frequency component of the visible light image through the low-frequency component of an infrared image according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for fusing the high-frequency component of an infrared image with the high-frequency component of a visible light image according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an image fusion apparatus according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an image fusion system according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the present application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
Embodiment 1:
According to an embodiment of the present application, an embodiment of an image fusion method is provided. It should be noted that the steps shown in the flowcharts of the drawings can be executed in a computer system such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from that herein.
FIG. 1 is a flowchart of an image fusion method according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
Step S102: extracting, by the image processor, the chrominance component, low-frequency component, and high-frequency component of a visible light image, and extracting the low-frequency component and high-frequency component of an infrared image, where the visible light image and the infrared image correspond to the same real scene;
In this embodiment of the present application, the visible light image may be acquired by a visible light camera and the infrared image by an infrared camera. The visible light camera and the infrared camera may be provided in the same camera device or in different camera devices. When the visible light camera and the infrared camera are provided in different camera devices, the camera device containing the visible light camera and the camera device containing the infrared camera are installed at the same position.
In this embodiment of the present application, the image fusion method is applied to an image processor. Image fusion specifically refers to processing image data of the same target collected from multiple source channels with image processing and computer technology, extracting the favorable information of each channel to the greatest extent, and finally synthesizing a high-quality image, so as to improve the utilization of image information, improve the accuracy and reliability of computer interpretation, and increase the spatial and spectral resolution of the original images, which is beneficial for monitoring.
In this embodiment of the present application, the favorable information of the visible light image and the infrared image is extracted separately, and a high-quality image is finally synthesized. Specifically, the favorable information includes the chrominance component, low-frequency component, and high-frequency component of the visible light image, as well as the low-frequency component and high-frequency component of the infrared image. The extraction process is described in detail below.
Step S104: enhancing, by the image processor, the chrominance component of the visible light image and the low-frequency component of the visible light image through the low-frequency component of the infrared image to obtain the enhanced chrominance component and low-frequency component of the visible light image;
After obtaining the chrominance component, low-frequency component, and high-frequency component of the visible light image as well as the low-frequency component and high-frequency component of the infrared image, the chrominance component and the low-frequency component of the visible light image are further enhanced through the low-frequency component of the infrared image to obtain the enhanced chrominance component and low-frequency component of the visible light image. This enhancement process can effectively improve the visual effect of the fused image.
Step S106: fusing, by the image processor, the high-frequency component of the infrared image with the high-frequency component of the visible light image to obtain a target high-frequency component;
After obtaining the above components, the high-frequency component of the infrared image and the high-frequency component of the visible light image are further fused to obtain the target high-frequency component. This process can improve the signal-to-noise ratio and edge detail sharpness of the fused image in regions of high structural similarity, while not introducing redundant false edges in regions of low structural similarity such as shadows and highlights.
Step S108: merging, by the image processor, the enhanced chrominance component and low-frequency component of the visible light image with the target high-frequency component to obtain a fused image.
After obtaining the enhanced chrominance component of the visible light image, the enhanced low-frequency component of the visible light image, and the target high-frequency component, they are merged to obtain the fused image. That is, image reconstruction is performed according to the enhanced chrominance component and low-frequency component of the visible light image and the target high-frequency component to obtain the fused image.
In this embodiment, the image processor first extracts the chrominance component, low-frequency component, and high-frequency component of the visible light image, as well as the low-frequency component and high-frequency component of the infrared image; then the chrominance component of the visible light image and the low-frequency component of the visible light image are enhanced through the low-frequency component of the infrared image to obtain the enhanced chrominance component and low-frequency component of the visible light image; the high-frequency component of the infrared image and the high-frequency component of the visible light image are then fused to obtain the target high-frequency component; finally, the enhanced chrominance component and low-frequency component of the visible light image are merged with the target high-frequency component to obtain the fused image. From the above description, in the image fusion method of the present application, using the low-frequency component of the infrared image to enhance the chrominance component and low-frequency component of the visible light image can effectively improve the visual effect of the fused image; in addition, fusing the high-frequency component of the infrared image with the high-frequency component of the visible light image can enhance the details of the visible light image without introducing redundant edge detail information, so that the final fused image is of good quality, alleviating the technical problems of poor quality and poor visual effect of fused images obtained by existing image fusion methods.
The above briefly introduces the image fusion method of the present application; the specific content involved is described in detail below.
The following describes the specific process of extracting the chrominance component, low-frequency component, and high-frequency component of the visible light image.
In an optional implementation of this embodiment, referring to FIG. 2, in step S102 the extraction, by the image processor, of the chrominance component, low-frequency component, and high-frequency component of the visible light image includes the following steps:
Step S1021: performing color gamut conversion on the visible light image to separate the initial chrominance component and the initial brightness component of the visible light image;
Specifically, color gamut conversion is performed on the visible light image to obtain a color space in which brightness and chrominance are separated, for example the standard HSV space, YIQ space, YUV space, or Lab space; a custom color space may of course also be defined as needed, which is not specifically limited in this embodiment of the present application. The initial chrominance component and initial brightness component of the visible light image are thereby separated.
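As a concrete instance of the luminance/chrominance separation above, a minimal RGB→YUV conversion is sketched below. The BT.601 coefficients are an illustrative choice; the method does not mandate a specific color space.

```python
import numpy as np

# BT.601 RGB -> YUV matrix: Y carries luminance, U and V carry chrominance
_RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                     [-0.14713, -0.28886,  0.436  ],
                     [ 0.615,   -0.51499, -0.10001]])

def rgb_to_yuv(rgb):
    """rgb: (..., 3) array with values in [0, 1]; returns (..., 3) YUV."""
    return rgb @ _RGB2YUV.T
```

After this conversion, noise reduction and layering operate on the Y channel, while U and V form the initial chrominance component.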
Step S1022: performing noise reduction on the initial chrominance component to obtain the chrominance component of the visible light image;
After obtaining the initial chrominance component, noise reduction is performed on it using a linear filter (such as a mean filter or Gaussian filter) or a non-linear edge-preserving filter (such as a bilateral filter or non-local means filter); the image after noise reduction has a higher signal-to-noise ratio.
Step S1023: performing layered noise reduction on the initial brightness component of the visible light image to obtain the low-frequency component and high-frequency component of the visible light image.
A filtering algorithm is used to perform layered noise reduction on the initial brightness component of the visible light image; the processed brightness components (i.e., the low-frequency and high-frequency components of the visible light image) not only have a higher signal-to-noise ratio but also retain more edge detail information.
Specifically, the process by which the image processor performs layered noise reduction on the initial brightness component of the visible light image includes the following step:
Step S10231: performing layered noise reduction on the initial brightness component of the visible light image through a target filtering algorithm to obtain the low-frequency component and high-frequency component of the visible light image, where the target filtering algorithm includes a linear filtering algorithm or a non-linear filtering algorithm.
When performing layered noise reduction on the visible light image, either a multi-scale decomposition method such as wavelet transform or Gaussian pyramid, or a filtering algorithm, can be used.
As for the filtering algorithm, a linear filtering algorithm (such as mean filtering or Gaussian filtering) can be used; linear filtering is simple in principle, low in computational complexity, and high in performance, and can quickly smooth a noisy image, but the edge details of the filtering result are blurred. A non-linear filter (edge-preserving filtering algorithms such as median filtering, non-local means filtering, or bilateral filtering) can also be used; non-linear filtering can protect the edge detail information of the image while removing noise, and the layered images obtained by such filters have both a higher signal-to-noise ratio and higher sharpness.
In step S10231, if the target filtering algorithm is a non-linear filtering algorithm, performing layered noise reduction on the initial brightness component of the visible light image through the target filtering algorithm includes:
performing layered noise reduction on the initial brightness component of the visible light image through a first formula and a second formula to obtain the low-frequency component and high-frequency component of the visible light image;
where the first formula is
visY_low(u) = Σ v w(u, v)·visY(v), visY_high(u) = visY(u) − visY_low(u)
and the second formula is
w(u, v) = (1/Z(u))·exp(−‖visY(N(u)) − visY(N(v))‖ 2 / h 2 )
where w(u, v) is the filtering weight of the non-linear filtering algorithm, Z(u) is a normalizing constant, N(u) is the current data block centered on target point u, N(v) is the reference data block centered on target point v, the target points u and v are any pixels in the brightness component, and h is a preset filter strength control parameter.
In this embodiment of the present application, non-local means filtering is used. It exploits the rich repetitive redundant information in an image, realizing the filtering operation mainly by searching the image for blocks whose structural features are similar to those of the current block. Assuming the brightness component of the visible light image (the same applies to the infrared image) is visY, and u is any point in visY, the specific implementation steps of layered noise reduction are as follows:
(1) Set a neighborhood of radius f around u and record it as the current block, then set a reference block of the same size as the current block, traverse the entire image pixel by pixel, and calculate the Euclidean distance between the reference block and the current block (the squares of the differences between pixels at corresponding positions in the two blocks) as the similarity measure of the two blocks.
(2) The low-frequency and high-frequency components of the visible light image can then be obtained through the following formulas:
visY_low(u) = Σ v w(u, v)·visY(v)
visY_high(u) = visY(u) − visY_low(u)
where the filtering weight w(u, v) = (1/Z(u))·exp(−‖visY(N(u)) − visY(N(v))‖ 2 / h 2 ), and N(u), N(v) are the blocks centered on points u and v, respectively. When the similarity of the two blocks is high, the filtering weight is large; conversely, when the similarity is low, the filtering weight is small.
Compared with a linear filter, this method has relatively high computational complexity, but the low-frequency component of the visible light obtained in this way retains more edge details while removing noise compared with other existing layering techniques, and has a higher signal-to-noise ratio, making the final fused image look clearer and more accurate.
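The non-local means layering in steps (1)–(2) above can be sketched with a brute-force implementation. The patch radius `f`, search-window radius `t`, and the normalization `Z(u)` are standard-algorithm assumptions filled in for illustration; a practical version would vectorize the loops.

```python
import numpy as np

def nlm_low(luma, f=1, t=3, h=10.0):
    """Brute-force non-local means: low-frequency layer of a luminance image.
    f: patch radius, t: search-window radius, h: filter strength."""
    H, W = luma.shape
    pad = np.pad(luma.astype(float), f + t, mode="reflect")
    low = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y + f + t, x + f + t
            ref = pad[cy - f:cy + f + 1, cx - f:cx + f + 1]  # current block N(u)
            wsum = acc = 0.0
            for vy in range(cy - t, cy + t + 1):
                for vx in range(cx - t, cx + t + 1):
                    blk = pad[vy - f:vy + f + 1, vx - f:vx + f + 1]  # block N(v)
                    w = np.exp(-np.mean((ref - blk) ** 2) / (h * h))
                    wsum += w
                    acc += w * pad[vy, vx]
            low[y, x] = acc / wsum                            # normalized average
    return low

def nlm_layers(luma, **kw):
    """Low-frequency layer plus the high-frequency residual."""
    low = nlm_low(luma, **kw)
    return low, luma - low
```

Blocks with similar structure receive large weights and dominate the average, which is exactly the weighting behavior described in the text.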
In this embodiment, extracting the low-frequency component and high-frequency component of the infrared image includes:
performing layered processing on the infrared image through a target filtering algorithm to extract the low-frequency component and high-frequency component of the infrared image from the infrared image.
Optionally, the target filtering algorithm may be a linear filtering algorithm (such as mean filtering or Gaussian filtering), which is simple in principle, low in computational complexity, and high in performance and can quickly smooth a noisy image, although the edge details of the filtering result are blurred; it may also be a non-linear filter (edge-preserving algorithms such as median filtering, non-local means filtering, or bilateral filtering), which can protect the edge detail information of the image while removing noise, and the layered images obtained by such filters have both a higher signal-to-noise ratio and higher sharpness.
上述内容对提取可见光图像的色度分量、低频分量和高频分量的具体过程进行了详细介绍,下面对通过红外图像的低频分量对可见光图像的色度分量和可见光图像的低频分量进行增强处理的过程进行详细介绍。
在本实施例的一个可选的实施方式中,参考图3,步骤S104,所述图像处理器通过红外图像的低频分量对可见光图像的色度分量和可见光图像的低频分量进行增强处理包括如下步骤:
步骤S1041,将可见光图像进行色域转换,以将可见光图像转换到RGB色域空间;
通常在低照、雾霾天等恶劣环境下采集到的可见光图像不仅噪声明显,而且色彩饱和度以及对比度都比较低。现有的技术方案主要利用红外图像对可见光图像进行信噪比的提升,而没有或很少考虑对图像对比度以及色彩表现的增强,这就导致融合图像对比度较低,不利于观测。本申请能实现在可见光图像的基础上加入红外图像的灰度、对比度信息,提升彩色图像的对比度与色彩表现。
本申请提供的增强算法是基于亮度和色度的增强方法。这种方法主要利用红外图像的灰度信息对可见光图像的亮度分量和色度分量进行调整,在提升可见光图像对比度的同时保证了不会出现色彩失真的问题。
在一个具体的实施例中,采用基于梯度域的增强方法,即将图像转化到梯度域并利用红外图像的梯度信息对可见光图像进行增强处理。具体实现时,先对可见光图像进行色域转换,将其转换到RGB色域空间。
步骤S1042,结合转到RGB色域空间的可见光图像和红外图像构建目标图像矩阵,并计算目标图像矩阵中各元素的雅克比矩阵;
在可见光图像转换到RGB色域空间后,基于可见光图像I_vis和红外图像I_ir构造一个高维图像矩阵H(即目标图像矩阵),然后计算目标图像矩阵H中每个位置(i,j)处的雅克比矩阵J_{i,j},如下所示:
J_{i,j}=[∂H_1/∂x, ∂H_1/∂y; ∂H_2/∂x, ∂H_2/∂y; …; ∂H_N/∂x, ∂H_N/∂y](共N行2列)
其中,N≥4为高维矩阵H的维数,上述矩阵的每一行表示每个维度下的x和y方向的梯度。
在本实施例中,高维矩阵构造方式不唯一,这里给出了一种简单的构造方式,如果要构建的高维图像矩阵H为4维矩阵,那么4维矩阵的前三维分别是可见光图像的R、G、B分量,第四维是红外图像的亮度分量;如果要构建的高维图像矩阵H为6维图像,那么矩阵的前三维分别是可见光图像的R、G、B分量,后三维分别为红外图像的R、G、B分量。
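上述高维图像矩阵H及各位置雅克比矩阵的构造可以草绘如下(以4维矩阵为例;图像尺寸为示例假设,偏导数用numpy的数值梯度近似):

```python
import numpy as np

vis = np.random.rand(8, 8, 3)       # 可见光图像的 R、G、B 三个分量(示例数据)
ir = np.random.rand(8, 8)           # 红外图像的亮度分量(示例数据)
H = np.dstack([vis, ir])            # 4 维高维图像矩阵:前三维为 R/G/B,第四维为红外亮度

# 各位置的雅克比矩阵:N 行 2 列,每行为该维度在 x、y 方向的梯度(数值近似)
gy, gx = np.gradient(H, axis=(0, 1))
J = np.stack([gx, gy], axis=-1)     # 形状为 (行, 列, N, 2)
```

若按文中6维的构造方式,只需把红外图像的R、G、B分量一并堆叠进H即可。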
步骤S1043,对各元素的雅克比矩阵进行数据降维处理,得到转到RGB色域空间的可见光图像在各元素处的梯度向量,从而得到梯度矩阵;
在得到目标图像矩阵中各元素的雅克比矩阵后,通过奇异值分解或者主成分分析等方法对雅克比矩阵进行数据降维。
以奇异值分解为例,首先对雅克比矩阵进行奇异值分解
J_{i,j}=U·S·V^T=∑_i s_ii·u_i·v_i^T
其中,U、V分别为由左、右奇异向量u_i、v_i组成的正交矩阵,S为对角矩阵,s_ii为对角矩阵S对角线上的元素。假设s_11为最大的奇异值,v_1为其对应的右奇异向量,则增强后的可见光图像在此位置上的梯度向量可以表示为:
G(i,j)=s_11·v_1
从而得到梯度矩阵。
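对单个位置的雅克比矩阵做奇异值分解并取增强梯度向量的过程,可以用如下草图示意(矩阵中的数值为示例假设):

```python
import numpy as np

def enhanced_gradient(J_ij):
    # 对单个位置的雅克比矩阵做奇异值分解 J = U S V^T,
    # 取最大奇异值 s11 与其右奇异向量 v1 的乘积作为增强后的梯度向量
    U, s, Vt = np.linalg.svd(J_ij, full_matrices=False)
    return s[0] * Vt[0]            # G(i,j) = s11 * v1

J_ij = np.array([[1.0, 0.0],
                 [0.5, 0.5],
                 [0.0, 1.0],
                 [0.2, 0.8]])      # 假设某位置的 4x2 雅克比矩阵
g = enhanced_gradient(J_ij)        # 该位置增强后的二维梯度向量
```

由于v_1为单位向量,增强梯度向量的模长恰为最大奇异值,即保留了该位置所有通道中最强的方向性对比度信息。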
步骤S1044,对梯度矩阵进行梯度域图像重建,得到增强后的可见光图像;
对增强后的梯度矩阵G可以通过求解二维离散泊松方程或者通过构造与原高维输入之间的映射等方式对梯度矩阵进行梯度域图像重建,从而得到增强后的可见光图像。
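梯度域图像重建(求解二维离散泊松方程)的一个最小示意如下。这里假设图像边界像素已知(Dirichlet边界条件)并采用雅可比迭代求解,仅为说明思路,并非本申请限定的实现方式:

```python
import numpy as np

def poisson_reconstruct(gx, gy, boundary, iters=1000):
    # 求解二维离散泊松方程 laplacian(I) = div(G) 的雅可比迭代草图
    div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)   # 梯度场的散度
    I = boundary.astype(float).copy()                          # 边界像素保持不变
    for _ in range(iters):
        I[1:-1, 1:-1] = (I[:-2, 1:-1] + I[2:, 1:-1] +
                         I[1:-1, :-2] + I[1:-1, 2:] - div[1:-1, 1:-1]) / 4.0
    return I

# 自检:对一幅已知图像先取梯度,再由梯度场重建,应能还原原图
ref = np.outer(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
gy, gx = np.gradient(ref)
rec = poisson_reconstruct(gx, gy, boundary=ref)
```

实际应用中也可如文中所述,改用构造与原高维输入之间的映射等方式完成重建。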
步骤S1045,对增强后的可见光图像进行色域变换,分离得到增强之后的色度分量和低频分量。
上述增强处理的过程计算复杂度相对较高,但这种增强方法能得到最清晰准确的图像效果。增强处理后的图像与原图像相比,在对比度以及色彩上有明显提升,而且可以有效地避免传统增强方法导致的图像色彩失真以及不连续的问题。
本申请实施例还提供了另外一种增强处理的方法,即仅基于亮度分量的增强方法,例如亮度分量加权平均方法、亮度分量直方图匹配方法等。加权平均方法的权重可以是全局的,也可以是基于局部像素点的,局部权重可以根据图像局部的亮度、饱和度、对比度等信息设置不同的值。基于直方图匹配的方法主要是以红外图像的信息为参考,对可见光图像的信息进行全局映射:可以直接对亮度分量进行操作,即根据红外图像的亮度分布信息对可见光图像的亮度进行调整;也可以在梯度域进行操作,即将亮度分量转化到梯度域,然后进行梯度直方图匹配。直方图匹配又称为直方图规定化,是指将一幅图像的直方图变成规定形状的直方图而进行的图像增强方法。
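基于直方图匹配(直方图规定化)的增强思路可以草绘如下,即利用累积分布函数把可见光亮度分布映射到红外亮度分布上(图像数据与取值范围均为示例假设):

```python
import numpy as np

def hist_match(src, ref):
    # 直方图匹配(直方图规定化):把 src 的灰度分布映射到 ref 的分布
    s_vals, s_idx, s_cnt = np.unique(src.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size          # 源图像累积分布
    r_cdf = np.cumsum(r_cnt) / ref.size          # 参考(红外)图像累积分布
    mapped = np.interp(s_cdf, r_cdf, r_vals)     # 按累积分布对齐做全局映射
    return mapped[s_idx].reshape(src.shape)

visY = np.random.rand(16, 16)                # 可见光亮度分量(示例)
irY = np.random.rand(16, 16) * 0.5 + 0.5     # 红外亮度分量(假设整体更亮)
visY_enh = hist_match(visY, irY)             # 匹配后的可见光亮度分量
```

匹配后的亮度取值落在红外亮度的取值范围内,即把红外图像的灰度分布"规定"到了可见光亮度上。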
这种增强处理方法原理简单,计算复杂度也较低,但可能导致增强后的图像出现色彩失真的问题。
上述内容对通过红外图像的低频分量对可见光图像的色度分量和可见光图像的低频分量进行增强处理的具体过程进行了详细介绍,下面对将红外图像的高频分量和可见光图像的高频分量进行融合的过程进行详细介绍。
在本实施例的一个可选的实施方式中,步骤S106,所述图像处理器将红外图像的高频分量和可见光图像的高频分量进行融合,得到目标高频分量包括:
采用基于可见光图像和红外图像之间结构相似度的融合算法将红外图像的高频分量和可见光图像的高频分量进行融合,得到目标高频分量。
具体融合过程包括如下步骤:参考图4,步骤S1061,计算可见光图像与红外图像中相对应像素点之间的结构相似性度量值;
具体的计算过程如下:
第一步:确定目标滑动窗口;
第二步:通过目标滑动窗口对可见光图像和所述红外图像的各个像素点进行遍历,并获取目标滑动窗口遍历到任一位置时的窗口中心位置;
第三步:将可见光图像和红外图像中窗口中心位置处的像素点作为相对应像素点;
第四步:利用公式
S(m,n)=((2μ_vis·μ_ir+C_1)·(2σ_visir+C_2))/((μ_vis²+μ_ir²+C_1)·(σ_vis²+σ_ir²+C_2))
计算可见光图像与红外图像中相对应像素点之间的结构相似性度量值;其中,μ_vis、μ_ir与σ_vis、σ_ir分别为可见光图像与红外图像在目标滑动窗口内像素的均值和标准差,σ_visir为两者在该窗口内的协方差,C_1、C_2为预设常数。
步骤S1062,利用结构相似性度量值将红外图像的高频分量和可见光图像的高频分量进行融合。
具体的,通过公式
H_comb(m,n)=H_vis(m,n),当S(m,n)<th1;
H_comb(m,n)=ω·k(m,n)·H_ir(m,n)+(1-ω·k(m,n))·H_vis(m,n),其中k(m,n)=(S(m,n)-th1)/(th2-th1),当th1≤S(m,n)<th2;
H_comb(m,n)=ω·H_ir(m,n)+(1-ω)·H_vis(m,n),当S(m,n)≥th2
将红外图像的高频分量和可见光图像的高频分量进行融合,其中,H_comb、H_vis、H_ir分别为融合图像、可见光图像以及红外图像的高频分量,th1、th2为预设的结构相似性阈值且th1<th2,ω为红外图像高频分量的融合权重。在本实施例中,融合权重ω可以是预设的全局权重,也可以是根据局部像素点的相似性大小得到的自适应局部权重。
下面对该过程再进行详细介绍:
在进行可见光图像与红外图像的高频分量融合时,通常可以采用全局加权平均法或者基于区域显著性的加权方法,但这类方法不能区分可见光图像与红外图像中不同的结构信息,会导致融合图像伪边缘明显,视觉效果较差;还可以采用基于可见光图像与红外图像结构相似度的融合方法。以基于区域结构相似性的加权融合策略为例,融合后的图像不仅能在不引入伪边缘的情况下保有更多的边缘信息,还能进一步消除噪声,具体实现步骤如下:
对可见光图像和红外图像的每个像素,按照以下方式计算其结构相似性:
设置一个滑动窗口,并且逐像素遍历整幅图像,假设滑动窗口遍历到任一位置时的窗口中心为(m,n),则在点(m,n)处可见光图像与红外图像的相似性定义如下:
S(m,n)=((2μ_vis·μ_ir+C_1)·(2σ_visir+C_2))/((μ_vis²+μ_ir²+C_1)·(σ_vis²+σ_ir²+C_2))
其中,μ_vis、μ_ir与σ_vis、σ_ir分别为可见光图像与红外图像在此窗口内像素的均值和标准差,σ_visir为两者在此窗口内的协方差,C_1、C_2为预设的较小的常数,用以保证上式的数值稳定性。上述式子的取值范围为[-1,1],可以用来度量可见光图像与红外图像结构上的相似性:取值越接近于1,结构相似度越高;相反地,越接近于-1,结构相似度越低。此时,利用结构相似性度量值将红外图像的高频分量和可见光图像的高频分量进行融合,高频分量的融合方法定义如下:
通过公式
H_comb(m,n)=H_vis(m,n),当S(m,n)<th1;
H_comb(m,n)=ω·k(m,n)·H_ir(m,n)+(1-ω·k(m,n))·H_vis(m,n),其中k(m,n)=(S(m,n)-th1)/(th2-th1),当th1≤S(m,n)<th2;
H_comb(m,n)=ω·H_ir(m,n)+(1-ω)·H_vis(m,n),当S(m,n)≥th2
将红外图像的高频分量和可见光图像的高频分量进行融合,其中,H_comb、H_vis、H_ir分别为融合图像、可见光图像以及红外图像的高频分量,th1、th2为预设的结构相似性阈值且th1<th2,ω为红外图像高频分量的融合权重。
此融合方法能提升融合图像在结构相似度较高的区域的信噪比以及边缘细节清晰度,同时不会在相似度较低的如阴影、高亮等区域引入多余的伪边缘。
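上述基于结构相似性的高频融合流程可以用如下Python草图示意。其中窗口半径、阈值th1、th2、权重ω的具体取值以及分段融合规则均为示例假设,仅用于说明思路:

```python
import numpy as np

def window_stats(img, r=2):
    # 以 (2r+1)x(2r+1) 滑动窗口计算逐像素均值(边缘复制填充)
    pad = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    acc = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            acc += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / (k * k)

def ssim_map(vis, ir, C1=1e-4, C2=9e-4, r=2):
    # 逐像素结构相似性:均值、方差、协方差均在滑动窗口内统计,取值落在 [-1, 1]
    mu_v, mu_i = window_stats(vis, r), window_stats(ir, r)
    var_v = window_stats(vis * vis, r) - mu_v ** 2
    var_i = window_stats(ir * ir, r) - mu_i ** 2
    cov = window_stats(vis * ir, r) - mu_v * mu_i
    return ((2 * mu_v * mu_i + C1) * (2 * cov + C2)) / \
           ((mu_v ** 2 + mu_i ** 2 + C1) * (var_v + var_i + C2))

def fuse_high(H_vis, H_ir, S, th1=0.4, th2=0.8, w=0.5):
    # 示例分段规则(示意,非本申请限定):相似度低 -> 仅取可见光高频,避免伪边缘;
    # 中等 -> 红外权重随相似度线性过渡;高 -> 按预设权重 w 加权融合
    out = H_vis.copy()
    mid = (S >= th1) & (S < th2)
    k = (S[mid] - th1) / (th2 - th1)
    out[mid] = (1 - w * k) * H_vis[mid] + w * k * H_ir[mid]
    hi = S >= th2
    out[hi] = (1 - w) * H_vis[hi] + w * H_ir[hi]
    return out

vis = np.random.rand(16, 16)
ir = np.random.rand(16, 16)
S = ssim_map(vis, ir)
H_comb = fuse_high(vis - window_stats(vis), ir - window_stats(ir), S)
```

当两幅图像完全一致时相似性图处处为1,红外高频按权重充分融入;在阴影、高亮等相似度低的区域则只保留可见光高频,不引入伪边缘。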
本申请的图像融合方法适用于低照、雾霾天等各种场景。该融合方法不仅能够有效地提升融合图像的信噪比、清晰度以及色彩表现,还能避免由于源图像结构不一致带来的伪边缘问题;
可以解决现有的图像融合技术由于红外图像与可见光图像灰度特征与结构特征不一致而导致色彩失真、伪边缘等问题,还可以进一步提升融合图像的信噪比、清晰度、对比度以及色彩表现;
综上,本申请提供的图像融合方法,通过应用不同的滤波器对图像进行分层融合,使融合后的图像具有较高的信噪比和清晰度,并且在高频分量融合时通过引入相似性约束,避免了现有融合技术中可能出现的融合图像边缘细节丢失以及引入多余的伪边缘的问题,其次,通过对可见光图像的增强处理,也有效地提升了融合图像的对比度以及色彩表现,使融合图像的视觉效果更加丰富自然。
实施例2:
本申请实施例还提供了一种图像融合装置,该图像融合装置主要配置成执行本申请实施例上述内容所提供的图像融合方法,以下对本申请实施例提供的图像融合装置做具体介绍。
图5是根据本申请实施例的一种图像融合装置的示意图,如图5所示,该图像处理装置主要包括提取模块20,增强处理模块21,融合模块22和合并模块23,其中:
提取模块,配置成提取可见光图像的色度分量、低频分量和高频分量,以及提取红外图像的低频分量和高频分量,其中,可见光图像和红外图像对应相同的真实场景;
增强处理模块,配置成通过红外图像的低频分量对可见光图像的色度分量和可见光图像的低频分量进行增强处理,得到增强之后的可见光图像的色度分量和低频分量;
融合模块,配置成将红外图像的高频分量和可见光图像的高频分量进行融合,得到目标高频分量;
合并模块,配置成将增强之后的可见光图像的色度分量和低频分量,以及目标高频分量进行合并,得到融合图像。
在本实施例中,首先通过图像处理器提取可见光图像的色度分量、低频分量和高频分量,以及提取红外图像的低频分量和高频分量;然后,通过红外图像的低频分量对可见光图像的色度分量和可见光图像的低频分量进行增强处理,得到增强之后的可见光图像的色度分量和低频分量;进而将红外图像的高频分量和可见光图像的高频分量进行融合,得到目标高频分量;最终将增强之后的可见光图像的色度分量和低频分量,以及目标高频分量进行合并,得到融合图像。通过上述描述可知,在本申请的图像融合装置中,利用红外图像的低频分量对可见光图像的色度分量和低频分量进行增强处理,能够有效提升融合图像的视觉效果;另外,将红外图像的高频分量和可见光图像的高频分量进行融合能够对可见光图像的细节进行增强,同时不会引入多余的边缘细节信息,使得最终得到的融合图像质量好,缓解了现有的图像融合方法得到的融合图像质量差、视觉效果不好的技术问题。
可选地,提取模块包括:
第一色域转换单元,配置成对可见光图像进行色域转换,以分离得到可见光图像的初始色度分量和初始亮度分量;
降噪处理单元,配置成对初始色度分量进行降噪处理,得到可见光图像的色度分量;
分层降噪处理单元,配置成对可见光图像的初始亮度分量进行分层降噪处理,得到可见光图像的低频分量和高频分量。
可选地,分层降噪处理单元还配置成:
通过目标滤波算法对可见光图像的初始亮度分量进行分层降噪处理,得到可见光图像的低频分量和高频分量,其中,目标滤波算法包括:线性滤波算法或者非线性滤波算法。
可选地,若目标滤波算法为非线性滤波算法,分层降噪处理单元还配置成:
通过第一公式和第二公式对可见光图像的初始亮度分量进行分层降噪处理,得到可见光图像的低频分量和高频分量;
其中,第一公式为:
visY_low(u)=∑_v w(u,v)·visY(v)
第二公式为:
visY_high(u)=visY(u)-visY_low(u)
其中,w(u,v)=exp(-‖N(u)-N(v)‖²/h²)/Z(u)为非线性滤波算法的滤波权重,Z(u)为归一化系数,N(u)为以目标点u为中心的当前数据块,N(v)为以目标点v为中心的参考数据块,目标点u和目标点v为亮度分量中的任意一个像素点,h为预设的滤波强度控制参数。
可选地,提取模块还包括:
分层提取单元,用于通过目标滤波算法对红外图像进行分层处理,以从红外图像中提取得到红外图像的低频分量和高频分量。
可选地,融合模块包括:
融合单元,配置成采用基于可见光图像和红外图像之间结构相似度的融合算法将红外图像的高频分量和可见光图像的高频分量进行融合,得到目标高频分量。
可选地,融合单元还配置成:
计算可见光图像与红外图像中相对应像素点之间的结构相似性度量值;
利用结构相似性度量值将红外图像的高频分量和可见光图像的高频分量进行融合。
可选地,融合单元还配置成:
确定目标滑动窗口;
通过目标滑动窗口对可见光图像和所述红外图像的各个像素点进行遍历,并获取目标滑动窗口遍历到任一位置时的窗口中心位置;
将可见光图像和红外图像中窗口中心位置处的像素点作为相对应像素点;
利用公式
S(m,n)=((2μ_vis·μ_ir+C_1)·(2σ_visir+C_2))/((μ_vis²+μ_ir²+C_1)·(σ_vis²+σ_ir²+C_2))
计算可见光图像与红外图像中相对应像素点之间的结构相似性度量值;其中,μ_vis、μ_ir与σ_vis、σ_ir分别为可见光图像与红外图像在目标滑动窗口内像素的均值和标准差,σ_visir为两者在该窗口内的协方差,C_1、C_2为预设常数。
可选地,融合单元还配置成:
通过公式
H_comb(m,n)=H_vis(m,n),当S(m,n)<th1;H_comb(m,n)=ω·k(m,n)·H_ir(m,n)+(1-ω·k(m,n))·H_vis(m,n),其中k(m,n)=(S(m,n)-th1)/(th2-th1),当th1≤S(m,n)<th2;H_comb(m,n)=ω·H_ir(m,n)+(1-ω)·H_vis(m,n),当S(m,n)≥th2
将红外图像的高频分量和可见光图像的高频分量进行融合,其中,H_comb、H_vis、H_ir分别为融合图像、可见光图像以及红外图像的高频分量,th1、th2为预设的结构相似性阈值且th1<th2,ω为红外图像高频分量的融合权重。
可选地,增强处理模块包括:
第二色域转换单元,配置成将可见光图像进行色域转换,以将可见光图像转换到RGB色域空间;
计算单元,配置成结合转到RGB色域空间的可见光图像和红外图像构建目标图像矩阵,并计算目标图像矩阵中各元素的雅克比矩阵;
降维处理单元,配置成对各元素的雅克比矩阵进行数据降维处理,得到转到RGB色域空间的可见光图像在各元素处的梯度向量,从而得到梯度矩阵;
梯度域图像重建单元,配置成对梯度矩阵进行梯度域图像重建,得到增强后的可见光图像;
第三色域变换单元,配置成对增强后的可见光图像进行色域变换,分离得到增强之后的色度分量和低频分量。
本申请实施例所提供的装置,其实现原理及产生的技术效果和前述方法实施例相同,为简要描述,装置实施例部分未提及之处,可参考前述方法实施例中相应内容。
实施例3:
本申请实施例提供了一种图像融合***,参考图6,该***包括:摄像装置61,图像处理器62和数据通信设备63,其中,所述摄像装置61通过所述数据通信设备63与所述图像处理器62相连接。
所述摄像装置61配置成:通过所述数据通信设备将所述可见光图像和所述红外图像 传输至所述图像处理器;
所述图像处理器62配置成:提取所述可见光图像的色度分量、低频分量和高频分量,以及提取所述红外图像的低频分量和高频分量,其中,所述可见光图像和所述红外图像对应相同的真实场景;通过所述红外图像的低频分量对所述可见光图像的色度分量和所述可见光图像的低频分量进行增强处理,得到增强之后的可见光图像的色度分量和低频分量;将所述红外图像的高频分量和所述可见光图像的高频分量进行融合,得到目标高频分量;将所述增强之后的可见光图像的色度分量和低频分量,以及所述目标高频分量进行合并,得到融合图像。
可选地,所述摄像装置包括:可见光摄像头和红外摄像头;所述可见光摄像头配置成获取所述可见光图像;所述红外摄像头配置成获取所述红外图像。
可选地,所述数据通信设备包括:有线通信设备和/或无线通信设备。
实施例4:
本申请实施例提供了一种电子设备,参考图7,该电子设备包括:处理器70,存储器71,总线72和通信接口73,处理器70、通信接口73和存储器71通过总线72连接;处理器70配置成执行存储器71中存储的可执行模块,例如计算机程序。处理器执行计算机程序时实现如方法实施例中描述的方法的步骤。
其中,存储器71可能包含高速随机存取存储器(RAM,Random Access Memory),也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。通过至少一个通信接口73(可以是有线或者无线)实现该***网元与至少一个其他网元之间的通信连接,可以使用互联网、广域网、本地网、城域网等。
总线72可以是ISA总线、PCI总线或EISA总线等。总线可以分为地址总线、数据总线、控制总线等。为便于表示,图7中仅用一个双向箭头表示,但并不表示仅有一根总线或一种类型的总线。
其中,存储器71配置成存储程序,处理器70在接收到执行指令后,执行程序,前述本申请实施例任一实施例揭示的流程所定义的装置执行的方法可以应用于处理器70中,或者由处理器70实现。
处理器70可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器70中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器70可以是通用处理器,包括中央处理器(Central Processing Unit,简称CPU)、网络处理器(Network Processor,简称NP)等;还可以是数字信号处理器(Digital Signal Processing,简称DSP)、专用集成电路(Application Specific Integrated Circuit,简称ASIC)、现成可编程 门阵列(Field-Programmable Gate Array,简称FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器71,处理器70读取存储器71中的信息,结合其硬件完成上述方法的步骤。
在本申请的另一个实施例中,还提供了一种计算机存储介质,其上存储有计算机程序,所述计算机程序被计算机运行时执行上述方法实施例1中任一项所述的方法的步骤。
另外,在本申请实施例的描述中,除非另有明确的规定和限定,术语“安装”、“相连”、“连接”应做广义理解,例如,可以是固定连接,也可以是可拆卸连接,或一体地连接;可以是机械连接,也可以是电连接;可以是直接相连,也可以通过中间媒介间接相连,可以是两个元件内部的连通。对于本领域的普通技术人员而言,可以具体情况理解上述术语在本申请中的具体含义。
在本申请的描述中,需要说明的是,术语“中心”、“上”、“下”、“左”、“右”、“竖直”、“水平”、“内”、“外”等指示的方位或位置关系为基于附图所示的方位或位置关系,仅是为了便于描述本申请和简化描述,而不是指示或暗示所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本申请的限制。此外,术语“第一”、“第二”、“第三”仅配置成描述目的,而不能理解为指示或暗示相对重要性。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的***、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的***、装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是:以上所述实施例,仅为本申请的具体实施方式,用以说明本申请的技术方案,而非对其限制,本申请的保护范围并不局限于此。尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化,或者对其中部分技术特征进行等同替换;而这些修改、变化或者替换,并不使相应技术方案的本质脱离本申请实施例技术方案的精神和范围,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。
工业实用性
本申请实施例提供的图像融合方法、***、电子设备和计算机可读介质,可以应用在图像处理器中。利用红外图像的低频分量对可见光图像的色度分量和低频分量进行增强处理,能够有效提升融合图像的视觉效果,另外,将红外图像的高频分量和可见光图像的高频分量进行融合能够对可见光图像的细节进行增强,同时不会引入多余的边缘细节信息,使得最终得到的融合图像质量好,缓解了现有的图像融合方法得到的融合图像质量差,视觉效果不好的技术问题。

Claims (15)

  1. 一种图像融合方法,其特征在于,应用于图像处理器,包括:
    通过所述图像处理器提取可见光图像的色度分量、低频分量和高频分量,以及提取红外图像的低频分量和高频分量,其中,所述可见光图像和所述红外图像对应相同的真实场景;
    所述图像处理器通过所述红外图像的低频分量对所述可见光图像的色度分量和所述可见光图像的低频分量进行增强处理,得到增强之后的可见光图像的色度分量和低频分量;
    所述图像处理器将所述红外图像的高频分量和所述可见光图像的高频分量进行融合,得到目标高频分量;
    所述图像处理器将所述增强之后的可见光图像的色度分量和低频分量,以及所述目标高频分量进行合并,得到融合图像。
  2. 根据权利要求1所述的方法,其特征在于,所述图像处理器提取可见光图像的色度分量、低频分量和高频分量包括:
    对所述可见光图像进行色域转换,以分离得到所述可见光图像的初始色度分量和初始亮度分量;
    对所述初始色度分量进行降噪处理,得到所述可见光图像的色度分量;
    对所述可见光图像的初始亮度分量进行分层降噪处理,得到所述可见光图像的低频分量和高频分量。
  3. 根据权利要求2所述的方法,其特征在于,所述图像处理器对所述可见光图像的初始亮度分量进行分层降噪处理,得到所述可见光图像的低频分量和高频分量包括:
    通过目标滤波算法对所述可见光图像的初始亮度分量进行分层降噪处理,得到所述可见光图像的低频分量和高频分量,其中,所述目标滤波算法包括:线性滤波算法或者非线性滤波算法。
  4. 根据权利要求3所述的方法,其特征在于,若所述目标滤波算法为非线性滤波算法;则通过目标滤波算法对所述可见光图像的初始亮度分量进行分层降噪处理包括:
    通过第一公式和第二公式对所述可见光图像的初始亮度分量进行分层降噪处理,得到所述可见光图像的低频分量和高频分量;
    其中,所述第一公式为:
    visY_low(u)=∑_v w(u,v)·visY(v)
    所述第二公式为:
    visY_high(u)=visY(u)-visY_low(u)
    其中,w(u,v)=exp(-‖N(u)-N(v)‖²/h²)/Z(u)为所述非线性滤波算法的滤波权重,Z(u)为归一化系数,N(u)为以目标点u为中心的当前数据块,N(v)为以目标点v为中心的参考数据块,所述目标点u和所述目标点v为所述可见光图像的亮度分量中的任意一个像素点,h为预设的滤波强度控制参数。
  5. 根据权利要求1所述的方法,其特征在于,提取红外图像的低频分量和高频分量包括:
    通过目标滤波算法对所述红外图像进行分层处理,以从所述红外图像中提取得到所述红外图像的低频分量和高频分量。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述图像处理器将所述红外图像的高频分量和所述可见光图像的高频分量进行融合,得到目标高频分量包括:
    采用基于所述可见光图像和所述红外图像之间结构相似度的融合算法将所述红外图像的高频分量和所述可见光图像的高频分量进行融合,得到目标高频分量。
  7. 根据权利要求6所述的方法,其特征在于,采用基于所述可见光图像和所述红外图像之间结构相似度的融合算法将所述红外图像的高频分量和所述可见光图像的高频分量进行融合包括:
    计算所述可见光图像与所述红外图像中相对应像素点之间的结构相似性度量值;
    利用所述结构相似性度量值将所述红外图像的高频分量和所述可见光图像的高频分量进行融合。
  8. 根据权利要求7所述的方法,其特征在于,计算所述可见光图像与所述红外图像中相对应像素点之间的结构相似性度量值包括:
    确定目标滑动窗口;
    通过所述目标滑动窗口对所述可见光图像和所述红外图像的各个像素点进行遍历,并获取所述目标滑动窗口遍历到任一位置时的窗口中心位置;
    将所述可见光图像和所述红外图像中所述窗口中心位置处的像素点作为所述相对应像素点;
    利用公式
    S(m,n)=((2μ_vis·μ_ir+C_1)·(2σ_visir+C_2))/((μ_vis²+μ_ir²+C_1)·(σ_vis²+σ_ir²+C_2))
    计算所述可见光图像与所述红外图像中相对应像素点之间的结构相似性度量值;其中,μ_vis、μ_ir与σ_vis、σ_ir分别为所述可见光图像与所述红外图像在所述目标滑动窗口内像素的均值和标准差,σ_visir为两者在所述目标滑动窗口内的协方差,C_1、C_2为预设常数。
  9. 根据权利要求7所述的方法,其特征在于,利用所述结构相似性度量值将所述红外图像的高频分量和所述可见光图像的高频分量进行融合包括:
    通过公式
    H_comb(m,n)=H_vis(m,n),当S(m,n)<th1;H_comb(m,n)=ω·k(m,n)·H_ir(m,n)+(1-ω·k(m,n))·H_vis(m,n),其中k(m,n)=(S(m,n)-th1)/(th2-th1),当th1≤S(m,n)<th2;H_comb(m,n)=ω·H_ir(m,n)+(1-ω)·H_vis(m,n),当S(m,n)≥th2
    将所述红外图像的高频分量和所述可见光图像的高频分量进行融合,其中,H_comb、H_vis、H_ir分别为融合图像、可见光图像以及红外图像的高频分量,th1、th2为预设的结构相似性阈值且th1<th2,ω为红外图像高频分量的融合权重。
  10. 根据权利要求1至9中任一项所述的方法,其特征在于,所述图像处理器通过所述红外图像的低频分量对所述可见光图像的色度分量和所述可见光图像的低频分量进行增强处理包括:
    将所述可见光图像进行色域转换,以将所述可见光图像转换到RGB色域空间;
    结合转到RGB色域空间的可见光图像和所述红外图像构建目标图像矩阵,并计算所述目标图像矩阵中各元素的雅克比矩阵;
    对所述各元素的雅克比矩阵进行数据降维处理,得到所述转到RGB色域空间的可见光图像在各元素处的梯度向量,从而得到梯度矩阵;
    对所述梯度矩阵进行梯度域图像重建,得到增强后的可见光图像;
    基于增强后的可见光图像得到所述增强之后的色度分量和低频分量。
  11. 根据权利要求10所述的方法,其特征在于,基于增强后的可见光图像得到所述增强之后的色度分量和低频分量包括:
    对所述增强后的可见光图像进行色域变换,分离得到所述增强之后的色度分量和低频分量。
  12. 一种图像融合***,其特征在于,包括:摄像装置,图像处理器和数据通信设备,其中,所述摄像装置通过所述数据通信设备与所述图像处理器相连接;
    所述摄像装置配置成:通过所述数据通信设备将可见光图像和红外图像传输至所述图像处理器;
    所述图像处理器配置成:提取所述可见光图像的色度分量、低频分量和高频分量,以及提取所述红外图像的低频分量和高频分量,其中,所述可见光图像和所述红外图像对应相同的真实场景;通过所述红外图像的低频分量对所述可见光图像的色度分量和所述可见光图像的低频分量进行增强处理,得到增强之后的可见光图像的色度分量和低频分量;将所述红外图像的高频分量和所述可见光图像的高频分量进行融合,得到目标高频分量;将所述增强之后的可见光图像的色度分量和低频分量,以及所述目标高频分量进行合并,得到融合图像。
  13. 根据权利要求12所述的***,其特征在于,所述数据通信设备包括:有线通信设备和/或无线通信设备。
  14. 一种电子设备,包括存储器、图像处理器及存储在所述存储器上并可在所述图像处理器上运行的计算机程序,其特征在于,所述图像处理器执行所述计算机程序时实现上述权利要求1至11中任一项所述的方法。
  15. 一种计算机存储介质,其特征在于,其上存储有计算机程序,所述计算机程序被计算机运行时执行上述权利要求1至11中任一项所述的方法的步骤。
PCT/CN2018/105792 2018-09-14 2018-09-14 图像融合方法、***、电子设备和计算机可读介质 WO2020051897A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/105792 WO2020051897A1 (zh) 2018-09-14 2018-09-14 图像融合方法、***、电子设备和计算机可读介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/105792 WO2020051897A1 (zh) 2018-09-14 2018-09-14 图像融合方法、***、电子设备和计算机可读介质

Publications (1)

Publication Number Publication Date
WO2020051897A1 true WO2020051897A1 (zh) 2020-03-19

Family

ID=69778109

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/105792 WO2020051897A1 (zh) 2018-09-14 2018-09-14 图像融合方法、***、电子设备和计算机可读介质

Country Status (1)

Country Link
WO (1) WO2020051897A1 (zh)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102800059A (zh) * 2012-07-05 2012-11-28 清华大学 一种基于近红外图像辅助的图像可见性增强方法
CN104995909A (zh) * 2012-12-21 2015-10-21 菲力尔***公司 时间间隔的红外图像增强
CN105869114A (zh) * 2016-03-25 2016-08-17 哈尔滨工业大学 基于多层带间结构模型的多光谱图像和全色图像融合方法
CN106600572A (zh) * 2016-12-12 2017-04-26 长春理工大学 一种自适应的低照度可见光图像和红外图像融合方法
CN108389158A (zh) * 2018-02-12 2018-08-10 河北大学 一种红外和可见光的图像融合方法


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11250550B2 (en) * 2018-02-09 2022-02-15 Huawei Technologies Co., Ltd. Image processing method and related device
CN111738097A (zh) * 2020-05-29 2020-10-02 理光软件研究所(北京)有限公司 一种目标分类方法、装置、电子设备和可读存储介质
CN111738097B (zh) * 2020-05-29 2024-04-05 理光软件研究所(北京)有限公司 一种目标分类方法、装置、电子设备和可读存储介质


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18933355; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18933355; Country of ref document: EP; Kind code of ref document: A1)