CN112215787B - Infrared and visible light image fusion method based on significance analysis and adaptive filter


Info

Publication number
CN112215787B
CN112215787B (application CN202010379475.1A)
Authority
CN
China
Prior art keywords
image
infrared
visible light
base layer
pixel
Prior art date
Legal status
Active
Application number
CN202010379475.1A
Other languages
Chinese (zh)
Other versions
CN112215787A (en)
Inventor
周志立
王玉成
蒋义钐
阮秀凯
崔桂华
李昌
陈榆
闫正兵
张杨
金荣华
Current Assignee
Wenzhou Zhian Yunlian Network Technology Co ltd
Zhejiang Voc Technology Co ltd
Intelligent Lock Research Institute Of Wenzhou University
Original Assignee
Wenzhou Zhian Yunlian Network Technology Co ltd
Zhejiang Voc Technology Co ltd
Intelligent Lock Research Institute Of Wenzhou University
Priority date
Filing date
Publication date
Application filed by Wenzhou Zhian Yunlian Network Technology Co ltd, Zhejiang Voc Technology Co ltd, Intelligent Lock Research Institute Of Wenzhou University
Publication of CN112215787A
Application granted
Publication of CN112215787B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20028 Bilateral filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an infrared and visible light image fusion method based on saliency analysis and an adaptive filter. First, an adaptive bilateral filter based on local entropy is designed so that edge information is effectively preserved, and the images are passed through this filter. Then, based on the fact that the visible light image and the infrared image are characterized by detail information and pixel intensity, respectively, different methods are designed to extract image saliency as the corresponding weights. Finally, the image is fused and reconstructed based on these weights. Experimental results show that the proposed method outperforms traditional image fusion methods in both subjective and objective evaluation.

Description

Infrared and visible light image fusion method based on significance analysis and adaptive filter
Technical Field
The invention relates to the field of image fusion, and in particular to an infrared and visible light image fusion method based on significance analysis and an adaptive filter.
Background
Image fusion aims to integrate images acquired by different types of sensors into a single image, so as to enhance the usability of the information and give a more accurate, reliable, and comprehensive description of the scene; it has excellent application prospects in remote sensing and in related fields such as pattern recognition, medical imaging, and modern military applications. A visible light image records the light reflected by the scene, provides texture detail, and offers high spatial resolution and good consistency with the human visual system. An infrared image records thermal radiation information, distinguishing targets from the background by their difference in heat radiation. Visible and infrared images therefore have complementary advantages, and their fusion can produce a stable fused image with a large amount of information that is widely applicable in computer vision and related fields. However, infrared images are susceptible to changes in ambient temperature, and visible light images are easily affected by weather conditions such as poor light and fog. This sensitivity to environmental conditions makes the fusion of visible and thermal infrared images a challenge.
Image fusion algorithms are generally divided into the pixel level, the feature level, and the decision level. Pixel-level fusion combines the original source images directly into one picture, which saves time and is simple to implement; it is therefore widely used in practical image fusion. In the past few years, researchers have proposed many fusion methods for visible and infrared images. Various transform-based image fusion methods, such as the wavelet transform, curvelet transform, dual-tree complex wavelet transform (DTCWT), and contourlet transform, can realize image fusion effectively. However, these algorithms also tend to obscure some image detail information. For methods based on multi-scale decomposition, such as the Laplacian pyramid and the ratio pyramid, performance depends greatly on the setting of the relevant parameters, which makes it difficult for inexperienced users to obtain reliable fusion results; furthermore, the problem of selecting the number of decomposition levels remains unsolved. Methods based on edge-preserving filtering have also been widely applied to image fusion. One key issue for edge-preserving-filter-based approaches is to preserve the spatial consistency of the structure and reduce edge halo artifacts; another key technology is the fusion rule. Conventional fusion rules focus mainly on retaining as much useful information from the source images as possible. Because image saliency is consistent with human visual attention, saliency-based fusion rules have been used in the image fusion process. Most algorithms attribute image saliency to differences in gray values between image pixels; however, infrared and visible images have different attributes: infrared images are characterized by regional pixel intensity information and visible images by texture detail information, so their saliency measures should differ.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide an infrared and visible light image fusion method based on significance analysis and an adaptive filter.
In order to realize the purpose, the invention provides the following technical scheme:
an infrared and visible light image fusion method based on significance analysis and an adaptive filter, characterized by comprising the following steps:
step one, decomposing the visible light and infrared images into a base layer image and a detail layer image by using an adaptive filter;
step two, carrying out image fusion on the base layer images of the visible light and infrared images based on image saliency:
taking detail information and pixel intensity as the respective features, obtaining the weight coefficients of the visible light base layer image and the infrared base layer image, and fusing the two base layer images using these weight coefficients;
step three, carrying out image fusion on the base layer image fused in step two and the detail layer image of the visible light image.
In step one,
the adaptive filter is a bilateral filter:

$$BF(I)_q = \frac{1}{W_q}\sum_{p\in\Omega_q} G_{\sigma_s}(\|p-q\|)\,G_{\sigma_r}(|I_p-I_q|)\,I_p$$

where

$$W_q = \sum_{p\in\Omega_q} G_{\sigma_s}(\|p-q\|)\,G_{\sigma_r}(|I_p-I_q|)$$

Here Ω_q ⊆ I denotes the region around pixel q, I denotes the image, σ_s and σ_r are the spatial and range standard deviations of the Gaussian kernels G_σs and G_σr, p is a pixel, I_p is the gray value corresponding to pixel p, and I_q is the gray value corresponding to pixel q.
When the infrared and visible light images are filtered, the size of σ_r is adjusted adaptively in different areas according to their detail information:
from the local information entropy

$$E(q) = -\sum_{j=0}^{L-1} P_{\Omega_q}(j)\,\log_2 P_{\Omega_q}(j)$$

a value σ(q) is obtained and used as the adaptively varying σ_r of the adaptive filter, where P_{Ω_q}(j) = N_j/(M×N) is the probability of occurrence of gray value j in region Ω_q, M×N is the size of the image, N_j is the number of pixels with gray value j, and L is the number of gray levels of the image.
In step two,
first, the weight coefficient of the visible light base layer image is obtained from the local information entropy:

$$w_{vi}(i,j) = E_{vi}(p)$$

where (i, j) is the coordinate of pixel p;
then, the weight coefficient of the infrared image is obtained from the pixel intensity:

$$w_{ir}(i,j) = \mathrm{sgm}(\tilde{I}_p)$$

where the function sgm(x) = (1+e^(-x))^(-1), I^+ is the maximum value of Ĩ, I^- is its negative maximum value, Ĩ_p = I_p - I_mid with I_max = max_{q∈I} I_q, I_min = min_{q∈I} I_q, I_mid = (I_max + I_min)/2, I^+ = I_max - I_mid, I^- = I_min - I_mid; I_q is the gray value of pixel q in image I, and I_p is the gray value corresponding to any pixel p of the image;
finally, the fused base layer image is obtained by

$$B_{fusion} = \frac{w_{vi}\,B_{vi} + w_{ir}\,B_{ir}}{w_{vi} + w_{ir}}$$
In step three, the fused base layer image and the visible light detail layer image are added to obtain the fused image.
The invention has the following beneficial effects: an adaptive bilateral filter based on local entropy is designed to effectively preserve edge information, and the images are passed through this filter; then, taking detail information and pixel intensity as the respective features of the visible light and infrared images, different methods are designed to extract image saliency as the corresponding weights; finally, the image is fused and reconstructed based on these weights. The proposed method outperforms traditional image fusion methods in both subjective and objective evaluation.
Drawings
FIG. 1 is a schematic diagram of a process framework for fusing a visible light image and an infrared image.
FIG. 2 shows the filtering effect on the Lena image under different σ_r values, where (a) σ_r = 5, (b) σ_r = 2, (c) σ_r = 1, (d) σ_r = 0.2.
FIG. 3 shows σ(q) and E(q) maps for a visible light image and an infrared image, where (a) and (b) are the visible light and infrared images, respectively, (c) and (d) are their entropy map images, and (e) and (f) are the σ_r map images of (a) and (b), respectively.
FIG. 4 shows the change curve of the infrared image weight coefficient w_ir(i,j);
FIG. 5 shows infrared images and their histograms, where (a) and (b) are infrared images, and (c) and (d) are the histograms of (a) and (b), respectively.
FIG. 6 shows infrared images and their saliency maps, where (a) and (b) are infrared images, and (c) and (d) are the saliency maps of (a) and (b), respectively;
FIG. 7 shows the fusion results for the image Nato_camp: (a) visible light image, (b) infrared image, (c) RP, (d) Wavelet, (e) DTCWT, (f) CVT, (g) MSVD, (h) the proposed method.
FIG. 8 shows the fusion results for the image Meting: (a) visible light image, (b) infrared image, (c) RP, (d) Wavelet, (e) DTCWT, (f) CVT, (g) MSVD, (h) the proposed method.
FIG. 9 shows the fusion results for the image Lake: (a) visible light image, (b) infrared image, (c) RP, (d) Wavelet, (e) DTCWT, (f) CVT, (g) MSVD, (h) the proposed method.
FIG. 10 shows the fusion results for the image Soldier: (a) visible light image, (b) infrared image, (c) RP, (d) Wavelet, (e) DTCWT, (f) CVT, (g) MSVD, (h) the proposed method.
FIG. 11 shows the fusion results for the image Bunker: (a) visible light image, (b) infrared image, (c) RP, (d) Wavelet, (e) DTCWT, (f) CVT, (g) MSVD, (h) the proposed method.
Fig. 12 is an EN index comparison table after various mainstream image fusion methods are employed.
Fig. 13 is a MI index comparison table after various mainstream image fusion methods are employed.
FIG. 14 is a Q^{VI/F} index comparison table after various mainstream image fusion methods are employed.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention discloses an infrared and visible light image fusion method based on significance analysis and an adaptive filter, which comprises the following three parts: (1) decomposing the images using the adaptive edge-preserving filter; (2) adaptive image base layer fusion based on image saliency; (3) image fusion of the base and detail layers.
Namely, it comprises the following steps:
step one, decomposing the visible light and infrared images into a base layer image and a detail layer image by using an adaptive filter;
step two, carrying out image fusion on the base layer images of the visible light and infrared images based on image saliency:
taking detail information and pixel intensity as the respective features, obtaining the weight coefficients of the visible light base layer image and the infrared base layer image, and fusing the two base layer images using these weight coefficients;
step three, carrying out image fusion on the base layer image fused in step two and the detail layer image of the visible light image.
Regarding step one:
the infrared and visible images are decomposed into two layers: a base layer and a detail layer. The detail layer of the infrared image is discarded because useful information in the infrared image is contained in the base layer, while the detail layer usually contains noise and a small amount of information, and the noise affects the final fusion effect. However, this method necessarily results in some information being lost in the infrared image, so when the infrared image is decomposed, the base layer should preserve the edge detail information of the image as much as possible. In other words, the filter should filter the noise of the infrared image to the maximum extent, while retaining the detail information of the image.
Therefore, the invention adopts an adaptive filter whose range parameter adapts to the image content. The filter is a bilateral filter, a nonlinear filter that takes into account the gray-value differences between neighboring pixels, smoothing the image while preserving its edge details. It can be expressed as follows:

$$BF(I)_q = \frac{1}{W_q}\sum_{p\in\Omega_q} G_{\sigma_s}(\|p-q\|)\,G_{\sigma_r}(|I_p-I_q|)\,I_p \tag{1}$$

where

$$W_q = \sum_{p\in\Omega_q} G_{\sigma_s}(\|p-q\|)\,G_{\sigma_r}(|I_p-I_q|) \tag{2}$$

and Ω_q ⊆ I denotes the region around pixel q, and I denotes the image.
The parameters σ_s and σ_r are the spatial and range standard deviations of the Gaussian kernels. The performance of the bilateral filter depends strongly on the choice of σ_s and σ_r. In particular, σ_r relates to regional intensity differences, and an appropriate σ_r value preserves detail information in the filtered image. FIG. 2 shows the filtering effect on the Lena image under different σ_r values; as can be seen, the smaller σ_r is, the sharper the image, i.e., the more detail information is retained. Different areas of an image contain different amounts of detail: flat areas contain little detail information, while edge areas contain more texture information. Therefore, when filtering the image, σ_r should be adjusted adaptively in different areas according to their detail information. Since the local information entropy can effectively measure the local information of an image, an adaptive filter based on local information entropy is designed. The local information entropy can be expressed as follows:
$$E(q) = -\sum_{j=0}^{L-1} P_{\Omega_q}(j)\,\log_2 P_{\Omega_q}(j) \tag{3}$$

where P_{Ω_q}(j) = N_j/(M×N) is the probability of occurrence of gray value j in region Ω_q, M×N is the size of the image, N_j is the number of pixels with gray value j, and L is the number of gray levels. Obviously, the more information an image area contains, the larger the corresponding value of E(q), and the smaller the filter parameter σ_r should be. Therefore, the following function is designed:

$$\sigma(q) = f\bigl(E(q)\bigr) \tag{4}$$

where f is monotonically decreasing in E(q) (the exact expression is given only as a formula image in the original publication).
FIG. 3 shows the images obtained by mapping σ(q) and E(q) over a pair of visible and infrared images. From the figure it can be seen that the flatter the image area, the smaller E(q) and the larger σ(q), while in edge areas E(q) is larger and σ(q) smaller. Therefore, σ(q) from equation (4) can be used as the adaptive parameter in equations (1) and (2) to obtain an adaptive filter, which decomposes each image into two parts:
$$B_{vi} = BF_{ad}(I_{vi}) \tag{5}$$

$$B_{ir} = BF_{ad}(I_{ir}) \tag{6}$$

where I_vi and I_ir are the visible light and infrared images, B_vi and B_ir are their base layers, and BF_ad denotes the adaptive filter. The detail layers of the visible light and infrared images are then obtained as:

$$D_{vi} = I_{vi} - B_{vi} \tag{7}$$

$$D_{ir} = I_{ir} - B_{ir} \tag{8}$$
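To make the decomposition step concrete, a minimal Python/NumPy sketch is given below. It is an illustration under stated assumptions rather than the patent's reference implementation: the window size `win`, the 8-bit gray-scale range, and in particular `entropy_to_sigma` (a hypothetical stand-in for equation (4), whose exact form is not preserved above) are assumptions; only its monotonically decreasing behavior is taken from the description.

```python
import numpy as np

def local_entropy(img, win=7, levels=256):
    """E(q), equation (3): Shannon entropy of the gray-level histogram
    in a win x win window around each pixel (8-bit range assumed)."""
    pad = win // 2
    padded = np.pad(np.asarray(img, dtype=np.uint8), pad, mode='reflect')
    H, W = img.shape
    ent = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + win, j:j + win]
            hist = np.bincount(patch.ravel(), minlength=levels)
            p = hist[hist > 0] / patch.size
            ent[i, j] = -np.sum(p * np.log2(p))
    return ent

def entropy_to_sigma(ent, sigma_max=5.0, sigma_min=0.2):
    """Hypothetical stand-in for equation (4): a monotonically decreasing
    map from E(q) to sigma_r(q); the range matches the values in FIG. 2."""
    e = (ent - ent.min()) / (ent.max() - ent.min() + 1e-12)
    return sigma_max - (sigma_max - sigma_min) * e

def adaptive_bilateral(img, sigma_s=2.0, win=7):
    """Bilateral filter of equations (1)-(2) with a per-pixel range
    standard deviation sigma_r(q) derived from local entropy."""
    img = np.asarray(img, dtype=np.float64)
    sig_r = entropy_to_sigma(local_entropy(img, win))
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    ys, xs = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    g_s = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))  # spatial kernel
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + win, j:j + win]
            g_r = np.exp(-(patch - img[i, j]) ** 2 / (2 * sig_r[i, j] ** 2))
            w = g_s * g_r                              # combined kernel weights
            out[i, j] = np.sum(w * patch) / np.sum(w)  # eq. (1); W_q = w.sum()
    return out

# Decomposition, equations (5)-(8), with I_vi and I_ir as 2-D gray arrays:
#   B_vi = adaptive_bilateral(I_vi);  D_vi = I_vi - B_vi
#   B_ir = adaptive_bilateral(I_ir);  D_ir = I_ir - B_ir  (D_ir later discarded)
```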
First, the weight coefficient of the visible light base layer image is obtained. The visible light image is mainly characterized by rich detail information, which should be kept in image fusion. Local entropy effectively describes the texture information contained in an image region, so the weight coefficient of the visible light base layer image can be expressed using the local entropy:

$$w_{vi}(i,j) = E_{vi}(p) \tag{9}$$

where (i, j) is the coordinate of pixel p and E_vi denotes the local information entropy of the visible light base layer.
then, with the use of the weight coefficient for acquiring the infrared image, the thermal information of the target object is the main feature of the image for the infrared image. It is the factor that is mainly considered for image fusion. Thus, some algorithms use the absolute gray value of a pixel to measure the weight of each pixel. However, the infrared image is easily affected by changes in the ambient temperature. For example, in a low-temperature environment, a region having an average grayscale value of 100 may be considered as a target object. However, in an environment with a relatively high temperature, the value can only be used as a background environment, and the average gray-scale value of the hot spot target area in the environment can exceed 200. The use of absolute pixel values as infrared image weights is therefore not adaptive in most cases.
Let I_q denote the gray value of pixel q in image I. We define

$$I_{max} = \max_{q\in I} I_q \tag{10}$$

$$I_{min} = \min_{q\in I} I_q \tag{11}$$

$$I_{mid} = \frac{I_{max}+I_{min}}{2} \tag{12}$$

$$I^{+} = I_{max} - I_{mid} \tag{13}$$

$$I^{-} = I_{min} - I_{mid} \tag{14}$$

The gray value I_p of any pixel p of the image is then shifted as follows:

$$\tilde{I}_p = I_p - I_{mid} \tag{15}$$

and the weight coefficient corresponding to the pixel is expressed as

$$w_{ir}(i,j) = \mathrm{sgm}(\tilde{I}_p) \tag{16}$$
where the function sgm(x) = (1+e^(-x))^(-1), I^+ is the maximum value of Ĩ, and I^- is its negative maximum value. When Ĩ_p equals I^+ or I^-, w_ir(i,j) takes the values 1 and 0, respectively. FIG. 4 shows the change curve of w_ir(i,j) as Ĩ_p varies over the interval [I^-, I^+]: the closer Ĩ_p is to I^+ or I^-, the flatter the curve, and the closer it is to 0, the steeper the curve. It is well known that for most images the pixels are concentrated in the middle of the histogram, as shown in FIG. 5; in other words, the gray-value differences between most pixels are relatively small. Because the original gray values are processed by equation (15), the shifted values Ĩ are mainly centered around 0, where the slope of the curve in FIG. 4 is largest, so the contrast between pixels is increased and the visual effect enhanced. FIG. 6 shows the corresponding weight coefficients for two images; this method highlights the hot areas and increases the contrast of the image.
Finally, since the visible light image contains most of the texture information, it is the main determinant of the quality of the fused image. The image fusion of the base layer is thus calculated as follows:

$$B_{fusion} = \frac{w_{vi}\,B_{vi} + w_{ir}\,B_{ir}}{w_{vi} + w_{ir}} \tag{17}$$

which yields the fused base layer image.
notably, the infrared image contains primarily thermal infrared information. With adaptive filters, the detail layer of the infrared image contains a small amount of edge information and most of the noise, which degrades the quality of the fused image. We discard the detail layer of the infrared image. Thus, in the present invention, the final image fusion calculation is as follows:
Ifusion=Bfusion+Dfusion (18)
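Putting the pieces of the sketch together, one possible end-to-end routine is shown below; the normalized weighted sum used for B_fusion is an assumed concrete form of equation (17), not a formula quoted from the patent.

```python
def fuse(I_vi, I_ir):
    """Fuse a visible image I_vi and an infrared image I_ir
    (2-D gray-scale arrays of equal shape), steps one to three."""
    # Step one: decompose both images (equations (5)-(8)).
    B_vi = adaptive_bilateral(I_vi)
    D_vi = I_vi - B_vi
    B_ir = adaptive_bilateral(I_ir)   # the infrared detail layer is discarded

    # Step two: saliency-based base layer fusion (assumed form of eq. (17)).
    w_vi = visible_weight(B_vi)
    w_ir = infrared_weight(B_ir)
    B_fusion = (w_vi * B_vi + w_ir * B_ir) / (w_vi + w_ir + 1e-12)

    # Step three: reconstruction (equation (18)), with D_fusion = D_vi.
    return B_fusion + D_vi
```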
the current mainstream algorithm adopted in the experiment is as follows: contrast pyramid (RP), Wavelet transform (Wavelet), Dual Tree Complex Wavelet Transform (DTCWT), curvelet transform (CVT), multi-resolution singular value decomposition (MSVD). Using classical infrared and visible light image pairs: nato _ camp, Meting, Lake, Soldier, Bunker, evaluated the fusion effect in subjective and objective aspects, respectively. The objective evaluation indexes include Entropy (EN), Mutual Information (MI) and edge texture information (Q)VI/F)。
The experimental effects are shown in fig. 7 to 11.
The examples should not be construed as limiting the present invention, but any modifications made based on the spirit of the present invention should be within the scope of protection of the present invention.

Claims (2)

1. An infrared and visible light image fusion method based on significance analysis and an adaptive filter, characterized by comprising the following steps:
step one, decomposing the visible light and infrared images into a base layer image and a detail layer image by using an adaptive filter;
step two, carrying out image fusion on the base layer images of the visible light and infrared images based on image saliency:
taking detail information and pixel intensity as the respective features, obtaining the weight coefficients of the visible light base layer image and the infrared base layer image, and fusing the two base layer images using these weight coefficients;
step three, carrying out image fusion on the base layer image fused in step two and the detail layer image of the visible light image,
in the first step of the method,
the adaptive filter is a bilateral filter
Figure FDA0003593735910000011
Wherein
Figure FDA0003593735910000012
Wherein Ω isqe.I denotes the region where pixel q is located, I denotes the image, σsAnd σrRepresenting the spatial and range standard deviation of the gaussian function, p is the pixel,
Figure FDA0003593735910000013
respectively representing variance as σs、σrGaussian kernel function of (1), IpIs the gray value, I, corresponding to the pixel pqIs the gray value corresponding to the pixel q,
when the infrared and visible light images are filtered, the sigma is adjusted in different areas in a self-adaptive manner according to different detail informationrThe size of (a) is (b),
by local information entropy
Figure FDA0003593735910000014
Obtaining
Figure FDA0003593735910000015
And will be
Figure FDA0003593735910000016
σ (q) obtained in (c) as adaptively varied σ of the adaptive filterrWherein
Figure FDA0003593735910000017
Is region omegaqProbability of occurrence of inner gray value j, M × N representing size of image, NjIs the number of pixels with a gray value of j, L is the number of gray levels of the image,
in the second step, the first step is carried out,
firstly, acquiring a weight coefficient of a visible light base layer image by using local information entropy
Figure FDA0003593735910000021
Wherein (i, j) is the coordinate where the pixel p is located;
then, the weight coefficient of the infrared image is obtained by using the pixel intensity
Figure FDA0003593735910000022
Wherein the function sgm (x) ═ 1+ e-x)-1,I+Is composed of
Figure FDA0003593735910000023
Maximum value of (1), I-Is composed of
Figure FDA0003593735910000024
The negative maximum value of (a) is,
Figure FDA0003593735910000025
Figure FDA0003593735910000026
I+=Imax-Imid,I-=Imin-Imid,Iqrepresenting the gray value of a pixel q in an image I, IpIs the gray value corresponding to any pixel p of the image,
and finally, the fused base layer image is obtained by

$$B_{fusion} = \frac{w_{vi}\,B_{vi} + w_{ir}\,B_{ir}}{w_{vi} + w_{ir}}$$

where B_vi and B_ir are the base layers of the visible light and infrared images, respectively.
2. The infrared and visible light image fusion method based on significance analysis and an adaptive filter according to claim 1, characterized in that in step three, the fused base layer image and the visible light detail layer image are added to form the fused image.
CN202010379475.1A 2020-04-30 2020-05-07 Infrared and visible light image fusion method based on significance analysis and adaptive filter Active CN112215787B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020103707289 2020-04-30
CN202010370728 2020-04-30

Publications (2)

Publication Number Publication Date
CN112215787A CN112215787A (en) 2021-01-12
CN112215787B (en) 2022-06-17

Family

ID=74058683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010379475.1A Active CN112215787B (en) 2020-04-30 2020-05-07 Infrared and visible light image fusion method based on significance analysis and adaptive filter

Country Status (1)

Country Link
CN (1) CN112215787B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113159229B (en) * 2021-05-19 2023-11-07 深圳大学 Image fusion method, electronic equipment and related products
CN113610738A (en) * 2021-08-06 2021-11-05 烟台艾睿光电科技有限公司 Image processing method, device, equipment and computer readable storage medium
CN113436078B (en) * 2021-08-10 2022-03-15 诺华视创电影科技(江苏)有限公司 Self-adaptive image super-resolution reconstruction method and device


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2309449A1 (en) * 2009-10-09 2011-04-13 EPFL Ecole Polytechnique Fédérale de Lausanne Method to produce a full-color smoothed image
CN105069768A (en) * 2015-08-05 2015-11-18 武汉高德红外股份有限公司 Visible-light image and infrared image fusion processing system and fusion method
CN109493309A (en) * 2018-11-20 2019-03-19 北京航空航天大学 A kind of infrared and visible images variation fusion method keeping conspicuousness information
CN110084748A (en) * 2019-03-26 2019-08-02 温州晶彩光电有限公司 A kind of infrared and visible light image fusion method based on total variational
CN110189284A (en) * 2019-05-24 2019-08-30 南昌航空大学 A kind of infrared and visible light image fusion method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Changxing, Wu Jie. Infrared and visible image fusion based on FPDEs and CBF. Computer Science, 2019 (full text). *

Also Published As

Publication number Publication date
CN112215787A (en) 2021-01-12

Similar Documents

Publication Publication Date Title
Wang et al. Adaptive image enhancement method for correcting low-illumination images
CN112215787B (en) Infrared and visible light image fusion method based on significance analysis and adaptive filter
CN107153816B (en) Data enhancement method for robust face recognition
Wang et al. Single image dehazing based on the physical model and MSRCR algorithm
CN112950518B (en) Image fusion method based on potential low-rank representation nested rolling guide image filtering
CN107784642A (en) A kind of infrared video and visible light video method for self-adaption amalgamation
CN103020933B (en) A kind of multisource image anastomosing method based on bionic visual mechanism
CN110163818A (en) A kind of low illumination level video image enhancement for maritime affairs unmanned plane
CN113313639A (en) Image enhancement method based on Retinex multi-level decomposition
Wang et al. Low-light image joint enhancement optimization algorithm based on frame accumulation and multi-scale Retinex
CN113298147B (en) Image fusion method and device based on regional energy and intuitionistic fuzzy set
Gao et al. Improving the performance of infrared and visible image fusion based on latent low-rank representation nested with rolling guided image filtering
CN111462028A (en) Infrared and visible light image fusion method based on phase consistency and target enhancement
CN111815550A (en) Infrared and visible light image fusion method based on gray level co-occurrence matrix
CN113313702A (en) Aerial image defogging method based on boundary constraint and color correction
CN111507913A (en) Image fusion algorithm based on texture features
CN115587945A (en) High dynamic infrared image detail enhancement method, system and computer storage medium
CN115457249A (en) Method and system for fusing and matching infrared image and visible light image
Zhao et al. Infrared and visible image fusion method based on rolling guidance filter and NSST
Zhou et al. Retinex-MPCNN: a Retinex and Modified Pulse coupled Neural Network based method for low-illumination visible and infrared image fusion
CN107705274B (en) Multi-scale low-light-level and infrared image fusion method based on mathematical morphology
Li et al. [Retracted] Research on Haze Image Enhancement based on Dark Channel Prior Algorithm in Machine Vision
CN110084774B (en) Method for minimizing fusion image by enhanced gradient transfer and total variation
Gao et al. Infrared and visible image fusion using dual-tree complex wavelet transform and convolutional sparse representation
CN109472762A (en) Infrared double-waveband Image Fusion based on NSCT and non-linear enhancing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant