CN109919884A - Infrared and visible light image fusion method based on gaussian filtering weighting - Google Patents
Infrared and visible light image fusion method based on Gaussian filtering weighting
- Publication number: CN109919884A (application CN201910093445.1A)
- Authority: CN (China)
- Prior art keywords: image, formula, mapping graph, contrast, infrared
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
- Landscapes: Image Processing (AREA)
Abstract
The present invention relates to an infrared and visible light image fusion method based on Gaussian filter weighting. The source images are decomposed with a Gaussian filter. To measure the saliency of visual features, decision map models of the structural-saliency visual features, weighted by the Gaussian filter, are constructed and fused. Exploiting the correlation between neighboring pixels of the fused image, a fast guided filter is used to suppress noise and the halo artifacts produced by inconsistencies at decision-map boundaries. Experiments show that, compared with existing image fusion methods, the proposed method achieves a better fusion effect: it overcomes the problems of missing texture details and distortion in the fused image and greatly improves the saliency of the fused image.
Description
Technical field
The present invention relates to an infrared and visible light image fusion method that can be applied to various military or civilian image processing systems.
Background technique
Owing to the limits of imaging mechanisms and technology, the image acquired by a single imaging sensor cannot reflect all features of the observed object across application environments, usage ranges, and specific targets. It is therefore necessary to fuse images from different sensors: removing redundant information and extracting the useful information of each yields a single image with more complete information, giving more accurate and comprehensive spatial information about targets in the same scene and helping humans observe and process the image.
Visible light images contain rich scene detail and spectral information, but cannot show objects hidden by smoke or under low-light conditions, especially deliberately camouflaged people or objects. Infrared imaging sensors penetrate smoke well and can capture heat-radiating objects, and infrared thermal imagers are sensitive to temperature distribution and work day and night, but they cannot obtain the rich texture and spectral information of the photographed scene. Given the complementary characteristics of the two sensors, fusing a visible light image with an infrared image produces a single image that describes the scene more completely, containing information from both the visible and the infrared image. Infrared and visible image fusion is therefore an important part of multi-source image fusion and has found wide application in computer vision, robotics, and target reconnaissance and recognition. The latest American night-vision goggles can perform fusion processing of infrared and visible images, allowing search personnel to complete various reconnaissance tasks well under poor illumination; Britain has applied multi-source image fusion of infrared and color visible images in a helicopter image fusion system, and the reconstructed images achieve good visual effect. Research on infrared and visible image fusion technology is therefore of profound significance.
In recent years a large number of image fusion algorithms for infrared and visible images have been proposed, with good results. To address the bright-spot artifacts caused by ignoring spatial consistency during fusion, the document "Pixel and region based image fusion with complex wavelets", Information Fusion, 2007, 8(2): 119-130, uses the dual-tree complex wavelet transform (DTCWT algorithm); the document "Remote sensing image fusion using the curvelet transform", Information Fusion, 2007, 8(2): 143-156, performs image fusion with the curvelet transform (CVT algorithm); and the document "Infrared and visible image fusion based on the compensation mechanism in NSCT domain", Chinese Journal of Scientific Instrument, 2016, 37(4): 860-870, uses the non-subsampled contourlet transform (NSCT algorithm).
Summary of the invention
Technical problem to be solved
To avoid the shortcomings of the prior art, the present invention proposes a Gaussian-filter-weighted image fusion method for infrared and visible light images.
Technical solution
An infrared and visible light image fusion method based on Gaussian filter weighting, characterized by the following steps:
Step 1: decomposition of the source images by a Gaussian filter
Each source image is decomposed with a Gaussian filter to obtain its low-frequency component, and the high-frequency component is obtained by subtracting the low-frequency component from the source image:
B_n = G_{r,σ} * I_n, D_n = I_n − B_n
where I_n is a source image, B_n the low-frequency component of the source image, D_n the high-frequency component of the source image, and G_{r,σ} denotes a Gaussian filter of variance σ and size (2r+1) × (2r+1);
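This two-scale decomposition can be sketched in a few lines of NumPy (a minimal illustration rather than the patented MATLAB implementation; the function names and the replicated border handling are assumptions of this sketch):

```python
import numpy as np

def gaussian_kernel(r, sigma):
    # (2r+1) x (2r+1) normalized Gaussian kernel G_{r,sigma}
    ax = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def convolve_same(img, kernel):
    # 'same'-size convolution with replicated borders (border choice assumed)
    r = kernel.shape[0] // 2
    p = np.pad(img, r, mode="edge")
    out = np.empty_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(p[y:y + 2 * r + 1, x:x + 2 * r + 1] * kernel)
    return out

def two_scale_decompose(I, r=2, sigma=1.0):
    # B_n = G_{r,sigma} * I_n (low frequency), D_n = I_n - B_n (high frequency)
    I = I.astype(float)
    B = convolve_same(I, gaussian_kernel(r, sigma))
    return B, I - B
```

By construction the two components sum back to the source image exactly, which is what makes the final reconstruction lossless when all the weight of a pixel is given to one image.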
Step 2: measurement of decision values and construction of multi-visual-feature weight maps
Since image contrast, clarity, and structural information are three important features of the visual quality of a fused visible and infrared image, a contrast weight map, a clarity weight map, and a structural saliency weight map are constructed respectively:
(a) Contrast weight map
The contrast weight map is built from the local contrast and the Gaussian filter, so as to characterize the local features of each image. The contrast saliency map is of the form
C_n(x, y) = G_{r,σ} * Σ_{(i,j)} ω(i, j) |I_n(x + i, y + j) − μ_n(x, y)|
where * is the convolution operator, ω(i, j) is the weight of the 3 × 3 window, i and j index a pixel position within the local window, μ_n(x, y) is the mean of the 3 × 3 window centered at (x, y), and G_{r,σ} is the Gaussian smoothing filter of the (2r+1) × (2r+1) window. The contrast weight map is then defined by the maximum rule
D_{1,n}(k) = 1 if C_n(k) = max(C_1(k), …, C_n(k)), and 0 otherwise, for k = 1, …, N
where N is the number of pixels of the input image, C_n(k) is the contrast saliency value at pixel k, and n is the number of input images;
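The local contrast measure and the per-pixel maximum rule can be sketched as follows (uniform 3 × 3 window weights are assumed in place of the patent's ω(i, j); all function names are illustrative):

```python
import numpy as np

def local_contrast(I):
    # absolute deviation of each pixel from its 3x3 neighbourhood mean
    I = I.astype(float)
    p = np.pad(I, 1, mode="edge")
    mean = np.zeros_like(I)
    for dy in range(3):
        for dx in range(3):
            mean += p[dy:dy + I.shape[0], dx:dx + I.shape[1]]
    mean /= 9.0
    return np.abs(I - mean)

def max_rule_weight_maps(saliencies):
    # binary decision maps: image n gets weight 1 at pixel k iff its
    # saliency there is the maximum over all input images
    S = np.stack(saliencies)            # shape (n, H, W)
    winner = S.argmax(axis=0)
    return [(winner == i).astype(float) for i in range(S.shape[0])]
```

Smoothing the contrast map with the Gaussian filter of Step 1 before applying the maximum rule gives the Gaussian-weighted contrast saliency described above.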
(b) Clarity weight map
The clarity saliency map reflects the edge transitions and sharpness information of the image and is of the form
S_n(x, y) = Σ_{(i,j)} ω(i, j) ML_n(x + i, y + j)
where ω(i, j) is the weight of the 3 × 3 window and ML is the modified Laplacian. The clarity weight map D_{2,n} is then defined by the same maximum rule, where N is the number of pixels of the input image, S_n(k) is the clarity saliency value at pixel k, and n is the number of input images;
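A common form of the modified Laplacian (the sum-modified-Laplacian stencil) can serve as a sketch of this clarity measure; whether the patent uses exactly this stencil is an assumption:

```python
import numpy as np

def modified_laplacian(I):
    # ML(x,y) = |2I - left - right| + |2I - up - down|
    p = np.pad(I.astype(float), 1, mode="edge")
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:]) +
            np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))

def clarity_saliency(I):
    # sum of the ML response over each 3x3 window (uniform weights assumed)
    ml = modified_laplacian(I)
    p = np.pad(ml, 1, mode="edge")
    out = np.zeros_like(ml)
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + ml.shape[0], dx:dx + ml.shape[1]]
    return out
```

The response is zero on flat regions and large across edges, which is exactly the sharpness behaviour the weight map rewards.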
(c) Structural saliency weight map
Based on the ability of the Gaussian filter to efficiently characterize local image structure, a Gaussian-filter-weighted local gradient covariance matrix is designed:
C = G_σ * [ I_x(X)I_x(X), I_x(X)I_y(X); I_x(X)I_y(X), I_y(X)I_y(X) ]
where I_x(X) and I_y(X) are the gradients along the x and y directions at X = (x, y), * is the convolution operator, and G_σ is a Gaussian filter of variance σ. To obtain a representation of the local image structure, the matrix C is decomposed to obtain its eigenvalues λ_1 and λ_2, from which the image structural saliency map is constructed; α = 0.5 is taken here to characterize the edge and texture features of the fused image.
The image structural saliency weight map D_{3,n} is defined by the same maximum rule, where N is the number of pixels of the input image, the structural saliency value at pixel k is compared across images, and n is the number of input images;
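The eigenvalue computation can be sketched as follows. The closed form for the eigenvalues of a symmetric 2 × 2 matrix is standard; the combination λ_1 + α·λ_2 is an assumed reading of the saliency formula, which is not reproduced in this text:

```python
import numpy as np

def structure_saliency(I, sigma=1.0, r=2, alpha=0.5):
    # Gaussian-weighted local gradient covariance; saliency from its
    # eigenvalues lam1 >= lam2 (combination lam1 + alpha*lam2 assumed)
    I = I.astype(float)
    Iy, Ix = np.gradient(I)
    ax = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    g /= g.sum()

    def smooth(a):
        p = np.pad(a, r, mode="edge")
        out = np.empty_like(a)
        for y in range(a.shape[0]):
            for x in range(a.shape[1]):
                out[y, x] = np.sum(p[y:y + 2 * r + 1, x:x + 2 * r + 1] * g)
        return out

    Jxx, Jyy, Jxy = smooth(Ix * Ix), smooth(Iy * Iy), smooth(Ix * Iy)
    # eigenvalues of [[Jxx, Jxy], [Jxy, Jyy]] via trace/determinant
    tr, det = Jxx + Jyy, Jxx * Jyy - Jxy**2
    disc = np.sqrt(np.maximum(tr**2 / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    return lam1 + alpha * lam2
```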
(d) Weight map refinement based on the fast guided filter
The fast guided filter is applied to each visual-feature weight map D_{1,n}, D_{2,n}, and D_{3,n}, with the source image I_n as the guidance image, to generate the final weight maps:
W_{m,n}^B = FGF_{r1,ε1}(D_{m,n}, I_n), W_{m,n}^D = FGF_{r2,ε2}(D_{m,n}, I_n), m = 1, 2, 3
where r1, ε1, r2, and ε2 are the parameters of the fast guided filter, and W_{m,n}^B and W_{m,n}^D are the weight maps of the low-frequency and high-frequency components respectively;
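The guided filter underlying this step can be sketched as below; the "fast" variant additionally subsamples the inputs before the box means and upsamples the linear coefficients afterwards, which is omitted here for brevity (all names and parameter values are illustrative):

```python
import numpy as np

def box_mean(a, r):
    # mean over a (2r+1) x (2r+1) window with replicated borders
    p = np.pad(a.astype(float), r, mode="edge")
    out = np.zeros_like(a, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def guided_filter(p, I, r=4, eps=0.01):
    # output is locally linear in the guide I: q = a*I + b per window,
    # with (a, b) solving a ridge-regularized least-squares fit to p
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mean_I * mean_p
    var_I = box_mean(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box_mean(a, r) * I + box_mean(b, r)
```

Because the output follows the gradients of the guide, filtering a binary decision map guided by the source image softens the map exactly along image edges, which is how the halo artifacts at decision-map boundaries are suppressed.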
(e) Total weight map
The refined weight maps are combined to characterize the total weight maps corresponding to source image I_n, where W_n^B and W_n^D are the total weight maps of the low-frequency and high-frequency components respectively and λ is a parameter between 0 and 1;
Step 3: two-scale image reconstruction
The low-frequency components and high-frequency components of the infrared and visible images are each fused by weighted summation to obtain the fused components:
B_F = Σ_n W_n^B · B_n, D_F = Σ_n W_n^D · D_n
The fused low-frequency component and high-frequency component are then reconstructed to obtain the fused image F = B_F + D_F.
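Assuming the total weight maps have been normalized so that they sum to one at every pixel, the weighted-sum reconstruction is simply (a sketch; names are illustrative):

```python
import numpy as np

def fuse_two_scale(Bs, Ds, WB, WD):
    # B_F = sum_n W_n^B * B_n,  D_F = sum_n W_n^D * D_n,  F = B_F + D_F
    BF = sum(w * b for w, b in zip(WB, Bs))
    DF = sum(w * d for w, d in zip(WD, Ds))
    return BF + DF
```

With binary max-rule weights this reduces to choose-max fusion; the guided-filter refinement of Step 2(d) softens the weights near decision boundaries, which avoids visible seams in the result.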
Beneficial effect
In the infrared and visible light image fusion method based on Gaussian filter weighting proposed by the present invention, the source images are decomposed with a Gaussian filter; to measure the saliency of visual features, decision map models of the structural-saliency visual features, weighted by the Gaussian filter, are constructed and fused; and, exploiting the correlation between neighboring pixels of the fused image, a fast guided filter is used to suppress noise and the halo artifacts produced by inconsistencies at decision-map boundaries. Experiments show that, compared with existing image fusion methods, the proposed method achieves a better fusion effect and overcomes the problems of missing texture details and distortion in the fused image, greatly improving the saliency of the fused image.
The present invention exploits the detail-preserving property of the Gaussian filter to extract the contours, texture, and detail information of the source images, effectively preserving image edge features; using the edge-preserving property of the fast weighted guided filter greatly improves the efficiency of edge-preserving fusion. The edge texture details and distortion of the fused image are greatly improved, target saliency is markedly increased, and processing speed is improved while image quality is guaranteed.
Detailed description of the drawings
Fig. 1: basic flow chart of the proposed method
Fig. 2: visible and infrared image data: (1) first group of source images, (2) second group of source images, (3) third group of source images, (4) fourth group of source images
Fig. 3: first group of visible and infrared images and fusion results: (1) visible, (2) infrared, (3) DTCWT, (4) CVT, (5) NSCT, (6) proposed algorithm
Fig. 4: second group of visible and infrared image fusion results: (1) visible, (2) infrared, (3) DTCWT, (4) CVT, (5) NSCT, (6) proposed algorithm
Fig. 5: third group of visible and infrared image fusion results: (1) visible, (2) infrared, (3) DTCWT, (4) CVT, (5) NSCT, (6) proposed algorithm
Fig. 6: fourth group of visible and infrared image fusion results: (1) visible, (2) infrared, (3) DTCWT, (4) CVT, (5) NSCT, (6) proposed algorithm
Specific embodiment
The invention is now further described in conjunction with an embodiment and the drawings:
The hardware environment for implementation is a CPU Intel Core i5 5200U 2.20 GHz with 4 GB of memory, programmed in MATLAB R2014a. The invention is verified by fusion processing of infrared and visible images. The basic flow of the proposed method is shown in Fig. 1 and the experimental source image data in Fig. 2. The concrete implementation is as follows:
Step 1: decomposition of the source images by a Gaussian filter
Each source image is decomposed with a Gaussian filter to obtain its low-frequency component, and the high-frequency component is obtained by subtracting the low-frequency component from the source image:
B_n = G_{r,σ} * I_n, D_n = I_n − B_n
where I_n is a source image, B_n the low-frequency component, D_n the high-frequency component, and G_{r,σ} denotes a Gaussian filter of variance σ and size (2r+1) × (2r+1).
Step 2: measurement of decision values and construction of multi-visual-feature weight maps
Since image contrast, clarity, and structural information are three important features of the visual quality of a fused visible and infrared image, the present invention constructs a contrast weight map, a clarity weight map, and a structural saliency weight map respectively.
(f) Contrast weight map
The present invention builds the contrast weight map from the local contrast and the Gaussian filter, characterizing with it the local features of each image. The local contrast is of the form
C_n(x, y) = Σ_{(i,j)} ω(i, j) |I_n(x + i, y + j) − μ_n(x, y)|
where ω(i, j) is the weight of the 3 × 3 window, i and j index a pixel position within the local window, μ_n(x, y) is the mean of the 3 × 3 window centered at (x, y), and G_{r,σ} is the Gaussian smoothing filter of the (2r+1) × (2r+1) window. The contrast weight map is then defined by the maximum rule
D_{1,n}(k) = 1 if C_n(k) = max(C_1(k), …, C_n(k)), and 0 otherwise, for k = 1, …, N
where N is the number of pixels of the input image, C_n(k) is the contrast saliency value at pixel k, and n is the number of input images.
(g) Clarity weight map
The clarity saliency map reflects well the edge transitions and sharpness information of the image:
S_n(x, y) = Σ_{(i,j)} ω(i, j) ML_n(x + i, y + j)
where ω(i, j) is the weight of the 3 × 3 window and ML is the modified Laplacian. The clarity weight map D_{2,n} is then defined by the same maximum rule, where N is the number of pixels of the input image, S_n(k) is the clarity saliency value at pixel k, and n is the number of input images.
(h) Structural saliency weight map
Since the local structure of infrared and visible images is closely related to the local gradient covariance, the present invention designs, according to the ability of the Gaussian filter to efficiently characterize local image structure, the Gaussian-filter-weighted local gradient covariance matrix
C = G_σ * [ I_x(X)I_x(X), I_x(X)I_y(X); I_x(X)I_y(X), I_y(X)I_y(X) ]
where I_x(X) and I_y(X) are the gradients along the x and y directions at X = (x, y), * is the convolution operator, and G_σ is a Gaussian filter of variance σ. To obtain a representation of the local image structure, the matrix C is decomposed to obtain its eigenvalues λ_1 and λ_2, from which the image structural saliency map is constructed; α = 0.5 is taken here to characterize the edge and texture features of the fused image.
The image structural saliency weight map D_{3,n} is defined by the same maximum rule, where N is the number of pixels of the input image, the structural saliency value at pixel k is compared across images, and n is the number of input images.
(i) Weight map construction based on the fast guided filter
The guided filter has an edge-preserving property. Its key assumption is a local linear relationship between the filter output and the guidance image within a window centered on each pixel; however, because the guided filter must compute many pixel values within each window, the algorithm is slow. The present invention therefore applies a fast guided filter to each visual-feature weight map D_{1,n}, D_{2,n}, and D_{3,n}, with the source image I_n as the guidance image, to generate the final weight maps:
W_{m,n}^B = FGF_{r1,ε1}(D_{m,n}, I_n), W_{m,n}^D = FGF_{r2,ε2}(D_{m,n}, I_n), m = 1, 2, 3
where r1, ε1, r2, and ε2 are the parameters of the fast guided filter, and W_{m,n}^B and W_{m,n}^D are the weight maps of the low-frequency and high-frequency components respectively.
(j) Total weight map
To reflect the roles of the contrast, clarity, and structural saliency measures of the image visual features in the fusion weights, the present invention combines the refined weight maps to characterize the total weight maps corresponding to source image I_n, where W_n^B and W_n^D are the total weight maps of the low-frequency and high-frequency components respectively and λ is a parameter between 0 and 1.
Step 3: two-scale image reconstruction
The low-frequency components and high-frequency components of the infrared and visible images are each fused by weighted summation to obtain the fused components:
B_F = Σ_n W_n^B · B_n, D_F = Σ_n W_n^D · D_n
The fused low-frequency component and high-frequency component are then reconstructed to obtain the fused image F = B_F + D_F.
The effect of the invention is further described below with reference to Figs. 3 to 6.
Fig. 2 shows the four groups of infrared and visible source images; Figs. 3 to 6 show the visible and infrared fusion experiment result images.
1. Experimental conditions
The experimental environment is a CPU Intel Core i5 5200U 2.20 GHz with 4 GB of memory, programmed in MATLAB R2014a. Four groups of infrared and visible image sets (256 × 256) are used.
2. Experimental content
Figs. 3 to 6 show comparison images of the four groups of fused infrared and visible images.
Fusion experiments are carried out on the four groups of infrared and visible source images of Fig. 2 with the method of the invention and three existing fusion methods. In the fusion results of Figs. 3 to 6, the compared methods are, in order: the DTCWT algorithm ("Pixel and region based image fusion with complex wavelets", Information Fusion, 2007, 8(2): 119-130), the CVT algorithm ("Remote sensing image fusion using the curvelet transform", Information Fusion, 2007, 8(2): 143-156), the NSCT algorithm ("Infrared and visible image fusion based on the compensation mechanism in NSCT domain", Chinese Journal of Scientific Instrument, 2016, 37(4): 860-870), and the image fusion result of the invention.
Observation shows that, compared with the experimental results of the present method, the fused images of documents 1 to 3 have somewhat reduced contrast, and their background information is coarse and does not reflect well the texture information of the visible image.
From image (2) in Figs. 3 to 6 the bright square can be seen in the visible images, and in image (1) the objects hidden in the foliage and the woods can be identified in the infrared images. The contrast of the fusion results obtained with the DTCWT, CVT, and NSCT algorithms is reduced to some extent, as in images (3) to (5) of Figs. 3 to 6. Visually, the fused images of the present method are clearly better than the comparison fused images.
In images (1) and (2) of Fig. 6, trees, houses, and a highway can be seen in the visible image, while the person undetectable in the visible image can be seen in the infrared image. The fusion result images of the DTCWT, CVT, and NSCT algorithms show somewhat reduced contrast and fail to reflect well the texture information of the visible image, as in Fig. 6 (3) to (5); the fused image of the proposed algorithm is visually better than the above algorithms, as the obtained fused image not only clearly distinguishes the infrared target but also better represents the texture detail information of the visible image.
To further illustrate the effect of the invention, the quality of the fused images is quantitatively assessed with the objective evaluation indices mutual information (MI), information structural similarity (QY), standard deviation (SD), and edge preservation (QAB/F). For the four groups of visible and infrared images, Table 1 compares the quantitative performance of the proposed fusion with that of the other schemes. Combining the subjective visual impression and the objective indices, the proposed fusion method retains the details and texture information of the source images more effectively than the other fusion methods and highlights the salient features of the source images.
Table 1: objective evaluation of the different fusion methods
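Two of the objective indices above are easy to state precisely: the standard deviation SD is the pixel standard deviation of the fused image, and the mutual information MI between a source image and the fused image can be estimated from a joint histogram. A sketch follows (the bin count and the base-2 logarithm are conventional choices, not taken from the patent):

```python
import numpy as np

def std_dev(F):
    # SD index: standard deviation of the fused image (higher = more contrast)
    return float(F.std())

def mutual_information(A, F, bins=32):
    # MI between a source image A and the fused image F, estimated
    # from the joint histogram of their pixel values
    h, _, _ = np.histogram2d(A.ravel(), F.ravel(), bins=bins)
    pxy = h / h.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of A
    py = pxy.sum(axis=0, keepdims=True)   # marginal of F
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

A fused image that preserves a source image well yields a high MI with that source; the usual reported score sums the MI of the fused image with each of the two sources.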
Claims (1)
1. An infrared and visible light image fusion method based on Gaussian filter weighting, characterized by the following steps:
Step 1: decomposition of the source images by a Gaussian filter
Each source image is decomposed with a Gaussian filter to obtain its low-frequency component, and the high-frequency component is obtained by subtracting the low-frequency component from the source image:
B_n = G_{r,σ} * I_n, D_n = I_n − B_n
where I_n is a source image, B_n the low-frequency component of the source image, D_n the high-frequency component of the source image, and G_{r,σ} denotes a Gaussian filter of variance σ and size (2r+1) × (2r+1);
Step 2: measurement of decision values and construction of multi-visual-feature weight maps
Since image contrast, clarity, and structural information are three important features of the visual quality of a fused visible and infrared image, a contrast weight map, a clarity weight map, and a structural saliency weight map are constructed respectively:
(a) contrast weight map: the contrast weight map is built from the local contrast and the Gaussian filter so as to characterize the local features of each image; the contrast saliency map is of the form
C_n(x, y) = G_{r,σ} * Σ_{(i,j)} ω(i, j) |I_n(x + i, y + j) − μ_n(x, y)|
where * is the convolution operator, ω(i, j) is the weight of the 3 × 3 window, i and j index a pixel position within the local window, μ_n(x, y) is the mean of the 3 × 3 window centered at (x, y), and G_{r,σ} is the Gaussian smoothing filter of the (2r+1) × (2r+1) window; the contrast weight map is then defined by the maximum rule
D_{1,n}(k) = 1 if C_n(k) = max(C_1(k), …, C_n(k)), and 0 otherwise, for k = 1, …, N
where N is the number of pixels of the input image, C_n(k) is the contrast saliency value at pixel k, and n is the number of input images;
(b) clarity weight map: the clarity saliency map reflects the edge transitions and sharpness information of the image, S_n(x, y) = Σ_{(i,j)} ω(i, j) ML_n(x + i, y + j), where ω(i, j) is the weight of the 3 × 3 window and ML is the modified Laplacian; the clarity weight map D_{2,n} is then defined by the same maximum rule, where N is the number of pixels of the input image, S_n(k) is the clarity saliency value at pixel k, and n is the number of input images;
(c) structural saliency weight map: according to the ability of the Gaussian filter to efficiently characterize local image structure, a Gaussian-filter-weighted local gradient covariance matrix is designed,
C = G_σ * [ I_x(X)I_x(X), I_x(X)I_y(X); I_x(X)I_y(X), I_y(X)I_y(X) ]
where I_x(X) and I_y(X) are the gradients along the x and y directions at X = (x, y), * is the convolution operator, and G_σ is a Gaussian filter of variance σ; to obtain a representation of the local image structure, the matrix C is decomposed to obtain its eigenvalues λ_1 and λ_2, from which the image structural saliency map is constructed, with α = 0.5 to characterize the edge and texture features of the fused image; the structural saliency weight map D_{3,n} is defined by the same maximum rule, where N is the number of pixels of the input image and n is the number of input images;
(d) weight map refinement based on the fast guided filter: the fast guided filter is applied to each visual-feature weight map D_{1,n}, D_{2,n}, and D_{3,n}, with the source image I_n as the guidance image, to generate the final weight maps, where r1, ε1, r2, and ε2 are the parameters of the fast guided filter and W_{m,n}^B and W_{m,n}^D, m = 1, 2, 3, are the weight maps of the low-frequency and high-frequency components respectively;
(e) total weight map: the refined weight maps are combined to characterize the total weight maps corresponding to source image I_n, where W_n^B and W_n^D are the total weight maps of the low-frequency and high-frequency components respectively and λ is a parameter between 0 and 1;
Step 3: two-scale image reconstruction
The low-frequency components and high-frequency components of the infrared and visible images are each fused by weighted summation, B_F = Σ_n W_n^B · B_n and D_F = Σ_n W_n^D · D_n, and the fused low-frequency and high-frequency components are reconstructed to obtain the fused image F = B_F + D_F.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910093445.1A CN109919884A (en) | 2019-01-30 | 2019-01-30 | Infrared and visible light image fusion method based on gaussian filtering weighting |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109919884A true CN109919884A (en) | 2019-06-21 |
Family
ID=66961163
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910093445.1A Pending CN109919884A (en) | 2019-01-30 | 2019-01-30 | Infrared and visible light image fusion method based on gaussian filtering weighting |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109919884A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104751432A (en) * | 2015-03-09 | 2015-07-01 | 电子科技大学 | Image reconstruction based visible light and infrared image fusion method |
CN104809734A (en) * | 2015-05-11 | 2015-07-29 | 中国人民解放军总装备部军械技术研究所 | Infrared image and visible image fusion method based on guide filtering |
CN105069768A (en) * | 2015-08-05 | 2015-11-18 | 武汉高德红外股份有限公司 | Visible-light image and infrared image fusion processing system and fusion method |
CN106780392A (en) * | 2016-12-27 | 2017-05-31 | 浙江大华技术股份有限公司 | A kind of image interfusion method and device |
CN108389158A (en) * | 2018-02-12 | 2018-08-10 | 河北大学 | A kind of infrared and visible light image interfusion method |
CN108830818A (en) * | 2018-05-07 | 2018-11-16 | 西北工业大学 | A kind of quick multi-focus image fusing method |
Non-Patent Citations (3)
Title |
---|
FU Zhizhong et al., "Infrared and visible light image fusion based on visual saliency and NSCT", Journal of University of Electronic Science and Technology of China * |
ZHOU Zhiqiang et al., "Image fusion method based on hybrid decomposition with bilateral and Gaussian filtering", 《***工程与电子技术》 * |
WANG Jian et al., "Visible and infrared image fusion based on YUV and wavelet transform", Journal of Xi'an Technological University * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111179208A (en) * | 2019-12-09 | 2020-05-19 | 天津大学 | Infrared-visible light image fusion method based on saliency map and convolutional neural network |
CN111179208B (en) * | 2019-12-09 | 2023-12-08 | 天津大学 | Infrared-visible light image fusion method based on saliency map and convolutional neural network |
CN111815549A (en) * | 2020-07-09 | 2020-10-23 | 湖南大学 | Night vision image colorization method based on guided filtering image fusion |
CN111861960A (en) * | 2020-07-17 | 2020-10-30 | 北京理工大学 | Infrared and visible light image fusion method |
CN111861960B (en) * | 2020-07-17 | 2023-09-29 | 北京理工大学 | Infrared and visible light image fusion method |
CN114066786A (en) * | 2020-08-03 | 2022-02-18 | 四川大学 | Infrared and visible light image fusion method based on sparsity and filter |
CN112017139A (en) * | 2020-09-14 | 2020-12-01 | 南昌航空大学 | Infrared and visible light image perception fusion method |
CN112017139B (en) * | 2020-09-14 | 2023-04-07 | 南昌航空大学 | Infrared and visible light image perception fusion method |
CN115205181A (en) * | 2022-09-15 | 2022-10-18 | 季华实验室 | Multi-focus image fusion method and device, electronic equipment and storage medium |
CN117079194A (en) * | 2023-10-12 | 2023-11-17 | 深圳云天畅想信息科技有限公司 | Cloud video AI understanding generation method and device and computer equipment |
CN117079194B (en) * | 2023-10-12 | 2024-01-05 | 深圳云天畅想信息科技有限公司 | Cloud video AI understanding generation method and device and computer equipment |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190621 |