CN113192049A - Visible light and infrared image fusion method based on LatLRR and Retinex enhancement - Google Patents

Visible light and infrared image fusion method based on LatLRR and Retinex enhancement

Info

Publication number
CN113192049A
Authority
CN
China
Prior art keywords
image
fusion
images
detail
retinex
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110535236.5A
Other languages
Chinese (zh)
Other versions
CN113192049B (en)
Inventor
赵辽英
潘巧英
厉小润
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Hangzhou Dianzi University
Original Assignee
Zhejiang University ZJU
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU, Hangzhou Dianzi University filed Critical Zhejiang University ZJU
Priority to CN202110535236.5A priority Critical patent/CN113192049B/en
Publication of CN113192049A publication Critical patent/CN113192049A/en
Application granted granted Critical
Publication of CN113192049B publication Critical patent/CN113192049B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a visible light and infrared image fusion method based on LatLRR and Retinex enhancement. The method preprocesses a visible light image and an infrared image, obtains a base image and several detail images through multi-level latent low-rank decomposition, and reconstructs the decomposed components: the base images are Retinex-enhanced and then fused to obtain a fused base image; the salient part at each level is reconstructed into a detail image, each pair of detail images is fused by multi-visual-feature weighted fusion, and finally the fused base image and the fused multi-level detail images are added to obtain the final fusion image. The fusion result is rich in detail information, scene targets are clearer, and image contrast is improved. The method preserves the detail information of the images while increasing their information content, thereby improving the image fusion quality.

Description

Visible light and infrared image fusion method based on LatLRR and Retinex enhancement
Technical Field
The invention relates to the field of image fusion, in particular to a visible light and infrared image fusion method based on LatLRR and Retinex enhancement.
Background
Image fusion technology combines a visible light image and an infrared image into a composite image better suited to human observation or computer vision tasks; infrared and visible image fusion therefore has promising applications in fields such as military reconnaissance, medical diagnosis and artificial intelligence.
At present there are many algorithms for fusing infrared and visible light images, which can broadly be grouped into three categories: spatial-domain methods, transform-domain methods and deep-learning methods. Spatial-domain fusion methods operate directly on the source images in the spatial domain to obtain the fused image; they have an advantage in speed but can lose detail information from the source images. Transform-domain methods include fusion methods based on multi-scale decomposition and on sparse representation. Multi-scale methods retain the information of the source images effectively and produce salient detail, but alter the structural information of the source images to some extent; sparse-representation methods have good fusion performance but remain limited in capturing global structure and in detail richness. Deep-learning methods require network training in advance before feature extraction and fusion; their performance depends strongly on parameters and the computation cost is high. Recently, researchers proposed a fusion method based on multi-level decomposition with latent low-rank representation (MDLatLRR), which can extract both global and local structure information from the source images; its fused images have rich detail, high definition and good visibility. However, when MDLatLRR is applied to low-illumination images the blur of the original images carries into the result, and its fusion of the salient parts does not make full use of the visually salient features of the detail images, so the fusion effect is degraded.
Disclosure of Invention
The invention aims to provide a visible light and infrared image fusion method based on LatLRR and Retinex enhancement that addresses the shortcomings of current visible and infrared image fusion methods. The illumination of the base image is raised with a Retinex image enhancement method, and the fusion weights of the detail images are determined from multiple visually salient features of the detail images reconstructed from the salient parts, thereby improving the fusion effect.
The technical scheme adopted by the invention is as follows:
Step 1: preprocess the images to generate data suitable for low-rank decomposition:
The visible image $I_1$ and the infrared image $I_2$ are preprocessed into data matrices $P(I_1)$ and $P(I_2)$ for low-rank decomposition, where $P(I_r)$ (r = 1, 2) denotes dividing image $I_r$ into M sub-blocks of size n×n and rearranging each sub-block into a column of data;
Step 2: obtain a base image and multiple detail images by multi-level latent low-rank decomposition:
The visible-light data $P(I_1)$ are decomposed through k levels of latent low-rank decomposition into k detail images $D_1^1, D_1^2, \dots, D_1^k$ and a base image $B_1$; the infrared data $P(I_2)$ are likewise decomposed into k detail images $D_2^1, D_2^2, \dots, D_2^k$ and a base image $B_2$. Here $D_r^j$ denotes the detail component obtained from $P(I_r)$ after the j-th level of decomposition (r = 1, 2), with $D_r^j = R(S_r^j)$, where $R(\cdot)$ denotes rearranging each column of the saliency matrix $S_r^j$ into an n×n matrix and reassembling the blocks into the full detail map;
Step 3: fuse the base images after enhancement and stretching.
Multi-scale Retinex enhancement is applied to the base images $B_1$ and $B_2$, followed by contrast stretching; the enhanced and stretched base images are then fused to obtain the fused base image $F_b$.
Preferably, the contrast stretching employs global linear stretching, piecewise linear stretching, exponential-transform stretching, logarithmic-transform stretching, or the like.
Preferably, the base-image fusion strategy adopts average fusion, weighted fusion, absolute-value-maximum fusion, least-squares fusion, weighted least-squares fusion, or the like.
Preferably, the weighted fusion method first determines the fusion weights of the two images and then performs a weighted summation; it includes pixel-weighted fusion and region-energy weighted fusion.
Step 4: perform multi-visual-feature weighted fusion on the detail images.
Each pair of detail images $D_1^j$ and $D_2^j$ is fused with the multi-visual-feature weighted fusion method to obtain the fused detail image $F_d^j$:
$$F_d^j(x,y)=w_j(x,y)\,D_1^j(x,y)+\big(1-w_j(x,y)\big)\,D_2^j(x,y)$$
where $(x,y)$ denotes the pixel coordinate and $j = 1, \dots, k$.
the multi-vision comprehensive weighting fusion method integrates three indexes of the definition AF, the corner significance EA and the local contrast LC of an image to calculate fusion weight. Preferably, the sharpness is computed using a modified laplacian energy sum, and the corner saliency is computed using a linear structure tensor matrix.
The local contrast calculation formula of the image is as follows:
Figure BDA0003069584650000033
wherein
Figure BDA0003069584650000034
In the image I with IxyAn s x s region that is the center,
Figure BDA0003069584650000035
is represented bypqBeta is a constant, IpqRepresenting pixel points in the s x s region.
The fusion weight is calculated as:
$$w_j^{AF}(x,y)=\frac{AF_1^j(x,y)}{AF_1^j(x,y)+AF_2^j(x,y)}$$
$$w_j^{EA}(x,y)=\frac{EA_1^j(x,y)}{EA_1^j(x,y)+EA_2^j(x,y)}$$
$$w_j^{LC}(x,y)=\frac{LC_1^j(x,y)}{LC_1^j(x,y)+LC_2^j(x,y)}$$
$$w_j(x,y)=\frac{1}{3}\left(w_j^{AF}(x,y)+w_j^{EA}(x,y)+w_j^{LC}(x,y)\right)$$
wherein $w_j^{AF}$ denotes the weight coefficient calculated from sharpness, $w_j^{EA}$ the weight coefficient calculated from edge saliency, and $w_j^{LC}$ the weight coefficient calculated from local contrast; $AF_1^j(x,y)$ and $AF_2^j(x,y)$ denote the local sharpness of the detail images $D_1^j$ and $D_2^j$ at coordinate (x, y), $EA_1^j(x,y)$ and $EA_2^j(x,y)$ their edge saliency at (x, y), and $LC_1^j(x,y)$ and $LC_2^j(x,y)$ their local contrast at (x, y).
Step 5: add the results of steps 3 and 4 to obtain the final fusion result:
$$F = F_b + \sum_{j=1}^{k} F_d^{j}$$
Further, the local sharpness of the image in step 4 is calculated as follows:
Step 4a-1) compute the modified Laplacian at each position of image I:
$$\nabla^2_{ML}I(x,y)=\left|2I(x,y)-I(x-1,y)-I(x+1,y)\right|+\left|2I(x,y)-I(x,y-1)-I(x,y+1)\right|$$
Step 4a-2) compute the sharpness map of the image:
$$AF(x,y)=\sum_{p=-N_1}^{N_1}\sum_{q=-M_1}^{M_1}\nabla^2_{ML}I(x+p,y+q)$$
wherein $N_1$ and $M_1$ are positive integers.
Further, the edge saliency in step 4 is calculated with the linear structure tensor method; the specific process is as follows:
Step 4b-1) compute the linear structure tensor matrix at each position of image I:
$$J_\sigma(x,y)=K_\sigma *\begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}$$
wherein $I_x$ and $I_y$ are the horizontal and vertical gradients of I, $K_\sigma$ is a Gaussian kernel with standard deviation σ, and * denotes the convolution operation.
Step 4b-2) compute the edge saliency.
First compute the eigenvalues of the linear structure tensor matrix $J_\sigma(x,y)$:
$$\lambda_{xy,1}=\frac{1}{2}\left(J_{11}+J_{22}+\sqrt{(J_{11}-J_{22})^2+4J_{12}^2}\right)$$
$$\lambda_{xy,2}=\frac{1}{2}\left(J_{11}+J_{22}-\sqrt{(J_{11}-J_{22})^2+4J_{12}^2}\right)$$
Construct the edge saliency matrix M and the corner saliency matrix N:
$$M(x,y)=\left|\lambda_{xy,1}-\lambda_{xy,2}\right|\qquad(13)$$
$$N(x,y)=\left|\lambda_{xy,1}\,\lambda_{xy,2}\right|\qquad(14)$$
Normalize the matrices M and N to obtain $\bar M$ and $\bar N$, and combine them linearly to obtain the edge saliency:
$$EA(x,y)=k\,\bar M(x,y)+(1-k)\,\bar N(x,y)$$
where $k\in[0,1]$ is a constant.
The beneficial effects of the invention are:
1. The invention performs Retinex enhancement on the base images obtained by latent low-rank representation before fusing them, which raises the illumination of the base images and reduces the influence of blur in the original images on the fusion result;
2. The invention first reconstructs the detail images and then determines their fusion weights from multiple visually salient features of the detail images reconstructed from the salient parts, which better matches human visual perception and improves the fusion effect;
3. The image fusion method of the invention preserves the detail information of the images while increasing the information content of the fused image, thereby improving the fusion quality.
Drawings
FIG. 1 is a flowchart of the infrared and visible image fusion method of the present invention based on latent low-rank decomposition combined with Retinex enhancement;
FIG. 2 shows the infrared and visible images to be fused, wherein (a) is the visible image and (b) is the infrared image;
FIG. 3 shows the fused images for decomposition levels 1-4;
FIG. 4 illustrates the fusion indexes at different decomposition levels.
Detailed Description
The invention is further described below with reference to the drawings and the detailed description. The drawings are for illustration only and shall not be construed as limiting the patent. The technical features of the embodiments of the invention may be combined with one another provided they do not conflict.
A preferred embodiment of the present invention is described in detail below. As shown in FIG. 1, the method comprises the following steps:
Step 1: read the visible image $I_1$ and the infrared image $I_2$ to be fused and preprocess them: each image $I_r$ (r = 1, 2) is divided equally into M sub-blocks of size n×n, and each sub-block is rearranged into a column of data, giving the data matrices $P(I_1)$ and $P(I_2)$ suitable for low-rank decomposition;
Step 2: perform multi-level latent low-rank decomposition on the data matrices $P(I_1)$ and $P(I_2)$ to obtain a base image and several detail images.
The k-level latent low-rank decomposition of the visible-light data $P(I_1)$ is implemented as follows:
Level 1: let $X_1 = P(I_1)$. Latent low-rank decomposition of $X_1$ yields the saliency matrix $S_1^1 = L\,X_1$, where L is the salient coefficient (projection) matrix; the saliency matrix is then reconstructed into the salient image $D_1^1 = R(S_1^1)$, and the base data become $X_2 = X_1 - S_1^1$.
Each subsequent level j (j = 2, …, k) proceeds in the same way: latent low-rank decomposition of $X_j$ yields the saliency matrix $S_1^j = L\,X_j$, which is reconstructed into the salient image $D_1^j = R(S_1^j)$, and the base data become $X_{j+1} = X_j - S_1^j$; after the k-th level the base image is $B_1 = R(X_{k+1})$. Here $R(\cdot)$ denotes the operation of rearranging each column of data into an n×n matrix and reassembling the blocks into the full saliency map.
By the same procedure, the infrared data $P(I_2)$ are decomposed through k levels into k salient images $D_2^1, \dots, D_2^k$ and a low-rank (base) image $B_2$.
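A sketch of this k-level loop is given below. It assumes the projection matrix L has already been learned offline by solving the LatLRR optimization on training patches (as in MDLatLRR); learning L is outside this sketch, and the helper names are illustrative only:

```python
import numpy as np

def fold_columns(P: np.ndarray, n: int, grid: tuple) -> np.ndarray:
    """R(.): fold each column (an n*n block) back to its place on a
    (rows x cols) block grid, assuming non-overlapping blocks."""
    rows, cols = grid
    img = np.zeros((rows * n, cols * n))
    for idx in range(P.shape[1]):
        r, c = divmod(idx, cols)
        img[r * n:(r + 1) * n, c * n:(c + 1) * n] = P[:, idx].reshape(n, n)
    return img

def multilevel_latlrr(P: np.ndarray, L: np.ndarray, k: int, n: int, grid: tuple):
    """k-level latent low-rank decomposition with a pre-learned projection L:
    at level j the salient part is S_j = L @ X_j, the detail image is R(S_j),
    and the base data passed on are X_{j+1} = X_j - S_j."""
    X = P.copy()
    details = []
    for _ in range(k):
        S = L @ X                          # saliency matrix of this level
        details.append(fold_columns(S, n, grid))
        X = X - S                          # base part for the next level
    base = fold_columns(X, n, grid)        # base image after k levels
    return details, base
```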
The infrared and visible images to be fused are shown in FIG. 2.
Step 3: fuse the base images after enhancement and stretching.
Multi-scale Retinex enhancement is applied to the base images $B_1$ and $B_2$, followed by contrast stretching; the enhanced and stretched base images are then fused to obtain the fused base image $F_b$.
The specific implementation method comprises the following steps:
3-1) Estimate the illumination component.
According to Retinex theory, an image can be decomposed into an illumination component and a reflectance component; the core idea is to eliminate or reduce the illumination component while preserving the reflectance characteristics as much as possible, thereby restoring the original appearance of the objects in the image. In this example, the illumination component of the base image is estimated by Gaussian filtering; applying different scales yields different illumination components, and their weighted average gives the component $I_{Ref}$.
3-2) Linearly stretch the component $I_{Ref}$.
To improve the contrast of the image, $I_{Ref}$ is linearly stretched. In this example the stretching is performed as follows:
$$I'_{Ref}=\frac{I_{Ref}-I_{min}}{I_{max}-I_{min}}\,(MAX-MIN)+MIN$$
wherein $I_{min}$ and $I_{max}$ are the minimum and maximum values of $I_{Ref}$, and MIN and MAX are the minimum and maximum values of the gray range to be stretched to.
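The following sketch implements one common reading of steps 3-1 and 3-2: a standard multi-scale Retinex in which the Gaussian-estimated illumination is removed in the log domain and the per-scale results are averaged, followed by the linear stretch above. The scale values and the log-domain subtraction are our assumptions, not confirmed by the patent text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(base: np.ndarray, sigmas=(15, 80, 250), eps=1e-6):
    """Estimate illumination with Gaussian filters at several scales, remove it
    in the log domain, and average the per-scale results into I_Ref."""
    out = np.zeros_like(base, dtype=np.float64)
    for sigma in sigmas:
        illumination = gaussian_filter(base, sigma)   # illumination estimate
        out += np.log(base + eps) - np.log(illumination + eps)
    return out / len(sigmas)

def linear_stretch(i_ref: np.ndarray, out_min=0.0, out_max=255.0):
    """Linearly stretch I_Ref from [I_min, I_max] to the target range [MIN, MAX]."""
    i_min, i_max = i_ref.min(), i_ref.max()
    return (i_ref - i_min) / (i_max - i_min + 1e-12) * (out_max - out_min) + out_min
```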
3-3) Base image fusion.
The enhanced and stretched base images $\tilde B_1$ and $\tilde B_2$ are fused by region-energy weighted fusion to obtain the fused base image $F_b$. The region-energy weighted fusion proceeds as follows:
(1) First, compute a region energy matrix for each base image with a sliding window centered at pixel (m, n):
$$E_r(m,n)=\sum_{x=-X}^{X}\sum_{y=-Y}^{Y} w(x,y)\,\big[\tilde B_r(m+x,n+y)\big]^2,\quad r=1,2$$
where X and Y bound the row and column offsets within the window (the window size is typically 3×3) and $w(x,y)$ is the window coefficient. Sliding the center pixel over the whole image yields the region energy matrix.
(2) Compute the region energy ratio matrix $E_{ratio}$ from the region energies in (1):
$$E_{ratio}(m,n)=\frac{E_2(m,n)}{E_1(m,n)+E_2(m,n)}$$
(3) Determine the fusion weights from the region energy ratio. Two thresholds th1 and th2 are set, with th1 < th2. If $E_{ratio} < th1$, the infrared region energy is far smaller than the visible region energy and can be neglected, so the infrared fusion weight is set to 0 and the visible weight to 1. If $th1 \le E_{ratio} \le th2$, the infrared and visible region energies are comparable, and the weight is determined by the formula below. If $E_{ratio} > th2$, the infrared region energy is far larger than the visible region energy, so the visible weight is set to 0 and the infrared weight to 1. This rule can be formulated as:
$$w_2(m,n)=\begin{cases}0, & E_{ratio}(m,n)<th1\\ E_{ratio}(m,n), & th1\le E_{ratio}(m,n)\le th2\\ 1, & E_{ratio}(m,n)>th2\end{cases}\qquad w_1(m,n)=1-w_2(m,n)$$
(4) Finally, the base part of the fused image is obtained by the weighted combination of the infrared and visible base parts:
$$F_b(m,n)=w_1(m,n)\,\tilde B_1(m,n)+w_2(m,n)\,\tilde B_2(m,n)$$
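A sketch of the region-energy weighted fusion in steps (1)-(4), assuming uniform 3×3 window coefficients and taking the mid-range weight equal to the energy ratio itself (both are assumptions recovered from the description, not verbatim from the patent):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def region_energy_fusion(b_vis, b_ir, th1=0.4, th2=0.6, win=3):
    """Fuse the enhanced base images by region energy: compute windowed
    energies, take the infrared energy ratio, and map it to weights via the
    two thresholds th1 < th2."""
    e_vis = uniform_filter(b_vis ** 2, size=win)   # visible region energy
    e_ir = uniform_filter(b_ir ** 2, size=win)     # infrared region energy
    e_ratio = e_ir / (e_vis + e_ir + 1e-12)

    w_ir = e_ratio.copy()                          # mid-range: weight = ratio
    w_ir[e_ratio < th1] = 0.0                      # infrared negligible
    w_ir[e_ratio > th2] = 1.0                      # infrared dominant
    return (1.0 - w_ir) * b_vis + w_ir * b_ir      # fused base image F_b
```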
Step 4: multi-visual-feature weighted fusion of each pair of detail images. (The fused images at decomposition levels 1-4 and the corresponding fusion indexes are shown in FIG. 3 and FIG. 4.)
Each pair of detail images $D_1^j$ and $D_2^j$ (j = 1, …, k) is fused with the multi-visual-feature weighted fusion method to obtain the fused detail image $F_d^j$, where $(x,y)$ denotes the pixel coordinate. The method combines three indexes, the sharpness AF, the edge saliency EA and the local contrast LC of the image, to calculate the fusion weight.
The sharpness is computed with the sum of modified Laplacian, as follows:
Step 4a-1) compute the modified Laplacian at each position of image I:
$$\nabla^2_{ML}I(x,y)=\left|2I(x,y)-I(x-1,y)-I(x+1,y)\right|+\left|2I(x,y)-I(x,y-1)-I(x,y+1)\right|$$
Step 4a-2) compute the sharpness map of the image:
$$AF(x,y)=\sum_{p=-N_1}^{N_1}\sum_{q=-M_1}^{M_1}\nabla^2_{ML}I(x+p,y+q)$$
wherein $N_1$ and $M_1$ are positive integers.
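A sketch of the sum-of-modified-Laplacian sharpness map, with N1 = M1 = radius (the window size and border handling are our choices):

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def sharpness_map(img: np.ndarray, radius: int = 1) -> np.ndarray:
    """AF map: modified Laplacian |2I - left - right| + |2I - up - down|,
    summed over a (2*radius+1)^2 neighborhood."""
    kx = np.array([[0., 0., 0.], [-1., 2., -1.], [0., 0., 0.]])
    ml = np.abs(convolve(img, kx, mode='nearest')) + \
         np.abs(convolve(img, kx.T, mode='nearest'))
    size = 2 * radius + 1
    # windowed sum = windowed mean * window area
    return uniform_filter(ml, size=size) * size * size
```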
The edge saliency is calculated with the linear structure tensor matrix, as follows:
Step 4b-1) compute the linear structure tensor matrix at each position of image I:
$$J_\sigma(x,y)=K_\sigma *\begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}$$
wherein $I_x$ and $I_y$ are the horizontal and vertical gradients of I, $K_\sigma$ is a Gaussian kernel with standard deviation σ, and * denotes the convolution operation.
Step 4b-2) compute the edge saliency.
First compute the eigenvalues of $J_\sigma(x,y)$:
$$\lambda_{xy,1}=\frac{1}{2}\left(J_{11}+J_{22}+\sqrt{(J_{11}-J_{22})^2+4J_{12}^2}\right)$$
$$\lambda_{xy,2}=\frac{1}{2}\left(J_{11}+J_{22}-\sqrt{(J_{11}-J_{22})^2+4J_{12}^2}\right)$$
Construct the edge saliency matrix M and the corner saliency matrix N:
$$M(x,y)=\left|\lambda_{xy,1}-\lambda_{xy,2}\right|$$
$$N(x,y)=\left|\lambda_{xy,1}\,\lambda_{xy,2}\right|$$
Normalize M and N to obtain $\bar M$ and $\bar N$, and combine them linearly to obtain the edge saliency:
$$EA(x,y)=k\,\bar M(x,y)+(1-k)\,\bar N(x,y)$$
where $k\in[0,1]$ is a constant.
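A sketch of the structure-tensor edge saliency. Sobel gradients and the product form |λ1·λ2| for the corner matrix N are assumptions recovered from the garbled formulas:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_saliency(img: np.ndarray, sigma: float = 2.0, k: float = 0.5):
    """EA map: Gaussian-smoothed gradient outer products, closed-form 2x2
    eigenvalues, then a linear mix of the normalized edge measure |l1 - l2|
    and corner measure |l1 * l2|."""
    ix, iy = sobel(img, axis=1), sobel(img, axis=0)
    j11 = gaussian_filter(ix * ix, sigma)
    j12 = gaussian_filter(ix * iy, sigma)
    j22 = gaussian_filter(iy * iy, sigma)

    trace = j11 + j22
    root = np.sqrt((j11 - j22) ** 2 + 4.0 * j12 ** 2)
    lam1, lam2 = 0.5 * (trace + root), 0.5 * (trace - root)

    m = np.abs(lam1 - lam2)                    # edge saliency matrix M
    n = np.abs(lam1 * lam2)                    # corner saliency matrix N
    m = m / (m.max() + 1e-12)                  # normalize M
    n = n / (n.max() + 1e-12)                  # normalize N
    return k * m + (1.0 - k) * n               # EA = k*M + (1-k)*N
```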
The local contrast of the image is calculated as:
$$LC(x,y)=\frac{\sum_{(p,q)\in\Omega_{xy}}\left|I_{pq}-\bar I_{xy}\right|}{\left(\bar I_{xy}\right)^{\beta}}$$
wherein $\Omega_{xy}$ is the $s\times s$ region of image I centered at $I_{xy}$, $\bar I_{xy}$ denotes the mean of the pixels $I_{pq}$ within that region, and β is a constant.
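A sketch of the local contrast under the reconstruction above (mean absolute deviation over the s×s region, normalized by the region mean raised to β). Detail images straddle zero, so the mean enters in absolute value here, which is our choice:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(img: np.ndarray, s: int = 3, beta: float = 0.75):
    """LC map: sum of |I_pq - mean| over the s x s region divided by
    (region mean)^beta (approximate: the deviations use per-pixel means)."""
    mean = uniform_filter(img, size=s)                       # region mean
    dev_sum = uniform_filter(np.abs(img - mean), size=s) * s * s
    return dev_sum / (np.abs(mean) ** beta + 1e-12)
```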
The fusion weight is calculated as:
$$w_j(x,y)=\frac{1}{3}\left(w_j^{AF}(x,y)+w_j^{EA}(x,y)+w_j^{LC}(x,y)\right)$$
wherein
$$w_j^{AF}(x,y)=\frac{AF_1^j(x,y)}{AF_1^j(x,y)+AF_2^j(x,y)}$$
$$w_j^{EA}(x,y)=\frac{EA_1^j(x,y)}{EA_1^j(x,y)+EA_2^j(x,y)}$$
$$w_j^{LC}(x,y)=\frac{LC_1^j(x,y)}{LC_1^j(x,y)+LC_2^j(x,y)}$$
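Combining the three indexes into the fusion weight and fusing one detail-image pair; averaging the three per-index weights is the combination assumed in the reconstruction above:

```python
import numpy as np

def fuse_detail_pair(d_vis, d_ir, af, ea, lc, eps=1e-12):
    """Fuse one pair of detail images. af, ea, lc are (visible, infrared)
    tuples of the AF, EA and LC maps computed by the functions above."""
    w_af = af[0] / (af[0] + af[1] + eps)    # sharpness-based weight
    w_ea = ea[0] / (ea[0] + ea[1] + eps)    # edge-saliency-based weight
    w_lc = lc[0] / (lc[0] + lc[1] + eps)    # local-contrast-based weight
    w = (w_af + w_ea + w_lc) / 3.0          # combined weight w_j
    return w * d_vis + (1.0 - w) * d_ir     # fused detail image F_d^j
```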
Step 5: the final fusion result is:
$$F = F_b + \sum_{j=1}^{k} F_d^{j}$$
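Finally, an end-to-end usage sketch that ties the illustrative helpers above together; random arrays stand in for the outputs of the multi-level decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
base_vis, base_ir = rng.random((64, 64)), rng.random((64, 64))
details_vis = [rng.random((64, 64)) for _ in range(3)]   # k = 3 levels
details_ir = [rng.random((64, 64)) for _ in range(3)]

# Step 3: enhance, stretch and fuse the base images
f_base = region_energy_fusion(linear_stretch(multiscale_retinex(base_vis)),
                              linear_stretch(multiscale_retinex(base_ir)))

# Steps 4-5: fuse each detail pair and add everything up (F = F_b + sum F_d^j)
fused = f_base
for d_vis, d_ir in zip(details_vis, details_ir):
    af = (sharpness_map(d_vis), sharpness_map(d_ir))
    ea = (edge_saliency(d_vis), edge_saliency(d_ir))
    lc = (local_contrast(d_vis), local_contrast(d_ir))
    fused = fused + fuse_detail_pair(d_vis, d_ir, af, ea, lc)
```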
the invention also provides a visible light and infrared image fusion system based on LatLRR and Retinex enhancement, which comprises an image preprocessing module, an image decomposition module, a basic image enhancement stretching fusion module, a detail image weighting fusion module and an image fusion module;
the image preprocessing module is specifically realized as follows: for visible light image I1And an infrared image I2Carrying out pretreatment: image data P (I) processed as a low rank decomposition1) And P (I)2) Said P (I)r) (r is 1,2) denotes the image IrDividing the image into M subblocks with the size of n multiplied by n, and rearranging each image subblock into a row of data;
the image decomposition module is specifically realized as follows: the multi-stage potential low-rank decomposition obtains a base image and a plurality of detail images, visible light image data P (I)1) K detail images are obtained through k-level potential low-rank decomposition
Figure BDA0003069584650000105
And a base image
Figure BDA0003069584650000106
Infrared image data P (I)2) K detail images are obtained through k-level potential low-rank decomposition
Figure BDA0003069584650000107
Figure BDA0003069584650000108
And a base image
Figure BDA0003069584650000109
The above-mentioned
Figure BDA00030695846500001010
Represents P (I)r) Detail obtained after j-th decomposition, where r is 1, 2;
Figure BDA00030695846500001011
show that
Figure BDA00030695846500001012
Rearranging each row of data into an n multiplied by n matrix and then reconstructing the matrix into a whole detail diagram;
the basic image enhancement stretching fusion module is specifically realized as follows: for the basic image
Figure BDA00030695846500001013
And
Figure BDA00030695846500001014
performing multi-scale Retinex enhancement, then performing contrast stretching, fusing the basic image after the enhancement stretching processing to obtain a fused basic image
Figure BDA00030695846500001015
The detail-image weighted fusion module is implemented as follows: each pair of detail images $D_1^j$ and $D_2^j$ is fused with the multi-visual-feature weighted fusion method to obtain the fused detail image $F_d^j$;
The image fusion module is implemented as follows: the output of the base-image enhancement-stretching fusion module and the output of the detail-image weighted fusion module are added to obtain the final fusion result.

Claims (7)

1. The visible light and infrared image fusion method based on LatLRR and Retinex enhancement is characterized by comprising the following steps:
Step 1: preprocess the images to generate data suitable for low-rank decomposition:
the visible image $I_1$ and the infrared image $I_2$ are preprocessed into data matrices $P(I_1)$ and $P(I_2)$ for low-rank decomposition, where $P(I_r)$ (r = 1, 2) denotes dividing image $I_r$ into M sub-blocks of size n×n and rearranging each sub-block into a column of data;
Step 2: obtain a base image and several detail images by multi-level latent low-rank decomposition:
the visible-light data $P(I_1)$ are decomposed through k levels of latent low-rank decomposition into k detail images $D_1^1, \dots, D_1^k$ and a base image $B_1$; the infrared data $P(I_2)$ are decomposed through k levels into k detail images $D_2^1, \dots, D_2^k$ and a base image $B_2$; $D_r^j$ denotes the detail component obtained from $P(I_r)$ after the j-th level of decomposition (r = 1, 2), and $R(\cdot)$ denotes rearranging each column of data into an n×n matrix and reassembling the blocks into the full detail map;
Step 3: fuse the base images after enhancement and stretching;
Step 4: perform multi-visual-feature weighted fusion on the detail images;
Step 5: add the results of steps 3 and 4 to obtain the final fusion result.
2. The LatLRR and Retinex enhanced visible and infrared image fusion method according to claim 1, characterized in that step 3 is implemented as follows:
multi-scale Retinex enhancement is applied to the base images $B_1$ and $B_2$, followed by contrast stretching; the enhanced and stretched base images are fused to obtain the fused base image $F_b$;
the image contrast stretching adopts global linear stretching, piecewise linear stretching, exponential-transform stretching or logarithmic-transform stretching;
the base-image fusion strategy adopts average fusion, weighted fusion, absolute-value-maximum fusion, least-squares fusion or weighted least-squares fusion;
the weighted fusion method first determines the fusion weights of the two images and then performs a weighted summation, and includes pixel-weighted fusion and region-energy weighted fusion.
3. The LatLRR and Retinex enhanced visible and infrared image fusion method according to claim 1 or 2, characterized in that step 4 is implemented as follows:
each pair of detail images $D_1^j$ and $D_2^j$ is fused with the multi-visual-feature weighted fusion method to obtain the fused detail image:
$$F_d^j(x,y)=w_j(x,y)\,D_1^j(x,y)+\big(1-w_j(x,y)\big)\,D_2^j(x,y)$$
wherein $(x,y)$ denotes the pixel coordinate, $j = 1, \dots, k$, and $w_j(x,y)$ denotes the fusion weight.
4. The LatLRR and Retinex enhanced visible and infrared image fusion method of claim 3, wherein the multi-visual-feature weighted fusion method combines three indexes, the image sharpness AF, the edge saliency EA and the local contrast LC, to calculate the fusion weight;
the local contrast of the image is calculated as:
$$LC(x,y)=\frac{\sum_{(p,q)\in\Omega_{xy}}\left|I_{pq}-\bar I_{xy}\right|}{\left(\bar I_{xy}\right)^{\beta}}$$
wherein $\Omega_{xy}$ is the $s\times s$ region of image I centered at $I_{xy}$, $\bar I_{xy}$ denotes the mean of the pixels $I_{pq}$ within that region, β is a constant, and $I_{pq}$ denotes a pixel in the $s\times s$ region;
the fusion weight is calculated as:
$$w_j^{AF}(x,y)=\frac{AF_1^j(x,y)}{AF_1^j(x,y)+AF_2^j(x,y)}$$
$$w_j^{EA}(x,y)=\frac{EA_1^j(x,y)}{EA_1^j(x,y)+EA_2^j(x,y)}$$
$$w_j^{LC}(x,y)=\frac{LC_1^j(x,y)}{LC_1^j(x,y)+LC_2^j(x,y)}$$
$$w_j(x,y)=\frac{1}{3}\left(w_j^{AF}(x,y)+w_j^{EA}(x,y)+w_j^{LC}(x,y)\right)$$
wherein $w_j^{AF}$ denotes the weight coefficient calculated from sharpness, $w_j^{EA}$ the weight coefficient calculated from edge saliency, and $w_j^{LC}$ the weight coefficient calculated from local contrast; $AF_1^j(x,y)$ and $AF_2^j(x,y)$ denote the local sharpness of the detail images $D_1^j$ and $D_2^j$ at coordinate (x, y), $EA_1^j(x,y)$ and $EA_2^j(x,y)$ their edge saliency at (x, y), and $LC_1^j(x,y)$ and $LC_2^j(x,y)$ their local contrast at (x, y).
5. The LatLRR and Retinex enhanced visible light and infrared image fusion method according to claim 4, characterized in that the local sharpness of the image in step 4 is calculated as follows:
Step 4a-1) compute the modified Laplacian at each position of image I:
$$\nabla^2_{ML}I(x,y)=\left|2I(x,y)-I(x-1,y)-I(x+1,y)\right|+\left|2I(x,y)-I(x,y-1)-I(x,y+1)\right|$$
Step 4a-2) compute the sharpness map of the image:
$$AF(x,y)=\sum_{p=-N_1}^{N_1}\sum_{q=-M_1}^{M_1}\nabla^2_{ML}I(x+p,y+q)$$
wherein $N_1$ and $M_1$ are positive integers.
6. The LatLRR and Retinex enhanced visible light and infrared image fusion method according to claim 5, wherein the edge saliency in step 4 is calculated by the linear structure tensor method, specifically:
Step 4b-1) compute the linear structure tensor matrix at each position of image I:
$$J_\sigma(x,y)=K_\sigma *\begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix}$$
wherein $I_x$ and $I_y$ are the horizontal and vertical gradients of I, $K_\sigma$ is a Gaussian kernel with standard deviation σ, and * denotes the convolution operation;
Step 4b-2) compute the edge saliency:
first compute the eigenvalues of the linear structure tensor matrix $J_\sigma(x,y)$:
$$\lambda_{xy,1}=\frac{1}{2}\left(J_{11}+J_{22}+\sqrt{(J_{11}-J_{22})^2+4J_{12}^2}\right)$$
$$\lambda_{xy,2}=\frac{1}{2}\left(J_{11}+J_{22}-\sqrt{(J_{11}-J_{22})^2+4J_{12}^2}\right)$$
secondly, construct the edge saliency matrix M and the corner saliency matrix N:
$$M(x,y)=\left|\lambda_{xy,1}-\lambda_{xy,2}\right|\qquad(12)$$
$$N(x,y)=\left|\lambda_{xy,1}\,\lambda_{xy,2}\right|\qquad(13)$$
finally, normalize the matrices M and N to obtain $\bar M$ and $\bar N$, and combine them linearly to obtain the edge saliency:
$$EA(x,y)=k\,\bar M(x,y)+(1-k)\,\bar N(x,y)$$
where $k\in[0,1]$ is a constant.
7. The LatLRR and Retinex enhanced visible and infrared image fusion method of claim 6, wherein the final fusion result obtained in step 5 is:
$$F = F_b + \sum_{j=1}^{k} F_d^{j}$$
CN202110535236.5A 2021-05-17 2021-05-17 Visible light and infrared image fusion method based on LatLRR and Retinex enhancement Active CN113192049B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110535236.5A CN113192049B (en) 2021-05-17 2021-05-17 Visible light and infrared image fusion method based on LatLRR and Retinex enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110535236.5A CN113192049B (en) 2021-05-17 2021-05-17 Visible light and infrared image fusion method based on LatLRR and Retinex enhancement

Publications (2)

Publication Number Publication Date
CN113192049A true CN113192049A (en) 2021-07-30
CN113192049B CN113192049B (en) 2024-02-06

Family

ID=76982231

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110535236.5A Active CN113192049B (en) 2021-05-17 2021-05-17 Visible light and infrared image fusion method based on LatLRR and Retinex enhancement

Country Status (1)

Country Link
CN (1) CN113192049B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109539A (en) * 2023-03-21 2023-05-12 智洋创新科技股份有限公司 Infrared image texture information enhancement method and system based on generation of countermeasure network
WO2023231094A1 (en) * 2022-05-31 2023-12-07 广东海洋大学 Underwater sea urchin image recognition model training method and device, and underwater sea urchin image recognition method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109345494A (en) * 2018-09-11 2019-02-15 中国科学院长春光学精密机械与物理研究所 Image interfusion method and device based on potential low-rank representation and structure tensor
CN110148104A (en) * 2019-05-14 2019-08-20 西安电子科技大学 Infrared and visible light image fusion method based on significance analysis and low-rank representation


Also Published As

Publication number Publication date
CN113192049B (en) 2024-02-06

Similar Documents

Publication Publication Date Title
CN108875935B (en) Natural image target material visual characteristic mapping method based on generation countermeasure network
Fan et al. Integrating semantic segmentation and retinex model for low-light image enhancement
Hu et al. Singular value decomposition and local near neighbors for face recognition under varying illumination
Yang et al. An adaptive method for image dynamic range adjustment
CN107798661B (en) Self-adaptive image enhancement method
CN109872285A (en) A kind of Retinex low-luminance color image enchancing method based on variational methods
CA3137297C (en) Adaptive convolutions in neural networks
CN110136075B (en) Remote sensing image defogging method for generating countermeasure network based on edge sharpening cycle
CN112257766A (en) Shadow recognition detection method under natural scene based on frequency domain filtering processing
Zong et al. Local-CycleGAN: a general end-to-end network for visual enhancement in complex deep-water environment
CN113192049A (en) Visible light and infrared image fusion method based on LatLRR and Retinex enhancement
Ding et al. U2D2Net: Unsupervised unified image dehazing and denoising network for single hazy image enhancement
Lepcha et al. A deep journey into image enhancement: A survey of current and emerging trends
CN113039576A (en) Image enhancement system and method
Huo et al. High‐dynamic range image generation from single low‐dynamic range image
Yuan et al. Single image dehazing via NIN-DehazeNet
CN114549567A (en) Disguised target image segmentation method based on omnibearing sensing
CN112509144A (en) Face image processing method and device, electronic equipment and storage medium
CN116012255A (en) Low-light image enhancement method for generating countermeasure network based on cyclic consistency
Wang et al. Single Underwater Image Enhancement Based on Lp-Norm Decomposition
Lei et al. Low-light image enhancement using the cell vibration model
Liu et al. Low-light image enhancement network based on recursive network
Hou et al. Reconstructing a high dynamic range image with a deeply unsupervised fusion model
Pu et al. Fractional-order retinex for adaptive contrast enhancement of under-exposed traffic images
Singh et al. Multiscale reflection component based weakly illuminated nighttime image enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant