CN110349117B - Infrared image and visible light image fusion method and device and storage medium

Info

Publication number: CN110349117B
Application number: CN201910579632.0A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN110349117A
Prior art keywords: image, visible light, infrared, formula, wolf
Legal status: Active (granted)
Inventors: 冯鑫, 胡开群, 袁毅, 陈希瑞, 张建华, 翟治芬
Original and current assignee: Chongqing Technology and Business University
Priority and filing date: 2019-06-28
Publication date of CN110349117A (application): 2019-10-18
Publication date of CN110349117B (grant): 2023-02-28

Classifications

    • G06T 5/00: Image enhancement or restoration (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general)
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/10048: Image acquisition modality: infrared image
    • G06T 2207/20221: Image combination; image fusion; image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method, a device and a storage medium for fusing an infrared image and a visible light image. The method comprises: performing difference calculation on the infrared image and the visible light image to obtain a difference image; decomposing the infrared image, the visible light image and the difference image according to a total variation model to obtain the cartoon and texture components of each image; constructing a fitness function for a wolf pack optimization iterative algorithm; determining the weight terms and weight coefficients among the decomposed components; and performing a weighted calculation to obtain the fused image. By decomposing the source images and the difference image into cartoon and texture components, determining the weight terms and weight coefficients through the wolf pack optimization iterative algorithm, and weighting and combining them into the final fused image, the method yields a fusion result that is robust to noise while retaining complete contour and detail information, with high definition and contrast.

Description

Infrared image and visible light image fusion method and device and storage medium
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and device for fusing an infrared image and a visible light image, and a storage medium.
Background
Infrared sensors reflect real scenes imperfectly, and the images they form have low resolution and a low signal-to-noise ratio; visible light sensors can clearly capture the detail information of a scene under suitable conditions, but their imaging is easily affected by natural conditions such as illumination and weather. Exploiting this complementarity, image fusion can mine the characteristic information of the source images, highlighting target information and improving a vision system's understanding of scene information for purposes such as camouflage identification and night vision. Research on infrared and visible light image fusion helps advance and refine image fusion theory, and its results not only serve as a reference for image fusion in other fields but also carry significance for national defense and national construction. In the civil field, such fusion has been successfully applied in tracking and positioning, fire early warning, package security inspection, and night driving systems. In the military field, it yields more accurate and reliable target information and comprehensive scene information, allowing targets to be captured even under severe meteorological conditions; for example, an infrared/visible dual-band sniper sight can support precise strikes in harsh environments and improve all-weather combat capability.
At present, infrared and visible light image fusion methods fall into two categories: those based on multi-scale analysis and those based on sparse representation. Both tend to lose detail information, and multi-scale methods additionally require a reconstruction step, so artifacts easily appear in the result and hinder later recognition.
Disclosure of Invention
The invention aims to solve the above technical problems of the prior art by providing a method, a device and a storage medium for fusing an infrared image and a visible light image.
The technical scheme of the invention for solving the above technical problems is as follows: a method for fusing an infrared image and a visible light image comprises the following steps:
performing difference calculation on the infrared image and the visible light image to obtain an infrared and visible light difference image;
decomposing the infrared image, the visible light image and the infrared and visible light difference image respectively according to a total variation model to obtain the infrared image cartoon and texture components, the visible light image cartoon and texture components and the difference image cartoon and texture components;
constructing a fitness function for the wolf pack optimization iterative algorithm, specifically from the fusion indexes of information entropy, standard deviation and edge retention;
determining, according to the wolf pack optimization iterative algorithm and the constructed fitness function, the weight terms and corresponding weight coefficients among the infrared image cartoon and texture components, the visible light image cartoon and texture components and the difference image cartoon and texture components, to serve as the weight terms and weight coefficients of the fused image components, the fused image being the image obtained by combining the infrared image and the visible light image; and
directly performing a weighted calculation with the determined weight terms and weight coefficients to obtain the fused image.
Another technical solution of the present invention for solving the above technical problems is as follows: an infrared image and visible light image fusion device, comprising:
a difference processing module for performing difference calculation on the infrared image and the visible light image to obtain an infrared and visible light difference image;
a decomposition module for decomposing the infrared image, the visible light image and the infrared and visible light difference image respectively according to a total variation model to obtain the infrared image cartoon and texture components, the visible light image cartoon and texture components and the difference image cartoon and texture components;
a function construction module for constructing a fitness function for the wolf pack optimization iterative algorithm;
a weight determination module for determining, according to the wolf pack optimization iterative algorithm and the constructed fitness function, the weight terms and corresponding weight coefficients among the infrared image cartoon and texture components, the visible light image cartoon and texture components and the difference image cartoon and texture components, to serve as the weight terms and weight coefficients of the fused image components, the fused image being the image obtained by combining the infrared image and the visible light image; and
a fusion module for performing a weighted calculation with the determined weight terms and weight coefficients to obtain the fused image.
Another technical solution of the present invention for solving the above technical problems is as follows: an infrared image and visible light image fusion device comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein when the processor executes the computer program, the infrared image and visible light image fusion method is realized.
Another technical solution of the present invention for solving the above technical problems is as follows: a computer-readable storage medium, storing a computer program which, when executed by a processor, implements the infrared image and visible light image fusion method as described above.
The invention has the following beneficial effects: the infrared source image, the visible light source image and the difference image are decomposed into cartoon and texture components by a total variation model; weight terms and weight coefficients are determined over the source images and components by the wolf pack optimization iterative algorithm; and the determined weight terms and weight coefficients are weighted and combined to yield the final fused image.
Drawings
FIG. 1 is a schematic flow chart of a fusion method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of image decomposition according to an embodiment of the present invention;
FIG. 3 is a block diagram of a fusion apparatus according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating the effect of each image component according to an embodiment of the present invention;
FIG. 5 is a comparative graph of the experiment provided by the embodiment of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the drawings, which are set forth to illustrate the invention but not to limit its scope.
As shown in FIG. 1, a method for fusing an infrared image and a visible light image includes the following steps.
Perform difference calculation on the infrared image and the visible light image to obtain an infrared and visible light difference image.
Decompose the infrared image, the visible light image and the infrared and visible light difference image according to a total variation model to obtain the cartoon and texture components of each image.
Construct a fitness function for the wolf pack optimization iterative algorithm; specifically, construct it from the fusion indexes of information entropy, standard deviation and edge retention.
Determine, according to the wolf pack optimization iterative algorithm and the constructed fitness function, the weight terms and corresponding weight coefficients among the infrared image, visible light image and difference image cartoon and texture components, to serve as the weight terms and weight coefficients of the fused image components, where the fused image is the image obtained by combining the infrared image and the visible light image.
Perform a weighted calculation with the determined weight terms and weight coefficients to obtain the fused image.
In this embodiment, the infrared source image, the visible light source image and the difference image are decomposed into cartoon and texture components by the total variation model, weight terms and weight coefficients are determined over the source images and components by the wolf pack optimization iterative algorithm, and the determined weight terms and weight coefficients are weighted and combined to yield the final fused image, giving the method strong noise robustness.
Optionally, as an embodiment of the present invention, as shown in FIG. 2, the process of decomposing the infrared image, the visible light image and the infrared and visible light difference image according to a total variation model includes the following.
the infrared image, the visible light image and the infrared and visible light differential image are all noise-free source images,
when the infrared image is subjected to decomposition calculation, defining the infrared image as follows according to a total variation problem in a total variation model:
I inf =T inf +C inf
wherein, I inf Representing an infrared image, T inf Representing the texture component, C, of an infrared light image inf Representing the cartoon component of the infrared light image,
when the visible light image is subjected to decomposition calculation, the visible light image is defined as follows according to a total variation problem in a total variation model:
I vis =T vis +C vis
wherein, I vis Representing a visible light image, T vis Representing the texture component of the visible light differential image, C vis Representing the cartoon component of the visible light differential image,
when the infrared and visible light differential image is subjected to decomposition calculation, defining the infrared and visible light differential image as follows according to a total variation problem in a total variation model:
I dif =T dif +C dif
wherein, I dif Representing a differential image of infrared and visible light, T dif Representing the texture component, C, of the infrared and visible differential image dif Representing cartoon component components of the infrared and visible light differential image;
the total variation model is TV-l 1 A model according to said TV-l when performing decomposition calculation on said infrared image 1 The model calculates a minimization function corresponding to the infrared image, the minimization function formula corresponding to the infrared image is expressed as a first formula, and the first formula is as follows:
Figure BDA0002112804420000051
wherein the solution of the first formula is the cartoon component of the infrared light image,
Figure BDA0002112804420000052
the total variation regularization term expressed as cartoon component of the infrared image, + lambda I inf -C inf || 1 d Ω is denoted as a fidelity term, λ is a regularization parameter,
performing decomposition calculation on the visible light image according to the TV-l 1 The model calculates a minimization function corresponding to the infrared image, and the minimization function formula corresponding to the visible light image is expressed as a second formula which is:
Figure BDA0002112804420000061
wherein the solution of the second expression is cartoon component of the visible light image,
Figure BDA0002112804420000062
expressed as the total variation regularization term of cartoon component of visible image, lambda I vis -C vis || 1 d Ω is denoted as a fidelity term, λ is a regularization parameter,
according to the TV-l when the infrared and visible light differential image is decomposed and calculated 1 The model calculates a minimization function corresponding to the infrared image, and the minimization function formula of the infrared and visible light differential image is expressed as a third formula which is:
Figure BDA0002112804420000063
wherein the solution of the third formula is cartoon component components of the infrared and visible light images,
Figure BDA0002112804420000064
expressed as a total variation regularization term of cartoon component components of infrared and visible light images, lambda I dif -C dif || 1 d Ω is expressed as a fidelity term, and λ is expressed as a regularization parameter;
when the infrared image is decomposed and calculated, calculating the texture component of the infrared image according to a fourth formula, wherein the fourth formula is as follows:
T inf =I inf -C inf
when the decomposition calculation is carried out on the visible light image, calculating texture component components of the visible light image according to a fifth formula, wherein the fifth formula is as follows:
T vis =I vis -C vis
when the infrared and visible light differential image is decomposed and calculated, calculating texture component components of the infrared and visible light differential image according to a sixth formula, wherein the sixth formula is as follows:
T dif =I dif -C dif
solving the optimization problems of the minimization function of the infrared image, the minimization function of the visible light image and the minimization function of the infrared and visible light differential image according to a gradient descent method:
Figure BDA0002112804420000065
wherein (i, j) represents the position and parameters of pixel points in the infrared light image or the visible light image or the infrared and visible light differential image
Figure BDA0002112804420000071
And
Figure BDA0002112804420000072
the difference between forward and backward is shown separately,
Figure BDA0002112804420000073
representing the magnitude of the gradient, n the number of iterations, am and an are the distances on the image grid, at represents the amount of time variation,
Figure BDA0002112804420000074
epsilon is set to a minimum value.
In this embodiment, decomposition uses the total variation model, which has inherent noise robustness: the fidelity term forces the cartoon component to stay close to the original image, while the regularization parameter balances the total variation regularization term against the fidelity term so that texture details are extracted well. The fusion result thus keeps the best detail information while achieving high edge retention, improving fusion quality.
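For readers who want to experiment with this decomposition, the following is a minimal NumPy sketch of the TV-l1 gradient-descent iteration described above; the function name and the settings for λ, the time step and the iteration count are illustrative choices, not values prescribed by the patent.

import numpy as np

def tv_l1_decompose(image, lam=0.8, dt=0.1, n_iters=200, eps=1e-8):
    # Split `image` into cartoon + texture components with the TV-l1 model,
    # using the explicit gradient-descent iteration described above.
    I = image.astype(np.float64)
    C = I.copy()  # initialize the cartoon component from the source image
    for _ in range(n_iters):
        # forward differences, zeroed at the far boundary
        Cx = np.roll(C, -1, axis=1) - C
        Cy = np.roll(C, -1, axis=0) - C
        Cx[:, -1] = 0.0
        Cy[-1, :] = 0.0
        grad = np.sqrt(Cx ** 2 + Cy ** 2 + eps)  # regularized gradient magnitude
        px, py = Cx / grad, Cy / grad
        # divergence of (grad C / |grad C|) via backward differences
        div_x = px - np.roll(px, 1, axis=1)
        div_x[:, 0] = px[:, 0]
        div_y = py - np.roll(py, 1, axis=0)
        div_y[0, :] = py[0, :]
        # TV flow plus the subgradient of the l1 fidelity term
        C = C + dt * (div_x + div_y + lam * np.sign(I - C))
    T = I - C  # texture component, as in the fourth/fifth/sixth formulas
    return C, T

Applying such a routine to the infrared image, the visible light image and their difference image yields the six components discussed below.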
Optionally, as an embodiment of the present invention, the step of constructing the fitness function of the wolf pack optimization iterative algorithm includes:
assuming the fused image is I_F and constructing the fitness function
S = E(I_F) · Std(I_F) · Edge(I_F),
where E(I_F) is the information entropy of the fused image, Std(I_F) is the standard deviation of the fused image, and Edge(I_F) is the edge retention of the fused image.
The process of calculating the entropy includes: the formula for calculating the entropy is
E = −Σ_i p_i · log₂ p_i,
where p_i is the probability distribution of the image pixel gray levels.
The process of calculating the standard deviation includes: the formula for calculating the standard deviation is
Std = sqrt( (1 / (M·N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} ( I_F(i, j) − μ )² ),
where M and N are the image dimensions and μ is the mean pixel value of I_F.
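A short sketch of the two statistics above, computed with NumPy; the 256-bin histogram is an assumed convention for 8-bit images.

import numpy as np

def entropy(img, levels=256):
    # E = -sum(p_i * log2(p_i)) over the gray-level histogram
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-np.sum(p * np.log2(p)))

def std_dev(img):
    # standard deviation over the M x N image, a contrast measure
    return float(np.std(img))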
the process of calculating the edge retention includes:
respectively calculating the edge intensity and the direction of the infrared image and the visible light image according to a sobel edge operator, wherein the formula for calculating the edge intensity is as follows:
Figure BDA0002112804420000077
the formula for calculating the direction is:
Figure BDA0002112804420000081
wherein i and j represent directions, G i And G j Representing gradients in the i and j directions, respectively.
Calculating relative edge intensities and relative directions of the hypothetical fused image with respect to the infrared image and the visible light image, the formula for calculating the relative edge intensities being:
Figure BDA0002112804420000082
the formula for calculating the relative direction is:
Figure BDA0002112804420000083
calculating the degree of retention of the relative edge strength and the degree of retention of the relative direction, wherein the formula for calculating the degree of retention of the relative edge strength is as follows:
Figure BDA0002112804420000084
a formula for calculating the degree of retention of the relative direction:
Figure BDA0002112804420000085
define the total edge retention as:
Figure BDA0002112804420000086
calculating the total edge information as:
Figure BDA0002112804420000087
wherein r σ 、Г θ 、K σ 、K θ 、δ σ And delta θ Is a constant, r σ =0.994,Г θ =0.9879,K σ =-15、K θ =-22,δ σ =0.5、δ θ =0.8。
In the above embodiment, the information entropy and the standard deviation mainly measure the amount of image information and the contrast, while the edge similarity mainly evaluates how completely the edge structure information is kept in the fusion result. Defining the fitness function with these three indexes increases the amount of detail and the retention of edge contours in the multi-source fusion result, and improves its definition and contrast.
Optionally, as an embodiment of the present invention, as shown in FIG. 2, the process of determining the weight terms of the image components to be fused includes:
S1: initializing the wolf pack: set the hunting space as an N×d Euclidean space, where N is the number of wolves in the pack and d is the variable dimension; define the maximum number of iterations K_max and the maximum number of scouting attempts T_max. The variables are the 5 weight coefficients w_1, w_2, w_3, w_4 and w_5; specifically, d = 5.
S2: scouting: calculate the fitness function value of every wolf according to the fitness function; the wolf with the maximum fitness function value becomes the head wolf. Among the remaining wolves, set those with the next largest fitness function values as scout wolves and iterate them through the scouting formula
x_{id}^{p} = x_{id} + sin(2π · p / h) · step_a^{d},
until a scout wolf's fitness function value exceeds the head wolf's, or the number of scouting attempts reaches T_max, where x_{id} is the scout wolf's position in the d-th dimension of the space, p is the scout wolf's moving direction (one of h candidate directions), and step_a^{d} is the d-dimensional scouting step length.
s3: prey attack: randomly selecting wolfs except the head wolf as murder wolfs, and calculating according to a prey attack formula:
Figure BDA0002112804420000093
wherein the content of the first and second substances,
Figure BDA0002112804420000094
in order to attack the step size,
Figure BDA0002112804420000095
represents the k +1 generation leader position,
the position of the wolf head is X L The fitness function value is Y L Wu Jue wolf Y i Greater than wolf Y L Let Y i =Y L Performing a calling action; daochiang wolf Y i Less than wolf Y L Attack is continued until d is ≤d near Outputting fitness function value Y of each wolf L Determining a weight item and a weight coefficient which are used as image component components to be fused in each output fitness function value, wherein the weight item comprises an infrared light image cartoon component C inf Infrared light image texture component T inf Component C of cartoon component of visible light image vis Visible image texture component T vis Cartoon component C of infrared and visible light differential image dif
Before the final output, the method further includes the following updating steps.
First, the head wolf position and the optimal target are updated on the winner-takes-all principle: under the head wolf's guidance the fierce wolves run toward its position, and if the target fitness after running is greater than the current target fitness, it replaces the current one; otherwise it is left unchanged. If during the run the target fitness at some position exceeds the head wolf's function value, that attacking wolf becomes the head wolf and calls the other wolves to approach its position.
Then, the wolf pack is updated on the survival-of-the-fittest principle: after each iteration, the m wolves with the worst objective function values are eliminated, and m new wolves are generated at random according to the wolf pack position initialization formula.
In this embodiment, the wolf pack optimization iterative algorithm finds the key combination information, namely the weight coefficient corresponding to each weight term, which improves fusion precision and resolves the prior-art contradiction between keeping a complete edge contour and keeping as much texture detail information as possible when fusing infrared and visible light images.
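As a rough illustration of the search loop, i.e. scouting, attacking toward the head wolf, and survival-of-the-fittest renewal, here is a simplified wolf pack optimizer over the five weight coefficients; the pack size, step lengths and elimination count are illustrative, and the scouting and attack rules are condensed relative to the formulas above.

import numpy as np

def wolf_pack_optimize(fitness, dim=5, n_wolves=30, k_max=100,
                       step_a=0.05, step_b=0.3, n_eliminate=3, seed=0):
    # Simplified wolf-pack search over weight vectors in [0, 1]^dim.
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(0.0, 1.0, size=(n_wolves, dim))
    fit = np.array([fitness(w) for w in wolves])
    for _ in range(k_max):
        leader = wolves[np.argmax(fit)].copy()  # head wolf = best fitness
        # scouting: every wolf probes a small random move, kept if it improves
        probe = np.clip(wolves + step_a * rng.normal(size=wolves.shape), 0.0, 1.0)
        probe_fit = np.array([fitness(w) for w in probe])
        better = probe_fit > fit
        wolves[better], fit[better] = probe[better], probe_fit[better]
        # calling/attack: the pack runs toward the head wolf's position
        wolves = np.clip(wolves + step_b * (leader - wolves), 0.0, 1.0)
        fit = np.array([fitness(w) for w in wolves])
        # survival of the fittest: the worst wolves are eliminated and respawned
        worst = np.argsort(fit)[:n_eliminate]
        wolves[worst] = rng.uniform(0.0, 1.0, size=(n_eliminate, dim))
        fit[worst] = np.array([fitness(w) for w in wolves[worst]])
    return wolves[np.argmax(fit)]

Passing a fitness function that fuses the five components with a candidate weight vector and evaluates S = E·Std·Edge reproduces the weight-selection step sketched above.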
Optionally, as an embodiment of the present invention, as shown in FIG. 2, performing the weighted calculation on the determined weight terms and weight coefficients includes:
letting the fused image be I_F and calculating it according to the following weighted combination formula:
I_F = w_1·C_inf + w_2·T_inf + w_3·C_vis + w_4·T_vis + w_5·C_dif,
where w_1, w_2, w_3, w_4 and w_5 are the weight coefficients, each in the range 0 to 1.
In the above embodiment, the final fusion result is obtained by direct weighted combination. Unlike the current mainstream multi-scale fusion methods, no reconstruction step is required; it is the reconstruction step that tends to produce artifacts and hinder later recognition.
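Because the combination is a single weighted sum, the fusion step itself is one line; a sketch with illustrative names:

def weighted_fusion(c_inf, t_inf, c_vis, t_vis, c_dif, w):
    # I_F = w1*C_inf + w2*T_inf + w3*C_vis + w4*T_vis + w5*C_dif
    w1, w2, w3, w4, w5 = w
    return w1 * c_inf + w2 * t_inf + w3 * c_vis + w4 * t_vis + w5 * c_dif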
Optionally, as an embodiment of the present invention, performing the difference calculation on the infrared image and the visible light image includes:
calculating the difference between the infrared image and the visible light image according to the difference formula
I_dif = I_inf − I_vis,
where I_dif is the infrared and visible light difference image, I_inf is the infrared image, and I_vis is the visible light image.
In the above embodiment, because of the infrared sensor's characteristics the infrared image contains extra edge contour information compared with the visible light image, so subtracting the visible light image from the infrared image yields the extra features or regions that are absent from the source visible light image, which aids the overall fusion quality.
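A sketch of the difference step; casting to float is an implementation detail assumed here so that unsigned 8-bit subtraction cannot wrap around.

import numpy as np

def difference_image(ir, vis):
    # I_dif = I_inf - I_vis, computed in floating point
    return ir.astype(np.float64) - vis.astype(np.float64)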
Optionally, as an embodiment of the present invention, as shown in FIG. 3, an infrared image and visible light image fusion device includes:
a difference processing module for performing difference calculation on the infrared image and the visible light image to obtain an infrared and visible light difference image;
a decomposition module for decomposing the infrared image, the visible light image and the infrared and visible light difference image respectively according to a total variation model to obtain the infrared image cartoon and texture components, the visible light image cartoon and texture components and the difference image cartoon and texture components;
a function construction module for constructing a fitness function for the wolf pack optimization iterative algorithm;
a weight determination module for determining, according to the wolf pack optimization iterative algorithm and the constructed fitness function, the weight terms and corresponding weight coefficients among the infrared image cartoon and texture components, the visible light image cartoon and texture components and the difference image cartoon and texture components, to serve as the weight terms and weight coefficients of the fused image components, the fused image being the image obtained by combining the infrared image and the visible light image; and
a fusion module for performing a weighted calculation on the determined weight terms and weight coefficients and obtaining the fused image from the calculation result.
Optionally, as another embodiment of the present invention, an infrared image and visible light image fusion apparatus includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the infrared image and visible light image fusion method as described above is implemented.
Alternatively, as another embodiment of the present invention, a computer-readable storage medium stores a computer program which, when executed by a processor, implements the infrared image and visible light image fusion method as described above.
As shown in FIG. 4, the reference numerals in FIG. 4 denote: (A) the source infrared cartoon component; (B) the source infrared texture component; (C) the source visible light cartoon component; (D) the source visible light texture component; (E) the difference texture component; (F) the difference cartoon component.
Clearly, the cartoon components obtained by total variation decomposition carry the rough contour information, while the detail information in the texture components is very clear. On this basis, the cartoon and texture components of the decomposed infrared and visible light images are selected for the final weighted combination. To extract the difference feature information between the infrared image and the visible light image, the difference image is computed; as the difference cartoon component panel shows, it mainly reflects the differing contour edge information between the two source images, and contour edges are very important information in the image fusion process. The final weighted components are therefore the infrared image cartoon component, the infrared image texture component, the visible light image cartoon component, the visible light image texture component and the difference image cartoon component, corresponding respectively to the five weight coefficients w_1, w_2, w_3, w_4 and w_5.
As shown in FIG. 5, the reference numerals in FIG. 5 denote: (a) the visible light image; (b) the infrared image; (c) the NSCT method; (d) the Shearlet method; (e) the SR method; (f) the TV variational multi-scale analysis method; (g) the fusion method provided by the invention.
From a subjective visual standpoint, the NSCT method preserves edges slightly better than the Shearlet method thanks to its translation invariance. Although the SR method can extract the spatial detail information in the source images, the edges of the figure regions in the scene remain blurred and the contrast is low. The variational multi-scale analysis method uses variational multi-scale decomposition with guided filtering to select texture information; its fusion result has relatively high edge retention and contrast, and its texture information is clearer than in the earlier methods. The proposed method uses the wolf pack algorithm to optimize the combination weights of the texture and cartoon components, yielding contrast and edge detail slightly better than the variational multi-scale analysis method and the best subjective visual effect.
The following table is an evaluation index data table:
[Table: objective evaluation index values (Q_MI, Q_CB, Q_Y, Q_G) for each fusion method; present only as an image in the original document.]
Objective indexes are introduced to evaluate the results of the various fusion algorithms. Four common image fusion performance indexes are selected to evaluate the objective quality of each method's results: the information-theoretic index Q_MI, the human visual sensitivity index Q_CB, the image structural similarity index Q_Y, and the gradient feature index Q_G. The mutual information index Q_MI measures the degree of correlation between two images and is used here to measure how much source-image information reaches the final fusion result; the gradient index Q_G measures how much gradient information from the source infrared and visible light images is transferred to the final fusion result; the structural similarity index Q_Y measures how well the fusion result preserves structural information; and the visual sensitivity index Q_CB is taken as the mean of the global quality map.
According to the invention, the infrared source image, the visible light source image and the difference image are decomposed into cartoon and texture components by the total variation model, the weight terms and weight coefficients are determined over the source images and components by the wolf pack optimization iterative algorithm, and the determined weight terms and weight coefficients are weighted and combined into the final fused image; the fusion result is robust to noise while keeping complete contour information and detail information, with high definition and contrast.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a unit is only a logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (7)

1. A method for fusing an infrared image and a visible light image is characterized by comprising the following steps:
carrying out difference calculation on the infrared image and the visible light image to obtain an infrared and visible light difference image, wherein the process of carrying out the difference calculation on the infrared image and the visible light image comprises:
carrying out the difference calculation according to a difference formula:
I_dif = I_inf − I_vis,
wherein I_dif is the infrared and visible light difference image, I_inf is the infrared image, and I_vis is the visible light image;
respectively carrying out decomposition calculation on the infrared image, the visible light image and the infrared and visible light difference image according to a total variation model to respectively obtain the infrared image cartoon and texture components, the visible light image cartoon and texture components and the difference image cartoon and texture components, wherein the process of respectively carrying out the decomposition calculation according to the total variation model comprises:
when carrying out the decomposition calculation on the infrared image, defining the infrared image according to the total variation problem in the total variation model as:
I_inf = T_inf + C_inf,
wherein I_inf is the infrared image, T_inf is the texture component of the infrared image, and C_inf is the cartoon component of the infrared image;
when carrying out the decomposition calculation on the visible light image, defining the visible light image according to the total variation problem in the total variation model as:
I_vis = T_vis + C_vis,
wherein I_vis is the visible light image, T_vis is the texture component of the visible light image, and C_vis is the cartoon component of the visible light image;
when carrying out the decomposition calculation on the infrared and visible light difference image, defining the difference image according to the total variation problem in the total variation model as:
I_dif = T_dif + C_dif,
wherein I_dif is the infrared and visible light difference image, T_dif is the texture component of the difference image, and C_dif is the cartoon component of the difference image;
the total variation model being the TV-l1 model; when carrying out the decomposition calculation on the infrared image, calculating the minimization function corresponding to the infrared image according to the TV-l1 model, expressed as a first formula:
min_{C_inf} ∫_Ω |∇C_inf| dΩ + λ ∫_Ω |I_inf − C_inf| dΩ,
wherein the solution of the first formula is the cartoon component of the infrared image, ∫_Ω |∇C_inf| dΩ is the total variation regularization term of the infrared image cartoon component, λ ∫_Ω |I_inf − C_inf| dΩ is the fidelity term, λ is a regularization parameter, and Ω is the two-dimensional image domain;
when carrying out the decomposition calculation on the visible light image, calculating the minimization function corresponding to the visible light image according to the TV-l1 model, expressed as a second formula:
min_{C_vis} ∫_Ω |∇C_vis| dΩ + λ ∫_Ω |I_vis − C_vis| dΩ,
wherein the solution of the second formula is the cartoon component of the visible light image, ∫_Ω |∇C_vis| dΩ is the total variation regularization term of the visible light image cartoon component, and λ ∫_Ω |I_vis − C_vis| dΩ is the fidelity term;
when carrying out the decomposition calculation on the infrared and visible light difference image, calculating the minimization function corresponding to the difference image according to the TV-l1 model, expressed as a third formula:
min_{C_dif} ∫_Ω |∇C_dif| dΩ + λ ∫_Ω |I_dif − C_dif| dΩ,
wherein the solution of the third formula is the cartoon component of the difference image, ∫_Ω |∇C_dif| dΩ is the total variation regularization term of the difference image cartoon component, and λ ∫_Ω |I_dif − C_dif| dΩ is the fidelity term;
when carrying out the decomposition calculation on the infrared image, calculating the texture component of the infrared image according to a fourth formula:
T_inf = I_inf − C_inf;
when carrying out the decomposition calculation on the visible light image, calculating the texture component of the visible light image according to a fifth formula:
T_vis = I_vis − C_vis;
when carrying out the decomposition calculation on the infrared and visible light difference image, calculating the texture component of the difference image according to a sixth formula:
T_dif = I_dif − C_dif;
respectively solving the optimization problems of the minimization function formulas of the infrared image, the visible light image and the infrared and visible light difference image by gradient descent:
C_{i,j}^{n+1} = C_{i,j}^{n} + Δt · [ Δ−^m( Δ+^m C_{i,j}^{n} / |∇C_{i,j}^{n}|_ε ) + Δ−^n( Δ+^n C_{i,j}^{n} / |∇C_{i,j}^{n}|_ε ) + λ · sign( I_{i,j} − C_{i,j}^{n} ) ],
wherein (i, j) is the position of a pixel in the infrared image, the visible light image or the difference image; Δ+^m and Δ+^n are the forward differences in the two coordinate directions and Δ−^m and Δ−^n are the backward differences; |∇C_{i,j}^{n}|_ε = sqrt( (Δ+^m C_{i,j}^{n})² + (Δ+^n C_{i,j}^{n})² + ε ) is the gradient magnitude at position (i, j) at the n-th iteration; n is the number of iterations; Δm and Δn are the distances on the image grid; Δt is the amount of time variation; ε is set to a small minimum value; and I_{i,j} is the pixel value at position (i, j);
constructing a fitness function of a wolf pack optimization iterative algorithm;
determining, according to the wolf pack optimization iterative algorithm and the constructed fitness function, the weight terms and corresponding weight coefficients among the infrared image cartoon and texture components, the visible light image cartoon and texture components and the difference image cartoon and texture components, to serve as the weight terms and weight coefficients of fused image components, the fused image being the image obtained by combining the infrared image and the visible light image; and
performing a weighted calculation according to the determined weight terms and weight coefficients to obtain the fused image.
2. The method for fusing an infrared image and a visible light image according to claim 1, wherein the process of constructing the fitness function of the wolf pack optimization iterative algorithm comprises:
assuming the fused image is I_F and constructing the fitness function:
S = E(I_F) · Std(I_F) · Edge(I_F),
wherein E(I_F) is the information entropy of the fused image, Std(I_F) is the standard deviation of the fused image, and Edge(I_F) is the edge retention of the fused image;
the process of calculating the entropy comprising: calculating the entropy by the formula
E = −Σ_i p_i · log₂ p_i,
wherein p_i is the probability distribution of the image pixel gray levels;
the process of calculating the standard deviation comprising: calculating the standard deviation by the formula
Std = sqrt( (1 / (M·N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} ( I_F(i, j) − μ )² ),
wherein M and N are the image dimensions and μ is the mean pixel value;
the process of calculating the edge retention comprising:
respectively calculating the edge strength and direction of the infrared image and the visible light image with the Sobel edge operator, the formula for the edge strength being
σ(i, j) = sqrt( G_i(i, j)² + G_j(i, j)² ),
and the formula for the direction being
θ(i, j) = arctan( G_j(i, j) / G_i(i, j) ),
wherein i and j denote directions and G_i and G_j are the gradients in the i and j directions, respectively;
calculating the relative edge strength and relative direction of the hypothetical fused image with respect to the infrared image and the visible light image, the formula for the relative edge strength being
σ^{IF}(i, j) = σ_F(i, j) / σ_I(i, j) if σ_I(i, j) > σ_F(i, j), and σ_I(i, j) / σ_F(i, j) otherwise,
and the formula for the relative direction being
θ^{IF}(i, j) = 1 − |θ_I(i, j) − θ_F(i, j)| / (π/2),
wherein σ_F(i, j) is the edge strength of the fused image, σ_I(i, j) is the edge strength of a source image, θ_I(i, j) is the direction of the source image, and θ_F(i, j) is the direction of the fused image;
calculating the degree of retention of the relative edge strength and the degree of retention of the relative direction, the formula for the retention of the relative edge strength being
Q_σ^{IF}(i, j) = Γ_σ / ( 1 + exp( K_σ · ( σ^{IF}(i, j) − δ_σ ) ) ),
and the formula for the retention of the relative direction being
Q_θ^{IF}(i, j) = Γ_θ / ( 1 + exp( K_θ · ( θ^{IF}(i, j) − δ_θ ) ) );
defining the total edge retention as
Q^{IF}(i, j) = Q_σ^{IF}(i, j) · Q_θ^{IF}(i, j),
and calculating the total edge information as
Edge(I_F) = Σ_{i,j} [ Q^{inf,F}(i, j) · σ_inf(i, j) + Q^{vis,F}(i, j) · σ_vis(i, j) ] / Σ_{i,j} [ σ_inf(i, j) + σ_vis(i, j) ],
wherein Γ_σ, Γ_θ, K_σ, K_θ, δ_σ and δ_θ are constants, Γ_σ = 0.994, Γ_θ = 0.9879, K_σ = −15, K_θ = −22, δ_σ = 0.5, δ_θ = 0.8; Q^{inf,F} denotes the degree of edge retention with respect to the infrared image; Q^{vis,F} denotes the degree of edge retention with respect to the visible light image; σ_inf(i, j) is the edge strength of the infrared image; and σ_vis(i, j) is the edge strength of the visible light image.
3. The method for fusing an infrared image and a visible light image according to claim 2, wherein the process of using the determined weight terms as the weight terms of the image components to be fused comprises:
setting the hunting space as an N×d Euclidean space, wherein N is the number of wolves in the pack and d is the variable dimension, and defining the maximum number of iterations K_max and the maximum number of scouting attempts T_max, the variables being the 5 weight coefficients w_1, w_2, w_3, w_4 and w_5;
calculating the fitness function value of each wolf according to the fitness function, taking the wolf with the maximum fitness function value as the head wolf, setting the wolves with the next largest fitness function values among the remainder as scout wolves, and iterating through the scouting formula
x_{id}^{p} = x_{id} + sin(2π · p / h) · step_a^{d},
until a scout wolf's fitness function value is greater than the head wolf's or the number of scouting attempts reaches T_max, wherein x_{id} is the scout wolf's position in the d-th dimension of the space, p is the scout wolf's moving direction (one of h candidate directions), and step_a^{d} is the d-dimensional scouting step length;
randomly selecting wolves other than the head wolf as fierce wolves and calculating according to the prey attack formula
x_{id}^{k+1} = x_{id}^{k} + step_b^{d} · ( g_{d}^{k} − x_{id}^{k} ) / | g_{d}^{k} − x_{id}^{k} |,
wherein step_b^{d} is the attack step length and g_{d}^{k} is the head wolf's position in the d-th dimension at generation k;
letting the head wolf's position be X_L and its fitness function value be Y_L: if a fierce wolf's fitness Y_i is greater than Y_L, setting Y_L = Y_i and performing the calling action; if Y_i is less than Y_L, continuing the attack until the distance to the prey satisfies d_is ≤ d_near; and outputting the fitness function value Y_L of each wolf and determining, from the output fitness function values, the weight terms and weight coefficients to be used for the image components to be fused, the weight terms comprising the infrared image cartoon component C_inf, the infrared image texture component T_inf, the visible light image cartoon component C_vis, the visible light image texture component T_vis and the difference image cartoon component C_dif.
4. The method for fusing an infrared image and a visible light image according to claim 3, wherein the process of performing the weighted calculation on the determined weight terms and weight coefficients comprises:
letting the fused image be I_F and calculating according to the following weighted combination formula:
I_F = w_1·C_inf + w_2·T_inf + w_3·C_vis + w_4·T_vis + w_5·C_dif,
wherein w_1, w_2, w_3, w_4 and w_5 are weight coefficients, each in the range 0 to 1.
5. An infrared image and visible light image fusion device, characterized by comprising:
a difference processing module for carrying out difference calculation on the infrared image and the visible light image to obtain an infrared and visible light difference image;
a decomposition module for respectively carrying out decomposition calculation on the infrared image, the visible light image and the infrared and visible light difference image according to a total variation model to respectively obtain the infrared image cartoon and texture components, the visible light image cartoon and texture components and the difference image cartoon and texture components;
a function construction module for constructing a fitness function of the wolf pack optimization iterative algorithm;
a weight determination module for determining, according to the wolf pack optimization iterative algorithm and the constructed fitness function, the weight terms and corresponding weight coefficients among the infrared image cartoon and texture components, the visible light image cartoon and texture components and the difference image cartoon and texture components, to serve as the weight terms and weight coefficients of fused image components, the fused image being the image obtained by combining the infrared image and the visible light image; and
a fusion module for carrying out a weighted calculation on the determined weight terms and weight coefficients and obtaining the fused image according to the calculation result.
6. An infrared image and visible light image fusion device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that when the computer program is executed by the processor, the infrared image and visible light image fusion method according to any one of claims 1 to 4 is implemented.
7. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the method for fusing an infrared image and a visible light image according to any one of claims 1 to 4.
CN201910579632.0A 2019-06-28 2019-06-28 Infrared image and visible light image fusion method and device and storage medium Active CN110349117B (en)

Priority Applications (1)

Application Number: CN201910579632.0A; Priority/Filing Date: 2019-06-28; Title: Infrared image and visible light image fusion method and device and storage medium

Publications (2)

Publication Number / Publication Date
CN110349117A (en): 2019-10-18
CN110349117B (en): 2023-02-28

Family

ID=68177318

Family Applications (1)

Application Number: CN201910579632.0A (Active); Priority Date: 2019-06-28; Filing Date: 2019-06-28; Title: Infrared image and visible light image fusion method and device and storage medium

Country Status (1)

CN: CN110349117B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223049B (en) * 2020-01-07 2021-10-22 武汉大学 Remote sensing image variation fusion method based on structure-texture decomposition
CN113139893B (en) * 2020-01-20 2023-10-03 北京达佳互联信息技术有限公司 Image translation model construction method and device and image translation method and device
CN111353966B (en) * 2020-03-03 2024-02-09 南京一粹信息科技有限公司 Image fusion method based on total variation deep learning and application and system thereof
CN111680752B (en) * 2020-06-09 2022-07-22 重庆工商大学 Infrared and visible light image fusion method based on Framelet framework
TWI767468B (en) * 2020-09-04 2022-06-11 聚晶半導體股份有限公司 Dual sensor imaging system and imaging method thereof
CN116485694B (en) * 2023-04-25 2023-11-07 中国矿业大学 Infrared and visible light image fusion method and system based on variation principle
CN117218048B (en) * 2023-11-07 2024-03-08 天津市测绘院有限公司 Infrared and visible light image fusion method based on three-layer sparse smooth model
CN117612093B (en) * 2023-11-27 2024-06-18 北京东青互联科技有限公司 Dynamic environment monitoring method, system, equipment and medium for data center

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104408700A (en) * 2014-11-21 2015-03-11 南京理工大学 Morphology and PCA (principal component analysis) based contourlet fusion method for infrared and visible light images
CN105069768A (en) * 2015-08-05 2015-11-18 武汉高德红外股份有限公司 Visible-light image and infrared image fusion processing system and fusion method
CN107248150A (en) * 2017-07-31 2017-10-13 杭州电子科技大学 A kind of Multiscale image fusion methods extracted based on Steerable filter marking area

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
An image fusion method based on feature decomposition (一种基于特征分解的图像融合方法); 常莉红; Journal of Zhejiang University (Science Edition); 2018-07-15 (No. 04); 29-32+39 *
Infrared and visible light fusion based on the Tetrolet transform (基于Tetrolet变换的红外与可见光融合); 沈瑜 et al.; Spectroscopy and Spectral Analysis; 2013-06-15 (No. 06); 68-73 *
Multi-scale transform image fusion with total-variation-based weight optimization (基于全变分的权值优化的多尺度变换图像融合); 邓苗 et al.; Journal of Electronics & Information Technology; 2013-07-15 (No. 07); 137-143 *
Infrared and visible light image fusion based on variational multi-scale decomposition (基于变分多尺度的红外与可见光图像融合); 冯鑫 et al.; Acta Electronica Sinica; 2018-03-15 (No. 03); 171-178 *

Also Published As

Publication number Publication date
CN110349117A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN110349117B (en) Infrared image and visible light image fusion method and device and storage medium
Elhoseny et al. Optimal bilateral filter and convolutional neural network based denoising method of medical image measurements
Zhang et al. Assessment of defoliation during the Dendrolimus tabulaeformis Tsai et Liu disaster outbreak using UAV-based hyperspectral images
Wang et al. A random forest classifier based on pixel comparison features for urban LiDAR data
CA2751025A1 (en) Fusion of a 2d electro-optical image and 3d point cloud data for scene interpretation and registration performance assessment
CN109146948A (en) The quantization of crop growing state phenotypic parameter and the correlation with yield analysis method of view-based access control model
CN106897986B (en) A kind of visible images based on multiscale analysis and far infrared image interfusion method
CN105701481B (en) A kind of collapsed building extracting method
CN111489301A (en) Image defogging method based on image depth information guide for migration learning
Kang et al. Fog model-based hyperspectral image defogging
CN117237740B (en) SAR image classification method based on CNN and Transformer
CN115587946A (en) Remote sensing image defogging method based on multi-scale network
CN114648547B (en) Weak and small target detection method and device for anti-unmanned aerial vehicle infrared detection system
CN117392496A (en) Target detection method and system based on infrared and visible light image fusion
Kao et al. Visualizing distributions from multi-return lidar data to understand forest structure
Sebastianelli et al. A speckle filter for Sentinel-1 SAR ground range detected data based on residual convolutional neural networks
CN117115669B (en) Object-level ground object sample self-adaptive generation method and system with double-condition quality constraint
CN111624606B (en) Radar image rainfall identification method
CN113421198A (en) Hyperspectral image denoising method based on subspace non-local low-rank tensor decomposition
CN111460943A (en) Remote sensing image ground object classification method and system
CN116883303A (en) Infrared and visible light image fusion method based on characteristic difference compensation and fusion
Li et al. Effects of image fusion algorithms on classification accuracy
Wang et al. [Retracted] Adaptive Enhancement Algorithm of High‐Resolution Satellite Image Based on Feature Fusion
Sebastianelli et al. A speckle filter for SAR Sentinel-1 GRD data based on Residual Convolutional Neural Networks
CN113379658A (en) Unmanned aerial vehicle observation target feature double-light fusion method and system

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant