CN104700381A - Infrared and visible light image fusion method based on salient objects - Google Patents

Infrared and visible light image fusion method based on salient objects

Info

Publication number
CN104700381A
CN104700381A (application CN201510111415.0A; granted publication CN104700381B)
Authority
CN
China
Prior art keywords
image
visible images
infrared image
infrared
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510111415.0A
Other languages
Chinese (zh)
Other versions
CN104700381B (en)
Inventor
邵静
秦晅
卢旻昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 28 Research Institute
Original Assignee
CETC 28 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 28 Research Institute filed Critical CETC 28 Research Institute
Priority to CN201510111415.0A priority Critical patent/CN104700381B/en
Publication of CN104700381A publication Critical patent/CN104700381A/en
Application granted granted Critical
Publication of CN104700381B publication Critical patent/CN104700381B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an infrared and visible light image fusion method based on salient targets, comprising the following steps: building nonlinear scale-space representations of an infrared image and a visible light image of a given scene, each containing a plurality of targets; computing the visual attention saliency maps of the infrared and visible light images from these nonlinear scale-space representations using a visual attention computational model; selecting the salient target regions in the infrared and visible light images from their saliency maps using an inhibition-of-return mechanism, and computing all salient target regions in the whole scene; registering the infrared and visible light images, then fusing the salient target regions with a pixel-level fusion algorithm and the non-salient target regions with a feature-level fusion algorithm; and synthesizing the results to generate the fused image of the infrared and visible light images of the whole scene.

Description

An infrared and visible light image fusion method based on salient targets
Technical field
The present invention relates to the field of multi-source image fusion, and in particular to a multi-level infrared and visible light image fusion method based on salient targets.
Background technology
Image fusion refers to the comprehensive analysis technique that produces a new image from multi-resolution or multi-sensor image data through spatial registration and complementary combination of image information. Compared with a single-sensor image, a fused image exploits the information of each source image to the greatest extent: it improves resolution and sharpness and increases the sensitivity, range, and accuracy of target perception as well as anti-jamming capability, thereby reducing the incompleteness and uncertainty of target perception and improving target recognition accuracy and scene interpretation capability. The general flow of image fusion is as follows. First, the source images undergo preprocessing operations such as noise filtering; second, the images are registered in space and time, i.e. mapped into the same spatio-temporal coordinate system to guarantee consistent positions; third, the registered images are processed by the chosen method; finally, fusion is performed according to a fusion rule to obtain the fused image.
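As a schematic illustration of this general flow (not part of the patent), the pipeline can be sketched as follows; the function names and the decomposition into four stages are assumptions made for illustration only:

```python
def image_fusion_pipeline(sources, preprocess, register, transform, fuse):
    """Schematic image-fusion flow: preprocess each source image,
    register them into a common spatio-temporal frame, apply the
    method-specific processing, then combine under a fusion rule."""
    imgs = [preprocess(s) for s in sources]   # e.g. noise filtering
    imgs = register(imgs)                     # spatio-temporal registration
    feats = [transform(im) for im in imgs]    # method-specific processing
    return fuse(feats)                        # fusion rule -> fused image
```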
Electro-optical and infrared sensors are the two most common payloads on tactical reconnaissance platforms such as early-warning aircraft and unmanned aerial vehicles. Visible light imaging operates in the 0.4-1.0 μm band; it offers high resolution, good concealment, and rich contrast, color, and shape information, but it depends on ambient illumination, cannot work at night, and cannot see through camouflage. Infrared imaging operates in the 8-14 μm (long-wave) and 3-5 μm (mid-wave) bands; it can penetrate smoke, fog, and snow to some extent, offers good concealment and fairly high imaging resolution, and its detection range is generally between a few kilometers and tens of kilometers, but it is affected by weather and its imagery becomes unstable at long ranges. When the temperature varies little across the target or the thermal emission of the background is weak, an infrared image contains few details of the target or background while the visible image is rich in detail; conversely, in darkness or in smoke, cloud, or fog, the visible image degrades while targets in the infrared image remain clearly distinguishable. Fusing visible and infrared images overcomes the impact of environmental noise on target detection and combines the shape and detail features of the target; it is independent of ambient illumination and usable around the clock, and is of great significance for improving the detection and image-intelligence analysis capability of reconnaissance platforms.
Among the key tactical plans formulated by the US Department of Defense over the years, a considerable number of tasks involve multi-source image fusion. The LANTIRN pod, which performed well in the Gulf War, is an image fusion system that can overlay and display information from multiple sensors such as forward-looking infrared (FLIR), laser ranging, and visible light cameras. In 1995, Texas Instruments won a contract from the US Night Vision and Electronic Sensors Directorate (NVESD) to integrate infrared and third-generation low-light-level image fusion into the Advanced Helicopter Pilotage (AHP) sensor system, with the signal processing implemented on a TMS320C30 DSP. In 2003, the US Department of Defense issued the "Transformational Horizontal Fusion" white paper, which emphasizes horizontal fusion, especially data fusion, with the aim of strengthening the horizontal interoperability, and in particular the data fusion capability, of systems. In 2006, the Office of the Secretary of Defense (OSD) issued an annual list of technologies to be validated in the coming years in order to accelerate development in these fields.
Domestic work on multi-source image fusion concentrates on algorithm research, including fusion techniques based on the DT-CWT transform, multi-scale wavelet sampling, and region segmentation. Existing systems are mostly confined to simple superposition or selection operations on the image pixels of the same scene at different times or in different color channels.
Summary of the invention
Object of the invention: the technical problem to be solved by the invention is to address the deficiencies of existing infrared and visible light image fusion methods by providing an infrared and visible light image fusion method based on salient targets.
To solve the above technical problem, the invention discloses an infrared and visible light image fusion method based on salient targets, comprising the following steps:
Step 1: for an infrared image and a visible light image of a given scene, each containing a number of (more than one) targets, build the multi-scale space representations of the infrared and visible light images respectively;
Step 2: on the basis of the nonlinear scale-space representations of the images, compute the visual attention saliency maps of the infrared and visible light images using a visual attention computational model;
Step 3: on the basis of the visual attention saliency maps of the infrared and visible light images, select the salient target regions in the infrared and visible light images using an inhibition-of-return mechanism, and compute all salient target regions in the whole scene;
Step 4: register the infrared and visible light images, fuse the salient target regions with a pixel-level fusion algorithm, and fuse the non-salient target regions with a feature-level fusion algorithm;
Step 5: synthesize the fusion results of the salient and non-salient target regions to generate the fused image of the infrared and visible light images of the whole scene.
Step 1 of the invention builds the multi-scale space representations of the infrared and visible light images by the nonlinear scale-space method. The multi-scale space representations of the infrared image and the visible light image are respectively:
$L: I(x_1, y_1) \times t_1 \to I(x_1, y_1; t_1)$, where $L$ is the scale-space transform, $x_1$ and $y_1$ are the abscissa and ordinate in the infrared image $I$, and $t_1$ is the scale factor of the infrared image's multi-scale space;
$L: V(x_2, y_2) \times t_2 \to V(x_2, y_2; t_2)$, where $x_2$ and $y_2$ are the abscissa and ordinate in the visible light image $V$, and $t_2$ is the scale factor of the visible light image's multi-scale space.
For the infrared image: $I(x_1, y_1; t_1) = I(x_1, y_1; 0) * g_1(x_1, y_1; t_1)$, with the infrared image scaling function
$$g_1(x_1, y_1; t_1) = \frac{1}{2\pi t_1}\exp\left\{-\frac{x_1^2 + y_1^2}{2 t_1}\right\};$$
For the visible light image: $V(x_2, y_2; t_2) = V(x_2, y_2; 0) * g_2(x_2, y_2; t_2)$, with the visible light image scaling function
$$g_2(x_2, y_2; t_2) = \frac{1}{2\pi t_2}\exp\left\{-\frac{x_2^2 + y_2^2}{2 t_2}\right\}.$$
In step 2 of the invention, the visual attention computational models of the infrared and visible light images are built and their visual attention saliency maps are computed. This comprises computing the brightness, color, and orientation low-level visual feature maps of the infrared and visible light images, and the corresponding feature saliency maps.
For the infrared image:
Brightness saliency map formula: $I(c, s) = |I(c) \ominus I(s)|$, where $c$ is the fine (center) scale factor, $s$ the coarse (surround) scale factor, $I(c)$ the infrared image at the fine scale, $I(s)$ the infrared image at the coarse scale, and $\ominus$ the center-surround difference operator.
Color saliency map formulas:
Red-green/green-red double-opponent image of the infrared image: $IRG(c, s) = |(IR(c) - IG(c)) \ominus (IG(s) - IR(s))|$, where $IR(c)$ and $IR(s)$ are the fine- and coarse-scale red channel images of the infrared image, and $IG(c)$ and $IG(s)$ are its fine- and coarse-scale green channel images;
Blue-yellow/yellow-blue double-opponent image of the infrared image: $IBY(c, s) = |(IB(c) - IY(c)) \ominus (IY(s) - IB(s))|$, where $IB(c)$ and $IB(s)$ are the fine- and coarse-scale blue channel images, and $IY(c)$ and $IY(s)$ are the fine- and coarse-scale yellow channel images of the infrared image.
Orientation saliency map formula: $IO(c, s, \theta) = |IO(c, \theta) \ominus IO(s, \theta)|$, where $IO(c, \theta)$ and $IO(s, \theta)$ are the fine- and coarse-scale orientation feature maps of the infrared image, and $\theta \in \{0, \pi/4, \pi/2, 3\pi/4\}$.
The visual attention saliency map of the whole infrared image is computed with the normalization operator $N(\cdot)$:
$IS = N(I) + N(IRG) + N(IBY) + N(IO)$.
For the visible light image:
Brightness saliency map formula: $V(c, s) = |V(c) \ominus V(s)|$, where $c$ is the fine (center) scale factor, $s$ the coarse (surround) scale factor, $V(c)$ the visible light image at the fine scale, and $V(s)$ the visible light image at the coarse scale.
Color saliency map formulas:
Red-green/green-red double-opponent image of the visible light image: $VRG(c, s) = |(VR(c) - VG(c)) \ominus (VG(s) - VR(s))|$, where $VR(c)$ and $VR(s)$ are the fine- and coarse-scale red channel images of the visible light image, and $VG(c)$ and $VG(s)$ are its fine- and coarse-scale green channel images;
Blue-yellow/yellow-blue double-opponent image of the visible light image: $VBY(c, s) = |(VB(c) - VY(c)) \ominus (VY(s) - VB(s))|$, where $VB(c)$ and $VB(s)$ are the fine- and coarse-scale blue channel images, and $VY(c)$ and $VY(s)$ are the fine- and coarse-scale yellow channel images of the visible light image.
Orientation saliency map formula: $VO(c, s, \theta) = |VO(c, \theta) \ominus VO(s, \theta)|$, where $VO(c, \theta)$ and $VO(s, \theta)$ are the fine- and coarse-scale orientation feature maps of the visible light image, and $\theta \in \{0, \pi/4, \pi/2, 3\pi/4\}$.
The visual attention saliency map of the whole visible light image is computed with the normalization operator $N(\cdot)$:
$VS = N(V) + N(VRG) + N(VBY) + N(VO)$.
In step 3 of the invention, on the basis of the visual attention saliency maps of the infrared and visible light images, the point with the maximum gray value in the saliency map is chosen as the first fixation point; after the first salient target region has been selected, the gray value of the first fixation point is set to zero, and the point with the maximum gray value in the remaining saliency map is chosen as the second fixation point. The salient target regions in the infrared and visible light images are selected successively in this way and denoted $M_I$ and $M_V$ respectively, where $M_I$ denotes the salient target regions in the infrared image and $M_V$ those in the visible light image. The set of salient target regions $M_{sum}$ in the whole scene is: $M_{sum} = M_I + M_V - (M_I \cap M_V)$.
In step 4 of the invention, for the $M_{sum}$ salient regions, a pixel-level linear weighted fusion algorithm is applied to the salient target regions, giving the fused image of the salient regions $F_{saliency}(i, j) = w_I(i, j) \cdot I(i, j) + w_V(i, j) \cdot V(i, j) + C$. For the $M_{IV}$ regions that are salient in both the infrared and visible light images, $w_I(i, j) = w_V(i, j) = 0.5$; for the $M_{I-IV}$ regions salient only in the infrared image, $w_I(i, j) = 0.8$ and $w_V(i, j) = 0.2$; for the $M_{V-IV}$ regions salient only in the visible light image, $w_I(i, j) = 0.2$ and $w_V(i, j) = 0.8$. The non-salient target regions are fused by the principal component analysis method, giving the fused image $F_{non-saliency}(i, j)$.
In step 5 of the invention, the fusion result of the infrared and visible light images of the whole scene is computed as $F(i, j) = F_{saliency}(i, j) + F_{non-saliency}(i, j)$.
Beneficial effects: the invention builds a multi-level fusion model for infrared and visible light images, uses a visual attention computational model to select the salient targets in the infrared and visible light images respectively, performs pixel-level fusion on the salient target regions to guarantee the fusion quality of the infrared and visible light images, and performs feature-level fusion on the remaining non-salient background regions, which greatly reduces the amount of data to be processed. The proposed method improves the fusion efficiency for the large non-salient target regions while guaranteeing the fusion effect on salient targets, balancing fusion quality and efficiency.
Accompanying drawing explanation
The invention is further illustrated below with reference to the drawings and specific embodiments; the above and/or other advantages of the invention will become apparent.
Fig. 1 is the flowchart of the infrared and visible light image fusion method based on salient targets.
Fig. 2 is the infrared image of a certain scene.
Fig. 3 is the visible light image of the same scene as Fig. 2.
Fig. 4 is the fused image obtained by the infrared and visible light image fusion method based on salient targets.
Embodiment
With reference to Fig. 1, the multi-level infrared and visible light image fusion method based on salient targets of the invention comprises the following steps:
Step 1: build the multi-scale space representations of the infrared and visible light images by the nonlinear scale-space method. Let the infrared image of a scene be $I(x_1, y_1)$, $(x_1, y_1) \in R^2$, where $x_1$ and $y_1$ are the abscissa and ordinate in the infrared image $I$, and let the visible light image be $V(x_2, y_2)$, $(x_2, y_2) \in R^2$, where $x_2$ and $y_2$ are the abscissa and ordinate in the visible light image $V$. With the scale factor denoted $t \in R^+$, the scale-space representations of the infrared and visible light images are respectively $L: I(x_1, y_1) \times t_1 \to I(x_1, y_1; t_1)$, where $t_1$ is the scale factor of the infrared image's multi-scale space, and $L: V(x_2, y_2) \times t_2 \to V(x_2, y_2; t_2)$, where $t_2$ is the scale factor of the visible light image's multi-scale space.
$I(x_1, y_1; 0) = I(x_1, y_1)$, $L_{t+s}(I) = L_t(L_s(I))$ (1)
$V(x_2, y_2; 0) = V(x_2, y_2)$, $L_{t+s}(V) = L_t(L_s(V))$ (2)
where $L$ is the scale-space transform and the initial value of the scale $t$ is the scale of the original image, generally 0. The diffusion equation satisfied by $L$ is:
$$\partial_t L = -\operatorname{div}\vec{J} \qquad (3)$$
where $\vec{J} = -D \cdot \nabla L$ denotes the diffusion flux and $D$ is a positive-definite symmetric matrix representing the diffusion tensor. If $\vec{J}$ is parallel to $\nabla L$, the diffusion is isotropic; otherwise it is anisotropic. If $D$ is a positive constant, the corresponding scale space is the linear scale space; if $D$ is a scalar function of the image structure, it is a nonlinear isotropic scale space; if $D$ is a vector-valued function of the image structure, it is an anisotropic nonlinear scale space.
The linear scale space, represented by a Gaussian kernel, is a Gaussian convolution smoothing process that produces no new structure.
For the infrared image:
$I(x_1, y_1; t_1) = I(x_1, y_1; 0) * g_1(x_1, y_1; t_1)$, with the infrared image scaling function
$$g_1(x_1, y_1; t_1) = \frac{1}{2\pi t_1}\exp\left\{-\frac{x_1^2 + y_1^2}{2 t_1}\right\} \qquad (4)$$
For the visible light image:
$V(x_2, y_2; t_2) = V(x_2, y_2; 0) * g_2(x_2, y_2; t_2)$, with the visible light image scaling function
$$g_2(x_2, y_2; t_2) = \frac{1}{2\pi t_2}\exp\left\{-\frac{x_2^2 + y_2^2}{2 t_2}\right\} \qquad (5)$$
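A minimal sketch of the linear (Gaussian) scale space of Eqs. (4)-(5), using SciPy's Gaussian filter; note that the variance $t$ corresponds to a standard deviation of $\sqrt{t}$, and the particular scale values chosen here are an assumption for illustration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_scale_space(img, scales=(1.0, 2.0, 4.0, 8.0)):
    """Linear scale space: convolve the image with a Gaussian of
    variance t (standard deviation sqrt(t)) for each scale t."""
    img = np.asarray(img, dtype=np.float64)
    return [gaussian_filter(img, sigma=np.sqrt(t)) for t in scales]
```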
While the Gaussian function of the linear scale-space representation smooths noise in the image, it also smooths important features, making them difficult to extract at coarse scales; moreover, the isotropic linear diffusion equation makes the correspondence of edges between different scales hard to determine, and hence the correspondence computation between coarse and fine scale representations difficult. To keep details such as object edges from being smoothed away, the diffusion should adapt to local regions of the image so as to preserve the information one does not wish to blur; that is, the diffusion should be nonlinear. Building the nonlinear scale-space representation of the image with a nonlinear isotropic diffusion equation guarantees that region contours are enhanced in each scale image and facilitates the correspondence computation between scales; the diffusion equation is shown in formula (6):
$$\partial_t L = \operatorname{div}\left(g(\|\nabla L\|)\,\nabla L\right) \qquad (6)$$
$$g(\|\nabla L\|) = \frac{1}{1 + \|\nabla L\|^2/\lambda^2} \quad \text{or} \quad g(\|\nabla L\|) = \exp\left(-\|\nabla L\|^2/\lambda^2\right) \qquad (7)$$
where $\lambda > 0$ is the edge threshold.
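A minimal sketch of one explicit iteration of the nonlinear diffusion of Eqs. (6)-(7), using the first (rational) conductivity function; the time step and the periodic border handling via np.roll are simplifying assumptions of this illustration:

```python
import numpy as np

def diffuse_step(L, lam=0.5, dt=0.2):
    """One explicit Perona-Malik-style iteration of Eq. (6) with the
    rational conductivity of Eq. (7), using four-neighbour differences."""
    g = lambda d: 1.0 / (1.0 + (d / lam) ** 2)   # Eq. (7), first form
    dN = np.roll(L, 1, axis=0) - L               # difference to north neighbour
    dS = np.roll(L, -1, axis=0) - L              # south
    dE = np.roll(L, -1, axis=1) - L              # east
    dW = np.roll(L, 1, axis=1) - L               # west
    return L + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
```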
Step 2: on the basis of the nonlinear scale-space representations of the images, compute the visual attention saliency maps of the infrared and visible light images using a visual attention computational model.
On the basis of the scale-space representations of the infrared and visible light images from step 1, color, brightness, and orientation are extracted as the low-level features guiding visual attention, the feature saliency maps are computed from the center-surround differences of receptive fields, and the visual attention saliency map is finally obtained through the normalization operator.
(1) Low-level visual feature maps
For the infrared image:
Brightness feature map: let $Ir(t_1)$, $Ig(t_1)$, and $Ib(t_1)$ be the red, green, and blue channels of the original infrared image, where $t_1$ is the scale factor; the brightness map is:
$I(t_1) = (Ir(t_1) + Ig(t_1) + Ib(t_1))/3$ (8)
Color feature maps: the $Ir(t_1)$, $Ig(t_1)$, and $Ib(t_1)$ channels are normalized by $I(t_1)$, yielding the broadly tuned red, green, blue, and yellow channel values shown in formulas (9)-(12):
$IR(t_1) = Ir(t_1) - (Ig(t_1) + Ib(t_1))/2$ (9)
$IG(t_1) = Ig(t_1) - (Ir(t_1) + Ib(t_1))/2$ (10)
$IB(t_1) = Ib(t_1) - (Ir(t_1) + Ig(t_1))/2$ (11)
$IY(t_1) = Ir(t_1) + Ig(t_1) - 2(|Ir(t_1) - Ig(t_1)| + Ib(t_1))$ (12)
where negative values are set to 0.
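A minimal sketch of the broadly tuned color channels of Eqs. (9)-(12), with negative values clipped to zero; the inputs are assumed to be float arrays of equal shape:

```python
import numpy as np

def broad_color_channels(r, g, b):
    """Broadly tuned R, G, B, Y channels per Eqs. (9)-(12);
    negative responses are set to 0 as stated above."""
    R = np.clip(r - (g + b) / 2.0, 0.0, None)
    G = np.clip(g - (r + b) / 2.0, 0.0, None)
    B = np.clip(b - (r + g) / 2.0, 0.0, None)
    Y = np.clip(r + g - 2.0 * (np.abs(r - g) + b), 0.0, None)
    return R, G, B, Y
```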
Orientation feature maps: the orientation feature maps of the infrared image are computed with Gabor filters, as shown in formula (13):
$$IG(x_1, y_1) = \frac{1}{2\pi\sigma\beta}\exp\left\{-\pi\left[\frac{(x_1 - x_0)^2}{\sigma^2} + \frac{(y_1 - y_0)^2}{\beta^2}\right]\right\}\exp\left\{i\left[\xi_0 x_1 + \nu_0 y_1\right]\right\} \qquad (13)$$
where $(x_0, y_0)$ is the center coordinate of the receptive field in the spatial domain, $x_0$ the abscissa and $y_0$ the ordinate; $(\xi_0, \nu_0)$ is the optimal spatial frequency of the filter in the frequency domain, $\xi_0$ its real part and $\nu_0$ its imaginary part; and $\sigma$ and $\beta$ are the standard deviations of the Gaussian function along the x and y axes respectively. The invention takes the Gabor filter outputs in four orientations $\theta_i = i\pi/n$ as the orientation visual feature maps, where $n$ is the constant 4 and $i$ indexes the orientations, taking the values 0, 1, 2, 3.
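A minimal sketch of the complex Gabor kernel of Eq. (13) and the four-orientation responses; the kernel size, envelope widths, and spatial frequency below are illustrative assumptions, and the orientation enters through the carrier frequency $(\xi_0, \nu_0)$:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size, sigma, beta, xi0, nu0):
    """Complex Gabor kernel per Eq. (13), centred at (x0, y0) = (0, 0)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-np.pi * ((x / sigma) ** 2 + (y / beta) ** 2))
    return envelope / (2 * np.pi * sigma * beta) * np.exp(1j * (xi0 * x + nu0 * y))

def orientation_feature_maps(gray, n=4, size=15, sigma=3.0, beta=3.0, freq=0.5):
    """Gabor energy at the orientations theta_i = i*pi/n, i = 0..n-1."""
    gray = np.asarray(gray, dtype=np.float64)
    maps = []
    for i in range(n):
        theta = i * np.pi / n
        k = gabor_kernel(size, sigma, beta, freq * np.cos(theta), freq * np.sin(theta))
        re = convolve(gray, k.real)
        im = convolve(gray, k.imag)
        maps.append(np.sqrt(re ** 2 + im ** 2))   # magnitude response
    return maps
```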
For the visible light image:
Brightness feature map: let $Vr(t_2)$, $Vg(t_2)$, and $Vb(t_2)$ be the red, green, and blue channels of the original visible light image, where $t_2$ is the scale factor; the brightness map is:
$V(t_2) = (Vr(t_2) + Vg(t_2) + Vb(t_2))/3$ (14)
Color feature maps: the $Vr(t_2)$, $Vg(t_2)$, and $Vb(t_2)$ channels are normalized by $V(t_2)$, yielding the broadly tuned red, green, blue, and yellow channel values shown in formulas (15)-(18):
$VR(t_2) = Vr(t_2) - (Vg(t_2) + Vb(t_2))/2$ (15)
$VG(t_2) = Vg(t_2) - (Vr(t_2) + Vb(t_2))/2$ (16)
$VB(t_2) = Vb(t_2) - (Vr(t_2) + Vg(t_2))/2$ (17)
$VY(t_2) = Vr(t_2) + Vg(t_2) - 2(|Vr(t_2) - Vg(t_2)| + Vb(t_2))$ (18)
where negative values are set to 0.
Orientation feature maps: the orientation feature maps of the visible light image are computed with Gabor filters, as shown in formula (19):
$$VG(x_2, y_2) = \frac{1}{2\pi\sigma\beta}\exp\left\{-\pi\left[\frac{(x_2 - x_0)^2}{\sigma^2} + \frac{(y_2 - y_0)^2}{\beta^2}\right]\right\}\exp\left\{i\left[\xi_0 x_2 + \nu_0 y_2\right]\right\} \qquad (19)$$
where $(x_0, y_0)$ is the center coordinate of the receptive field in the spatial domain, $(\xi_0, \nu_0)$ is the optimal spatial frequency of the filter in the frequency domain, and $\sigma$ and $\beta$ are the standard deviations of the Gaussian function along the x and y axes. The invention takes the Gabor filter outputs in four orientations $\theta_i = i\pi/n$ as the orientation visual feature maps, where $n$ is the constant 4 and $i = 0, 1, 2, 3$.
(2) Feature saliency maps
For the infrared image:
The brightness map $I$, the four color components $IR$, $IG$, $IB$, $IY$, and the four orientation feature maps are represented in scale space; the center image corresponds to a fine (high-resolution) scale and the surround image to a coarse (low-resolution) scale. The center-surround difference computation of receptive fields is thereby realized, yielding each feature saliency map.
Brightness saliency map:
The brightness saliency map is produced by luminance contrast and is denoted $I(c, s)$:
$$I(c, s) = |I(c) \ominus I(s)|$$
where $c$ is the scale factor of the high-resolution (fine) level of the infrared image's scale-space representation, $s$ is the low-resolution (coarse) scale factor, and $\ominus$ is the center-surround difference operator, realized by subtracting the corresponding pixels of the low-resolution image (interpolated to the finer scale) from those of the high-resolution image.
Color saliency maps:
Red-green/green-red double-opponent image of the infrared image:
$$IRG(c, s) = |(IR(c) - IG(c)) \ominus (IG(s) - IR(s))|$$
where $IR(c)$ and $IR(s)$ are the fine- and coarse-scale red channel images of the infrared image, and $IG(c)$ and $IG(s)$ its fine- and coarse-scale green channel images.
Blue-yellow/yellow-blue double-opponent image of the infrared image:
$$IBY(c, s) = |(IB(c) - IY(c)) \ominus (IY(s) - IB(s))|$$
where $IB(c)$ and $IB(s)$ are the fine- and coarse-scale blue channel images of the infrared image, and $IY(c)$ and $IY(s)$ its fine- and coarse-scale yellow channel images.
Orientation saliency maps:
The Gabor pyramid $IO(\sigma, \theta)$ captures local orientation information of the image, where $\sigma$ is the scale factor and $\theta \in \{0°, 45°, 90°, 135°\}$. Through the computation of local orientation contrast, the orientation saliency maps are encoded as a group:
$$IO(c, s, \theta) = |IO(c, \theta) \ominus IO(s, \theta)|$$
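A minimal sketch of the center-surround difference $\ominus$ used in the feature saliency maps above: the coarse (surround) map is interpolated up to the fine (center) map's resolution and subtracted point by point; the use of OpenCV bilinear resizing is an assumption of this illustration:

```python
import numpy as np
import cv2

def center_surround(center_fine, surround_coarse):
    """|center (-) surround|: upsample the coarse map to the fine map's
    size and take the point-wise absolute difference."""
    up = cv2.resize(surround_coarse,
                    (center_fine.shape[1], center_fine.shape[0]),
                    interpolation=cv2.INTER_LINEAR)
    return np.abs(center_fine - up)
```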
For the visible light image:
The brightness map $V$, the four color components $VR$, $VG$, $VB$, $VY$, and the four orientation feature maps are represented in scale space; the center image corresponds to a fine (high-resolution) scale and the surround image to a coarse (low-resolution) scale. The center-surround difference computation of receptive fields is thereby realized, yielding each feature saliency map.
Brightness saliency map:
The brightness saliency map is produced by luminance contrast and is denoted $V(c, s)$:
$$V(c, s) = |V(c) \ominus V(s)|$$
where $c$ is the scale factor of the high-resolution (fine) level of the visible light image's scale-space representation, $s$ is the low-resolution (coarse) scale factor, and $\ominus$ is the center-surround difference operator, realized by subtracting the corresponding pixels of the low-resolution image (interpolated to the finer scale) from those of the high-resolution image.
Color saliency maps:
Red-green/green-red double-opponent image of the visible light image:
$$VRG(c, s) = |(VR(c) - VG(c)) \ominus (VG(s) - VR(s))|$$
where $VR(c)$ and $VR(s)$ are the fine- and coarse-scale red channel images of the visible light image, and $VG(c)$ and $VG(s)$ its fine- and coarse-scale green channel images.
Blue-yellow/yellow-blue double-opponent image of the visible light image:
$$VBY(c, s) = |(VB(c) - VY(c)) \ominus (VY(s) - VB(s))|$$
where $VB(c)$ and $VB(s)$ are the fine- and coarse-scale blue channel images of the visible light image, and $VY(c)$ and $VY(s)$ its fine- and coarse-scale yellow channel images.
Orientation saliency maps:
The Gabor pyramid $VO(\sigma, \theta)$ captures local orientation information of the image, where $\sigma$ is the scale factor and $\theta \in \{0°, 45°, 90°, 135°\}$. Through the computation of local orientation contrast, the orientation saliency maps are encoded as a group:
$$VO(c, s, \theta) = |VO(c, \theta) \ominus VO(s, \theta)|$$
(3) Visual attention saliency map
The low-level feature maps of brightness, color, and orientation have different dynamic ranges and extraction mechanisms, and when all feature saliency maps are combined, an object that is strongly salient in only a few maps may be masked by noise or by weakly salient objects present in many maps. The invention therefore uses the normalization operator $N(\cdot)$ to strengthen feature saliency maps with few salient peaks and weaken those containing many salient peaks. For each feature saliency map, the operator: 1) normalizes the map to the range $[0, 1]$ to eliminate feature-dependent amplitude differences; 2) computes the map's global maximum $M$ and the average $\bar{m}$ of all its other local maxima, and multiplies the map by $(M - \bar{m})^2$.
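A minimal sketch of this normalization operator in the spirit of the Itti-Koch model; the neighbourhood size used to detect local maxima is an illustrative assumption:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def normalize_map(sal, local_size=21):
    """N(.): scale the map to [0, 1], find the global maximum M and the
    mean m_bar of the other local maxima, and weight by (M - m_bar)^2."""
    sal = sal - sal.min()
    if sal.max() > 0:
        sal = sal / sal.max()
    M = sal.max()
    # local maxima: points equal to the max of their neighbourhood
    peaks = (sal == maximum_filter(sal, size=local_size)) & (sal > 0)
    vals = sal[peaks]
    vals = vals[vals < M]                 # exclude the global maximum
    m_bar = vals.mean() if vals.size else 0.0
    return sal * (M - m_bar) ** 2
```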
For the infrared image:
The brightness, color, and orientation feature saliency maps are normalized by the operator $N(\cdot)$ and combined across scales to obtain $\bar{I}$, $\overline{IRG}$, $\overline{IBY}$, and $\overline{IO}$, as shown below.
$$\bar{I} = \oplus_c \oplus_s N(I(c, s)) \qquad (24)$$
$$\overline{IRG} = \oplus_c \oplus_s N(IRG(c, s)) \qquad (25)$$
$$\overline{IBY} = \oplus_c \oplus_s N(IBY(c, s)) \qquad (26)$$
$$\overline{IO} = \sum_\theta N\left(\oplus_c \oplus_s N(IO(c, s, \theta))\right) \qquad (27)$$
where the symbol $\oplus$ denotes point-wise (across-scale) summation.
The visual attention saliency map is computed by formula (22):
$$IS = N(\bar{I}) + N(\overline{IRG}) + N(\overline{IBY}) + N(\overline{IO}) \qquad (22)$$
For the visible light image:
The brightness, color, and orientation feature saliency maps are normalized by the operator $N(\cdot)$ and combined across scales to obtain $\bar{V}$, $\overline{VRG}$, $\overline{VBY}$, and $\overline{VO}$, as shown below.
$$\bar{V} = \oplus_c \oplus_s N(V(c, s)) \qquad (24)$$
$$\overline{VRG} = \oplus_c \oplus_s N(VRG(c, s)) \qquad (25)$$
$$\overline{VBY} = \oplus_c \oplus_s N(VBY(c, s)) \qquad (26)$$
$$\overline{VO} = \sum_\theta N\left(\oplus_c \oplus_s N(VO(c, s, \theta))\right) \qquad (27)$$
where the symbol $\oplus$ denotes point-wise (across-scale) summation.
The visual attention saliency map is computed by formula (22):
$$VS = N(\bar{V}) + N(\overline{VRG}) + N(\overline{VBY}) + N(\overline{VO}) \qquad (22)$$
Step 3: on the basis of the visual attention saliency maps of the infrared and visible light images, select the salient target regions in the infrared and visible light images using an inhibition-of-return mechanism, and compute all salient target regions in the whole scene.
After the visual saliency map has been computed in step 2, the point with the maximum gray value in the saliency map is chosen as the first fixation point, and a rectangle of 1/16 of the original image size centered at this point is selected as the first salient target region. After the first salient target region has been selected, the gray value of the first fixation point is set to zero, the point with the maximum gray value in the remaining saliency map is chosen as the second fixation point, and a rectangle of 1/16 of the original image size is again selected as the second salient target region. Iterating in this way, the salient target regions in the original image are selected one by one. The salient target regions selected in the infrared and visible light images are denoted $M_I$ and $M_V$ respectively.
The set of salient target regions in the whole scene is $M_{sum} = M_I + M_V - (M_I \cap M_V)$.
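A minimal sketch of the inhibition-of-return selection loop described above; as a simplifying assumption the whole selected rectangle, rather than only the fixation point, is zeroed so that the next fixation cannot fall inside an already-selected region:

```python
import numpy as np

def select_salient_regions(sal_map, n_regions):
    """Repeatedly fixate the maximum of the saliency map and cut out a
    rectangle of 1/16 the image area (1/4 height x 1/4 width) around it."""
    sal = sal_map.copy()
    h, w = sal.shape
    rh, rw = h // 4, w // 4
    regions = []
    for _ in range(n_regions):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        y0 = min(max(0, y - rh // 2), h - rh)
        x0 = min(max(0, x - rw // 2), w - rw)
        regions.append((y0, x0, y0 + rh, x0 + rw))
        sal[y0:y0 + rh, x0:x0 + rw] = 0.0   # inhibition of return
    return regions
```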
Step 4: register the infrared and visible light images, fuse the salient target regions with a pixel-level fusion algorithm, and fuse the non-salient target regions with a feature-level fusion algorithm.
The infrared image $I$ and the visible light image $V$ are registered using a key-point matching method.
For the salient regions, the salient targets of the whole scene computed in step 3 number $M_{sum}$, of which $M_{IV} = M_I \cap M_V$ are salient in both the infrared and visible light images, $M_{I-IV} = M_I - M_{IV}$ are salient only in the infrared image, and $M_{V-IV} = M_V - M_{IV}$ are salient only in the visible light image. The salient target regions are fused with the pixel-level linear weighted fusion algorithm, giving the fused image of the salient regions $F_{saliency}(i, j)$ expressed by formula (23):
$$F_{saliency}(i, j) = w_I(i, j) \cdot I(i, j) + w_V(i, j) \cdot V(i, j) + C \qquad (23)$$
where $I(i, j)$ and $V(i, j)$ are the pixels at position $(i, j)$ in the source images, $F_{saliency}(i, j)$ is the pixel at that position in the fused image, and $w_I(i, j)$ and $w_V(i, j)$ are weights with $w_I(i, j) + w_V(i, j) = 1$. For the $M_{IV}$ regions salient in both images, $w_I(i, j) = w_V(i, j) = 0.5$; for the $M_{I-IV}$ regions salient only in the infrared image, $w_I(i, j) = 0.8$ and $w_V(i, j) = 0.2$; for the $M_{V-IV}$ regions salient only in the visible light image, $w_I(i, j) = 0.2$ and $w_V(i, j) = 0.8$.
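A minimal sketch of the weighted fusion of Eq. (23), applied over the whole image with per-pixel weight maps built from region masks; the boolean mask inputs are assumptions of this illustration:

```python
import numpy as np

def fuse_salient(ir, vis, both_mask, ir_only_mask, vis_only_mask, C=0.0):
    """Pixel-level linear weighted fusion per Eq. (23): weights 0.5/0.5
    where a pixel is salient in both images, 0.8/0.2 for infrared-only
    saliency, 0.2/0.8 for visible-only saliency."""
    w_i = 0.5 * both_mask + 0.8 * ir_only_mask + 0.2 * vis_only_mask
    w_v = 0.5 * both_mask + 0.2 * ir_only_mask + 0.8 * vis_only_mask
    return w_i * ir + w_v * vis + C
```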
For the non-salient target regions, a feature-level image fusion algorithm is applied to the non-salient regions of the infrared and visible light images. This patent fuses the non-salient target regions by the principal component analysis (PCA) method: the infrared and visible light images are each transformed by PCA to obtain their principal components, histogram matching is performed between the first principal component images of the infrared and visible light images, the first principal component is then substituted, and the fused image $F_{non-saliency}(i, j)$ is obtained by the inverse PCA transform.
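The component-substitution procedure above is stated in terms of principal-component images; for two registered single-channel images a common PCA-weighted variant (a simpler technique than the patent's substitution step, given here only as a hedged sketch) fuses the images with weights taken from the first principal component:

```python
import numpy as np

def pca_fuse(ir, vis):
    """Feature-level PCA fusion sketch for two single-channel images:
    weight each image by the loadings of the first principal component
    of their joint 2 x N sample covariance."""
    x = np.stack([ir.ravel(), vis.ravel()])        # 2 x N data matrix
    eigvals, eigvecs = np.linalg.eigh(np.cov(x))   # covariance eigen-decomposition
    pc1 = np.abs(eigvecs[:, np.argmax(eigvals)])   # first principal component
    w = pc1 / pc1.sum()                            # normalised non-negative weights
    return w[0] * ir + w[1] * vis
```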
Step 5: synthesize the fusion results of the salient and non-salient target regions to generate the fused image of the infrared and visible light images of the whole scene.
On the basis of the fused image $F_{saliency}(i, j)$ of the salient target regions and the fused image $F_{non-saliency}(i, j)$ of the non-salient target regions computed in step 4, the fusion result of the infrared and visible light images of the whole scene is computed as shown in formula (24):
$$F(i, j) = F_{saliency}(i, j) + F_{non-saliency}(i, j) \qquad (24)$$
Embodiment
The implementation of the invention is illustrated by a concrete example.
Fig. 2 is the infrared image of a certain scene, and Fig. 3 is the visible light image of the same scene.
Following step 1, the nonlinear scale-space representations of the infrared and visible light images are built first, with the edge threshold λ taken as 0.5.
Following step 2, the brightness, color, and orientation visual feature maps of the infrared and visible light images are computed, then their brightness, color, and orientation saliency maps, and finally the visual attention saliency maps of the two images.
Following step 3, 5 salient target regions are computed in the infrared image and 4 in the visible light image, of which 3 targets are salient in both images, so the whole scene contains 6 salient target regions.
Following step 4, the 6 salient target regions in the scene are fused by the pixel-level fusion method, and the remaining non-salient target regions are fused by the feature-level fusion method.
Following step 5, the fusion results of the salient and non-salient target regions are synthesized to obtain the fused infrared and visible light image of the whole scene, as shown in Fig. 4.
The effects of the invention can be summarized as follows:
1) A multi-level fusion model of image fusion is built: the salient and non-salient targets in the scene are fused by the pixel-level and feature-level methods respectively, i.e. fine fusion for salient targets and coarse fusion for the remaining background regions. This improves the fusion efficiency for the large non-salient target regions while guaranteeing the fusion effect on salient targets, and closely matches the human visual cognition process.
2) The multi-scale space representation of the image is built by the nonlinear scale-space method, which smooths noise while guaranteeing that salient target contours are enhanced in each scale image, improving the fusion quality of salient targets in the scene.
3) Visual attention computational models of the infrared and visible light images are built, measuring target saliency in terms of brightness, color, and orientation; the salient targets in the infrared and visible light images are computed separately through the normalization operator and the inhibition-of-return mechanism, selected simultaneously, and then fused.
4) The proposed method can be used not only for fusing infrared and visible light images but can also be generalized to other multi-source image fusion fields, such as fusion of multispectral and visible light images, SAR and visible light images, and SAR and infrared images; its application prospects are broad.
The invention provides an infrared and visible light image fusion method based on salient targets; there are many concrete ways to implement this technical scheme, and the above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and refinements without departing from the principle of the invention, and these should also be regarded as falling within the protection scope of the invention. Any component not made explicit in this embodiment can be realized with the prior art.

Claims (6)

1. An infrared and visible light image fusion method based on salient targets, characterized by comprising the following steps:
Step 1: for an infrared image and a visible light image of a given scene containing a number of targets, build the multi-scale space representations of the infrared and visible light images respectively;
Step 2: on the basis of the nonlinear scale-space representations of the images, build the visual attention computational models of the infrared and visible light images and compute their visual attention saliency maps;
Step 3: on the basis of the visual attention saliency maps of the infrared and visible light images, select the salient target regions in the infrared and visible light images using an inhibition-of-return mechanism, and compute all salient target regions in the whole scene;
Step 4: register the infrared and visible light images, fuse the salient target regions with a pixel-level fusion algorithm, and fuse the non-salient target regions with a feature-level fusion algorithm;
Step 5: synthesize the fusion results of the salient and non-salient target regions to generate the fused image of the infrared and visible light images of the whole scene.
2. The infrared and visible light image fusion method based on salient targets according to claim 1, characterized in that in step 1 the multi-scale space representations of the infrared and visible light images are respectively:
$L: I(x_1, y_1) \times t_1 \to I(x_1, y_1; t_1)$, where $L$ is the scale-space transform, $x_1$ and $y_1$ are the abscissa and ordinate in the infrared image $I$, and $t_1$ is the scale factor of the infrared image's multi-scale space;
$L: V(x_2, y_2) \times t_2 \to V(x_2, y_2; t_2)$, where $x_2$ and $y_2$ are the abscissa and ordinate in the visible light image $V$, and $t_2$ is the scale factor of the visible light image's multi-scale space;
For the infrared image $I(x_1, y_1; t_1)$:
$I(x_1, y_1; t_1) = I(x_1, y_1; 0) * g_1(x_1, y_1; t_1)$,
with the infrared image scaling function $g_1(x_1, y_1; t_1)$:
$$g_1(x_1, y_1; t_1) = \frac{1}{2\pi t_1}\exp\left\{-\frac{x_1^2 + y_1^2}{2 t_1}\right\};$$
For the visible light image $V(x_2, y_2; t_2)$:
$V(x_2, y_2; t_2) = V(x_2, y_2; 0) * g_2(x_2, y_2; t_2)$,
with the visible light image scaling function $g_2(x_2, y_2; t_2)$:
$$g_2(x_2, y_2; t_2) = \frac{1}{2\pi t_2}\exp\left\{-\frac{x_2^2 + y_2^2}{2 t_2}\right\}.$$
3. The infrared and visible light image fusion method based on salient targets according to claim 2, characterized in that in step 2 the visual attention computational models of the infrared and visible light images are built and their visual attention saliency maps are computed, comprising: computing the brightness, color, and orientation low-level visual feature maps of the infrared and visible light images, and the corresponding feature saliency maps;
For the infrared image:
the brightness saliency map $I(c, s)$ is computed as $I(c, s) = |I(c) \ominus I(s)|$, where $c$ is the fine (center) scale factor, $s$ the coarse (surround) scale factor, $I(c)$ the infrared image at the fine scale, and $I(s)$ the infrared image at the coarse scale;
the color saliency maps are computed as follows: the red-green/green-red double-opponent image $IRG(c, s)$ of the infrared image is $IRG(c, s) = |(IR(c) - IG(c)) \ominus (IG(s) - IR(s))|$, where $IR(c)$ and $IR(s)$ are the fine- and coarse-scale red channel images of the infrared image, and $IG(c)$ and $IG(s)$ its fine- and coarse-scale green channel images; the blue-yellow/yellow-blue double-opponent image $IBY(c, s)$ of the infrared image is $IBY(c, s) = |(IB(c) - IY(c)) \ominus (IY(s) - IB(s))|$, where $IB(c)$ and $IB(s)$ are the fine- and coarse-scale blue channel images, and $IY(c)$ and $IY(s)$ the fine- and coarse-scale yellow channel images of the infrared image;
the orientation saliency map $IO(c, s, \theta)$ of the infrared image is computed as $IO(c, s, \theta) = |IO(c, \theta) \ominus IO(s, \theta)|$, where $IO(c, \theta)$ and $IO(s, \theta)$ are the fine- and coarse-scale orientation feature maps of the infrared image and $\theta \in \{0, \pi/4, \pi/2, 3\pi/4\}$;
the visual attention saliency map $IS$ of the whole infrared image is computed with the normalization operator $N(\cdot)$: $IS = N(I) + N(IRG) + N(IBY) + N(IO)$;
For the visible light image:
the brightness saliency map $V(c, s)$ is computed as $V(c, s) = |V(c) \ominus V(s)|$, where $V(c)$ is the visible light image at the fine scale and $V(s)$ the visible light image at the coarse scale;
the color saliency maps are computed as follows: the red-green/green-red double-opponent image $VRG(c, s)$ of the visible light image is $VRG(c, s) = |(VR(c) - VG(c)) \ominus (VG(s) - VR(s))|$, where $VR(c)$ and $VR(s)$ are the fine- and coarse-scale red channel images of the visible light image, and $VG(c)$ and $VG(s)$ its fine- and coarse-scale green channel images; the blue-yellow/yellow-blue double-opponent image $VBY(c, s)$ of the visible light image is $VBY(c, s) = |(VB(c) - VY(c)) \ominus (VY(s) - VB(s))|$, where $VB(c)$ and $VB(s)$ are the fine- and coarse-scale blue channel images, and $VY(c)$ and $VY(s)$ the fine- and coarse-scale yellow channel images of the visible light image;
the orientation saliency map $VO(c, s, \theta)$ of the visible light image is computed as $VO(c, s, \theta) = |VO(c, \theta) \ominus VO(s, \theta)|$, where $VO(c, \theta)$ and $VO(s, \theta)$ are the fine- and coarse-scale orientation feature maps of the visible light image and $\theta \in \{0, \pi/4, \pi/2, 3\pi/4\}$;
the visual attention saliency map of the whole visible light image is computed with the normalization operator $N(\cdot)$: $VS = N(V) + N(VRG) + N(VBY) + N(VO)$.
4. The infrared and visible light image fusion method based on salient targets according to claim 3, characterized in that in step 3, on the basis of the visual attention saliency maps of the infrared and visible light images, the point with the maximum gray value in the saliency map is chosen as the first fixation point; after the first salient target region has been selected, the gray value of the first fixation point is set to zero and the point with the maximum gray value in the remaining saliency map is chosen as the second fixation point; the salient target regions in the infrared and visible light images are selected successively in this way and denoted $M_I$ and $M_V$ respectively, where $M_I$ denotes the salient target regions in the infrared image and $M_V$ those in the visible light image; the set of salient target regions $M_{sum}$ in the whole scene is:
$$M_{sum} = M_I + M_V - (M_I \cap M_V).$$
5. The infrared and visible light image fusion method based on salient targets according to claim 4, characterized in that in step 4, for the $M_{sum}$ salient regions, the pixel-level linear weighted fusion algorithm is applied to the salient target regions, giving the fused image of the salient regions:
$$F_{saliency}(i, j) = w_I(i, j) \cdot I(i, j) + w_V(i, j) \cdot V(i, j) + C,$$
where for the $M_{IV}$ regions salient in both the infrared and visible light images, $w_I(i, j) = w_V(i, j) = 0.5$; for the $M_{I-IV}$ regions salient only in the infrared image, $w_I(i, j) = 0.8$ and $w_V(i, j) = 0.2$; and for the $M_{V-IV}$ regions salient only in the visible light image, $w_I(i, j) = 0.2$ and $w_V(i, j) = 0.8$;
the non-salient target regions are fused by the principal component analysis method, giving the fused image $F_{non-saliency}(i, j)$.
6. The infrared and visible light image fusion method based on salient targets according to claim 5, characterized in that in step 5 the fusion result $F(i, j)$ of the infrared and visible light images of the whole scene is computed as:
$$F(i, j) = F_{saliency}(i, j) + F_{non-saliency}(i, j).$$
CN201510111415.0A 2015-03-13 2015-03-13 Infrared and visible light image fusion method based on salient targets Active CN104700381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510111415.0A CN104700381B (en) 2015-03-13 2015-03-13 Infrared and visible light image fusion method based on salient targets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510111415.0A CN104700381B (en) 2015-03-13 2015-03-13 Infrared and visible light image fusion method based on salient targets

Publications (2)

Publication Number Publication Date
CN104700381A true CN104700381A (en) 2015-06-10
CN104700381B CN104700381B (en) 2018-10-12

Family

ID=53347469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510111415.0A Active CN104700381B (en) 2015-03-13 2015-03-13 Infrared and visible light image fusion method based on salient targets

Country Status (1)

Country Link
CN (1) CN104700381B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101873440A (en) * 2010-05-14 2010-10-27 西安电子科技大学 Infrared and visible light video image fusion method based on Surfacelet conversion
CN103366353A (en) * 2013-05-08 2013-10-23 北京大学深圳研究生院 Infrared image and visible-light image fusion method based on saliency region segmentation
CN104408700A (en) * 2014-11-21 2015-03-11 南京理工大学 Morphology and PCA (principal component analysis) based contourlet fusion method for infrared and visible light images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
BIN YANG ET AL.: "Visual attention guided image fusion with sparse representation", OPTIK *
万莉: "Multi-source image fusion based on a bionic vision mechanism", China Master's Theses Full-text Database, Information Science and Technology *
邵静: "Research on a computational model of cooperative visual selective attention", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11010876B2 (en) * 2015-06-26 2021-05-18 Nec Corporation Image processing system, image processing method, and computer-readable recording medium
CN106251355B (en) * 2016-08-03 2018-12-14 江苏大学 A kind of detection method merging visible images and corresponding night vision infrared image
CN106251355A (en) * 2016-08-03 2016-12-21 江苏大学 A kind of detection method merging visible images and corresponding night vision infrared image
CN106530266A (en) * 2016-11-11 2017-03-22 华东理工大学 Infrared and visible light image fusion method based on area sparse representation
CN106530266B (en) * 2016-11-11 2019-11-01 华东理工大学 A kind of infrared and visible light image fusion method based on region rarefaction representation
CN106898008A (en) * 2017-03-01 2017-06-27 南京航空航天大学 Rock detection method and device
CN107292872A (en) * 2017-06-16 2017-10-24 艾松涛 Image processing method/system, computer-readable recording medium and electronic equipment
CN107918748A (en) * 2017-10-27 2018-04-17 南京理工大学 A kind of multispectral two-dimension code recognition device and method
CN108198157A (en) * 2017-12-22 2018-06-22 湖南源信光电科技股份有限公司 Heterologous image interfusion method based on well-marked target extracted region and NSST
CN108288344A (en) * 2017-12-26 2018-07-17 李文清 A kind of efficient forest fire early-warning system
CN108090888B (en) * 2018-01-04 2020-11-13 北京环境特性研究所 Fusion detection method of infrared image and visible light image based on visual attention model
CN108090888A (en) * 2018-01-04 2018-05-29 北京环境特性研究所 The infrared image of view-based access control model attention model and the fusion detection method of visible images
CN108769550A (en) * 2018-05-16 2018-11-06 中国人民解放军军事科学院军事医学研究院 A kind of notable analysis system of image based on DSP and method
CN108769550B (en) * 2018-05-16 2020-07-07 中国人民解放军军事科学院军事医学研究院 Image significance analysis system and method based on DSP
CN109255793A (en) * 2018-09-26 2019-01-22 国网安徽省电力有限公司铜陵市义安区供电公司 A kind of monitoring early-warning system of view-based access control model feature
CN109447909A (en) * 2018-09-30 2019-03-08 安徽四创电子股份有限公司 The infrared and visible light image fusion method and system of view-based access control model conspicuousness
CN109493309A (en) * 2018-11-20 2019-03-19 北京航空航天大学 A kind of infrared and visible images variation fusion method keeping conspicuousness information
CN110210407A (en) * 2019-06-04 2019-09-06 武汉科技大学 A kind of Misty Image well-marked target detection method
CN110489792A (en) * 2019-07-12 2019-11-22 中国人民解放军92942部队 A kind of method, apparatus and server for visual camouflage distance design
CN110489792B (en) * 2019-07-12 2023-03-31 中国人民解放军92942部队 Method and device for designing visible light camouflage distance and server
US20220044374A1 (en) * 2019-12-17 2022-02-10 Dalian University Of Technology Infrared and visible light fusion method
WO2021120408A1 (en) * 2019-12-17 2021-06-24 大连理工大学 Infrared and visible light fusion method based on double-layer optimization
US11823363B2 (en) * 2019-12-17 2023-11-21 Dalian University Of Technology Infrared and visible light fusion method
US11830222B2 (en) 2019-12-17 2023-11-28 Dalian University Of Technology Bi-level optimization-based infrared and visible light fusion method
CN110874827B (en) * 2020-01-19 2020-06-30 长沙超创电子科技有限公司 Turbulent image restoration method and device, terminal equipment and computer readable medium
CN110874827A (en) * 2020-01-19 2020-03-10 长沙超创电子科技有限公司 Turbulent image restoration method and device, terminal equipment and computer readable medium
CN111914422A (en) * 2020-08-05 2020-11-10 北京开云互动科技有限公司 Real-time visual simulation method for infrared features in virtual reality
CN113159229A (en) * 2021-05-19 2021-07-23 深圳大学 Image fusion method, electronic equipment and related product
CN113159229B (en) * 2021-05-19 2023-11-07 深圳大学 Image fusion method, electronic equipment and related products
US11967102B2 (en) 2021-07-16 2024-04-23 Shanghai United Imaging Intelligence Co., Ltd. Key points detection using multiple image modalities
CN116977154A (en) * 2023-09-22 2023-10-31 南方电网数字电网研究院有限公司 Visible light image and infrared image fusion storage method, device, equipment and medium
CN116977154B (en) * 2023-09-22 2024-03-19 南方电网数字电网研究院有限公司 Visible light image and infrared image fusion storage method, device, equipment and medium

Also Published As

Publication number Publication date
CN104700381B (en) 2018-10-12

Similar Documents

Publication Publication Date Title
CN104700381A (en) Infrared and visible light image fusion method based on salient objects
Zhang et al. Deep-IRTarget: An automatic target detector in infrared imagery using dual-domain feature extraction and allocation
Luo et al. Multi-scale traffic vehicle detection based on faster R–CNN with NAS optimization and feature enrichment
Luo et al. Thermal infrared image colorization for nighttime driving scenes with top-down guided attention
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN104850850B (en) A kind of binocular stereo vision image characteristic extracting method of combination shape and color
CN109344701A (en) A kind of dynamic gesture identification method based on Kinect
Zhang et al. Vehicle color recognition using Multiple-Layer Feature Representations of lightweight convolutional neural network
CN113158943A (en) Cross-domain infrared target detection method
Zhou et al. YOLO-CIR: The network based on YOLO and ConvNeXt for infrared object detection
Huang et al. Correlation and local feature based cloud motion estimation
CN116091372B (en) Infrared and visible light image fusion method based on layer separation and heavy parameters
Fleyeh Traffic and road sign recognition
CN109214331A (en) A kind of traffic haze visibility detecting method based on image spectrum
CN105678318A (en) Traffic label matching method and apparatus
Ma et al. An all-weather lane detection system based on simulation interaction platform
CN106887002A (en) A kind of infrared image sequence conspicuousness detection method
Xiao et al. Pedestrian object detection with fusion of visual attention mechanism and semantic computation
Jin et al. Vehicle license plate recognition for fog‐haze environments
Lu et al. A cross-scale and illumination invariance-based model for robust object detection in traffic surveillance scenarios
Bai et al. Road type classification of MLS point clouds using deep learning
Bala et al. Image simulation for automatic license plate recognition
Ying et al. Region-aware RGB and near-infrared image fusion
CN115546667A (en) Real-time lane line detection method for unmanned aerial vehicle scene
CN201374082Y (en) Augmented reality system based on image unique point extraction and random tree classification

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant