CN104700381B - Infrared and visible light image fusion method based on salient targets - Google Patents

Infrared and visible light image fusion method based on salient targets

Info

Publication number
CN104700381B
CN104700381B (application CN201510111415.0A)
Authority
CN
China
Prior art keywords
image
visible images
infrared
salient
infrared image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510111415.0A
Other languages
Chinese (zh)
Other versions
CN104700381A (en)
Inventor
邵静
秦晅
卢旻昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 28 Research Institute
Original Assignee
CETC 28 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 28 Research Institute filed Critical CETC 28 Research Institute
Priority to CN201510111415.0A
Publication of CN104700381A
Application granted
Publication of CN104700381B
Legal status: Active

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an infrared and visible light image fusion method based on salient targets, comprising the following steps: for a given infrared image and visible light image of the same scene containing several targets, nonlinear scale space representations of the infrared and visible light images are established respectively; on the basis of the nonlinear scale space representations, visual attention saliency maps of the infrared and visible light images are computed with a visual attention computation model; on the basis of these saliency maps, the salient target regions in the infrared and visible light images are selected respectively with an inhibition-of-return mechanism, and all salient target regions in the entire scene are determined; the infrared and visible light images are registered, the salient target regions are fused with a pixel-level fusion algorithm, and the non-salient regions are fused with a feature-level fusion algorithm; the results are combined to generate the fused infrared and visible light image of the whole scene.

Description

Infrared and visible light image fusion method based on salient targets
Technical field
The present invention relates to the field of multi-source image fusion, and in particular to a multi-level infrared and visible light image fusion method based on salient targets.
Background technology
Image fusion refers to the comprehensive analysis technique of generating a new image from multi-resolution or multi-modality image data through spatial registration and information complementation. Compared with a single-sensor image, a fused image can exploit the information of each source image to the greatest extent, improving the resolution and clarity of the image as well as the sensitivity, perception range, precision and interference resistance of target perception, thereby reducing the incompleteness and uncertainty of target perception and improving target recognition accuracy and scene interpretation ability. The general flow of image fusion is as follows. First, preprocessing operations such as noise filtering are applied to the source images; second, the images are registered in space and time, i.e. mapped into the same spatio-temporal coordinate system to guarantee the consistency of their positions; third, the registered images are processed with the appropriate method; finally, fusion is carried out according to a given fusion rule to obtain the fused image.
Visible light and infrared detection sensors are the two most common payloads of tactical reconnaissance platforms such as early warning aircraft and unmanned aerial vehicles. Visible light imaging works in the 0.4-1.0 μm band; it offers high resolution and good concealment and captures rich contrast, color and shape information, but it is affected by ambient illumination, cannot work at night, and has no ability to see through camouflage. Infrared imaging works in the 8-14 μm (long-wave infrared) and 3-5 μm (mid-wave infrared) bands; it can penetrate smoke, fog and snow to a certain extent, offers good concealment and fairly high imaging resolution, and achieves detection ranges from several kilometers to over ten kilometers, but it is affected by weather and the image becomes unstable at long range. When the temperature varies little across the target or the thermal emission contrast with the background is weak, the infrared image contains little detail of the target or background while the visible light image may contain abundant detail; conversely, in darkness or in smoke, cloud or fog, the visible light image is of poor quality while the targets in the infrared image remain clearly discernible. Fusing the visible light image with the infrared image overcomes the influence of environmental noise on target detection, captures the shape and fine details of the target comprehensively, is independent of ambient illumination and can be used around the clock, which is of great significance to improving the detection, image analysis and intelligence interpretation performance of reconnaissance platforms.
The key tactical plans formulated by the US Department of Defense in different periods devote a significant share of their tasks to multi-source image fusion. The LANTIRN pod, which performed well in the Gulf War, is an image fusion system that superimposes and displays the information of multiple sensors such as forward-looking infrared, laser ranging and visible light cameras. In 1995, Texas Instruments won a contract from the US Night Vision and Electronic Sensors Directorate (NVESD) to integrate infrared and third-generation low-light-level image fusion into the Advanced Helicopter Pilotage (AHP) sensor system, with the signal processing performed by a TMS320C30 DSP. In 2003, the US Department of Defense issued a white paper on horizontal fusion in military transformation ("Transformational about Horizontal Fusion"), focusing on horizontal fusion and especially data fusion, with the goal of strengthening the horizontal interoperability, in particular the data fusion capability, of systems. In 2006, the Office of the Secretary of Defense (OSD) published its annual list of technologies to be validated in the coming years, to promote the rapid development of these fields.
Domestic work on multi-source image fusion concentrates on algorithm research, including fusion methods based on the DT-CWT transform, multi-scale wavelet sampling and region segmentation. Most existing systems are limited to simple superposition or selection operations on the image pixels, or on the different color channels, of images of the same scene taken at different times.
Summary of the invention
Object of the invention: the technical problem to be solved by the present invention is to address the deficiencies of existing infrared and visible light image fusion methods by providing an infrared and visible light image fusion method based on salient targets.
To solve the above technical problem, the invention discloses an infrared and visible light image fusion method based on salient targets, comprising the following steps:
Step 1: for a given infrared image and visible light image of the same scene containing several (more than one) targets, establish the multi-scale space representations of the infrared and visible light images respectively;
Step 2: on the basis of the nonlinear scale space representations of the images, compute the visual attention saliency maps of the infrared and visible light images with a visual attention computation model;
Step 3: on the basis of the infrared and visible light saliency maps, select the salient target regions in the infrared and visible light images respectively with an inhibition-of-return mechanism, and determine all salient target regions in the entire scene;
Step 4: register the infrared image and the visible light image, fuse the salient target regions with a pixel-level fusion algorithm, and fuse the non-salient regions with a feature-level fusion algorithm;
Step 5: combine the fusion results of the salient target regions and of the non-salient regions to generate the fused infrared and visible light image of the whole scene.
In step 1, the present invention establishes the multi-scale space representations of the infrared and visible light images with a nonlinear scale space method. The multi-scale representations of the infrared image and the visible light image are:
$L: I(x_1,y_1) \times t_1 \to I(x_1,y_1;t_1)$, where $L$ is the scale space transform, $x_1$ and $y_1$ are the abscissa and ordinate in the infrared image $I$, and $t_1$ is the scale factor of the infrared multi-scale space; $L: V(x_2,y_2) \times t_2 \to V(x_2,y_2;t_2)$, where $x_2$ and $y_2$ are the abscissa and ordinate in the visible light image $V$, and $t_2$ is the scale factor of the visible light multi-scale space.
For the infrared image: $I(x_1,y_1;t_1) = I(x_1,y_1;0) * g(x_1,y_1;t_1)$, with the infrared scale-space kernel $g(x_1,y_1;t_1) = \frac{1}{2\pi t_1}\exp\!\big(-\frac{x_1^2+y_1^2}{2t_1}\big)$.
For the visible light image: $V(x_2,y_2;t_2) = V(x_2,y_2;0) * g(x_2,y_2;t_2)$, with the visible light scale-space kernel $g(x_2,y_2;t_2) = \frac{1}{2\pi t_2}\exp\!\big(-\frac{x_2^2+y_2^2}{2t_2}\big)$.
In step 2, the present invention establishes the visual attention computation model of the infrared and visible light images and computes their visual attention saliency maps, including the low-level visual feature maps of brightness, color and orientation of the infrared and visible light images and the corresponding feature saliency maps.
For the infrared image:
Brightness saliency map: $I(c,s) = |I(c) \ominus I(s)|$, where $c$ is the fine (center) scale factor, $s$ the coarse (surround) scale factor, $I(c)$ the infrared image at the fine scale and $I(s)$ the infrared image at the coarse scale.
Color saliency maps:
Red-green/green-red double-opponency map of the infrared image: $IRG(c,s) = |(IR(c) - IG(c)) \ominus (IG(s) - IR(s))|$, where $IR(c)$ is the fine-scale red channel image of the infrared image, $IR(s)$ the coarse-scale red channel image, $IG(c)$ the fine-scale green channel image, and $IG(s)$ the coarse-scale green channel image;
Blue-yellow/yellow-blue double-opponency map of the infrared image: $IBY(c,s) = |(IB(c) - IY(c)) \ominus (IY(s) - IB(s))|$, where $IB(c)$ is the fine-scale blue channel image, $IB(s)$ the coarse-scale blue channel image, $IY(c)$ the fine-scale yellow channel image, and $IY(s)$ the coarse-scale yellow channel image.
Orientation saliency map: $IO(c,s,\theta) = |IO(c,\theta) \ominus IO(s,\theta)|$, where $IO(c,\theta)$ is the fine-scale orientation feature map of the infrared image, $IO(s,\theta)$ the coarse-scale orientation feature map, and $\theta \in \{0°, 45°, 90°, 135°\}$.
The visual attention saliency map of the whole infrared image is computed with the normalization operator $N(\cdot)$:
$IS = N(\bar I) + N(\overline{IRG}) + N(\overline{IBY}) + N(\overline{IO})$.
For the visible light image:
Brightness saliency map: $V(c,s) = |V(c) \ominus V(s)|$, where $c$ is the fine (center) scale factor, $s$ the coarse (surround) scale factor, $V(c)$ the visible light image at the fine scale and $V(s)$ the visible light image at the coarse scale.
Color saliency maps:
Red-green/green-red double-opponency map of the visible light image: $VRG(c,s) = |(VR(c) - VG(c)) \ominus (VG(s) - VR(s))|$, where $VR(c)$ is the fine-scale red channel image of the visible light image, $VR(s)$ the coarse-scale red channel image, $VG(c)$ the fine-scale green channel image, and $VG(s)$ the coarse-scale green channel image;
Blue-yellow/yellow-blue double-opponency map of the visible light image: $VBY(c,s) = |(VB(c) - VY(c)) \ominus (VY(s) - VB(s))|$, where $VB(c)$ is the fine-scale blue channel image, $VB(s)$ the coarse-scale blue channel image, $VY(c)$ the fine-scale yellow channel image, and $VY(s)$ the coarse-scale yellow channel image.
Orientation saliency map: $VO(c,s,\theta) = |VO(c,\theta) \ominus VO(s,\theta)|$, where $VO(c,\theta)$ is the fine-scale orientation feature map of the visible light image, $VO(s,\theta)$ the coarse-scale orientation feature map, and $\theta \in \{0°, 45°, 90°, 135°\}$.
The visual attention saliency map of the whole visible light image is computed with the normalization operator $N(\cdot)$:
$VS = N(\bar V) + N(\overline{VRG}) + N(\overline{VBY}) + N(\overline{VO})$.
In step 3, on the basis of the infrared and visible light saliency maps, the point with the maximum gray value in the saliency map is chosen as the first fixation point; after the first salient target region has been selected, the gray value of the first fixation point is set to zero and the point with the maximum gray value in the saliency map is chosen again as the second fixation point. In this way the salient target regions in the infrared and visible light images are selected one after another; their numbers are denoted $M_I$ and $M_V$ respectively, where $M_I$ is the number of salient target regions in the infrared image and $M_V$ the number in the visible light image. The number of salient target regions in the entire scene is $M_{sum} = M_I + M_V - (M_I \cap M_V)$.
In step 4, the $M_{sum}$ salient regions are fused with the pixel-level linear weighted fusion algorithm, giving the fused image of the salient regions $F_{saliency}(i,j) = w_I(i,j) \cdot I(i,j) + w_V(i,j) \cdot V(i,j) + C$. For the $M_{IV}$ regions that are salient in both the infrared and the visible light image, $w_I(i,j) = w_V(i,j) = 0.5$; for the $M_{I-IV}$ regions salient only in the infrared image, $w_I(i,j) = 0.8$ and $w_V(i,j) = 0.2$; for the $M_{V-IV}$ regions salient only in the visible light image, $w_I(i,j) = 0.2$ and $w_V(i,j) = 0.8$. The non-salient regions are fused with a method based on principal component analysis, giving the fused image $F_{non-saliency}(i,j)$.
In step 5, the fusion result of the infrared and visible light images of the entire scene is computed as $F(i,j) = F_{saliency}(i,j) + F_{non-saliency}(i,j)$.
Advantageous effects: the invention establishes a multi-level fusion model of infrared and visible light images; it selects the salient targets in the infrared and visible light images respectively with a visual attention computation model, fuses the salient target regions at the pixel level to guarantee the fusion quality of the infrared and visible light images, and fuses the remaining non-salient background regions at the feature level, which greatly reduces the amount of data to be processed. The proposed method thus improves the fusion efficiency over the large non-salient regions while preserving the fusion quality of the salient targets, balancing fusion quality and efficiency.
Description of the drawings
The present invention is further illustrated below with reference to the accompanying drawings and the detailed description; the above and/or other advantages of the invention will become clearer thereby.
Fig. 1 is the flow chart of the infrared and visible light image fusion method based on salient targets.
Fig. 2 is an infrared image of a scene.
Fig. 3 is a visible light image of the same scene as Fig. 2.
Fig. 4 is the fused image obtained with the infrared and visible light image fusion method based on salient targets.
Detailed description of the embodiments
With reference to Fig. 1, the infrared and visible light image multi-level fusion method based on salient targets of the present invention comprises the following steps:
Step 1: establish the multi-scale space representations of the infrared image and the visible light image with the nonlinear scale space method. Let the infrared image of a scene be $I(x_1,y_1)$, $(x_1,y_1) \in R^2$, where $x_1$ and $y_1$ are the abscissa and ordinate in the infrared image $I$; let the visible light image be $V(x_2,y_2)$, $(x_2,y_2) \in R^2$, where $x_2$ and $y_2$ are the abscissa and ordinate in the visible light image $V$. The scale factor is denoted $t \in R^+$; the scale space representations of the infrared and visible light images are respectively $L: I(x_1,y_1) \times t_1 \to I(x_1,y_1;t_1)$, with $t_1$ the scale factor of the infrared multi-scale space, and $L: V(x_2,y_2) \times t_2 \to V(x_2,y_2;t_2)$, with $t_2$ the scale factor of the visible light multi-scale space.
$I(x_1,y_1;0) = I(x_1,y_1), \quad L_{t+s}(I) = L_t(L_s(I))$ (1)
$V(x_2,y_2;0) = V(x_2,y_2), \quad L_{t+s}(V) = L_t(L_s(V))$ (2)
where $L$ is the scale space transform and the initial value of the scale $t$ is that of the original image, generally 0. The diffusion equation satisfied by $L$ is:
$\partial_t L = \operatorname{div}(D\,\nabla L)$ (3), where $\nabla L$ is the image gradient and $D$ is a positive definite symmetric matrix, the diffusion tensor. If $D\nabla L$ and $\nabla L$ are parallel, the diffusion is isotropic, otherwise anisotropic. If $D$ is a positive constant, the corresponding scale space is the linear scale space; if $D$ is a scalar function of the image structure, it is a nonlinear isotropic scale space; if $D$ is a tensor function of the image structure, it is an anisotropic nonlinear scale space.
The linear scale space generated by the Gaussian kernel is a Gaussian convolution smoothing process that creates no new structure.
For the infrared image one obtains:
$I(x_1,y_1;t_1) = I(x_1,y_1;0) * g(x_1,y_1;t_1)$, with the infrared scale-space kernel $g(x_1,y_1;t_1) = \frac{1}{2\pi t_1}\exp\!\big(-\frac{x_1^2+y_1^2}{2t_1}\big)$ (4)
For the visible light image one obtains:
$V(x_2,y_2;t_2) = V(x_2,y_2;0) * g(x_2,y_2;t_2)$, with the visible light scale-space kernel $g(x_2,y_2;t_2) = \frac{1}{2\pi t_2}\exp\!\big(-\frac{x_2^2+y_2^2}{2t_2}\big)$ (5)
In the linear scale space representation, the Gaussian kernel smooths not only the noise but also some important features, so that these features become hard to extract at coarse scales; moreover, the isotropic linear diffusion equation makes the correspondence of edges between different scales hard to determine, so that the correspondences between some coarse and fine scale representations are difficult to compute. So that details such as object edges are not smoothed away, the diffusion should adapt to the local region of the processed image and preserve the information that should not be blurred, i.e. the diffusion should be nonlinear. Establishing the nonlinear scale space representation of the image with a nonlinear isotropic diffusion equation guarantees that the region contours are enhanced in the image at every scale and facilitates the computation of correspondences between different scales; the diffusion equation is given by formula (6) or (7):
$\partial_t L = \operatorname{div}\big(g(|\nabla L|)\,\nabla L\big)$, with diffusivity $g(|\nabla L|) = \exp\!\big(-(|\nabla L|/\lambda)^2\big)$ (6)
or $g(|\nabla L|) = \dfrac{1}{1 + (|\nabla L|/\lambda)^2}$ (7)
where $\lambda > 0$ is the edge threshold.
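As a concrete illustration, a minimal sketch of the nonlinear diffusion above follows, discretized in the classic Perona-Malik fashion with the two diffusivities of formulas (6) and (7). The iteration count, time step, wrap-around boundary handling and the assumption that the input image is scaled to [0, 1] are illustrative choices, not values fixed by the patent (the embodiment only fixes λ = 0.5).

    import numpy as np

    def nonlinear_diffusion(img, n_iter=20, dt=0.15, lam=0.5, option=1):
        """Perona-Malik style nonlinear isotropic diffusion (formulas (6)/(7)).

        Expects an image scaled to [0, 1]. option=1 uses
        g = exp(-(|grad L| / lambda)^2); option=2 uses
        g = 1 / (1 + (|grad L| / lambda)^2).
        """
        L = img.astype(np.float64).copy()
        for _ in range(n_iter):
            # one-sided differences toward the four neighbours
            # (np.roll gives periodic boundaries, an illustrative simplification)
            dN = np.roll(L, -1, axis=0) - L
            dS = np.roll(L, 1, axis=0) - L
            dE = np.roll(L, -1, axis=1) - L
            dW = np.roll(L, 1, axis=1) - L
            if option == 1:
                gN, gS = np.exp(-(dN / lam) ** 2), np.exp(-(dS / lam) ** 2)
                gE, gW = np.exp(-(dE / lam) ** 2), np.exp(-(dW / lam) ** 2)
            else:
                gN, gS = 1 / (1 + (dN / lam) ** 2), 1 / (1 + (dS / lam) ** 2)
                gE, gW = 1 / (1 + (dE / lam) ** 2), 1 / (1 + (dW / lam) ** 2)
            # explicit update: L <- L + dt * div(g * grad L)
            L += dt * (gN * dN + gS * dS + gE * dE + gW * dW)
        return L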
Step 2: on the basis of the nonlinear scale space representation of the images, compute the visual attention saliency maps of the infrared and visible light images with the visual attention computation model.
On the basis of the scale space representations of the infrared and visible light images from step 1, color, brightness and orientation are extracted as the low-level features guiding visual attention, the feature saliency maps are computed through center-surround differences of the receptive field, and the visual attention saliency map is finally obtained through the normalization operator.
(1) Low-level visual feature maps
For the infrared image:
Brightness feature map: let $Ir(t_1)$, $Ig(t_1)$ and $Ib(t_1)$ be the red, green and blue channels of the original infrared image, where $t_1$ is the scale factor; the brightness map is:
$I(t_1) = (Ir(t_1) + Ig(t_1) + Ib(t_1))/3$ (8)
Color feature maps: normalizing the $r(t_1)$, $g(t_1)$ and $b(t_1)$ channels by $I(t_1)$ yields the broadly tuned red, green, blue and yellow channel values, as in formulas (9)-(12):
$IR(t_1) = Ir(t_1) - (Ig(t_1) + Ib(t_1))/2$ (9)
$IG(t_1) = Ig(t_1) - (Ir(t_1) + Ib(t_1))/2$ (10)
$IB(t_1) = Ib(t_1) - (Ir(t_1) + Ig(t_1))/2$ (11)
$IY(t_1) = Ir(t_1) + Ig(t_1) - 2*(|Ir(t_1) - Ig(t_1)| + Ib(t_1))$ (12)
where negative values are set to 0.
Orientation feature maps: the orientation feature maps of the infrared image are computed with Gabor filters, as in formula (13):
$g(x,y) = \exp\!\Big(-\pi\big[\tfrac{(x-x_0)^2}{\alpha^2} + \tfrac{(y-y_0)^2}{\beta^2}\big]\Big)\exp\!\big(-2\pi i[\xi_0(x-x_0) + \upsilon_0(y-y_0)]\big)$ (13)
where $(x_0,y_0)$ is the receptive field center coordinate in the spatial domain, $x_0$ the abscissa and $y_0$ the ordinate; $(\xi_0,\upsilon_0)$ is the optimal spatial frequency of the filter in the frequency domain, $\xi_0$ for the real part and $\upsilon_0$ for the imaginary part; $\alpha$ and $\beta$ are the standard deviations of the Gaussian envelope along the x and y axes. The present invention takes the outputs of Gabor filters in four orientations as the orientation visual feature maps: $\theta_i = i\pi/n$, where $n$ is the constant 4 and $i$ indexes the orientations with values 0, 1, 2, 3.
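For reference, a minimal sketch of the low-level feature extraction of formulas (8)-(13) follows; the Gabor kernel size and its sigma/lambd/gamma parameters are illustrative assumptions (the patent does not fix them), and OpenCV's real-valued Gabor kernel stands in for the complex filter of formula (13).

    import cv2
    import numpy as np

    def lowlevel_features(bgr):
        """Brightness map, broadly tuned R/G/B/Y channels (formulas (8)-(12),
        negatives clipped to 0) and four-orientation Gabor maps (formula (13))."""
        b, g, r = [c.astype(np.float64) for c in cv2.split(bgr)]
        I = (r + g + b) / 3.0                                   # (8)
        R = np.maximum(r - (g + b) / 2.0, 0)                    # (9)
        G = np.maximum(g - (r + b) / 2.0, 0)                    # (10)
        B = np.maximum(b - (r + g) / 2.0, 0)                    # (11)
        Y = np.maximum(r + g - 2 * (np.abs(r - g) + b), 0)      # (12)
        O = []
        for i in range(4):                                      # theta_i = i*pi/4
            k = cv2.getGaborKernel((9, 9), sigma=2.0, theta=i * np.pi / 4,
                                   lambd=5.0, gamma=0.5, psi=0)
            O.append(cv2.filter2D(I, cv2.CV_64F, k))
        return I, R, G, B, Y, O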
For the visible light image:
Brightness feature map: let $Vr(t_2)$, $Vg(t_2)$ and $Vb(t_2)$ be the red, green and blue channels of the original visible light image, where $t_2$ is the scale factor; the brightness map is:
$V(t_2) = (Vr(t_2) + Vg(t_2) + Vb(t_2))/3$ (14)
Color feature maps: normalizing the $Vr(t_2)$, $Vg(t_2)$ and $Vb(t_2)$ channels by $V(t_2)$ yields the broadly tuned red, green, blue and yellow channel values, as in formulas (15)-(18):
$VR(t_2) = Vr(t_2) - (Vg(t_2) + Vb(t_2))/2$ (15)
$VG(t_2) = Vg(t_2) - (Vr(t_2) + Vb(t_2))/2$ (16)
$VB(t_2) = Vb(t_2) - (Vr(t_2) + Vg(t_2))/2$ (17)
$VY(t_2) = Vr(t_2) + Vg(t_2) - 2*(|Vr(t_2) - Vg(t_2)| + Vb(t_2))$ (18)
where negative values are set to 0.
Orientation feature maps: the orientation feature maps of the visible light image are computed with Gabor filters, as in formula (19), of the same form as formula (13), where $(x_0,y_0)$ is the receptive field center coordinate in the spatial domain, $(\xi_0,\upsilon_0)$ the optimal spatial frequency of the filter in the frequency domain, and $\alpha$ and $\beta$ the standard deviations of the Gaussian envelope along the x and y axes. The outputs of Gabor filters in four orientations are taken as the orientation visual feature maps: $\theta_i = i\pi/n$, with $n = 4$ and $i = 0, 1, 2, 3$.
(2) Feature saliency maps
For the infrared image:
The brightness map $I$, the four color components $IR$, $IG$, $IB$, $IY$ and the four orientation feature maps are represented in scale space; the center image corresponds to a fine (high-resolution) scale and the surround image to a coarse (low-resolution) scale, implementing the center-surround difference strategy of the receptive field and its integration field to obtain the feature saliency maps.
Brightness saliency maps:
The brightness saliency maps are generated from luminance contrast and denoted $I(c,s)$:
$I(c,s) = |I(c) \ominus I(s)|$
where $c$ is the fine (high-resolution) scale factor of the infrared scale space representation, $s$ the coarse (low-resolution) scale factor, and $\ominus$ the center-surround difference operator, implemented by interpolating the coarse-scale image to the fine scale and subtracting it point by point from the fine-scale image.
Color saliency maps:
Red-green/green-red double-opponency maps of the infrared image:
$IRG(c,s) = |(IR(c) - IG(c)) \ominus (IG(s) - IR(s))|$
where $IR(c)$ is the fine-scale red channel image of the infrared image, $IR(s)$ the coarse-scale red channel image, $IG(c)$ the fine-scale green channel image, and $IG(s)$ the coarse-scale green channel image.
Blue-yellow/yellow-blue double-opponency maps of the infrared image:
$IBY(c,s) = |(IB(c) - IY(c)) \ominus (IY(s) - IB(s))|$
where $IB(c)$ is the fine-scale blue channel image of the infrared image, $IB(s)$ the coarse-scale blue channel image, $IY(c)$ the fine-scale yellow channel image, and $IY(s)$ the coarse-scale yellow channel image.
Orientation saliency maps:
The local orientation information of the image is obtained with the Gabor pyramid $IO(\sigma,\theta)$, where $\sigma$ is the scale factor and $\theta \in \{0°, 45°, 90°, 135°\}$. Through the computation of local orientation contrast, the orientation saliency maps $IO(c,s,\theta)$ are encoded as a group:
$IO(c,s,\theta) = |IO(c,\theta) \ominus IO(s,\theta)|$
For the visible light image:
The brightness map $V$, the four color components $VR$, $VG$, $VB$, $VY$ and the four orientation feature maps are represented in scale space; the center image corresponds to a fine (high-resolution) scale and the surround image to a coarse (low-resolution) scale, implementing the center-surround difference strategy of the receptive field and its integration field to obtain the feature saliency maps.
Brightness saliency maps:
The brightness saliency maps are generated from luminance contrast and denoted $V(c,s)$:
$V(c,s) = |V(c) \ominus V(s)|$
where $c$ is the fine (high-resolution) scale factor of the visible light scale space representation, $s$ the coarse (low-resolution) scale factor, and $\ominus$ the center-surround difference operator as above.
Color saliency maps:
Red-green/green-red double-opponency maps of the visible light image:
$VRG(c,s) = |(VR(c) - VG(c)) \ominus (VG(s) - VR(s))|$
where $VR(c)$ is the fine-scale red channel image of the visible light image, $VR(s)$ the coarse-scale red channel image, $VG(c)$ the fine-scale green channel image, and $VG(s)$ the coarse-scale green channel image.
Blue-yellow/yellow-blue double-opponency maps of the visible light image:
$VBY(c,s) = |(VB(c) - VY(c)) \ominus (VY(s) - VB(s))|$
where $VB(c)$ is the fine-scale blue channel image of the visible light image, $VB(s)$ the coarse-scale blue channel image, $VY(c)$ the fine-scale yellow channel image, and $VY(s)$ the coarse-scale yellow channel image.
Orientation saliency maps:
The local orientation information of the image is obtained with the Gabor pyramid $VO(\sigma,\theta)$, where $\sigma$ is the scale factor and $\theta \in \{0°, 45°, 90°, 135°\}$. Through the computation of local orientation contrast, the orientation saliency maps $VO(c,s,\theta)$ are encoded as a group:
$VO(c,s,\theta) = |VO(c,\theta) \ominus VO(s,\theta)|$
(3) Visual attention saliency map
The low-level feature maps of brightness, color and orientation have different dynamic ranges and extraction mechanisms. If all feature saliency maps were combined directly, objects that are strongly salient in only a few maps could be masked by noise or by weakly salient objects in many maps. The present invention therefore uses the normalization operator $N(\cdot)$ to enhance feature saliency maps with few salient peaks and to weaken feature saliency maps with many salient peaks. For each feature saliency map, the operator performs the following operations: 1) normalize the feature map to the range $[0,1]$ to eliminate amplitude differences between features; 2) compute the mean $\bar m$ of all local maxima other than the global maximum $M$; 3) multiply the feature saliency map by $(M - \bar m)^2$.
For the infrared image:
The brightness, color and orientation feature saliency maps are normalized with the normalization operator $N(\cdot)$ and combined across scales into $\bar I$, $\overline{IRG}$, $\overline{IBY}$ and $\overline{IO}$:
$\bar I = \bigoplus_{c}\bigoplus_{s} N(I(c,s)), \quad \overline{IRG} = \bigoplus_{c}\bigoplus_{s} N(IRG(c,s)), \quad \overline{IBY} = \bigoplus_{c}\bigoplus_{s} N(IBY(c,s)), \quad \overline{IO} = \bigoplus_{\theta}\bigoplus_{c}\bigoplus_{s} N(IO(c,s,\theta))$
where the symbol $\oplus$ denotes point-by-point across-scale summation.
The infrared visual attention saliency map is computed by formula (22):
$IS = N(\bar I) + N(\overline{IRG}) + N(\overline{IBY}) + N(\overline{IO})$ (22)
For the visible light image:
The brightness, color and orientation feature saliency maps are normalized with the normalization operator $N(\cdot)$ and combined across scales into $\bar V$, $\overline{VRG}$, $\overline{VBY}$ and $\overline{VO}$:
$\bar V = \bigoplus_{c}\bigoplus_{s} N(V(c,s)), \quad \overline{VRG} = \bigoplus_{c}\bigoplus_{s} N(VRG(c,s)), \quad \overline{VBY} = \bigoplus_{c}\bigoplus_{s} N(VBY(c,s)), \quad \overline{VO} = \bigoplus_{\theta}\bigoplus_{c}\bigoplus_{s} N(VO(c,s,\theta))$
where the symbol $\oplus$ denotes point-by-point across-scale summation.
The visible light visual attention saliency map is likewise computed by formula (22):
$VS = N(\bar V) + N(\overline{VRG}) + N(\overline{VBY}) + N(\overline{VO})$
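Putting the pieces together, formula (22) might be computed as in the following sketch (normalize_N and the map lists are those of the sketches above; resizing every map to a common size stands in for the across-scale sum $\oplus$):

    import cv2

    def attention_saliency(I_maps, RG_maps, BY_maps, O_maps):
        """Normalize each feature saliency map with N(.), sum across scales,
        normalize the four summed maps again and add them (formula (22))."""
        def across_scale_sum(maps):
            h, w = maps[0].shape[:2]
            return sum(cv2.resize(m, (w, h)) for m in maps)
        I_bar = across_scale_sum([normalize_N(m) for m in I_maps])
        RG_bar = across_scale_sum([normalize_N(m) for m in RG_maps])
        BY_bar = across_scale_sum([normalize_N(m) for m in BY_maps])
        O_bar = across_scale_sum([normalize_N(m) for m in O_maps])
        return (normalize_N(I_bar) + normalize_N(RG_bar)
                + normalize_N(BY_bar) + normalize_N(O_bar))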
Step 3: on the basis of the infrared and visible light visual attention saliency maps, select the salient target regions in the infrared and visible light images respectively with the inhibition-of-return mechanism, and determine all salient target regions in the entire scene.
After the saliency map has been computed in step 2, the point with the maximum gray value in the saliency map is chosen as the first fixation point, and a rectangle of 1/16 of the original image size centered on this point selects the first salient target region. After the first salient target region has been selected, the gray value of the first fixation point is set to zero, the point with the maximum gray value in the saliency map is chosen again as the second fixation point, and a rectangle of 1/16 of the original image size selects the second salient target region. Iterating this loop selects the salient target regions in the original image one by one. The numbers of salient target regions selected in the infrared and visible light images are $M_I$ and $M_V$ respectively.
The number of salient target regions in the entire scene is $M_{sum} = M_I + M_V - (M_I \cap M_V)$.
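In code, the winner-take-all loop with inhibition of return might look like the following sketch. The stopping rule (a fraction of the initial peak, capped at max_regions) is an illustrative assumption, and the whole 1/16-size rectangle is suppressed here so that the next maximum falls outside it, a slight strengthening of the text above, which zeroes the fixation point itself.

    import numpy as np

    def select_salient_regions(sal, max_regions=10, thresh=0.2):
        """Repeatedly take the saliency maximum as the fixation point, record a
        rectangle of 1/16 of the image area around it, and suppress it."""
        sal = sal.astype(np.float64).copy()
        H, W = sal.shape
        rh, rw = H // 4, W // 4     # a 1/4 x 1/4 rectangle = 1/16 of the area
        regions = []
        peak0 = sal.max()
        for _ in range(max_regions):
            y, x = np.unravel_index(np.argmax(sal), sal.shape)
            if sal[y, x] < thresh * peak0:
                break
            y0, x0 = max(0, y - rh // 2), max(0, x - rw // 2)
            y1, x1 = min(H, y0 + rh), min(W, x0 + rw)
            regions.append((y0, x0, y1, x1))
            sal[y0:y1, x0:x1] = 0.0  # inhibition of return
        return regions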
Step 4: register the infrared image and the visible light image, fuse the salient target regions with the pixel-level fusion algorithm, and fuse the non-salient regions with the feature-level fusion algorithm.
The infrared image $I$ and the visible light image $V$ are registered with a key-point matching method.
For the salient regions, step 3 gives $M_{sum}$ salient targets in the entire scene, of which $M_{IV} = M_I \cap M_V$ regions are salient in both the infrared and the visible light image, $M_{I-IV} = M_I - M_{IV}$ regions are salient only in the infrared image, and $M_{V-IV} = M_V - M_{IV}$ regions are salient only in the visible light image. The salient target regions are fused with the pixel-level linear weighted fusion algorithm, giving the fused image of the salient regions $F_{saliency}(i,j)$ as in formula (23):
$F_{saliency}(i,j) = w_I(i,j) \cdot I(i,j) + w_V(i,j) \cdot V(i,j) + C$ (23)
where $I(i,j)$ and $V(i,j)$ denote the pixel at position $(i,j)$ in the source images and $F(i,j)$ the pixel at position $(i,j)$ in the fused image; $w_I(i,j)$ and $w_V(i,j)$ are weights with $w_I(i,j) + w_V(i,j) = 1$. For the $M_{IV}$ regions salient in both the infrared and the visible light image, $w_I(i,j) = w_V(i,j) = 0.5$; for the $M_{I-IV}$ regions salient only in the infrared image, $w_I(i,j) = 0.8$ and $w_V(i,j) = 0.2$; for the $M_{V-IV}$ regions salient only in the visible light image, $w_I(i,j) = 0.2$ and $w_V(i,j) = 0.8$.
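The weight assignment of formula (23) translates directly into a masked weighted sum; in the sketch below, mask_I and mask_V are boolean maps rasterized from the rectangles selected in step 3 (hypothetical names; the weights are those given above).

    import numpy as np

    def fuse_salient(I, V, mask_I, mask_V, C=0.0):
        """Pixel-level linear weighted fusion of the salient regions (formula (23))."""
        union = mask_I | mask_V
        wI = np.zeros(I.shape, dtype=np.float64)
        wI[mask_I & mask_V] = 0.5    # salient in both images
        wI[mask_I & ~mask_V] = 0.8   # salient only in the infrared image
        wI[~mask_I & mask_V] = 0.2   # salient only in the visible image
        wV = np.where(union, 1.0 - wI, 0.0)
        F = wI * I + wV * V + C * union
        return F, union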
For the non-salient regions, the non-salient regions of the infrared and visible light images are fused with a feature-level image fusion algorithm; in this patent, the fusion of the non-salient regions is based on principal component analysis (PCA).
The infrared image and the visible light image are each transformed by the PCA transform to obtain their principal components; the first principal component images of the infrared and visible light images are histogram-matched, the first principal component is then replaced with the panchromatic image, and the fused image $F_{non-saliency}(i,j)$ is obtained by the inverse PCA transform.
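The PCA step is described tersely; the sketch below uses a common covariance-based PCA fusion over the non-salient pixels (the dominant eigenvector of the 2x2 covariance of the two sources gives the weights). This is one standard reading, not necessarily the exact histogram-matching and component-substitution pipeline of the patent.

    import numpy as np

    def fuse_pca(I, V, nonsal_mask):
        """Weight the two sources by the first principal component of their
        joint covariance, over the non-salient region only."""
        x = np.stack([I[nonsal_mask], V[nonsal_mask]])
        eigvals, eigvecs = np.linalg.eigh(np.cov(x))
        w = np.abs(eigvecs[:, np.argmax(eigvals)])
        w = w / w.sum()
        F = np.zeros(I.shape, dtype=np.float64)
        F[nonsal_mask] = w[0] * I[nonsal_mask] + w[1] * V[nonsal_mask]
        return F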
Step 5: combine the fusion results of the salient target regions and of the non-salient regions to generate the fused infrared and visible light image of the whole scene.
On the basis of the fused image of the salient target regions $F_{saliency}(i,j)$ and the fused image of the non-salient regions $F_{non-saliency}(i,j)$ computed in step 4, the fusion result of the infrared and visible light images of the entire scene is computed as in formula (24):
$F(i,j) = F_{saliency}(i,j) + F_{non-saliency}(i,j)$ (24)
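Since the two partial results are zero outside their own complementary regions, formula (24) reduces to a plain addition of the masked images; variable names follow the sketches above.

    # mask_I, mask_V: boolean salient-region masks from step 3
    F_sal, sal_mask = fuse_salient(I, V, mask_I, mask_V)
    F_non = fuse_pca(I, V, ~sal_mask)
    F = F_sal + F_non    # F(i,j) = F_saliency(i,j) + F_non-saliency(i,j)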
Embodiment
The implementation of the present invention is illustrated by a specific example.
Fig. 2 is the infrared image of a scene, and Fig. 3 is the visible light image of the same scene.
Following step 1, the nonlinear scale space representations of the infrared and visible light images are first established, with the edge threshold λ taken as 0.5.
Following step 2, the brightness, color and orientation visual feature maps and the brightness, color and orientation saliency maps of the infrared and visible light images are computed separately, and then the visual attention saliency maps of the infrared and visible light images.
Following step 3, 5 salient target regions are found in the infrared image and 4 in the visible light image, of which 3 are salient in both images; the number of salient target regions in the entire scene is therefore 5 + 4 - 3 = 6.
Following step 4, the 6 salient target regions in the scene are fused with the pixel-level fusion method, and the remaining non-salient regions are fused with the feature-level fusion method.
Following step 5, the fusion results of the salient target regions and of the non-salient regions are combined to obtain the fused infrared and visible light image of the entire scene, as shown in Fig. 4.
The effects of the present invention can be summarized as follows:
1) A multi-level fusion model of image fusion is established: the salient targets and the non-salient regions in the scene are fused with a pixel-level method and a feature-level method respectively, i.e. fine fusion of the salient targets and coarse fusion of the remaining background regions. This improves the fusion efficiency over the large non-salient regions while preserving the fusion quality of the salient targets, and closely matches the human visual cognition process.
2) The multi-scale space representation of the images is established with the nonlinear scale space method, which smooths the noise in the images while guaranteeing that the contours of salient targets are enhanced at every scale, improving the fusion quality of the salient targets in the scene.
3) A visual attention computation model of the infrared and visible light images is established, measuring target saliency in terms of brightness, color and orientation; the salient targets in the infrared and visible light images are computed separately through the normalization operator and the inhibition-of-return mechanism, selected jointly, and then fused.
4) The proposed method is not limited to the fusion of infrared and visible light images; it can be further generalized to other multi-source image fusion fields, such as the fusion of multispectral and visible light images, SAR and visible light images, or SAR and infrared images, and thus has broad application prospects.
The present invention provides an infrared and visible light image fusion method based on salient targets; there are many ways to implement this technical solution, and the above is only a preferred embodiment of the invention. It should be noted that those of ordinary skill in the art can make various improvements and modifications without departing from the principle of the invention, and these should also be regarded as falling within the protection scope of the invention. Any component not specified in this embodiment can be realized with the prior art.

Claims (1)

1. An infrared and visible light image fusion method based on salient targets, characterized by comprising the following steps:
Step 1: establish the nonlinear multi-scale space representations of the infrared image and the visible light image with the nonlinear multi-scale space method:
Let the infrared image of a scene be $I(x_1,y_1)$, where $x_1$ and $y_1$ are the abscissa and ordinate in the infrared image $I$; let the visible light image be $V(x_2,y_2)$, where $x_2$ and $y_2$ are the abscissa and ordinate in the visible light image $V$; the scale factor is denoted $t \in R^+$; the nonlinear multi-scale space representations of the infrared and visible light images are respectively $I(x_1,y_1) \times t_1 \to I(x_1,y_1;t_1)$, with $t_1$ the scale factor of the infrared nonlinear multi-scale space, and $V(x_2,y_2) \times t_2 \to V(x_2,y_2;t_2)$, with $t_2$ the scale factor of the visible light nonlinear multi-scale space:
$I(x_1,y_1;0) = I(x_1,y_1)$
$V(x_2,y_2;0) = V(x_2,y_2)$
where $I(x_1,y_1;0)$ is the infrared image at the original scale and $V(x_2,y_2;0)$ the visible light image at the original scale; the initial value of the scale $t$ is that of the original image;
The linear scale space generated by the Gaussian kernel is a Gaussian convolution smoothing process that creates no new structure;
For the infrared image one obtains:
$I(x_1,y_1;t_1) = I(x_1,y_1;0) * g(x_1,y_1;t_1)$
with the infrared scale-space kernel $g(x_1,y_1;t_1) = \frac{1}{2\pi t_1}\exp\!\big(-\frac{x_1^2+y_1^2}{2t_1}\big)$;
For the visible light image one obtains:
$V(x_2,y_2;t_2) = V(x_2,y_2;0) * g(x_2,y_2;t_2)$
with the visible light scale-space kernel $g(x_2,y_2;t_2) = \frac{1}{2\pi t_2}\exp\!\big(-\frac{x_2^2+y_2^2}{2t_2}\big)$;
The diffusion equation adapts to the local region of the processed image so as to preserve the information that should not be blurred, i.e. the diffusion is nonlinear; the nonlinear multi-scale space representation of the image is established with the nonlinear isotropic diffusion equation, which guarantees that the region contours in the image at every scale are enhanced and facilitates the computation of the correspondences between different scales; the diffusion equation is as follows:
$\partial_t L = \operatorname{div}\big(g(|\nabla L|)\,\nabla L\big)$, with $g(|\nabla L|) = \exp\!\big(-(|\nabla L|/\lambda)^2\big)$
or $g(|\nabla L|) = \dfrac{1}{1 + (|\nabla L|/\lambda)^2}$
where $\nabla L$ denotes the gradient of the image, $|\nabla L|$ the modulus of the gradient, $g(|\nabla L|)$ the diffusion coefficient, and $\lambda > 0$ the edge threshold;
Step 2: on the basis of the nonlinear multi-scale space representation of the images, compute the visual attention saliency maps of the infrared and visible light images with the visual attention computation model;
On the basis of the scale space representations of the infrared and visible light images from step 1, color, brightness and orientation are extracted as the low-level features guiding visual attention, the feature saliency maps are computed through center-surround differences of the receptive field, and the visual attention saliency map is finally obtained through the normalization operator;
(1) Low-level visual feature maps:
For the infrared image:
Brightness feature map: let $Ir(t_1)$, $Ig(t_1)$ and $Ib(t_1)$ be the red, green and blue channels of the original infrared image, where $t_1$ is the scale factor; the brightness feature map is:
$I(t_1) = (Ir(t_1) + Ig(t_1) + Ib(t_1))/3$
Color feature maps: normalizing the $r(t_1)$, $g(t_1)$ and $b(t_1)$ channels by $I(t_1)$ yields the broadly tuned red $IR(t_1)$, green $IG(t_1)$, blue $IB(t_1)$ and yellow $IY(t_1)$ channel values, as follows:
$IR(t_1) = Ir(t_1) - (Ig(t_1) + Ib(t_1))/2$
$IG(t_1) = Ig(t_1) - (Ir(t_1) + Ib(t_1))/2$
$IB(t_1) = Ib(t_1) - (Ir(t_1) + Ig(t_1))/2$
$IY(t_1) = Ir(t_1) + Ig(t_1) - 2*(|Ir(t_1) - Ig(t_1)| + Ib(t_1))$
where a channel value is set to 0 if it is negative;
Orientation feature maps: the orientation feature maps of the infrared image are computed with Gabor filters, as follows:
$g(x_1,y_1) = \exp\!\Big(-\pi\big[\tfrac{(x_1-x_0)^2}{\sigma^2} + \tfrac{(y_1-y_0)^2}{\beta^2}\big]\Big)\exp\!\big(-2\pi i[\xi_0(x_1-x_0) + \upsilon_0(y_1-y_0)]\big)$
where $(x_1,y_1)$ is the pixel coordinate in the infrared image, $(x_0,y_0)$ the receptive field center coordinate in the spatial domain with $x_0$ the abscissa and $y_0$ the ordinate, $(\xi_0,\upsilon_0)$ the optimal spatial frequency of the filter in the frequency domain with $\xi_0$ for the real part and $\upsilon_0$ for the imaginary part, and $i_1$ indexes the orientations; $\sigma$ and $\beta$ are the standard deviations of the Gaussian envelope along the x and y axes; the outputs of Gabor filters in four orientations are taken as the orientation feature maps, the four orientations being $\theta_{i_1} = i_1\pi/4$, $i_1 = 0, 1, 2, 3$;
For the visible light image:
Brightness feature map: let $Vr(t_2)$, $Vg(t_2)$ and $Vb(t_2)$ be the red, green and blue channels of the original visible light image, where $t_2$ is the scale factor; the brightness feature map is:
$V(t_2) = (Vr(t_2) + Vg(t_2) + Vb(t_2))/3$
Color feature maps: normalizing the $Vr(t_2)$, $Vg(t_2)$ and $Vb(t_2)$ channels by $V(t_2)$ yields the broadly tuned red $VR(t_2)$, green $VG(t_2)$, blue $VB(t_2)$ and yellow $VY(t_2)$ channel values, as follows:
$VR(t_2) = Vr(t_2) - (Vg(t_2) + Vb(t_2))/2$
$VG(t_2) = Vg(t_2) - (Vr(t_2) + Vb(t_2))/2$
$VB(t_2) = Vb(t_2) - (Vr(t_2) + Vg(t_2))/2$
$VY(t_2) = Vr(t_2) + Vg(t_2) - 2*(|Vr(t_2) - Vg(t_2)| + Vb(t_2))$
where a channel value is set to 0 if it is negative;
Orientation feature maps: the orientation feature maps of the visible light image are computed with Gabor filters, as follows:
$g(x_2,y_2) = \exp\!\Big(-\pi\big[\tfrac{(x_2-x_0)^2}{\sigma^2} + \tfrac{(y_2-y_0)^2}{\beta^2}\big]\Big)\exp\!\big(-2\pi i[\xi_0(x_2-x_0) + \upsilon_0(y_2-y_0)]\big)$
where $(x_2,y_2)$ is the pixel coordinate in the visible light image, $(x_0,y_0)$ the receptive field center coordinate in the spatial domain with $x_0$ the abscissa and $y_0$ the ordinate, $(\xi_0,\upsilon_0)$ the optimal spatial frequency of the filter in the frequency domain with $\xi_0$ for the real part and $\upsilon_0$ for the imaginary part, and $i_1$ indexes the orientations; $\sigma$ and $\beta$ are the standard deviations of the Gaussian envelope along the x and y axes; the outputs of Gabor filters in four orientations are taken as the orientation feature maps, the four orientations being $\theta_{i_1} = i_1\pi/4$, $i_1 = 0, 1, 2, 3$;
(2) Feature saliency maps
For the infrared image:
The brightness map $I$, the four color components $IR$, $IG$, $IB$, $IY$ and the four orientation feature maps are represented in scale space; the center image corresponds to a fine (high-resolution) scale and the surround image to a coarse (low-resolution) scale, implementing the center-surround difference strategy of the receptive field and its integration field to obtain the feature saliency maps;
Brightness saliency maps:
The brightness saliency maps are generated from luminance contrast and denoted $I(c,s)$:
$I(c,s) = |I(c) \ominus I(s)|$
where $c$ is the fine (high-resolution) scale factor of the infrared scale space representation, $s$ the coarse (low-resolution) scale factor, and $\ominus$ the center-surround difference operator, implemented by interpolating the coarse-scale image to the fine scale and subtracting it point by point from the fine-scale image;
Color saliency maps:
Red-green/green-red double-opponency maps of the infrared image:
$IRG(c,s) = |(IR(c) - IG(c)) \ominus (IG(s) - IR(s))|$
where $IR(c)$ is the fine-scale red channel image of the infrared image, $IR(s)$ the coarse-scale red channel image, $IG(c)$ the fine-scale green channel image, and $IG(s)$ the coarse-scale green channel image;
Blue-yellow/yellow-blue double-opponency maps of the infrared image:
$IBY(c,s) = |(IB(c) - IY(c)) \ominus (IY(s) - IB(s))|$
where $IB(c)$ is the fine-scale blue channel image of the infrared image, $IB(s)$ the coarse-scale blue channel image, $IY(c)$ the fine-scale yellow channel image, and $IY(s)$ the coarse-scale yellow channel image;
Orientation saliency maps:
The local orientation information of the image is obtained with the Gabor pyramid $IO(\sigma_1,\theta)$, where $\sigma_1$ is the scale factor and $\theta \in \{0°, 45°, 90°, 135°\}$; through the computation of local orientation contrast, the orientation saliency maps $IO(c,s,\theta)$ are encoded as a group:
$IO(c,s,\theta) = |IO(c,\theta) \ominus IO(s,\theta)|$
where $IO(s,\theta)$ denotes the orientation feature map of the coarse (low-resolution) image and $IO(c,\theta)$ the orientation feature map of the fine (high-resolution) image;
For the visible light image:
The brightness map $V$, the four color components $VR$, $VG$, $VB$, $VY$ and the four orientation feature maps are represented in scale space; the center image corresponds to a fine (high-resolution) scale and the surround image to a coarse (low-resolution) scale, implementing the center-surround difference strategy of the receptive field and its integration field to obtain the feature saliency maps;
Brightness saliency maps:
The brightness saliency maps are generated from luminance contrast and denoted $V(c,s)$:
$V(c,s) = |V(c) \ominus V(s)|$
where $c$ is the fine (high-resolution) scale factor of the visible light scale space representation, $s$ the coarse (low-resolution) scale factor, and $\ominus$ the center-surround difference operator, implemented by interpolating the coarse-scale image to the fine scale and subtracting it point by point from the fine-scale image;
Color saliency maps:
Red-green/green-red double-opponency maps of the visible light image:
$VRG(c,s) = |(VR(c) - VG(c)) \ominus (VG(s) - VR(s))|$
where $VR(c)$ is the fine-scale red channel image of the visible light image, $VR(s)$ the coarse-scale red channel image, $VG(c)$ the fine-scale green channel image, and $VG(s)$ the coarse-scale green channel image;
Blue-yellow/yellow-blue double-opponency maps of the visible light image:
$VBY(c,s) = |(VB(c) - VY(c)) \ominus (VY(s) - VB(s))|$
where $VB(c)$ is the fine-scale blue channel image of the visible light image, $VB(s)$ the coarse-scale blue channel image, $VY(c)$ the fine-scale yellow channel image, and $VY(s)$ the coarse-scale yellow channel image;
Orientation saliency maps:
The local orientation information of the image is obtained with the Gabor pyramid $VO(\sigma_1,\theta)$, where $\sigma_1$ is the scale factor and $\theta \in \{0°, 45°, 90°, 135°\}$; through the computation of local orientation contrast, the orientation saliency maps $VO(c,s,\theta)$ are encoded as a group:
$VO(c,s,\theta) = |VO(c,\theta) \ominus VO(s,\theta)|$
(3) Visual attention saliency map:
The normalization operator $N(\cdot)$ is used to enhance feature saliency maps with few salient peaks and to weaken feature saliency maps with many salient peaks; for each feature saliency map, the operator performs the following operations:
1) normalize the feature saliency map to the range 0-1, to eliminate amplitude differences between features;
2) compute the mean $\bar m$ of all local maxima other than the global maximum $M$;
3) multiply the feature saliency map by $(M - \bar m)^2$;
For the infrared image:
The brightness, color and orientation saliency maps are normalized with the normalization operator $N(\cdot)$ and combined across scales into $\bar I$, $\overline{IRG}$, $\overline{IBY}$ and $\overline{IO}$:
$\bar I = \bigoplus_{c}\bigoplus_{s} N(I(c,s)), \quad \overline{IRG} = \bigoplus_{c}\bigoplus_{s} N(IRG(c,s)), \quad \overline{IBY} = \bigoplus_{c}\bigoplus_{s} N(IBY(c,s)), \quad \overline{IO} = \bigoplus_{\theta}\bigoplus_{c}\bigoplus_{s} N(IO(c,s,\theta))$
where the symbol $\oplus$ denotes point-by-point summation;
The infrared visual attention saliency map is computed by the following formula:
$IS = N(\bar I) + N(\overline{IRG}) + N(\overline{IBY}) + N(\overline{IO})$
For the visible light image:
The brightness, color and orientation saliency maps are normalized with the normalization operator $N(\cdot)$ and combined across scales into $\bar V$, $\overline{VRG}$, $\overline{VBY}$ and $\overline{VO}$:
$\bar V = \bigoplus_{c}\bigoplus_{s} N(V(c,s)), \quad \overline{VRG} = \bigoplus_{c}\bigoplus_{s} N(VRG(c,s)), \quad \overline{VBY} = \bigoplus_{c}\bigoplus_{s} N(VBY(c,s)), \quad \overline{VO} = \bigoplus_{\theta}\bigoplus_{c}\bigoplus_{s} N(VO(c,s,\theta))$
The visible light visual attention saliency map is computed by the following formula:
$VS = N(\bar V) + N(\overline{VRG}) + N(\overline{VBY}) + N(\overline{VO})$
Step 3: on the basis of the infrared and visible light visual attention saliency maps, select the salient target regions in the infrared and visible light images respectively with the inhibition-of-return mechanism, and determine all salient target regions in the entire scene;
After the visual attention saliency map has been computed in step 2, the point with the maximum gray value in the saliency map is chosen as the first fixation point and a rectangle of 1/16 of the original image size centered on this point selects the first salient target region; after the first salient target region has been selected, the gray value of the first fixation point is set to zero, the point with the maximum gray value in the saliency map is chosen again as the second fixation point, and a rectangle of 1/16 of the original image size selects the second salient target region; iterating this loop selects the salient target regions in the original image one by one, the numbers of salient target regions selected in the infrared and visible light images being $M_I$ and $M_V$ respectively;
The number of salient target regions in the entire scene is $M_{sum} = M_I + M_V - (M_I \cap M_V)$;
Step 4: register the infrared image and the visible light image, fuse the salient target regions with the pixel-level fusion algorithm, and fuse the non-salient regions with the feature-level fusion algorithm;
The infrared image $I$ and the visible light image $V$ are registered with a key-point matching method;
For the salient target regions, step 3 gives $M_{sum}$ salient target regions in the entire scene, of which $M_{IV} = M_I \cap M_V$ regions are salient in both the infrared and the visible light image, $M_{I-IV} = M_I - M_{IV}$ regions are salient only in the infrared image, and $M_{V-IV} = M_V - M_{IV}$ regions are salient only in the visible light image; the salient target regions are fused with the pixel-level linear weighted fusion algorithm, giving the fused image of the salient target regions $F_{saliency}(i,j)$ as follows:
$F_{saliency}(i,j) = w_I(i,j) \cdot I(i,j) + w_V(i,j) \cdot V(i,j) + C$
where $I(i,j)$ and $V(i,j)$ denote the pixel at position $(i,j)$ in the source images and $F(i,j)$ the pixel at position $(i,j)$ in the fused image; $w_I(i,j)$ and $w_V(i,j)$ are weights with $w_I(i,j) + w_V(i,j) = 1$; for the $M_{IV}$ regions salient in both the infrared and the visible light image, $w_I(i,j) = w_V(i,j) = 0.5$; for the $M_{I-IV}$ regions salient only in the infrared image, $w_I(i,j) = 0.8$ and $w_V(i,j) = 0.2$; for the $M_{V-IV}$ regions salient only in the visible light image, $w_I(i,j) = 0.2$ and $w_V(i,j) = 0.8$;
For the non-salient regions, the non-salient regions of the infrared and visible light images are fused with a feature-level image fusion algorithm based on principal component analysis;
The infrared image and the visible light image are each transformed by the PCA transform to obtain their principal components; the first principal component images of the infrared and visible light images are histogram-matched, the first principal component is then replaced with the panchromatic image, and the fused image $F_{non-saliency}(i,j)$ is obtained by the inverse PCA transform;
Step 5: combine the fusion results of the salient target regions and of the non-salient regions to generate the fused infrared and visible light image of the whole scene;
On the basis of the fused image of the salient target regions $F_{saliency}(i,j)$ and the fused image of the non-salient regions $F_{non-saliency}(i,j)$ computed in step 4, the fusion result of the infrared and visible light images of the entire scene is computed as follows:
$F(i,j) = F_{saliency}(i,j) + F_{non-saliency}(i,j)$.
CN201510111415.0A 2015-03-13 2015-03-13 Infrared and visible light image fusion method based on salient targets Active CN104700381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510111415.0A CN104700381B (en) 2015-03-13 2015-03-13 Infrared and visible light image fusion method based on salient targets


Publications (2)

Publication Number Publication Date
CN104700381A CN104700381A (en) 2015-06-10
CN104700381B true CN104700381B (en) 2018-10-12

Family

ID=53347469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510111415.0A Active CN104700381B (en) 2015-03-13 2015-03-13 Infrared and visible light image fusion method based on salient targets

Country Status (1)

Country Link
CN (1) CN104700381B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11010876B2 (en) * 2015-06-26 2021-05-18 Nec Corporation Image processing system, image processing method, and computer-readable recording medium
CN106251355B (en) * 2016-08-03 2018-12-14 江苏大学 A kind of detection method merging visible images and corresponding night vision infrared image
CN106530266B (en) * 2016-11-11 2019-11-01 华东理工大学 A kind of infrared and visible light image fusion method based on region rarefaction representation
CN106898008A (en) * 2017-03-01 2017-06-27 南京航空航天大学 Rock detection method and device
CN107292872A (en) * 2017-06-16 2017-10-24 艾松涛 Image processing method/system, computer-readable recording medium and electronic equipment
CN107918748A (en) * 2017-10-27 2018-04-17 南京理工大学 A kind of multispectral two-dimension code recognition device and method
CN108198157A (en) * 2017-12-22 2018-06-22 湖南源信光电科技股份有限公司 Heterologous image interfusion method based on well-marked target extracted region and NSST
CN108288344A (en) * 2017-12-26 2018-07-17 李文清 A kind of efficient forest fire early-warning system
CN108090888B (en) * 2018-01-04 2020-11-13 北京环境特性研究所 Fusion detection method of infrared image and visible light image based on visual attention model
CN108769550B (en) * 2018-05-16 2020-07-07 中国人民解放军军事科学院军事医学研究院 Image significance analysis system and method based on DSP
CN109255793B (en) * 2018-09-26 2019-07-05 国网安徽省电力有限公司铜陵市义安区供电公司 A kind of monitoring early-warning system of view-based access control model feature
CN109447909A (en) * 2018-09-30 2019-03-08 安徽四创电子股份有限公司 The infrared and visible light image fusion method and system of view-based access control model conspicuousness
CN109493309A (en) * 2018-11-20 2019-03-19 北京航空航天大学 A kind of infrared and visible images variation fusion method keeping conspicuousness information
CN110210407A (en) * 2019-06-04 2019-09-06 武汉科技大学 A kind of Misty Image well-marked target detection method
CN110489792B (en) * 2019-07-12 2023-03-31 中国人民解放军92942部队 Method and device for designing visible light camouflage distance and server
CN111161356B (en) 2019-12-17 2022-02-15 大连理工大学 Infrared and visible light fusion method based on double-layer optimization
CN111080724B (en) * 2019-12-17 2023-04-28 大连理工大学 Fusion method of infrared light and visible light
CN110874827B (en) * 2020-01-19 2020-06-30 长沙超创电子科技有限公司 Turbulent image restoration method and device, terminal equipment and computer readable medium
CN111914422B (en) * 2020-08-05 2021-02-02 北京开云互动科技有限公司 Real-time visual simulation method for infrared features in virtual reality
CN113159229B (en) * 2021-05-19 2023-11-07 深圳大学 Image fusion method, electronic equipment and related products
US11967102B2 (en) 2021-07-16 2024-04-23 Shanghai United Imaging Intelligence Co., Ltd. Key points detection using multiple image modalities
CN116977154B (en) * 2023-09-22 2024-03-19 南方电网数字电网研究院有限公司 Visible light image and infrared image fusion storage method, device, equipment and medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101873440A (en) * 2010-05-14 2010-10-27 西安电子科技大学 Infrared and visible light video image fusion method based on Surfacelet conversion
CN103366353A (en) * 2013-05-08 2013-10-23 北京大学深圳研究生院 Infrared image and visible-light image fusion method based on saliency region segmentation
CN104408700A (en) * 2014-11-21 2015-03-11 南京理工大学 Morphology and PCA (principal component analysis) based contourlet fusion method for infrared and visible light images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Visual attention guided image fusion with sparse representation; Bin Yang et al.; Optik; 2014-12-31; vol. 125, no. 17; full text *
Research on computational models of cooperative visual selective attention (协同视觉选择注意计算模型研究); 邵静; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2008-11-15; vol. 2008, no. 11; pp. 51-52, 55-57, 62 *
Multi-source image fusion based on bionic visual mechanisms (基于仿生视觉机理的多源图像融合); 万莉; China Master's Theses Full-text Database, Information Science and Technology; 2014-08-15; vol. 2014, no. 8; pp. 36, 41, 47, 48, 52 *

Also Published As

Publication number Publication date
CN104700381A (en) 2015-06-10

Similar Documents

Publication Publication Date Title
CN104700381B (en) Infrared and visible light image fusion method based on salient targets
Hwang et al. Multispectral pedestrian detection: Benchmark dataset and baseline
Cai et al. Dehazenet: An end-to-end system for single image haze removal
CN109344701A (en) A kind of dynamic gesture identification method based on Kinect
Luo et al. Multi-scale traffic vehicle detection based on faster R–CNN with NAS optimization and feature enrichment
Lalonde et al. Estimating the natural illumination conditions from a single outdoor image
Lu et al. Multi-scale strip pooling feature aggregation network for cloud and cloud shadow segmentation
Luo et al. Thermal infrared image colorization for nighttime driving scenes with top-down guided attention
Renza et al. A new approach to change detection in multispectral images by means of ERGAS index
CN102314602B (en) Shadow removal in image captured by vehicle-based camera using optimized oriented linear axis
CN102314600A (en) Be used for the shade of the removal of clear path detection by the image of catching based on the camera of vehicle
CN104504744B (en) A kind of true method and device of plan for synthesizing license plate image
CN105678318B (en) The matching process and device of traffic sign
CN109816694A (en) Method for tracking target, device and electronic equipment
CN116091372B (en) Infrared and visible light image fusion method based on layer separation and heavy parameters
CN109753945A (en) Target subject recognition methods, device, storage medium and electronic equipment
Sun et al. IRDCLNet: Instance segmentation of ship images based on interference reduction and dynamic contour learning in foggy scenes
Jin et al. Vehicle license plate recognition for fog‐haze environments
Jain et al. Multi-sensor image fusion using intensity hue saturation technique
Xu et al. COCO-Net: A dual-supervised network with unified ROI-loss for low-resolution ship detection from optical satellite image sequences
Maxwell et al. Real-time physics-based removal of shadows and shading from road surfaces
Ying et al. Region-aware RGB and near-infrared image fusion
Shibata et al. Unified image fusion framework with learning-based application-adaptive importance measure
Zhao et al. Infrared and visible imagery fusion based on region saliency detection for 24-hour-surveillance systems
Li et al. Cloud detection from remote sensing images by cascaded U-shape attention networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant