CN103839243A - Multi-channel satellite cloud picture fusion method based on Shearlet conversion - Google Patents


Info

Publication number
CN103839243A
CN103839243A (application CN201410056917.3A; granted as CN103839243B)
Authority
CN
China
Prior art keywords
fusion
sigma
image
shearlet
typhoon
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410056917.3A
Other languages
Chinese (zh)
Other versions
CN103839243B (en)
Inventor
张长江
陈源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Normal University CJNU
Original Assignee
Zhejiang Normal University CJNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Normal University CJNU filed Critical Zhejiang Normal University CJNU
Priority to CN201410056917.3A
Publication of CN103839243A
Application granted
Publication of CN103839243B
Legal status: Active

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a multi-channel satellite cloud image fusion method based on the Shearlet transform and belongs to the field of weather forecasting. First, two registered satellite cloud images are subjected to the Shearlet transform to obtain low-frequency and high-frequency coefficients. The low-frequency Shearlet-domain part is then decomposed again with a Laplacian pyramid: the top pyramid layer is fused by averaging, the remaining layers by selecting the coefficient with the larger gray-level absolute value, after which the pyramid is reconstructed. In the high-frequency Shearlet-domain part, the information entropy, average gradient, and standard deviation of each high-frequency sub-image are computed and normalized, the product of the three values is formed for each candidate, and the sub-image with the larger product is taken as the fused sub-image. The fused sub-images are then detail-enhanced with a nonlinear operator. Finally, the fused image is obtained by the inverse Shearlet transform. The method can be generalized to the fusion of three or more satellite cloud images, realizing multi-channel satellite cloud image fusion and yielding high-precision typhoon center positioning results.

Description

Multi-channel satellite cloud image fusion method based on the Shearlet transform
Technical field
The invention belongs to the field of weather forecasting. Specifically, it relates to a multi-channel satellite cloud image fusion method based on the Shearlet transform, aimed at improving the precision of typhoon center positioning.
Background technology
Meteorological satellite cloud images play an extremely important role in weather monitoring and forecasting and in atmospheric environment detection, and in particular a key role in monitoring extreme meteorological disasters. Subsequent analysis and processing of satellite cloud images can therefore extract better information about the atmosphere, land, ocean, and cloud layers, provide reliable data support for monitoring and prediction, and improve the automation and accuracy of forecasting, which is of significant practical importance.
China's Fengyun-2C (FY-2C) geostationary meteorological satellite receives visible, infrared, and water vapor radiation from the Earth through a scanning radiometer with one visible channel, three infrared channels, and one water vapor channel. The five channels produce a panoramic cloud image covering one third of the Earth every half hour, and such frequent multi-temporal observation is particularly suited to detecting the genesis and development of short-lived but highly destructive disastrous weather such as heavy rain, typhoons, and sandstorms. However, the imaging principles of the channels differ, so the data they acquire also differ, and the information obtained from a single-channel satellite cloud image has clear limitations and does not fully reflect the characteristics of the observed object. Image fusion methods combine satellite cloud image information from different channels to provide more complete cloud information, yielding more reliable data and improving the precision of forecasting and monitoring. Scholars at home and abroad have therefore continued to explore fusion techniques for multi-channel satellite cloud images.
Wavelet analysis theory has developed considerably and has been applied very widely in the image fusion field. A. Abd-Elrahman et al. proposed an enhancement method for cloud-related shadow regions using the wavelet transform during satellite cloud image fusion; it preserves detail information and effectively improves cloud image quality. Lee, Y. et al. proposed a new wavelet-domain satellite image fusion algorithm that considers the intensity and spectral range and the correlated spectral response of each source image: the spectral response of each channel is represented as a sum of Gaussian functions, and Gaussian modeling is then used to adjust the spatial and spectral resolution of the images. The PSNR (Peak Signal-to-Noise Ratio), root-mean-square error, and correlation coefficient of its fusion results are better than those of classic methods. Yang W. et al. introduced compressed sensing (CS) into satellite image fusion, proposing CS-FWT-PCA, a fusion algorithm based on antisymmetric B-spline wavelets; it uses a Hadamard matrix as the measurement matrix, the sparsity adaptive matching pursuit (SAMP) algorithm for reconstruction, and an improved fusion rule based on local variance, achieving a fusion effect superior to traditional methods.
However, the wavelet transform has limitations: it offers only a small number of directions (horizontal, vertical, and diagonal), so it captures information in only a few orientations and easily loses information. To address these shortcomings, multiresolution analysis theory has developed further in recent years, and multiscale geometric analysis tools have emerged. These tools retain the multiresolution property of wavelets while adding multiscale structure, good time-frequency localization, strong directionality, and anisotropy. Common multiscale geometric analysis tools include Bandelet, Ridgelet, Curvelet, Contourlet, NSCT (NonSubsampled Contourlet Transform), and Shearlet. Among them, the Curvelet transform is better suited than wavelets to analyzing curved or straight edge features in two-dimensional images and has higher approximation accuracy and sparser representation ability; introducing the Curvelet transform into image fusion extracts the features of the original images better and provides more information for the fused image. Shutao Li et al. proposed a multi-focus image fusion method combining the Curvelet and wavelet transforms whose results surpass those of any single multiscale fusion method. The Curvelet transform, however, is highly redundant, is not based on critically sampled filter banks, and exhibits the Gibbs phenomenon. The Contourlet transform, in turn, is realized with critically sampled fan filters and resampling, usually consisting of a Laplacian pyramid transform and a directional filter bank. With sub-bands at different scales and frequencies it captures piecewise smooth curves in an image more accurately, concentrating the image's edge energy better. Miao Qiguang et al. proposed an image fusion method based on the Contourlet transform that compares regional energy in the high-frequency part and applies a consistency check, outperforming wavelet-based and Laplacian-pyramid-based fusion methods in edge preservation and texture information. Juan Lu et al. proposed an image fusion algorithm based on NSCT and energy entropy whose results carry richer directional information and strong noise robustness. When the Contourlet transform is applied to image fusion, however, pseudo-Gibbs artifacts are easily introduced, and overcoming them costs considerable time and data volume.
The Shearlet transform, by contrast, introduces shear filters, which impose no constraint on the number of directions and can be represented with a single window function. The Shearlet transform can detect all singular points and adaptively track the direction of singular curves, and its inverse requires no inverse fan filter bank. It thus overcomes both the information loss of the wavelet transform in image fusion applications and the restriction on the number of filtering directions. With the development of image processing techniques, the Shearlet transform has attracted increasing attention from researchers and become a research hotspot. Qi-guang Miao et al. exploited the directionality, localization, anisotropy, and multiscale advantages of the Shearlet transform for image fusion; their results contain more detail and less distortion than other methods. Cheng S. et al. proposed an image fusion algorithm based on the Shearlet transform and PCNN (Pulse Coupled Neural Network): gradient features of each directional shear matrix are extracted, multiscale decomposition is performed with wavelets, and high-frequency coefficients are fused with a PCNN, achieving good fusion results. Guorong G. et al. proposed a new multi-focus image fusion method based on the NSST (Non-Subsampled Shearlet Transform), with a fusion rule using concentrated regional information for the mean-square-error features of multi-focus images; experiments show that its visual quality and objective evaluation clearly surpass the fusion results of the discrete wavelet transform.
Summary of the invention
The invention provides a multi-channel satellite cloud image fusion method based on the Shearlet transform. Its object is a fusion method that considers several image evaluation indices while retaining the high information content of the cloud images, achieves better typhoon center positioning, and extends to the fusion of multi-channel satellite cloud images.
The technical scheme adopted by the present invention comprises the following steps:
Step 1: Apply the Shearlet transform to the registered source images A and B, each of size M × N, with W decomposition levels and T decomposition directions, T = 2^r, r ∈ Z*, obtaining the high-frequency coefficients SH_A and SH_B and the low-frequency coefficients SL_A and SL_B;
Step 2: Perform Laplacian pyramid decomposition on SL_A and SL_B with Q decomposition levels, obtaining decomposition images LA and LB whose layer-q sub-images are LA_q and LB_q, 1 ≤ q ≤ Q;
Step 3: Fuse the top-layer pyramid sub-images LA_Q and LB_Q by averaging, giving the fusion result LF_Q:

$$LF_Q(i,j) = \frac{LA_Q(i,j) + LB_Q(i,j)}{2}$$

where 1 ≤ i ≤ CL_Q, 1 ≤ j ≤ RL_Q, and CL_Q and RL_Q are the numbers of rows and columns of the layer-Q sub-image;
Step 4: Fuse the remaining pyramid layers LA_q and LB_q, 1 ≤ q ≤ Q−1, with the max-absolute-gray-value rule, giving the fusion result LF_q:

$$LF_q(i,j) = \begin{cases} LA_q(i,j), & |LA_q(i,j)| \ge |LB_q(i,j)| \\ LB_q(i,j), & |LA_q(i,j)| < |LB_q(i,j)| \end{cases}$$
Step 5: Reconstruct the fused Laplacian pyramid LF to obtain the low-frequency fusion result TL_F;
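To make steps 2–5 concrete, the two low-frequency fusion rules can be sketched as follows, operating on Laplacian pyramids that have already been built. The pyramid construction and reconstruction routines themselves, and the list-based layer ordering, are assumptions of this sketch, not part of the patent:

```python
import numpy as np

def fuse_laplacian_pyramids(LA, LB):
    """Fuse two Laplacian pyramids given as lists of arrays, finest
    layer first and the coarsest (top) layer last: the top layer is
    averaged (step 3), every other layer keeps the coefficient with
    the larger absolute value (step 4)."""
    Q = len(LA)
    fused = []
    for q in range(Q):
        if q == Q - 1:
            # step 3: arithmetic mean of the top-layer sub-images
            fused.append((LA[q] + LB[q]) / 2.0)
        else:
            # step 4: max-absolute-value selection, pixel by pixel
            mask = np.abs(LA[q]) >= np.abs(LB[q])
            fused.append(np.where(mask, LA[q], LB[q]))
    return fused
```

Step 5 would then feed the fused list into the reconstruction routine of whatever Laplacian pyramid implementation produced the decomposition.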
Step 6: In the high-frequency part of the Shearlet domain, compute the information entropy, average gradient, and standard deviation of each directional sub-image at each level. Denote the high-frequency sub-images in direction t at level w as (SH_A)_w^t and (SH_B)_w^t, 1 ≤ w ≤ W, 1 ≤ t ≤ T; each has the same size as the source image, M × N. The information entropy E is:

$$E = -\sum_{i=0}^{L-1} P_i \log_2 P_i$$

where P_i is the probability that a pixel in the sub-image has gray value i and L is the number of gray levels in the image. The average gradient $\bar{G}$ of a high-frequency sub-image is:

$$\bar{G} = \frac{1}{(M-1)(N-1)} \sum_{ii=1}^{M-1} \sum_{jj=1}^{N-1} \sqrt{\frac{\left( \frac{\partial SH_w^t(x_{ii}, y_{jj})}{\partial x_{ii}} \right)^2 + \left( \frac{\partial SH_w^t(x_{ii}, y_{jj})}{\partial y_{jj}} \right)^2}{2}}$$

where SH_w^t(x_{ii}, y_{jj}) is the pixel in row x_{ii}, column y_{jj} of the high-frequency sub-image (SH_A)_w^t or (SH_B)_w^t, 1 ≤ ii ≤ M, 1 ≤ jj ≤ N. The standard deviation σ of a high-frequency sub-image is:

$$\sigma = \sqrt{\frac{\sum_{ii=1}^{M} \sum_{jj=1}^{N} \left( SH_w^t(x_{ii}, y_{jj}) - \bar{h} \right)^2}{M \times N}}$$

where $\bar{h}$ is the gray-level mean of the sub-image;
Step 7: Normalize the information entropy E, average gradient $\bar{G}$, and standard deviation σ of each high-frequency sub-image, obtaining the normalized values E_g, $\bar{G}_g$, and σ_g, and select the sub-image with the larger product of the three as the fused high-frequency sub-image:

$$(SH_F)_w^t = \begin{cases} (SH_A)_w^t, & (E_g)_A (\bar{G}_g)_A (\sigma_g)_A \ge (E_g)_B (\bar{G}_g)_B (\sigma_g)_B \\ (SH_B)_w^t, & (E_g)_A (\bar{G}_g)_A (\sigma_g)_A < (E_g)_B (\bar{G}_g)_B (\sigma_g)_B \end{cases}$$
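The step-7 selection could be sketched as follows. Since the patent does not spell out how each metric is normalized, a simple sum-normalization over the two candidates is an assumption of this sketch, as is the generic `metrics` parameter:

```python
import numpy as np

def select_high_freq(sub_a, sub_b, metrics):
    """Pick sub_a or sub_b per the step-7 rule: normalize each metric
    over the candidate pair, then keep the sub-image whose product of
    normalized metric values is larger. `metrics` is a list of
    functions mapping an image to a scalar (e.g. entropy, average
    gradient, standard deviation)."""
    vals_a = np.array([m(sub_a) for m in metrics], dtype=float)
    vals_b = np.array([m(sub_b) for m in metrics], dtype=float)
    totals = vals_a + vals_b
    totals[totals == 0] = 1.0          # avoid division by zero
    prod_a = np.prod(vals_a / totals)
    prod_b = np.prod(vals_b / totals)
    return sub_a if prod_a >= prod_b else sub_b
```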
Step 8: Apply nonlinear enhancement processing based on the stationary wavelet transform to each fused high-frequency sub-image (SH_F)_w^t. Let maxh be the maximum absolute gray value among all its pixels; the enhanced high-frequency sub-image (E_SH_F)_w^t is:

$$(E\_SH_F)_w^t(ii,jj) = a \cdot \mathrm{maxh} \left\{ \mathrm{sigm}\!\left[ c \left( Sh_w^t(ii,jj) - b \right) \right] - \mathrm{sigm}\!\left[ -c \left( Sh_w^t(ii,jj) + b \right) \right] \right\}$$

where b = 0.35, c = 20, a = 1/(d_1 − d_2), d_1 = sigm(c × (1 + b)), d_2 = sigm(c × (1 − b)), and

$$Sh_w^t(ii,jj) = (SH_F)_w^t(ii,jj) / \mathrm{maxh};$$
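A sketch of the step-8 operator. Note that with the constants as printed (d_1 = sigm(c(1+b)), d_2 = sigm(c(1−b))) the gain a becomes extremely large, so this sketch instead uses the normalization of the classic Laine-style sigmoid enhancement, a = 1/(sigm(c(1−b)) − sigm(−c(1+b))), which keeps the output within [−maxh, maxh]; that substitution is an assumption of this sketch, not the patent's literal text:

```python
import numpy as np

def sigm(x):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-x))

def enhance(sub, b=0.35, c=20.0):
    """Nonlinear detail enhancement of a fused high-frequency
    sub-image, using b and c from the patent but the Laine-style
    gain a (assumption; see lead-in)."""
    maxh = np.max(np.abs(sub))
    if maxh == 0:
        return sub.copy()
    a = 1.0 / (sigm(c * (1 - b)) - sigm(-c * (1 + b)))
    sh = sub / maxh                    # normalize to [-1, 1]
    return a * maxh * (sigm(c * (sh - b)) - sigm(-c * (sh + b)))
```

By construction the operator is odd, maps 0 to 0, and maps ±maxh to ±maxh, so small coefficients are boosted while the overall dynamic range is preserved.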
Step 9: Apply the inverse Shearlet transform to the fused Shearlet coefficients to obtain the final fused image F.
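To show how steps 1–9 connect, here is a heavily simplified data-flow sketch. The Shearlet analysis/synthesis pair is passed in as functions (a real implementation would plug in a Shearlet library, which is assumed rather than provided here), and the pyramid and metric rules are replaced by stand-ins:

```python
import numpy as np

def fuse_cloud_images(img_a, img_b, decompose, reconstruct):
    """Data-flow sketch of the fusion pipeline. `decompose` maps an
    image to (low, highs) and `reconstruct` inverts it. The plain
    average stands in for the Laplacian pyramid rule of steps 2-5,
    and standard deviation stands in for the step-6/7 metric product."""
    low_a, highs_a = decompose(img_a)
    low_b, highs_b = decompose(img_b)
    low_f = (low_a + low_b) / 2.0
    highs_f = [ha if np.std(ha) >= np.std(hb) else hb
               for ha, hb in zip(highs_a, highs_b)]
    return reconstruct(low_f, highs_f)
```

The same loop generalizes to three or more channels by folding each additional image into the running fusion result, as the patent describes.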
In step 1 of the present invention, the registered source images A and B are satellite cloud images returned by the Chinese meteorological satellite FY-2C. The satellite has five channels: infrared 1, infrared 2, water vapor, infrared 4, and visible; images from any two channels may be selected and registered.
The present invention further comprises: adding cloud images of other channels to the above fusion result, realizing the fusion of three or more cloud images and achieving multi-channel satellite cloud image fusion.
The present invention applies the Shearlet transform to multi-channel satellite cloud image fusion. Starting from two registered cloud images and combining Laplacian pyramid decomposition, it proposes fusion rules that consider several image evaluation indices while retaining the high information content of the cloud images, achieves better typhoon center positioning, and extends to the fusion of multi-channel satellite cloud images.
The described technical scheme fully fuses the useful information of each channel, realizes multi-channel typhoon cloud image fusion well, retains the details of each channel's cloud image to the greatest extent, and preserves the sharpness of the fused image. Using the fused cloud images to locate the centers of both eyed and non-eyed typhoons yields typhoon center positioning results of higher accuracy, demonstrating that the fusion has good practical value.
Brief description of the drawings
Fig. 1 is the flowchart of the multi-channel satellite cloud image fusion method based on the Shearlet transform of the present invention;
Fig. 2(a) is the infrared 1 channel cloud image among the five-channel satellite cloud images returned by the Chinese meteorological satellite FY-2C;
Fig. 2(b) is the infrared 2 channel cloud image among the five-channel satellite cloud images returned by FY-2C;
Fig. 2(c) is the water vapor channel cloud image among the five-channel satellite cloud images returned by FY-2C;
Fig. 2(d) is the infrared 4 channel cloud image among the five-channel satellite cloud images returned by FY-2C;
Fig. 2(e) is the visible channel cloud image among the five-channel satellite cloud images returned by FY-2C;
Fig. 3(a) is the infrared 2 channel cloud image of typhoon Talim (an eyed typhoon) at 12:00 on August 31, 2005, used in the multi-channel fusion experiment on the infrared 2 and water vapor channel cloud images;
Fig. 3(b) is the corresponding water vapor channel cloud image;
Fig. 3(c) is the fusion result of the Laplacian pyramid method in this experiment;
Fig. 3(d) is the fusion result of the classical discrete orthogonal wavelet method;
Fig. 3(e) is the fusion result of the Curvelet method;
Fig. 3(f) is the fusion result of the Contourlet method;
Fig. 3(g) is the fusion result of the NSCT method;
Fig. 3(h) is the fusion result of the algorithm of the present invention;
Fig. 4(a)–(f) are partial enlargements of the fusion results in Fig. 3(c)–(h), respectively;
Fig. 5(a)–(f) are crops of the Talim fusion results in Fig. 3(c)–(h), respectively, prepared for typhoon center positioning;
Fig. 6(a) and 6(b) are typhoon center positioning results obtained from the Talim infrared 2 channel cloud image of Fig. 3(a) and the water vapor channel cloud image of Fig. 3(b), respectively;
Fig. 6(c)–(h) are typhoon center positioning results obtained from the fusion result images of Fig. 3(c)–(h), respectively;
Fig. 7(a) is the infrared 1 channel cloud image of typhoon "Pearl" (Chanchu, a non-eyed typhoon) at 00:00 on May 11, 2006;
Fig. 7(b) is the corresponding water vapor channel cloud image;
Fig. 7(c)–(h) are, for the multi-channel fusion of the infrared 1 and water vapor channel cloud images, the fusion results of the Laplacian pyramid, classical discrete orthogonal wavelet, Curvelet, Contourlet, NSCT, and present-invention methods, respectively;
Fig. 8(a)–(f) are partial enlargements of the fusion results in Fig. 7(c)–(h), respectively;
Fig. 9(a)–(f) are crops of the "Pearl" fusion results in Fig. 7(c)–(h), respectively, prepared for typhoon center positioning;
Fig. 10(a) and 10(b) are typhoon center positioning results obtained from the "Pearl" infrared 1 channel cloud image of Fig. 7(a) and the water vapor channel cloud image of Fig. 7(b), respectively;
Fig. 10(c)–(h) are typhoon center positioning results obtained from the fusion result images of Fig. 7(c)–(h), respectively.
Embodiment
The present invention proposes a multi-channel satellite cloud image fusion method based on the Shearlet transform. Starting from two registered satellite cloud images, the Shearlet transform is first applied to obtain low-frequency and high-frequency coefficients. The low-frequency Shearlet-domain part is then decomposed again with a Laplacian pyramid; its top layer is fused by averaging and the other layers by keeping the coefficients with the larger gray-level absolute values, after which the pyramid is reconstructed. In the high-frequency Shearlet-domain part, the information entropy, average gradient, and standard deviation of each high-frequency sub-image are first computed, then normalized, and the product of the three normalized values is formed; the sub-image with the larger product is taken as the fused sub-image. The fused high-frequency sub-images are then detail-enhanced with a nonlinear operator. Finally, the final fused image is obtained by the inverse Shearlet transform. The method can be generalized to the fusion of three or more satellite cloud images, realizing multi-channel satellite cloud image fusion.
Fig. 1 shows the flowchart of the multi-channel satellite cloud image fusion method based on the Shearlet transform of the present invention. The method comprises the following steps:
Step 1: Apply the Shearlet transform to the registered source images A and B, each of size M × N, with W decomposition levels and T decomposition directions, T = 2^r, r ∈ Z*, obtaining the high-frequency coefficients SH_A and SH_B and the low-frequency coefficients SL_A and SL_B;
Step 2: Perform Laplacian pyramid decomposition on SL_A and SL_B with Q decomposition levels, obtaining decomposition images LA and LB whose layer-q sub-images are LA_q and LB_q, 1 ≤ q ≤ Q;
Step 3: Fuse the top-layer pyramid sub-images LA_Q and LB_Q by averaging, giving the fusion result LF_Q:

$$LF_Q(i,j) = \frac{LA_Q(i,j) + LB_Q(i,j)}{2}$$

where 1 ≤ i ≤ CL_Q, 1 ≤ j ≤ RL_Q, and CL_Q and RL_Q are the numbers of rows and columns of the layer-Q sub-image;
Step 4: Fuse the remaining pyramid layers LA_q and LB_q, 1 ≤ q ≤ Q−1, with the max-absolute-gray-value rule, giving the fusion result LF_q:

$$LF_q(i,j) = \begin{cases} LA_q(i,j), & |LA_q(i,j)| \ge |LB_q(i,j)| \\ LB_q(i,j), & |LA_q(i,j)| < |LB_q(i,j)| \end{cases}$$
Step 5: Reconstruct the fused Laplacian pyramid LF to obtain the low-frequency fusion result TL_F;
Step 6: In the high-frequency part of the Shearlet domain, compute the information entropy, average gradient, and standard deviation of each directional sub-image at each level. Denote the high-frequency sub-images in direction t (1 ≤ t ≤ T) at level w (1 ≤ w ≤ W) as (SH_A)_w^t and (SH_B)_w^t; each has the same size as the source image, M × N. The information entropy E is:

$$E = -\sum_{i=0}^{L-1} P_i \log_2 P_i$$

where P_i is the probability that a pixel in the sub-image has gray value i and L is the number of gray levels in the image. The average gradient $\bar{G}$ of a high-frequency sub-image is:

$$\bar{G} = \frac{1}{(M-1)(N-1)} \sum_{ii=1}^{M-1} \sum_{jj=1}^{N-1} \sqrt{\frac{\left( \frac{\partial SH_w^t(x_{ii}, y_{jj})}{\partial x_{ii}} \right)^2 + \left( \frac{\partial SH_w^t(x_{ii}, y_{jj})}{\partial y_{jj}} \right)^2}{2}}$$

where SH_w^t(x_{ii}, y_{jj}) is the pixel in row x_{ii}, column y_{jj} of the high-frequency sub-image (SH_A)_w^t or (SH_B)_w^t, 1 ≤ ii ≤ M, 1 ≤ jj ≤ N. The standard deviation σ of a high-frequency sub-image is:

$$\sigma = \sqrt{\frac{\sum_{ii=1}^{M} \sum_{jj=1}^{N} \left( SH_w^t(x_{ii}, y_{jj}) - \bar{h} \right)^2}{M \times N}}$$

where $\bar{h}$ is the gray-level mean of the sub-image;
Step 7: Normalize the information entropy E, average gradient $\bar{G}$, and standard deviation σ of each high-frequency sub-image, obtaining the normalized values E_g, $\bar{G}_g$, and σ_g, and select the sub-image with the larger product of the three as the fused high-frequency sub-image:

$$(SH_F)_w^t = \begin{cases} (SH_A)_w^t, & (E_g)_A (\bar{G}_g)_A (\sigma_g)_A \ge (E_g)_B (\bar{G}_g)_B (\sigma_g)_B \\ (SH_B)_w^t, & (E_g)_A (\bar{G}_g)_A (\sigma_g)_A < (E_g)_B (\bar{G}_g)_B (\sigma_g)_B \end{cases}$$
Step 8: Apply nonlinear enhancement processing based on the stationary wavelet transform to the fused high-frequency subimage. Let maxh be the largest absolute gray value among all pixels of the fused subimage $(SH_F)_w^t$; the enhanced high-frequency subimage $(E\_SH_F)_w^t$ is

(E\_SH_F)_w^t(ii,jj) = a \cdot maxh \left\{\mathrm{sigm}\left[c\left(Sh_w^t(ii,jj) - b\right)\right] - \mathrm{sigm}\left[-c\left(Sh_w^t(ii,jj) + b\right)\right]\right\}

where sigm(·) is the sigmoid function, b = 0.35, c = 20, a = 1/(d_1 - d_2), d_1 = \mathrm{sigm}(c \times (1+b)), d_2 = \mathrm{sigm}(c \times (1-b)), and

Sh_w^t(ii,jj) = (SH_F)_w^t(ii,jj) / maxh.
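The Step-8 operator can be sketched as follows; the gain constant a = 1/(d1 - d2) follows the text verbatim, and the function names are ours.

```python
import numpy as np

def sigm(x):
    """Sigmoid function sigm(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + np.exp(-x))

def enhance(sub, b=0.35, c=20.0):
    """Nonlinear detail enhancement of a fused high-frequency subimage:
    normalize by the maximum absolute gray value maxh, apply the odd
    sigmoid pair, and scale back by a * maxh with a = 1/(d1 - d2)."""
    maxh = np.abs(sub).max()
    if maxh == 0:
        return sub.astype(float)      # nothing to enhance
    d1 = sigm(c * (1 + b))
    d2 = sigm(c * (1 - b))
    a = 1.0 / (d1 - d2)
    s = sub / maxh                    # Sh in [-1, 1]
    return a * maxh * (sigm(c * (s - b)) - sigm(-c * (s + b)))
```

Note that the operator is odd, so positive and negative coefficients are enhanced symmetrically and zero stays zero; with the constants as printed, the gain a is very large, whereas other presentations of this sigmoid operator define a = 1/(sigm(c(1−b)) − sigm(−c(1+b))), so the printed form should be taken as the patent states it.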
Step 9: Apply the inverse Shearlet transform to the Shearlet coefficient values obtained from the fusion processing to obtain the final fused image F.
The present invention further comprises: adding cloud images of other channels to the above fusion result, so that three or more cloud images can be fused and multi-channel satellite cloud image fusion is achieved.

The processing of the Shearlet decomposition coefficients fuses the coefficients of the two images according to separate rules. In the low-frequency Shearlet domain, the coefficients are decomposed again with a Laplacian pyramid; the top layer is fused by averaging, the other layers by taking the part with the larger gray absolute value, after which the pyramid is reconstructed. In the high-frequency Shearlet domain, the information entropy, average gradient and standard deviation of each high-frequency subimage are computed, and the subimage with the larger product of the three is taken as the fused subimage.

The fusion rule for the Shearlet low-frequency coefficients is therefore: first perform a Laplacian pyramid decomposition; fuse the top-layer subimages by averaging and the remaining pyramid layers by the larger-gray-absolute-value rule; finally reconstruct the fused Laplacian pyramid to obtain the new Shearlet low-frequency coefficients.
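The low-frequency rule just described (Laplacian pyramid decomposition, average the top layer, take the larger absolute value elsewhere, reconstruct) can be sketched in NumPy. The 2 × 2 mean-pool REDUCE and nearest-neighbour EXPAND below are dependency-free simplifications of the Gaussian-filtered operators normally used in Laplacian pyramids, and the function names are ours; dimensions are assumed divisible by 2^(levels−1).

```python
import numpy as np

def lap_pyramid(img, levels):
    """Toy Laplacian pyramid: 2x2 mean-pool for REDUCE and
    nearest-neighbour upsampling for EXPAND."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels - 1):
        small = cur.reshape(cur.shape[0] // 2, 2,
                            cur.shape[1] // 2, 2).mean(axis=(1, 3))
        up = small.repeat(2, axis=0).repeat(2, axis=1)
        pyr.append(cur - up)          # band-pass detail layer
        cur = small
    pyr.append(cur)                   # top (coarsest) layer
    return pyr

def fuse_lowfreq(low_a, low_b, levels=3):
    """Low-frequency rule: larger absolute value on detail layers,
    mean on the top layer, then reconstruct."""
    pa, pb = lap_pyramid(low_a, levels), lap_pyramid(low_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2.0)
    out = fused[-1]
    for detail in reversed(fused[:-1]):           # reconstruct
        out = out.repeat(2, axis=0).repeat(2, axis=1) + detail
    return out
```

With identical inputs the pipeline is exactly invertible, which is a quick sanity check on the reconstruction step.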
The experimental examples below further illustrate the effect of the present invention.
Experimental example 1:
As shown in Fig. 3(a)–(h), the infrared-2 channel and water-vapor channel cloud images of typhoon Talim at 12:00 on 31 August 2005 are chosen as source images for fusion. The images are converted to gray scale in MATLAB 7.0; all fusion experiment images are 512 × 512 pixel crops taken from 2288 × 2288 satellite cloud images such as the one shown in Fig. 2. The number at each pixel denotes its brightness: the larger the number, the brighter the point.
The two cloud images to be fused are each subjected to a Shearlet transform with one decomposition level and eight directions. To verify the effectiveness of the proposed fusion algorithm, its result is compared with those of five other methods: the Laplacian pyramid image fusion method; the classical discrete orthogonal wavelet image fusion method; the Contourlet image fusion method (fusion rule: average the low-frequency coefficients and take the high-frequency coefficients with the larger region energy; decomposition direction setting [0, 2]); the Curvelet image fusion method (fusion rule: average the low-frequency coefficients and take the high-frequency coefficients with the larger energy in a 3 × 3 window); and the NSCT image fusion method (NSCT combined with energy-based fusion; decomposition direction setting [3, 3]). The Laplacian pyramid and classical discrete orthogonal wavelet methods use the same fusion rule: average the low-frequency part and take the part with the larger gray absolute value in the high-frequency part.
Fig. 3(a) and Fig. 3(b) are the infrared-2 channel and water-vapor channel cloud images (512 × 512) of typhoon Talim at 12:00 on 31 August 2005. Fig. 3(c) is the fusion result of the Laplacian pyramid method, Fig. 3(d) of the classical discrete orthogonal wavelet method, Fig. 3(e) of the Curvelet method, Fig. 3(f) of the Contourlet method, Fig. 3(g) of the NSCT method, and Fig. 3(h) of the proposed algorithm.

As Fig. 3(a)–(h) show, the fused image of the Laplacian pyramid algorithm in Fig. 3(c) is clearer than that of the classical discrete orthogonal wavelet algorithm in Fig. 3(d). The fused images of the Curvelet method in Fig. 3(e) and the NSCT method in Fig. 3(g) are closer to the water-vapor source image of Fig. 3(b): their gray values are slightly high, and the typhoon eye does not stand out from the surrounding cloud. The fused image of the Contourlet method in Fig. 3(f) exhibits a fine grid artifact. The fused image of the proposed algorithm in Fig. 3(h) resembles that of the Laplacian pyramid algorithm in Fig. 3(c): the picture is clear, and the information near the typhoon eye is prominent. To compare the details more clearly, parts of the above fusion results are cropped, as shown in Fig. 4.
From Fig. 4(a)–(f) it can be seen that in the results of the Curvelet method (Fig. 4(e)) and the NSCT method (Fig. 4(f)) the gray difference between the cloud clusters and the typhoon eye is small, so these results are less discriminative than the other groups, whose typhoon-spiral images are quite similar. The result of the proposed algorithm highlights the typhoon eye effectively, and the main typhoon cloud system is smoother overall, which helps improve the precision of satellite-cloud-image-based typhoon center location.
To evaluate the fusion results objectively, the present invention computes the information entropy E, average gradient $\bar{G}$, standard deviation σ and average correlation coefficient Average_Corr of the above fused images, together with the product of these four indices. Since the algorithm is intended for multi-channel satellite cloud images with the aim of improving typhoon center location, particular weight is given to the amount of information, spatial resolution and sharpness of the fused image, so that it retains good detail and texture. The amount of information is evaluated by the information entropy, which objectively measures the information content before and after fusion: a larger E means the fused image carries more information on average, and richer information means a better fusion. Spatial resolution is evaluated by the correlation coefficient and the standard deviation. The correlation coefficient measures the degree of correlation between two images; the closer the correlation between the fusion result and a source image is to 1, the better the fusion. For source image A and fused image F the correlation coefficient is Corr(A, F); for source image B and fused image F it is Corr(B, F); the average correlation coefficient is

\mathrm{Average\_Corr} = \frac{\mathrm{Corr}(A, F) + \mathrm{Corr}(B, F)}{2}

The closer Average_Corr is to 1, the better the fusion result. The standard deviation σ reflects how far the gray values are spread about the gray mean: a larger σ means higher contrast in the fused image, so information is easier to discern; a smaller σ means the gray distribution is concentrated and the contrast low, making the detail of the fused image hard to discern. The sharpness of the image is evaluated by the average gradient $\bar{G}$, which sensitively reflects the ability of an image to express fine-detail contrast. In general, a larger average gradient means a larger gray-level change rate and a clearer image. Taking the four indices together, the present invention evaluates a fused image by the product of the four parameters: the larger the product, the better the fusion, the richer the information, the clearer the image, and the more it benefits typhoon center location.
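The average correlation coefficient defined above can be computed directly; the helper names below are ours.

```python
import numpy as np

def corr(x, y):
    """Pearson correlation coefficient between two equal-size images."""
    x = x.astype(float).ravel() - x.mean()
    y = y.astype(float).ravel() - y.mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

def average_corr(src_a, src_b, fused):
    """Average_Corr = (Corr(A, F) + Corr(B, F)) / 2."""
    return (corr(src_a, fused) + corr(src_b, fused)) / 2.0
```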
The performance indices of the fusion results of the infrared-2 channel and water-vapor channel cloud images of typhoon Talim are listed in Table 1.

Table 1: Performance parameters of the fusion results in Fig. 3(c)–(h) for the infrared-2 channel and water-vapor channel cloud images of typhoon Talim.

Table 1 shows that the average gradient and standard deviation of the proposed algorithm's result are better than those of the other fusion algorithms, and the product of the four indices is also the best, indicating the best overall fusion performance. Its information entropy and average correlation coefficient are not the best, but they differ little from those of the other algorithms: the largest difference in information entropy is 0.008 and in average correlation coefficient 0.003, so these two indices can be considered comparable to the other fusion methods.
As shown in Fig. 5(a)–(f), 39 × 39 patches are cropped from the fusion results of Fig. 3(c)–(h), and the typhoon center is then located with a typhoon-center-location algorithm. That algorithm first delineates the dense overcast region of the typhoon; then, based on the fact that the typhoon center has the richest gradient information within the dense overcast, it traverses the region with a 9 × 9 window, selects the window position with the most texture-line intersections as the typhoon center region, and takes the geometric center of that region as the typhoon center. The located center is marked with a "+" symbol in the 512 × 512 fusion result images, as shown in Fig. 6(a)–(h).
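The window traversal just described can be sketched as follows. Since the texture-line-intersection count is not specified in this excerpt, "texture richness" is approximated here by the window's gradient-magnitude sum, an assumption of ours, as is the function name.

```python
import numpy as np

def locate_center(img, win=9):
    """Return (row, col) of the centre of the win x win window with the
    largest gradient-magnitude sum -- a stand-in for the patent's
    'most texture-line intersections' criterion."""
    g = img.astype(float)
    gy, gx = np.gradient(g)           # gradients along rows, columns
    mag = np.hypot(gx, gy)
    best, best_rc = -1.0, (0, 0)
    h = win // 2
    for r in range(h, img.shape[0] - h):
        for c in range(h, img.shape[1] - h):
            s = mag[r - h:r + h + 1, c - h:c + h + 1].sum()
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc
```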
From Fig. 6(c)–(h) it can be seen that the typhoon center locations obtained from the Laplacian pyramid result (Fig. 6(c)), the classical discrete orthogonal wavelet result (Fig. 6(d)), the Contourlet result (Fig. 6(f)) and the proposed algorithm's result (Fig. 6(h)) are all close to the typhoon center, and the small differences are hard to detect with the naked eye. The distance error of the typhoon center is therefore computed from the longitude-latitude error of the location result; the typhoon-center-location errors of the fusion results of the infrared-2 channel and water-vapor channel cloud images of typhoon Talim at 12:00 on 31 August 2005 are listed in Table 2.
Table 2: Comparison of center-location errors of the various fusion-method results for the infrared-2 channel and water-vapor channel cloud images of typhoon Talim at 12:00 on 31 August 2005.

Table 2 shows that the typhoon-center error of the proposed algorithm is 39.37 km, the smallest overall center-location error, better than the center-location results of the infrared-2 channel alone, the water-vapor channel alone, and the other fusion methods.
Experimental example 2:
As shown in Fig. 7(a)–(h), the infrared-1 channel and water-vapor channel cloud images of typhoon "Pearl" at 00:00 on 11 May 2006 are chosen as source images for fusion. The infrared-1 channel and water-vapor channel cloud images are shown in Fig. 7(a) and Fig. 7(b). Fig. 7(c) is the fusion result of the Laplacian pyramid method, Fig. 7(d) of the classical discrete orthogonal wavelet method, Fig. 7(e) of the Curvelet method, Fig. 7(f) of the Contourlet method, Fig. 7(g) of the NSCT method, and Fig. 7(h) of the proposed algorithm. Since the typhoon in Fig. 7 has no eye, judging from the cloud detail around the cyclone periphery, the results of the Curvelet method (Fig. 7(e)) and the NSCT method (Fig. 7(g)) contain many components with large gray values and blurrier detail. Among the other groups, the Laplacian pyramid result (Fig. 7(c)) is slightly better than the classical orthogonal wavelet result (Fig. 7(d)); the result of the proposed algorithm (Fig. 7(h)) is close to the Laplacian pyramid result (Fig. 7(c)); and the Contourlet result (Fig. 7(f)) is also good. To compare the details more clearly, parts of the above fusion results are cropped, as shown in Fig. 8(a)–(f).
In Fig. 8 it can be seen that the cloud at the spiral center of this eyeless typhoon is still quite bright, and the differences between the results are not obvious; they are nearly all comparable. Judging from the peripheral cloud information, the Laplacian pyramid result (Fig. 8(a)), the classical orthogonal wavelet result (Fig. 8(b)) and the proposed algorithm's result (Fig. 8(f)) are slightly better; the Curvelet result (Fig. 8(c)) and the NSCT result (Fig. 8(e)) are over-bright, the cloud image is not very clear, and the cloud-cluster information is not highlighted; and the details and edges of the Contourlet result (Fig. 8(d)) are not clear enough.
The infrared-1 channel and water-vapor channel cloud images of typhoon "Pearl" in Fig. 7 are fused with the various fusion algorithms; the performance indices of the results are listed in Table 3.

Table 3: Performance parameters of the fusion results in Fig. 7(c)–(h) for the infrared-1 channel and water-vapor channel cloud images of typhoon "Pearl".

Table 3 shows that the average gradient and standard deviation of the proposed algorithm's result are better than those of the other fusion algorithms, and the product of the four indices is also the best, indicating the best fusion performance. Its information entropy and average correlation coefficient are not the best, but they differ little from the other algorithms: the largest difference in information entropy is 0.025 and in average correlation coefficient 0.005, so these two indices are essentially comparable with the other fusion methods.
Then 39 × 39 patches are cropped from the fusion results of the various methods, as shown in Fig. 9(a)–(f), and the typhoon-center-location algorithm is applied to verify the effectiveness of the proposed fusion algorithm. Because this group of typhoon cloud images has no eye, the gray values are large, and the cropped patches of the several fusion results do not look very different. The located centers are marked with a "+" symbol in the 512 × 512 fusion result images, as shown in Fig. 10(a)–(h). In Fig. 10(a)–(h) the center-location results of the various methods all differ: the located center of the infrared-1 source image in Fig. 10(a) deviates far from the true center, while that of the water-vapor channel in Fig. 10(b) is relatively close. The located centers of the Laplacian pyramid result (Fig. 10(c)) and the NSCT result (Fig. 10(g)) sit somewhat low; those of the classical discrete orthogonal wavelet result (Fig. 10(d)), the Curvelet result (Fig. 10(e)) and the Contourlet result (Fig. 10(f)) are all slightly to the right but relatively close to the center. The located center of the proposed method's result is closest to the center and the best of the group. The distance error of the typhoon center is computed from the longitude-latitude error of the location result; the typhoon-center-location errors of the fusion results of the infrared-1 channel and water-vapor channel cloud images of typhoon "Pearl" at 00:00 on 11 May 2006 are listed in Table 4.
Table 4: Comparison of center-location errors of the various fusion-method results for the infrared-1 channel and water-vapor channel cloud images of typhoon "Pearl" at 00:00 on 11 May 2006.

Table 4 shows that the typhoon-center error of the proposed algorithm is 76.21 km, clearly better than the center-location results of the infrared-1 channel alone and of the other fusion methods, the best of all.
Experimental example 3:
To further illustrate the effectiveness of the proposed fusion algorithm, its computational complexity is analyzed below. The proposed fusion algorithm runs under MATLAB R2009a on a Dell OptiPlex 780 desktop with an Intel Core 2 Quad [email protected] processor, 2 GB of Kingston DDR3 1333 MHz memory, and 32-bit Windows XP Professional SP3 (DirectX 9.0c). The running times of the various fusion methods are measured on the second group of experimental images and listed in Table 5.
Table 5: Running times of the various fusion algorithms.

Table 5 shows that, apart from the Laplacian pyramid and classical discrete orthogonal wavelet fusion algorithms, whose running times are shorter, the fusion algorithm proposed by the present invention takes less time than all the other fusion algorithms. The computational complexity of the proposed fusion algorithm is therefore low, while a good fusion effect is obtained.
The above three groups of experiments show that the proposed algorithm fuses satellite cloud images pairwise well. Compared with the Laplacian pyramid, classical discrete orthogonal wavelet, Contourlet, Curvelet and NSCT image fusion methods, the proposed algorithm yields the best standard deviation and average gradient, an information entropy and average correlation coefficient comparable to the other methods, and the best comprehensive evaluation index. The fused image has a good visual effect and clearly retains the typhoon eye and cloud-system detail, and typhoon center location based on the fusion result achieves higher precision for both eyed and eyeless typhoons; the overall quality of its satellite cloud image fusion result is the best. Fusing three or more satellite cloud images with this method realizes multi-channel satellite cloud image fusion, which combines more cloud-image information and improves the precision of typhoon center location.

Claims (3)

1. A multi-channel satellite cloud image fusion method based on the Shearlet transform, characterized by comprising the following steps:

Step 1: Apply the Shearlet transform to registered source images A and B, each of size M × N, with W decomposition levels and T decomposition directions, T = 2^r, r ∈ Z*, obtaining high-frequency coefficients SH_A and SH_B and low-frequency coefficients SL_A and SL_B;

Step 2: Apply a Laplacian pyramid decomposition with Q levels to SL_A and SL_B respectively, obtaining decompositions LA and LB whose q-th sublayers are LA_q and LB_q, 1 ≤ q ≤ Q;

Step 3: Fuse the top-layer subimages LA_Q and LB_Q by averaging, giving the fusion result LF_Q:

LF_Q(i,j) = \frac{LA_Q(i,j) + LB_Q(i,j)}{2}

where 1 ≤ i ≤ CL_Q, 1 ≤ j ≤ RL_Q, CL_Q is the number of rows and RL_Q the number of columns of the Q-th layer subimage;

Step 4: Fuse the other pyramid layers LA_q and LB_q, 1 ≤ q ≤ Q−1, by taking the coefficient with the larger gray absolute value, giving the fusion result LF_q:

LF_q(i,j) = \begin{cases}LA_q(i,j), & |LA_q(i,j)| \ge |LB_q(i,j)| \\ LB_q(i,j), & |LA_q(i,j)| < |LB_q(i,j)|\end{cases}

Step 5: Reconstruct the fused Laplacian pyramid LF to obtain the low-frequency fusion result TL_F;
Step 6: In the high-frequency part of the Shearlet transform domain, compute the information entropy, average gradient and standard deviation of each directional subimage at every decomposition level, denoting the high-frequency subimages of the w-th level and t-th direction by $(SH_A)_w^t$ and $(SH_B)_w^t$, 1 ≤ w ≤ W, 1 ≤ t ≤ T, each of the same size M × N as the source images; the information entropy E is:

E = -\sum_{i=0}^{L-1} P_i \log_2 P_i

where P_i is the probability that a pixel in the subimage has gray value i, and L is the number of gray levels in the image; the average gradient $\bar{G}$ of a high-frequency subimage is expressed as

\bar{G} = \frac{1}{(M-1)(N-1)} \sum_{ii=1}^{M-1} \sum_{jj=1}^{N-1} \sqrt{\frac{1}{2}\left[\left(\frac{\partial SH_w^t(x_{ii},y_{jj})}{\partial x_{ii}}\right)^2 + \left(\frac{\partial SH_w^t(x_{ii},y_{jj})}{\partial y_{jj}}\right)^2\right]}

where $SH_w^t(x_{ii},y_{jj})$ denotes the pixel in row $x_{ii}$, column $y_{jj}$ of $(SH_A)_w^t$ or $(SH_B)_w^t$, 1 ≤ ii ≤ M, 1 ≤ jj ≤ N; the standard deviation σ of a high-frequency subimage is expressed as:

\sigma = \sqrt{\frac{\sum_{ii=1}^{M}\sum_{jj=1}^{N}\left(SH_w^t(x_{ii},y_{jj}) - \bar{h}\right)^2}{M \times N}}

where $\bar{h}$ is the gray mean of the subimage;
Step 7: Normalize the information entropy E, average gradient $\bar{G}$ and standard deviation σ of the high-frequency subimages $(SH_A)_w^t$ and $(SH_B)_w^t$ respectively, obtaining the normalized values $E_g$, $\bar{G}_g$ and $\sigma_g$, and select as the fused subimage the one with the larger product of the three:

(SH_F)_w^t = \begin{cases}(SH_A)_w^t, & (E_g)_A \times (\bar{G}_g)_A \times (\sigma_g)_A \ge (E_g)_B \times (\bar{G}_g)_B \times (\sigma_g)_B \\ (SH_B)_w^t, & (E_g)_A \times (\bar{G}_g)_A \times (\sigma_g)_A < (E_g)_B \times (\bar{G}_g)_B \times (\sigma_g)_B\end{cases}
Step 8: Apply nonlinear enhancement processing based on the stationary wavelet transform to the fused high-frequency subimage; let maxh be the largest absolute gray value among all pixels of the fused subimage $(SH_F)_w^t$; the enhanced high-frequency subimage $(E\_SH_F)_w^t$ is:

(E\_SH_F)_w^t(ii,jj) = a \cdot maxh \left\{\mathrm{sigm}\left[c\left(Sh_w^t(ii,jj) - b\right)\right] - \mathrm{sigm}\left[-c\left(Sh_w^t(ii,jj) + b\right)\right]\right\}

where sigm(·) is the sigmoid function, b = 0.35, c = 20, a = 1/(d_1 - d_2), d_1 = \mathrm{sigm}(c \times (1+b)), d_2 = \mathrm{sigm}(c \times (1-b)), and

Sh_w^t(ii,jj) = (SH_F)_w^t(ii,jj) / maxh;

Step 9: Apply the inverse Shearlet transform to the Shearlet coefficient values obtained from the fusion processing to obtain the final fused image F.
2. The multi-channel satellite cloud image fusion method based on the Shearlet transform according to claim 1, characterized in that in step 1 the registered source images A and B are satellite cloud images returned by the Chinese meteorological satellite FY-2C, which has five channels: infrared channel 1, infrared channel 2, the water-vapor channel, infrared channel 4 and the visible channel; images from any two of these channels may be selected and registered.
3. The multi-channel satellite cloud image fusion method based on the Shearlet transform according to claim 1 or 2, characterized by further comprising: adding cloud images of other channels to the above fusion result, so that three or more cloud images are fused and multi-channel satellite cloud image fusion is achieved.
CN201410056917.3A 2014-02-19 2014-02-19 Multi-channel satellite cloud picture fusion method based on Shearlet conversion Active CN103839243B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410056917.3A CN103839243B (en) 2014-02-19 2014-02-19 Multi-channel satellite cloud picture fusion method based on Shearlet conversion


Publications (2)

Publication Number Publication Date
CN103839243A true CN103839243A (en) 2014-06-04
CN103839243B CN103839243B (en) 2017-01-11

Family

ID=50802713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410056917.3A Active CN103839243B (en) 2014-02-19 2014-02-19 Multi-channel satellite cloud picture fusion method based on Shearlet conversion

Country Status (1)

Country Link
CN (1) CN103839243B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318532A (en) * 2014-10-23 2015-01-28 湘潭大学 Secondary image fusion method combined with compressed sensing
CN107230197A (en) * 2017-05-27 2017-10-03 浙江师范大学 Tropical cyclone based on satellite cloud picture and RVM is objective to determine strong method
CN108073865A (en) * 2016-11-18 2018-05-25 南京信息工程大学 A kind of aircraft trail cloud recognition methods based on satellite data
CN109215008A (en) * 2018-08-02 2019-01-15 上海海洋大学 A kind of multispectral and panchromatic image fusion method of entirety two generations Bandelet transformation
CN109272477A (en) * 2018-09-11 2019-01-25 中国科学院长春光学精密机械与物理研究所 A kind of fusion method and fusion treatment device based on NSST Yu adaptive binary channels PCNN
CN109740629A (en) * 2018-12-05 2019-05-10 电子科技大学 A kind of non-down sampling contourlet decomposition transform system and its implementation based on FPGA
CN113284079A (en) * 2021-05-27 2021-08-20 山东第一医科大学(山东省医学科学院) Multi-modal medical image fusion method
CN113487529A (en) * 2021-07-12 2021-10-08 吉林大学 Meteorological satellite cloud picture target detection method based on yolk

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006017233A1 (en) * 2004-07-12 2006-02-16 Lehigh University Image fusion methods and apparatus
CN103116881A (en) * 2013-01-27 2013-05-22 西安电子科技大学 Remote sensing image fusion method based on PCA (principal component analysis) and Shearlet conversion


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QI-GUANG MIAO ET AL.: "A novel algorithm of image fusion using shearlets", 《OPTICS COMMUNICATIONS》 *
WANG-Q LIM: "The Discrete Shearlet Transform: A New Directional Transform and Compactly Supported Shearlet Frames", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
杨何群 等: "热带气旋卫星遥感客观定位方法研究进展", 《热带海洋学报》 *


Also Published As

Publication number Publication date
CN103839243B (en) 2017-01-11


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant