CN102819829A - Rebuilding algorithm for super-resolution remote sensing image based on fractal theory - Google Patents


Info

Publication number
CN102819829A
CN102819829A CN2012102475326A CN201210247532A
Authority
CN
China
Prior art keywords
image
fractal
resolution
noise
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012102475326A
Other languages
Chinese (zh)
Inventor
胡茂桂
王劲峰
赵豫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Geographic Sciences and Natural Resources of CAS
Original Assignee
Institute of Geographic Sciences and Natural Resources of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Geographic Sciences and Natural Resources of CAS filed Critical Institute of Geographic Sciences and Natural Resources of CAS
Priority to CN2012102475326A priority Critical patent/CN102819829A/en
Publication of CN102819829A publication Critical patent/CN102819829A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a method for super-resolution reconstruction of remote sensing images based on fractal theory. The method uses fractal coding to establish the fractal similarity or affinity between regions of an image at different scales, and transfers this relationship to a finer spatial scale with higher resolution by exploiting the scale transferability of fractal features. By quantitatively separating additive white Gaussian noise from the image, a noise-free high-resolution image can be reconstructed from a noisy low-resolution image. Super-resolution reconstruction of a digital elevation image demonstrates the effectiveness of the proposed method. The fractal-based, noise-removing super-resolution reconstruction model provided by the invention has theoretical significance and practical value for studies of global environmental change and for disaster and environmental monitoring.

Description

Fractal-based super-resolution reconstruction algorithm for remote sensing images
Technical field
The invention belongs to the field of geographic information science and technology. It takes a single remote sensing image as the main research object and studies its super-resolution reconstruction: under the condition that the original characteristics of the remote sensing image are preserved, a remote sensing image with higher spatial resolution is reconstructed from an existing low-spatial-resolution image, so that the resulting image has a spatial resolution higher than that of the input image.
Background technology
How to obtain more useful high-spatial-resolution information from low-spatial-resolution images is a central problem in super-resolution image reconstruction research. Super-resolution image reconstruction can be defined as a class of methods that generate a higher-spatial-resolution image from one or more low-spatial-resolution images through analysis and processing; the output image not only has a higher spatial resolution than the input images, but also contains more information than any single input image. At present, the theory and methods for super-resolution reconstruction of remote sensing images fall into two main directions: the first is super-resolution theory and methods based on reconstruction and learning; the second is super-resolution theory and methods based on mixed-pixel decomposition. In the early stage of super-resolution research, most work was based on a single image. Because the information contained in a single low-resolution image is limited, the results obtained in practical applications were not very satisfactory, which led to super-resolution reconstruction algorithms based on image sequences and multiple images. Super-resolution methods based on multiple images exploit the complementary information between different images and can achieve better results than single-image reconstruction; however, in many practical applications, obtaining several images of the same scene is very difficult or even impossible. Therefore, reconstructing a high-resolution image from a single image is of great value and significance.
Fractal theory is an effective way to describe disordered, unstable natural phenomena. Since Mandelbrot proposed fractal theory in the 1970s, it has quickly been applied in many fields; it can not only describe many natural features that are independent of scale, but also describe complex phenomena concisely and appropriately across scales and quantify them (Mandelbrot, 1967). Fractal coding is an image compression method first proposed by Barnsley et al. based on Iterated Function System (IFS) theory (Barnsley and Sloan, 1988). Because it was difficult to realize in practical applications, Jacquin improved it and proposed the Partitioned Iterated Function System (PIFS) for gray-level images, in which the image is partitioned into many small blocks that are then processed and coded (Jacquin, 1992). Self-similarity and self-affinity are important prerequisites for fractal coding of a target image. It is now widely recognized that many natural scenes and images have fractal/multifractal features, such as clouds, mountain ranges and plant leaves; the images of these natural phenomena contain a large amount of self-similar information across different scales. Fractal coding can effectively exploit the fractal features present in remote sensing images and thereby provide a basis for super-resolution reconstruction.
Summary of the invention
The present invention proposes a method for super-resolution reconstruction of remote sensing images based on fractal theory; the method does not require any other prior knowledge or data as input.
Without considering complicating factors such as motion and distortion, the relationship between an image whose spatial resolution has been reduced and the original high-resolution image can be described in the following form:
$$L = H \otimes s + e \qquad (1)$$
where L is the low-resolution image; H is the high-resolution image; s is the scale information transfer function (ITF), which represents how information is transferred between different scales, i.e. how information is preserved and lost when the resolution decreases; $\otimes$ denotes the product (convolution) operator; and e is a random error term. Assuming that s and e are known or can be estimated, the super-resolution reconstruction problem can be stated as: given L, s and e, how to obtain H from formula (1). For an arbitrary image L, even under the same s and e, many images H can be found that satisfy formula (1). However, if the object under study has fractal properties, i.e. there is self-similarity or self-affinity between its parts and the whole and between different scales, then according to this fractal character the H that satisfies the conditions can be determined uniquely; this means that the similarity and affinity between the parts and the whole of the image exist across different scales (Fig. 1).
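To make the degradation model concrete, the following is a minimal sketch (assuming a numpy/scipy environment; the function name, the Gaussian choice of ITF and the parameter values are illustrative assumptions, not part of the patent text) of how a low-resolution observation L can be simulated from a high-resolution image H according to formula (1):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_low_resolution(H, scale=3, itf_sigma=1.0, noise_std=1.0, rng=None):
    """Simulate L = H (x) s + e: blur H with a Gaussian ITF, down-sample, add AWGN."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(H.astype(float), sigma=itf_sigma)  # information transfer between scales
    low = blurred[::scale, ::scale]                              # reduce the spatial resolution
    e = rng.normal(0.0, noise_std, size=low.shape)               # additive white Gaussian noise
    return low + e
```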
The procedure of fractal-based super-resolution image reconstruction is shown in Fig. 2. Before super-resolution reconstruction of an image, its fractal characteristics must be correctly computed or estimated; then the AWGN (additive white Gaussian noise) parameter e of the image and the scale information transfer function s are estimated. Finally, noise reduction and scale reduction are carried out within the fractal coding process to complete the reconstruction of the SR image.
Fractal coding has several important properties that make it suitable for enhancing image resolution: (1) Scale independence. Coding an image yields a set of parameters, called the fractal code, that describes the similarity and affinity between parts of the image and the whole. The fractal code itself is not constrained by scale; it can be decoded at an arbitrary scale to obtain a new image whose scale differs from that of the original. (2) Similarity preservation. Fractal coding compresses the similarity and affinity between the different internal scales of an image; after decoding at a different scale, this similarity and affinity can be recovered, preserving the characteristics of the original image. (3) Nonlinear operation. Fractal decoding is a locally linear but globally nonlinear adaptive algorithm, which helps reconstruct image detail. Thanks to the flexibility and efficiency of fractal encoding and decoding, some research has turned to applications other than image compression, including interpolation, restoration and noise reduction. Ghazel et al. explored the relationship between the fractal code of an image containing AWGN and that of the noise-free image, and proposed a method that uses fractal encoding and decoding to reduce AWGN noise (Ghazel et al., 2003). On this basis, Chen et al. used the discrete cosine transform to achieve noise reduction and resolution enhancement in the frequency domain (Chen et al., 2008). In current research and applications of fractal encoding and decoding, the ITF is usually taken to be down-sampling or averaging. For general applications (such as image compression or texture segmentation) this choice is acceptable and simple to implement. For super-resolution reconstruction, however, simply choosing down-sampling or averaging is inappropriate, because in real natural phenomena and scenes the amount of information usually decreases exponentially, not linearly, as the scale shrinks. Therefore, using an appropriate transfer function is a very important step in fractal-coding-based SR reconstruction.
In the present invention, we focus on the process of increasing scale and on the relationship between the noise-free image and the noisy image, and we propose a denoising SR reconstruction based on fractal coding. The main contents of this fractal-based SR reconstruction are: (a) parameter estimation: estimation of the noise and of the information transfer function (ITF); (b) denoising fractal coding of the image; (c) setting the resolution amplification factor, decoding the initial image according to the Partitioned Iterated Function System (PIFS) code, and iterating until convergence to obtain the final SR image. A high-level sketch of this pipeline is given below.
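The following is a minimal sketch (assuming numpy; `estimate_awgn_variance`, `estimate_itf_sigma`, `encode_pifs_denoising` and `apply_pifs_code` are hypothetical placeholders for steps (a)-(c) described above, not the patent's actual implementation):

```python
import numpy as np

def fractal_super_resolution(low_res, zoom=3, tol=1e-6, max_iter=50):
    """Sketch of the denoising fractal SR pipeline: (a) parameter estimation,
    (b) denoising fractal (PIFS) coding, (c) iterative decoding at the larger scale."""
    noise_var = estimate_awgn_variance(low_res)                       # step (a): AWGN variance
    itf_sigma = estimate_itf_sigma(low_res)                           # step (a): ITF parameter
    pifs_code = encode_pifs_denoising(low_res, itf_sigma, noise_var)  # step (b)

    # step (c): arbitrary initial image, iterated under the PIFS code until convergence
    high_res = np.full((low_res.shape[0] * zoom, low_res.shape[1] * zoom), low_res.mean())
    for _ in range(max_iter):
        updated = apply_pifs_code(pifs_code, high_res, itf_sigma)
        if np.linalg.norm(updated - high_res) < tol:
            break
        high_res = updated
    return high_res
```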
Method for reconstructing based on fractal super-resolution remote sensing image of the present invention is realized through following step:
1. Determination of the scale information transfer function (ITF)
The ITF reflects how information changes between scales. For a specific image, it describes the relationship between the information contained in the low-resolution and the high-resolution image when the spatial resolution decreases, for example how large-scale texture features and shapes are retained and how small details are aggregated. From the viewpoint of optical imaging, the ITF has a certain similarity to the point spread function (PSF); this relationship can be expressed as s(I) = S(f(I)), where f(·) is the PSF and S(·) is the down-sampling operation. It is not difficult to see that the down-sampling and averaging operators commonly adopted in fractal coding are two special forms of the information transfer function: (1) when the center of the ITF template (in discrete form) is 1 and the other elements are 0, the template corresponds to down-sampling (Fig. 3(b)); (2) when all elements of the template are equal and sum to 1, the template is equivalent to the averaging template (Fig. 3(c)).
In practice, if prior knowledge is lacking, obtaining the ITF exactly is not easy, yet the ITF of the image must be estimated in order to carry out super-resolution reconstruction. Research shows that as the scale shrinks, the amount of information in natural systems decays exponentially. Studies of the visual system and of image processing show that the Gaussian pyramid is highly consistent with the visual characteristics of the human eye during visual processing. The Gaussian function reflects well the trend of information change in natural systems under scale change, so in the absence of prior knowledge it can be used as an estimate of the true ITF. In addition, the relationship between the Gaussian function and the down-sampling and averaging methods also satisfies the general/special relationship described above.
$$G(x, y) = \frac{1}{2\pi\delta^2}\, e^{-\frac{x^2 + y^2}{2\delta^2}} \qquad (2)$$
In the two-dimensional Gaussian function, the standard deviation δ determines how concentrated or spread out the function is: the smaller δ, the narrower the function; the larger δ, the flatter the function (Fig. 3(a)). More generally, as δ → 0 the Gaussian template degenerates to the down-sampling template, and as δ → ∞ it approaches the averaging template.
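As an illustration, the following is a minimal sketch (assuming numpy; the normalization to unit sum is an assumption consistent with the weight constraint used later) of constructing a discrete Gaussian ITF template whose limiting behaviors match the down-sampling and averaging templates described above:

```python
import numpy as np

def gaussian_itf_template(size=3, delta=1.0):
    """Discrete Gaussian ITF template, normalized so that its weights sum to 1."""
    half = size // 2
    x, y = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    g = np.exp(-(x**2 + y**2) / (2.0 * delta**2))
    return g / g.sum()

# Small delta -> weight concentrates at the center (down-sampling template);
# large delta -> weights become nearly equal (averaging template).
print(gaussian_itf_template(3, 0.1).round(3))
print(gaussian_itf_template(3, 100.0).round(3))
```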
When the scale changes, different natural phenomena and scenes follow different laws of variation, and the transfer and loss of their information also differ. Even if the ITF follows a Gaussian distribution, its shape may still differ completely, i.e. the Gaussian distribution may have different variances. To carry out SR reconstruction, the ITF must be estimated. Two cases are considered: a noise-free image and an image containing AWGN.
(1) When the image contains no noise, or the noise is negligible, fractal encoding and decoding with the true ITF would ideally yield an image identical to the original. In practical applications, because of computational error, the distance between the image obtained after encoding and decoding and the original image is a very small but nonzero value; the optimal ITF is the one that minimizes this distance. Therefore, the parameter δ of the ITF satisfies:
$$\delta = \arg\min \| I - I' \| \qquad (3)$$
where I is the original image and I' is the image of equal resolution obtained after fractal encoding and decoding.
(2) When the image contains AWGN, it follows from formula (1) that, ideally, the difference image between the denoised image and the original image is exactly the AWGN noise. This gives the following relation:
$$\delta = \arg\left[ (I - I'') \sim N(0, \delta_0^2) \right] \qquad (4)$$
where I is the original image and I'' is the image of equal resolution obtained after fractal encoding, decoding and denoising; that is, the difference image follows a normal distribution with mean 0 and variance $\delta_0^2$.
In particular, when the noise variance $\delta_0^2$ approaches 0, the noisy case degenerates to the first case, and formula (3) expresses the same meaning as formula (4). The solution can be obtained with an optimization algorithm; Fig. 4 shows one conventional implementation.
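The following is a minimal sketch (assuming numpy, a hypothetical `fractal_encode_decode` routine, and a simple grid search standing in for the optimization algorithm of Fig. 4) of estimating δ by minimizing the distance of formula (3):

```python
import numpy as np

def estimate_itf_sigma(image, candidates=np.linspace(0.2, 2.0, 10)):
    """Estimate the ITF parameter delta as the value minimizing ||I - I'|| (formula (3)),
    where I' is the image after fractal encoding/decoding at equal resolution."""
    best_delta, best_dist = None, np.inf
    for delta in candidates:
        reconstructed = fractal_encode_decode(image, itf_sigma=delta)  # hypothetical routine
        dist = np.linalg.norm(image - reconstructed)
        if dist < best_dist:
            best_delta, best_dist = delta, dist
    return best_delta
```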
2. Estimation of the additive white Gaussian noise
AWGN refers to noise whose probability density function satisfies a normal distribution and whose power spectral density is constant; it thus involves two different aspects of the noise, namely the normality of the probability density function and the uniformity of the power spectral density. An image I containing AWGN can be expressed as $I = \tilde{I} + e$, where $\tilde{I}$ is the noise-free image and e is the AWGN noise. The noise e has the following properties: (1) each $e_i$ follows a normal distribution with mean 0 and variance $\delta_e^2$; (2) $e_i$ and $e_j$ are mutually independent for i ≠ j; (3) e and $\tilde{I}$ are mutually independent.
To carry out SR reconstruction, the AWGN parameter must be estimated from the image I alone, without knowing the noise parameters or related information. For an arbitrary image, estimating e from the limited information in I is very difficult. In real nature, however, because spatial correlation is ubiquitous, the attribute values of the object under study change very little, or are even identical, within a certain local area. It can therefore be assumed that the variation of gray values within a small region of the image is caused mainly by the noise. Using this universal law of nature, the AWGN of the image can be estimated approximately. The concrete method is as follows:
(1) Determine the magnitude of the spatial autocorrelation between pixel gray values in the image and estimate the range over which the autocorrelation exists; the spatial autocorrelation can be estimated from the semivariogram;
(2) According to the strength of the spatial autocorrelation, estimate the size r of the regions of the image in which the pixel values change very little, i.e. within this range the pixel values can be regarded as essentially unchanged;
(3) Use a sliding window of size r to collect the variation of pixels within the window over the whole image: for each window position, compute the variance of the difference image obtained by subtracting the window mean from all pixels in the window, and collect these variances into a set V;
(4) Build a histogram of the set V; the variance with the highest frequency of occurrence can then be regarded as the variance of the AWGN. In the concrete implementation, the histogram is first fitted with a curve, and the value of maximum probability is determined from the fitted curve.
This is an estimation method for the case where relevant prior knowledge is lacking; for real images the underlying assumption is usually acceptable. A sketch of the procedure is given below.
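The following is a minimal sketch (assuming numpy; the window size, the bin count and taking the histogram mode instead of fitting a curve are simplifying assumptions) of the local-variance method for estimating the AWGN variance:

```python
import numpy as np

def estimate_awgn_variance(image, r=2, bins=100):
    """Estimate the AWGN variance as the most frequent local variance
    computed over all r x r sliding windows of the image."""
    img = image.astype(float)
    local_vars = []
    for i in range(img.shape[0] - r + 1):
        for j in range(img.shape[1] - r + 1):
            block = img[i:i + r, j:j + r]
            local_vars.append(np.var(block - block.mean()))  # variance after removing the window mean
    counts, edges = np.histogram(local_vars, bins=bins)
    peak = np.argmax(counts)                                 # mode of the local-variance histogram
    return 0.5 * (edges[peak] + edges[peak + 1])
```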
3. Determination of the fractal coding
Let I(x, y, p) be the target image of the study, where (i, j) denote the pixel row and column index and p denotes the pixel value at that position. R and D denote the range blocks (Range Block) and domain blocks (Domain Block), respectively; they are two different partitions of the image. Each range block R_i ∈ R (i = 1, 2, ..., M, where M is the number of range blocks) is associated, through a contraction mapping w_i, with some domain block D_j ∈ D (j = 1, 2, ..., N, where N is the number of domain blocks). The contraction mapping consists of two transformations, namely a geometric transformation g: D_j → R_i (Ω denotes the image brightness space; i = 1, 2, ..., M; j = 1, 2, ..., N) and a luminance transformation l: Θ → Θ (Θ is the set of real numbers), applied to every (x, y, p) ∈ D_j.
The geometric transformation g is composed of an affine mapping r(·) and a contraction transformation s(·): g(·) = s(r(·)). The affine mapping r(·) performs the affine transformation of the domain block D_j; in the discrete case it comprises the following eight transformations: up-down flip, vertical flip, the two diagonal flips, and rotations by ±90° and ±180°. The contraction transformation s(·) shrinks the domain block according to the ITF so that its scale becomes identical to that of the range block.
The luminance transformation, also called the gray-level mapping, is a linear transformation l(t) = αt + β applied to $g^{(k)}(D_j)$ (k is the index of the affine mapping), where t denotes the brightness values in $g^{(k)}(D_j)$, α is a scaling factor and β is an offset parameter. The main purpose of this transformation is to find a range block R_i that matches $g^{(k)}(D_j)$; its parameters α and β can be obtained by minimizing the collage error. When 0 < α < 1, the luminance transformation satisfies the requirement of a contraction mapping.
$$\Delta_{ij}^{(k)} = \left\| \alpha_{ij}\, g_{ij}^{(k)}(D_j) + \beta_{ij} - R_i \right\|^2 \qquad (7)$$
where k is the index of the affine mapping, and the norm ‖·‖² computes the Euclidean distance between the transformed domain block $g_{ij}^{(k)}(D_j)$ and the range block R_i.
Therefore, the fractal code of a range block R_i can be represented by a five-tuple (i, j, k, α_ij, β_ij), corresponding respectively to the range block R_i, the domain block D_j, the index of the affine mapping, and the luminance-transformation parameters α and β. The set of fractal codes of all range blocks constitutes the PIFS code of the image I.
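The following is a minimal sketch (assuming numpy; the exhaustive search, the restriction to four of the isometries, the least-squares formulas for α and β, and the hypothetical `shrink_with_itf` helper are simplifications of the scheme described above, not the patent's exact implementation) of PIFS fractal encoding with range blocks, domain blocks and the luminance transformation:

```python
import numpy as np

def isometries(block):
    """A few of the affine mappings r(.): identity, horizontal/vertical flips, 180-degree rotation."""
    return [block, np.fliplr(block), np.flipud(block), np.rot90(block, 2)]

def encode_pifs(image, itf, range_size=2, domain_size=6):
    """For every range block R_i, find the domain block D_j, isometry k and luminance
    parameters (alpha, beta) that minimize the collage error of formula (7)."""
    img = image.astype(float)
    code = []
    range_pos = [(i, j) for i in range(0, img.shape[0] - range_size + 1, range_size)
                        for j in range(0, img.shape[1] - range_size + 1, range_size)]
    domain_pos = [(i, j) for i in range(0, img.shape[0] - domain_size + 1, domain_size)
                         for j in range(0, img.shape[1] - domain_size + 1, domain_size)]
    for (ri, rj) in range_pos:
        R = img[ri:ri + range_size, rj:rj + range_size].ravel()
        best = None
        for d_idx, (di, dj) in enumerate(domain_pos):
            D = img[di:di + domain_size, dj:dj + domain_size]
            D = shrink_with_itf(D, range_size, itf)           # contraction s(.) driven by the ITF
            for k, X in enumerate(isometries(D)):
                x = X.ravel()
                alpha = np.cov(x, R, bias=True)[0, 1] / (np.var(x) + 1e-12)
                beta = R.mean() - alpha * x.mean()
                err = np.sum((alpha * x + beta - R) ** 2)     # collage error (7)
                if best is None or err < best[0]:
                    best = (err, (ri, rj), d_idx, k, alpha, beta)
        code.append(best[1:])
    return code
```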
4. Super-resolution reconstruction of the image
4.1 Noise reduction of the image
In the fractal coding process, the contraction transformation s(·) shrinks a domain block D_j from its larger scale down to the range-block scale; in essence, it is the process by which the information of the domain block D_j is transferred between scales under the action of the ITF. In the discrete case, this process can be expressed as the ITF template s sliding over the domain block D_j and forming products:
$$r(x, y) = (s \otimes d)(x, y) \qquad (8)$$
where d denotes the image at the domain-block scale, r denotes the image at the range-block scale after the transformation, and $\otimes$ denotes the product operator. If the size of the ITF template is n × n, then each pixel of the image r is the synthesis of the information of n × n pixels of the image d:
$$\upsilon = \sum_{i=1}^{n \times n} \omega_i \lambda_i \qquad (9)$$
where $\lambda_i$ (i = 1, 2, ..., n²) are the pixel values of the image d and $\omega_i$ (i = 1, 2, ..., n²) are the weights of the template s, which satisfy $\sum_{i=1}^{n \times n} \omega_i = 1$ and $0 \le |\omega_i| \le 1$.
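A minimal sketch (assuming numpy; the stride choice is an assumption) of formulas (8)-(9), i.e. shrinking a domain block to the range-block scale by sliding the ITF template over it; this plays the role of the hypothetical `shrink_with_itf` helper used in the encoding sketch above:

```python
import numpy as np

def shrink_with_itf(domain_block, range_size, itf):
    """Contract a domain block to range-block scale: each output pixel is the
    ITF-weighted sum of an n x n neighborhood of the domain block (formulas (8)-(9))."""
    n = itf.shape[0]
    stride = domain_block.shape[0] // range_size              # template displacement per output pixel
    out = np.zeros((range_size, range_size))
    for i in range(range_size):
        for j in range(range_size):
            patch = domain_block[i * stride:i * stride + n, j * stride:j * stride + n]
            out[i, j] = np.sum(itf * patch)                   # sliding product of the ITF template
    return out
```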
Ghazel et al. used the relationship between an image containing AWGN and the original noise-free image to try to eliminate the noise during fractal coding, so that the decoded image is expected to be noise-free, thereby providing a new approach to noise reduction (Ghazel et al., 2003). As mentioned above, however, Ghazel et al. did not consider the role of the information transfer function in the scale-transfer process but arbitrarily chose averaging, which may introduce new noise and error and degrade image quality. For the more general case, we extend the relationship between the fractal codes of the noise-free image and the AWGN-contaminated image to arbitrary forms of the ITF. Each $\lambda_i$ is the sum of a noise-free pixel value $\tilde{\lambda}_i$ and a noise term $e_i$, where the $e_i$ are independent and identically distributed, following a normal distribution with mean 0 and standard deviation $\delta_0$; the relationship between the noisy image and the noise-free image can then be expressed as:
$$\lambda_i = \tilde{\lambda}_i + e_i, \qquad e_i \sim N(0, \delta_0^2) \qquad (10)$$
where the symbol "~" denotes the noise-free image to be estimated, and $e_i$ and $\tilde{\lambda}_i$ are mutually independent.
$$v = \sum_{i=1}^{n \times n} \omega_i (\tilde{\lambda}_i + e_i) = \sum_{i=1}^{n \times n} \omega_i \tilde{\lambda}_i + \sum_{i=1}^{n \times n} \omega_i e_i = \tilde{v} + \sum_{i=1}^{n \times n} \omega_i e_i \qquad (11)$$
From the definitions of mathematical expectation and variance it follows that:
$$E(v) = E\!\left(\tilde{v} + \sum_{i=1}^{n \times n} \omega_i e_i\right) = E(\tilde{v}) + E\!\left(\sum_{i=1}^{n \times n} \omega_i e_i\right) = E(\tilde{v}) \qquad (12)$$
$$\delta_v^2 = \mathrm{Var}\!\left(\tilde{v} + \sum_{i=1}^{n \times n} \omega_i e_i\right) = \tilde{\delta}_v^2 + \sum_{i=1}^{n \times n} \omega_i^2 \cdot \delta_e^2$$
where E(v) and $\delta_v^2$ are the mathematical expectation and variance of the noisy image v, and $E(\tilde{v})$ and $\tilde{\delta}_v^2$ are the mathematical expectation and variance of the noise-free image $\tilde{v}$. It follows from the formula that the noise-free image and the noisy image have the same mathematical expectation, but their variances differ, being affected by the ITF and the AWGN.
Following the α and β estimation formulas proposed by Ghazel et al., let $\sigma = \sum_{i=1}^{n \times n} \omega_i^2$; then
$$\alpha = \frac{\mathrm{Cov}(X, Y)}{\delta_X^2} = \frac{\mathrm{Cov}(\tilde{X}, \tilde{Y})}{\tilde{\delta}_X^2 + \sigma \cdot \delta_e^2} = \frac{\mathrm{Cov}(\tilde{X}, \tilde{Y}) / \tilde{\delta}_X^2}{1 + \sigma \cdot \delta_e^2 / \tilde{\delta}_X^2} \qquad (13)$$
where X and Y denote, respectively, the domain block after the contraction mapping and the matching range block found by the search, and $\delta_X^2$ and $\tilde{\delta}_X^2$ are the variances of the noisy image and the noise-free image. It is not difficult to see that the numerator of formula (13) is exactly the coding parameter $\tilde{\alpha}$ of the noise-free image; therefore, the relationship between the fractal coding parameters of the noise-free image and the noisy image is:
$$\tilde{\alpha} = (1 + \sigma/\gamma)\,\alpha \qquad (14)$$
$$\tilde{\beta} = E(Y) - \tilde{\alpha}\, E(X)$$
where $\gamma = \tilde{\delta}_X^2 / \delta_e^2$ is the signal-to-noise ratio, $\tilde{\alpha}$ and $\tilde{\beta}$ are the fractal coding parameters of the noise-free image, and α, β are the fractal coding parameters of the noisy image. The corresponding collage error is then:
$$\tilde{\Delta}_{ij}^{(k)} = E\!\left[\left((\tilde{\alpha}_{ij}\tilde{X}_j^{(k)} + \tilde{\beta}_{ij}) - \tilde{Y}_i\right)^2\right] \qquad (15)$$
$$= \tilde{\alpha}_{ij}^2\!\left(E[(X_j^{(k)})^2] - \sigma\delta_e^2\right) + 2\tilde{\alpha}_{ij}\tilde{\beta}_{ij}E[X_j^{(k)}] + \tilde{\beta}_{ij}^2 - 2\tilde{\alpha}_{ij}E[X_j^{(k)} Y] - 2\tilde{\beta}_{ij}E[Y] + \left(E[Y_i^2] - \delta_e^2\right)$$
where $X_j$ (j = 1, 2, ..., N, N being the number of domain blocks) denotes the domain block after the contraction mapping, $Y_i$ (i = 1, 2, ..., M, M being the number of range blocks) is the range block matched with $X_j$ after the search, and k is the index of the affine mapping. Thus, from the fractal code (i, j, k, α_ij, β_ij) of the noisy image, its noise-free version $(i, j, k, \tilde{\alpha}_{ij}, \tilde{\beta}_{ij})$ is obtained. γ measures the signal-to-noise ratio: when the original image contains no AWGN, i.e. $\delta_e^2 \to 0$, then γ → ∞ and $\tilde{\alpha} = \alpha$, $\tilde{\beta} = \beta$.
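The following is a minimal sketch (assuming numpy; the variable names are illustrative, and the noise-free variance of X is recovered via formula (12)) of converting the coding parameters of the noisy image into their noise-free counterparts according to formula (14):

```python
import numpy as np

def denoise_fractal_parameters(alpha, X, Y, sigma, noise_var):
    """Given the noisy-image parameter alpha and the matched blocks X, Y, return the
    noise-free parameters alpha_tilde, beta_tilde of formula (14)."""
    x = np.asarray(X, dtype=float).ravel()
    y = np.asarray(Y, dtype=float).ravel()
    var_clean = max(np.var(x) - sigma * noise_var, 1e-12)  # estimated noise-free variance of X
    gamma = var_clean / noise_var                           # signal-to-noise ratio
    alpha_tilde = (1.0 + sigma / gamma) * alpha
    beta_tilde = y.mean() - alpha_tilde * x.mean()
    return alpha_tilde, beta_tilde
```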
4.2 Enhancement of resolution
The set of all five-tuples (i, j, k, α_ij, β_ij) constitutes the PIFS fractal code of the image I; it quantitatively describes, in the noise-free image to be estimated, the similarity and affinity between parts and the whole and between different parts at different scales. According to the nature of the fractal code, the resolution-enhanced image is obtained at fractal decoding time. Three main steps are involved: (1) compute the size of the resolution-enhanced image from the desired magnification factor (e.g. 2×); (2) build an arbitrary initial image of the computed size; according to the fixed-point theorem, the choice of the initial image does not affect the result, so for simplicity a white image, or a constant image whose pixel value is the mean gray level of the original image, can be chosen; (3) iterate on the initial image according to the PIFS code until convergence. During the iteration, the Euclidean distance can be used to measure the distance between the images of two consecutive iterations; when this distance falls below a set threshold (e.g. 1 × 10⁻⁶), the reconstructed image is considered to have stopped changing. A decoding sketch is given below.
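The following is a minimal sketch (assuming numpy, the `shrink_with_itf` helper above, and the hypothetical helpers `domain_position` and `apply_isometry` for block bookkeeping) of fractal decoding at an enlarged scale:

```python
import numpy as np

def decode_pifs(code, image_shape, zoom, range_size, domain_size, itf, tol=1e-6, max_iter=50):
    """Iteratively decode a PIFS code at `zoom` times the original scale: each range block
    is rebuilt from its matched, contracted, luminance-mapped domain block."""
    rs, ds = range_size * zoom, domain_size * zoom            # block sizes at the enlarged scale
    img = np.zeros((image_shape[0] * zoom, image_shape[1] * zoom))  # arbitrary initial image
    for _ in range(max_iter):
        new = img.copy()
        for (ri, rj), d_idx, k, alpha, beta in code:
            di, dj = domain_position(d_idx, img.shape, ds)    # hypothetical index-to-position helper
            D = img[di:di + ds, dj:dj + ds]
            X = apply_isometry(shrink_with_itf(D, rs, itf), k)  # contraction + affine mapping
            new[ri * zoom:ri * zoom + rs, rj * zoom:rj * zoom + rs] = alpha * X + beta
        if np.linalg.norm(new - img) < tol:                   # convergence of the iteration
            break
        img = new
    return img
```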
Description of drawings
Fig. 1 shows the self-similarity between different scales;
Fig. 2 shows the SR image reconstruction framework;
Fig. 3 shows the information transfer function (ITF), where panel (a) is the general ITF model, panel (b) is down-sampling, and panel (c) is the averaging method;
Fig. 4 shows the ITF parameter estimation procedure;
Fig. 5 shows the SRTM digital elevation image, where panel (a) is the LR image and panel (b) is the original HR image;
Fig. 6 shows the SRTM3 multifractal features, where panel (a) is the α-f(α) singularity spectrum and panel (b) is the q-D(q) curve;
Fig. 7 shows the probability distribution function of the local variance;
Fig. 8 shows the super-resolution reconstruction of the SRTM image, where panel (a) is the SR image, panel (b) is the absolute error with respect to SRTM1, panel (c) is the LR image of region A, and panel (d) is the SR image of region A.
Embodiment
The embodiment of the algorithm of the present invention, using real data, is summarized as follows:
Step 1: data collection and preprocessing
SRTM (Shuttle Radar Topography Mission) was a joint mission of NASA, the NGA and the German and Italian space agencies, in which an interferometric radar sensor carried aboard the Space Shuttle Endeavour measured the elevation of the Earth's surface. The finally released DEM dataset has two spatial resolutions, i.e. grid sizes of 1 arc-second and 3 arc-seconds, corresponding approximately to spatial resolutions of 30 m and 90 m (at the equator); the latter is derived from the former. The area we selected is the SRTM image within latitude 35.48N to 35.63N and longitude 99.68W to 99.53W (Fig. 5). The size of the SRTM3 (LR) image is 180 × 180, and the size of the corresponding SRTM1 image is 540 × 540.
Step 2: apply the proposed super-resolution reconstruction method to improve the spatial resolution of the SRTM3 data of this region; the reconstructed image is regarded as an estimate of the real high-resolution image and is compared directly with the corresponding SRTM1 data.
To estimate the noise variance of the image, a local statistics method was adopted. A moving window of 2 × 2 pixels collects all possible blocks, moving from left to right and from top to bottom, one pixel at a time. A histogram of the local variance distribution is generated; it is close to a log-normal distribution with mean 1.72 and variance 1.03 (Fig. 7), and the estimated standard errors of the mean and variance are 0.011 and 0.008, respectively. The R² value of the fitted curve is 0.93 (p < 0.01). The variance corresponding to the highest frequency in the distribution is 1.7, which is taken as the noise variance of the SRTM data; this variance $\delta_e^2$ is estimated under the AWGN assumption. The sizes of the range block and the domain block are 2 × 2 pixels and 6 × 6 pixels, and the size of the Gaussian template is 3 × 3 pixels. When the variance of the template's density distribution is smaller than 0.2 or larger than 2 there is essentially no further change; the optimal variance estimate is approximately 0.8.
After the AWGN noise and the information transfer function s have been estimated, the PIFS code of the low-resolution SRTM image can be generated, and the SR image is reconstructed (Fig. 8(a)). Because the spatial resolution of the original LR image is low, it appears much coarser than the original high-resolution image; many details are added in the SR image compared with the LR image (Fig. 8(c, d)). The error image is the difference between the estimated SR image and the original real high-resolution image (Fig. 8(b)); its mean and standard deviation are 0.09 m and 2.25 m, respectively.
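The following is a minimal sketch (assuming numpy; the array names are illustrative) of the evaluation step, comparing the reconstructed SR image against the SRTM1 reference:

```python
import numpy as np

def evaluate_reconstruction(sr_image, reference_hr):
    """Compute the error image against the high-resolution reference and report
    its mean and standard deviation (in the units of the DEM, here meters)."""
    error = sr_image.astype(float) - reference_hr.astype(float)
    return error, float(error.mean()), float(error.std())
```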

Claims (5)

1. A method for super-resolution reconstruction of remote sensing images based on fractal theory, characterized by the following steps:
(1.1) estimating the information transfer function (ITF) between scales, which describes the relationship between the information contained in the low-resolution image and the high-resolution image when the spatial resolution decreases;
(1.2) estimating the additive white Gaussian noise (AWGN) present in the image: the correlation and variability of pixels within local regions of the image are estimated via spatial autocorrelation, and the variance of the white Gaussian noise is determined from frequency statistics;
(1.3) fractal coding of the image: the low-resolution image is adaptively partitioned into blocks, and, in combination with the noise distribution model, the fractal code of the original image is obtained after an affine transformation, a contraction transformation and a luminance transformation;
(1.4) super-resolution reconstruction of the image: according to the denoising fractal code of the low-resolution image and the amplification factor, a chosen initial image is decoded iteratively until convergence, and the high-resolution image is output.
2. The method for super-resolution reconstruction of remote sensing images based on fractal theory according to claim 1, characterized in that the method for estimating the scale information transfer function in said step (1.1) is as follows:
in the absence of prior knowledge, the Gaussian function can represent the trend of information change of natural systems under scale change; in order to carry out SR reconstruction, the scale information transfer function is estimated under the assumption that it follows a Gaussian distribution; two cases must be considered, namely a noise-free image and an image containing AWGN;
(2.1) when the image contains no noise, or the noise is negligible, because of computational error the distance between the image after encoding and decoding and the original image is a very small but nonzero value; the optimal ITF minimizes this distance; therefore, the parameter δ of the ITF satisfies:
$$\delta = \arg\min \| I - I' \|$$
where I is the original image and I' is the image with equal resolution after fractal encoding and decoding;
(2.2) when the image contains AWGN, the difference image between the denoised image and the original image is exactly the AWGN noise; thus, the following relation can be drawn:
$$\delta = \arg\left[ (I - I'') \sim N(0, \delta_0^2) \right]$$
where I is the original image and I'' is the image with equal resolution after fractal encoding, decoding and denoising; that is, the difference image follows a normal distribution with mean 0 and variance $\delta_0^2$.
3. The method for super-resolution reconstruction of remote sensing images based on fractal theory according to claim 1, characterized in that the AWGN estimation method in said step (1.2) is as follows:
(3.1) determine the magnitude of the spatial autocorrelation between pixel gray values in the image and estimate the range over which the autocorrelation exists;
(3.2) according to the strength of the spatial autocorrelation, estimate the size r of the regions of the image in which the pixel values change very little;
(3.3) use a sliding window of size r to collect the variation of pixels within the window over the whole image: compute the variance of the difference image obtained by subtracting the window mean from all pixels in the window, and collect these values into a set V;
(3.4) build a histogram of the set V; the variance with the highest frequency of occurrence can then be regarded as the variance of the AWGN; specifically, the histogram is first fitted, and the value of maximum probability is then determined from the fitted curve.
4. The method for super-resolution reconstruction of remote sensing images based on fractal theory according to claim 1, characterized in that the method for determining the fractal code in said step (1.3) is as follows:
let I(x, y, p) be the target image of the study, where (i, j) denote the pixel row and column index and p denotes the pixel value at that position; R and D denote the range blocks and domain blocks, respectively, which are two different partitions of the image, wherein each range block R_i ∈ R is associated, through a contraction mapping w_i, with some domain block D_j ∈ D; the contraction transformation is applied to the target image I(x, y, p); the contraction mapping consists of two transformations, namely a geometric transformation g and a luminance transformation l, where i = 1, 2, ..., M, M being the number of range blocks, and j = 1, 2, ..., N, N being the number of domain blocks;
(4.1) geometric transformation: the geometric transformation g is composed of an affine mapping r(·) and a contraction transformation s(·): g(·) = s(r(·)); the affine mapping r(·) performs the affine transformation of the domain block D_j; the contraction transformation s(·) shrinks the domain block according to the ITF so that its scale becomes identical to that of the range block;
(4.2) luminance transformation: a linear transformation is applied to $g^{(k)}(D_j)$, where k is the index of the affine mapping, to find a range block R_i that matches $g^{(k)}(D_j)$; the fractal code of a range block R_i can be represented by a five-tuple (i, j, k, α_ij, β_ij), corresponding respectively to the range block R_i, the domain block D_j, the index of the affine mapping and the fractal coding parameters α, β; the set of fractal codes of all range blocks constitutes the fractal code of the image I.
5. The method for super-resolution reconstruction of remote sensing images based on fractal theory according to claim 4, characterized in that the image super-resolution reconstruction method in said step (1.4) is as follows:
(5.1) noise reduction of the image
in the discrete case, the contraction transformation in fractal coding can be expressed as the process of sliding the ITF template s over the domain block D_j and forming products;
$$r(x, y) = (s \otimes d)(x, y)$$
where d denotes the image at the domain-block scale, r denotes the image at the range-block scale after the transformation, and $\otimes$ denotes the product operator; if the size of the ITF template is n × n, then each pixel of the image r is the synthesis of the information of n × n pixels of the image d:
$$\upsilon = \sum_{i=1}^{n \times n} \omega_i \lambda_i$$
where $\lambda_i$ (i = 1, 2, ..., n²) are the pixel values of the image d and $\omega_i$ (i = 1, 2, ..., n²) are the weights of the template s, which satisfy $\sum_{i=1}^{n \times n} \omega_i = 1$ and $0 \le |\omega_i| \le 1$;
(5.1.1) the relationship between a noisy pixel and the noise-free pixel is expressed as:
$$\lambda_i = \tilde{\lambda}_i + e_i, \qquad e_i \sim N(0, \delta_0^2)$$
where the symbol "~" denotes the noise-free image to be estimated, the noise $e_i$ and the noise-free pixel value $\tilde{\lambda}_i$ are mutually independent, and $e_i$ follows a normal distribution with mean 0 and standard deviation $\delta_0$;
(5.1.2) the relationship between the noisy image v and the noise-free image $\tilde{v}$ is:
$$v = \sum_{i=1}^{n \times n} \omega_i (\tilde{\lambda}_i + e_i) = \sum_{i=1}^{n \times n} \omega_i \tilde{\lambda}_i + \sum_{i=1}^{n \times n} \omega_i e_i = \tilde{v} + \sum_{i=1}^{n \times n} \omega_i e_i$$
(5.1.3) from the formula in step (5.1.2) and the definitions of mathematical expectation and variance, it follows that:
$$E(v) = E\!\left(\tilde{v} + \sum_{i=1}^{n \times n} \omega_i e_i\right) = E(\tilde{v}) + E\!\left(\sum_{i=1}^{n \times n} \omega_i e_i\right) = E(\tilde{v})$$
$$\delta_v^2 = \mathrm{Var}\!\left(\tilde{v} + \sum_{i=1}^{n \times n} \omega_i e_i\right) = \tilde{\delta}_v^2 + \sum_{i=1}^{n \times n} \omega_i^2 \cdot \delta_e^2$$
where E(v) and $\delta_v^2$ are the mathematical expectation and variance of the noisy image v, and $E(\tilde{v})$ and $\tilde{\delta}_v^2$ are the mathematical expectation and variance of the noise-free image $\tilde{v}$;
(5.1.4) let $\sigma = \sum_{i=1}^{n \times n} \omega_i^2$; then
$$\alpha = \frac{\mathrm{Cov}(X, Y)}{\delta_X^2} = \frac{\mathrm{Cov}(\tilde{X}, \tilde{Y})}{\tilde{\delta}_X^2 + \sigma \cdot \delta_e^2} = \frac{\mathrm{Cov}(\tilde{X}, \tilde{Y}) / \tilde{\delta}_X^2}{1 + \sigma \cdot \delta_e^2 / \tilde{\delta}_X^2}$$
where X and Y denote, respectively, the domain block after the contraction mapping and the matching range block found by the search, and $\delta_X^2$ and $\tilde{\delta}_X^2$ are the variances of the noisy image and the noise-free image;
(5.1.5) the numerator in step (5.1.4) is exactly the coding parameter $\tilde{\alpha}$ of the noise-free image; therefore, the relationship between the fractal coding parameters of the noise-free image and the noisy image is:
$$\tilde{\alpha} = (1 + \sigma/\gamma)\,\alpha$$
$$\tilde{\beta} = E(Y) - \tilde{\alpha}\, E(X)$$
where $\gamma = \tilde{\delta}_X^2 / \delta_e^2$ is the signal-to-noise ratio, $\tilde{\alpha}$ and $\tilde{\beta}$ are the fractal coding parameters of the noise-free image, and α, β are the fractal coding parameters of the noisy image;
(5.1.6) the collage error corresponding to the formula in step (5.1.5) is:
$$\tilde{\Delta}_{ij}^{(k)} = E\!\left[\left((\tilde{\alpha}_{ij}\tilde{X}_j^{(k)} + \tilde{\beta}_{ij}) - \tilde{Y}_i\right)^2\right]$$
$$= \tilde{\alpha}_{ij}^2\!\left(E[(X_j^{(k)})^2] - \sigma\delta_e^2\right) + 2\tilde{\alpha}_{ij}\tilde{\beta}_{ij}E[X_j^{(k)}] + \tilde{\beta}_{ij}^2 - 2\tilde{\alpha}_{ij}E[X_j^{(k)} Y] - 2\tilde{\beta}_{ij}E[Y] + \left(E[Y_i^2] - \delta_e^2\right)$$
where $X_j$ (j = 1, 2, ..., N, N being the number of domain blocks) denotes the domain block after the contraction mapping, $Y_i$ (i = 1, 2, ..., M, M being the number of range blocks) is the range block matched with $X_j$ after the search, and k is the index of the affine mapping;
thus, from the fractal code (i, j, k, α_ij, β_ij) of the noisy image, its noise-free version $(i, j, k, \tilde{\alpha}_{ij}, \tilde{\beta}_{ij})$ is obtained, where γ measures the signal-to-noise ratio;
(5.2) enhancement of resolution
(5.2.1) compute the size of the resolution-enhanced image from the desired magnification factor;
(5.2.2) build an arbitrary initial image of the computed size;
(5.2.3) construct the PIFS code from the five-tuples (i, j, k, α_ij, β_ij) obtained in claim 4, and iterate on the initial image until convergence.
CN2012102475326A 2012-07-17 2012-07-17 Rebuilding algorithm for super-resolution remote sensing image based on fractal theory Pending CN102819829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012102475326A CN102819829A (en) 2012-07-17 2012-07-17 Rebuilding algorithm for super-resolution remote sensing image based on fractal theory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012102475326A CN102819829A (en) 2012-07-17 2012-07-17 Rebuilding algorithm for super-resolution remote sensing image based on fractal theory

Publications (1)

Publication Number Publication Date
CN102819829A true CN102819829A (en) 2012-12-12

Family

ID=47303933

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012102475326A Pending CN102819829A (en) 2012-07-17 2012-07-17 Rebuilding algorithm for super-resolution remote sensing image based on fractal theory

Country Status (1)

Country Link
CN (1) CN102819829A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1831556A (en) * 2006-04-14 2006-09-13 武汉大学 Single satellite remote sensing image small target super resolution ratio reconstruction method
CN101441765A (en) * 2008-11-19 2009-05-27 西安电子科技大学 Self-adapting regular super resolution image reconstruction method for maintaining edge clear
CN102547261A (en) * 2010-12-24 2012-07-04 上海电机学院 Fractal image encoding method
CN102163329A (en) * 2011-03-15 2011-08-24 河海大学常州校区 Super-resolution reconstruction method of single-width infrared image based on scale analogy

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MAO-GUI HU ET AL.: "Super-Resolution Reconstruction of Remote Sensing Images Using Multifractal Analysis", Sensors, vol. 9, no. 11, 29 October 2009 (2009-10-29) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116897B (en) * 2013-01-22 2015-11-18 北京航空航天大学 A kind of Three-Dimensional Dynamic data compression based on image space and smoothing method
CN103455709A (en) * 2013-07-31 2013-12-18 华中科技大学 Super-resolution method and system for digital elevation model
CN103455709B (en) * 2013-07-31 2016-02-24 华中科技大学 A kind of super-resolution method for digital elevation model and system thereof
CN107466411A (en) * 2015-04-14 2017-12-12 微软技术许可有限责任公司 Two-dimensional infrared depth sense
CN107864360A (en) * 2017-11-15 2018-03-30 秦广民 Monitoring type radio data storage method
CN108230310A (en) * 2018-01-03 2018-06-29 电子科技大学 A kind of method that non-fire space-time data is extracted based on semivariable function
CN108230310B (en) * 2018-01-03 2021-12-17 电子科技大学 Method for extracting non-fire spatio-temporal data based on semi-variogram
CN110927497A (en) * 2019-12-09 2020-03-27 交控科技股份有限公司 Point switch fault detection method and device
CN113704372A (en) * 2021-08-18 2021-11-26 中国人民解放军国防科技大学 Remote sensing image conversion map migration method and device based on depth countermeasure network
CN113704372B (en) * 2021-08-18 2024-02-06 中国人民解放军国防科技大学 Remote sensing image conversion map migration method and device based on depth countermeasure network

Similar Documents

Publication Publication Date Title
CN102819829A (en) Rebuilding algorithm for super-resolution remote sensing image based on fractal theory
Fang et al. A variational approach for pan-sharpening
Zhu et al. Fast single image super-resolution via self-example learning and sparse representation
CN102708576B (en) Method for reconstructing partitioned images by compressive sensing on the basis of structural dictionaries
CN104574336B (en) Super-resolution image reconstruction system based on adaptive sub- mould dictionary selection
CN107657217A (en) The fusion method of infrared and visible light video based on moving object detection
CN105469360A (en) Non local joint sparse representation based hyperspectral image super-resolution reconstruction method
CN104008539A (en) Image super-resolution rebuilding method based on multiscale geometric analysis
CN104408700A (en) Morphology and PCA (principal component analysis) based contourlet fusion method for infrared and visible light images
CN105989611A (en) Blocking perception Hash tracking method with shadow removing
CN105046672A (en) Method for image super-resolution reconstruction
CN107220957B (en) It is a kind of to utilize the remote sensing image fusion method for rolling Steerable filter
CN101615290A (en) A kind of face image super-resolution reconstruction method based on canonical correlation analysis
CN105825477A (en) Remote sensing image super-resolution reconstruction method based on multi-dictionary learning and non-local information fusion
CN102842124A (en) Multispectral image and full-color image fusion method based on matrix low rank decomposition
CN106600533B (en) Single image super resolution ratio reconstruction method
CN101551902A (en) A characteristic matching method for compressing video super-resolution based on learning
CN103971354A (en) Method for reconstructing low-resolution infrared image into high-resolution infrared image
CN103236067B (en) The local auto-adaptive method for registering that a kind of Pixel-level SAR image time series builds
CN103093431A (en) Compressed sensing reconstruction method based on principal component analysis (PCA) dictionary and structural priori information
Yang et al. A sparse representation based pansharpening method
CN104766272A (en) Image super-resolution reestablishing method based on sub pixel displacement model
Huang et al. Atrous pyramid transformer with spectral convolution for image inpainting
CN106296583A (en) Based on image block group sparse coding and the noisy high spectrum image ultra-resolution ratio reconstructing method mapped in pairs
CN102222321A (en) Blind reconstruction method for video sequence

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20121212