CN103576132A - Processing method and system for remote-sensing images - Google Patents

Processing method and system for remote-sensing images

Info

Publication number
CN103576132A
CN103576132A CN201210253288.4A
Authority
CN
China
Prior art keywords
image
pixel
reflectivity
similar
spatial resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201210253288.4A
Other languages
Chinese (zh)
Inventor
王飞 (Wang Fei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANGHAI LAIKAI DIGITAL TECHNOLOGY Co Ltd
Original Assignee
SHANGHAI LAIKAI DIGITAL TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANGHAI LAIKAI DIGITAL TECHNOLOGY Co Ltd filed Critical SHANGHAI LAIKAI DIGITAL TECHNOLOGY Co Ltd
Priority to CN201210253288.4A priority Critical patent/CN103576132A/en
Publication of CN103576132A publication Critical patent/CN103576132A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/04Interpretation of pictures
    • G01C11/06Interpretation of pictures by comparison of two or more pictures of the same area

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a processing method and system for remote-sensing images, in which a desired image A0 is derived from two groups of images with different spatial resolutions. First, a conversion coefficient between the two groups for the reflectance change of a target region is calculated from the two image groups; next, the change observed in the low-spatial-resolution images is converted, via this coefficient, into the corresponding change in the high-spatial-resolution images; finally, the reflectance of each target pixel of image A0 is obtained from the known high-spatial-resolution images. A corresponding processing system for remote-sensing images is also provided. The method and system can generate images in which detail is better preserved.

Description

Processing method and system for remote-sensing images
Technical field
The present invention relates to the technical field of remote-sensing image processing, and in particular to a processing method and system that generate a new remote-sensing image from more than one existing remote-sensing image.
Background technology
Multi-source remote-sensing data fusion, developed since the 1980s, integrates the complementary advantages of multi-sensor remote-sensing data in spatial, temporal and spectral resolution; its aim is to improve the spatial and spectral resolution of remote-sensing data by jointly processing data from multiple sensors. Relatively mature fusion methods include the IHS transform, principal component transform, wavelet transform, Brovey transform and high-pass filtering. These methods, however, mainly fuse a high-spatial-resolution panchromatic image with a lower-spatial-resolution multispectral image acquired at the same or a close moment, and the result is a high-spatial-resolution multispectral image for that single moment. Because a high-spatial-resolution image cannot be obtained simultaneously with every low-spatial-resolution image, these traditional multi-source fusion methods cannot improve the spatial and temporal resolution of remote-sensing data at the same time; that is, they cannot generate reflectance data of high spatio-temporal resolution. To address this, Gao et al. proposed the STARFM method in 2006, which yields fairly accurate high-spatio-temporal-resolution reflectance images, but the images it generates preserve detail poorly.
Summary of the invention
In view of this, the main object of the present invention is to provide a processing method for remote-sensing images capable of generating images in which detail is better preserved.
To solve the above technical problem, the invention provides a processing method for remote-sensing images. A pixel of a desired image A0 at a certain moment is set as the target pixel, and the land-surface region corresponding to the target pixel is determined as the target region. A first group of images with high spatial resolution and a second group of images with low spatial resolution of the same land-surface region are obtained, and at least two pairs of images corresponding to the same moments are selected from the two groups. The reflectance of each target pixel is then calculated by the following steps to generate image A0: a. from the selected images, calculate the conversion coefficient between the reflectance changes of the target region in the two image groups; b. from the second image group, select the image B0 corresponding to the moment of image A0 and at least one image of another moment, and from the selected images calculate the change of the target region's reflectance in image B0 relative to its reflectance in each image of the other moments; c. using the conversion coefficient, convert each of these reflectance changes into the corresponding reflectance change in the first image group; and d. from the first image group, select the images corresponding to the other moments, and calculate the reflectance of the target region from each converted reflectance change and the target region's reflectance in the selected images.
From at least two pairs of images corresponding to the same moments, the relation between the change information of the land-surface region's reflectance in the two image groups — the conversion coefficient — can be obtained; step a derives this coefficient for the target region from the chosen images. Because this relation between change information can be considered stable over a short period, as long as the moments of the selected images do not differ much from the moment of image A0, step c can use the conversion coefficient to convert the target region's reflectance change in the second image group obtained in step b (the second variable quantity, for short) into its reflectance change in the first image group (the first variable quantity). The reflectance obtained in step d is thus generated from the finer reflectance of the first image group plus the converted change. The generated A0 image therefore preserves the detail of the first image group well and avoids the visually smoothed appearance of generated images.
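Steps b–d above can be sketched, for a single target pixel, as a minimal numpy snippet; the function name, the per-pixel scalar interface, and the toy reflectance values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Hypothetical sketch of steps b-d for one target pixel, assuming the
# conversion coefficient v from step a is already known.
def predict_target_reflectance(F_others, C_others, C_B0, v):
    """F_others: known high-resolution reflectances at the other moments;
    C_others: low-resolution reflectances at those same moments;
    C_B0: low-resolution reflectance at the A0 moment;
    v: conversion coefficient between the two image groups."""
    dC = C_B0 - np.asarray(C_others, float)   # step b: second variable quantities
    dF = v * dC                               # step c: converted to first variable quantities
    return np.asarray(F_others, float) + dF   # step d: one prediction per known moment

# Worked example: with v = 2, a low-resolution change of +0.01 becomes +0.02.
pred = predict_target_reflectance([0.30, 0.28], [0.24, 0.26], 0.25, 2.0)
```

In practice the predictions from the several known moments would still have to be combined (the patent later uses time weights for this); the sketch returns them separately.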
Preferably, step a comprises: a1. setting the pixel corresponding to the target region in each selected first-group image as the centre pixel; a2. screening out, in each image containing a centre pixel, at least one similar pixel that belongs to the same land-cover type as the centre pixel; and a3. determining the land-surface region corresponding to each similar pixel as a similar region, and calculating the conversion coefficient from the reflectance of the similar regions in the selected images.
The reflectance change of a mixed pixel is the aggregate of the changes of its end members, so an end member provides more accurate change information than the mixed pixel itself. Since the similar pixels used as end members belong to the same land-cover type as the centre pixel, they provide the most accurate change information, and hence a more accurate conversion coefficient. The centre pixel itself also counts as a similar pixel; when no other similar pixel can be screened out, the centre pixel is the only similar pixel. This matters especially for land-surface regions containing fragmented patches and small objects, whose pixels in the second image group are mixed pixels: the change information they provide (the second variable quantity) is corrected, through the conversion coefficient obtained from the similar pixels, into the change information (the first variable quantity) of the end member that belongs to the same land-cover type as the target pixel. The reflectance of the target pixel is therefore more accurate, with a particularly better fusion effect for fragmented patches and small objects.
Preferably, before step a3 the method further comprises: a21. calculating the weight of each similar pixel from the selected images. Step b then comprises: b1. from image B0 and the other-moment images of the second group, calculating the reflectance change of each similar region in the second image group; and b2. computing the change of the target region's reflectance in image B0 relative to each other-moment image as the weighted average, using the weights obtained, of the reflectance changes obtained in step b1.
The second variable quantity of the target region is thus not obtained directly by differencing the target region's reflectance between image B0 and the other-moment images of the second group; instead it is provided jointly by the similar pixels according to their weights. This corrects the change information supplied by the mixed pixels corresponding to the target region in image B0 and the other-moment images, yielding a more accurate second variable quantity.
Preferably, the conversion coefficient in step a3 is calculated by regression analysis over the reflectances of the similar regions in the selected images, taking the reflectance change in the second image group as the independent variable and the reflectance change in the first image group as the dependent variable.
Regression analysis eliminates errors caused by noise and similar factors, giving a more robust conversion coefficient and reducing possible error.
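The regression of high-resolution reflectance change on low-resolution reflectance change can be sketched as a weighted least-squares fit; the function name, the use of `numpy.linalg.lstsq`, and the toy change values are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch: estimate the conversion coefficient as the slope of a
# weighted linear regression with the low-resolution change dC as the
# independent variable and the high-resolution change dF as the dependent one.
def conversion_coefficient(dC, dF, weights):
    dC, dF, w = (np.asarray(a, float) for a in (dC, dF, weights))
    X = np.column_stack([dC, np.ones_like(dC)])          # slope and intercept
    sw = np.sqrt(w)                                      # weighted least squares
    coeff, *_ = np.linalg.lstsq(sw[:, None] * X, sw * dF, rcond=None)
    return coeff[0]                                      # slope = conversion coefficient

# Noise-free toy data obeying dF = 1.5 * dC, so the slope recovers 1.5.
v = conversion_coefficient([0.01, 0.02, 0.04], [0.015, 0.03, 0.06], [1.0, 1.0, 2.0])
```

With noisy inputs the weights let more trustworthy similar pixels dominate the fit, which is the point of the weighted variant described below.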
Preferably, before step a3 the method further comprises: a22. calculating the weight of each similar pixel from the selected images; the regression analysis is then a weighted regression analysis carried out according to the weights of the similar pixels.
The spatial distance of each similar pixel from its centre pixel, and the degree to which its spectral signature agrees with that of the centre pixel, cause the similar pixels to contribute change information to different degrees. Weighting the regression by each similar pixel's weight reflects these differences, yielding a more accurate conversion coefficient and further improving the accuracy of the target pixel's reflectance.
Preferably, calculating the weight of each similar pixel comprises: a221. from the selected images, calculating the consistency between the all-band spectral vector of each similar pixel and the all-band spectral vector of the corresponding similar-region pixel in the second image group; and a222. calculating the weight of each similar pixel from this consistency.
A pixel of the first image group is a high-spatial-resolution pixel, and a pixel of the second image group is a low-spatial-resolution pixel. The proportion that the land cover of a high-spatial-resolution pixel's land-surface region occupies among all land covers of the corresponding low-spatial-resolution pixel's region is the purity of the high-spatial-resolution pixel. The greater the purity of a similar pixel, the more accurate the change information it provides. Steps a221 and a222 therefore assign larger weights to high-purity similar pixels, which provide more accurate change information; the purity is measured by the consistency of the all-band spectral vectors.
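One plausible way to turn the all-band spectral-vector consistency into weights is cosine similarity (the spectral angle); the patent does not fix a specific formula, so the measure, the function name, and the toy spectra below are assumptions.

```python
import numpy as np

# Hedged sketch: weight each similar pixel by how consistent its all-band
# spectral vector is with the spectral vector of the enclosing
# low-spatial-resolution pixel, using cosine similarity as a purity proxy.
def consistency_weights(high_res_spectra, low_res_spectra):
    H = np.asarray(high_res_spectra, float)   # shape (n_pixels, n_bands)
    L = np.asarray(low_res_spectra, float)    # shape (n_pixels, n_bands)
    cos = np.sum(H * L, axis=1) / (
        np.linalg.norm(H, axis=1) * np.linalg.norm(L, axis=1))
    return cos / cos.sum()                    # normalise so the weights sum to 1

# First pixel's spectrum matches its low-res pixel exactly, so it gets
# the larger weight.
w = consistency_weights([[0.2, 0.4], [0.1, 0.5]], [[0.2, 0.4], [0.3, 0.3]])
```
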
Preferably, step a2 further comprises: taking the intersection of the land-surface regions corresponding to the similar pixels screened out at each moment, and taking the pixels corresponding to the intersected region as the similar pixels.
Some land covers change over time, and their spectral signatures change accordingly; similar pixels screened from the image of a single moment may therefore include pixels that do not belong to the same land cover as the centre pixel. After taking the intersection, the pixels screened out at every moment all belong to the same land cover as the centre pixel, yielding a more reliable target pixel.
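The intersection step can be expressed as a logical AND over per-date boolean masks; the array shapes and names are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: similar pixels screened independently on each known-date
# high-resolution image are intersected, so only pixels judged similar to
# the centre pixel at every date are kept.
def intersect_similar(masks):
    masks = np.asarray(masks, bool)           # shape (n_dates, ny, nx)
    return np.logical_and.reduce(masks, axis=0)

# Two toy 2x2 masks from two dates; only positions similar at both survive.
m1 = [[True, True], [False, True]]
m2 = [[True, False], [False, True]]
common = intersect_similar([m1, m2])
```

The centre pixel is similar to itself at every date, so it always survives the intersection, consistent with the text above.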
The present invention correspondingly also provides a processing system for remote-sensing images, which derives the high-spatial-resolution image A0 from the two image groups; image A0 does not belong to the first image group and consists of at least one pixel to be calculated. The processing system comprises: an image selection module 101, for obtaining a first group of images with high spatial resolution and a second group of images with low spatial resolution of the same land-surface region and selecting from the two groups at least two pairs of images corresponding to the same moments; a target-pixel setting module 102, for setting a pixel of the desired image A0 at a certain moment as the target pixel and determining the land-surface region corresponding to the target pixel as the target region; a target-pixel reflectance calculation module 104, for calculating the reflectance of the target pixels; and an image generation module 108, for generating image A0 from the target-pixel reflectances calculated by module 104. The target-pixel reflectance calculation module 104 comprises the following sub-modules: a conversion coefficient module 103, for calculating from the selected images the conversion coefficient between the reflectance changes of the target region in the two image groups; a second-variable-quantity calculation module 105, for selecting from the second image group the image B0 corresponding to the moment of image A0 and at least one image of another moment, and calculating from the selected images the change of the target region's reflectance in image B0 relative to each other-moment image; a variable-quantity conversion module 106, for converting, via the conversion coefficient, each change obtained by module 105 into the corresponding reflectance change in the first image group; and a reflectance calculation module 107, for selecting from the first image group the images corresponding to the other moments and calculating the reflectance of the target region from each converted reflectance change and the target region's reflectance in the selected images.
Preferably, the conversion coefficient module 103 comprises: a centre-pixel setting module 1031, for setting the pixel corresponding to the target region in each selected first-group image as the centre pixel; a similar-pixel screening module 1032, for screening out, in each image containing a centre pixel, at least one similar pixel belonging to the same land-cover type as the centre pixel; and a conversion coefficient calculation module 1033, for determining the land-surface region corresponding to each similar pixel as a similar region and calculating the conversion coefficient from the reflectance of the similar regions in the selected images.
Preferably, the target-pixel reflectance calculation module 104 further comprises a weight module, for calculating the weight of each similar pixel from the selected images; the second-variable-quantity calculation module 105 then comprises: a similar-pixel second-variable-quantity calculation module 1051, for selecting from the second image group the image B0 corresponding to the given moment and at least one image of another moment and calculating from the selected images the reflectance change of each similar region in the second image group; and a second-variable-quantity weighted-average module 1052, for computing the change of the target region's reflectance in image B0 relative to each other-moment image as the weighted average, using the weights obtained, of the reflectance changes obtained by module 1051.
Brief description of the drawings
Fig. 1 is a flowchart of an embodiment of the processing method for remote-sensing images;
Fig. 2 is a flowchart of the screening of similar pixels;
Fig. 3 is a schematic diagram of screening similar pixels by taking the intersection;
Fig. 4 is a flowchart of calculating the weights of similar pixels in the present embodiment;
Fig. 5 is a flowchart of obtaining the conversion coefficient;
Fig. 6 is a schematic diagram of the calculation of the conversion coefficient;
Fig. 7 is a flowchart of calculating the high-spatial-resolution reflectance of the target pixel at the prediction moment;
Fig. 8 is a structural diagram of the processing system;
Fig. 9 is a structural diagram of the conversion coefficient module 103;
Fig. 10 is a structural diagram of the second-variable-quantity calculation module 105;
Fig. 11 is a structural diagram of the weight module 109 in one embodiment;
Fig. 12 is a structural diagram of the weight module 109 in another embodiment;
Fig. 13 is a structural diagram of the similar-pixel screening module 1032;
Fig. 14 is a structural diagram of the adjustment module 1034;
Fig. 15 shows the simulation of phenological change;
Fig. 16 shows the simulation of a linear ground object;
Fig. 17 shows the prediction results of MDCM and STARFM for the reflectance of the linear ground object in Fig. 16(b);
Fig. 18 shows the simulation of small-area ground objects;
Fig. 19 shows the prediction errors of MDCM and STARFM for each small circular object in Fig. 18(b).
Embodiment
Before describing the embodiments, the theoretical basis of the present invention is first explained.
After radiometric calibration, geometric registration and atmospheric correction, remote-sensing data of the same area from different sensors show a certain comparability and correlation. However, because different sensors differ in band width, spectral response function and the atmospheric conditions at acquisition time, a certain systematic deviation exists between these multi-source data. Once the systematic error is corrected, how to exploit the correlation between them to fuse the multi-source data forms the core of the present invention. The theoretical basis is introduced below for two cases.
Homogeneous land-surface case: suppose that the low-spatial-resolution, high-temporal-resolution sensor (abbreviated as low spatial resolution) and the high-spatial-resolution, low-temporal-resolution sensor (abbreviated as high spatial resolution) have similar spectral band settings. When the low-spatial-resolution image is resampled to the same spatial resolution (i.e. the same pixel size) and coordinate system as the high-spatial-resolution image, then for the same homogeneous land cover — i.e. a pure pixel containing only one land-cover type within the low-spatial-resolution pixel — the relation in band B between the high-spatial-resolution reflectance and the low-spatial-resolution reflectance is the relative radiometric calibration relation between the reflectances of the two images. This relation is determined only by the characteristic differences of the two sensors (band width and spectral response function) and the difference in atmospheric conditions at the imaging moments, and can generally be regarded as the following linear relation:
F(x_i, y_j, t_k, B) = a × C(x_i, y_j, t_k, B) + b    (1)
where F and C denote the pixel reflectance of the high- and low-spatial-resolution images respectively, (x_i, y_j) is the pixel position, B is the spectral band, t_k is the image acquisition moment, and a and b are respectively the gain and bias of the relative radiometric calibration between the high- and low-spatial-resolution images. For the same homogeneous land cover, the relation of formula (1) holds at any acquisition moment.
If the high- and low-spatial-resolution images of band B at moment t0 and a low-spatial-resolution image of the same band at moment tp have been obtained, the high-spatial-resolution pixel reflectance of this band at moment tp can be expressed by formula (2): F(x_i, y_j, t_p, B) = a × C(x_i, y_j, t_p, B) + b    (2)
and at moment t0 the pixel reflectances of the high- and low-spatial-resolution images are related by:
F(x_i, y_j, t_0, B) = a × C(x_i, y_j, t_0, B) + b    (3)
Subtracting formula (3) from formula (2) and rearranging gives formula (4):
F(x_i, y_j, t_p, B) = F(x_i, y_j, t_0, B) + a × (C(x_i, y_j, t_p, B) − C(x_i, y_j, t_0, B))    (4)
Formula (4) says that the high-spatial-resolution pixel reflectance at moment tp equals the high-spatial-resolution reflectance at moment t0 plus the reflectance change from t0 to tp. In the formula, a is the gain of the relative radiometric calibration between the high- and low-spatial-resolution images; it can also be regarded as the conversion coefficient between the high- and low-spatial-resolution pixel reflectances, and is determined by the characteristic differences of the two sensors and by the difference in atmospheric conditions at the imaging moments. If the imaging moments of the two sensors are close, the atmospheric conditions can be considered identical, and the coefficient is then determined only by the sensor characteristic differences and does not change over time.
If two pairs of high- and low-spatial-resolution images at any two moments tm and tn are obtained, linear regression over the pure-pixel reflectances of each band in the two pairs yields the conversion coefficient a of each band; with this coefficient and formula (4), a temporally continuous series of high-spatial-resolution reflectance images can be generated from a temporally continuous series of low-spatial-resolution images.
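As a toy illustration of the pure-pixel case, the gain a can be fitted by linear regression on pure-pixel reflectance pairs and plugged into formula (4); the synthetic reflectances and function names below are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch: fit the relative-calibration gain a from pure-pixel
# reflectances (formula 1 holds at every known moment), then predict the
# tp high-spatial-resolution reflectance with formula (4).
def fit_gain(C_pure, F_pure):
    a, b = np.polyfit(np.ravel(C_pure), np.ravel(F_pure), 1)
    return a

def formula4(F_t0, C_tp, C_t0, a):
    return F_t0 + a * (C_tp - C_t0)

# Synthetic pure pixels obeying F = 1.2 * C + 0.05 exactly.
C = np.array([0.10, 0.20, 0.30, 0.40])
F = 1.2 * C + 0.05
a = fit_gain(C, F)                                     # recovers the gain 1.2
pred = formula4(F_t0=0.29, C_tp=0.25, C_t0=0.20, a=a)  # 0.29 + 1.2 * 0.05
```
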
Non-homogeneous land-surface case: owing to the limited spatial resolution of the sensor and the complexity and variety of land covers, mixed pixels are widespread in low-spatial-resolution images, especially in regions where land-cover types are diverse and their distribution cluttered. In this case, the reflectance of a low-spatial-resolution pixel and that of its corresponding high-spatial-resolution pixels are no longer responses to the same land cover; the difference between the two is not merely one of radiometric calibration, so formula (4), which is based on pure pixels of a homogeneous surface, no longer applies to mixed pixels of a non-homogeneous surface.
The spectrum of a mixed pixel is the aggregate expression of the end-member spectra of the land covers within it, so the change of its reflectance between moments tm and tn is the aggregate of the changes of the end-member spectra. The most widely used spectral mixing model for mixed pixels is the linear mixing model, which expresses the relation between a mixed pixel and its end-member spectra linearly. Suppose the mixed-pixel reflectances of a certain band at moments tm and tn on the low-spatial-resolution image are Ym and Yn; according to the linear mixing model they can be expressed as:
Ym = Σ_{i=1}^{M} f_i X_im + ε,    Yn = Σ_{i=1}^{M} f_i X_in + ε    (5)
where M is the number of end members contained in the mixed pixel, f_i is the fraction of the i-th end member, X_im and X_in are the spectra of the i-th end member at tm and tn respectively, and ε is the error term, considered constant between tm and tn. Formula (5) gives the change Yn − Ym:
Yn − Ym = Σ_{i=1}^{M} f_i (X_in − X_im)    (6)
Suppose that within the time interval between tm and tn the change of each end-member reflectance can be approximated as linear, so that the end-member spectrum X_in can be written, with r_i the linear rate of change of the i-th end member's reflectance, as:
X_in = r_i Δt + X_im    (7)
where Δt = tn − tm. Substituting formula (7) into formula (6) gives:
Yn − Ym = Δt Σ_{i=1}^{M} f_i r_i    (8)
If the reflectance of the k-th end member at tm and tn is known, formula (7) gives:
Δt = (X_kn − X_km) / r_k    (9)
Substituting formula (9) into formula (8) and rearranging gives:
(X_kn − X_km) / (Yn − Ym) = r_k / Σ_{i=1}^{M} f_i r_i    (10)
Since the time interval between tm and tn is assumed limited, the end-member fractions f_i and the rates of change r_i can be considered stable between tm and tn, so the right-hand side of formula (10) is a stable quantity, denoted v; its meaning is the change scale factor, the ratio of the reflectance change of the class-k end member to the reflectance change of the mixed pixel.
For a non-homogeneous surface, the low-spatial-resolution pixel is a mixed pixel, and each pixel of the corresponding high-spatial-resolution image is regarded as an end member. According to formula (10), the high-spatial-resolution pixel reflectances F(x_i, y_i, t_m, B) and F(x_i, y_i, t_n, B) at tm and tn and the corresponding low-spatial-resolution pixel reflectances C(x_i, y_i, t_m, B) and C(x_i, y_i, t_n, B) satisfy:
(F(x_i, y_i, t_n, B) − F(x_i, y_i, t_m, B)) / (C(x_i, y_i, t_n, B) − C(x_i, y_i, t_m, B)) = v    (11)
Formula (11) shows that between tm and tn the high-spatial-resolution reflectance change of each band is linearly related to the corresponding low-spatial-resolution change, and linear regression yields the change scale factor v of each band.
If the high- and low-spatial-resolution images of band B at moment t0 (between tm and tn) and a low-spatial-resolution image of the same band at moment tp have been obtained, then under the above assumptions the scale factor between the reflectance changes of the high- and low-spatial-resolution pixels of this band from t0 to tp is still v:
(F(x_i, y_i, t_p, B) − F(x_i, y_i, t_0, B)) / (C(x_i, y_i, t_p, B) − C(x_i, y_i, t_0, B)) = v    (12)
Formula (12) can be solved for the high-spatial-resolution reflectance at moment tp:
F(x_i, y_j, t_p, B) = F(x_i, y_j, t_0, B) + v × (C(x_i, y_j, t_p, B) − C(x_i, y_j, t_0, B))    (13)
Although formula (13) has the same form as formula (4), its meaning and conditions of use differ. Formula (4) expresses the relative calibration relation between pure-pixel reflectances at different resolutions; it holds at any moment, and its result is the most accurate. Formula (13) expresses the proportional relation between reflectance changes at different resolutions for a mixed pixel; by assumption it holds only between tm and tn or near them, and it rests on the assumption that each land cover's reflectance changes linearly over a short period, so its result is an approximate solution.
Based on the above theory, when the low-spatial-resolution pixel is a pure pixel, formula (4) can be used to solve for the high-spatial-resolution reflectance at moment tp; when it is a mixed pixel, formula (13) is used. The result computed from pure pixels is, however, more accurate and reliable than that from mixed pixels. To use the information of pure pixels as far as possible, and to reduce the uncertainty introduced into the result by cloud, atmospheric and other contamination of the images, the present embodiment solves within a window: taking pixel (x_{w/2}, y_{w/2}) as the centre, a window of width w is defined, the similar pixels belonging to the same land cover as the centre pixel are first screened out, and the high-spatial-resolution reflectance of the centre pixel at moment tp is then computed with formula (14):
F(x_{w/2}, y_{w/2}, t_p, B) = F(x_{w/2}, y_{w/2}, t_0, B) + Σ_{i=1}^{N} W_i × V × (C(x_i, y_i, t_p, B) − C(x_i, y_i, t_0, B))    (14)
where N is the number of similar pixels in the window belonging to the same land cover as the centre pixel, W_i is the weight of each similar pixel, and V is the conversion coefficient between high and low spatial resolution. Formula (14) says that the reflectance at moment tp is the reflectance at the known moment t0 plus the change from t0 to tp, this change being predicted jointly by the similar pixels in the window.
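The window-based prediction of formula (14) can be sketched as follows; the scalar interface and names (`weights` for W_i, `V` as a single per-band coefficient) are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of formula (14): the tp reflectance of the centre pixel is
# its known-date reflectance plus a change term contributed jointly by the
# N similar pixels in the window, each with weight W_i, scaled by the
# conversion coefficient V.
def formula14(F_center_t0, C_tp, C_t0, weights, V):
    W = np.asarray(weights, float)
    W = W / W.sum()                          # ensure the weights sum to 1
    change = np.sum(W * V * (np.asarray(C_tp, float) - np.asarray(C_t0, float)))
    return F_center_t0 + change

# Two similar pixels with equal weights; both low-res changes are +0.02,
# so with V = 1.5 the predicted change is +0.03.
pred14 = formula14(F_center_t0=0.30,
                   C_tp=[0.26, 0.27], C_t0=[0.24, 0.25],
                   weights=[0.5, 0.5], V=1.5)
```
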
Before the present embodiment is applied, the acquired high- and low-spatial-resolution images usually need to be pre-processed so that the multi-source data have similar band settings, the same pixel size and map format, and the same coordinate system; conventional software and methods can be used for this. The embodiment of the processing method for remote-sensing images is described in detail below with reference to the accompanying drawings.
The present embodiment operates on two groups of remote-sensing images of the same land surface: the first group has high spatial resolution and the second group low spatial resolution. From the two groups, the image A of high spatial resolution at moment tp is derived; image A consists of at least one pixel to be calculated. As shown in Fig. 1, the processing method comprises the following steps. Step 100: select at least two pairs of images of corresponding moments from the remote-sensing images — in this embodiment, the two pairs at the known moments tm and tn — and select from the second group the image whose moment corresponds to tp. Step 101: set a pixel to be calculated as the target pixel and determine the land-surface region corresponding to it as the target region. Step 102: take the pixel corresponding to the target region as the centre pixel, and obtain the similar pixels by screening the pixels of similar land cover within the window around the centre pixel in the high-spatial-resolution images of the known moments; the centre pixel itself counts as a similar pixel, and when no other similar pixel can be screened out, the centre pixel is the only similar pixel. Step 104: calculate the weights of the similar pixels — in this embodiment, from the two low-spatial-resolution images at moments tm and tn. Step 106: using the weights, perform weighted linear regression on the high- and low-spatial-resolution reflectances of the known moments to calculate the conversion coefficient. Step 108: calculate the high-spatial-resolution reflectance of the target pixel at the prediction moment — in this embodiment, compute the time weights of the known-moment high-spatial-resolution images from the conversion coefficient and the low-spatial-resolution image at the prediction moment tp, and obtain the reflectance as a weighted combination of the known-moment high-spatial-resolution images according to these time weights. Step 110: compute all target pixels in turn and finally generate image A. Each of the above steps is described in detail below with reference to the drawings.
Fig. 2 is a flowchart of screening similar pixels in the present embodiment. As shown in the figure, it comprises:
Step 1022: take the pixel corresponding to the target area as the center pixel, and in the high-spatial-resolution image of each known time screen out, within a window around the center pixel, the pixels that belong to the same ground-object type as the target pixel; these are the similar pixels. Within the window centered on the target pixel (x_{w/2}, y_{w/2}), high-spatial-resolution pixels belonging to the same ground-object type as the center pixel can reliably provide comparatively correct reflectance-change information. Two methods can be used to screen similar pixels: (a) perform unsupervised classification on the high-spatial-resolution image, and take pixels of the same class as the center pixel as similar pixels; (b) use a threshold test, judging whether another pixel in the window belongs to the same ground-object type by the difference between its reflectance and that of the center pixel. The two screening methods share the same idea — pixels with similar spectra are taken as the same ground-object class — but they differ: the threshold method searches the window for pixels spectrally similar to the center pixel, with the center pixel as the search center, so its screening condition is locally adapted and changes with the position of the center pixel; classification, by contrast, is performed on the whole image, so if the classification contains errors, the judged similar pixels may be badly wrong. Therefore, in the present embodiment similar pixels are screened with the threshold method: if all bands of a pixel in the window satisfy the condition of Formula 15, that pixel is confirmed as a similar pixel of the center pixel.
|F(x_i, y_i, t_k, B) − F(x_{w/2}, y_{w/2}, t_k, B)| ≤ σ(B) × 2/m    (15)
where σ(B) is the standard deviation of all pixels of band B of the high-spatial-resolution image at time t_k, and m is the estimated number of ground-object classes; for example, if the region covered by the image mainly contains 4 classes of ground object — vegetation, bare land, water, rock — m can be set to 4. Of course, screening may also be performed on a subset of the bands, or method (a) may be adopted instead.
Step 1024: take the intersection of the similar pixels screened at the two known times. Some ground objects change over time, and their spectral signatures change accordingly, so similar pixels screened from a single-date image may be wrong. For example, suppose the window contains two ground-object types, bare land and crops, and the center pixel is a crop pixel: if at time tm the crops have not yet grown, their spectral signature is identical to bare land and the screened "similar" pixels are bare-land pixels; if at time tn the crops have grown, the correct crop pixels can be screened from the tn image. For this reason, in the present embodiment similar pixels are screened separately from the two high-spatial-resolution images at tm and tn, and the two screening results are intersected, improving the accuracy of the result. Figure 3 is a schematic diagram of screening similar pixels by intersection, showing the center pixel and the similar pixels of the center pixel.
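The threshold screening of step 1022 (Formula 15) and the intersection of step 1024 can be sketched as follows. This is a minimal illustration with numpy, assuming reflectance arrays laid out as (bands, rows, cols); the function names, window handling, and synthetic test data are our own and not from the patent.

```python
import numpy as np

def screen_similar(hr_image, cx, cy, w, m):
    """Formula 15: a window pixel is 'similar' to the center pixel if, in every
    band B, |F(xi, yi, B) - F(center, B)| <= sigma(B) * 2 / m.
    hr_image: (bands, rows, cols) high-resolution reflectance at one known time.
    Returns a boolean mask over the (w x w) window centered on (cy, cx)."""
    half = w // 2
    win = hr_image[:, cy - half:cy + half + 1, cx - half:cx + half + 1]
    center = hr_image[:, cy, cx][:, None, None]
    # sigma(B): per-band standard deviation over the whole image; m: class count
    thresh = hr_image.std(axis=(1, 2)) * 2.0 / m
    return np.all(np.abs(win - center) <= thresh[:, None, None], axis=0)

def screen_intersection(hr_tm, hr_tn, cx, cy, w, m):
    """Step 1024: intersect the masks screened at the two known times tm, tn."""
    return screen_similar(hr_tm, cx, cy, w, m) & screen_similar(hr_tn, cx, cy, w, m)
```

Because the threshold uses the per-band image standard deviation divided by the class count m, a larger m tightens the condition, which matches the intent of Formula 15.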
Fig. 4 is the flow of calculating similar-pixel weights in the present embodiment. As shown in the figure, it comprises:
Step 10402: calculate the purity of each similar pixel. The weight W determines the contribution of each similar pixel screened in the window to the calculation of the target-pixel reflectance. In the present embodiment W is jointly determined by the purity of each similar pixel and its spatial distance from the center pixel: the higher the purity and the nearer the pixel, the larger its weight. This is because a pure pixel over a homogeneous surface provides the most accurate change information, and a similar pixel spatially nearer to the center pixel is assumed to change more consistently with it.
A pixel in the high-spatial-resolution image represents a certain ground-object type, and the low-spatial-resolution pixel at its corresponding position contains that type; purity refers to the proportion of that ground-object type within the low-spatial-resolution pixel. If, over a homogeneous surface, the low-spatial-resolution pixel is entirely covered by that ground-object type, it is considered a pure pixel. For example, if the 30 m × 30 m region covered by a pixel in a Landsat image is wheat, and the 500 m × 500 m region covered by the pixel at the corresponding position in the MODIS image is also entirely wheat, then that MODIS pixel is a pure wheat pixel. Because different ground objects have different spectral signatures, purity can be judged by the degree of consistency between the spectral curve of a high-spatial-resolution pixel and that of the low-spatial-resolution pixel at its corresponding position: the more consistent the spectra, the higher the purity. The consistency between the spectral curves of the high- and low-spatial-resolution pixels can be characterized by the correlation coefficient, so the purity R of each screened similar pixel can be calculated by Formula 16:

R_i = cov(F_i, C_i) / √(D(F_i) · D(C_i))    (16)

F_i = {F(x_i, y_i, t_m, B_1), …, F(x_i, y_i, t_m, B_n), F(x_i, y_i, t_n, B_1), …, F(x_i, y_i, t_n, B_n)}

C_i = {C(x_i, y_i, t_m, B_1), …, C(x_i, y_i, t_m, B_n), C(x_i, y_i, t_n, B_1), …, C(x_i, y_i, t_n, B_n)}

where the vectors F_i and C_i are respectively the high- and low-spatial-resolution spectral vectors of the i-th similar pixel, cov denotes covariance, and D denotes variance. R ranges from −1 to 1; a larger R indicates higher purity. The reason the spectral curves of both times tm and tn are combined when calculating the correlation coefficient is that ground objects change with phenology and their spectral signatures change accordingly, so the spectrum of a single date may be misleading. For example, before wheat turns green its spectral signature is identical to bare land, so if the low-spatial-resolution pixel corresponding to a wheat high-spatial-resolution pixel also contains bare land, calculating with an image acquired before green-up gives a spuriously high purity, which is obviously incorrect; adding an image acquired after wheat green-up gives a more correct purity. It follows that as long as the all-band spectral vectors of the high-spatial-resolution pixel at at least two known times, together with the all-band spectral vectors of the low-spatial-resolution pixel at its corresponding position, are obtained, the purity can be calculated from their degree of consistency.
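The purity calculation of Formula 16 — the correlation coefficient between the stacked two-date spectral vectors of a fine pixel and its coincident coarse pixel — can be sketched as below. The vector construction is illustrative; in practice F_i and C_i would be assembled from the co-registered images.

```python
import numpy as np

def purity(F_i, C_i):
    """Formula 16: purity R of one similar pixel, as the correlation coefficient
    between its high-resolution spectral vector F_i and the spectral vector C_i
    of the coincident low-resolution pixel. Both vectors stack the reflectances
    of all bands at both known times tm and tn."""
    F_i, C_i = np.asarray(F_i, float), np.asarray(C_i, float)
    cov = np.mean((F_i - F_i.mean()) * (C_i - C_i.mean()))
    return cov / np.sqrt(F_i.var() * C_i.var())
```

A perfectly proportional fine/coarse spectrum pair gives R = 1 (a pure pixel); an anti-correlated pair gives R = −1.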
Using purity as one element of the weight makes more effective use of the similar pixels and gives more accurate results than the current practice of representing the spectral characteristic by the difference between the high- and low-spatial-resolution reflectances at the same time.
Step 10404: judge whether a pure pixel exists among the similar pixels; if so, go to step 10406; otherwise, go to step 10408.
Step 10406: set the weight of the pure pixel to 1. In the present embodiment the weight W ranges from 0 to 1, and the weights of all similar pixels sum to 1. A pixel with R = 1 is a pure pixel. To make the fullest use of pure pixels, the weight of the pure pixel is set to 1; that is, the reflectance-change information of the center pixel is provided entirely by these pure pixels. In another embodiment, when P pure pixels (pixels with R = 1) exist among the similar pixels, the weight W of each pure pixel is set to 1/P and the weights of the other similar pixels are set to 0. Of course, the pure-pixel judgment may also be performed at another point in the weight-calculation process, for example before step 10402 or after step 10408, and may even be omitted.
Step 10408: calculate the spatial distance of each similar pixel from the center pixel. The spatial distance d_i between a similar pixel (x_i, y_i) and the window center pixel (x_{w/2}, y_{w/2}) is calculated by Formula 17:

d_i = 1 + √((x_{w/2} − x_i)² + (y_{w/2} − y_i)²) / (w/2)    (17)

The above formula includes a normalization of the spatial distance: within a window of size w, the spatial distance d_i of each similar pixel ranges from 1 to 1 + 2^0.5; the larger the value, the farther the similar pixel is from the center pixel.
Step 10410: calculate the weight of each similar pixel. Purity and spatial distance, two mutually independent quantities, are now combined to determine the weight W of each similar pixel. As noted above, a pixel of higher purity and smaller distance provides more accurate information: the larger R and the smaller d, the larger its weight W. In the present embodiment purity and spatial distance are combined into a total distance D by Formula 18:

D_i = (1 − R_i) × d_i    (18)

A similar pixel with larger D contributes less to the calculation of the target-pixel reflectance. The reciprocal of D is then normalized to obtain the weight W of each similar pixel, Formula 19:

W_i = (1/D_i) / Σ_i (1/D_i)    (19)

Of course, the relation among purity, spatial distance, and weight can be established in various ways, as long as similar pixels of higher purity and smaller distance from the center pixel receive larger weights.
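Formulas 17–19, together with the pure-pixel rule of step 10406 (the 1/P variant), can be combined into one weight routine. The epsilon tolerance for detecting R = 1 is our own numerical safeguard, not part of the patent.

```python
import numpy as np

def similar_pixel_weights(R, xy, center, w, eps=1e-6):
    """Formulas 17-19: combine purity R and normalized spatial distance d into a
    total distance D = (1 - R) * d, then normalize 1/D into weights W.
    If pure pixels (R ~ 1) exist, they share all the weight equally (1/P rule).
    R: purity per similar pixel; xy: (n, 2) pixel coordinates; center: (x, y)
    of the window center; w: window size."""
    R, xy = np.asarray(R, float), np.asarray(xy, float)
    # Formula 17: normalized distance, in [1, 1 + sqrt(2)]
    d = 1.0 + np.hypot(xy[:, 0] - center[0], xy[:, 1] - center[1]) / (w / 2.0)
    pure = R >= 1.0 - eps
    if pure.any():                       # step 10406: pure pixels take all weight
        return np.where(pure, 1.0 / pure.sum(), 0.0)
    D = (1.0 - R) * d                    # Formula 18
    inv = 1.0 / D
    return inv / inv.sum()               # Formula 19
```

The weights always sum to 1, so the weighted average of the change amounts in step 10806 stays a proper average.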
Fig. 5 is the flow of obtaining the conversion coefficient. As shown in the figure, it comprises:
Step 1062: calculate the conversion coefficient. From Formula 4 and Formula 13, the conversion coefficient V comprises the gain a of relative calibration and the proportionality coefficient v between the high- and low-spatial-resolution change amounts; both can be obtained by linear regression of the per-band high- and low-spatial-resolution reflectances already obtained at times tm and tn. To reduce the uncertainty that noise contamination and similar factors bring to the conversion coefficient, linear regression is performed on the high- and low-spatial-resolution reflectances of all similar pixels at tm and tn, and the slope of the regression line is the conversion coefficient V. To make fuller use of the information of the more reliable similar pixels, weighted least squares is used for the regression, with the weight of each similar pixel being its W. Figure 6 is a schematic diagram of the conversion-coefficient calculation. In a certain window, 12 pixels belong to the same ground-object type as the center pixel, including the center pixel itself; the dashed boxes mark the reflectances at tm and tn respectively. From tm to tn, both the high-spatial-resolution reflectance and its corresponding low-spatial-resolution reflectance increase, but the high-spatial-resolution reflectance increases more, about 6.448 times the low-spatial-resolution increase (R = 0.915, P < 0.001). This shows that this band's reflectance of this ground-object type changed considerably between tm and tn — for example, vegetation growth causes the near-infrared reflectance to increase sharply. According to the discussion in the theoretical foundation, every similar pixel reflects the conversion coefficient between the low- and high-spatial-resolution reflectances, but because of influences such as noise, the coefficient usually cannot be calculated from only a few similar pixels — such a result carries very large uncertainty — whereas linear regression uses the information of all similar pixels to obtain a more robust conversion coefficient. Because the contributions of the similar pixels differ, the weight of each sample is further taken into account in the regression in order to embody these differences; the present embodiment uses weighted least squares, although other regression methods may also be adopted.
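The weighted least-squares slope of step 1062 can be written in closed form. The sketch below regresses the high-resolution reflectances on the low-resolution ones over the similar pixels (the tm and tn samples would be stacked into the input arrays); variable names are ours.

```python
import numpy as np

def conversion_coefficient(C_vals, F_vals, W):
    """Step 1062: slope V of the weighted least-squares regression of the
    high-resolution reflectances F_vals on the low-resolution reflectances
    C_vals over the similar-pixel samples, with weights W."""
    C_vals, F_vals, W = (np.asarray(a, float) for a in (C_vals, F_vals, W))
    cbar = np.average(C_vals, weights=W)   # weighted means
    fbar = np.average(F_vals, weights=W)
    num = np.sum(W * (C_vals - cbar) * (F_vals - fbar))
    den = np.sum(W * (C_vals - cbar) ** 2)
    return num / den
```

With uniform weights this reduces to the ordinary least-squares slope; higher-weight (purer, nearer) similar pixels pull the slope toward their own fine/coarse relationship.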
Step 1064: adjust the conversion coefficient according to the uncertainty. Data-processing steps such as geometric registration and atmospheric correction all introduce a certain error into the reflectance. If the change of the low-spatial-resolution reflectance between times tm and tn is too small — within the error range — the slope V obtained by weighted regression carries very large uncertainty. This may arise in two situations: the ground objects inside the low-spatial-resolution pixel whose reflectance changed occupy too small an area for the change to show at the low-spatial-resolution scale; or some ground-object reflectances inside the low-spatial-resolution pixel increased while others decreased by equal amounts, so the changes cancel out at the low-resolution scale. To address this, in the present embodiment, when the mean low-spatial-resolution reflectance change of all similar pixels is smaller than the uncertainty of the change between the two known-time reflectances, the conversion coefficient is set to 1. Suppose the uncertainty u of the reflectance of each band is 1% of that band's maximum value, and that the uncertainties of the bands of each image are mutually independent; the uncertainty of the change between the two known-time reflectances is then:

U = √(u_{tm}² + u_{tn}²)    (20)

where u_{tm} and u_{tn} are the uncertainties of the reflectances at times tm and tn, respectively. If the mean low-spatial-resolution reflectance change of all similar pixels is smaller than the uncertainty given by Formula 20, the second change amount can only be distributed evenly to each high-spatial-resolution pixel inside the low-spatial-resolution pixel, and the conversion coefficient V of the change is 1. Of course, the uncertainty of the change between the known-time reflectances may also be obtained by other statistical methods; different expressions of Formula 20 and other converted forms of the calculation all fall within the protection scope of the present invention. This avoids calculating a spurious conversion coefficient V when the reflectance change between the two times is too small relative to the inherent uncertainty — that is, the error — of the reflectance itself.
Step 1066: correct singular conversion coefficients. Because of the image's own noise, conversion coefficients with large absolute values may occur. To correct individual singular conversion coefficients, the conversion coefficients of all pixels of the full image are analyzed statistically; in the present embodiment, conversion coefficients V lying beyond 2 standard deviations above or below the mean are set to 1. Of course, the threshold may also be set at another preset multiple of the standard deviation as required.
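Steps 1064 and 1066 can be sketched together as a post-processing pass over the per-pixel slopes. The 1%-of-maximum uncertainty model and the 2σ cutoff follow the text; the array shapes and function name are illustrative assumptions.

```python
import numpy as np

def adjust_coefficients(V, mean_dC, u_tm, u_tn, n_std=2.0):
    """Steps 1064/1066 sketch.
    (1) Formula 20: if the mean low-resolution change |mean_dC| of the similar
        pixels is below U = sqrt(u_tm^2 + u_tn^2), the regression slope is
        unreliable and V is set to 1.
    (2) Slopes farther than n_std standard deviations from the image-wide mean
        are treated as singular and reset to 1."""
    V = np.asarray(V, float).copy()
    U = np.hypot(u_tm, u_tn)                          # Formula 20
    V[np.abs(np.asarray(mean_dC, float)) < U] = 1.0   # step 1064
    mu, sd = V.mean(), V.std()
    V[np.abs(V - mu) > n_std * sd] = 1.0              # step 1066
    return V
```

Setting V to 1 means the low-resolution change amount is passed through unchanged to the high-resolution pixels, which is exactly the even-distribution fallback the text describes.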
Fig. 7 is the flow of calculating the high-spatial-resolution reflectance of the target pixel at the prediction time. As shown in the figure, it comprises:

Step 10802: according to the low-spatial-resolution image at the prediction time, calculate the second change amount of each similar pixel relative to the known time.

Step 10804: according to the conversion coefficient, convert the second change amount into the first change amount.

Step 10806: take the weighted average, by the above weights, of the first change amounts of the similar pixels to obtain the first change amount of the target pixel.

Step 10808: add the first change amount of the target pixel to the known-time high-spatial-resolution reflectance to obtain, based on each different known time, the high-spatial-resolution reflectance of the target pixel at the prediction time.

These four steps complete the calculation of Formula 14. Take as an example the calculation of the high-spatial-resolution reflectance at time tp, given a low-spatial-resolution image at any time tp between or near the two known times tm and tn. According to Formula 14, the high-spatial-resolution reflectance at tp can be calculated from the known high- and low-spatial-resolution images at tm and tn; since the high- and low-spatial-resolution reflectances at tm and tn are all known, the high-spatial-resolution reflectance F(x_{w/2}, y_{w/2}, t_p, B) at tp can be calculated separately from the high-spatial-resolution reflectance at tm and at tn, and the two results are denoted F_m(x_{w/2}, y_{w/2}, t_p, B) and F_n(x_{w/2}, y_{w/2}, t_p, B) respectively. Of course, as long as the conversion coefficient, the low-spatial-resolution image at the prediction time, and the weight of each similar pixel are known, the high-spatial-resolution reflectance of the target pixel at the prediction time can be calculated on the basis of the known-time high-spatial-resolution image. Different expressions of Formula 14 and other converted forms of the calculation all fall within the protection scope of the present invention.
Step 10810: obtain the predicted high-spatial-resolution reflectance by weighted calculation. To make full use of the information of the high-spatial-resolution reflectances at both tm and tn, the two calculated results are weighted and summed to obtain the final prediction. It is assumed that the result based on the known time whose low-spatial-resolution reflectance changed less relative to tp is the more accurate and reliable one: if no reflectance change shows in the low-spatial-resolution image, the high-spatial-resolution image is assumed to show no reflectance change either. Based on this assumption, the time weights T of the two results are calculated by Formula 21.
Of course, other methods may also be adopted to calculate the weights of the high-spatial-resolution reflectances calculated from the different known times. For example, as in Formula 23, the weight can be derived from the difference between the low-spatial-resolution reflectances at the prediction time and at known time t_k:

T_{ijk} = |C(x_i, y_j, t_p) − C(x_i, y_j, t_k)|    (23)
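The blending of step 10810 can be sketched as an inverse-change weighting: the known time whose low-resolution reflectance moved least toward tp gets the larger share. The exact form of Formula 21 is not reproduced in this text, so the 1/|change| normalization below is one plausible reading, not the patent's definitive formula.

```python
import numpy as np

def combine_predictions(preds, dC_abs, eps=1e-12):
    """Step 10810 sketch: combine the per-known-time predictions (F_m, F_n)
    with time weights that favour the known time whose low-resolution
    reflectance changed least relative to tp.
    preds: predictions from each known time; dC_abs: the corresponding
    absolute low-resolution changes |C(tp) - C(tk)|."""
    inv = 1.0 / (np.asarray(dC_abs, float) + eps)  # smaller change -> larger weight
    T = inv / inv.sum()                            # normalized time weights
    return np.sum(T * np.asarray(preds, float))
```

When one known time shows essentially no low-resolution change, its prediction dominates, which is the behaviour the assumption in step 10810 calls for.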
At this point the final predicted high-spatial-resolution reflectance of the target pixel has been obtained. Applying the conversion-coefficient fitting of step 108 alone already improves the computational accuracy for small ground objects and linear ground objects; the screening of similar pixels and the improved weight calculation contribute to the improvement of overall accuracy.
To further illustrate and verify the present embodiment, it is checked below against simulated data and real data respectively.
Simulated-data check. To better check the accuracy and reliability of the method, simple simulated data are applied to the present embodiment below. For ease of comparison with Gao's STARFM method, the simulated data from Gao's paper are used in the test. The present embodiment is referred to as MDCM hereinafter.
The case of linear ground objects. Linear ground objects — roads, bridges, rivers, and so on — are quite common in land cover and land use, and the human eye is highly sensitive to them, so the correctness of linear ground objects in the generated high-resolution image directly affects its visual quality. Figure 16 shows the simulated linear-ground-object case. In Figure 16(a)-(c) the reflectance of the circular area is fixed at 0.05, the reflectance of the line is fixed at 0.5, and the reflectance of the surrounding area changes from 0.1 to 0.2 and then to 0.4; (d)-(f) are the low-spatial-resolution images aggregated from (a)-(c) respectively; (g) and (h) are the MDCM and STARFM predictions of the linear-ground-object reflectance in (b).

Figure 16(a)-(c) add, on the basis of Figure 15(a)-(c), a linear ground object along the diagonal with reflectance fixed at 0.5, simulating a road over land or a bridge over water; Figure 16(d)-(f) are the low-spatial-resolution images aggregated from Figure 16(a)-(c) respectively. The 5 images other than Figure 16(b) are used to regenerate Figure 16(b); Figure 16(g) shows the Figure 16(b) regenerated by MDCM and STARFM, and it can be seen that both MDCM and STARFM predict this linear ground object qualitatively. Figure 17 shows the MDCM and STARFM predictions of the linear-ground-object reflectance in Figure 16(b). From the quantitative results in Figure 17, STARFM cannot obtain the linear-ground-object reflectance entirely correctly — the errors appear mainly in the segment outside the circular ground object, where the region surrounding the linear ground object has changed — whereas the MDCM method obtains a completely correct linear-ground-object reflectance.
The case of small ground objects. Small ground objects are also very common in real images, especially in regions with fragmented land-cover patches, such as small fields or small water bodies. If these small ground objects are smaller than the scale of a low-spatial-resolution pixel, neither their shapes nor their reflectance information can be clearly observed in the low-spatial-resolution image, yet the regenerated high-spatial-resolution image is expected to reflect their shapes and reflectances accurately. Figure 18 shows the simulated small-ground-object case. In (a)-(c) the reflectance of the circular areas changes from 0.1 to 0.2 and then to 0.4; in each figure the diameters of the 4 circular ground objects are 5, 10, 15, and 20 high-spatial-resolution units in turn, and the reflectance of the surrounding area is fixed at 0.05; (d)-(f) are the low-spatial-resolution images aggregated from (a)-(c) respectively; (g) and (h) are the STARFM and MDCM predictions of (b) respectively.

Suppose only two ground-object classes exist: the circular ground objects in Figure 18(a)-(c) and the surrounding background. The diameters of the 4 small circular ground objects are 5, 10, 15, and 20 in turn; the reflectance of the circular objects increases from 0.1 (Figure 18(a)) to 0.2 (Figure 18(b)) and then to 0.4 (Figure 18(c)), while the reflectance of the surrounding area is fixed at 0.05. Figure 18(d)-(f) are aggregated from Figure 18(a)-(c) into low-spatial-resolution images at a ratio of 17×17 high-resolution pixels per cell. Figure 18(g) and (h) are the reconstructions of Figure 18(b) by STARFM and MDCM respectively, and Figure 19 shows the MDCM and STARFM prediction errors for each small circular ground object in Figure 18(b): the mean prediction error of each small object of a given diameter, that is, the mean over its pixels of the difference between predicted and actual reflectance. It can be seen that STARFM reproduces the reflectance correctly only for the circular object of diameter 20; the reflectances of the objects smaller than the low-spatial-resolution cell size of 17 all come out lower than the actual values, whereas the MDCM method generates correct reflectances for objects of all sizes. The main reason is that STARFM needs pure pixels to provide change information, and small ground objects smaller than the low-spatial-resolution cell have no pure pixels in the low-spatial-resolution image, so the generated reflectance is in error; the MDCM method, when no pure pixel is found in the window, uses the proportionality coefficient between the change of these small-object reflectances and the change of the low-spatial-resolution pixel containing them to correct the change information provided by the non-pure pixels, and can therefore generate the reflectances of these small ground objects accurately.
As shown in Figure 8, the remote-sensing image processing system comprises the following modules: an image selection module 101, for obtaining a first group of images with high spatial resolution and a second group of images with low spatial resolution of the same surface area and selecting at least two pairs of corresponding images at different times from the two groups; a target-pixel setting module 102, for setting a pixel of the required image A0 at a certain time as the target pixel and determining the surface area corresponding to the target pixel as the target area; a target-pixel reflectance calculation module 104, for calculating the reflectance of the target pixel; and an image generation module 108, for generating the image A0 from the reflectance of each target pixel calculated by the target-pixel reflectance calculation module 104.
The target-pixel reflectance calculation module 104 comprises the following submodules:
A conversion-coefficient module 103, which calculates, from the selected images, the conversion coefficient of the reflectance change amounts of the target area between the two image groups. As shown in Figure 9, in the present embodiment the conversion-coefficient module 103 comprises: a center-pixel setting module 1031, for setting the pixel corresponding to the target area in each selected first-group image as the center pixel; a similar-pixel screening module 1032, for screening out, from the image containing each center pixel, at least one similar pixel belonging to the same ground-object type as the center pixel; and a conversion-coefficient calculation module 1033, for determining the surface area corresponding to each similar pixel as a similar area and calculating the conversion coefficient from the reflectances of the similar areas in the selected images. An adjustment module 1034 corrects possibly incorrect conversion coefficients caused by calculation errors as well as singular conversion coefficients; it is described in detail below with reference to Figure 14. In the present embodiment, the conversion-coefficient calculation in module 1033 takes the obtained reflectances and performs weighted regression analysis, with the second change amount as the independent variable and the first change amount as the dependent variable, according to the weight of each similar pixel. For the specific implementation of similar-pixel screening, see the explanation of step 1022 above; it is not repeated here.
A weight module 109, for calculating the weight of each similar pixel from the selected images; it is described in detail below with reference to Figures 11 and 12.
A second-change-amount calculation module 105, for selecting from the second image group the image B0 at the time corresponding to image A0 and at least one image at another time, and calculating from the selected images the change amount of the target-area reflectance in image B0 relative to its reflectance in each of the other-time images. As shown in Figure 10, the second-change-amount calculation module 105 comprises: a similar-pixel second-change-amount calculation module 1051, for selecting from the second image group the image B0 at the corresponding time and at least one image at another time, and calculating from them the reflectance change amount of each similar area in the second image group; and a second-change-amount weighted-averaging module 1052, for calculating, as the weighted average (by the obtained weights) of the reflectance change amounts produced by module 1051, the change amount of the target-area reflectance in image B0 relative to each of the other-time images.
A change-amount conversion module 106, which converts, according to the conversion coefficient, each change amount obtained by the second-change-amount calculation module 105 into the corresponding reflectance change amount in the first image group.
A reflectance calculation module 107, which calculates the reflectance of the target area from each reflectance change amount obtained by the change-amount conversion module 106 and the reflectance of the target area in each first-group image at the corresponding time. Via the second-change-amount weighted-averaging module 1052, the change-amount conversion module 106, and the reflectance calculation module 107, the calculation of Formula 14 in the present embodiment is completed and the reflectance of the target pixel is obtained. The image generation module 108 generates the image A0 by calculating the reflectances of all pixels to be calculated in turn.
In another embodiment, the target-pixel reflectance calculation module 104 further comprises a time-weight calculation module 111, for calculating the time weight of each obtained target-area reflectance from the change amounts obtained by the second-change-amount calculation module 105; and the reflectance calculation module 107 further comprises a reflectance weighted-calculation module, for obtaining the required target-pixel reflectance as the weighted average of the reflectances according to the time weights. For the specific implementation of the time-weight calculation and the final target-pixel reflectance calculation, see the explanation of step 10810 above; it is not repeated here.
As shown in Figure 11, the weight module 109 comprises: a purity calculation module 1091, for calculating, from the selected images, the degree of consistency between the all-band spectral vector of each similar pixel and the all-band spectral vector of the corresponding similar-area pixel in the second image group; a pure-pixel judgment module 1093, for judging whether a pure pixel exists among the similar pixels, setting the weight of the pure pixel to the maximum when one exists — that is, all change information is taken from the pure pixel — and leaving the weights of the similar pixels unadjusted when none exists; a spatial-distance calculation module 1094, for calculating the spatial distance of each similar pixel from its center pixel; and a weight calculation module 1095, for calculating the weight of each similar pixel from its purity and its spatial distance from its center pixel. For the specific implementation of each module in the weight module 109, see the explanation of step 104 above; it is not repeated here.
As shown in Figure 12, in another embodiment the weight module 109 comprises: a purity calculation module 1091, for calculating, from the at least two pairs of corresponding-time images, the degree of consistency between the all-band spectral vector of each similar pixel and the all-band spectral vector of the pixel of the corresponding surface area in the second-group image at the corresponding time; and a similar-pixel weight calculation module 1092, for calculating the weight of each similar pixel from that degree of consistency. For the specific purity and weight calculations, refer to the previous embodiment of the weight module 109; they are not repeated here.
As shown in Figure 13, in another embodiment the similar-pixel screening module 1032 further comprises an intersection screening module 10321, which takes the intersection of the surface areas corresponding to the similar pixels screened from each image and takes the pixels corresponding to the intersected surface areas as the similar pixels. For the specific implementation of the intersection screening module 10321, see the explanation of step 1024 above; it is not repeated here.
As shown in figure 14, the adjusting module 1034 comprises: an average judging module 10341, for judging whether the average of the second variation corresponding to each similar pixel is less than the reflectivity's own error; an error adjusting module 10342, for setting the conversion coefficient so that the first variation equals the second variation when the average judging module 10341 judges the average to be less, and leaving the conversion coefficient unchanged when the average judging module 10341 judges it to be not less; a singular conversion coefficient judging module 10343, for judging whether the conversion coefficient is singular; and a singular conversion coefficient correcting module 10344, for correcting a singular conversion coefficient. The specific implementation of the adjusting module 1034 may refer to the above explanations of step 1064 and step 1066 and is not repeated here.
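The judging and correcting steps of the adjusting module 1034 can be sketched in Python as follows. This is an illustrative reading: the singularity bound of ±10 and the fallback value are assumptions, as the patent does not state the singularity test or the corrected value.

```python
def adjust_conversion_coefficient(coef, coarse_changes, refl_error, fallback=1.0):
    """Illustrative sketch of modules 10341-10344 (bounds assumed).

    When the mean coarse-series change of the similar pixels is below the
    reflectivity's own error, the change is treated as noise-level and
    the coefficient is forced to 1.0, so that the first variation equals
    the second variation.  A singular (out-of-range) coefficient is then
    replaced by a fallback value.
    """
    mean_change = sum(coarse_changes) / len(coarse_changes)
    if abs(mean_change) < refl_error:
        coef = 1.0  # first variation equals second variation
    if not (-10.0 < coef < 10.0):  # crude singularity test (assumed)
        coef = fallback
    return coef
```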
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the present invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A processing method for remote sensing images, characterized in that a pixel of a required image A0 at a certain moment is set as a target pixel, the surface area corresponding to the target pixel is determined as a target area, a first group of images with high spatial resolution and a second group of images with low spatial resolution of the same surface area are obtained, at least two pairs of images at corresponding but different moments are selected from the two groups of images, and the reflectivity of each target pixel is calculated according to the following steps so as to generate the A0 image:
a. calculating, from the selected images, the conversion coefficient between the reflectivity variations of the target area in the two groups of images;
b. selecting, from the second group of images, the image B0 at the moment corresponding to the image A0 and at least one image at another moment, and calculating, from the selected images, the variation of the target area's reflectivity in the image B0 with respect to its reflectivity in each said image at another moment;
c. converting, according to the conversion coefficient, each said reflectivity variation into a reflectivity variation in the first group of images; and
d. selecting, from the first group of images, the images corresponding to said other moments, and calculating the reflectivity of the target area from each converted reflectivity variation and the reflectivity of the target area in the selected images.
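For a single target area, steps a–d of claim 1 amount to predicting the fine-resolution reflectivity from a known fine image plus a converted coarse-series change. A minimal Python sketch (function and argument names are illustrative, not from the patent):

```python
def fuse_reflectance(fine_t1, coarse_t0, coarse_t1, conv_coef):
    """Illustrative sketch of steps b-d of claim 1 for one target area.

    fine_t1:   high-resolution reflectivity at another moment t1 (step d input)
    coarse_t0: low-resolution reflectivity at the required moment t0 (image B0)
    coarse_t1: low-resolution reflectivity at the other moment t1
    conv_coef: conversion coefficient between the two image groups (step a)
    """
    # step b: change of the target-area reflectivity in the coarse series
    delta_coarse = coarse_t0 - coarse_t1
    # step c: convert the coarse-series change into the fine-resolution change
    delta_fine = conv_coef * delta_coarse
    # step d: predicted fine-resolution reflectivity at the required moment
    return fine_t1 + delta_fine
```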
2. The method according to claim 1, characterized in that step a comprises:
a1. setting the pixel corresponding to the target area in the selected first group of images as a center pixel;
a2. screening out, from the image where each center pixel is located, at least one similar pixel that belongs to the same kind of ground object as the center pixel; and
a3. determining the surface area corresponding to each similar pixel as a similar area, and calculating the conversion coefficient from the reflectivity of the similar areas in the selected images.
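Step a2's screening of similar pixels can be sketched as a windowed spectral-difference test. The window size and similarity threshold below are illustrative assumptions, since claim 2 does not fix them:

```python
def screen_similar_pixels(image, center_rc, window=1, threshold=0.05):
    """Illustrative sketch of step a2: pixels in a local window whose
    reflectivity differs from the center pixel by less than `threshold`
    are taken as similar pixels of the same kind of ground object.

    image:     2-D list of per-pixel reflectivity values
    center_rc: (row, col) of the center pixel
    """
    r0, c0 = center_rc
    center = image[r0][c0]
    rows, cols = len(image), len(image[0])
    similar = []
    for r in range(max(0, r0 - window), min(rows, r0 + window + 1)):
        for c in range(max(0, c0 - window), min(cols, c0 + window + 1)):
            if abs(image[r][c] - center) < threshold:
                similar.append((r, c))
    return similar
```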
3. The method according to claim 2, characterized in that before step a3 the method further comprises: a21, calculating the weight of each similar pixel from the selected images;
and step b comprises:
b1. calculating, from the B0 image and the other-moment images of the second group, the reflectivity variation of each said similar area in the second group of images;
b2. calculating, by a weighted average of the reflectivity variations obtained in step b1 using the obtained weights, the variation of the target area's reflectivity in the image B0 with respect to its reflectivity in each said image at another moment.
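Step b2's weighted average can be sketched as follows (names assumed; this is only an illustration of the weighted-average form the claim describes):

```python
def weighted_change(changes, weights):
    """Illustrative sketch of step b2: the target-area reflectivity
    variation is the weighted average of the per-similar-pixel
    coarse-series variations."""
    total = sum(weights)
    return sum(c * w for c, w in zip(changes, weights)) / total
```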
4. The method according to claim 2, characterized in that the calculation of the conversion coefficient in step a3 is a regression analysis that takes the variation of the reflectivity of each said similar area in the second group of selected images as the independent variable and the variation of its reflectivity in the first group of images as the dependent variable.
5. The method according to claim 4, characterized in that before step a3 the method further comprises: a22, calculating the weight of each similar pixel from the selected images; and the regression analysis is a weighted regression analysis carried out according to the weight of each similar pixel.
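Claims 4 and 5 describe a (weighted) regression of fine-resolution change on coarse-resolution change, whose slope serves as the conversion coefficient. A sketch as weighted least squares through the origin (the through-origin form and the function names are assumptions; the claims do not specify the regression model):

```python
def conversion_coefficient(coarse_changes, fine_changes, weights=None):
    """Illustrative sketch of claims 4-5: regress the fine-resolution
    reflectivity variation (dependent) on the coarse-resolution variation
    (independent) over the similar areas; the slope is the conversion
    coefficient.  With `weights` this is the weighted regression of
    claim 5.  Through-origin least squares: slope = sum(w*x*y)/sum(w*x*x).
    """
    w = weights if weights is not None else [1.0] * len(coarse_changes)
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, coarse_changes, fine_changes))
    den = sum(wi * xi * xi for wi, xi in zip(w, coarse_changes))
    return num / den
```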
6. The method according to claim 3 or 5, characterized in that the calculation of the weight of each similar pixel comprises the following steps:
a221. calculating, from the selected images, the degree of consistency between the all-band spectral vector of each similar pixel and the all-band spectral vector of the pixel of each said similar area in the second group of images;
a222. calculating the weight of each similar pixel according to said degree of consistency.
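Steps a221–a222 can be sketched with cosine similarity standing in for the unspecified "degree of consistency" measure (an assumption, since the claim does not name a measure) and a simple normalisation for the weight:

```python
import math

def spectral_consistency(v_fine, v_coarse):
    """Illustrative sketch of step a221: consistency between the all-band
    spectral vector of a similar pixel and that of the corresponding
    coarse pixel, measured here as cosine similarity (assumed)."""
    dot = sum(a * b for a, b in zip(v_fine, v_coarse))
    norm_a = math.sqrt(sum(a * a for a in v_fine))
    norm_b = math.sqrt(sum(b * b for b in v_coarse))
    return dot / (norm_a * norm_b)

def weights_from_consistency(consistencies):
    """Illustrative sketch of step a222: normalise the per-pixel
    consistencies into weights that sum to one."""
    total = sum(consistencies)
    return [c / total for c in consistencies]
```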
7. The method according to claim 2, characterized in that step a2 further comprises: taking the intersection of the surface areas corresponding to the similar pixels screened out in each image, and taking the pixels corresponding to the surface areas after intersection as the similar pixels.
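Claim 7's intersection step can be sketched as a plain set intersection over the per-image similar-pixel sets (names assumed): only surface areas screened out as similar in every image survive.

```python
def intersect_similar(areas_per_image):
    """Illustrative sketch of claim 7: intersect the surface areas
    (here represented as (row, col) tuples) screened out in each image;
    the pixels corresponding to the intersection are the similar pixels.
    """
    sets = [set(areas) for areas in areas_per_image]
    return set.intersection(*sets)
```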
8. A processing system for remote sensing images, characterized in that an A0 image of high spatial resolution is obtained from a first group of images with high spatial resolution and a second group of images with low spatial resolution of the same surface area, the A0 image does not belong to the first group of images and is formed by at least one pixel to be calculated, and the processing system comprises:
an image selecting module (101), for obtaining a first group of images with high spatial resolution and a second group of images with low spatial resolution of the same surface area, and selecting from the two groups of images at least two pairs of images at corresponding but different moments; a target pixel setting module (102), for setting a pixel of the required image A0 at a certain moment as a target pixel, and determining the surface area corresponding to the target pixel as a target area;
a target pixel reflectivity calculating module (104), for calculating the reflectivity of the target pixels; and an image generating module (108), for generating the A0 image from the reflectivity of each target pixel calculated by the target pixel reflectivity calculating module (104);
wherein the target pixel reflectivity calculating module (104) comprises the following sub-modules:
a conversion coefficient module (103), for calculating, from the selected images, the conversion coefficient between the reflectivity variations of the target area in the two groups of images;
a second variation calculating module (105), for selecting from the second group of images the image B0 at the moment corresponding to the image A0 and at least one image at another moment, and calculating, from the selected images, the variation of the target area's reflectivity in the image B0 with respect to its reflectivity in each said image at another moment;
a variation converting module (106), for converting, according to the conversion coefficient, each variation obtained by the second variation calculating module (105) into a reflectivity variation in the first group of images; and a reflectivity calculating module (107), for selecting from the first group of images the images corresponding to said other moments, and calculating the reflectivity of the target area from each reflectivity variation obtained by the variation converting module (106) and the reflectivity of the target area in the selected images.
9. The system according to claim 8, characterized in that the conversion coefficient module (103) comprises:
a center pixel setting module (1031), for setting the pixel corresponding to the target area in the selected first group of images as a center pixel;
a similar pixel screening module (1032), for screening out, from the image where each center pixel is located, at least one similar pixel that belongs to the same kind of ground object as the center pixel; and a conversion coefficient calculating module (1033), for determining the surface area corresponding to each similar pixel as a similar area, and calculating the conversion coefficient from the reflectivity of the similar areas in the selected images.
10. The system according to claim 9, characterized in that the target pixel reflectivity calculating module (104) further comprises a weight module, for calculating the weight of each similar pixel from the selected images;
and the second variation calculating module (105) comprises:
a similar pixel second variation calculating module (1051), for selecting from the second group of images the image B0 corresponding to said certain moment and the images of at least one other moment, and calculating, from the selected images, the reflectivity variation of each said similar area in the second group of images;
a second variation weighted averaging module (1052), for calculating, by a weighted average of the reflectivity variations obtained by the similar pixel second variation calculating module (1051) using the obtained weights, the variation of the target area's reflectivity in the image B0 with respect to its reflectivity in each said image at another moment.
CN201210253288.4A 2012-07-20 2012-07-20 Processing method and system for remote-sensing images Pending CN103576132A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210253288.4A CN103576132A (en) 2012-07-20 2012-07-20 Processing method and system for remote-sensing images


Publications (1)

Publication Number Publication Date
CN103576132A true CN103576132A (en) 2014-02-12

Family

ID=50048313

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210253288.4A Pending CN103576132A (en) 2012-07-20 2012-07-20 Processing method and system for remote-sensing images

Country Status (1)

Country Link
CN (1) CN103576132A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105510897A (en) * 2015-12-01 2016-04-20 中国科学院上海技术物理研究所 Method for estimating satellite laser radar emergent laser wavelength reflection rate based on ground object type
CN108280419A (en) * 2018-01-18 2018-07-13 中国地质科学院矿产资源研究所 Spatial feature detection method and system
CN108280419B (en) * 2018-01-18 2020-06-30 中国地质科学院矿产资源研究所 Spatial feature detection method and system
CN110334623A (en) * 2019-06-25 2019-10-15 华中农业大学 A method of slope collapse information is extracted based on Sentinel-2A satellite remote-sensing image
CN110334623B (en) * 2019-06-25 2021-04-30 华中农业大学 Method for extracting collapsing information based on Sentinel-2A satellite remote sensing image
CN111353402A (en) * 2020-02-24 2020-06-30 中国科学院地理科学与资源研究所 Remote sensing extraction method for oil palm forest
CN111353402B (en) * 2020-02-24 2021-03-30 中国科学院地理科学与资源研究所 Remote sensing extraction method for oil palm forest
CN112906531A (en) * 2021-02-07 2021-06-04 清华苏州环境创新研究院 Multi-source remote sensing image space-time fusion method and system based on unsupervised classification
CN112906531B (en) * 2021-02-07 2023-05-23 清华苏州环境创新研究院 Multi-source remote sensing image space-time fusion method and system based on non-supervision classification

Similar Documents

Publication Publication Date Title
CN101482929B (en) Remote-sensing image processing method and system
CN103576132A (en) Processing method and system for remote-sensing images
CN108682026A (en) A kind of binocular vision solid matching method based on the fusion of more Matching units
CN102982517B (en) Remote-sensing image fusion method based on local correlation of light spectrum and space
CN110232389A (en) A kind of stereoscopic vision air navigation aid based on green crop feature extraction invariance
CN110321774B (en) Crop disaster situation evaluation method, device, equipment and computer readable storage medium
CN110321861A (en) A kind of main crops production moon scale Dynamic Extraction method
CN110866364A (en) Ground surface temperature downscaling method based on machine learning
US8855439B2 (en) Method for determining a localization error in a georeferenced image and related device
CN110414738A (en) A kind of crop yield prediction technique and system
CN102800074A (en) Synthetic aperture radar (SAR) image change detection difference chart generation method based on contourlet transform
CN110969654A (en) Corn high-throughput phenotype measurement method and device based on harvester and harvester
CN110222870A (en) Assimilate the Regional Fall Wheat yield estimation method of satellite fluorescence data and crop growth model
WO2023088366A1 (en) Method for jointly estimating soil profile salinity by using time-series remote sensing image
CN107782700B (en) A kind of AVHRR Reflectivity for Growing Season method for reconstructing, system and device
CN114881620B (en) Territorial space monitoring method and system based on satellite remote sensing
CN110503137A (en) Based on the determination method of the remote sensing image temporal-spatial fusion base image pair of mixing together
CN113888416A (en) Processing method of satellite remote sensing image data
CN116912690A (en) Forest leaf area index inversion acquisition method and system based on data fusion
CN105842245A (en) Method for assessing rice yield
CN116597322A (en) Environment monitoring method and system based on unmanned aerial vehicle acquired image
CN109086661B (en) A kind of crops relative radiometric normalization method and device
CN108985154B (en) Small-size ground object sub-pixel positioning method and system based on image concentration
CN111178186A (en) Rice extraction method, device and equipment based on sentinel remote sensing data
CN114359725B (en) Crop growth condition remote sensing monitoring system and method based on crop model and assimilation technology

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140212