CN105651263B - Shallow-water depth multi-source remote sensing fusion inversion method - Google Patents

Shallow-water depth multi-source remote sensing fusion inversion method

Info

Publication number
CN105651263B
Authority
CN
China
Prior art keywords
depth
water
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510975396.6A
Other languages
Chinese (zh)
Other versions
CN105651263A (en)
Inventor
张靖宇
马毅
张震
梁建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
First Institute of Oceanography SOA
Original Assignee
First Institute of Oceanography SOA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by First Institute of Oceanography SOA filed Critical First Institute of Oceanography SOA
Priority to CN201510975396.6A priority Critical patent/CN105651263B/en
Publication of CN105651263A publication Critical patent/CN105651263A/en
Application granted granted Critical
Publication of CN105651263B publication Critical patent/CN105651263B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C13/00 Surveying specially adapted to open water, e.g. sea, lake, river or canal
    • G01C13/008 Surveying specially adapted to open water, e.g. sea, lake, river or canal measuring depth of open water

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Hydrology & Water Resources (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

A shallow-water depth multi-source remote sensing fusion inversion method comprises: step 1, pre-processing the multispectral remote sensing images to obtain sea-surface reflectance; step 2, acquiring and processing field-measured water depth values; step 3, single-source water depth inversion and depth-segment labelling; step 4, multi-source water depth inversion fusion; step 5, water depth inversion accuracy verification. Taking the water depth inversion results of n single sources, their corresponding depth-segment label images and the fusion parameters as input, multi-source water depth inversion fusion is performed pixel by pixel; after the accuracy verification is completed, the final water depth values are output as the actual water depth data of the remote sensing images. Compared with existing inversion methods, this method can comprehensively exploit the different responses of multiple remote sensing data sources to water depth information, mine the bathymetric data they contain and improve inversion accuracy; through decision-level fusion it is particularly suitable for bathymetry of shallow-water areas under complex conditions.

Description

Shallow-water depth multi-source remote sensing fusion inversion method
Technical field
The present invention relates to a bathymetric surveying method and belongs to the technical field of space remote sensing; in particular, it relates to a shallow-water depth multi-source remote sensing fusion inversion method that can exploit multiple remote sensors to sound ocean water depth.
Background art
Bathymetric data are a necessary basis for ensuring safe ship navigation, developing ports, wharves and ocean engineering, and formulating coastal and island planning. Compared with in-situ water depth measurement, remote sensing offers wide coverage, short revisit cycles, low cost and high spatial resolution. Since the 1970s, various passive remote sensing water depth inversion models have been studied at home and abroad; the conventional visible-light water depth inversion models mainly include analytical models, semi-analytical/semi-empirical models and statistical models. In recent years these models have been applied to bathymetry of rivers, lakes, reservoirs, islands and littoral zones.
Visible-light remote sensing inversion of water depth is an effective way to obtain the depth of complex shallow-sea terrain; in particular, it can retrieve depth information in areas that ships cannot approach or enter. However, because such models struggle to reconcile physical mechanism with parameterisation, the accuracy of existing visible-light remote sensing bathymetry models leaves limited room for further improvement.
Multi-source remote sensing water depth inversion can overcome the environmental limitations of single-source imaging, and the richer band information and differing spectral resolutions provided by multi-source images also benefit the extraction of water depth information. Current research applying multi-source data to remote sensing bathymetry, however, mostly concerns interpolation of spatial information and has not been developed at the decision-fusion level. Decision-level fusion can make full use of existing remote sensing image resources and information, providing a new way to improve the accuracy of optical remote sensing water depth inversion.
Chinese patent application 201310188829.4 (publication number CN 104181515A) discloses "a shallow-water depth inversion method based on blue-yellow band hyperspectral data". It addresses the fact that optical water depth inversion models for clear water bodies are mostly built for multispectral data, whose wide bands and limited spectral information constrain such algorithms. Based on hyperspectral data and the optical attenuation mechanism of the water body, that invention proposes a new method of inverting the shallow-water depth of clear water bodies using blue-yellow band (450-610 nm) hyperspectral data. The method can accurately extract the distribution of shallow-water depths within 30 m and, for a given remote sensor, requires only one calibration of the algorithm coefficients, substantially improving universality. However, it uses the image of a single-source remote sensor as the detection data source; the spectral bands and spectral information of the available imagery are limited in scope, which is unfavourable to improving the accuracy of shallow-water depth inversion for bathymetry, and in particular its detection of shallow-sea water depth under complex conditions is insufficient.
Summary of the invention
The invention provides a shallow-water depth multi-source remote sensing fusion inversion method based on decision-level fusion, to solve the problems in the prior art that only a single-source remote sensor image is used as the data source, the spectral bands and spectral information of the imagery are limited in scope, and the bathymetric accuracy is poor.
The shallow-water depth multi-source remote sensing fusion inversion method comprises the following steps:
Step 1: pre-process the multispectral remote sensing images to obtain sea-surface reflectance;
The pre-processing includes radiance conversion, atmospheric correction and sun-glint removal;
Step 2: acquire and process field-measured water depth values;
Acquire the bathymetric data of the test area together with the corresponding longitude and latitude coordinates; determine the tide height at the measurement moment from a tide table and correct the bathymetric data to depths referenced to the theoretical depth datum; then, according to the acquisition moment of each multispectral remote sensing image, apply a tidal correction to the datum-referenced bathymetric data to obtain the instantaneous water depth;
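The two-stage tidal handling described above can be sketched as follows. This is a minimal illustration under an assumed sign convention (tide heights measured positive above the theoretical depth datum); the function and variable names are hypothetical, not from the patent:

```python
def to_chart_datum(measured_depth_m, tide_height_at_sounding_m):
    """Reduce a sounding to the theoretical depth datum: the echo sounder
    measures surface-to-seabed distance, so the tide height above datum at
    the sounding moment is subtracted."""
    return measured_depth_m - tide_height_at_sounding_m


def to_instantaneous_depth(chart_datum_depth_m, tide_height_at_image_m):
    """Restore the instantaneous water depth at the satellite acquisition
    moment by adding back the tide height at that moment."""
    return chart_datum_depth_m + tide_height_at_image_m


# Example: a 12.4 m sounding taken at 1.9 m tide, image acquired at 0.6 m tide.
d_datum = to_chart_datum(12.4, 1.9)
d_image = to_instantaneous_depth(d_datum, 0.6)
```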
Step 3: single-source water depth inversion and depth-segment labelling;
According to the relation between the water depth at the depth control points and the reflectance of the corresponding image pixels, perform statistical regression using the multiband model; the parameters output from the single-source image water depth inversion serve as an input to the multi-source inversion fusion, and the multiband model is calibrated against them. The multiband model is
Z = A0 + Σ(i=1..n) Ai·Xi (1)
Xi = ln(ρi − ρsi) (2)
where Z is the water depth, n is the number of bands participating in the inversion, A0 and Ai are coefficients to be determined, ρi is the reflectance of the i-th band, and ρsi is the deep-water reflectance of that band;
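As a sketch of the calibration step (not the patent's own implementation), the coefficients A0, Ai of the log-linear model (1)-(2) can be fitted to the depth control points by ordinary least squares; here in pure Python via the normal equations, with all names illustrative:

```python
import math


def fit_multiband(depths, reflectances, deep_water_refl):
    """Fit Z = A0 + sum_i A_i * ln(rho_i - rho_si) by ordinary least squares.

    depths         : measured depths at the control points
    reflectances   : per-point tuples of band reflectances rho_i
    deep_water_refl: per-band deep-water reflectances rho_si
    Returns [A0, A1, ..., An].
    """
    # Design matrix: intercept column plus X_i = ln(rho_i - rho_si).
    X = [[1.0] + [math.log(r - s) for r, s in zip(row, deep_water_refl)]
         for row in reflectances]
    p = len(X[0])
    m = len(X)
    # Normal equations (X^T X) a = X^T y.
    A = [[sum(X[k][i] * X[k][j] for k in range(m)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[k][i] * depths[k] for k in range(m)) for i in range(p)]
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0] * p
    for i in range(p - 1, -1, -1):
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, p))) / A[i][i]
    return a
```

On noise-free synthetic data generated from known coefficients, the fit recovers them to machine precision; with real control points the residuals give the regression error.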
The depth control points are divided into multiple depth segments as input, and the mean relative error of each depth segment is output as another input to the multi-source inversion fusion, i.e. a fusion parameter; the fusion parameters input to the multi-source inversion fusion also include the Kappa coefficient of each single-source image and the segmental mean accuracy of each depth segment,
δk = (1/n)·Σ(i=1..n) |zi − zi′| / zi (3)
K̂ = (n·Σi xii − Σi xi+·x+i) / (n² − Σi xi+·x+i) (4)
δma_k = (PAk + UAk) / 2 (5)
where n is the number of depth control points and k denotes a depth segment; in formula 3, δk is the mean relative error, zi is the measured value of the i-th depth control point and zi′ is its inverted value; in formula 4, K̂ is the Kappa coefficient, xii is the number of correctly classified control points, and xi+, x+i are the row and column totals of the error matrix obtained when the depth control points are counted by segment; in formula 5, δma_k is the segmental mean accuracy, PAk is the producer's accuracy of the k-th depth segment and UAk is the user's accuracy of the k-th depth segment;
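Formulas (3) and (5) translate directly into code; a minimal sketch with illustrative function names:

```python
def mean_relative_error(measured, inverted):
    """Formula (3): delta_k, the mean relative error over the n depth
    control points of one depth segment (measured z_i vs inverted z_i')."""
    return sum(abs(z - zp) / z for z, zp in zip(measured, inverted)) / len(measured)


def segment_mean_accuracy(pa_k, ua_k):
    """Formula (5): the segmental mean accuracy of depth segment k, the
    average of producer's accuracy PA_k and user's accuracy UA_k."""
    return (pa_k + ua_k) / 2.0
```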
Using the fusion parameters and the whole-scene remote sensing image, the single-source water depth inversion result is calculated, corrected to the theoretical depth datum, and then segmented to obtain the depth-segment label image;
Step 4: multi-source water depth inversion fusion;
Taking the water depth inversion results of the n single sources, their corresponding depth-segment label images and the fusion parameters as input, multi-source water depth inversion fusion is performed pixel by pixel, which specifically includes:
a) When the vote count t of some depth segment satisfies t ≥ ⌊n/2⌋ + 1, the inversion results of at least ⌊n/2⌋ + 1 images fall in the same depth segment, where ⌊·⌋ denotes rounding down. If two or more of these images give equal inverted depth values, that value is directly assigned to the current pixel; otherwise, the mean relative errors and mean accuracies of these images in that depth segment are compared, and the value of the image with the largest segmental mean accuracy is taken as the final pixel value. Only when the image with the largest segmental mean accuracy also has the largest mean relative error in that segment is the image with the second-largest mean accuracy selected instead;
b) When the maximum vote count t satisfies t ≤ ⌊n/2⌋ and x (x ≥ 2) depth segments each obtain t votes, the Kappa coefficients and the segmental mean accuracies of the n classifiers in their respective depth segments are compared. If the image with the largest Kappa coefficient and the image with the largest mean accuracy are judged to be in the same depth segment and are the same scene, the water depth value of that image pixel is taken as the result; if they are not the same scene, the one of the two with the smaller mean relative error in that depth segment is chosen. If the depth segments judged by the image with the largest Kappa coefficient and by the image with the largest mean accuracy differ, the water depth value of the former is taken. If only one segment obtains t votes, then within that segment the value of the image with the largest segmental mean accuracy is taken as the final pixel value; when the image with the largest segmental mean accuracy also has the largest mean relative error in that segment, the image with the second-largest mean accuracy is selected;
c) When the maximum vote count t = 1, the water depth value of the image with the largest Kappa coefficient is selected;
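The per-pixel voting logic can be sketched as follows. This is a simplified reading, under stated assumptions: ties inside a segment are broken by the highest segmental mean accuracy with the lower mean relative error preferred, whereas the patent's full rule b) additionally compares the Kappa-best image against the accuracy-best image; all names are illustrative:

```python
from collections import Counter


def fuse_pixel(values, segments, kappas, seg_acc, seg_err):
    """Simplified per-pixel decision fusion in the spirit of rules a)-c).

    values[i]   : inverted depth of single source i at this pixel
    segments[i] : depth-segment label of source i at this pixel
    kappas[i]   : Kappa coefficient of source i
    seg_acc[i]  : dict segment -> segmental mean accuracy of source i
    seg_err[i]  : dict segment -> segmental mean relative error of source i
    """
    n = len(values)
    seg, t = Counter(segments).most_common(1)[0]
    cand = [i for i in range(n) if segments[i] == seg]
    if t >= n // 2 + 1 and len(set(values[i] for i in cand)) == 1:
        return values[cand[0]]        # rule a): a majority agrees exactly
    if t >= 2:                        # rules a)/b): plurality segment wins;
        # prefer the highest segmental accuracy, then the lower relative error
        best = max(cand, key=lambda i: (seg_acc[i][seg], -seg_err[i][seg]))
        return values[best]
    # rule c): every source votes for a different segment ->
    # trust the source with the largest Kappa coefficient
    return values[max(range(n), key=lambda i: kappas[i])]
```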
Step 5: water depth inversion accuracy verification;
The accuracy verification compares, at the depth checkpoints, the single-source inversion results before fusion with the multi-source inversion result after fusion; after the accuracy verification is completed, the final water depth values are output as the actual water depth data of the remote sensing images.
In the shallow-water depth multi-source remote sensing fusion inversion method described above, the radiance conversion in step 1 converts the DN values of the remote sensing images into radiance values; the sun-glint removal may use the median, mean or wavelet method; the atmospheric correction may use the FLAASH, dark-pixel or 6S atmospheric correction method.
The reference image selected for decision fusion in the present invention depends on the scale and resolution required of the water depth image, without special requirements. If the coarsest-resolution image is taken as the reference, the fusion runs somewhat faster, but spatial matching then represents each whole pixel by the decision-fusion value at the pixel centre coordinate, and the amount of information lost is large. Considering both processing efficiency and inversion fusion accuracy, it is therefore preferred to use the water depth inversion result image generated from the highest-resolution image as the reference, while matching the pixel centre coordinates of the reference image with the positions of the water depth images of the other remote sensing sources, so as to obtain all single-source inverted depth values and other information at each coordinate before carrying out the decision fusion; this reduces possible information loss and guarantees inversion accuracy.
Beneficial effects of the present invention:
Compared with existing inversion methods, this method can comprehensively exploit the different responses of multiple remote sensing data sources to water depth information, expand the usable range of spectral bands and spectral information of remote sensing imagery, mine the bathymetric data they contain and improve inversion accuracy; through decision-level fusion it is particularly suitable for bathymetry of shallow-water areas under complex conditions.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the multi-source water depth inversion fusion flow chart of the present invention;
Fig. 3a is the scatter diagram of the multi-source water depth fusion result;
Fig. 3b is the scatter diagram of the single-source WorldView-2 water depth inversion result;
Fig. 3c is the scatter diagram of the single-source Pleiades water depth inversion result;
Fig. 3d is the scatter diagram of the single-source QuickBird water depth inversion result;
Fig. 3e is the scatter diagram of the single-source SPOT-6 water depth inversion result;
Fig. 4 shows the multi-source remote sensing water depth inversion fusion result of the present invention.
Embodiment
To make the purpose, technical scheme and advantages of the embodiments of the present invention clearer, the technical scheme in the embodiments of the present invention is described clearly and completely below in conjunction with the accompanying drawings.
The specific embodiment of the invention is described in further detail with reference to Fig. 1. The shallow-water depth multi-source remote sensing fusion inversion method specifically includes the following steps:
Step 1: multispectral remote sensing data pre-processing:
The multispectral remote sensing images acquired by the multiple-source sensors and participating in the water depth inversion fusion must first be pre-processed, including radiance conversion, atmospheric correction and sun-glint removal. Radiance conversion is the process of converting the image DN values into radiance; the conversion formula differs between remote sensing data products and typically takes one of two product-specific forms.
The corresponding parameters can be obtained from the metadata file of the image. After the multispectral radiance images are obtained, atmospheric correction is performed with a method such as FLAASH, dark-pixel or 6S to obtain sea-surface reflectance data; then, to remove interference from sun glint and floating objects on the sea surface, glint removal is performed with the median, mean or wavelet method.
Step 2: acquisition and processing of field-measured water depth values:
The bathymetric data of the test area are obtained with a multi-beam echo sounder or other sounding means, together with the corresponding longitude and latitude coordinates. The tide height at the measurement moment is confirmed from a tide table and the bathymetric data are corrected accordingly.
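The two DN-to-radiance forms commonly seen in practice can be sketched as follows. This is a hedged illustration: the exact formula and parameter names (e.g. gain/offset, or an absolute calibration factor divided by effective bandwidth, as in WorldView-style products) come from each product's metadata file and vary by sensor:

```python
def dn_to_radiance_linear(dn, gain, offset):
    """Common form 1 (assumed): L = gain * DN + offset."""
    return gain * dn + offset


def dn_to_radiance_scaled(dn, abs_cal_factor, effective_bandwidth):
    """Common form 2 (assumed): L = absCalFactor * DN / effectiveBandwidth."""
    return abs_cal_factor * dn / effective_bandwidth
```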
Step 3: single-source water depth inversion and depth-segment labelling:
According to the relation between the water depth at the depth control points and the reflectance of the corresponding image pixels, statistical regression is performed using the multiband model; the parameters output from the single-source image water depth inversion serve as an input to the multi-source inversion fusion, and the multiband model is calibrated against them. The multiband model is
Z = A0 + Σ(i=1..n) Ai·Xi (1)
Xi = ln(ρi − ρsi) (2)
where Z is the water depth, n is the number of bands participating in the inversion, A0 and Ai are coefficients to be determined, ρi is the reflectance of the i-th band, and ρsi is the deep-water reflectance of that band.
The depth control points are divided into multiple depth segments as input, and the mean relative error of each depth segment is output as another input to the multi-source inversion fusion, i.e. a fusion parameter;
The input fusion parameters also include the Kappa coefficient of each single-source image and the segmental mean accuracy of each depth segment,
δk = (1/n)·Σ(i=1..n) |zi − zi′| / zi (3)
K̂ = (n·Σi xii − Σi xi+·x+i) / (n² − Σi xi+·x+i) (4)
δma_k = (PAk + UAk) / 2 (5)
where n is the number of depth control points and k denotes a depth segment; in formula 3, δk is the mean relative error, zi is the measured value of the i-th depth control point and zi′ is its inverted value; in formula 4, K̂ is the Kappa coefficient, xii is the number of correctly classified control points, and xi+, x+i are the row and column totals of the error matrix obtained when the depth control points are counted by segment; in formula 5, δma_k is the segmental mean accuracy, PAk is the producer's accuracy of the k-th depth segment and UAk is the user's accuracy of the k-th depth segment;
Using the obtained parameters and the whole-scene remote sensing image, the single-source water depth inversion result is calculated, corrected from the instantaneous water depth to the theoretical depth datum, and then segmented to obtain the depth-segment label image;
The error matrix at the depth control points of the WorldView-2 image used in the multi-source inversion fusion method is taken as an example, as shown in Table 1.
Table 1: error matrix of the WorldView-2 depth control points
In the error matrix, columns represent the ground reference information and rows the classes obtained from the remote sensing data; the main diagonal elements (e.g. x11, written xii in the formulas) are the correctly classified pixels, and the off-diagonal elements are the pixels misclassified by the remote sensing data relative to the ground reference. In this experiment, 1-4 denote the 4 depth segments divided at the intervals 2 m, 5 m and 10 m, z is the measured depth and z′ the inverted depth. The columns give the numbers of control points in the 4 measured depth segments and the rows the numbers of control points assigned to the 4 segments by the image inversion; the main diagonal counts the points whose inverted depth falls in the correct segment, while the off-diagonal entries count wrongly segmented points. The producer's accuracy (PA) in the error matrix is the probability that, given a depth control point in class k, the corresponding pixel is classified as k when the depth is inverted from the image; it is obtained by dividing the number of correct classifications of class k by the sum of the k-th column (written x+i in the formulas). The user's accuracy (UA) is the percentage of control points whose true depth actually belongs to class k among those assigned to class k by the image inversion; it is computed as the number correctly classified as class k divided by the total classified as k (i.e. the sum of row k, written xi+ in the formulas).
The Kappa coefficient measures the agreement, or accuracy, between the classification map and the reference data, expressed through the probabilistic consistency given by the main diagonal and the row and column totals. The Kappa coefficient of 0.7686 in the example can be interpreted as meaning that the depth distribution obtained from this WorldView-2 image inversion is 76.86% better than a random division into the depth segments.
Producer's and user's accuracy are better the closer they are to 1; the optimum is when both equal 1. Therefore, to weigh them without bias, and also to simplify the final number of parameters participating in the decision fusion, this embodiment takes their average as the fusion parameter, i.e. the segmental mean accuracy.
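Producer's accuracy, user's accuracy and the Kappa coefficient can be computed directly from such an error matrix. A sketch using a small hypothetical 2x2 matrix (Table 1 itself is not reproduced here, so the numbers below are illustrative only):

```python
def error_matrix_stats(m):
    """Compute producer's accuracy, user's accuracy and the Kappa coefficient
    from a square error (confusion) matrix, where rows are the classes from
    the image inversion and columns are the ground reference classes."""
    k = len(m)
    n = sum(sum(row) for row in m)                               # total points
    diag = sum(m[i][i] for i in range(k))                        # sum of x_ii
    row_tot = [sum(m[i]) for i in range(k)]                      # x_{i+}
    col_tot = [sum(m[r][i] for r in range(k)) for i in range(k)] # x_{+i}
    pa = [m[i][i] / col_tot[i] for i in range(k)]                # producer's
    ua = [m[i][i] / row_tot[i] for i in range(k)]                # user's
    chance = sum(row_tot[i] * col_tot[i] for i in range(k))
    kappa = (n * diag - chance) / (n * n - chance)               # formula (4)
    return pa, ua, kappa
```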
Using the fusion parameters and the whole-scene remote sensing image, the single-source water depth inversion result is calculated, corrected to the theoretical depth datum, and then segmented to obtain the depth-segment label image;
Step 4: multi-source water depth inversion fusion:
The single-source water depth inversion results, the depth-segment label images and the fusion parameters are taken as the input of the multi-source inversion fusion, which is carried out pixel by pixel. The final value is determined from the votes that the 4 single sources (i.e. remote sensing images) cast for the depth segment of the current pixel, as follows:
a) When the vote count of some depth segment is greater than or equal to 3, the inversion results of 3 or more images fall in the same depth segment. If 2 or more of these images give equal inverted depth values, that value is directly assigned to the current pixel; otherwise, the mean relative errors and mean accuracies of these images in that depth segment are compared, and the depth value of the image with as large a mean accuracy and as small a mean relative error as possible is chosen for the current pixel. On the premise of a larger mean accuracy, the mean relative error of the image in this depth segment is examined: if the image with the largest mean accuracy also has the largest mean relative error, it is abandoned and the image with the second-largest mean accuracy is selected;
b) When the maximum vote count equals 2 and the votes split 2-2, the inverted depths of two pairs of images each fall in one depth segment. The Kappa coefficients and the mean accuracies of the 4 classifiers in their respective depth segments are then examined: if the image with the largest Kappa coefficient and the image with the largest mean accuracy are judged to be in the same depth segment and are the same scene, the depth value of that image pixel is taken as the result; if they are not the same scene, the one of the two with the smaller mean relative error in that depth segment is selected. If the depth segments judged by the image with the largest Kappa coefficient and by the image with the largest mean accuracy differ, the depth value of the former is selected. When the maximum vote count equals 2 and the votes split 2-1-1, then within the segment with 2 votes the depth value of the image with as large a mean accuracy and as small a mean relative error as possible is chosen for the current pixel;
c) When the maximum vote count is 1, i.e. the classification results of the 4 single sources all differ and the inverted depths of the 4 scenes fall in 4 different depth segments, the image with the largest Kappa coefficient is trusted and its depth value is adopted.
Step 5: water depth inversion accuracy verification:
The accuracies of the single-source inversion results and of the multi-source inversion result are verified with the checkpoints; the overall and per-depth-segment mean relative errors and mean absolute errors are calculated, so that the accuracy of the multi-source inversion fusion can be verified.
(1) Fusion parameters and fusion model performance of the multi-source water depth inversion
In this embodiment, a QuickBird image of 10 January 2008, a WorldView-2 image of 7 February 2010, a Pleiades image of 9 March 2012 and a SPOT-6 image of 5 April 2013 are chosen, and the process of multi-source water depth inversion fusion is compared and verified. Table 2 lists the inversion parameters obtained by inverting the water depth with the blue, green and red three-band log-linear model, together with the segmental mean relative errors at the control points. The multi-source water depth inversion fusion takes the WorldView-2 inversion image as the basis and performs decision fusion with the inversion results of the Pleiades, QuickBird and SPOT-6 images. In Table 2 the image types are arranged from left to right in order of gradually increasing spatial resolution, and the segmental mean accuracies and segmental mean relative errors 1-4 denote in turn the 4 depth segments divided at the points 2 m, 5 m and 10 m.
Table 2: single-source remote sensing water depth inversion parameters and multi-source decision fusion parameters
Comparative analysis shows that the image with the highest overall segmentation accuracy is SPOT-6, and the worst is QuickBird. Considering segmental mean accuracy and segmental mean relative error together, the best in segment 1 is the Pleiades image, whose segmentation accuracy is the highest and whose mean relative error in this segment is also small; next is the SPOT-6 image, whose mean relative error is 8 percentage points smaller than the former but whose segmental mean accuracy is inferior. Although the Pleiades image has the best segmental mean accuracy in segment 2, its mean relative error in this segment is the largest of the 4 scenes, so on the premise of a higher segmental mean accuracy the image with the better mean relative error is SPOT-6. In segments 3 and 4 the SPOT-6 image is the best in both segmental mean accuracy and segmental mean relative error.
As shown in Fig. 2, in the result of the multi-source water depth inversion fusion, the generated inverted-depth fusion image has 1002 rows and 1054 columns, i.e. 1,056,108 pixels in total. Statistics show that the pixels whose depth value is determined by rule 2 are the most numerous, 860,835, accounting for 81.51% of all pixels; this shows that, in the pixel-by-pixel decision fusion, the maximum vote count of most pixels exceeds half of the total, that is, at least 3 scenes invert the depth of the pixel into the same depth segment, and the final depth value comes from the inversion image with the largest mean accuracy in that segment. The next most frequently executed is rule 6, with 72,132 pixels (6.83%); the least is rule 9, with only 128 pixels. Rule 9 is executed only when the depths inverted by the 4 images all fall in different depth segments; this means that although the inversion abilities of the 4 scenes differ, their results do not differ much in the depth segmentation, only a very small part showing obvious differences, so all 4 scenes can play a role in the decision fusion.
(2) Overall accuracy verification and analysis of the multi-source water depth inversion fusion
The accuracy evaluation indices obtained by comparing the multi-source inversion fusion result with the single-source results before fusion are shown in Table 3 below.
Table 3: overall accuracy comparison of the multi-source water depth inversion fusion
All three evaluation indices show that the result after multi-source decision fusion improves markedly on the results of the original image inversions. Ordered from small to large, the mean relative errors are those of the decision fusion image, the SPOT-6 image, the QuickBird image, the Pleiades image and the WorldView-2 image. Compared with the worst, the WorldView-2 image, the mean relative error of the fused image at the depth control points is reduced by more than 40 percentage points; and since the result image was initialised before fusion precisely with this scene's inversion result as the basis, this shows that decision fusion indeed improves the original image inversion to a large extent. Even compared with the SPOT-6 image, the best of the 4 scenes in inversion accuracy, the fused image reduces the relative error by 12.7 percentage points. The minimum of the mean absolute error, differing by 1.4 m from the maximum, is obtained by the fusion image together with the SPOT-6 and Pleiades images; the mean absolute errors of the QuickBird and WorldView-2 images are larger, 1.6 m and 1.8 m respectively, as much as 0.8 m and 1.0 m above the minimum. The Kappa coefficients used to evaluate segmentation accuracy likewise indicate that the fused image distinguishes the depth segment a pixel belongs to most accurately, followed by the Pleiades and SPOT-6 images, while the depth-segment label image inverted from the QuickBird image is less accurate. It is generally accepted that a Kappa value above 0.80 indicates very high agreement between the classification map and the ground reference information; the Kappa values of these 4 images all exceed the critical value, showing good consistency. The worst is the WorldView-2 image, ranked last with a Kappa value of 0.6139.
Figs. 3a, 3b, 3c, 3d and 3e give scatter plots of the measured versus inverted water depth before and after multi-source inversion fusion. The scatter plots show that, except for the Pleiades image, the other three scenes all invert depths below 2 m poorly. In the WorldView-2 scatter plot the data points are relatively concentrated, and its average relative error should be strongly affected by the shallow-water data points. The maximum depths inverted from the WorldView-2 and Pleiades images exceed the range of the measured depth checkpoints, which does not occur in the scatter plots of the QuickBird and SPOT-6 images.
As shown in Fig. 4, this embodiment combines the single-source inversion results of different resolutions through decision fusion. Within the 20 m zone surrounding the island, the shallow-water texture is fine and the depth gradient small, and the reef flat around the northern island is clearly visible; the texture of the deeper zones to the southwest and northeast of the island is coarser. At a depth of about 20 m the depth gradient is larger, and the transition from shallow to deep is quite apparent.
(3) Segment-wise accuracy verification and analysis of multi-source remote sensing water depth inversion fusion
Observing the segment-wise error distribution of the multi-source inversion fusion (Table 4), the average relative error and mean absolute error show no regular trend of increase or decrease with depth.
Table 4: Comparison of segment-wise errors of multi-source water depth inversion fusion
In the 0–2 m depth segment, although the average relative errors of the inversion results are generally low, significant gaps remain. The most accurate is the multi-source inversion fusion image, with an average relative error of 39.1% and a mean absolute error of 0.3 m. Next is the Pleiades image, whose average relative error is 3.9% higher and whose mean absolute error is equal to the former's. They are followed by the QuickBird, SPOT-6 and WorldView-2 images, whose average relative errors and mean absolute errors increase in turn; in particular, the average relative error of the WorldView-2 image in this segment is twice that of the SPOT-6 image, its gap to the best result (the inversion fusion image) reaches 210.1%, and its mean absolute error of 1.1 m is also almost four times that of the fusion image. In the 2–5 m segment the most accurate are the inversion fusion image and the SPOT-6 image, whose average relative errors and mean absolute errors are equal, at 5.3% and 0.2 m respectively. The inversion ability of the WorldView-2 image in this segment improves markedly over the shallow segment, with an average relative error of 28.8% and a mean absolute error of 1.0 m, but the gap to the best performers in this segment, the fusion image and SPOT-6, remains considerable. The QuickBird image ranks fourth with an average relative error of 32.8% and a mean absolute error of 1.4 m, and the Pleiades image, the most accurate in the 0–2 m segment, is the least accurate here. In the 5–10 m segment, ordered from small to large by average relative error and mean absolute error, the ranking is: the inversion fusion image together with the SPOT-6 image, then the QuickBird, Pleiades and WorldView-2 images; the largest and smallest average relative errors differ by 8 percentage points, and the mean absolute errors by at most 0.6 m. In the 10–20 m segment, the minimum average relative error and mean absolute error come from the SPOT-6 image and the maxima from the Pleiades image (relative errors 6.3% versus 22.5%; absolute errors 0.9 m versus 3.4 m). The accuracy of the inversion fusion image here is good, with an average relative error of 6.4% and a mean absolute error of 1.0 m.
Within the multi-source water depth inversion fusion, the SPOT-6 image, apart from its weak inversion ability in the shallow segment, is the most accurate of all the remote sensing depth-inversion images in every other depth segment. The Pleiades image effectively compensates for the shortcoming of SPOT-6 in the shallow zone, but is the least accurate of the four scenes in the 2–5 m and 10–20 m segments. The WorldView-2 image is the least accurate in the 0–2 m and 5–10 m segments and mediocre in the other two, while the accuracy of the QuickBird image hovers in the middle in every segment. Except in the 10–20 m segment, where its accuracy is slightly below that of the SPOT-6 image, the multi-source inversion fusion image is the most accurate in every depth segment.
Compared with existing inversion methods, the present method can comprehensively exploit the different responses of multiple remote sensing data sources to water depth information, expands the usable range of image spectral bands and spectral information, mines the bathymetric data they contain, and improves inversion accuracy through decision-fusion processing; it is particularly suitable for bathymetry of shallow-water areas under complex conditions.
Technical contents not described in detail in the present invention are known technologies.

Claims (2)

1. A multi-source remote sensing image inversion method for shallow water depth, characterized by comprising the following steps:
Step 1: pre-process the multispectral remote sensing images to obtain sea-surface reflectance;
The pre-processing includes radiance conversion, atmospheric correction and sun-glint removal;
Step 2: acquire and process field-measured water depth values;
Obtain the bathymetric data of the test area and the corresponding longitude and latitude coordinates, determine the tide height at the time of measurement from the tide table, and correct the bathymetric data to the depth relative to the theoretical depth datum; then, according to the acquisition time of the multispectral remote sensing image, apply a tidal correction to the datum-referenced bathymetric data to obtain the instantaneous water depth;
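The two-stage tide correction of step 2 can be sketched as below. The sign convention (tide heights given above the theoretical depth datum) and all function names are assumptions for illustration, not specified by the patent.

```python
# Minimal sketch of the two-stage tide correction in step 2, assuming tide
# heights are measured above the theoretical depth datum. Names illustrative.

def to_depth_datum(measured_depth, tide_at_survey):
    """Reduce a sounding to the theoretical depth datum by removing the
    tide height present at the time of the survey (sign convention assumed)."""
    return measured_depth - tide_at_survey

def to_instantaneous_depth(datum_depth, tide_at_overpass):
    """Convert a datum-referenced depth to the instantaneous water depth
    at the satellite image acquisition time."""
    return datum_depth + tide_at_overpass

# A 10.2 m sounding taken when the tide stood 1.5 m above datum, converted
# to the instantaneous depth at an overpass with 0.8 m of tide:
z_datum = to_depth_datum(10.2, 1.5)            # 8.7 m relative to datum
z_inst = to_instantaneous_depth(z_datum, 0.8)  # 9.5 m at image time
```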
Step 3: single-source water depth inversion and depth-segment identification;
According to the relation between the water depth at the depth control points and the reflectance of the corresponding image pixels, perform a statistical regression with the multiband model; the parameters output by the single-source water depth inversion serve as inputs to the multi-source inversion fusion, and the multiband model is calibrated as follows:
Z = A_0 + \sum_{i=1}^{n} A_i X_i    (1)
X_i = \ln(\rho_i - \rho_{si})    (2)
wherein Z is the water depth, n is the number of bands participating in the inversion, A_0 and A_i are coefficients to be determined, ρ_i is the reflectance of the i-th band, and ρ_si is the deep-water reflectance of the i-th band;
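Calibrating the multiband model of Eqs. 1–2 amounts to an ordinary least-squares fit of depth against log-transformed reflectance. A sketch on synthetic data (the reflectance values and coefficients are invented for illustration):

```python
import numpy as np

# Sketch of calibrating the multiband model (Eqs. 1-2) by least squares:
# Z = A0 + sum_i A_i * X_i, with X_i = ln(rho_i - rho_si).
# rho: control-point reflectance per band; rho_s: deep-water reflectance.
rng = np.random.default_rng(0)
n_points, n_bands = 200, 4
rho_s = np.full(n_bands, 0.02)                 # deep-water reflectance per band
rho = rho_s + rng.uniform(0.01, 0.2, (n_points, n_bands))

X = np.log(rho - rho_s)                        # Eq. 2
true_A = np.array([12.0, -1.5, -2.0, 0.5, 1.0])  # [A0, A1..A4], synthetic
Z = true_A[0] + X @ true_A[1:] + rng.normal(0, 0.1, n_points)

# Least-squares estimate of the coefficients [A0, A1..An]
design = np.column_stack([np.ones(n_points), X])
A_hat, *_ = np.linalg.lstsq(design, Z, rcond=None)

Z_pred = design @ A_hat                        # inverted depths at control points
```

Once A_hat is calibrated against the depth control points, the same linear combination is applied to every pixel of the scene to produce the single-source inversion result.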
Divide the depth control points into multiple depth segments as input, and output the average relative error of each depth segment as another input, i.e. a fusion parameter, of the multi-source water depth inversion fusion; the fusion parameters input to the multi-source inversion fusion also include the Kappa coefficient of each single-source image and the segment mean accuracy of each depth segment;
\delta_k = \frac{1}{n}\sum_{i=1}^{n}\frac{|z_i - z_i'|}{z_i}    (3)
\hat{K} = \frac{n\sum_{i=1}^{k} x_{ii} - \sum_{i=1}^{k}(x_{i+} \times x_{+i})}{n^2 - \sum_{i=1}^{k}(x_{i+} \times x_{+i})}    (4)
\delta_{ma\_k} = \frac{PA_k + UA_k}{2}    (5)
wherein n is the number of depth control points and k denotes a depth segment; in Eq. 3, δ_k is the average relative error, z_i is the measured value at the i-th depth control point and z_i' its inverted value; in Eq. 4, K̂ is the Kappa coefficient, x_ii is the number of correctly classified control points, and x_{i+}, x_{+i} are the row and column marginal totals of the error matrix obtained when the depth control points are tallied by segment; in Eq. 5, δ_{ma_k} is the segment mean accuracy, PA_k is the producer's accuracy of the k-th depth segment and UA_k is its user's accuracy;
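The three fusion parameters of Eqs. 3–5 can be computed directly from the control-point depths and a segment confusion (error) matrix. A sketch, assuming rows of the matrix are inverted segments and columns are reference segments (the orientation is a convention not fixed by the patent):

```python
import numpy as np

def mean_relative_error(z, z_inv):
    """Eq. 3: average relative error over the depth control points."""
    z, z_inv = np.asarray(z, float), np.asarray(z_inv, float)
    return float(np.mean(np.abs(z - z_inv) / z))

def kappa(conf):
    """Eq. 4: Kappa coefficient of a k x k error matrix."""
    conf = np.asarray(conf, float)
    n = conf.sum()                                   # total control points
    diag = np.trace(conf)                            # sum of x_ii
    marg = (conf.sum(axis=1) * conf.sum(axis=0)).sum()  # sum of x_i+ * x_+i
    return float((n * diag - marg) / (n ** 2 - marg))

def segment_mean_accuracy(conf, k):
    """Eq. 5: mean of producer's and user's accuracy for segment k
    (row = inverted segment, column = reference segment assumed)."""
    conf = np.asarray(conf, float)
    pa = conf[k, k] / conf[k, :].sum()   # producer's accuracy PA_k
    ua = conf[k, k] / conf[:, k].sum()   # user's accuracy UA_k
    return float((pa + ua) / 2)
```

For example, a 2-segment error matrix [[8, 2], [1, 9]] (17 of 20 control points correctly segmented) yields a Kappa coefficient of 0.7.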
Using the fusion parameters and the whole remote sensing scene, compute the single-source water depth inversion result, correct it to the theoretical depth datum, and segment the corrected result to obtain the depth-segment identification image;
Step 4: multi-source water depth inversion fusion;
Taking the water depth inversion results of the n single sources together with their corresponding depth-segment identification images and fusion parameters as input, perform the multi-source inversion fusion pixel by pixel, specifically:
a) When the vote count of some depth segment is t and t ≥ ⌊n/2⌋ + 1, then ⌊n/2⌋ + 1 or more of the images place their inversion results in the same depth segment, where ⌊·⌋ denotes rounding down. In this case, if two or more images yield equal water depth inversion values, that value is directly assigned to the current pixel; otherwise, compare the average relative errors and mean accuracies of these images in that depth segment and take the value of the image with the largest segment mean accuracy as the final pixel value; only when the image with the largest segment mean accuracy also has the largest segment average relative error is the image with the second-largest mean accuracy selected;
b) When the maximum vote count t satisfies 2 ≤ t ≤ ⌊n/2⌋ and x (x ≥ 2) depth segments each obtain t votes, contrast the Kappa coefficients and the corresponding segment mean accuracies of the n classifiers. If the image with the largest Kappa coefficient and the image with the largest mean accuracy are judged to be in the same depth segment and are the same image, take the water depth value of that image's pixel as the result; if they are not the same scene, select the one of the two with the smaller average relative error in that depth segment; if the depth segment judged by the image with the largest Kappa coefficient differs from that of the image with the largest mean accuracy, take the former's water depth value. If only one segment obtains t votes, then within that segment take the value of the image with the largest segment mean accuracy as the final pixel value; when the image with the largest segment mean accuracy also has the largest segment average relative error, select the image with the second-largest mean accuracy;
c) When the maximum vote count t = 1, select the water depth value of the image with the largest Kappa coefficient;
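A greatly simplified per-pixel sketch of this vote may clarify the control flow. Only cases a) and c) are fully shown; the tie-breaking of case b) follows the same pattern and is collapsed here. The data structures (per-image segment-accuracy dicts, Kappa list) and all names are illustrative, not from the patent, and the relative-error tie-break of case a) is omitted.

```python
from collections import Counter

def fuse_pixel(depths, segments, seg_accuracy, kappas):
    """Fuse one pixel from n single-source inversions.
    depths[i]: inverted depth from image i; segments[i]: its depth segment;
    seg_accuracy[i]: dict segment -> mean accuracy for image i (Eq. 5);
    kappas[i]: Kappa coefficient of image i (Eq. 4)."""
    n = len(depths)
    seg, t = Counter(segments).most_common(1)[0]   # modal segment and its votes
    if t >= n // 2 + 1:                            # case a: majority segment
        in_seg = [i for i, s in enumerate(segments) if s == seg]
        vals = [depths[i] for i in in_seg]
        if len(set(vals)) < len(vals):             # >= 2 sources agree exactly
            return Counter(vals).most_common(1)[0][0]
        best = max(in_seg, key=lambda i: seg_accuracy[i][seg])
        return depths[best]                        # largest segment mean accuracy
    if t == 1:                                     # case c: no two sources agree
        return depths[max(range(n), key=lambda i: kappas[i])]
    # case b (2 <= t <= n//2): contrast Kappa and segment accuracy; collapsed
    # here to the Kappa-maximal image for brevity.
    return depths[max(range(n), key=lambda i: kappas[i])]
```

For instance, with four sources of which three fall in the 2–5 m segment and two of those agree on 5.1 m, the fused pixel value is 5.1 m.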
Step 5: water depth inversion accuracy verification;
The accuracy verification compares, at the depth checkpoints, the single-source inversion results before fusion with the multi-source inversion result after fusion; once the accuracy verification is complete, the final water depth values are output as the actual water depth data of the remote sensing images.
2. The multi-source remote sensing image inversion method for shallow water depth according to claim 1, characterized in that the radiance conversion in step 1 converts the DN values of the remote sensing image into radiance values; the sun-glint removal may use the median, mean or wavelet method; and the atmospheric correction may use the FLAASH, dark-pixel or 6S atmospheric correction method.
CN201510975396.6A 2015-12-23 2015-12-23 Shallow water depth multi-source remote sensing merges inversion method Expired - Fee Related CN105651263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510975396.6A CN105651263B (en) 2015-12-23 2015-12-23 Shallow water depth multi-source remote sensing merges inversion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510975396.6A CN105651263B (en) 2015-12-23 2015-12-23 Shallow water depth multi-source remote sensing merges inversion method

Publications (2)

Publication Number Publication Date
CN105651263A CN105651263A (en) 2016-06-08
CN105651263B true CN105651263B (en) 2018-02-23

Family

ID=56476649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510975396.6A Expired - Fee Related CN105651263B (en) 2015-12-23 2015-12-23 Shallow water depth multi-source remote sensing merges inversion method

Country Status (1)

Country Link
CN (1) CN105651263B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109059796B (en) * 2018-07-20 2020-07-31 自然资源部第三海洋研究所 Shallow sea water depth multispectral satellite remote sensing inversion method for water depth control point-free area
CN109657392A (en) * 2018-12-28 2019-04-19 北京航空航天大学 A kind of high-spectrum remote-sensing inversion method based on deep learning
CN109631862A (en) * 2019-01-22 2019-04-16 青岛秀山移动测量有限公司 A kind of multi-Sensor Information Fusion Approach of intertidal zone integration mapping
CN111561916B (en) * 2020-01-19 2021-09-28 自然资源部第二海洋研究所 Shallow sea water depth uncontrolled extraction method based on four-waveband multispectral remote sensing image
CN111651707B (en) * 2020-05-28 2023-04-25 广西大学 Tidal level inversion method based on optical shallow water region satellite remote sensing image
CN112013822A (en) * 2020-07-22 2020-12-01 武汉智图云起科技有限公司 Multispectral remote sensing water depth inversion method based on improved GWR model
CN111947628B (en) * 2020-08-25 2022-05-27 自然资源部第一海洋研究所 Linear water depth inversion method based on inherent optical parameters
CN113326470B (en) * 2021-04-11 2022-08-16 桂林理工大学 Remote sensing water depth inversion tidal height correction method
CN113255144B (en) * 2021-06-02 2021-09-07 中国地质大学(武汉) Shallow sea remote sensing water depth inversion method based on FUI partition and Randac
CN113639716A (en) * 2021-07-29 2021-11-12 北京航空航天大学 Depth residual shrinkage network-based water depth remote sensing inversion method
CN113793374B (en) * 2021-09-01 2023-12-22 自然资源部第二海洋研究所 Method for inverting water depth based on water quality inversion result by improved four-band remote sensing image QAA algorithm
CN114943161B (en) * 2022-07-27 2022-09-27 中国水利水电科学研究院 Inland lake terrain inversion method based on multi-source remote sensing data
CN117514148B (en) * 2024-01-05 2024-03-26 贵州航天凯山石油仪器有限公司 Oil-gas well working fluid level identification and diagnosis method based on multidimensional credibility fusion

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9046363B2 (en) * 2012-04-27 2015-06-02 SATOP GmbH Using multispectral satellite data to determine littoral water depths despite varying water turbidity
CN104457901B (en) * 2014-11-28 2018-01-05 南京信息工程大学 A kind of method and system for determining the depth of water

Also Published As

Publication number Publication date
CN105651263A (en) 2016-06-08

Similar Documents

Publication Publication Date Title
CN105651263B (en) Shallow water depth multi-source remote sensing merges inversion method
CN102254319B (en) Method for carrying out change detection on multi-level segmented remote sensing image
CN102436652B (en) Automatic registering method of multisource remote sensing images
CN102708369B (en) Sea ice parameter extraction method on basis of satellite image
CN103148842B (en) Shallow sea sand wave area multi-beam sounding terrain reconstruction method based on remote sensing image features
CN102750696B (en) Affine invariant feature and coastline constraint-based automatic coastal zone remote-sensing image registration method
CN109059796A (en) The multispectral satellite remote sensing inversion method of shallow water depth without depth of water control point region
CN101980293B (en) Method for detecting MTF of hyperspectral remote sensing system based on edge image
CN107610164B (en) High-resolution four-number image registration method based on multi-feature mixing
CN103292792B (en) Actual measurement SVP reconstruction method suitable for submarine detection and pseudo-landform processing
CN105627997A (en) Multi-angle remote sensing water depth decision fusion inversion method
CN111008664B (en) Hyperspectral sea ice detection method based on space-spectrum combined characteristics
CN102279973A (en) Sea-sky-line detection method based on high gradient key points
CN106156758B (en) A kind of tidal saltmarsh method in SAR seashore image
CN109461178A (en) A kind of monocular image depth estimation method and device merging sparse known label
CN112013822A (en) Multispectral remote sensing water depth inversion method based on improved GWR model
CN103871039A (en) Generation method for difference chart in SAR (Synthetic Aperture Radar) image change detection
CN103729846A (en) LiDAR point cloud data edge detection method based on triangular irregular network
CN104680151B (en) A kind of panchromatic remote sensing image variation detection method of high-resolution for taking snow covering influence into account
CN105139401A (en) Depth credibility assessment method for depth map
CN110189282A (en) Based on intensive and jump connection depth convolutional network multispectral and panchromatic image fusion method
CN111561916B (en) Shallow sea water depth uncontrolled extraction method based on four-waveband multispectral remote sensing image
CN113111706B (en) SAR target feature unwrapping and identifying method for azimuth continuous deletion
CN108230365A (en) SAR image change detection based on multi-source differential image content mergence
CN104613945A (en) Reconstruction method for terrain of shallow-sea large-sized complicated sand wave area

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180223

Termination date: 20181223