NL2026463B1 - Method and tool to georeference, cross-calibrate and fuse remote sensed imagery - Google Patents

Method and tool to georeference, cross-calibrate and fuse remote sensed imagery

Info

Publication number
NL2026463B1
Authority
NL
Netherlands
Prior art keywords
layer
network
bands
correction
data
Prior art date
Application number
NL2026463A
Other languages
Dutch (nl)
Inventor
Hefele John
Vercruyssen Nathan
Esposito Marco
Original Assignee
Cosine Remote Sensing B V
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cosine Remote Sensing B V filed Critical Cosine Remote Sensing B V
Priority to NL2026463A
Application granted
Publication of NL2026463B1

Classifications

    • G01C 11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G06T 3/4046 Scaling of whole images or parts thereof, e.g. expanding or contracting, using neural networks
    • G06T 3/4061 Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution, by injecting details from different spectral ranges
    • G06T 5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G06T 5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G06T 7/337 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods involving reference images or patches
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V 10/803 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of input or preprocessed data
    • G06V 10/82 Arrangements for image or video recognition or understanding using neural networks
    • G06V 20/10 Terrestrial scenes
    • G06V 20/194 Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • B64G 1/1021 Earth observation satellites
    • B64U 2101/30 UAVs specially adapted for imaging, photography or videography
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10036 Multispectral image; Hyperspectral image
    • G06T 2207/20084 Artificial neural networks [ANN]


Abstract

The invention comprises a method to geo-reference, cross-calibrate and fuse remote sensed imagery using novel computer-vision and machine-learning techniques, and a software tool that implements this method. By remote sensed, we refer to data that has been acquired by aircraft, UAVs (Unmanned Aerial Vehicles) or satellites. The geo-referencing method maps datasets from various instruments onto the same grid using computer vision techniques, which allows the datasets to be used in the correction networks described below. The cross-calibration is performed with a machine learning network, hereafter referred to as the Cross-Calibration network, which takes a single instrument's imagery and associated metadata as input and enhances it radiometrically, spectrally and/or spatially. The fusion is also performed with a machine learning network, hereafter referred to as the Fusion network, which takes multiple geo-referenced datasets as input and fuses them together to create a data cube with greater radiometric, spectral and/or spatial resolution than any of the single inputs.

Description

Method and tool to georeference, cross-calibrate and fuse remote sensed imagery

Field and background of the invention

Cross-calibration and fusion techniques utilize data from various remote sensing instruments to enhance a single instrument's imagery or to create a greater combined product. These techniques are especially beneficial when applied to small satellites, as their specialized hardware and high temporal resolution, afforded by lower production costs, can be combined with the high-quality data of larger, mostly institutional satellites. Because of this, these techniques have myriad applications in agricultural monitoring, early hazard detection, and general change monitoring activities where short revisit times and specialized sensors are a necessity.
Cross-calibration and fusion techniques require datasets that are geo-referenced on the same grid. This invention addresses this need with a computer-vision based geo-referencing method that takes only image data as input. Because this method does not require any other auxiliary information, it can easily be applied to arbitrary instrument pairs, which considerably increases the flexibility of the method.
The cross-calibration network, shown in Fig. 1, enhances the imagery of an instrument by correcting for various sources of error. Traditionally, these sources of error are identified and corrected for through meticulous laboratory characterization procedures and/or vicariously in orbit. These processes, however, can be circumvented by supplying a cross-calibration network with geo-referenced data from other, more advanced and often institutional instruments that can act as reliable ground truths.
The new production process, therefore, could be to create an instrument, deploy it, take a few hundred images, feed them into the tool to find the optimal correction weight factors, and then send the weight factors up to the instrument. The weight factors are small in data volume (of the order of a few megabytes) and therefore can be easily transmitted to a satellite in orbit.
The cross-calibration network is embedded with correction matrices which correspond to distorting effects.
When the network is trained, the values of these matrices are determined to yield the highest quality data product.
These matrices can then be analyzed, to identify the sources of error and potentially improve the overall production of the instrument.
The fusion network, shown in Fig. 2, combines the best characteristics of multiple instruments into a single product.
For example, a small satellite with high spectral resolution can be combined with one of high spatial resolution to form a product with both high spectral and spatial resolution.
Such a product would be of use in agricultural, hazard detection or remote material identification applications.
Prior art

In Earth Observation Open Science and Innovation: Volume 15, chapter "Machine Learning Applications for Earth Observation", it is explained that machine learning techniques can be used for cross-calibration activities.
The patent US2020025613A1, HYPERSPECTRAL SENSING SYSTEM AND PROCESSING METHODS FOR HYPERSPECTRAL DATA, claims: "[309] Any suitable method may be used to perform the correlation.
Suitable methods may include, without limitation, regression, interpolation, neural networks, etc.
In some examples, performing the correlation includes using a nonlinear optimization method (e.g., a regularization method) to identify a fitting function that maps the remote coefficients to ground-truth coefficients with high accuracy (e.g., with error minimized according to some predetermined scheme, such as a Tikhonov regularization)."

Shao and Cai (2018) proposed a neural network architecture for the data fusion of multi-spectral with panchromatic imagery.
This invention differs from the described fusion network in the following respect:
1. The cost function is calculated using generalized automatic differentiation techniques, where the error is propagated through the band aggregation functions. The approach can operate on any collection of remote sensed imagery and is not necessarily tailored to a specific scheme.

Detailed description

Computer-Vision Georeferencing method: The computer vision georeferencing occurs in three general steps (a code sketch of these steps is given below). First, the image to be geo-referenced and the reference image (the image that will be geo-referenced to) are rebinned and rotated to approximately match each other's ground sampling distances and orientations. The coordinates of the reference image are also altered accordingly. This, however, does not need to be done precisely, as fine corrections are made during the final georeferencing processing step. The second processing step uses the computer vision technique known as "template matching". Here, a larger image is scanned with a smaller image referred to as the template. At every scan step a similarity between the template and the portion of the larger image under it is computed, which in turn creates a similarity map. The highest or lowest point on the map (depending on the similarity metric used) is then chosen as the relative location of the template with respect to the larger image. The larger image is then cropped so that it roughly matches the template in size and alignment. The third processing step is a newly invented "dense flow alignment" technique. The dense flow field, defined as the displacement between the individual pixels of two images, is calculated using Farneback's algorithm or some other similar method. Using the calculated displacement field, the input image is remapped to the larger one. The flexibility of this method essentially allows any transformation to be applied to the smaller image, such as scale adjustment, rotation, shear, etc.

Cross-Calibration Network Architecture: The Cross-Calibration Network (CCN), pictured in Fig. 1, radiometrically corrects remote sensed imagery by making it coincide with a collection of geo-referenced ground truths. The network is composed of three layers: an Embedded Correction Matrices (ECM) layer (Fig. 1 [2]), a General Correction Network (GCN) layer (Fig. 1 [3]) and a Band Aggregation layer (Fig. 1 [5]).
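Before turning to these layers in detail, the three-step georeferencing procedure described above can be sketched, for illustration only, assuming OpenCV and NumPy, single-band floating-point inputs that have already been rebinned and rotated (step one), and normalised cross-correlation as the similarity metric; the function name, parameter values and the 8-bit rescaling for the flow step are assumptions of this sketch, not requirements of the method.

```python
import cv2
import numpy as np

def georeference(template: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Align `template` (the image to be geo-referenced) to `reference`."""

    def to_u8(img):
        # Farneback's algorithm expects single-channel 8-bit input.
        return cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Step 2: template matching -- scan the reference with the template and
    # take the location with the best normalised cross-correlation score.
    scores = cv2.matchTemplate(reference.astype(np.float32),
                               template.astype(np.float32), cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(scores)
    h, w = template.shape
    crop = reference[y:y + h, x:x + w]        # roughly co-located crop

    # Step 3: dense flow alignment -- per-pixel displacement between the crop
    # and the template, then remap the template onto the reference grid
    # (this absorbs residual scale, rotation, shear, etc.).
    flow = cv2.calcOpticalFlowFarneback(to_u8(crop), to_u8(template), None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(template, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```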
The ECM layer is composed of matrices that are designed to correct for various sources of error which distort the instrument's imagery. Here, the imagery is radiometrically enhanced by matrices which correct for sensor-dependent gains. The dimensions of the gain correction matrices are equal to those of the instrument's respective sensor. The ECM layer takes a data cube as input (Fig. 1 [1]), where two of the dimensions are spatial and one dimension is spectral. In other words, the data cube is composed of images, corresponding to different spectral regions, stacked on top of one another.
However, the gain matrices are parameterized by polynomial equations, so that only the values of a small number of constants need to be found during the training process.
The imagery is spectrally enhanced within the ECM layer by Spectral Straylight correction matrices. The number of rows and columns of these matrices is equal to the number of bands of the data cube to be corrected. Therefore, the number of free variables, without any form of parametrization, is equal to the number of bands squared.
The ECM layer spatially enhances the imagery by de-convolving the imagery with a kernel corresponding to the instrument's point spread function. Here, the kernel itself is the free parameter whose optimal shape is found during the training process.
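As a compact illustration (not the patent's implementation), such an ECM layer can be sketched as a PyTorch module in which the per-pixel gain map is parameterised by a low-order polynomial over the detector coordinates, the Spectral Straylight correction is a bands-by-bands matrix, and the de-convolution is approximated by a learnable convolution kernel acting as an inverse point spread function; all class, parameter and dimension names are assumptions of the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECMLayer(nn.Module):
    def __init__(self, n_bands: int, height: int, width: int,
                 poly_order: int = 2, psf_size: int = 5):
        super().__init__()
        # Polynomial coefficients of the per-pixel gain: only a handful of
        # constants are trained instead of a full height x width matrix.
        self.gain_coeffs = nn.Parameter(torch.zeros(poly_order + 1, poly_order + 1))
        ys = torch.linspace(-1.0, 1.0, height)
        xs = torch.linspace(-1.0, 1.0, width)
        self.register_buffer("ypow", torch.stack([ys ** i for i in range(poly_order + 1)]))
        self.register_buffer("xpow", torch.stack([xs ** j for j in range(poly_order + 1)]))
        # Spectral straylight correction matrix (bands x bands), starts as identity.
        self.straylight = nn.Parameter(torch.eye(n_bands))
        # Learnable kernel approximating the inverse of the point spread function,
        # initialised to a delta function (identity convolution).
        kernel = torch.zeros(n_bands, 1, psf_size, psf_size)
        kernel[:, 0, psf_size // 2, psf_size // 2] = 1.0
        self.inv_psf = nn.Parameter(kernel)

    def forward(self, cube: torch.Tensor) -> torch.Tensor:
        # cube: (batch, bands, height, width)
        gain = torch.einsum("ij,ih,jw->hw", self.gain_coeffs, self.ypow, self.xpow)
        x = cube * (1.0 + gain)                                # radiometric correction
        x = torch.einsum("bc,nchw->nbhw", self.straylight, x)  # spectral correction
        # Spatial correction: depthwise convolution with the learnable kernel.
        return F.conv2d(x, self.inv_psf, padding=self.inv_psf.shape[-1] // 2,
                        groups=x.shape[1])
```

Initialising the gain polynomial to zero, the straylight matrix to the identity and the kernel to a delta function makes this sketch start as an identity mapping, so training only has to learn the departures from ideal instrument behaviour.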
The GCN layer is composed of neural network(s) with standard architectures, such as multi-layer perceptrons and convolutional neural networks, with many degrees of freedom to approximate a wide variety of radiometric and spectral transform functions. This layer is included to correct for all error sources that are not taken into account by the ECM layer.
The GCN layer can form cubes with the same number of bands as the input (Fig. 1 [4]), or internally aggregate the bands to directly infer the ground truth values. In the latter case, the inferred ground truth bands can be compared to the actual ground truth values via a differentiable cost function.
The Band Aggregation layer linearly combines the bands of the corrected cube (Fig. 1 [4]), i.e. the output of the ECM and GCN layers, so that they coincide with the band responses of the ground truths, which allows the two datasets to be directly compared. The linear combination of the bands is characterized by a matrix whose numbers of rows and columns correspond to the number of input bands and ground-truth bands, respectively. Here, every column essentially specifies the weight of each of the input bands used to form a specific ground-truth band. Because the band aggregation is performed by multiplying the described matrix by the input data cube, the partial derivatives of the weights can be efficiently propagated through the process, as will be further described below.
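Because the aggregation is a plain matrix product, it can be written in a few lines; the sketch below assumes PyTorch and an aggregation matrix whose entries are fixed in advance from the known band responses (an assumption of this illustration).

```python
import torch

def aggregate_bands(corrected_cube: torch.Tensor,
                    aggregation: torch.Tensor) -> torch.Tensor:
    """corrected_cube: (batch, in_bands, H, W); aggregation: (in_bands, gt_bands)."""
    # Each output band is a weighted sum of the input bands (one column of the
    # matrix per ground-truth band), so the result can be compared pixel-wise
    # to the geo-referenced ground truths.
    return torch.einsum("nihw,ig->nghw", corrected_cube, aggregation)
```

Since the matrix product is differentiable, gradients with respect to the ECM and GCN weights flow through this step unchanged, which is what allows the aggregated bands to be compared directly to the ground truths during training.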
In addition to image data, the CCN architecture also accepts other ancillary data types such as the sensor positions of the pixel values, the temperature of the sensor, the time since launch, etc. These additional data streams allow the CCN to take into account other factors that may be influencing the quality of the image. The ancillary data may be fed into the GCN layer or used to parameterize any of the described matrices of the ECM layer.
Cross-Calibration Network Training: Both the matrices of the ECM layer and the neural networks of the GCN layer are governed by a large set of free parameters, hereafter referred to as weights.
The optimal values of these weights are obtained through a training process, which attempts to reduce the value of a cost function (Fig. 1 [8]). In the case of the CCN, the cost function is the difference between the aggregated bands (Fig. 1 [6]) and the ground truth (Fig. 1 [7]). This difference is quantified by a differentiable function such as the mean square error or structural similarity index.
Furthermore, the cost function can weigh the ground truths inversely by their respective error so that the less reliable measurements have less influence during the training process.
Using generalized automatic differentiation techniques, the partial derivatives of the cost with respect to the free parameters are calculated through the hard-coded band aggregation layer.
These partial derivatives are then fed into an optimizer, such as the Adam Optimizer, to find the set of weights which reduces the cost function.
The process of calculating the cost function, calculating the partial derivatives and finding a new set of weights is repeated until the error reaches an acceptable level.
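A minimal sketch of such a training loop follows, assuming PyTorch, an ecm/gcn module pair that returns a corrected cube with the input's bands, the aggregate_bands helper sketched earlier, and a data loader yielding (input cube, ground truth, ground-truth error) triples; the inverse-error weighting follows the paragraphs above, while the loss, learning rate and epoch count are placeholders.

```python
import torch

def train_ccn(ecm, gcn, aggregation, loader, n_epochs=50, lr=1e-3):
    # Only the currently trainable weights are handed to the optimizer, which
    # also makes the staged training described below straightforward.
    params = [p for p in list(ecm.parameters()) + list(gcn.parameters())
              if p.requires_grad]
    optimizer = torch.optim.Adam(params, lr=lr)
    for _ in range(n_epochs):
        for cube, truth, truth_err in loader:       # ground truth and its error
            corrected = gcn(ecm(cube))              # (batch, in_bands, H, W)
            predicted = aggregate_bands(corrected, aggregation)
            # Inverse-error weighting: less reliable ground truths contribute
            # less to the cost (epsilon guards against division by zero).
            weights = 1.0 / (truth_err + 1e-6)
            cost = (weights * (predicted - truth) ** 2).mean()
            optimizer.zero_grad()
            cost.backward()                         # automatic differentiation
            optimizer.step()
```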
The network is then trained in three stages.
In the first stage the weights of the ECM layer are adjusted to correct for known errors, while the weights of the GCN are kept constant.
During the second training phase the weights of the GCN are adjusted, while the weights of the ECM are frozen.
During the final, third, stage of training the weights of both the ECM and GCN layers are adjusted concurrently.
The training is performed in this manner so that the GCN layer will not try to correct for all sources of error on its own, as its high degrees of freedom would surely permit it to.
In this way, the ECM layer corrects for as much of the error as it possibly can, which implies that its embedded matrices can be inspected to better understand the nature of the instrument.
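The three-stage schedule can be expressed, again only as a sketch on top of the hypothetical train_ccn loop above, by toggling requires_grad on the two layers so that the optimizer only updates the weights of the layer(s) being trained in each stage.

```python
def set_trainable(module, trainable: bool):
    # Freeze or unfreeze every parameter of a layer.
    for p in module.parameters():
        p.requires_grad_(trainable)

def train_in_stages(ecm, gcn, aggregation, loader):
    # Stage 1: adjust the ECM weights while the GCN is kept constant.
    set_trainable(ecm, True)
    set_trainable(gcn, False)
    train_ccn(ecm, gcn, aggregation, loader)
    # Stage 2: adjust the GCN weights while the ECM is frozen.
    set_trainable(ecm, False)
    set_trainable(gcn, True)
    train_ccn(ecm, gcn, aggregation, loader)
    # Stage 3: adjust both layers concurrently.
    set_trainable(ecm, True)
    set_trainable(gcn, True)
    train_ccn(ecm, gcn, aggregation, loader)
```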
Fusion Network Architecture: The Fusion Network Architecture, shown in Fig. 2, is composed of 2 layers. The first layer, the so-called Fusion Layer (Fig. 2 [2]), accepts inputs from multiple geo-referenced sources (Fig. 2 [14]) which are mapped onto a common grid-array and subsequently scanned by a convolutional filter to form a Fused Cube (Fig. 2 [3]). Alternatively, the common grid-array can be scanned by multiple successive filters to form layers of intermediate feature maps between the input and the Fused Cube.
The above-described method is optionally also applied to individual bands of the Fused Cube. Instead of the geo-referenced sources being mapped to a single common grid-array, select bands from the sources are mapped to various common grid-arrays, each of which has its own set of filters to create a single band of the Fused Cube.
The Fused Cube produced by the Fusion Layer is fed into a Band Aggregation layer (Fig. 2 [4]), which operates in an identical manner to the Band Aggregation layer of the CCN. However, for the Fusion Network Architecture, the geo-referenced input sources act as the ground truths. In other words, the Fusion Network Architecture forms a Fused Cube whose bands can be linearly added together to recreate the bands it was formed from.
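A minimal sketch of such a fusion network follows, assuming PyTorch, sources that are already resampled to a common grid and concatenated along the band axis, and a trainable aggregation matrix that maps the fused bands back onto the input bands (whether this matrix is trained or fixed from known band responses is left open here); all names and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class FusionNetwork(nn.Module):
    def __init__(self, in_bands: int, fused_bands: int, hidden: int = 32):
        super().__init__()
        # Fusion Layer: convolutional filters scanning the common grid-array,
        # with one intermediate feature map between input and Fused Cube.
        self.fusion = nn.Sequential(
            nn.Conv2d(in_bands, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, fused_bands, kernel_size=3, padding=1),
        )
        # Aggregation matrix: fused bands -> original input bands.
        self.aggregation = nn.Parameter(torch.randn(fused_bands, in_bands) * 0.01)

    def forward(self, sources: torch.Tensor):
        fused = self.fusion(sources)               # the Fused Cube
        # Linearly recombine the fused bands so they can be compared with the
        # input bands, which act as the ground truths during training.
        recreated = torch.einsum("nfhw,fi->nihw", fused, self.aggregation)
        return fused, recreated
```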
Fusion Network Training: The Fusion Network Architecture is trained in the same manner as the CCN: the error is propagated through the Band Aggregation Layer using generalized automatic differentiation techniques, iterating until an acceptable error is reached or the cost function can no longer be reduced. The training process therefore involves a cost function (Fig. 2 [7]), aggregated cubes (Fig. 2 [5]) and ground truth (Fig. 2 [6]), where the ground truth is also the geo-referenced input sources, as described on page 7 lines 16 through 22.
Concurrent Network Training: Before being fed into the Fusion Network, the input datasets can optionally be preprocessed by ECM/GCN layers so that they are radiometrically, spectrally and/or spatially corrected before they are fused.
These layers can be trained concurrently with the Fusion Layer to form a larger combined network.
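Concurrent training can be sketched by handing a single optimizer the union of the pre-processing and fusion parameters; the per-source preprocessors and the FusionNetwork below refer to the hypothetical modules sketched earlier, and the loader and loss are placeholders.

```python
import itertools
import torch
import torch.nn.functional as F

def train_concurrently(preprocessors, fusion_net, loader, lr=1e-3, n_epochs=50):
    # One optimizer over the ECM/GCN preprocessors and the Fusion Layer, so
    # all weights are adjusted jointly as one larger combined network.
    params = itertools.chain(fusion_net.parameters(),
                             *(p.parameters() for p in preprocessors))
    optimizer = torch.optim.Adam(params, lr=lr)
    for _ in range(n_epochs):
        for sources in loader:                       # list of co-registered cubes
            corrected = [pre(src) for pre, src in zip(preprocessors, sources)]
            stacked = torch.cat(corrected, dim=1)    # common grid, band axis
            fused, recreated = fusion_net(stacked)
            cost = F.mse_loss(recreated, stacked)    # inputs act as ground truth
            optimizer.zero_grad()
            cost.backward()
            optimizer.step()
```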

Claims (13)

1. A method of radiometrically, spectrally and/or spatially correcting data cubes obtained by remote sensing with a cross-calibration network consisting of a General Correction Network layer with a large number of degrees of freedom, which allows virtually any underlying correction function to be approximated.
2. The method of claim 1, wherein the sets of input data are co-registered with the ground truth using computer-vision georeferencing, wherein the relative position of the input images with respect to the ground truth is determined by template matching, after which the pixels in the image are accurately aligned using a dense flow alignment method.
3. The method of claims 1 and 2, wherein the General Correction Network layer combines the input bands to create derived bands that correspond to the ground truths, wherein the cost function is calculated by comparing the derived bands with the actual ground-truth values.
4. The method of claims 1 and 2, wherein the General Correction Network layer infers a data cube with the same number of bands as the input data cube.
5. The method of claim 4, wherein the bands produced by the General Correction Network layer are linearly combined by a Band Aggregation layer to form cubes with bands that correspond to the ground truths, wherein the partial derivatives of the General Correction Network layer weights with respect to the cost function are propagated through this layer and passed to an optimization algorithm that adjusts the weights so that the aggregated bands iteratively converge towards the accuracy of the ground truths.
6. The method of claim 5, wherein the bands corresponding to the ground truths are weighted in the cost function with the inverse of their respective inaccuracy.
7. The method of claims 5 and 6, wherein an Embedded Correction Matrix layer containing gain, spectral straylight and/or spatial correction matrices enhances the remote sensed imagery before it is processed by the General Correction Network layer.
8. The method of claim 7, wherein the individual matrices of the Embedded Correction Matrix layer and the networks of the General Correction Network layer are trained sequentially, wherein the weights of one correction matrix or network are adjusted to optimize the cost function while the other weights are kept constant, and wherein, after one or more iterations of these training steps, the weights of all layers are adjusted simultaneously.
9. The method of claims 7 and 8, wherein a Fusion layer is inserted between the Embedded Correction Matrix layer and the Band Aggregation layer, wherein the Embedded Correction Matrix layer and the General Correction Network layer correct the input data before it is mapped onto a common grid and fused with a convolutional neural network to create a data cube that combines the properties of the inputs.
10. The method of claim 9, wherein an individual set of filters is used to produce a single spectral band of the fused data cube.
11. The method of claims 9 and 10, wherein the Embedded Correction Matrix layer, the General Correction Network layer and the Fusion layer are first trained sequentially and then all weights are adjusted simultaneously, in accordance with the training process of claim 8.
12. A computer program that implements one or more of the methods of claims 1 to 11, either on the remote sensing instrument or on a ground-based computer platform after the data has been downlinked.
13. The computer program of claim 12, implemented in hardware specially designed to implement a neural network.
NL2026463A 2020-09-14 2020-09-14 Method and tool to georeference, cross-calibrate and fuse remote sensed imagery NL2026463B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
NL2026463A NL2026463B1 (en) 2020-09-14 2020-09-14 Method and tool to georeference, cross-calibrate and fuse remote sensed imagery

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
NL2026463A NL2026463B1 (en) 2020-09-14 2020-09-14 Method and tool to georeference, cross-calibrate and fuse remote sensed imagery

Publications (1)

Publication Number Publication Date
NL2026463B1 true NL2026463B1 (en) 2022-05-12

Family

ID=74125607

Family Applications (1)

Application Number Title Priority Date Filing Date
NL2026463A NL2026463B1 (en) 2020-09-14 2020-09-14 Method and tool to georeference, cross-calibrate and fuse remote sensed imagery

Country Status (1)

Country Link
NL (1) NL2026463B1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190050625A1 (en) * 2017-08-08 2019-02-14 Spaceknow Inc. Systems, methods and computer program products for multi-resolution multi-spectral deep learning based change detection for satellite images
US20190114744A1 (en) * 2017-10-18 2019-04-18 International Business Machines Corporation Enhancing observation resolution using continuous learning
US20200025613A1 (en) 2018-03-27 2020-01-23 Flying Gybe Inc. Hyperspectral sensing system and processing methods for hyperspectral data
CN108932710A (en) * 2018-07-10 2018-12-04 武汉商学院 Remote sensing Spatial-temporal Information Fusion method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Earth Observation Open Science and Innovation", vol. 15, article "Machine Learning Applications for Earth Observation"
SHAO ZHENFENG ET AL: "Remote Sensing Image Fusion With Deep Convolutional Neural Network", IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, IEEE, USA, vol. 11, no. 5, 1 May 2018 (2018-05-01), pages 1656 - 1669, XP011682495, ISSN: 1939-1404, [retrieved on 20180427], DOI: 10.1109/JSTARS.2018.2805923 *
