NL2026463B1 - Method and tool to georeference, cross-calibrate and fuse remote sensed imagery - Google Patents
- Publication number
- NL2026463B1
- Authority
- NL
- Netherlands
- Prior art keywords
- layer
- network
- bands
- correction
- data
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 53
- 230000003595 spectral effect Effects 0.000 claims abstract description 9
- 238000012937 correction Methods 0.000 claims description 24
- 238000012549 training Methods 0.000 claims description 13
- 230000002776 aggregation Effects 0.000 claims description 10
- 238000004220 aggregation Methods 0.000 claims description 10
- 230000008569 process Effects 0.000 claims description 9
- 239000011159 matrix material Substances 0.000 claims description 8
- 238000013528 artificial neural network Methods 0.000 claims description 6
- 238000012545 processing Methods 0.000 claims description 5
- 230000000644 propagated effect Effects 0.000 claims description 4
- 238000005457 optimization Methods 0.000 claims description 2
- 238000004590 computer program Methods 0.000 claims 2
- 238000013527 convolutional neural network Methods 0.000 claims 1
- 230000004927 fusion Effects 0.000 abstract description 17
- 238000010801 machine learning Methods 0.000 abstract description 5
- 230000006870 function Effects 0.000 description 14
- 230000004069 differentiation Effects 0.000 description 3
- 230000000694 effects Effects 0.000 description 3
- 238000004519 manufacturing process Methods 0.000 description 3
- 238000001514 detection method Methods 0.000 description 2
- 238000006073 displacement reaction Methods 0.000 description 2
- 238000012544 monitoring process Methods 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 238000003491 array Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000012512 characterization method Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/02—Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4046—Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4061—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution by injecting details from different spectral ranges
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/60—Image enhancement or restoration using machine learning, e.g. neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/803—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/194—Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64G—COSMONAUTICS; VEHICLES OR EQUIPMENT THEREFOR
- B64G1/00—Cosmonautic vehicles
- B64G1/10—Artificial satellites; Systems of such satellites; Interplanetary vehicles
- B64G1/1021—Earth observation satellites
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2101/00—UAVs specially adapted for particular uses or applications
- B64U2101/30—UAVs specially adapted for particular uses or applications for imaging, photography or videography
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10036—Multispectral image; Hyperspectral image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Remote Sensing (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Radar, Positioning & Navigation (AREA)
- Image Processing (AREA)
Abstract
The invention comprises a method to geo-reference, cross-calibrate and fuse remote sensed imagery using novel computer-vision and machine-learning techniques, and a software tool that implements this method. By remote sensed, we refer to data that has been acquired by aircraft, UAVs (Unmanned Aerial Vehicles) or satellites. The geo-referencing method maps datasets from various instruments onto the same grid using computer vision techniques, which allows the datasets to be used in the various correction networks described below. The cross-calibration is performed with a machine learning network, hereafter referred to as the Cross-Calibration network, which takes a single instrument's imagery and associated metadata as input and enhances it radiometrically, spectrally and/or spatially. The fusion is also performed with a machine learning network, hereafter referred to as the fusion network, which takes multiple geo-referenced datasets as input and fuses them together to create a data cube with greater radiometric, spectral and/or spatial resolution than any of the single inputs.
Description
Method and tool to georeference, cross-calibrate and fuse remote sensed imagery

Field and background of the invention

Cross-calibration and fusion techniques utilize data from various remote sensing instruments to enhance a single instrument's imagery or to create a greater combined product. These techniques are especially beneficial when applied to small satellites, as their specialized hardware and high temporal resolution, afforded by lower production costs, can be combined with the high quality data of larger, mostly institutional satellites. Because of this, these techniques have myriad applications in agricultural monitoring, early hazard detection, and general change monitoring activities where short revisit times and specialized sensors are a necessity.
Cross-calibration and fusion techniques require datasets that are geo-referenced on the same grid. This invention facilitates this need with a computer-vision based geo-referencing method that only takes image data as input. Because this method does not require any other auxiliary information, it can easily be applied to arbitrary instrument pairs, which considerably increases the flexibility of the method.
The cross-calibration network, shown in Fig.1, enhances the imagery of an instrument by correcting for various sources of error. Traditionally, these sources of error are identified and corrected for through meticulous laboratory characterization procedures and/or vicariously in orbit. These processes, however, can be circumvented by supplying a cross-calibration network with geo-referenced data from other, more advanced and often institutional instruments that can act as reliable ground truths.
The new production process, therefore, could be to create an instrument, deploy it, take a few hundred images, feed them into the tool to find the optimal correction weight factors, and then send the weight factors up to the instrument. The weight factors are small in data volume (of the order of a few megabytes) and can therefore be easily transmitted to a satellite in orbit.
The cross-calibration network is embedded with correction matrices which correspond to distorting effects.
When the network is trained, the values of these matrices are determined to yield the highest quality data product.
These matrices can then be analyzed, to identify the sources of error and potentially improve the overall production of the instrument.
The fusion network, shown in Fig. 2, combines the best characteristics of multiple instruments into a single product.
For example, a small satellite with high spectral resolution can be combined with one of high spatial resolution to form a product with both high spectral and spatial resolution.
Such a product would be of use in agricultural, hazard detection or remote material identification applications.
Prior art

In Earth Observation Open Science and Innovation: Volume 15, chapter "Machine Learning Applications for Earth Observation", it is explained that machine learning techniques can be used for cross-calibration activities.
The US2020025613A1 patent, "HYPERSPECTRAL SENSING SYSTEM AND PROCESSING METHODS FOR HYPERSPECTRAL DATA", claims: “[309] Any suitable method may be used to perform the correlation.
Suitable methods may include, without limitation, regression, interpolation, neural networks, etc.
In some examples, performing the correlation includes using a nonlinear optimization method (e.g., a regularization method) to identify a fitting function that maps the remote coefficients to ground-truth coefficients with high accuracy (e.g., with error minimized according to some predetermined scheme, such as a Tikhonov regularization).”

Shao and Cai (2018) proposed a neural network architecture for the data fusion of multi-spectral with panchromatic imagery.
This invention differs from the described fusion network in the following respect:
1. The cost function is calculated using generalized automatic differentiation techniques, where the error is propagated through the band aggregation functions. The approach can operate on any collection of remote sensed imagery and is not necessarily tailored to a specific scheme.

Detailed description

Computer-Vision Georeferencing method: The computer vision georeferencing occurs in 3 general steps. First, the image to be geo-referenced and the reference image (the image that will be geo-referenced to) are rebinned and rotated to approximately match each other's ground sampling distances and orientations. The coordinates of the reference image are also altered accordingly. This, however, does not need to be done precisely, as fine corrections are made during the final georeferencing processing step.

The second processing step uses the computer vision technique known as "template matching". Here, a larger image is scanned with a smaller image referred to as the template. At every scan step, a similarity between the template and the portion of the larger image under it is computed, which in turn creates a similarity map. The highest or lowest point on the map (depending on the similarity metric used) is then chosen as the relative location of the template with respect to the larger image. The larger image is then cropped so that it roughly matches the template in size and alignment.

The third processing step is a newly invented "dense flow alignment" technique. The dense flow field, defined as the displacement between the individual pixels of two images, is calculated using the Farneback algorithm or some other similar method. Using the calculated displacement field, the input image is remapped to the larger one. The flexibility of this method essentially allows any transformation to be applied to the smaller image, such as scale adjustment, rotation, shear, etc.

Cross-Calibration Network Architecture: The Cross-Calibration Network (CCN), pictured in Fig. 1, radiometrically corrects remote sensed imagery by making it coincide with a collection of geo-referenced ground truths. The network is composed of 3 layers: an Embedded Correction Matrices (ECM) layer (Fig. 1 [2]), a General Correction Network (GCN) layer (Fig. 1 [3]) and a Band Aggregation layer (Fig. 1 [5]).
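As an illustration of the template-matching step of the georeferencing method described above, the following is a minimal NumPy sketch. The image sizes, the exhaustive scan, and the use of normalized cross-correlation as the similarity metric are illustrative assumptions; a production pipeline would typically use an optimized routine, and the subsequent dense-flow step an off-the-shelf Farneback implementation.

```python
import numpy as np

def template_match(large, template):
    """Scan `large` with `template` and return the (row, col) offset where the
    similarity map peaks. Normalized cross-correlation is used here as the
    similarity metric; the method admits other metrics as well."""
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_rc = -np.inf, (0, 0)
    for r in range(large.shape[0] - th + 1):
        for c in range(large.shape[1] - tw + 1):
            patch = large[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc

# A synthetic reference image; the template is a crop of it, so the
# similarity map should peak at the crop's true offset (12, 25).
rng = np.random.default_rng(0)
large = rng.normal(size=(40, 40))
template = large[12:20, 25:33].copy()
offset = template_match(large, template)
# The larger image is then cropped at `offset` to roughly match the template.
cropped = large[offset[0]:offset[0] + 8, offset[1]:offset[1] + 8]
```

The third, dense-flow step would then compute a per-pixel displacement field between the cropped image and the template and remap one onto the other.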
The ECM layer is composed of matrices that are designed to correct for various sources of error which distort the instrument's imagery. Here, the imagery is radiometrically enhanced by matrices which correct for sensor-dependent gains. The dimensions of the gain correction matrices are equal to those of the instrument's respective sensor. The ECM layer takes a data cube as input (Fig. 1 [1]), where 2 of the dimensions are spatial and 1 dimension is spectral. In other words, the data cube is composed of images, corresponding to different spectral regions, stacked on top of one another.
However, the gain matrices are parameterized by polynomial equations, so that only the values of a limited number of constants need to be found during the training process.
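A minimal sketch of such a parameterization (the sensor size, polynomial order and coefficient values are arbitrary assumptions): a full sensor-sized gain matrix is rebuilt from just three trainable constants instead of one free value per pixel.

```python
import numpy as np

H, W = 4, 6                              # hypothetical sensor dimensions
coeffs = np.array([1.0, 0.02, -0.001])   # c0 + c1*x + c2*x**2 (the trainable constants)
x = np.arange(W)
row_gain = coeffs[0] + coeffs[1] * x + coeffs[2] * x ** 2
gain = np.tile(row_gain, (H, 1))         # full (H, W) gain correction matrix
raw = np.ones((H, W))                    # stand-in raw sensor frame
corrected = raw * gain                   # element-wise gain correction
```

During training, only `coeffs` would be optimized; the full matrix is regenerated from it on every forward pass.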
The imagery is spectrally enhanced within the ECM layer by Spectral Straylight correction matrices. The number of rows and columns of these matrices is equal to the number of bands of the data cube to be corrected. Therefore, the number of free variables, without any form of parametrization, is equal to the number of bands squared.
The ECM layer spatially enhances the imagery by de-convolving the imagery with a kernel corresponding to the instrument's point spread function. Here, the kernel itself is the free parameter whose optimal shape is found during the training process.
The GCN layer is composed of neural network(s) with standard architectures, such as multi-layer perceptrons and convolutional neural networks, with many degrees of freedom to approximate a wide variety of radiometric and spectral transform functions. This layer is included to correct for all error sources that are not taken into account by the ECM layer.
The GCN layer can form cubes with the same number of bands as the input (Fig. 1 [4]), or internally aggregate the bands to directly infer the ground truth values. In the latter case, the inferred ground truth bands can be compared to the actual ground truth values via a differentiable cost function.
The band aggregation layer linearly combines the bands of the corrected cube (Fig. 1 [4]), i.e. the output of the ECM and GCN layers, so that they coincide with the band responses of the ground truths, which allows the two datasets to be directly compared. The linear combination of the bands is characterized by a matrix whose numbers of rows and columns correspond to the number of input bands and the number of ground truth bands, respectively. Here, every column essentially specifies the weight of each of the input bands used to form a specific ground truth band. Because the band aggregation is performed by multiplying the described matrix by the input data cube, the partial derivatives of the weights can be efficiently propagated through the process, as will be further described below.
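The band aggregation described above can be sketched as a plain matrix product over the band axis (band counts and weights are illustrative assumptions; in this toy example each ground-truth band is simply the mean of three input bands):

```python
import numpy as np

cube = np.arange(6 * 4 * 4, dtype=float).reshape(6, 4, 4)  # (bands, y, x)

# Aggregation matrix: rows correspond to input bands, columns to ground-truth
# bands; each column holds the weights forming one ground-truth band.
W = np.zeros((6, 2))
W[0:3, 0] = 1.0 / 3.0
W[3:6, 1] = 1.0 / 3.0

# Because this is an ordinary matrix product, an automatic-differentiation
# framework can propagate gradients through it efficiently.
aggregated = np.tensordot(W.T, cube, axes=1)               # (2, y, x)
```

The resulting two-band cube can then be compared directly against the ground-truth bands.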
In addition to image data, the CCN architecture also accepts other ancillary data types such as the sensor positions of the pixel values, the temperature of the sensor, the time since launch, etc. These additional data streams allow the CCN to take into account other factors that may be influencing the quality of the image. The ancillary data may be fed into the GCN layer or used to parameterize any of the described matrices of the ECM layer.
Cross-Calibration Network Training: Both the matrices of the ECM layer and the neural networks of the GCN layer are governed by a large set of free parameters, hereafter referred to as weights.
The optimal values of these weights are obtained through a training process, which attempts to reduce the value of a cost function (Fig. 1 [8]). In the case of the CCN, the cost function is the difference between the aggregated bands (Fig. 1 [6]) and the ground truth (Fig. 1 [7]). This difference is quantified by a differentiable function such as the mean square error or structural similarity index.
Furthermore, the cost function can weigh the ground truths inversely by their respective error so that the less reliable measurements have less influence during the training process.
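A minimal sketch of such an inverse-error weighting (all values are illustrative; in this hypothetical example the third ground truth carries a large error estimate, so its large residual barely affects the cost):

```python
import numpy as np

def weighted_cost(pred, truth, sigma):
    """Mean-square error with each ground truth weighted by 1/sigma**2, so
    less reliable measurements have less influence during training."""
    w = 1.0 / sigma ** 2
    return np.mean(w * (pred - truth) ** 2)

pred  = np.array([1.0, 2.0, 3.0])
truth = np.array([1.1, 2.0, 2.0])
sigma = np.array([1.0, 1.0, 10.0])   # the third ground truth is highly uncertain
cost = weighted_cost(pred, truth, sigma)
```

Compared with the unweighted mean square error, the weighted cost is dominated by the reliable measurements rather than by the noisy outlier.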
Using generalized automatic differentiation techniques, the partial derivatives of the cost with respect to the free parameters are calculated through the hard-coded band aggregation layer.
These partial derivatives are then fed into an optimizer, such as the Adam Optimizer, to find the set of weights which reduces the cost function.
The process of calculating the cost function, calculating the partial derivatives and finding a new set of weights is repeated until the error reaches an acceptable level.
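This iterate-until-converged loop can be sketched with a single hypothetical gain weight and a hand-derived gradient (real training would use an automatic-differentiation framework and an optimizer such as Adam; the data and learning rate here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
raw = rng.uniform(1.0, 2.0, size=100)   # stand-in uncorrected pixel values
truth = 1.7 * raw                       # ground truths (the true gain is 1.7)

g, lr = 1.0, 0.05                       # initial weight and learning rate
for _ in range(300):
    residual = g * raw - truth
    cost = np.mean(residual ** 2)            # differentiable cost function
    grad = np.mean(2 * residual * raw)       # partial derivative d(cost)/dg
    g -= lr * grad                           # optimizer step on the weight
```

After the loop, `g` has converged to the true gain within numerical precision.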
The network is then trained in three stages.
In the first stage, the weights of the ECM are adjusted to correct for known errors, while the weights of the GCN are kept constant.
During the second training phase, the weights of the GCN are adjusted, while the weights of the ECM are frozen.
During the third and final stage of training, the weights of both the ECM and GCN layers are adjusted concurrently.
The training is performed in this manner so that the GCN layer will not try to correct for all sources of error on its own, as its high degrees of freedom would surely permit it to do.
In this way, the ECM corrects for the most error it possibly can, which implies that its embedded matrices can be inspected to better understand the nature of the instrument.
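The three-stage schedule can be sketched with one ECM-like and one GCN-like weight on a toy linear model with hand-derived gradients (the data, learning rate and stage lengths are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 200)     # stand-in input values
y = 1.5 * x + 0.3                   # ground truths: gain 1.5, offset 0.3

w = {"ecm": 1.0, "gcn": 0.0}        # one ECM-like and one GCN-like weight
lr = 0.05

def step(train_ecm, train_gcn):
    """One gradient step on the MSE cost, updating only the unfrozen weights."""
    r = w["ecm"] * x + w["gcn"] - y
    if train_ecm:
        w["ecm"] -= lr * np.mean(2 * r * x)
    if train_gcn:
        w["gcn"] -= lr * np.mean(2 * r)

for _ in range(100): step(True, False)   # stage 1: ECM weights only
for _ in range(100): step(False, True)   # stage 2: GCN weights only
for _ in range(400): step(True, True)    # stage 3: both concurrently
```

Freezing one group of weights per stage is simply a matter of skipping its update, which is how the ECM is forced to absorb as much of the error as it can before the GCN is unfrozen.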
Fusion Network Architecture: The Fusion Network Architecture, shown in Figure 2, is composed of 2 layers. The first layer, the so-called Fusion Layer (Fig. 2 [2]), accepts inputs from multiple geo-referenced sources (Fig. 2 [14]) which are mapped onto a common grid-array and subsequently scanned by a convolutional filter to form a Fused Cube (Fig. 2 [3]). Alternatively, the common grid-array can be scanned by multiple successive filters to form layers of intermediate feature maps between the input and the Fused Cube.
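The Fusion Layer's convolutional scan over a common grid-array can be sketched in pure NumPy (the source shapes, band counts and random filter weights are illustrative assumptions; in practice the filter weights would be trained):

```python
import numpy as np

def conv2d_valid(stack, filt):
    """'Valid' 2-D correlation of a (channels, H, W) grid-array with one
    (channels, kh, kw) filter, producing one band of the Fused Cube."""
    _, H, W = stack.shape
    _, kh, kw = filt.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(stack[:, i:i + kh, j:j + kw] * filt)
    return out

# Two hypothetical geo-referenced sources, already mapped onto a common 8x8 grid.
rng = np.random.default_rng(2)
src_a = rng.normal(size=(3, 8, 8))      # e.g. high spectral resolution (3 bands)
src_b = rng.normal(size=(1, 8, 8))      # e.g. high spatial resolution (1 band)
grid = np.concatenate([src_a, src_b])   # common grid-array: (4, 8, 8)

filters = rng.normal(size=(5, 4, 3, 3)) # 5 filters -> 5 bands in the Fused Cube
fused = np.stack([conv2d_valid(grid, f) for f in filters])   # (5, 6, 6)
```

Each filter sees all bands of all sources at once, which is what lets the Fused Cube combine characteristics of multiple instruments.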
The above described method is optionally also applied to individual bands of the Fused Cube. Instead of the geo-referenced sources being mapped to a single common grid-array, select bands from the sources are mapped to various common grid-arrays, which each have their own sets of filters to create a single band of the Fused Cube.
The Fused Cube produced by the Fusion Layer is fed into a Band Aggregation layer (Fig. 2 [4]), which operates in an identical manner to the Band Aggregation layer of the CCN. However, for the Fusion Network Architecture, the geo-referenced input sources act as the ground truths. In other words, the Fusion Network Architecture forms a Fused Cube whose bands can be linearly added together to recreate the bands it was formed from.
Fusion Network Training: The Fusion Network Architecture is trained in the same manner as the CCN: the error is propagated through the Band Aggregation Layer using generalized automatic differentiation techniques, iteratively, until an acceptable error is reached or it is no longer possible to reduce the cost function. Therefore, the training process involves a cost function (Fig. 2 [7]), aggregated cubes (Fig. 2 [5]) and ground truths (Fig. 2 [6]) that are also the geo-referenced input sources, as is described on page 7 lines 16 through 22.
Concurrent Network Training: Before being fed into the Fusion Network, the input datasets can optionally be preprocessed by
ECM/GCN layers so that they are radiometrically, spectrally and/or spatially corrected before they are fused.
These layers can be trained concurrently with the Fusion Layer to form a larger combined network.
Claims (13)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NL2026463A NL2026463B1 (en) | 2020-09-14 | 2020-09-14 | Method and tool to georeference, cross-calibrate and fuse remote sensed imagery |
Publications (1)
Publication Number | Publication Date |
---|---|
NL2026463B1 (en) | 2022-05-12 |
Family
ID=74125607
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
NL2026463A NL2026463B1 (en) | 2020-09-14 | 2020-09-14 | Method and tool to georeference, cross-calibrate and fuse remote sensed imagery |
Country Status (1)
Country | Link |
---|---|
NL (1) | NL2026463B1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108932710A (en) * | 2018-07-10 | 2018-12-04 | 武汉商学院 | Remote sensing Spatial-temporal Information Fusion method |
US20190050625A1 (en) * | 2017-08-08 | 2019-02-14 | Spaceknow Inc. | Systems, methods and computer program products for multi-resolution multi-spectral deep learning based change detection for satellite images |
US20190114744A1 (en) * | 2017-10-18 | 2019-04-18 | International Business Machines Corporation | Enhancing observation resolution using continuous learning |
US20200025613A1 (en) | 2018-03-27 | 2020-01-23 | Flying Gybe Inc. | Hyperspectral sensing system and processing methods for hyperspectral data |
Non-Patent Citations (2)
Title |
---|
"Earth Observation Open Science and Innovation", vol. 15, article "Machine Learning Applications for Earth Observation" |
SHAO ZHENFENG ET AL: "Remote Sensing Image Fusion With Deep Convolutional Neural Network", IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, IEEE, USA, vol. 11, no. 5, 1 May 2018 (2018-05-01), pages 1656 - 1669, XP011682495, ISSN: 1939-1404, [retrieved on 20180427], DOI: 10.1109/JSTARS.2018.2805923 * |