CN116580320A - Large-scale intelligent remote sensing extraction method for artificial soil erosion disturbance range - Google Patents


Info

Publication number
CN116580320A
CN116580320A (application CN202310604455.3A)
Authority
CN
China
Prior art keywords
remote sensing
mii
feature
image
soil loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310604455.3A
Other languages
Chinese (zh)
Other versions
CN116580320B (en)
Inventor
江威
温庆可
刘朔
崔师爱
庞治国
谭杰峻
刘昌军
张晓雪
阿旺格列
王敬浪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Institute of Water Resources and Hydropower Research
Original Assignee
China Institute of Water Resources and Hydropower Research
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Institute of Water Resources and Hydropower Research filed Critical China Institute of Water Resources and Hydropower Research
Priority to CN202310604455.3A priority Critical patent/CN116580320B/en
Publication of CN116580320A publication Critical patent/CN116580320A/en
Application granted granted Critical
Publication of CN116580320B publication Critical patent/CN116580320B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06V: Image or Video Recognition or Understanding
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/13: Satellite images
    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06N: Computing Arrangements Based on Specific Computational Models
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/0464: Convolutional networks [CNN, ConvNet]
    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06N: Computing Arrangements Based on Specific Computational Models
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06V: Image or Video Recognition or Understanding
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements using pattern recognition or machine learning
    • G06V10/82: Arrangements using neural networks
    • Y: General Tagging of New Technological Developments; General Tagging of Cross-Sectional Technologies Spanning Over Several Sections of the IPC
    • Y02: Technologies or Applications for Mitigation or Adaptation Against Climate Change
    • Y02A: Technologies for Adaptation to Climate Change
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30: Assessment of water resources


Abstract

The invention discloses an intelligent remote sensing extraction method for the disturbance range of large-scale artificial water and soil loss. The method first acquires time-series multi-modal remote sensing image data from before and after the artificial water and soil loss disturbance within the supervision range and registers the data with high precision to obtain a time-series remote sensing image set. It then constructs, from this image set, an optimized multi-modal remote sensing feature set for artificial water and soil loss that incorporates a visual attention mechanism. Finally, based on the optimized feature set, it extracts the artificial water and soil loss disturbance range within the supervision range using an LSP graph convolutional neural network. The invention provides an intelligent extraction method for artificial water and soil loss based on multi-modal remote sensing, breaking through the technical bottleneck that the disturbance range of large-scale artificial water and soil loss was previously difficult to extract automatically, and achieves high-precision, intelligent extraction over large areas.

Description

Large-scale intelligent remote sensing extraction method for artificial soil erosion disturbance range
Technical Field
The invention belongs to the technical field of intelligent remote sensing image processing, and particularly relates to an intelligent remote sensing extraction method for the disturbance range of large-scale artificial soil erosion.
Background
The formation and aggravation of water and soil loss are closely related to human activities: unreasonable production and living practices, such as excavation, land occupation and dumping during project construction, can cause or aggravate water and soil loss. Satellite remote sensing, with its wide coverage, high revisit frequency and low cost, can objectively and accurately capture the changes in land cover caused by human activities. It is therefore an important means of monitoring artificial water and soil loss disturbance and is widely applied in national supervision surveys.
At present, the core requirement of artificial water and soil loss supervision by remote sensing is a clear delineation of the disturbance range. In the past, extraction of the artificial soil erosion disturbance range relied mainly on manual visual interpretation based on expert experience, which has three defects: (1) interpretation efficiency is low and labor cost is high; large-scale extraction of the disturbance range requires substantial manpower and can hardly meet the demand for real-time dynamic updating; (2) the remote sensing signatures of artificial soil erosion disturbance are complex and diverse, while past interpretation referenced single-period high-resolution optical images whose spectral characteristics are limited, making high-precision extraction difficult; (3) the level of automation is low; intelligent methods such as deep learning have been lacking for identifying the artificial soil erosion disturbance range, so the demand for large-scale automatic extraction can hardly be met.
With the rapid development of satellite remote sensing imaging and deep learning technologies, cooperative multi-source optical and radar remote sensing can acquire time-series multi-modal satellite data of the land surface and characterize the complex underlying-surface changes of artificial soil erosion disturbance areas. Moreover, by extracting time-series multi-modal remote sensing features, intelligent methods such as deep learning can mine the spectral-texture-spatial variation patterns of disturbed areas and realize intelligent remote sensing extraction of the large-scale artificial water and soil loss disturbance range.
Disclosure of Invention
The invention aims to solve the problem in the prior art that the large-scale artificial water and soil loss disturbance range is difficult to extract automatically, and provides an intelligent remote sensing extraction method for the large-scale artificial water and soil loss disturbance range.
The technical scheme of the invention is as follows: a remote sensing intelligent extraction method for a large-scale artificial soil erosion disturbance range comprises the following steps:
s1, acquiring time sequence multi-mode remote sensing image data before and after disturbance of artificial water and soil loss in a supervision range, and carrying out high-precision registration on the time sequence multi-mode remote sensing image data to obtain a time sequence remote sensing image set.
S2, constructing an artificial water and soil loss multi-mode remote sensing optimization feature set integrating a visual attention mechanism according to the time sequence remote sensing image set.
And S3, according to the multi-mode remote sensing optimization feature set of the artificial water and soil loss, carrying out remote sensing intelligent extraction on the artificial water and soil loss disturbance range in the supervision range based on the LSP graph convolution neural network.
Further, the time sequence multi-mode remote sensing image data in the step S1 comprises optical remote sensing data and radar remote sensing data, geometric fine correction, atmosphere correction and cloud mask processing are performed on the optical remote sensing data, and geometric fine correction processing is performed on the radar remote sensing data.
Further, in step S1, the specific method for performing high-precision registration on the time-series multi-mode remote sensing image data is as follows:
A1. Filter the time-series multi-modal remote sensing image data with a Log-Gabor filter to obtain its local phase information. The Log-Gabor filter Log_{d,o}(x, y) at direction o and scale d is expressed as:

Log_{d,o}(x, y) = Log_{d,o}^{even}(x, y) + i · Log_{d,o}^{odd}(x, y)

where Log_{d,o}^{even}(x, y) denotes the even-symmetric filter at direction o and scale d, Log_{d,o}^{odd}(x, y) denotes the odd-symmetric filter at direction o and scale d, and i is the imaginary unit.
A2. Convolve the time-series multi-modal remote sensing image data with the even-symmetric and odd-symmetric filters respectively to obtain the even-symmetric response energy E_{d,o}(x, y) and odd-symmetric response energy O_{d,o}(x, y) at direction o and scale d:

E_{d,o}(x, y) = J(x, y) * Log_{d,o}^{even}(x, y), O_{d,o}(x, y) = J(x, y) * Log_{d,o}^{odd}(x, y)

where * denotes the convolution operation and J(x, y) denotes the pixel with coordinates (x, y).
A3. From the even-symmetric response energy E_{d,o}(x, y) and odd-symmetric response energy O_{d,o}(x, y), calculate the amplitude A_{d,o}(x, y) and local phase φ_{d,o}(x, y):

A_{d,o}(x, y) = √(E_{d,o}(x, y)² + O_{d,o}(x, y)²), φ_{d,o}(x, y) = arctan(O_{d,o}(x, y) / E_{d,o}(x, y))
A4. From the amplitude A_{d,o}(x, y) and local phase φ_{d,o}(x, y), calculate the phase consistency value PC(x, y) at pixel J(x, y):

PC(x, y) = Σ_o Σ_d W(x, y) ⌊A_{d,o}(x, y) Δφ_{d,o}(x, y) − T⌋ / (Σ_o Σ_d A_{d,o}(x, y) + ε)

where W(x, y) is a weight factor, the symbol ⌊·⌋ takes its argument when positive and 0 otherwise, T denotes the estimated noise threshold, ε denotes a constant preventing a zero denominator, Δφ_{d,o}(x, y) denotes the phase deviation between the local phase φ_{d,o}(x, y) and the mean phase φ̄(x, y).
A5. From the phase consistency values PC(x, y), obtain the phase consistency map PC(θ_j), j = 1, 2, ..., o, in each direction.
A6. From the phase consistency maps PC(θ_j), calculate the maximum moment max_pc and minimum moment min_pc of phase consistency:

a = Σ_j (PC(θ_j) cos θ_j)², b = 2 Σ_j (PC(θ_j) cos θ_j)(PC(θ_j) sin θ_j), c = Σ_j (PC(θ_j) sin θ_j)²

max_pc = (c + a + √(b² + (a − c)²)) / 2, min_pc = (c + a − √(b² + (a − c)²)) / 2

where a, b and c are intermediate parameters and θ_j denotes the direction angle.
A7. Superimpose the images corresponding to the maximum moment max_pc and minimum moment min_pc to form a superimposed feature map, divide it into n × n non-overlapping image blocks, perform corner detection in each block with the Harris operator, and select the points whose response intensity exceeds a preset threshold as points to be registered, yielding the point set {P_j}.
A8. For the d × o amplitude values A_{d,o}(x, y) obtained in step A3, accumulate the amplitudes over the d scales in each direction to obtain o accumulated amplitude values A_o(x, y).
A9. For the reference image I_stan and the image to be registered I_reg respectively, select the maximum among the o accumulated amplitude values A_o(x, y) and record its direction, obtaining the reference maximum index map MII_stan and the to-be-registered maximum index map MII_reg.
A10. Calculate the similarity measure MII·IN(MII_stan, MII_reg) between the reference maximum index map MII_stan and the to-be-registered maximum index map MII_reg:

MII·IN(MII_stan, MII_reg) = H(MII_stan) + H(MII_reg) − H(MII_stan, MII_reg)

where H(MII_stan) denotes the entropy of the reference maximum index map, H(MII_reg) denotes the entropy of the to-be-registered maximum index map, and H(MII_stan, MII_reg) denotes their joint entropy, computed from the probability distribution of maximum index value A in the reference image, the probability distribution of maximum index value B in the image to be registered, and the joint probability distribution of A and B.
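As an illustration of the similarity measure in step A10, the sketch below computes the mutual information between two maximum index maps from their direction-index histograms, assuming the standard form H(A) + H(B) − H(A, B); the function and variable names are illustrative.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a probability vector; zero bins are skipped."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mii_similarity(mii_ref, mii_reg, n_dirs=6):
    """Mutual information H(A) + H(B) - H(A, B) between two maximum index
    maps whose pixel values are direction indices in [0, n_dirs)."""
    a = mii_ref.ravel().astype(int)
    b = mii_reg.ravel().astype(int)
    joint = np.zeros((n_dirs, n_dirs))
    np.add.at(joint, (a, b), 1.0)          # joint histogram of index pairs
    joint /= joint.sum()
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    return entropy(pa) + entropy(pb) - entropy(joint.ravel())
```

For identical maps the measure equals the marginal entropy, its maximum, which is why maximizing it over candidate offsets (step A12) locates the homonymous point.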
A11. For a feature point P_j in the point set {P_j}, determine the corresponding point P_r on the image to be registered and a template region around it.
A12. Within the template region, search for homonymous points by maximizing the similarity measure MII·IN(MII_stan, MII_reg) and record the offsets to obtain matched homonymous point pairs; calculate the transformation matrix from these point pairs to complete the registration, yielding the time-series remote sensing image set I = {I_i, i = 1, 2, ..., t}, where t is the number of time-series remote sensing images.
Further, step S2 includes the following sub-steps:
S21. From the time-series remote sensing image data in the time-series remote sensing image set, extract the multi-modal remote sensing characteristic band F_1, normalized difference vegetation index F_2, normalized difference built-up index F_3, normalized bare soil index F_4, normalized difference water index F_5 and normalized polarization feature F_6.
S22. Using a bottom-up visual attention mechanism, extract from the time-series remote sensing image set the visual saliency feature F_7, azimuth feature F_8, contrast feature F_9, entropy feature F_10, angular second moment feature F_11, homogeneity feature F_12, correlation feature F_13 and gradient magnitude feature F_14, and construct the artificial water and soil loss multi-modal remote sensing feature set {F_i, i = 1, 2, ..., 14}.
S23. Apply the Relief-F algorithm to the artificial water and soil loss multi-modal remote sensing feature set {F_i, i = 1, 2, ..., 14} to obtain the artificial water and soil loss multi-modal remote sensing optimized feature set F':

F' = {F'_i, i = 1, 2, ..., f}

where f denotes the number of features in the optimized feature set F'.
Further, the visual saliency feature F_7 in step S22 is extracted as:

C_max = max(I_R, I_G, I_B), C_min = min(I_R, I_G, I_B)

I'_n = (C_max − I_n) / (C_max − C_min), n = R, G, B

where num denotes the total number of extracted color features, H, S and V denote the H, S and V color features of the time-series remote sensing image in HSV color space, I_R, I_G and I_B denote the R, G and B color features of the time-series remote sensing image in RGB color space, C_max and C_min denote the maximum and minimum among I_R, I_G and I_B, H' denotes the comparison value, and I'_n denotes the recalculated value of I_R, I_G and I_B.
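A minimal sketch of the color normalization printed above, assuming an RGB image scaled to [0, 1]; mapping gray pixels (C_max = C_min) to 0 avoids division by zero and is a choice of this sketch, not something the method specifies.

```python
import numpy as np

def color_contrast(img):
    """Per-pixel color contrast I'_n = (C_max - I_n) / (C_max - C_min)
    for n in {R, G, B}; img has shape (H, W, 3), values in [0, 1]."""
    c_max = img.max(axis=2)
    c_min = img.min(axis=2)
    spread = c_max - c_min
    safe = np.where(spread > 0, spread, 1.0)       # avoid /0 on gray pixels
    out = (c_max[..., None] - img) / safe[..., None]
    out[spread == 0] = 0.0
    return out
```

A saturated channel maps to 0 and the weakest channel maps to 1, so the output highlights chromatic contrast independent of brightness.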
Further, the azimuth feature F_8 in step S22 is extracted with a Gabor filter: the four azimuths θ = {0°, 45°, 90°, 135°} are each convolved with a Gaussian kernel function to obtain the azimuth feature F_8.
Further, the contrast feature F_9, entropy feature F_10, angular second moment feature F_11, homogeneity feature F_12 and correlation feature F_13 in step S22 are all extracted from the gray-level co-occurrence matrix.
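The texture features named above can be read off a gray-level co-occurrence matrix. The sketch below, an illustration rather than the patent's implementation, builds a normalized GLCM for a single horizontal offset and derives contrast, entropy, angular second moment and homogeneity from it; all names are illustrative.

```python
import numpy as np

def glcm(img, levels=8):
    """Normalized gray-level co-occurrence matrix for the offset (dx=1, dy=0).
    img must hold integer gray levels in [0, levels)."""
    p = np.zeros((levels, levels))
    np.add.at(p, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1.0)
    return p / p.sum()

def glcm_features(p):
    """Contrast, entropy, angular second moment and homogeneity of a GLCM."""
    i, j = np.indices(p.shape)
    contrast = float(np.sum((i - j) ** 2 * p))
    asm = float(np.sum(p ** 2))
    homogeneity = float(np.sum(p / (1.0 + np.abs(i - j))))
    nz = p[p > 0]
    ent = float(-np.sum(nz * np.log2(nz)))
    return contrast, ent, asm, homogeneity
```

A constant image yields zero contrast and entropy with maximal ASM and homogeneity, which is a quick sanity check for any GLCM implementation.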
Further, the gradient magnitude feature F_14 in step S22 is extracted as:

F_x(x, y) = (I(x + 1, y) − I(x − 1, y)) / 2

F_y(x, y) = (I(x, y + 1) − I(x, y − 1)) / 2

F_14(x, y) = √(F_x(x, y)² + F_y(x, y)²)

where I(x, y) denotes the time-series remote sensing image pixel at row-column position (x, y), F_x(x, y) denotes the gradient magnitude in the x direction, and F_y(x, y) denotes the gradient magnitude in the y direction.
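The central-difference gradients above can be sketched as follows; treating the first array axis as x and leaving the one-pixel borders at zero are conventions of this illustration, not something the method prescribes.

```python
import numpy as np

def gradient_magnitude(img):
    """F14: gradient magnitude from the central differences
    Fx = (I(x+1, y) - I(x-1, y)) / 2 and Fy = (I(x, y+1) - I(x, y-1)) / 2."""
    img = np.asarray(img, dtype=float)
    fx = np.zeros_like(img)
    fy = np.zeros_like(img)
    fx[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0   # difference along axis 0
    fy[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # difference along axis 1
    return np.sqrt(fx ** 2 + fy ** 2)
```

On a linear ramp the interior magnitude equals the ramp slope, which makes the formula easy to verify.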
Further, the Relief-F algorithm in step S23 is specifically:
B1. From the artificial water and soil loss multi-modal remote sensing feature set, draw samples within the selected artificial water and soil loss range by random sampling, and randomly select a training sample R from them.
B2. Search for the k nearest-neighbor samples H in the set of samples of the same class as R (artificial water and soil loss), and for the k nearest-neighbor samples M in the set of samples of a different class (non-artificial water and soil loss).
B3. From the k nearest-neighbor samples H and M, calculate the weight ω'(Z) of each feature in the multi-modal remote sensing feature set:

ω'(Z) = ω(Z) − Σ_{j=1}^{k} diff(Z, R, H_j) / (m·k) + Σ_{C ≠ class(R)} [P(C) / (1 − P(class(R)))] Σ_{j=1}^{k} diff(Z, R, M_j(C)) / (m·k)

where Z denotes a feature variable in the artificial water and soil loss multi-modal remote sensing feature set, ω(Z) denotes the initial weight of the Z-th feature variable, m denotes the number of sampling iterations, k denotes the number of nearest neighbors, H_j denotes the j-th nearest neighbor of sample R in the same class, diff(Z, R, H_j) denotes the difference between samples R and H_j on feature Z, M_j(C) denotes the j-th nearest neighbor of sample R in class C, class(R) denotes the class of sample R, C denotes a class, P(·) denotes the prior probability, and diff(Z, R, M_j(C)) denotes the difference between samples R and M_j(C) on feature Z.
B4. Rank the features in the artificial water and soil loss multi-modal remote sensing feature set by the weights ω'(Z) to obtain the artificial water and soil loss multi-modal remote sensing optimized feature set F'.
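A simplified sketch of the Relief-F weighting in steps B1–B3, restricted to the two-class case (artificial vs. non-artificial water and soil loss) with features pre-scaled to [0, 1]; for two classes the prior-probability factor P(C)/(1 − P(class(R))) equals 1, so it is dropped. Function names and the L1 neighbor distance are choices of this sketch.

```python
import numpy as np

def relief_f(X, y, m=60, k=3, seed=0):
    """Relief-F feature weights for a binary problem.
    X: (n_samples, n_features) scaled to [0, 1]; y: 0/1 labels."""
    rng = np.random.default_rng(seed)
    n, z = X.shape
    w = np.zeros(z)
    for _ in range(m):
        i = rng.integers(n)
        same = np.flatnonzero(y == y[i]); same = same[same != i]
        other = np.flatnonzero(y != y[i])
        # k nearest hits and misses by L1 distance over all features
        hits = same[np.argsort(np.abs(X[same] - X[i]).sum(1))[:k]]
        miss = other[np.argsort(np.abs(X[other] - X[i]).sum(1))[:k]]
        w -= np.abs(X[hits] - X[i]).mean(0) / m    # penalize within-class spread
        w += np.abs(X[miss] - X[i]).mean(0) / m    # reward between-class spread
    return w
```

A feature that separates the two classes receives a clearly larger weight than a pure-noise feature, which is the ranking used in step B4.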
Further, step S3 includes the following sub-steps:
S31. According to the artificial water and soil loss multi-modal remote sensing optimized feature set, extract the feature sets at the labeled data positions and construct the artificial water and soil loss multi-modal remote sensing training sample library {(X_i, Y_i), i = 1, 2, ..., n}, where X_i denotes a multi-modal remote sensing data feature set, Y_i denotes its label data, and n is the number of samples.
S32. Connect each sample X_i with its neighborhood sample points to form the graph G(υ, ε), where υ = {X_1, X_2, ..., X_N} denotes the set of nodes on the graph, N denotes the number of neighbor samples, and ε denotes the set of edges, i.e., the distances between the neighbor points and sample X_i.
S33. Describe the graph topology information with LSP (local structure preserving) to obtain the local structure vector LS_{i,j}:

LS_{i,j} = exp(SIM(X_i, X_j)) / Σ_j exp(SIM(X_i, X_j))

where X_j denotes the j-th sample point in the neighborhood connected to sample X_i and SIM(·) denotes the similarity function.
S34. Calculate the similarity S_i between the local structure vector LS_i^S of the student model in the LSP graph convolutional neural network and the local structure vector LS_i^T of the teacher model:

S_i = D_KL(LS_i^T || LS_i^S)

where D_KL(·) denotes the relative entropy (KL divergence).
S35. From the similarities S_i, calculate the loss function L_LSP:

L_LSP = (1/n) Σ_{i=1}^{n} S_i
S36. From the loss function L_LSP, calculate the total loss L:

L = H(y_s, y) + λ L_LSP

where H(·) denotes the cross-entropy loss function, y denotes the ground-truth sample label, y_s denotes the value predicted by the student model, and λ denotes a hyper-parameter balancing the two losses.
S37. Train the LSP graph convolutional neural network with the total loss L, then input the multi-modal remote sensing data feature sets into the trained network to realize intelligent remote sensing extraction of the artificial water and soil loss disturbance range within the supervision range.
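A numeric sketch of the losses in steps S33–S36, using negative squared distance as the similarity function SIM and the mean over samples for L_LSP; both choices are assumptions of this sketch, since the text leaves them unspecified.

```python
import numpy as np

def local_structure(x, neighbors):
    """LSP vector: softmax over similarities between node embedding x and
    its neighborhood embeddings (SIM taken as negative squared distance)."""
    sim = -np.sum((neighbors - x) ** 2, axis=1)
    e = np.exp(sim - sim.max())
    return e / e.sum()

def kl_div(p, q, eps=1e-12):
    """Relative entropy D_KL(p || q)."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def total_loss(y_onehot, y_student, ls_teacher, ls_student, lam=0.5, eps=1e-12):
    """L = H(y_s, y) + lambda * L_LSP, with L_LSP the mean KL divergence
    between teacher and student local structure vectors."""
    ce = -float(np.sum(y_onehot * np.log(y_student + eps)))
    l_lsp = float(np.mean([kl_div(t, s) for t, s in zip(ls_teacher, ls_student)]))
    return ce + lam * l_lsp
```

When the student reproduces both the label and the teacher's local structure exactly, the total loss collapses to (near) zero, which is the training target of step S37.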
The beneficial effects of the invention are as follows:
(1) The invention provides an intelligent extraction method for artificial water and soil loss based on multi-mode remote sensing, which breaks through the technical bottleneck that the original large-scale artificial water and soil loss disturbance range is difficult to automatically extract.
(2) Aiming at the problem that the multi-mode satellite remote sensing data lacks strict matching, the method adopts the image matching method based on phase consistency, and realizes the multi-mode remote sensing accurate matching.
(3) Based on multi-modal satellite remote sensing data, the invention provides a method for constructing artificial water and soil loss multi-modal remote sensing features fused with a visual attention mechanism, and applies the Relief-F algorithm to select the optimal subset of the extracted artificial water and soil loss disturbance features.
(4) The invention constructs the remote sensing intelligent extraction model of the artificial soil erosion disturbance range based on the LSP graph convolution neural network, and realizes the high-precision and automatic extraction of the artificial soil erosion disturbance range of the large-scale area.
Drawings
Fig. 1 is a flowchart of a remote sensing intelligent extraction method for a large-scale artificial soil erosion disturbance range, which is provided by an embodiment of the invention.
Fig. 2 is a diagram showing a remote sensing intelligent extraction effect of a large-scale artificial soil erosion disturbance range provided by an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It is to be understood that the embodiments shown and described in the drawings are merely illustrative of the principles and spirit of the invention and are not intended to limit the scope of the invention.
The embodiment of the invention provides a remote sensing intelligent extraction method for a large-scale artificial soil erosion disturbance range, which is shown in fig. 1 and comprises the following steps S1-S3:
s1, acquiring time sequence multi-mode remote sensing image data before and after disturbance of artificial water and soil loss in a supervision range, and carrying out high-precision registration on the time sequence multi-mode remote sensing image data to obtain a time sequence remote sensing image set.
In the embodiment of the invention, the time-series multi-modal remote sensing image data are acquired at least once a month and comprise optical and radar remote sensing data. The optical data are selected from Gaofen-1, Gaofen-2, Gaofen-6, Sentinel-2, the Landsat series and similar sources; the radar data are selected from Gaofen-3, Sentinel-1 and similar sources; and the optical scenes are chosen as high-quality, cloud-free and snow-free coverage data.
In the embodiment of the invention, geometric fine correction, atmosphere correction and cloud mask processing are performed on optical remote sensing data, and geometric fine correction processing is performed on radar remote sensing data.
In the embodiment of the invention, the specific method of geometric fine correction is as follows: collect 1:10,000 topographic maps and medium-high resolution satellite remote sensing data covering the supervision range; following the visual interpretation method, select points with distinct features on the image (road intersections, building bright spots, etc.) as control points, distributed as uniformly as possible over the spatial extent covered by the remote sensing image; and perform geometric fine correction of the medium-high resolution remote sensing data with a rational function model constrained by the control points.
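The embodiment fits a rational function model constrained by the control points. As a simplified stand-in, the sketch below fits a first-order (affine) transform to control-point pairs by least squares; a full rational function model adds higher-order rational terms but is estimated in the same least-squares fashion. Function names are illustrative.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping image points (x, y) to
    reference coordinates; src and dst are (n, 2) control point arrays."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])    # rows [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2) coefficients
    return coef

def apply_affine(coef, pts):
    """Apply a fitted transform to new image points."""
    pts = np.asarray(pts, float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ coef
```

With well-distributed control points (as the embodiment requires), the residuals of the fit give a direct estimate of the remaining geometric error.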
In the embodiment of the invention, a corresponding algorithm or tool is selected for atmospheric correction according to the satellite remote sensing data type; the FLAASH atmospheric correction model is used to correct the Gaofen series and Landsat series satellite remote sensing images, so as to eliminate the influence of the atmosphere, illumination and other factors on ground-object reflectance.
In the embodiment of the invention, the cloud mask processing method is as follows: mask the cloud- and shadow-covered regions of the remote sensing image according to the cloud and cloud-shadow coverage information provided by the quality assessment (QA) band, obtaining satellite remote sensing imagery of the cloud-free regions.
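A sketch of QA-band cloud masking. The bit positions follow the Landsat Collection 2 QA_PIXEL convention (bit 3 cloud, bit 4 cloud shadow) and are an assumption of this sketch; the QA layout of the actual product should be checked before reuse.

```python
import numpy as np

CLOUD_BIT, SHADOW_BIT = 3, 4   # assumed QA_PIXEL bit layout

def clear_mask(qa):
    """True where the QA band flags neither cloud nor cloud shadow."""
    cloudy = (qa >> CLOUD_BIT) & 1
    shadow = (qa >> SHADOW_BIT) & 1
    return (cloudy == 0) & (shadow == 0)

def mask_band(band, qa, fill=np.nan):
    """Replace cloud/shadow pixels of a band with a fill value."""
    out = np.asarray(band, float).copy()
    out[~clear_mask(qa)] = fill
    return out
```

Filling masked pixels with NaN keeps later per-pixel time-series statistics honest, since NaN-aware reducers simply skip them.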
Classical remote sensing image matching operates in the image spatial domain, whose information depends on gray-level or gradient values; it is sensitive to nonlinear radiometric differences and matches poorly. Multi-modal remote sensing images exhibit nonlinear radiometric differences because of differing sensors and platforms, so the embodiment of the invention adopts an image matching method based on phase consistency, a representation that describes the image with frequency-domain information, to register the time-series multi-modal remote sensing image data with high precision. The specific steps are as follows:
A1. Filter the time-series multi-modal remote sensing image data with a Log-Gabor filter to obtain its local phase information. The Log-Gabor filter Log_{d,o}(x, y) at direction o and scale d is expressed as:

Log_{d,o}(x, y) = Log_{d,o}^{even}(x, y) + i · Log_{d,o}^{odd}(x, y)

where Log_{d,o}^{even}(x, y) denotes the even-symmetric filter at direction o and scale d, Log_{d,o}^{odd}(x, y) denotes the odd-symmetric filter at direction o and scale d, and i is the imaginary unit.
A2. Convolve the time-series multi-modal remote sensing image data with the even-symmetric and odd-symmetric filters respectively to obtain the even-symmetric response energy E_{d,o}(x, y) and odd-symmetric response energy O_{d,o}(x, y) at direction o and scale d:

E_{d,o}(x, y) = J(x, y) * Log_{d,o}^{even}(x, y), O_{d,o}(x, y) = J(x, y) * Log_{d,o}^{odd}(x, y)

where * denotes the convolution operation and J(x, y) denotes the pixel with coordinates (x, y).
A3. From the even-symmetric response energy E_{d,o}(x, y) and odd-symmetric response energy O_{d,o}(x, y), calculate the amplitude A_{d,o}(x, y) and local phase φ_{d,o}(x, y):

A_{d,o}(x, y) = √(E_{d,o}(x, y)² + O_{d,o}(x, y)²), φ_{d,o}(x, y) = arctan(O_{d,o}(x, y) / E_{d,o}(x, y))
A4. From the amplitude A_{d,o}(x, y) and local phase φ_{d,o}(x, y), calculate the phase consistency value PC(x, y) at pixel J(x, y):

PC(x, y) = Σ_o Σ_d W(x, y) ⌊A_{d,o}(x, y) Δφ_{d,o}(x, y) − T⌋ / (Σ_o Σ_d A_{d,o}(x, y) + ε)

where W(x, y) is a weight factor, the symbol ⌊·⌋ takes its argument when positive and 0 otherwise, T denotes the estimated noise threshold, ε denotes a constant preventing a zero denominator, typically taken as 0.01, Δφ_{d,o}(x, y) denotes the phase deviation between the local phase φ_{d,o}(x, y) and the mean phase φ̄(x, y).
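Steps A1–A3 can be sketched with a single frequency-domain log-Gabor filter: restricting the angular spread to one half of the frequency plane makes the spatial response complex, and its real and imaginary parts are the even- and odd-symmetric responses. Parameter values here (wavelength, bandwidth, angular spread) are illustrative, not those of the patent.

```python
import numpy as np

def log_gabor_response(img, wavelength=8.0, sigma_f=0.55, theta0=0.0):
    """Even/odd responses and amplitude for one scale and one direction."""
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)[None, :]
    v = np.fft.fftfreq(rows)[:, None]
    radius = np.hypot(u, v)
    radius[0, 0] = 1.0                          # avoid log(0) at DC
    f0 = 1.0 / wavelength
    radial = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_f) ** 2))
    radial[0, 0] = 0.0                          # no DC response
    angle = np.arctan2(v, u)
    dtheta = np.arctan2(np.sin(angle - theta0), np.cos(angle - theta0))
    angular = np.exp(-dtheta ** 2 / (2 * (np.pi / 6) ** 2))  # one-sided spread
    resp = np.fft.ifft2(np.fft.fft2(img) * radial * angular)
    even, odd = resp.real, resp.imag
    return even, odd, np.hypot(even, odd)       # A = sqrt(E^2 + O^2)
```

Summing the amplitudes over scales per direction gives the accumulated amplitudes A_o(x, y) used in step A8.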
A5. From the phase consistency values PC(x, y), obtain the phase consistency map PC(θ_j), j = 1, 2, ..., o, in each direction.
A6. From the phase consistency maps PC(θ_j), calculate the maximum moment max_pc and minimum moment min_pc of phase consistency:

a = Σ_j (PC(θ_j) cos θ_j)², b = 2 Σ_j (PC(θ_j) cos θ_j)(PC(θ_j) sin θ_j), c = Σ_j (PC(θ_j) sin θ_j)²

max_pc = (c + a + √(b² + (a − c)²)) / 2, min_pc = (c + a − √(b² + (a − c)²)) / 2

where a, b and c are intermediate parameters and θ_j denotes the direction angle, taken as 0°, 45°, 90° and 135°.
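Assuming the standard phase-congruency moment formulas (a, b, c as sums of squared cosine/sine-weighted PC maps), step A6 can be sketched as:

```python
import numpy as np

def pc_moments(pc_maps, thetas):
    """Maximum and minimum moments of phase consistency from the
    per-direction maps PC(theta_j); thetas in radians."""
    a = sum((pc * np.cos(t)) ** 2 for pc, t in zip(pc_maps, thetas))
    b = 2.0 * sum((pc * np.cos(t)) * (pc * np.sin(t)) for pc, t in zip(pc_maps, thetas))
    c = sum((pc * np.sin(t)) ** 2 for pc, t in zip(pc_maps, thetas))
    root = np.sqrt(b ** 2 + (a - c) ** 2)
    max_pc = 0.5 * (c + a + root)   # edge strength
    min_pc = 0.5 * (c + a - root)   # corner strength
    return max_pc, min_pc
```

With a single direction the minimum moment vanishes, matching the intuition that a one-directional feature is an edge, not a corner.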
A7. Superimpose the images corresponding to the maximum moment max_pc and minimum moment min_pc to form a superimposed feature map, divide it into n × n non-overlapping image blocks, perform corner detection in each block with the Harris operator, and select the points whose response intensity exceeds a preset threshold as points to be registered, yielding the point set {P_j}.
A8. For the d × o amplitude values A_{d,o}(x, y) obtained in step A3, accumulate the amplitudes over the d scales in each direction to obtain o accumulated amplitude values A_o(x, y).
A9. For the reference image I_stan and the image to be registered I_reg respectively, select the maximum among the o accumulated amplitude values A_o(x, y) and record its direction, obtaining the reference maximum index map MII_stan and the to-be-registered maximum index map MII_reg.
A10. Calculate the similarity measure MII·IN(MII_stan, MII_reg) between the reference maximum index map MII_stan and the to-be-registered maximum index map MII_reg:

MII·IN(MII_stan, MII_reg) = H(MII_stan) + H(MII_reg) − H(MII_stan, MII_reg)

where H(MII_stan) denotes the entropy of the reference maximum index map, H(MII_reg) denotes the entropy of the to-be-registered maximum index map, and H(MII_stan, MII_reg) denotes their joint entropy, computed from the probability distribution of maximum index value A in the reference image, the probability distribution of maximum index value B in the image to be registered, and the joint probability distribution of A and B.
A11. For a feature point P_j in the point set {P_j}, determine the corresponding point P_r on the image to be registered and a template region around it.
A12. Within the template region, search for homonymous points by maximizing the similarity measure MII·IN(MII_stan, MII_reg) and record the offsets to obtain matched homonymous point pairs; calculate the transformation matrix from these point pairs to complete the registration, yielding the time-series remote sensing image set I = {I_i, i = 1, 2, ..., t}, where t is the number of time-series remote sensing images.
S2, constructing an artificial water and soil loss multi-mode remote sensing optimization feature set integrating a visual attention mechanism according to the time sequence remote sensing image set.
Step S2 includes the following substeps S21 to S23:
S21, extracting, from the time-series remote sensing image data in the time-series remote sensing image set, the multi-modal remote sensing feature bands F_1, the normalized vegetation index F_2, the normalized building index F_3, the normalized bare-soil index F_4, the normalized water index F_5 and the normalized polarization feature F_6, wherein the multi-modal remote sensing feature bands F_1 are the band combinations of all the images.
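The normalized indices F_2 to F_5 all share the (a − b)/(a + b) form; a small numpy sketch, in which the band assignments for the vegetation and water indices are common conventions rather than bands specified in the text:

```python
import numpy as np

def normalized_difference(band_a, band_b, eps=1e-9):
    """Generic normalized-difference index (a - b) / (a + b)."""
    a, b = band_a.astype(float), band_b.astype(float)
    return (a - b) / (a + b + eps)

# Assumed band layout (NIR/Red/Green reflectances); the actual sensor bands
# are not specified in the text. NDVI ~ F_2, NDWI ~ F_5.
nir   = np.array([[0.6, 0.5], [0.7, 0.4]])
red   = np.array([[0.2, 0.3], [0.1, 0.2]])
green = np.array([[0.3, 0.3], [0.2, 0.3]])
ndvi = normalized_difference(nir, red)      # (NIR - Red) / (NIR + Red)
ndwi = normalized_difference(green, nir)    # (Green - NIR) / (Green + NIR)
```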
S22, extracting, using a bottom-up visual attention mechanism, the visual saliency feature F_7, the orientation feature F_8, the contrast feature F_9, the entropy feature F_10, the angular second moment feature F_11, the homogeneity feature F_12, the correlation feature F_13 and the gradient amplitude feature F_14 of the time-series remote sensing image set, thereby constructing the artificial water and soil loss multi-modal remote sensing feature set {F_i, i = 1, 2, ..., 14}.
The visual attention mechanism is an important characteristic of the human visual system: studies have shown that, in complex scenes, the human visual system can quickly focus attention on the salient regions of an image. The embodiment of the invention therefore introduces a visual attention mechanism into artificial water and soil loss extraction: a bottom-up visual attention mechanism is selected, color, direction, texture and edge features are extracted respectively, and a saliency map is obtained by fusing the extracted features.
In an embodiment of the invention, the color features constitute the visual saliency feature F_7. The HSV color model is selected because it reflects well the perception and discrimination capability of human vision with respect to color; the RGB color space is converted into the HSV color space with the specific formulas:

C_max = max(I_R, I_G, I_B), C_min = min(I_R, I_G, I_B)

I'_n = (C_max − I_n)/(C_max − C_min), n = R, G, B

wherein num represents the total number of extracted color features, H, S and V respectively represent the H, S and V color features of the time-series remote sensing image in HSV color space, I_R, I_G and I_B respectively represent the R, G and B color features of the time-series remote sensing image in RGB color space, C_max and C_min respectively represent the maximum and minimum among I_R, I_G, I_B, H' represents an intermediate comparison value, and I'_n represents the recalculated value of I_R, I_G, I_B.
The direction features constitute the orientation feature F_8, extracted as follows: a Gabor filter is used, and the image is convolved with the Gabor kernel function in the four directions θ = {0°, 45°, 90°, 135°} to obtain the orientation feature F_8.
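Extraction of the orientation feature F_8 with a real (even-symmetric) Gabor kernel at the four stated directions can be sketched as follows; the kernel size, σ, wavelength and aspect ratio are illustrative assumptions:

```python
import numpy as np

def gabor_kernel(theta, ksize=9, sigma=2.0, lam=4.0, gamma=0.5):
    """Real (even-symmetric) Gabor kernel at orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / lam)

def convolve_same(img, k):
    """Naive 'same'-size convolution with zero padding."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (p[i:i + kh, j:j + kw] * k[::-1, ::-1]).sum()
    return out

# Orientation feature F_8: filter responses at theta = 0, 45, 90, 135 degrees.
thetas = np.deg2rad([0, 45, 90, 135])
img = np.zeros((16, 16))
img[:, 8] = 1.0                # a vertical line structure
responses = [np.abs(convolve_same(img, gabor_kernel(t))) for t in thetas]
```

A vertically oriented structure responds most strongly to the 0° kernel, whose carrier oscillates horizontally across the line.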
The texture features comprise the contrast feature F_9 (CON), the entropy feature F_10 (ENT), the angular second moment feature F_11 (ASM), the homogeneity feature F_12 (HOM) and the correlation feature F_13 (COR), all of which are extracted through the gray-level co-occurrence matrix (GLCM).
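The five GLCM texture features F_9 to F_13 can be computed from a normalized co-occurrence matrix using their standard definitions; a minimal numpy sketch in which the pixel offset and grey-level count are illustrative:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Symmetric, normalised grey-level co-occurrence matrix for offset (dx, dy)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for yy in range(h - dy):
        for xx in range(w - dx):
            a, b = img[yy, xx], img[yy + dy, xx + dx]
            g[a, b] += 1
            g[b, a] += 1      # symmetric counting
    return g / g.sum()

def glcm_features(p):
    """Contrast (F_9), entropy (F_10), ASM (F_11), homogeneity (F_12) and
    correlation (F_13) of a normalised GLCM p, by their standard definitions."""
    i, j = np.indices(p.shape)
    con = ((i - j) ** 2 * p).sum()
    ent = -(p[p > 0] * np.log2(p[p > 0])).sum()
    asm = (p ** 2).sum()
    hom = (p / (1.0 + (i - j) ** 2)).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * p).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * p).sum())
    cor = (((i - mu_i) * (j - mu_j) * p).sum()) / (sd_i * sd_j)
    return con, ent, asm, hom, cor
```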
The edge features are characterized by the gradient amplitude: the image is first smoothed by Gaussian filtering, the image gradient amplitude is then calculated, pixels whose gradient amplitude is smaller than a threshold are set as non-edges, and, traversing pixel by pixel, the edge pixels are determined by non-maximum suppression. The extraction formulas of the gradient amplitude feature F_14 are:

F_x(x, y) = (I(x + 1, y) − I(x − 1, y))/2
F_y(x, y) = (I(x, y + 1) − I(x, y − 1))/2

wherein I(x, y) represents the time-series remote sensing image pixel with row and column number (x, y), F_x(x, y) represents the gradient amplitude in the x direction, i.e. half the difference between the pixel values before and after pixel point (x, y) in the x direction, and F_y(x, y) represents the gradient amplitude in the y direction, i.e. half the difference between the pixel values before and after pixel point (x, y) in the y direction.
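The central-difference gradient magnitude underlying F_14 can be sketched as follows (here x indexes columns and y rows, matching I(x + 1, y) etc.; border pixels are left at zero):

```python
import numpy as np

def gradient_magnitude(img):
    """Central differences F_x = (I(x+1, y) - I(x-1, y))/2 (x = column) and
    F_y = (I(x, y+1) - I(x, y-1))/2 (y = row), combined into the magnitude
    sqrt(F_x^2 + F_y^2) used for the edge feature F_14."""
    I = img.astype(float)
    fx = np.zeros_like(I)
    fy = np.zeros_like(I)
    fx[:, 1:-1] = (I[:, 2:] - I[:, :-2]) / 2.0
    fy[1:-1, :] = (I[2:, :] - I[:-2, :]) / 2.0
    return np.hypot(fx, fy)
```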
S23, using the Relief-F algorithm to optimize the artificial water and soil loss multi-modal remote sensing feature set {F_i, i = 1, 2, ..., 14}, obtaining the artificial water and soil loss multi-modal remote sensing optimized feature set F':

F' = {F'_i, i = 1, 2, ..., f}

wherein f represents the number of features in the artificial water and soil loss multi-modal remote sensing optimized feature set F'.
The Relief-F algorithm is a typical filtering-type feature selection algorithm: the optimal feature set is obtained by calculating the weight of each feature variable and ranking the weights. The Relief-F algorithm is specifically:
B1, drawing random samples from the artificial water and soil loss range selected in the artificial water and soil loss multi-modal remote sensing feature set, and randomly selecting a training sample R from these samples.
B2, searching for the k nearest neighbor samples H in the set of artificial water and soil loss samples of the same class as sample R, and searching for the k nearest neighbor samples M in the set of non-artificial water and soil loss samples of a different class from sample R.
B3, calculating, from the k nearest neighbor samples H and M, the weight ω'(Z) of each feature in the artificial water and soil loss multi-modal remote sensing feature set:

ω'(Z) = ω(Z) − Σ_{j=1}^{k} diff(Z, R, H_j)/(m·k) + Σ_{C≠class(R)} [P(C)/(1 − P(class(R)))] Σ_{j=1}^{k} diff(Z, R, M_j(C))/(m·k)

wherein Z represents a feature variable in the artificial water and soil loss multi-modal remote sensing feature set, ω(Z) represents the initial weight value of the feature variable Z, m represents the number of sampling iterations, k represents the number of nearest neighbor samples, H_j represents the j-th nearest-neighbor same-class sample of sample R, diff(Z, R, H_j) represents the difference between samples R and H_j on feature Z, M_j(C) represents the j-th nearest-neighbor different-class sample of sample R in class C, class(R) represents the class of sample R, C represents a class, P(·) represents the prior probability, and diff(Z, R, M_j(C)) represents the difference between samples R and M_j(C) on feature Z.
B4, ranking the features in the artificial water and soil loss multi-modal remote sensing feature set by the weight ω'(Z), obtaining the artificial water and soil loss multi-modal remote sensing optimized feature set F'.
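A minimal two-class Relief-F sketch of steps B1 to B4; the L1 nearest-neighbor search, the feature-range normalization inside diff(·) and the synthetic data are illustrative assumptions:

```python
import numpy as np

def relief_f(X, y, m=20, k=3, rng=None):
    """Minimal two-class Relief-F (steps B1-B4): over m sampled instances R,
    decrease feature weights by the mean diff to the k nearest hits H_j and
    increase them by the prior-weighted mean diff to the k nearest misses M_j(C)."""
    rng = np.random.default_rng(rng)
    n, z = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12   # normalise diff() into [0, 1]
    w = np.zeros(z)
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / n
    for _ in range(m):
        r = rng.integers(n)                        # B1: random instance R
        for c, p in zip(classes, priors):
            idx = np.where((y == c) & (np.arange(n) != r))[0]
            near = idx[np.argsort(np.abs(X[idx] - X[r]).sum(axis=1))[:k]]  # B2
            diff = (np.abs(X[near] - X[r]) / span).mean(axis=0)
            if c == y[r]:
                w -= diff / m                                   # hits H_j
            else:
                w += p / (1 - priors[classes == y[r]][0]) * diff / m  # misses
    return w                                                    # B3 weights

# Feature 0 separates the two classes, feature 1 is pure noise,
# so Relief-F should rank feature 0 higher (B4).
gen = np.random.default_rng(0)
X = np.vstack([np.c_[gen.normal(0, 0.1, 50), gen.normal(0, 1, 50)],
               np.c_[gen.normal(3, 0.1, 50), gen.normal(0, 1, 50)]])
y = np.array([0] * 50 + [1] * 50)
weights = relief_f(X, y, rng=0)
```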
And S3, according to the multi-mode remote sensing optimization feature set of the artificial water and soil loss, carrying out remote sensing intelligent extraction on the artificial water and soil loss disturbance range in the supervision range based on the LSP graph convolution neural network.
Step S3 includes the following substeps S31 to S37:
S31, extracting the feature set corresponding to the data label positions from the artificial water and soil loss multi-modal remote sensing optimized feature set, and constructing the artificial water and soil loss multi-modal remote sensing training sample library D = {X, Y_n}, wherein X = {X_i, i = 1, 2, ..., n} represents the multi-modal remote sensing data feature set, X_i represents the i-th sample in the multi-modal remote sensing data feature set, n is the number of samples, and Y_n represents the label data.
S32, sample X_i and its connected neighborhood sample points form the graph G(v, ε), where v = {X_1, X_2, ..., X_i, ..., X_N} represents the data set formed by the nodes on the graph, N represents the number of neighbor samples, and ε represents the set of edges, i.e. the distances between the neighbor points and sample X_i.
S33, describing the topological information of the graph through LSP (local structure preserving) to obtain the local structure vector LS_{i,j}:

LS_{i,j} = exp(SIM(X_i, X_j)) / Σ_{j∈N_i} exp(SIM(X_i, X_j))

wherein X_j represents the j-th sample point of the neighborhood connected to sample X_i, N_i represents that neighborhood, and SIM(·) represents the similarity function.
S34, calculating the similarity S_i between the local structure vector LS^s_{i,j} of the student model in the LSP graph convolutional neural network and the local structure vector LS^t_{i,j} of the teacher model:

S_i = Σ_{j∈N_i} D_KL(LS^s_{i,j} || LS^t_{i,j})

wherein D_KL(·) represents the relative entropy, which is used to measure the distance between two probability distributions. In the embodiment of the invention, the teacher model is a relatively complex pre-trained main model, and the student model is a simplified model. The two models take the same input data and are trained simultaneously, and during training the knowledge learned by the teacher model is transferred to the student model, improving running efficiency and generalization capability. A smaller similarity S_i means that the distributions of the local structures are more similar.
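Steps S33 and S34 reduce to a softmax-normalized similarity vector per node and a KL divergence between the student's and teacher's vectors; a sketch assuming SIM is a negative Euclidean distance (the text leaves the similarity function abstract):

```python
import numpy as np

def local_structure(x_i, neighbors, sim=lambda a, b: -np.linalg.norm(a - b)):
    """Softmax-normalised similarity of node i to its neighbourhood (step S33).
    SIM is assumed to be a negative Euclidean distance; the text leaves it abstract."""
    s = np.array([sim(x_i, x_j) for x_j in neighbors])
    e = np.exp(s - s.max())      # numerically stable softmax
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy D_KL(p || q) between two discrete distributions."""
    return float((p * np.log((p + eps) / (q + eps))).sum())

# Similarity S_i between student and teacher local-structure vectors (step S34);
# identical embeddings give S_i = 0.
nbrs = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
student = local_structure(np.array([0.0, 0.0]), nbrs)
teacher = local_structure(np.array([0.0, 0.0]), nbrs)
s_i = kl_divergence(student, teacher)
```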
S35, calculating the loss function L_LSP according to the similarity S_i.
S36, calculating the total loss L from the loss function L_LSP:

L = H(y_s, y) + λL_LSP

wherein H(·) represents the cross-entropy loss function, y represents the ground-truth sample data label, y_s represents the value predicted by the student model, and λ represents the hyper-parameter balancing the two losses.
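The combined objective of step S36 is a weighted sum of a cross-entropy term and the LSP term; a minimal numpy sketch, with λ = 100 as in the described embodiment:

```python
import numpy as np

def cross_entropy(pred, onehot, eps=1e-12):
    """H(y_s, y): cross entropy of the student prediction against the true label."""
    return float(-(onehot * np.log(pred + eps)).sum())

def total_loss(pred, onehot, l_lsp, lam=100.0):
    """Total loss L = H(y_s, y) + lambda * L_LSP of step S36."""
    return cross_entropy(pred, onehot) + lam * l_lsp
```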
S37, training the LSP graph convolutional neural network with the total loss L, and inputting the multi-modal remote sensing data feature set into the trained LSP graph convolutional neural network to realize intelligent remote sensing extraction of the artificial water and soil loss disturbance range within the supervision range.
In the embodiment of the invention, the teacher model is set to 5 layers, the student model to 3 layers, the learning rate to 0.005 and λ to 100; 80% of the samples are selected by a random method for training and 20% are used for verification. The intelligent remote sensing extraction result of the artificial soil erosion disturbance range in the embodiment of the invention is shown in Fig. 2.
Combining the large-scale intelligent remote sensing extraction of the artificial water and soil loss disturbance range with random verification samples, the accuracy of the extracted artificial water and soil loss range is quantitatively evaluated using the global precision index; in general, the extraction meets the accuracy requirement when the global precision index of the artificial water and soil loss disturbance range exceeds 85%.
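The global precision index used for acceptance is the overall accuracy over the verification samples; a trivial sketch:

```python
def overall_accuracy(predicted, reference):
    """Global precision index: fraction of verification samples whose predicted
    label matches the reference; the text takes >= 85% as meeting the requirement."""
    if len(predicted) != len(reference) or not predicted:
        raise ValueError("label lists must be non-empty and equal-length")
    correct = sum(p == r for p, r in zip(predicted, reference))
    return correct / len(predicted)
```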
Those of ordinary skill in the art will recognize that the embodiments described herein are intended to aid the reader in understanding the principles of the present invention, and it should be understood that the scope of the invention is not limited to such specific statements and embodiments. Those of ordinary skill in the art can make various other specific modifications and combinations from the teachings of the present disclosure without departing from its spirit, and such modifications and combinations remain within the scope of the present disclosure.

Claims (10)

1. A remote sensing intelligent extraction method for a large-scale artificial soil erosion disturbance range is characterized by comprising the following steps:
s1, acquiring time sequence multi-mode remote sensing image data before and after disturbance of artificial water and soil loss in a supervision range, and carrying out high-precision registration on the time sequence multi-mode remote sensing image data to obtain a time sequence remote sensing image set;
s2, constructing an artificial water and soil loss multi-mode remote sensing optimization feature set fused with a visual attention mechanism according to the time sequence remote sensing image set;
and S3, according to the multi-mode remote sensing optimization feature set of the artificial water and soil loss, carrying out remote sensing intelligent extraction on the artificial water and soil loss disturbance range in the supervision range based on the LSP graph convolution neural network.
2. The remote sensing intelligent extraction method for a large-scale artificial soil erosion disturbance range according to claim 1, wherein the time-series multi-modal remote sensing image data in step S1 comprise optical remote sensing data and radar remote sensing data; geometric fine correction, atmospheric correction and cloud mask processing are performed on the optical remote sensing data, and geometric fine correction processing is performed on the radar remote sensing data.
3. The method for remotely sensing and intelligently extracting the large-scale artificial soil erosion disturbance range according to claim 1, wherein the specific method for performing high-precision registration on the time-series multi-mode remote sensing image data in the step S1 is as follows:
A1, filtering the time-series multi-modal remote sensing image data with a Log-Gabor filter to obtain the local phase information of the time-series multi-modal remote sensing image data, the Log-Gabor filter Log_{d,o}(x, y) being expressed as:

Log_{d,o}(x, y) = Log^even_{d,o}(x, y) + i·Log^odd_{d,o}(x, y)

wherein Log^even_{d,o}(x, y) represents the even-symmetric filter in direction o and at scale d, Log^odd_{d,o}(x, y) represents the odd-symmetric filter in direction o and at scale d, and i is the imaginary unit of the complex number;
a2, respectively convoluting the time sequence multi-mode remote sensing image data with an even symmetric filter and an odd symmetric filter to obtain even symmetric under the direction o and the scale dResponse energy E d,o (x, y) and odd symmetric response energy O d,o (x,y):
Wherein the method comprises the steps ofRepresenting a convolution operation, j (x, y) representing a pixel point having coordinates (x, y);
A3, calculating, from the even-symmetric response energy E_{d,o}(x, y) and the odd-symmetric response energy O_{d,o}(x, y), the amplitude A_{d,o}(x, y) and the local phase φ_{d,o}(x, y):

A_{d,o}(x, y) = sqrt(E_{d,o}(x, y)² + O_{d,o}(x, y)²), φ_{d,o}(x, y) = arctan(O_{d,o}(x, y)/E_{d,o}(x, y));
A4, calculating, from the amplitude A_{d,o}(x, y) and the local phase φ_{d,o}(x, y), the phase congruency value PC(x, y) at pixel point J(x, y):

PC(x, y) = Σ_o Σ_d W(x, y)⌊A_{d,o}(x, y)ΔΦ_{d,o}(x, y) − t⌋ / (Σ_o Σ_d A_{d,o}(x, y) + ε)

wherein W(x, y) is a weight factor, the symbol ⌊·⌋ indicates that the enclosed quantity is taken when positive and is 0 otherwise, t represents the estimated noise threshold, ε represents a small constant preventing a zero denominator, and ΔΦ_{d,o}(x, y) represents the phase deviation of the local phase φ_{d,o}(x, y) from the average phase value;
A5, obtaining, from the phase congruency values PC(x, y), the phase congruency map PC(θ_j) in each direction, j = 1, 2, ..., o;
A6, calculating, from the phase congruency maps PC(θ_j), the maximum moment max_pc and the minimum moment min_pc of phase congruency:

a = Σ_j (PC(θ_j)cos θ_j)², b = 2Σ_j (PC(θ_j)cos θ_j)(PC(θ_j)sin θ_j), c = Σ_j (PC(θ_j)sin θ_j)²

max_pc = (c + a + sqrt(b² + (a − c)²))/2, min_pc = (c + a − sqrt(b² + (a − c)²))/2

wherein a, b and c are all intermediate parameters and θ_j represents the direction angle;
A7, the images corresponding to the maximum moment max_pc and the minimum moment min_pc are superimposed to form a superimposed feature map, the superimposed feature map is divided into n×n mutually non-overlapping image blocks, Harris corner detection is performed within each image block, and points whose response intensity exceeds a preset threshold are selected as points to be registered, yielding the point set to be registered {P_j};
A8, for the d×o amplitude values A_{d,o}(x, y) obtained in step A3, the amplitude values of the d scales in each direction are accumulated, giving o accumulated amplitude values A_o(x, y);
A9, in the reference image I_stan and the image to be registered I_reg respectively, the maximum of the o accumulated amplitude values A_o(x, y) is selected at each pixel and the direction of that maximum is recorded, giving the maximum index map MII_stan of the reference image and the maximum index map MII_reg of the image to be registered;
A10, calculating the similarity measure MII·IN(MII_stan, MII_reg) between the maximum index map MII_stan of the reference image and the maximum index map MII_reg of the image to be registered:

MII·IN(MII_stan, MII_reg) = H(MII_stan) + H(MII_reg) − H(MII_stan, MII_reg)

wherein H(MII_stan) = −Σ_A P_A(A) log P_A(A) represents the entropy of the maximum index map of the reference image, H(MII_reg) = −Σ_B P_B(B) log P_B(B) represents the entropy of the maximum index map of the image to be registered, H(MII_stan, MII_reg) = −Σ_A Σ_B P_AB(A, B) log P_AB(A, B) represents the joint entropy of the maximum index map of the reference image and the maximum index map of the image to be registered, P_A(A) represents the probability distribution of the maximum index value A of the reference image, P_B(B) represents the probability distribution of the maximum index value B of the image to be registered, and P_AB(A, B) represents the joint probability distribution of the maximum index values A and B;
A11, for a certain feature point P_j in the point set to be registered {P_j}, the corresponding point P_r on the image to be registered is determined and a template region is determined;
A12, within the template region, homonymous points are searched according to the similarity measure MII·IN(MII_stan, MII_reg) using the maximum-value principle and the offset is recorded, obtaining matched homonymous point pairs; the transformation matrix of the homonymous point pairs is calculated to complete the registration, giving the time-series remote sensing image set I = {I_i, i = 1, 2, ..., t}, where t is the number of time-series remote sensing images.
4. The method for intelligent remote sensing extraction of large-scale artificial soil erosion disturbance range according to claim 1, wherein the step S2 comprises the following steps:
S21, extracting, from the time-series remote sensing image data in the time-series remote sensing image set, the multi-modal remote sensing feature bands F_1, the normalized vegetation index F_2, the normalized building index F_3, the normalized bare-soil index F_4, the normalized water index F_5 and the normalized polarization feature F_6;
S22, extracting, using a bottom-up visual attention mechanism, the visual saliency feature F_7, the orientation feature F_8, the contrast feature F_9, the entropy feature F_10, the angular second moment feature F_11, the homogeneity feature F_12, the correlation feature F_13 and the gradient amplitude feature F_14 of the time-series remote sensing image set, thereby constructing the artificial water and soil loss multi-modal remote sensing feature set {F_i, i = 1, 2, ..., 14};
S23, using the Relief-F algorithm to optimize the artificial water and soil loss multi-modal remote sensing feature set {F_i, i = 1, 2, ..., 14}, obtaining the artificial water and soil loss multi-modal remote sensing optimized feature set F':

F' = {F'_i, i = 1, 2, ..., f}

wherein f represents the number of features in the artificial water and soil loss multi-modal remote sensing optimized feature set F'.
5. The method for intelligent remote sensing extraction of large-scale artificial soil erosion disturbance range according to claim 4, wherein the extraction formulas of the visual saliency feature F_7 in step S22 are:

C_max = max(I_R, I_G, I_B), C_min = min(I_R, I_G, I_B)

I'_n = (C_max − I_n)/(C_max − C_min), n = R, G, B

wherein num represents the total number of extracted color features, H, S and V respectively represent the H, S and V color features of the time-series remote sensing image in HSV color space, I_R, I_G and I_B respectively represent the R, G and B color features of the time-series remote sensing image in RGB color space, C_max and C_min respectively represent the maximum and minimum among I_R, I_G, I_B, H' represents an intermediate comparison value, and I'_n represents the recalculated value of I_R, I_G, I_B.
6. The method for intelligent remote sensing extraction of large-scale artificial soil erosion disturbance range according to claim 4, wherein the extraction method of the orientation feature F_8 in step S22 is: a Gabor filter is used, and the image is convolved with the Gabor kernel function in the four directions θ = {0°, 45°, 90°, 135°} to obtain the orientation feature F_8.
7. The method for intelligent remote sensing extraction of large-scale artificial soil erosion disturbance range according to claim 4, wherein the contrast feature F_9, the entropy feature F_10, the angular second moment feature F_11, the homogeneity feature F_12 and the correlation feature F_13 in step S22 are all extracted through the gray-level co-occurrence matrix.
8. The method for intelligent remote sensing extraction of large-scale artificial soil erosion disturbance range according to claim 4, wherein the extraction formulas of the gradient amplitude feature F_14 in step S22 are:

F_x(x, y) = (I(x + 1, y) − I(x − 1, y))/2
F_y(x, y) = (I(x, y + 1) − I(x, y − 1))/2

wherein I(x, y) represents the time-series remote sensing image pixel with row and column number (x, y), F_x(x, y) represents the gradient amplitude in the x direction, and F_y(x, y) represents the gradient amplitude in the y direction.
9. The large-scale artificial soil erosion disturbance range remote sensing intelligent extraction method according to claim 4, wherein the Relief-F algorithm in step S23 is specifically:

B1, drawing random samples from the artificial water and soil loss range selected in the artificial water and soil loss multi-modal remote sensing feature set, and randomly selecting a training sample R from these samples;

B2, searching for the k nearest neighbor samples H in the set of artificial water and soil loss samples of the same class as sample R, and searching for the k nearest neighbor samples M in the set of non-artificial water and soil loss samples of a different class from sample R;

B3, calculating, from the k nearest neighbor samples H and M, the weight ω'(Z) of each feature in the artificial water and soil loss multi-modal remote sensing feature set:

ω'(Z) = ω(Z) − Σ_{j=1}^{k} diff(Z, R, H_j)/(m·k) + Σ_{C≠class(R)} [P(C)/(1 − P(class(R)))] Σ_{j=1}^{k} diff(Z, R, M_j(C))/(m·k)

wherein Z represents a feature variable in the artificial water and soil loss multi-modal remote sensing feature set, ω(Z) represents the initial weight value of the feature variable Z, m represents the number of sampling iterations, k represents the number of nearest neighbor samples, H_j represents the j-th nearest-neighbor same-class sample of sample R, diff(Z, R, H_j) represents the difference between samples R and H_j on feature Z, M_j(C) represents the j-th nearest-neighbor different-class sample of sample R in class C, class(R) represents the class of sample R, C represents a class, P(·) represents the prior probability, and diff(Z, R, M_j(C)) represents the difference between samples R and M_j(C) on feature Z;

B4, ranking the features in the artificial water and soil loss multi-modal remote sensing feature set by the weight ω'(Z), obtaining the artificial water and soil loss multi-modal remote sensing optimized feature set F'.
10. The method for intelligent remote sensing extraction of large-scale artificial soil erosion disturbance range according to claim 1, wherein the step S3 comprises the following steps:
S31, extracting the feature set corresponding to the data label positions from the artificial water and soil loss multi-modal remote sensing optimized feature set, and constructing the artificial water and soil loss multi-modal remote sensing training sample library D = {X, Y_n}, wherein X = {X_i, i = 1, 2, ..., n} represents the multi-modal remote sensing data feature set, X_i represents the i-th sample, n is the number of samples, and Y_n represents the label data;
S32, sample X_i and its connected neighborhood sample points form the graph G(v, ε), where v = {X_1, X_2, ..., X_i, ..., X_N} represents the data set formed by the nodes on the graph, N represents the number of neighbor samples, and ε represents the set of edges, i.e. the distances between the neighbor points and sample X_i;
S33, describing the topological information of the graph through LSP to obtain the local structure vector LS_{i,j}:

LS_{i,j} = exp(SIM(X_i, X_j)) / Σ_{j∈N_i} exp(SIM(X_i, X_j))

wherein X_j represents the j-th sample point of the neighborhood connected to sample X_i, N_i represents that neighborhood, and SIM(·) represents the similarity function;
S34, calculating the similarity S_i between the local structure vector LS^s_{i,j} of the student model in the LSP graph convolutional neural network and the local structure vector LS^t_{i,j} of the teacher model:

S_i = Σ_{j∈N_i} D_KL(LS^s_{i,j} || LS^t_{i,j})

wherein D_KL(·) represents the relative entropy;
S35, calculating the loss function L_LSP according to the similarity S_i;
S36, according to the loss function L LSP Calculate the total loss L:
L=H(y s ,y)+λL LsP
where H (·) represents the cross entropy loss function, y represents the truth sample data label, y s Representing values predicted by the student model, λ representing a hyper-parameter balancing the two losses;
S37, training the LSP graph convolutional neural network with the total loss L, and inputting the multi-modal remote sensing data feature set into the trained LSP graph convolutional neural network to realize intelligent remote sensing extraction of the artificial water and soil loss disturbance range within the supervision range.
CN202310604455.3A 2023-05-25 2023-05-25 Large-scale intelligent remote sensing extraction method for artificial soil erosion disturbance range Active CN116580320B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310604455.3A CN116580320B (en) 2023-05-25 2023-05-25 Large-scale intelligent remote sensing extraction method for artificial soil erosion disturbance range


Publications (2)

Publication Number Publication Date
CN116580320A true CN116580320A (en) 2023-08-11
CN116580320B CN116580320B (en) 2023-10-13

Family

ID=87537537

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310604455.3A Active CN116580320B (en) 2023-05-25 2023-05-25 Large-scale intelligent remote sensing extraction method for artificial soil erosion disturbance range

Country Status (1)

Country Link
CN (1) CN116580320B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414968A (en) * 2020-03-26 2020-07-14 西南交通大学 Multi-mode remote sensing image matching method based on convolutional neural network characteristic diagram
CN112132006A (en) * 2020-09-21 2020-12-25 西南交通大学 Intelligent forest land and building extraction method for cultivated land protection
CN112949414A (en) * 2021-02-04 2021-06-11 中国水利水电科学研究院 Intelligent surface water body drawing method for wide-vision-field high-resolution six-satellite image
CN113537018A (en) * 2021-07-05 2021-10-22 国网安徽省电力有限公司铜陵供电公司 Water and soil conservation monitoring method based on multi-temporal satellite remote sensing and unmanned aerial vehicle technology
CN115527108A (en) * 2022-08-24 2022-12-27 华中农业大学 Method for rapidly identifying water and soil loss artificial disturbance plots based on multi-temporal Sentinel-2


Also Published As

Publication number Publication date
CN116580320B (en) 2023-10-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant