CN115456957B - Method for detecting change of remote sensing image by full-scale feature aggregation - Google Patents


Info

Publication number
CN115456957B
CN115456957B (application CN202211003665.9A)
Authority
CN
China
Prior art keywords
remote sensing
feature
sensing image
full
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211003665.9A
Other languages
Chinese (zh)
Other versions
CN115456957A (en)
Inventor
阮永俭
张新长
姜明
Current Assignee
Guangzhou University
Original Assignee
Guangzhou University
Priority date
Filing date
Publication date
Application filed by Guangzhou University
Priority: CN202211003665.9A
Publication of CN115456957A
Application granted
Publication of CN115456957B
Legal status: Active

Classifications

    • G06T7/0002 Image analysis; inspection of images, e.g. flaw detection
    • G06N3/02 Neural networks; G06N3/08 Learning methods
    • G06T5/80 Image enhancement or restoration; geometric correction
    • G06V10/764 Recognition using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level
    • G06T2207/10032 Satellite or aerial image; remote sensing
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention relates to the technical field of remote sensing image change detection, and in particular to a method for detecting changes in remote sensing images by full-scale feature aggregation, comprising the following steps: collecting an original remote sensing image and preprocessing it; cutting the preprocessed image into a plurality of small picture sets and dividing them into a training set, a testing set and a verification set; constructing the attention-guided full-scale feature aggregation change detection network model AFSNet and training it; during training, obtaining the weight file with the highest precision in each round; and using that weight file to obtain prediction result data and carry out precision evaluation. By constructing AFSNet, the invention detects the edge details of changed regions in remote sensing images more accurately, suits change detection at multiple scales with clearer changed-region boundaries, and strikes a good balance between computation and parameter count.

Description

Method for detecting change of remote sensing image by full-scale feature aggregation
Technical Field
The invention relates to the technical field of remote sensing image change detection, in particular to a method for detecting the change of a remote sensing image by full-scale feature aggregation.
Background
With the rapid development of aerospace technology, the imaging quality of remote sensing images has improved greatly. High-resolution remote sensing images contain rich ground-object features, but they also suffer severe noise interference and carry more complicated ground-object information; the phenomena of the same object showing different spectra and of different objects showing the same spectrum appear in large numbers, so traditional methods struggle to achieve satisfactory results.
Conventional change detection methods can be divided into two main categories according to the object of study: pixel-based methods and object-based methods. Common methods include principal component analysis (PCA), the k-means clustering algorithm, change vector analysis (CVA), slow feature analysis (SFA), and the like. Pixel-based change detection methods are simple to implement but poorly robust, and salt-and-pepper noise arises easily during processing, so ideal results are hard to achieve. Object-based change detection methods divide the remote sensing image into disjoint homogeneous objects according to certain rules, extract spectral, textural and geometric information from the image on the basis of those objects, and analyze the differences between images using this feature information. Compared with pixel-based methods, object-based methods exploit the spatial context of high-resolution remote sensing images, but the manual feature design process is complex and difficult, and the results on remote sensing images containing complicated spectral and textural characteristics are not ideal.
Compared with traditional methods, deep learning requires no prior knowledge and can automatically learn rich contextual features and high-level semantic features from the input image; research based on deep learning methods has therefore become a focus of the change detection field.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a method for detecting changes in remote sensing images by full-scale feature aggregation. By constructing AFSNet (Attention-Guided Full-Scale Feature Aggregation Change Detection Network), the method detects the edge details of changed regions in remote sensing images more accurately, adapts to change detection at multiple scales with clearer changed-region boundaries, produces fewer holes, connects long and narrow changed regions better, and strikes a good balance between computation and parameter count.
The invention provides a method for detecting the change of a remote sensing image by full-scale feature aggregation, which comprises the following steps:
acquiring an original remote sensing image, and carrying out image preprocessing on the original remote sensing image to obtain a preprocessed original remote sensing image;
further cutting the preprocessed original remote sensing image into a plurality of small picture sets of 256×256 pixels, and dividing the small picture sets into a training set, a test set and a verification set;
constructing an attention-guided full-scale feature aggregation change detection network model AFSNet, training the AFSNet based on the training set, and obtaining a weight file obtained by training each round;
verifying the weight file based on the verification set, and screening out the weight file with the highest precision rate;
the AFSNet predicts the test set based on the weight file with the highest precision rate to obtain prediction result data;
and carrying out precision assessment on the prediction result data.
The acquiring the original remote sensing image and performing image preprocessing on the original remote sensing image comprises the following steps:
performing geometric fine correction on the original remote sensing image based on ENVI software to obtain an original remote sensing image subjected to geometric fine correction;
and carrying out image fusion processing on the original remote sensing image subjected to the geometric fine correction processing to obtain the preprocessed original remote sensing image.
The step of further cutting the preprocessed original remote sensing image into a plurality of small picture sets of 256×256 pixels and dividing the small picture sets into a training set, a test set and a verification set based on the division rule of the original data set comprises the following steps:
collecting the preprocessed original remote sensing images of 1024×1024 pixels based on the LEVIR-CD data set, further cutting them into a plurality of non-overlapping small picture sets of 256×256 pixels, and dividing the small picture sets into a training set, a test set and a verification set based on the division rule of the original data set.
The further cutting of the preprocessed original remote sensing image into a plurality of small picture sets of 256×256 pixels and dividing of the small picture sets into a training set, a test set and a verification set further comprises:
collecting the preprocessed original remote sensing images based on the SVCD data set, further cutting them into a plurality of small picture sets of 256×256 pixels, and dividing the small picture sets into a training set, a test set and a verification set.
The constructing of the attention-guided full-scale feature aggregation change detection network model AFSNet, the training of the AFSNet based on the training set, and the obtaining of the weight file obtained by each round of training comprises the following steps:
extracting the double-phase image features of the training set based on the attention-guided full-scale feature aggregation change detection network model AFSNet;
and performing feature fusion on the bi-temporal image features in a full-scale skip connection manner to obtain full-scale feature information.
The extracting the dual-phase image feature of the training set based on the attention-guided full-scale feature aggregation change detection network model AFSNet comprises the following steps:
processing the two temporal images of the bi-temporal pair with a VGG16 network to obtain the four-level and five-level feature maps corresponding to each temporal image respectively;
and fusing the four-level and five-level feature maps corresponding to the two temporal images to obtain the five levels of feature maps of the bi-temporal image.
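The twin (Siamese) arrangement of the feature extractor can be illustrated with a toy stand-in for one VGG16 stage; the point is only that both temporal images pass through the same weights. This is a minimal NumPy sketch with assumed shapes and an assumed stage definition, not the actual VGG16 configuration:

```python
import numpy as np

def encode(image, weights):
    """Toy stand-in for one encoder stage: a 1x1 convolution (channel
    mixing) followed by ReLU; `weights` has shape (C_out, C_in)."""
    feat = np.einsum('oc,chw->ohw', weights, image)
    return np.maximum(feat, 0)

rng = np.random.default_rng(42)
shared_w = rng.standard_normal((8, 3))   # one weight set shared by both branches
t1 = rng.standard_normal((3, 32, 32))    # image at time T1
t2 = rng.standard_normal((3, 32, 32))    # image at time T2

# Weight sharing: both temporal images go through the same encoder, so the
# two feature streams are directly comparable while each image is still
# processed independently of the other.
f1, f2 = encode(t1, shared_w), encode(t2, shared_w)
```

In the real network each branch would run the full VGG16 stack, but the weight-sharing structure is exactly as above.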
The step of performing feature fusion on the image features in the full-scale skip connection manner to obtain the full-scale feature information comprises the following steps:
applying maximum pooling to those of the five levels of bi-temporal feature maps that are larger than the expected feature map, to obtain feature maps of the same size as the expected feature map;
and reducing the number of channels of the bi-temporal feature maps that are smaller than or equal in size to the expected feature map, together with the pooled feature maps of matching size, using a convolution layer with a 1×1 kernel, then concatenating them to obtain the expected feature map with full-scale feature information.
The step of verifying the weight file based on the verification set, and the step of screening the weight file with the highest precision rate comprises the following steps:
verifying the expected feature maps with full-scale feature information based on the verification set, and selecting the expected feature map with full-scale feature information that has the highest precision rate; the precision rate F1 is used as the criterion for measuring the accuracy of the expected feature map with full-scale feature information, and the calculation formulas are:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × Precision × Recall / (Precision + Recall)
where Precision is the precision value, Recall is the recall, TP is the true positive class, FP is the false positive class, FN is the false negative class, and TN is the true negative class.
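These standard precision, recall and F1 definitions can be checked in a few lines of Python; the confusion-matrix counts below are made-up numbers for illustration only.

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, Recall and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 80 true positives, 20 false positives, 30 false negatives
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=30)
```

Here p = 80/100 = 0.8 and r = 80/110, and F1 is their harmonic mean, which penalises a model that trades one of the two for the other.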
The AFSNet predicts the test set based on the weight file with the highest precision rate, and the obtaining of the predicted result data comprises the following steps:
performing feature refinement on the expected feature map with full-scale feature information based on the CBAM attention module, to obtain the refined expected feature map with full-scale feature information;
processing the refined expected feature map with full-scale feature information through a convolution layer with a 3×3 kernel, to obtain the prediction map for each scale;
upsampling the prediction maps of all scales, to obtain the prediction result for each scale;
and fusing the prediction results of all scales with the multi-side output fusion method, then outputting the prediction result data through a convolution layer with a 3×3 kernel.
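The multi-side output fusion step can be sketched as follows; nearest-neighbour upsampling and simple averaging stand in for the learned upsampling and the final 3×3 fusion convolution, and all shapes and values are illustrative assumptions.

```python
import numpy as np

def upsample_nearest(pred, factor):
    """Nearest-neighbour upsampling of an H x W prediction map."""
    return pred.repeat(factor, axis=0).repeat(factor, axis=1)

# Side-output prediction maps at three scales of a 256 x 256 input.
side_outputs = {256: np.full((256, 256), 0.2),
                128: np.full((128, 128), 0.6),
                64:  np.full((64, 64), 0.7)}

# Bring every side output to full resolution, then fuse them; averaging
# stands in for the 3 x 3 convolution that produces the final prediction.
full_res = [upsample_nearest(p, 256 // size) for size, p in side_outputs.items()]
fused_pred = np.mean(full_res, axis=0)     # (256, 256) fused change map
```

Fusing every side output, rather than keeping only the deepest one, is what lets coarse-scale evidence fill holes in the fine-scale prediction and vice versa.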
The precision evaluation of the prediction result data comprises the following steps:
using the precision value Precision, the recall Recall, the F1 score, the intersection over union IoU and the mean intersection over union mIoU as the criteria for measuring the performance of the attention-guided full-scale feature aggregation change detection network model AFSNet, where the calculation formulas are:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × Precision × Recall / (Precision + Recall)
IoU = TP / (TP + FP + FN)
mIoU = (1 / 2) × Σ_k IoU_k, k ∈ {0, 1}
where TP is the true positive class, FP is the false positive class, FN is the false negative class, TN is the true negative class, and k denotes the class (0 or 1).
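The IoU and mIoU criteria can be checked with a short Python snippet; the counts are invented for illustration, and the per-class counts come from treating each class (changed, unchanged) in turn as the positive class.

```python
def iou(tp, fp, fn):
    """Intersection over union for a single class."""
    return tp / (tp + fp + fn)

# Changed class (1) uses its own TP/FP/FN; for the unchanged class (0) the
# roles swap: its TP is the changed class's TN, and its FP/FN are the
# changed class's FN/FP.
iou_changed = iou(tp=60, fp=20, fn=20)       # 60 / 100 = 0.6
iou_unchanged = iou(tp=880, fp=20, fn=20)    # 880 / 920
miou = (iou_changed + iou_unchanged) / 2
```

Because unchanged pixels vastly outnumber changed ones in typical scenes, mIoU weights the small changed class equally with the background, making it a stricter criterion than overall accuracy.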
According to the invention, by constructing the attention-guided full-scale feature aggregation change detection network model AFSNet, the original remote sensing image data are collected and classified by data set, facilitating subsequent comparison and evaluation; a twin network extracts image features from the input remote sensing images, ensuring the independence of the two temporal images; a full-scale feature connection structure aggregates features of different scales and fuses the full feature information, improving detection precision, reducing computation, and mitigating the effect of information loss; features are refined by an attention mechanism, multi-scale prediction results are output, and a multi-side output fusion strategy produces the final detection result, fully aggregating features at all scales and improving change detection accuracy; and the final detection result is compared and evaluated, demonstrating the good change detection performance of AFSNet and the continuity and accuracy of its results.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings which are required in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for detecting a change of a remote sensing image according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating the operation of remote sensing image change detection according to an embodiment of the present invention;
FIG. 3 is a schematic view of a portion of a remote sensing image of a scene of a LEVIR-CD dataset in accordance with an embodiment of the present invention;
FIG. 4 is a schematic view of a portion of a scene remote sensing image of an SVCD dataset according to an embodiment of the invention;
FIG. 5 is an overall network architecture diagram of an AFSNet in an embodiment of the present invention;
FIG. 6 is a diagram of a twin VGG16 network architecture in an embodiment of the invention;
FIG. 7 is a full-scale skip connection structure diagram in an embodiment of the present invention;
FIG. 8 is a flow chart of an AFSNet process for constructing an attention-guided full-scale feature aggregate change detection network model in an embodiment of the invention;
FIG. 9 is a flow chart for predicting the test set based on the AFSNet in an embodiment of the present invention;
FIG. 10 is a graph of the detection of changes in LEVIR-CD datasets in an embodiment of the present invention;
fig. 11 is a graph of the detection of changes on the SVCD dataset in an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the present invention, it should be understood that terms such as "comprises" or "comprising," etc., are intended to indicate the presence of features, numbers, steps, acts, components, portions, or combinations thereof disclosed in the present specification, and are not intended to exclude the possibility that one or more other features, numbers, steps, acts, components, portions, or combinations thereof are present or added.
In addition, it should be noted that, without conflict, the embodiments of the present invention and the features of the embodiments may be combined with each other. The invention will be described in detail below with reference to the drawings in connection with embodiments.
The embodiment of the invention relates to a method for detecting changes in remote sensing images by full-scale feature aggregation, comprising the following steps: acquiring an original remote sensing image and preprocessing it to obtain a preprocessed original remote sensing image; further cutting the preprocessed image into a plurality of small picture sets of 256×256 pixels and dividing them into a training set, a test set and a verification set; constructing the attention-guided full-scale feature aggregation change detection network model AFSNet, training it on the training set, and obtaining the weight file from each training round; verifying the weight files on the verification set and selecting the weight file with the highest precision rate; predicting the test set with AFSNet using that weight file to obtain prediction result data; and performing precision assessment on the prediction result data.
For a deep learning model, spatial information is lost during downsampling, yet it is critical for detecting the boundaries of changed regions, so spatial detail must be passed to the decoder during upsampling to ensure detection accuracy. Feature maps at different scales of a neural network carry different information: low-level feature maps contain the spatial information of changed regions and highlight their boundaries, while high-level semantic feature maps contain category information; the two contribute differently to the network's prediction, and both matter. Common change detection network structures are based mainly on UNet and UNet++, and the corresponding change detection methods perform well on remote sensing image change detection tasks. For fusing low-level detail features with high-level semantic features, UNet adopts single-branch skip connections between encoder and decoder stages of the same size, while UNet++ adopts dense skip connections between same-size stages of sub-networks of different depths. However, fusing high-level and lower-level features without supervision or attention easily causes information redundancy, hinders model fitting, degrades the change detection effect, and fails to meet requirements.
In view of the above requirements, this embodiment addresses the problems that the prior art relies on expert knowledge, with a classification process that is complex, labor- and time-consuming, inefficient and hard to apply at scale; and that hand-designed features involve complex, difficult processes, with accuracy affected by the choice of feature combinations and the image features left underused. A method for detecting changes in remote sensing images by full-scale feature aggregation is therefore provided. By constructing the attention-guided full-scale feature aggregation change detection network model AFSNet, the original remote sensing image data are collected and classified by data set, facilitating subsequent comparison and evaluation; a twin network extracts image features from the input remote sensing images, ensuring the independence of the two temporal images; a full-scale feature connection structure aggregates features of different scales and fuses the full feature information, improving detection precision, reducing computation, and mitigating information loss; features are refined by an attention mechanism, multi-scale prediction results are output, and a multi-side output fusion strategy produces the final detection result, fully aggregating features at all scales and improving change detection accuracy; and the final detection result is compared and evaluated, demonstrating the good change detection performance of AFSNet and the continuity and accuracy of its results.
In an alternative implementation manner of the present embodiment, as shown in fig. 1 and 2, fig. 1 shows a flowchart of a method for detecting a remote sensing image change in an embodiment of the present invention, and fig. 2 shows an operation flowchart of remote sensing image change detection in an embodiment of the present invention, including the following steps:
s101, acquiring an original remote sensing image, and performing image preprocessing on the original remote sensing image to obtain a preprocessed original remote sensing image;
in an optional implementation manner of this embodiment, the acquiring the original remote sensing image is based on satellite sensor acquisition.
In an optional implementation manner of this embodiment, the performing image preprocessing on the original remote sensing image includes: performing geometric fine correction on the original remote sensing image based on ENVI software to obtain an original remote sensing image subjected to geometric fine correction; and carrying out image fusion processing on the original remote sensing image subjected to the geometric fine correction processing to obtain the preprocessed original remote sensing image.
Specifically, ENVI (The Environment for Visualizing Images) is a complete remote sensing image processing platform and a leading software solution for extracting information from imagery quickly, conveniently and accurately. It is widely used in scientific research, environmental protection, meteorology, oil and mineral exploration, agriculture, forestry, medicine, defense and security, earth science, utility management, remote sensing engineering, water conservancy, oceanography, surveying and mapping, and urban and regional planning. In this embodiment it is used to preprocess the original remote sensing images acquired by satellite sensors.
In an optional implementation manner of this embodiment, the geometric fine correction processing removes geometric deformation from the original remote sensing image, eliminating its internal distortion and bringing it into geometric registration with the standard map image.
In an optional implementation manner of this embodiment, the image fusion processing extracts, to the greatest extent, the useful information in the multi-source channels of the geometrically corrected original remote sensing image and combines it into a high-quality remote sensing image.
The original remote sensing image is subjected to image preprocessing, so that the utilization rate of image information can be improved, the interpretation accuracy and reliability can be improved, the spatial resolution and the spectral resolution of the original remote sensing image can be improved, and the detection can be facilitated.
S102, cutting the preprocessed original remote sensing image into small picture sets of 256×256 pixels, and dividing the small picture sets into a training set, a test set and a verification set;
the large preprocessed original remote sensing image from step S101 is cut into small pictures of 256×256 pixels, the small pictures are combined into a picture set, and the picture set is divided into a training set, a test set and a verification set.
In an alternative implementation manner of this embodiment, the preprocessed original remote sensing images of 1024×1024 pixels are collected based on the LEVIR-CD data set (a remote sensing laboratory building change detection data set), further cut into a plurality of non-overlapping small picture sets of 256×256 pixels, and the small picture sets are divided into a training set, a test set and a verification set based on the division rule of the original data set.
Specifically, the LEVIR-CD data set is a large-scale remote sensing image change detection data set comprising 637 bi-temporal remote sensing image pairs with a resolution of 0.5 m and a size of 1024×1024 pixels. The remote sensing images are cut into 10192 pairs of non-overlapping small pictures of 256×256 pixels, which are divided into a training set, a verification set and a test set based on the division rule of the original data set: the training set contains 7120 pairs of remote sensing images, the verification set 1024 pairs, and the test set 2048 pairs. Fig. 3 shows part of a scene from the LEVIR-CD data set, where (a) is the remote sensing image at time T1, (b) is the remote sensing image at time T2, and (c) is the label.
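The non-overlapping cropping described above can be sketched in a few lines of NumPy; the function and variable names here are illustrative, not from the patent, and images are assumed to be loaded as H×W×C arrays.

```python
import numpy as np

def crop_tiles(image, tile=256):
    """Split an H x W x C image into non-overlapping tile x tile patches."""
    h, w = image.shape[:2]
    return [image[y:y + tile, x:x + tile]
            for y in range(0, h - tile + 1, tile)
            for x in range(0, w - tile + 1, tile)]

# A 1024 x 1024 image yields a 4 x 4 grid, i.e. 16 patches per image.
img = np.zeros((1024, 1024, 3), dtype=np.uint8)
patches = crop_tiles(img)
```

With 16 patches per image, the 637 LEVIR-CD pairs give 637 × 16 = 10192 patch pairs, matching the 7120 + 1024 + 2048 split stated above.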
In an optional implementation manner of this embodiment, the preprocessed original remote sensing image is collected based on the SVCD dataset, the preprocessed original remote sensing image is further cut into a plurality of small block picture sets with 256×256 pixel specifications, and the small block picture sets are divided into a training set, a test set and a verification set based on a division rule of the original dataset.
Specifically, the SVCD dataset (Season-Varying Change Detection dataset, a seasonal change detection dataset) comprises 11 pairs of multispectral dual-temporal remote sensing images with resolutions of 0.03 m to 1 m. The remote sensing images are cut into 16000 pairs of non-overlapping small images of 256×256 pixels, and the small images are divided into a training set, a verification set and a test set based on the dividing rule of the original dataset, wherein the training set comprises 10000 pairs of remote sensing images, the verification set comprises 3000 pairs, and the test set comprises 3000 pairs. Fig. 4 shows a schematic diagram of part of the scene remote sensing images of the SVCD dataset in an embodiment of the present invention, where (a) is the remote sensing image at time T1, (b) is the remote sensing image at time T2, and (c) is the label.
The original remote sensing image is divided into a training set, a testing set and a verification set based on the LEVIR-CD data set and the SVCD data set, so that the subsequent comparison and evaluation are facilitated.
S103, constructing an attention-guided full-scale feature aggregation change detection network model AFSNet, training the AFSNet based on the training set, and obtaining a weight file obtained by training each round;
in an optional implementation manner of this embodiment, the attention-guided full-scale feature aggregation change detection network model AFSNet (Attention-Guided Full-Scale Feature Aggregation Change Detection Network) is an end-to-end remote sensing image change detection network composed of an encoder network and a decoder network. Fig. 5 shows the overall network structure of the AFSNet in this embodiment, where Down-Sampling denotes downsampling, Up-Sampling denotes upsampling, Skip-Connection denotes a skip connection, Concatenation denotes channel-wise concatenation, Encoder Convolution Unit denotes an encoder convolution layer, MSOF (Multi-Side Output Fusion) denotes the strategy for fusing the multi-layer prediction maps, Attention Mechanism denotes the attention mechanism, and CBAM denotes the attention module applied to the feature maps.
In an optional implementation manner of this embodiment, as shown in fig. 6, fig. 6 shows a structure diagram of the twin VGG16 network in an embodiment of the present invention, where the encoder network adopts a twin VGG16 network structure and uses the twin network to extract features from the input dual-phase remote sensing images.
Specifically, the twin VGG16 network (Visual Geometry Group network) consists of thirteen convolution layers and three fully connected layers; the fully connected layers are removed to adapt the network to the dense-prediction change detection task, and the remaining convolution layers form the encoder backbone network. Each convolution layer has a 3×3 kernel and is followed by a batch normalization layer and a nonlinear activation function ReLU (Rectified Linear Unit) layer.
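A minimal PyTorch sketch of such a twin (weight-sharing) VGG16-style encoder is given below. It is an illustration under assumptions, not the patented implementation: the class and function names are hypothetical, and the channel widths 32–512 follow the five-level pyramid described later in this embodiment (standard VGG16 uses 64–512). The 2+2+3+3+3 stage configuration gives the thirteen convolution layers mentioned above.

```python
import torch
import torch.nn as nn

def vgg_stage(in_ch, out_ch, n_convs):
    # Each conv is 3x3 and is followed by BatchNorm and ReLU, as described above.
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class SiameseVGG16Encoder(nn.Module):
    """VGG16-style backbone with the fully connected layers removed (a sketch)."""
    def __init__(self):
        super().__init__()
        # (in_ch, out_ch, convs per stage): 2+2+3+3+3 = 13 conv layers total.
        cfg = [(3, 32, 2), (32, 64, 2), (64, 128, 3), (128, 256, 3), (256, 512, 3)]
        self.stages = nn.ModuleList([vgg_stage(i, o, n) for i, o, n in cfg])
        self.pool = nn.MaxPool2d(2)

    def forward_single(self, x):
        feats = []
        for k, stage in enumerate(self.stages):
            if k > 0:
                x = self.pool(x)   # halve resolution between levels
            x = stage(x)
            feats.append(x)
        return feats               # five feature maps, 32..512 channels

    def forward(self, t1, t2):
        # Weight sharing: the same stages process both temporal images.
        return self.forward_single(t1), self.forward_single(t2)

enc = SiameseVGG16Encoder()
fa, fb = enc(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
print([f.shape[1] for f in fa])  # [32, 64, 128, 256, 512]
```

Because both branches call the same modules, the parameter count is that of a single backbone, which is the parameter-saving property the text attributes to the twin design.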
In an alternative implementation manner of this embodiment, as shown in fig. 7, fig. 7 shows a full-scale skip connection structure diagram in an embodiment of the present invention, where Maxpooling is a maximum pooling layer, 1×1 Conv is a convolution layer with a convolution kernel size of 1×1, BN (Batch Normalization) is a batch normalization layer, ReLU is a nonlinear activation function layer, and Bilinear Up is a bilinear upsampling layer.
In an alternative implementation manner of the present embodiment, as shown in fig. 8, fig. 8 shows a flow chart of constructing an attention-guided full-scale feature aggregation change detection network model AFSNet in an embodiment of the present invention, including the following steps:
S801, extracting double-phase image features of the training set based on the attention-guided full-scale feature aggregation change detection network model AFSNet;
processing the two-phase images of the double-phase image based on a VGG16 network to respectively obtain a four-layer characteristic image and a five-layer characteristic image corresponding to the two-phase images of the double-phase image; and carrying out fusion processing on the four-layer feature images and the five-layer feature images corresponding to the two-phase images of the double-phase images to obtain the five-layer feature images of the double-phase images.
In an optional implementation manner of this embodiment, the encoder network processes the input dual-phase images based on the VGG16 network, so as to obtain the four-layer feature maps and five-layer feature maps corresponding to the two temporal images of the dual-phase image, where the four-layer feature maps have different sizes with channel numbers of 32, 64, 128 and 256, and the five-layer feature maps have different sizes with channel numbers of 32, 64, 128, 256 and 512.
The twin VGG16 network is used to extract the features of the two temporal images; independence between the two temporal images is guaranteed during feature extraction, and weight sharing reduces the number of model parameters while achieving better performance.
S802, carrying out feature fusion processing on the double-time-phase image features based on a full-scale jump connection mode to obtain full-scale feature information.
Performing maximum value pooling on those of the five-layer feature maps of the dual-phase image whose size is larger than that of the expected feature map, so as to obtain feature maps of the same size as the expected feature map; reducing the channel number of the five-layer feature maps whose size is smaller than or the same as the expected feature map, together with the pooled feature maps of the same size, based on a convolution layer with a convolution kernel size of 1×1; and connecting the feature maps together to obtain the expected feature map with full-scale feature information.
In an alternative implementation of this embodiment, the decoder network fuses the two-phase image features in a same-size interconnection manner: as shown in fig. 5, at each of the corresponding scales, the same-size feature maps from the two temporal branches are connected and fused into a single feature map.
In an alternative implementation of the present embodiment, taking one decoder scale's expected feature map as an example: the encoder feature maps whose size is larger than that of the expected feature map are first subjected to maximum value pooling to obtain feature maps of the same size as the expected feature map, and their channel number is then reduced based on a convolution layer with a convolution kernel size of 1×1.
Further, the encoder feature maps whose size is smaller than that of the expected feature map have their channel number reduced based on a convolution layer with a convolution kernel size of 1×1, and an upsampling operation is then performed to reach the expected size.
Further, the encoder feature maps that already have the same size as the expected feature map have their channel number reduced based on a convolution layer with a convolution kernel size of 1×1.
In an alternative implementation of this embodiment, the channel number of each of the five feature maps is reduced to one quarter of its pre-reduction channel number.
In an alternative implementation manner of this embodiment, the five processed feature maps are connected together, and the expected feature map with full-scale feature information is obtained based on a convolution layer with a convolution kernel size of 1×1.
The full-scale jump connection structure is adopted, so that information loss can be reduced, parameter and calculation amount are reduced, model training efficiency is accelerated, characteristics on all scales are fully aggregated, and change detection precision is improved.
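The full-scale aggregation at a single decoder scale can be sketched as below. This is a hedged illustration, not the patented network: the class name FullScaleSkip, the choice of adaptive max pooling, and the fused output width are assumptions; what it reproduces from the text is the pool-then-1×1-reduce path for larger maps, the 1×1-reduce-then-bilinear-upsample path for smaller maps, the quarter-channel reduction, and the final 1×1 fusion.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FullScaleSkip(nn.Module):
    """Aggregate five encoder levels into one decoder scale (a sketch)."""
    def __init__(self, in_chs=(32, 64, 128, 256, 512), target_level=2, out_ch=128):
        super().__init__()
        self.target = target_level
        # Each 1x1 conv cuts a level to one quarter of its channels, as in the text.
        self.reduce = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(c, c // 4, 1),
                           nn.BatchNorm2d(c // 4), nn.ReLU(inplace=True))
             for c in in_chs])
        fused_ch = sum(c // 4 for c in in_chs)
        self.fuse = nn.Conv2d(fused_ch, out_ch, 1)   # final 1x1 fusion

    def forward(self, feats):
        h, w = feats[self.target].shape[-2:]
        outs = []
        for k, f in enumerate(feats):
            if f.shape[-1] > w:                      # larger level: max-pool first
                f = F.adaptive_max_pool2d(f, (h, w))
            f = self.reduce[k](f)                    # 1x1 channel reduction
            if f.shape[-1] < w:                      # smaller level: bilinear upsample
                f = F.interpolate(f, size=(h, w), mode='bilinear',
                                  align_corners=False)
            outs.append(f)
        return self.fuse(torch.cat(outs, dim=1))     # concat then 1x1 conv

feats = [torch.randn(1, c, 256 // 2 ** k, 256 // 2 ** k)
         for k, c in enumerate((32, 64, 128, 256, 512))]
out = FullScaleSkip()(feats)
print(out.shape)  # torch.Size([1, 128, 64, 64])
```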
S104, verifying the weight file based on the verification set, and screening out the weight file with the highest precision rate;
in an optional implementation manner of this embodiment, verifying the expected feature map with full-scale feature information based on a verification set, and screening out the expected feature map with full-scale feature information with highest accuracy; using the precision rate F1 as an evaluation criterion for measuring the precision of the expected feature map with the full-scale feature information, wherein the calculation formulas comprise:

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1 = 2 × Precision × Recall / (Precision + Recall)

where Precision is the precision value, Recall is the recall rate, TP is the true positive class, FP is the false positive class, FN is the false negative class, and TN is the true negative class.
Specifically, in the evaluation of the precision indexes, a higher precision value means the detected changes are more accurate, and a higher recall rate means the model misses fewer changed pixels. The F1 score considers the precision and recall of the classification model simultaneously, and allows a balanced comparison of model accuracy.
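The F1 computation above can be expressed directly from the pixel counts (a minimal sketch; the function name f1_score is hypothetical):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from counts: TP (true positive), FP (false positive), FN (false negative)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# E.g. 90 correctly detected changed pixels, 10 false alarms, 30 misses:
# precision = 0.9, recall = 0.75, F1 ~ 0.818
print(round(f1_score(90, 10, 30), 3))  # 0.818
```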
S105, the AFSNet predicts the test set based on the weight file with the highest precision rate to obtain prediction result data;
in an alternative implementation manner of the present embodiment, as shown in fig. 9, fig. 9 shows a flowchart for predicting the test set based on the AFSNet in an embodiment of the present invention, including the following steps:
S901, carrying out feature refinement processing on the expected feature map with the full-scale feature information based on a CBAM attention module to obtain an expected feature map with the full-scale feature information after the feature refinement processing;
in an alternative implementation of this embodiment, the CBAM (Convolutional Block Attention Module) is a channel and spatial attention mechanism (Channel & Spatial Attention). Given an intermediate feature map, the CBAM module sequentially generates attention weight maps along the channel and spatial dimensions, then multiplies each weight map with the input feature map for adaptive feature refinement, producing the final feature map.
The full-scale feature information is subjected to feature refinement based on the CBAM attention module, so that key information is highlighted while interference from redundant information is suppressed.
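A compact CBAM sketch follows, assuming the widely used formulation (shared MLP over global average- and max-pooled channel descriptors, then a 7×7 convolution over channel-wise average and max maps); the reduction ratio of 8 is an assumption, not taken from the patent.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM sketch: channel attention then spatial attention,
    each multiplied back onto the feature map (adaptive refinement)."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Conv2d(ch, ch // reduction, 1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(ch // reduction, ch, 1))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention: shared MLP over global avg- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: 7x7 conv over channel-wise avg and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

y = CBAM(128)(torch.randn(1, 128, 64, 64))
print(y.shape)  # torch.Size([1, 128, 64, 64])
```

Note the serial order (channel first, then spatial) matches the description above; the output keeps the input shape, so the module drops into the decoder without further changes.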
S902, processing the expected feature map with the full-scale feature information after feature refinement processing based on a convolution layer with the convolution kernel size of 3 multiplied by 3 to obtain a prediction map corresponding to each scale;
the expected feature map with the full-scale feature information after feature refinement processing is processed based on a convolution layer with the convolution kernel size of 3×3, and a prediction map corresponding to each scale is obtained.
S903, sampling the prediction graph corresponding to each scale to obtain a prediction result corresponding to each scale;
the prediction graphs corresponding to the various scales are upsampled to the same size as the labels of the input images, so as to obtain the prediction results corresponding to the various scales.
S904, fusion processing is carried out on the prediction results of all scales based on a multi-side output fusion method, and prediction result data is output through a convolution layer with the convolution kernel size of 3 multiplied by 3.
The classification results of the prediction labels on each scale are fused by utilizing a multi-side output fusion strategy, and then the final result is output through a convolution layer with the convolution kernel size of 3 multiplied by 3.
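Steps S902 to S904 can be sketched together as follows. This is an illustration under assumptions: the class name, the two-channel (changed/unchanged) prediction maps, and the five scales are hypothetical choices consistent with, but not stated verbatim in, the text; the structure shown is upsample each scale's prediction to label size, concatenate, and emit the result through a 3×3 convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSideOutputFusion(nn.Module):
    """Sketch of MSOF: per-scale prediction maps are upsampled to label size,
    concatenated, and fused by a 3x3 convolution into the final result."""
    def __init__(self, pred_ch=2, n_scales=5):
        super().__init__()
        self.fuse = nn.Conv2d(pred_ch * n_scales, pred_ch,
                              kernel_size=3, padding=1)

    def forward(self, preds, size=(256, 256)):
        # S903: bring every scale's prediction to the label resolution.
        ups = [F.interpolate(p, size=size, mode='bilinear', align_corners=False)
               for p in preds]
        # S904: fuse the side outputs and emit through a 3x3 conv.
        return self.fuse(torch.cat(ups, dim=1))

preds = [torch.randn(1, 2, 256 // 2 ** k, 256 // 2 ** k) for k in range(5)]
out = MultiSideOutputFusion()(preds)
print(out.shape)  # torch.Size([1, 2, 256, 256])
```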
S106, performing precision assessment on the prediction result data.
In an alternative implementation of this embodiment, seven change detection networks, namely FC-EF, FC-Siam-Conc, FC-Siam-Diff, UNet++ MSOF, STANet, IFN and SNUNet-CD, are compared with the attention-guided full-scale feature aggregation change detection network model AFSNet.
It should be noted that the network training and testing were performed under the same conditions.
Specifically, the Precision value Precision, recall rate Recall, precision rate F1, intersection-over-union IoU and mean intersection-over-union mIoU are used as evaluation criteria for measuring the performance of the attention-guided full-scale feature aggregation change detection network model AFSNet, and the calculation formulas comprise:

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1 = 2 × Precision × Recall / (Precision + Recall)

IoU = TP / (TP + FP + FN)

mIoU = (1/2) × Σ_k TP_k / (TP_k + FP_k + FN_k)

wherein TP is the true positive class, FP is the false positive class, FN is the false negative class, TN is the true negative class, and k represents the class (0, 1).
Specifically, in the evaluation of the precision indexes, a higher precision value means the detected changes are more accurate, and a higher recall rate means the model misses fewer changed pixels. The F1 score considers the precision and recall of the classification model simultaneously and allows a balanced comparison of model accuracy. IoU measures the correlation between the ground truth and the prediction, and mIoU is calculated by averaging the IoU of the unchanged and changed classes; the larger their values (in the range 0-1), the better the prediction result.
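The per-class IoU and the two-class mIoU described above can be computed directly from label maps (a minimal sketch; function names are hypothetical):

```python
import numpy as np

def iou_per_class(pred: np.ndarray, label: np.ndarray, cls: int) -> float:
    """IoU for one class k: TP_k / (TP_k + FP_k + FN_k), i.e. intersection over union."""
    inter = np.logical_and(pred == cls, label == cls).sum()
    union = np.logical_or(pred == cls, label == cls).sum()
    return inter / union

def mean_iou(pred, label, classes=(0, 1)):
    # mIoU averages IoU over the unchanged (0) and changed (1) classes.
    return float(np.mean([iou_per_class(pred, label, c) for c in classes]))

pred = np.array([[0, 1], [1, 1]])
label = np.array([[0, 1], [0, 1]])
# class 1: intersection 2, union 3 -> 2/3; class 0: intersection 1, union 2 -> 1/2
print(round(mean_iou(pred, label), 4))  # 0.5833
```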
In an alternative implementation manner of this embodiment, the F1 and IoU values of the different change detection networks on the LEVIR-CD and SVCD test sets are listed; meanwhile, the input image size of each model is set to 256×256×3, and the floating point operations (FLOPs) and parameter amounts (Params) of each model are counted. The related values are shown in Table 1.
TABLE 1
As can be seen from Table 1, AFSNet achieves the highest accuracy on both the LEVIR-CD and SVCD data sets, with F1/IoU scores higher than the other models. Meanwhile, the parameter amount and FLOPs of AFSNet are lower than those of most classical change detection networks, achieving a good balance between computation and parameters while maintaining high scores on all precision indexes, which demonstrates the performance of AFSNet.
As shown in fig. 10 and fig. 11, fig. 10 shows a change detection result diagram on the LEVIR-CD dataset in an embodiment of the present invention, and fig. 11 shows a change detection result diagram on the SVCD dataset in an embodiment of the present invention, wherein (a) Image T1, (b) Image T2, (c) Ground Truth, (d) FC-EF, (e) FC-Siam-Conc, (f) FC-Siam-Diff, (g) UNet++ MSOF, (h) STANet, (i) IFN, (j) SNUNet-CD, (k) AFSNet.
From the above, the change detection effect of AFSNet on the high-resolution remote sensing image is better, and especially for the change types of the urban roads and the small blocks, the consistency and the accuracy of the change types can be better expressed.
In summary, the invention provides a method for detecting the change of a remote sensing image of full-scale feature aggregation, which is convenient for subsequent comparison and evaluation by constructing an attention-guided full-scale feature aggregation change detection network model AFSNet, collecting original remote sensing image data based on a data set and classifying; the twin network is used for extracting the image characteristics of the input remote sensing image, so that the independence between two-phase images of the double-phase image is ensured; the full-scale characteristic connecting structure is adopted to aggregate the characteristics of different scales, and the full characteristic information is fused, so that the detection precision is improved, the calculated amount is reduced, and the influence of information loss is relieved; feature refinement is carried out based on an attention mechanism, a multi-scale prediction result is output, a multi-side output fusion strategy is used for obtaining a final detection result, features on all scales are fully aggregated, and the accuracy of change detection is improved; and comparing and evaluating the final detection result, and showing a good change detection effect of the attention-guided full-scale feature aggregation change detection network model AFSNet, and expressing continuity and accuracy of the change detection result.
The following are examples of apparatus of the invention that can be used to perform the method of the invention.
The invention relates to a remote sensing image change detection system based on attention-guided full-scale feature aggregation, which comprises: the system comprises a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program, and the memory stores data generated in the working process of the system.
It should be noted that the processor executes the computer program to implement the method for detecting the change of the remote sensing image by using the full-scale feature aggregation.
In an alternative implementation of this embodiment, the system employs the PyTorch deep learning framework, running on an NVIDIA RTX6000 GPU with 24 GB of memory. Considering limited computing resources, the batch size is set to 10, AdamW is adopted as the optimizer with the corresponding hyper-parameters β_1 and β_2 set to 0.9 and 0.999 respectively, the learning rate is initially set to 5e-4 and decays by a factor of 0.8 every 5 iterations, and the total number of iterations is 100.
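The training configuration above maps onto standard PyTorch objects as sketched below; the single-convolution model is a hypothetical stand-in for AFSNet, while the optimizer and schedule values (AdamW, betas 0.9/0.999, initial lr 5e-4, ×0.8 decay every 5 iterations, 100 iterations total) follow the text.

```python
import torch

# Hypothetical stand-in model; only the optimizer/schedule settings come from the text.
model = torch.nn.Conv2d(3, 2, 3, padding=1)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, betas=(0.9, 0.999))
# Decay the learning rate by a factor of 0.8 every 5 iterations.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.8)

for it in range(100):        # 100 total iterations, as stated in the text
    # ... forward/backward over batches of 10 image pairs, optimizer.step(), then:
    scheduler.step()

# After 100 iterations the lr has decayed 20 times: 5e-4 * 0.8**20
print(optimizer.param_groups[0]['lr'])
```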
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program to instruct related hardware, the program may be stored in a computer readable storage medium, and the storage medium may include: read Only Memory (ROM), random access Memory (RAM, random Access Memory), magnetic or optical disk, and the like.
In summary, the embodiment of the device of the invention is through a remote sensing image change detection system based on attention-guided full-scale feature aggregation, the system is convenient for subsequent comparison and evaluation by constructing an attention-guided full-scale feature aggregation change detection network model AFSNet, collecting original remote sensing image data based on a data set and classifying; the twin network is used for extracting the image characteristics of the input remote sensing image, so that the independence between two-phase images of the double-phase image is ensured; the full-scale characteristic connecting structure is adopted to aggregate the characteristics of different scales, and the full characteristic information is fused, so that the detection precision is improved, the calculated amount is reduced, and the influence of information loss is relieved; feature refinement is carried out based on an attention mechanism, a multi-scale prediction result is output, a multi-side output fusion strategy is used for obtaining a final detection result, features on all scales are fully aggregated, and the accuracy of change detection is improved; and comparing and evaluating the final detection result, and showing a good change detection effect of the attention-guided full-scale feature aggregation change detection network model AFSNet, and expressing continuity and accuracy of the change detection result.
In addition, the foregoing has described in detail embodiments of the present invention, the principles and embodiments of the present invention have been described herein with reference to specific examples, the foregoing examples being provided to facilitate the understanding of the method of the present invention and the core idea thereof; meanwhile, as those skilled in the art will have variations in the specific embodiments and application scope in accordance with the ideas of the present invention, the present description should not be construed as limiting the present invention in view of the above.

Claims (5)

1. A method for detecting changes in remote sensing images of full-scale feature aggregation, the method comprising:
acquiring an original remote sensing image, and carrying out image preprocessing on the original remote sensing image to obtain a preprocessed original remote sensing image;
further cutting the preprocessed original remote sensing image into a plurality of small picture sets with 256 multiplied by 256 pixel specifications, and dividing the small picture sets into a training set, a test set and a verification set;
constructing an attention-guided full-scale feature aggregation change detection network model AFSNet, training the AFSNet based on the training set, and obtaining a weight file obtained by training each round, wherein constructing the attention-guided full-scale feature aggregation change detection network model AFSNet, training the AFSNet based on the training set, and obtaining the weight file obtained by training each round comprises the following steps: extracting the double-phase image features of the training set based on the attention-guided full-scale feature aggregation change detection network model AFSNet; performing feature fusion processing on the double-time-phase image features based on a full-scale jump connection mode to obtain full-scale feature information;
the extracting the dual-phase image feature of the training set based on the attention-guided full-scale feature aggregation change detection network model AFSNet comprises the following steps: processing the two-phase images of the double-phase images based on a VGG16 network to respectively obtain four-layer feature images and five-layer feature images corresponding to the two-phase images of the double-phase images; performing fusion processing on the four-layer feature images and the five-layer feature images corresponding to the two-phase images of the double-phase images to obtain five-layer feature images of the double-phase images;
the step of carrying out feature fusion processing on the image features based on the full-scale jump connection mode, and the step of obtaining full-scale feature information comprises the following steps: carrying out maximum value pooling treatment on the five-layer feature images of the double-time-phase image, wherein the size of the feature images is larger than that of the expected feature images, so as to obtain feature images with the same size as the expected feature images; performing channel number reduction processing on five-layer feature images of the double-phase image, which are smaller than and the same as the expected feature images in size and the feature images with the same size as the expected feature images, based on a convolution layer with the convolution kernel size of 1×1, and connecting the five-layer feature images together to obtain the expected feature images with full-scale feature information;
the step of verifying the weight file based on the verification set, and the step of screening the weight file with the highest precision rate comprises the following steps: verifying the expected feature map with the full-scale feature information based on the verification set, and screening out the expected feature map with the full-scale feature information with the highest precision rate; using the precision rate F1 as an evaluation criterion for measuring the precision of the expected feature map with the full-scale feature information, wherein a calculation formula comprises:
Precision = TP / (TP + FP), Recall = TP / (TP + FN), F1 = 2 × Precision × Recall / (Precision + Recall);
wherein Precision is a precision value, Recall is a recall rate, TP is a true positive class, FP is a false positive class, FN is a false negative class, and TN is a true negative class;
verifying the weight file based on the verification set, and screening out the weight file with the highest precision rate;
the AFSNet predicts the test set based on the weight file with the highest precision rate to obtain prediction result data, and predicts the test set based on the weight file with the highest precision rate to obtain the prediction result data, wherein the obtaining of the prediction result data comprises the following steps: performing feature refinement processing on the expected feature map with the full-scale feature information based on a CBAM attention module to obtain an expected feature map with the full-scale feature information after the feature refinement processing; processing the expected feature map with the full-scale feature information after feature refinement processing based on a convolution layer with the convolution kernel size of 3 multiplied by 3 to obtain a prediction map corresponding to each scale; sampling the prediction graphs corresponding to all scales to obtain prediction results corresponding to all scales; fusion processing is carried out on the prediction results of all scales based on a multi-side output fusion method, and prediction result data is output through a convolution layer with the convolution kernel size of 3 multiplied by 3;
and carrying out precision assessment on the prediction result data.
2. The method for detecting changes in a remote sensing image by full-scale feature aggregation according to claim 1, wherein the capturing an original remote sensing image and performing image preprocessing on the original remote sensing image comprises:
performing geometric fine correction on the original remote sensing image based on ENVI software to obtain an original remote sensing image subjected to geometric fine correction;
and carrying out image fusion processing on the original remote sensing image subjected to the geometric fine correction processing to obtain the preprocessed original remote sensing image.
3. The method for detecting the change of the remote sensing image by using the full-scale feature aggregation according to claim 1, wherein the further clipping the preprocessed original remote sensing image into a plurality of small block image sets with 256×256 pixel specifications, and dividing the small block image sets into a training set, a test set and a verification set comprises:
and collecting the preprocessed original remote sensing image with the size of 1024 multiplied by 1024 pixels based on the LEVIR-CD data set, further cutting the preprocessed original remote sensing image into a plurality of small picture sets with 256 multiplied by 256 pixels and no overlapping, and dividing the small picture sets into a training set, a testing set and a verification set based on the dividing rule of the original data set.
4. The method for detecting changes in a remote sensing image by full-scale feature aggregation according to claim 3, wherein the further cropping the preprocessed original remote sensing image into a plurality of small image sets with 256×256 pixel specifications, and dividing the small image sets into a training set, a test set and a verification set further comprises:
the method comprises the steps of collecting a preprocessed original remote sensing image based on an SVCD data set, further cutting the preprocessed original remote sensing image into a plurality of small picture sets with 256 multiplied by 256 pixel specifications, and dividing the small picture sets into a training set, a testing set and a verification set based on a dividing rule of the original data set.
5. The method for detecting changes in a remote sensing image for full-scale feature aggregation of claim 1, wherein said precision assessment of the predicted outcome data comprises:
using Precision value Precision, recall ratio Recall, precision ratio F1, cross-over ratio IoU and average cross-over ratio mIoU as evaluation criteria for measuring the performance of the attention-guided full-scale feature aggregation change detection network model AFSNet, wherein a calculation formula comprises:
Precision = TP / (TP + FP), Recall = TP / (TP + FN), F1 = 2 × Precision × Recall / (Precision + Recall), IoU = TP / (TP + FP + FN), mIoU = (1/2) × Σ_k TP_k / (TP_k + FP_k + FN_k);
wherein TP is a true positive class, FP is a false positive class, FN is a false negative class, TN is a true negative class, and k represents the class (0, 1).
CN202211003665.9A 2022-08-19 2022-08-19 Method for detecting change of remote sensing image by full-scale feature aggregation Active CN115456957B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211003665.9A CN115456957B (en) 2022-08-19 2022-08-19 Method for detecting change of remote sensing image by full-scale feature aggregation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211003665.9A CN115456957B (en) 2022-08-19 2022-08-19 Method for detecting change of remote sensing image by full-scale feature aggregation

Publications (2)

Publication Number Publication Date
CN115456957A CN115456957A (en) 2022-12-09
CN115456957B true CN115456957B (en) 2023-09-01

Family

ID=84299549

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211003665.9A Active CN115456957B (en) 2022-08-19 2022-08-19 Method for detecting change of remote sensing image by full-scale feature aggregation

Country Status (1)

Country Link
CN (1) CN115456957B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117218535B (en) * 2023-09-12 2024-05-14 黑龙江省网络空间研究中心(黑龙江省信息安全测评中心、黑龙江省国防科学技术研究院) SFA-based long-term forest coverage change detection method

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705457A (en) * 2019-09-29 2020-01-17 核工业北京地质研究院 Remote sensing image building change detection method
CN111325771A (en) * 2020-02-17 2020-06-23 武汉大学 High-resolution remote sensing image change detection method based on image fusion framework
CN112016436A (en) * 2020-08-28 2020-12-01 北京国遥新天地信息技术有限公司 Remote sensing image change detection method based on deep learning
CN112949549A (en) * 2021-03-19 2021-06-11 中山大学 Super-resolution-based change detection method for multi-resolution remote sensing image
CN112990112A (en) * 2021-04-20 2021-06-18 湖南大学 Edge-guided cyclic convolution neural network building change detection method and system
CN113378727A (en) * 2021-06-16 2021-09-10 武汉大学 Remote sensing image binary change detection method based on characteristic deviation alignment
CN113569810A (en) * 2021-08-30 2021-10-29 黄河水利委员会黄河水利科学研究院 Remote sensing image building change detection system and method based on deep learning
CN113887459A (en) * 2021-10-12 2022-01-04 中国矿业大学(北京) Open-pit mining area stope change area detection method based on improved Unet +
CN113936217A (en) * 2021-10-25 2022-01-14 华中师范大学 Priori semantic knowledge guided high-resolution remote sensing image weakly supervised building change detection method
CN114283164A (en) * 2022-03-02 2022-04-05 华南理工大学 Breast cancer pathological section image segmentation prediction system based on UNet3+
CN114332117A (en) * 2021-12-23 2022-04-12 杭州电子科技大学 Post-earthquake landform segmentation method based on UNET3+ and full-connection condition random field fusion
CN114359723A (en) * 2021-12-27 2022-04-15 陕西科技大学 Remote sensing image change detection method based on space spectrum feature fusion network
CN114743110A (en) * 2022-03-01 2022-07-12 西北大学 Multi-scale nested remote sensing image change detection method and system and computer terminal
CN114881916A (en) * 2022-03-02 2022-08-09 中国人民解放军战略支援部队信息工程大学 Remote sensing image change detection method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020102988A1 (en) * 2018-11-20 2020-05-28 西安电子科技大学 Feature fusion and dense connection based infrared plane target detection method

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110705457A (en) * 2019-09-29 2020-01-17 Beijing Research Institute of Uranium Geology Remote sensing image building change detection method
CN111325771A (en) * 2020-02-17 2020-06-23 Wuhan University High-resolution remote sensing image change detection method based on image fusion framework
CN112016436A (en) * 2020-08-28 2020-12-01 Beijing Guoyao Xintiandi Information Technology Co., Ltd. Remote sensing image change detection method based on deep learning
CN112949549A (en) * 2021-03-19 2021-06-11 Sun Yat-sen University Super-resolution-based change detection method for multi-resolution remote sensing image
CN112990112A (en) * 2021-04-20 2021-06-18 Hunan University Edge-guided cyclic convolution neural network building change detection method and system
CN113378727A (en) * 2021-06-16 2021-09-10 Wuhan University Remote sensing image binary change detection method based on characteristic deviation alignment
CN113569810A (en) * 2021-08-30 2021-10-29 Yellow River Institute of Hydraulic Research, Yellow River Conservancy Commission Remote sensing image building change detection system and method based on deep learning
CN113887459A (en) * 2021-10-12 2022-01-04 China University of Mining and Technology (Beijing) Open-pit mining area stope change area detection method based on improved UNet+
CN113936217A (en) * 2021-10-25 2022-01-14 Central China Normal University Priori semantic knowledge guided high-resolution remote sensing image weakly supervised building change detection method
CN114332117A (en) * 2021-12-23 2022-04-12 Hangzhou Dianzi University Post-earthquake landform segmentation method based on UNet3+ and fully connected conditional random field fusion
CN114359723A (en) * 2021-12-27 2022-04-15 Shaanxi University of Science and Technology Remote sensing image change detection method based on spatial-spectral feature fusion network
CN114743110A (en) * 2022-03-01 2022-07-12 Northwest University Multi-scale nested remote sensing image change detection method and system and computer terminal
CN114283164A (en) * 2022-03-02 2022-04-05 South China University of Technology Breast cancer pathological section image segmentation prediction system based on UNet3+
CN114881916A (en) * 2022-03-02 2022-08-09 PLA Strategic Support Force Information Engineering University Remote sensing image change detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SNUNet-CD: A Densely Connected Siamese Network for Change Detection of VHR Images; Sheng Fang et al.; IEEE Geoscience and Remote Sensing Letters; vol. 19; pp. 1-5, abstract; p. 233, left column, last paragraph *

Also Published As

Publication number Publication date
CN115456957A (en) 2022-12-09

Similar Documents

Publication Publication Date Title
CN112668494A (en) Small sample change detection method based on multi-scale feature extraction
CN110111345B (en) Attention network-based 3D point cloud segmentation method
CN110728658A (en) High-resolution remote sensing image weak target detection method based on deep learning
CN110705457A (en) Remote sensing image building change detection method
CN111080629A (en) Method for detecting image splicing tampering
CN108596108B (en) Aerial remote sensing image change detection method based on triple semantic relation learning
CN109871875B (en) Building change detection method based on deep learning
Wang et al. FE-YOLOv5: Feature enhancement network based on YOLOv5 for small object detection
CN112017192B (en) Glandular cell image segmentation method and glandular cell image segmentation system based on improved U-Net network
CN115601661A (en) Building change detection method for urban dynamic monitoring
CN113569788B (en) Building semantic segmentation network model training method, system and application method
CN113887472A (en) Remote sensing image cloud detection method based on cascade color and texture feature attention
CN115456957B (en) Method for detecting change of remote sensing image by full-scale feature aggregation
CN114596316A (en) Road image detail capturing method based on semantic segmentation
CN116363527A (en) Remote sensing image change detection method based on interaction feature perception
CN116703885A (en) Swin transducer-based surface defect detection method and system
CN116168240A (en) Arbitrary-direction dense ship target detection method based on attention enhancement
CN114913434A (en) High-resolution remote sensing image change detection method based on global relationship reasoning
CN115330703A (en) Remote sensing image cloud and cloud shadow detection method based on context information fusion
Li et al. MF-SRCDNet: Multi-feature fusion super-resolution building change detection framework for multi-sensor high-resolution remote sensing imagery
Hu et al. Supervised multi-scale attention-guided ship detection in optical remote sensing images
Li et al. Learning to holistically detect bridges from large-size VHR remote sensing imagery
CN116071645A (en) High-resolution remote sensing image building change detection method and device and electronic equipment
Zhang et al. A Novel SAR Images Change Detection Method Based on Dynamic TUNET-CRF Model
Li et al. Infrared Small Target Detection Algorithm Based on ISTD-CenterNet.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant