CN113420662A - Remote sensing image change detection method based on twin multi-scale difference feature fusion - Google Patents

Remote sensing image change detection method based on twin multi-scale difference feature fusion

Info

Publication number
CN113420662A
Authority
CN
China
Prior art keywords
remote sensing
module
change detection
twin
sensing image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110698130.7A
Other languages
Chinese (zh)
Other versions
CN113420662B (en)
Inventor
张向荣
余江南
何领
唐旭
陈璞花
程曦娜
冯婕
古晶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202110698130.7A priority Critical patent/CN113420662B/en
Publication of CN113420662A publication Critical patent/CN113420662A/en
Application granted granted Critical
Publication of CN113420662B publication Critical patent/CN113420662B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods


Abstract

The invention discloses a remote sensing image change detection method based on twin multi-scale difference feature fusion, which mainly solves two problems of the prior art: the feature fusion methods are too simple, and the segmented edge contours of targets in change detection results are not fine enough. The scheme is as follows: construct an encoder based on the ResNet-34 network; introduce a dual attention mechanism module; construct a twin residual multi-core pooling module; build a feature difference module; build a decoder with a single-branch structure; use these modules and the encoder-decoder to construct a remote sensing image change detection network based on twin multi-scale difference feature fusion, and train the network; then perform change detection on remote sensing images with the trained network. By introducing the dual attention mechanism module and constructing the twin residual multi-core pooling module and the feature difference module, the invention improves the expressive power of the features and thereby the change detection accuracy of remote sensing images; the method can be used for land use analysis, environmental monitoring, resource exploration and urban planning.

Description

Remote sensing image change detection method based on twin multi-scale difference feature fusion
Technical Field
The invention belongs to the technical field of image processing, and further relates to a remote sensing image change detection method that can be used for land use analysis, environmental monitoring, resource exploration and urban planning.
Background
In recent years, remote sensing images have not only played an important role in the military field but have also begun to be applied commercially. Their applications cover many important fields, such as national land surveys, geological surveys, water conservancy construction, oil exploration, mapping, environmental monitoring, earthquake prediction, railway and highway site selection, and archaeology. Technical improvements have shortened the imaging and updating cycles of remote sensing images: imagers can now acquire large amounts of data within a few days, meeting diverse real-time requirements for data acquisition, information processing, data updating and data analysis. With suitable algorithms, ground-object change information meeting specific requirements can then be extracted from high-resolution remote sensing images, which has a great impact on urban planning, road network construction, land monitoring and utilization, disaster resistance and mitigation, and other fields.
Remote sensing change detection determines and analyzes changes of ground objects in an area, including changes in their position and extent as well as changes in their properties and states, by applying image processing theory and mathematical modeling methods to multi-source remote sensing images covering the same surface area in different historical periods, together with related geographic data. At present, many change detection methods, such as FC-EF, FC-Siam-conc, FC-Siam-diff, DASNet and SNUNet-CD, have achieved good detection results.
Three similar structures, namely FC-EF, FC-Siam-conc and FC-Siam-diff, are proposed in the published document "Fully convolutional siamese networks for change detection [C]. 2018 25th IEEE International Conference on Image Processing (ICIP), 2018: 4063-4067". The basic idea of FC-EF is to stack the two time phases and feed them to the encoder before decoding; the drawback of this structure is that, since the images of the two time phases are similar, directly stacking them as input increases the number of convolution kernels in the encoder and introduces excessive redundancy, which increases model complexity and slows convergence. The FC-Siam-conc and FC-Siam-diff models are twin models that use a weight-sharing encoder, which reduces model redundancy; their drawback is that applying only stacking and difference operations to the feature maps is too simple, so the features extracted by the encoder cannot be fully exploited.
In the published document "DASNet: Dual attentive fully convolutional siamese networks for change detection in high-resolution satellite images [J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020, PP(99)", Chen J, Yuan Z, Peng J et al. propose the DASNet network, which uses VGG-16 to build a twin-structure network for extracting depth features and adds a dual attention mechanism module to the network to capture long-range dependencies, obtaining more discriminative feature representations and enhancing the recognition performance of the model. This method obtains remarkable detection results on the CDD and BCDD data sets, but its drawbacks are that the model converges slowly and that its structure is asymmetric and contains no long connections, making it difficult to extract very accurate contour information.
In the published document "SNUNet-CD: A Densely Connected Siamese Network for Change Detection of VHR Images. IEEE Geoscience and Remote Sensing Letters PP.99 (2021): 1-5", Fang S et al. propose the SNUNet-CD network, which adopts UNet++ as the backbone network and integrates deep supervision and an attention mechanism, achieving good results on the CDD data set. The drawbacks of this method are that only a channel stacking operation is used when the two time-phase features are fused in the network, which is too simple, and that the intermediate layers of the backbone network are complicated, so the model has many parameters and its training converges slowly.
Disclosure of Invention
The invention aims to provide a remote sensing image change detection method based on twin multi-scale difference feature fusion, so as to solve the problems that the prior art cannot fully utilize the features extracted by the encoder, that its feature fusion methods are too simple, and that the segmented edge contours of targets in the change detection results are not fine enough.
The technical idea of the invention is as follows: a twin residual multi-core pooling module SRMP and a feature difference module FDM are designed to provide feature difference information of different dimensions to the decoder, so that the features extracted by the encoder are utilized more fully and better difference information is obtained; meanwhile, the network encoder is designed as a twin structure, and long connections are introduced to fuse features of different levels, so that the segmented edge contours of targets in the change detection results are finer. The implementation scheme comprises the following steps:
(1) constructing a remote sensing image change detection network Sim-CDM-Net based on twin multi-scale difference feature fusion:
(1a) constructing an encoder with a twin structure: removing the global pooling layer and the fully connected layer at the back end of a ResNet-34 network to obtain a feature extraction network, building two identical feature extraction networks into an encoder with a twin structure, and sharing the weight parameters of the two branch feature extraction networks;
(1b) introducing a dual attention mechanism module DAM formed by connecting a channel attention module CAM and a spatial attention module SAM in parallel;
(1c) constructing a twin residual multi-core pooling module SRMP consisting of a feature difference calculation unit and 4 parallel pooling layers with kernels of different sizes;
(1d) building a feature difference module FDM formed by cascading a feature difference calculation unit and a residual block;
(1e) building a single-branch decoder consisting of a cascade of 4 decoding blocks with the same structure and an output module, wherein each decoding block is a cascade of a 1 × 1 convolution with stride 1, a 3 × 3 deconvolution with stride 2 and a 1 × 1 convolution with stride 1, and the input of each decoding block is the fusion of the output of the previous decoding block and the output of the feature difference module at the same level;
(1f) connecting the twin-structure encoder, the dual attention mechanism module DAM, the twin residual multi-core pooling module SRMP, the feature difference module FDM and the single-branch decoder to obtain the remote sensing image change detection network Sim-CDM-Net based on twin multi-scale difference feature fusion, and setting the loss function L_loss of the network as the sum of a Tversky loss function and a BCE loss function;
(2) training a remote sensing image change detection network Sim-CDM-Net based on twin multi-scale difference feature fusion:
(2a) downloading an optical remote sensing image change detection data set from a public data set webpage, and acquiring well divided training set data, verification set data and test set data;
(2b) for each pair of remote sensing image data in the training set, first performing an online augmentation operation and then normalizing the pixel values to the [0, 1] interval, while also performing pixel value normalization on the validation set data;
(2c) respectively inputting the normalized training data into the front ends of the two encoder branches of the Sim-CDM-Net network, obtaining prediction outputs through forward propagation, and using the loss function L_loss to calculate the loss value between each prediction output and its ground-truth label;
(2d) optimizing model parameters by using a gradient descent algorithm during back propagation, and obtaining a trained change detection model after multiple iterations until a loss function is converged;
(3) carrying out change detection by utilizing a trained remote sensing image change detection network Sim-CDM-Net based on twin multiscale difference feature fusion:
(3a) normalizing the pixels of a pair of remote sensing images from the test set to the [0, 1] interval, respectively inputting them into the front ends of the two encoder branches of the trained Sim-CDM-Net network, outputting a single-channel feature map through the trained encoder and decoder by forward propagation, and converting the single-channel feature map into a change detection output prediction probability map with values in [0, 1] through a Sigmoid activation function;
(3b) performing binarization processing on the probability map obtained in step (3a) to obtain a pixel-wise binary classification change detection output result map whose pixel values are 0 or 1.
Compared with the prior art, the invention has the following advantages:
First, the invention introduces the dual attention mechanism module DAM, which effectively captures the regions of the feature map that require attention, thereby better enhancing the high-level semantic information of the changed regions.
Second, the invention constructs the twin residual multi-core pooling module SRMP, which relies on multiple effective receptive fields to detect changed targets of different sizes and to provide difference features, thereby improving the detection results in narrow regions.
Third, the invention constructs the feature difference module FDM, which provides feature difference information of different dimensions to the decoder, so that the features of different levels extracted by the encoder are utilized more fully, convergence is accelerated, and the detection results for contours, edges and small targets are improved.
Fourth, the invention constructs the Sim-CDM-Net network, whose encoder is built by removing the global pooling layer and the fully connected layer at the back end of a ResNet-34 network, so that weights pre-trained on the ImageNet data set can be loaded into the encoder as its initial parameters before training on the input data, which helps reduce the number of samples required to train the network and improves the speed of model training and convergence.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is an overall framework of a twin multiscale difference feature fusion based remote sensing image change detection network Sim-CDM-Net model constructed in the invention;
FIG. 3 is a structural diagram of the dual attention mechanism module DAM introduced in the present invention;
FIG. 4 is a block structure of a twin residual multi-core pooling module SRMP constructed in the present invention;
FIG. 5 is a block structure of a feature difference module FDM constructed in the present invention;
FIG. 6 is a comparison of change detection on CDD data sets using the present invention and the prior SNUNet-CD method, respectively;
FIG. 7 is a comparison of change detection on BCDD datasets using the present invention and existing DASNet methods, respectively.
Detailed Description
The embodiments and effects of the present invention are further described below with reference to the accompanying drawings.
The implementation of the invention comprises three parts: constructing the change detection model, acquiring data and training the change detection model, and performing change detection with the trained change detection model.
Referring to fig. 1, the implementation steps of each part include the following:
firstly, a change detection model is built.
Step 1. Construct the encoder based on the ResNet-34 network.
(1.1) Remove the global pooling layer and the fully connected layer at the back end of the ResNet-34 network and use the remaining layers as the feature extraction network, which is composed of 4 encoding blocks of similar structure and an average pooling layer in cascade; each encoding block is a cascade of several residual blocks of identical structure, and the 4 encoding blocks contain 3, 4, 6 and 3 residual blocks in sequence.
The structure of each residual block is, in sequence: the 1st convolutional layer → the 1st batch normalization layer → the 1st activation function layer → the 2nd convolutional layer → the 2nd batch normalization layer → the addition calculation unit → the 2nd activation function layer. Wherein:
the 1st and 2nd convolutional layers are constructed with 2-D convolution kernels; the number of convolution kernels is the same as the number of channels of the input feature map, the kernel size is 3 × 3, and the convolution stride is set to 1. These layers perform feature extraction on the input image feature map, and their output is calculated as follows:

    x_k = Σ_{m=1}^{f_in} (x_m * w_mk) + b_k,  k = 1, 2, ..., f_out

where x_k is the kth feature map output by the convolutional layer, f_out is the number of convolution kernels, f_in is the number of input feature maps, x_m is the mth channel of the input feature map, * denotes the two-dimensional convolution of feature maps, and w_mk and b_k are the weight and bias parameters of the convolutional layer, respectively;
the 1 st and 2 nd batch normalization layers are used for normalizing the characteristics of the input samples to change the data into 0-1 distributed data with the mean value of 0 and the standard deviation of 1, and then carrying out scale transformation and migration on the data to obtain the output of the batch normalization layer
    y_k = γ · x̂_k + β

where γ and β represent the two learned parameters of the batch normalization layer, γ being a scale factor and β a translation factor, and x̂_k represents the output after 0-1 distribution normalization of the data in the batch, calculated as follows:

    x̂_k = (x_k − μ_B) / sqrt(σ_B² + ε)
    μ_B = (1/B) Σ_{i=1}^{B} x_i
    σ_B² = (1/B) Σ_{i=1}^{B} (x_i − μ_B)²

where x_k represents the input of the kth channel of the batch normalization layer, ε is a small positive number used to avoid a divisor of 0, μ_B is the mean of each batch of training data, σ_B² is the variance of each batch of training data, and B is the size of the batch;
the addition calculation unit is used for summing the elements of the corresponding positions of the two feature maps connected in short range in the residual block to obtain the output O of the addition calculation unit:
O=F+I
wherein, F represents the input of the 1 st convolution layer of the residual block where the addition computing unit is located, and I represents the output of the 2 nd normalization layer of the residual block where the addition computing unit is located;
the 1 st and 2 nd activation layers are both ReLU activation functions and are defined as follows:
ReLU(x)=max(0,x)
where max(·) denotes the operation of taking the maximum value and x denotes the input.
(1.2) Connect two feature extraction networks of identical structure in parallel to form the twin-structure encoder, with the weight parameters of the two feature extraction networks shared.
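As an illustration of steps (1.1)-(1.2), the following is a minimal PyTorch sketch of such a twin encoder; the class name SiameseEncoder, the use of torchvision's resnet34, and the returned per-level feature list are assumptions made for this sketch, not details fixed by the invention.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet34

    class SiameseEncoder(nn.Module):
        """Twin-structure encoder: ResNet-34 without its global pooling and
        fully connected layers, applied to both time phases with shared weights."""
        def __init__(self, pretrained=True):
            super().__init__()
            net = resnet34(pretrained=pretrained)  # ImageNet weights as initial parameters
            self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
            # the 4 encoding blocks, containing 3, 4, 6 and 3 residual blocks
            self.blocks = nn.ModuleList([net.layer1, net.layer2, net.layer3, net.layer4])

        def encode(self, x):
            x = self.stem(x)
            feats = []
            for block in self.blocks:
                x = block(x)
                feats.append(x)  # per-level features, kept for the FDM long connections
            return feats

        def forward(self, x1, x2):
            # weight sharing: the same layers process both time phases
            return self.encode(x1), self.encode(x2)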
Step 2. Introduce the dual attention mechanism module DAM, as shown in FIG. 3.
(2.1) Connect the channel attention module CAM and the spatial attention module SAM in parallel to form the dual attention mechanism module DAM, which performs feature enhancement in the channel dimension and the spatial dimension respectively.
(2.1.1) The channel attention module CAM has the structure: the 1st convolutional layer → the 1st activation function layer → the channel attention calculation unit → the 2nd convolutional layer → the batch normalization layer → the 2nd activation function layer → the 3rd convolutional layer. Wherein: the 1st, 2nd and 3rd convolutional layers use 2-D convolution kernels, the number of kernels is the same as the number of channels of the input feature map, the kernel sizes are 3 × 3, 3 × 3 and 1 × 1 respectively, the convolution stride is set to 1, and the 1st and 2nd activation layers are both ReLU activation functions.
(2.1.2) The spatial attention module SAM has the structure: the first convolutional layer → the first activation function layer → the spatial attention calculation unit → the second convolutional layer → the batch normalization layer → the second activation function layer → the third convolutional layer. Wherein: the first, second and third convolutional layers use 2-D convolution kernels, the number of kernels is the same as the number of channels of the input feature map, the kernel sizes are 3 × 3, 3 × 3 and 1 × 1 respectively, the convolution stride is set to 1, and the first and second activation layers are both ReLU activation functions.
(2.1.3) Connect the channel attention module CAM and the spatial attention module SAM in parallel to obtain the dual attention mechanism module DAM.
(2.2) The dual attention mechanism module DAM performs feature enhancement on the high-dimensional features output by the encoder, through the channel attention module CAM in the channel dimension and the spatial attention module SAM in the spatial dimension, implemented as follows:
(2.2.1) The channel attention module CAM first passes the input feature map through the 1st convolutional layer and the 1st activation function layer to obtain the initially processed feature map A; A is then input into the channel attention calculation unit, which enhances or suppresses the responses of different channels of the input feature map to obtain the channel feature map G_c; finally, G_c is passed in turn through the 2nd convolutional layer, the batch normalization layer, the 2nd activation function layer and the 3rd convolutional layer for feature refinement, giving the channel-enhanced output feature map H_c.
The channel attention calculation unit operates as follows:
The input feature map A ∈ R^{C×H×W} is reshaped by a matrix dimension-changing operation into the reduced channel feature map B_c ∈ R^{C×N}, where C is the number of feature map channels, H is the height of the feature map, W is the width of the feature map, and N = H×W;
B_c is transposed to obtain the transposed channel feature map C_c ∈ R^{N×C}, and the reduced channel feature map B_c is matrix-multiplied with the transposed channel feature map C_c to obtain the channel incidence matrix D_c ∈ R^{C×C};
the channel incidence matrix D_c is mapped into (0, 1) through a softmax activation layer to obtain the channel attention matrix X_c ∈ R^{C×C}; X_c measures the mutual responses between the channel dimensions of the original input feature map A and is calculated as follows:

    x_ji = exp(A_i · A_j) / Σ_{i=1}^{C} exp(A_i · A_j)

where A_i and A_j are the ith and jth channels of the input feature map A, and x_ji is the response of the ith channel of A on its jth channel;
the channel attention matrix X_c is matrix-multiplied with the reduced channel feature map B_c ∈ R^{C×N} and then multiplied by a scale coefficient β to obtain the channel attention feature map E_c ∈ R^{C×N};
E_c is reshaped by a matrix dimension-changing operation into the up-dimensioned channel feature map F_c ∈ R^{C×H×W}, and F_c is added to the input feature map A to obtain the final output channel feature map G_c ∈ R^{C×H×W}; each channel of G_c is a weighted sum of the corresponding channel of the original input feature map A and all channels, calculated as follows:

    G_j = β Σ_{i=1}^{C} (x_ji · A_i) + A_j

where G_j is the jth channel of the output channel feature map G_c, A_i and A_j are the ith and jth channels of the input feature map A, and β is a scale coefficient that is initialized to 0 and gradually learns a larger weight.
(2.2.2) The spatial attention module SAM first passes the input feature map through the first convolutional layer and the first activation function layer to obtain the initially processed feature map A; A is then input into the spatial attention calculation unit, which finds the salient regions with large response values in the feature map, giving the spatial feature map G_s; finally, G_s is passed through the second convolutional layer, the batch normalization layer, the second activation function layer and the third convolutional layer for feature refinement, giving the spatially enhanced output feature map H_s.
The spatial attention calculation unit operates as follows:
The input feature map A ∈ R^{C×H×W} is first input into 2 parallel 1 × 1 2-D convolutional layers to obtain 2 preprocessed spatial feature maps B_s ∈ R^{C×H×W} and C_s ∈ R^{C×H×W};
then B_s and C_s are reshaped by a matrix dimension-changing operation into the reduced spatial feature maps B'_s ∈ R^{C×N} and C'_s ∈ R^{C×N}, and the transpose of B'_s is matrix-multiplied with C'_s to obtain the spatial incidence matrix D_s ∈ R^{N×N}, where N = H×W;
D_s is passed through a softmax activation layer to obtain the spatial attention matrix S_s ∈ R^{N×N}; S_s measures the mutual responses between the elements of the spatial dimension of the original input feature map A and is calculated as follows:

    s_ji = exp(B_i · C_j) / Σ_{i=1}^{N} exp(B_i · C_j)

where B_i is the ith position of the preprocessed spatial feature map B_s, C_j is the jth position of the preprocessed spatial feature map C_s, and s_ji is the response of the ith position of the input feature map A at its jth position;
the reduced spatial feature map B'_s ∈ R^{C×N} is matrix-multiplied with the transpose of the spatial attention matrix S_s and then multiplied by a scale coefficient α to obtain the spatial attention feature map E_s ∈ R^{C×N};
E_s is reshaped by a matrix dimension-changing operation into the up-dimensioned spatial feature map F_s ∈ R^{C×H×W}, and F_s is added to the original input feature map A to obtain the final output spatial feature map G_s ∈ R^{C×H×W}; each position of G_s is a weighted sum over all positions of the original feature map A, calculated as follows:

    G_j = α Σ_{i=1}^{N} (s_ji · B_i) + A_j

where G_j is the jth position of the output spatial feature map G_s, B_i is the ith position of the preprocessed spatial feature map B_s, A_j is the jth position of the input feature map A, and α is a scale coefficient that is initialized to 0 and gradually learns a larger weight.
(2.2.3) The channel-enhanced feature map H_c output by the channel attention module CAM and the spatially enhanced feature map H_s output by the spatial attention module SAM are added to the original input high-dimensional feature map A to obtain the final output high-dimensional feature map of the DAM.
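The two attention calculation units at the heart of the DAM can be sketched in PyTorch as follows. This is a minimal illustration of the matrix operations of (2.2.1) and (2.2.2) only; the surrounding 3 × 3 / 1 × 1 convolution, batch normalization and ReLU layers are omitted, and the module and variable names are assumptions of this sketch.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ChannelAttentionUnit(nn.Module):
        """Channel attention calculation unit of the CAM (sketch)."""
        def __init__(self):
            super().__init__()
            self.beta = nn.Parameter(torch.zeros(1))    # scale coefficient, initialized to 0

        def forward(self, a):                           # a: (B, C, H, W)
            b, c, h, w = a.shape
            bc = a.view(b, c, -1)                       # reduced map B_c: (B, C, N)
            dc = torch.bmm(bc, bc.transpose(1, 2))      # incidence matrix D_c: (B, C, C)
            xc = F.softmax(dc, dim=-1)                  # channel attention matrix X_c
            ec = torch.bmm(xc, bc)                      # attention feature map E_c
            fc = ec.view(b, c, h, w)                    # up-dimensioned map F_c
            return self.beta * fc + a                   # output G_c

    class SpatialAttentionUnit(nn.Module):
        """Spatial attention calculation unit of the SAM (sketch)."""
        def __init__(self, channels):
            super().__init__()
            self.conv_b = nn.Conv2d(channels, channels, 1)  # produces B_s
            self.conv_c = nn.Conv2d(channels, channels, 1)  # produces C_s
            self.alpha = nn.Parameter(torch.zeros(1))

        def forward(self, a):
            b, c, h, w = a.shape
            bs = self.conv_b(a).view(b, c, -1)          # B'_s: (B, C, N)
            cs = self.conv_c(a).view(b, c, -1)          # C'_s: (B, C, N)
            ds = torch.bmm(bs.transpose(1, 2), cs)      # incidence matrix D_s: (B, N, N)
            ss = F.softmax(ds, dim=-1)                  # spatial attention matrix S_s
            es = torch.bmm(bs, ss.transpose(1, 2))      # E_s: (B, C, N)
            fs = es.view(b, c, h, w)                    # F_s
            return self.alpha * fs + a                  # output G_s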
Step 3. Build the twin residual multi-core pooling module SRMP, as shown in FIG. 4.
(3.1) The twin residual multi-core pooling module SRMP has the structure: feature difference calculation unit → multi-core pooling layer → convolutional layer → upsampling layer, where the multi-core pooling layer is formed by 4 parallel pooling layers with kernels of different sizes.
(3.2) The operation flow of the twin residual multi-core pooling module SRMP is as follows:
First, the feature maps output by the two branches of the encoder in step 1 are passed through the two dual attention mechanism modules DAM of step 2 to obtain the enhanced two-time-phase high-dimensional feature maps A_1 and A_2, which form the input of the SRMP module; A_1 and A_2 are differenced by the feature difference calculation unit, which takes the absolute value of the difference to obtain the feature difference map A_d, calculated as:

    A_d = |A_1 − A_2|;

Then, the difference map A_d is input into the multi-core pooling layer, and max pooling is performed by 4 pooling layers with kernels of different sizes; the pooling kernel sizes of the 4 pooling layers are set to 2 × 2, 3 × 3, 4 × 4 and 5 × 5, with strides 2, 3, 4 and 5 respectively, yielding 4 pooled feature difference maps of different resolutions A_p2, A_p3, A_p4, A_p5;
Next, the feature difference maps of different resolutions A_d, A_p2, A_p3, A_p4, A_p5 are each convolved by one of 5 2-D convolutional layers with shared weights; the kernel size is set to 1 × 1, the convolution stride to 1, and the number of output channels to 1, giving 5 single-channel feature difference maps of different resolutions A_cd, A_c2, A_c3, A_c4, A_c5;
Then, the 5 single-channel feature difference maps A_cd, A_c2, A_c3, A_c4, A_c5 are upsampled by nearest neighbor interpolation to the same resolution as the input feature maps A_1 and A_2, giving the upsampled single-channel feature difference maps A_ud, A_u2, A_u3, A_u4, A_u5; the interpolation is calculated as follows:

    f(i+u, j+v) = (1−u)(1−v)·f(i, j) + (1−u)·v·f(i, j+1) + u·(1−v)·f(i+1, j) + u·v·f(i+1, j+1)

where u and v are both values greater than 0 and less than 1, f(i+u, j+v) is the computed gray value at the position to be solved on the upsampled single-channel feature difference map, and f(i, j), f(i, j+1), f(i+1, j) and f(i+1, j+1) are the known gray values at the corresponding positions on the single-channel feature difference map;
Finally, the 5 upsampled single-channel feature difference maps A_ud, A_u2, A_u3, A_u4 and A_u5 are fused with the two-time-phase input high-dimensional features A_1 and A_2, i.e., the 7 feature maps A_ud, A_u2, A_u3, A_u4, A_u5, A_1 and A_2 are stacked along the channel dimension to obtain the final output high-dimensional fusion features of the SRMP module.
Step 4. Build the feature difference module FDM, as shown in FIG. 5.
(4.1) The feature difference module FDM has the structure: feature difference calculation unit → convolution block → residual block, where the structure of the feature difference calculation unit is the same as in step (3.2) and the structure of the residual block is the same as in step (1.1); the structure of the convolution block is: convolutional layer → batch normalization layer → ReLU activation function layer, with a 3 × 3 convolution kernel, convolution stride 1, and the number of convolution kernels set as follows:
    C_O = ⌈C_I / 2⌉

where C_O is the number of convolution kernels, i.e., the number of output channels, C_I is the number of channels of the second-highest-dimensional feature difference map B_d, and ⌈·⌉ denotes the ceiling function.
(4.2) The operation flow of the feature difference module FDM is as follows:
First, for the two-time-phase second-highest-dimensional feature maps B_1 and B_2 output by 2 encoding blocks at the same level of the encoder, B_1 and B_2 are input into the feature difference calculation unit, which takes the difference and its absolute value to obtain the second-highest-dimensional feature difference map B_d;
Then, B_d undergoes channel dimensionality reduction through 1 convolution block, giving the channel-reduced second-highest-dimensional feature difference map B_r1;
Next, B_r1 is input into 1 residual block for further feature extraction, giving the output second-highest-dimensional feature difference map B_r2;
Finally, the two-time-phase second-highest-dimensional feature maps B_1, B_2 and the output second-highest-dimensional feature difference map B_r2 are fused, i.e., the 3 feature maps B_1, B_2 and B_r2 are stacked along the channel dimension to obtain the final output second-highest-dimensional fusion features of the FDM module.
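The FDM translates into a few lines of PyTorch, sketched below; the reuse of torchvision's BasicBlock as the residual block and the ⌈C_I/2⌉ channel-reduction rule (as reconstructed above) are assumptions of this sketch.

    import math
    import torch
    import torch.nn as nn
    from torchvision.models.resnet import BasicBlock

    class FDM(nn.Module):
        """Feature difference module: |B_1 - B_2| -> convolution block ->
        residual block, then channel stacking with B_1 and B_2 (sketch)."""
        def __init__(self, in_channels):
            super().__init__()
            out_channels = math.ceil(in_channels / 2)    # assumed reduction rule
            self.conv_block = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 3, padding=1),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True))
            self.residual = BasicBlock(out_channels, out_channels)

        def forward(self, b1, b2):
            bd = torch.abs(b1 - b2)       # second-highest-dimensional difference map B_d
            br1 = self.conv_block(bd)     # channel-reduced map B_r1
            br2 = self.residual(br1)      # further feature extraction, B_r2
            return torch.cat([b1, b2, br2], dim=1)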
Step 5. Build the decoder with a single-branch structure.
(5.1) Construct the decoder as a cascade of 4 decoding blocks with the same structure and an output module:
the structure of each decoding block is: the 1st convolutional layer → the 1st deconvolution layer → the 2nd convolutional layer, where the kernel size of the 1st convolutional layer is 1 × 1, its stride is 1, and its number of kernels is 1/4 of the number of channels of the input feature map; the kernel size of the 1st deconvolution layer is 3 × 3, its stride is 2, its output padding is set to 1 pixel, and its number of kernels is the same as the number of channels of its input feature map; the kernel size of the 2nd convolutional layer is 1 × 1, its stride is 1, and its number of kernels is the same as the number of channels of its input feature map;
the structure of the output module is: the first deconvolution layer → the first activation function layer → the first convolutional layer → the second activation function layer → the second convolutional layer, where the kernel size of the first deconvolution layer is 4 × 4, its stride is 2, and its number of kernels is 32; the first and second activation function layers are ReLU functions; the kernel size of the first convolutional layer is 3 × 3, its stride is 1, and its number of kernels is 32; the kernel size of the second convolutional layer is 3 × 3, its stride is 1, and its number of kernels is set to 1.
(5.2) The decoder progressively restores the smallest input feature map to the largest feature map through the four cascaded decoding blocks, and then integrates the information of the largest feature map through the output module to obtain the final single-channel feature map output by the decoder, whose size is consistent with the size of the encoder's input image; the input of each decoding block in the decoder is the fusion of the output feature map of the previous decoding block and the same-level low-dimensional feature map output by the feature difference module FDM of step 4, the fusion being channel stacking.
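A decoding block as specified in (5.1) can be sketched in PyTorch as below; the class name and the convention that in_channels already counts the channel-stacked FDM features are assumptions of this sketch.

    import torch.nn as nn

    class DecodeBlock(nn.Module):
        """One decoding block: 1x1 conv (stride 1) -> 3x3 transposed conv
        (stride 2, output padding 1) -> 1x1 conv (stride 1), per step (5.1)."""
        def __init__(self, in_channels):
            super().__init__()
            mid = in_channels // 4                       # 1/4 of the input channels
            self.conv1 = nn.Conv2d(in_channels, mid, kernel_size=1)
            self.deconv = nn.ConvTranspose2d(mid, mid, kernel_size=3, stride=2,
                                             padding=1, output_padding=1)  # doubles H and W
            self.conv2 = nn.Conv2d(mid, mid, kernel_size=1)

        def forward(self, x):                            # x: previous output + FDM features
            return self.conv2(self.deconv(self.conv1(x)))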
Step 6. Construct the remote sensing image change detection network Sim-CDM-Net based on twin multi-scale difference feature fusion, as shown in FIG. 2.
(6.1) Connect the twin-structure encoder of step 1, the dual attention mechanism module DAM of step 2, the twin residual multi-core pooling module SRMP of step 3, the feature difference module FDM of step 4 and the single-branch decoder of step 5 to obtain the Sim-CDM-Net network:
(6.1.1) The back end of each branch of the twin-structure encoder constructed in step 1 is cascaded with a dual attention mechanism module DAM constructed in step 2, the output feature map of each encoder branch being the input of its DAM;
(6.1.2) The two parallel dual attention mechanism modules DAM of step 2 are cascaded with the twin residual multi-core pooling module SRMP constructed in step 3, the high-dimensional output feature maps of the two branches' DAMs being channel-stacked and fused, and the fused high-dimensional feature map being the input of the SRMP;
(6.1.3) The twin residual multi-core pooling module SRMP of step 3 is cascaded with the single-branch decoder constructed in step 5, the output feature map of the SRMP being the initial input of the single-branch decoder;
(6.1.4) The feature difference module FDM constructed in step 4 is bridged between the two branches of the encoder of step 1 and the decoder of step 5, the low-dimensional output feature maps of the encoding blocks at each level of the two encoder branches being channel-stacked and fused, and the fused low-dimensional feature maps being the input of the decoding blocks at the same level of the decoder;
(6.2) Set the loss function L_loss of the Sim-CDM-Net network as the sum of the Tversky loss function and the BCE loss function, expressed as follows:

    L_loss = L_T + L_BCE

    L_BCE = −(1/N) Σ_{i=1}^{N} [y_i·log(p_i) + (1 − y_i)·log(1 − p_i)]

    L_T = 1 − |A∩B| / (|A∩B| + α|A−B| + β|B−A|)

where L_BCE is the BCE loss function, i.e., the binary cross-entropy loss function, N is the number of pixels, y_i is the ground-truth label value of a sample, and p_i is the predicted probability value output by the model; L_T is the Tversky loss function, A is the predicted output value, B is the ground-truth label value, |A−B| corresponds to false positives, |B−A| corresponds to false negatives, and α and β are hyper-parameters controlling false positives and false negatives respectively. Because of the imbalance between positive and negative samples in remote sensing images, α and β must be adjusted to control the balance between false positives and false negatives; here the commonly used values α = 0.3 and β = 0.7 are taken.
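A minimal PyTorch sketch of this combined loss follows; the smoothing term eps and the pixel-wise summation are implementation assumptions, and α = 0.3, β = 0.7 mirror the values above.

    import torch
    import torch.nn.functional as F

    def tversky_bce_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
        """L_loss = L_T + L_BCE; pred holds probabilities in (0, 1),
        target holds ground-truth labels in {0, 1} (float tensors)."""
        bce = F.binary_cross_entropy(pred, target)       # L_BCE
        tp = (pred * target).sum()                       # |A ∩ B|
        fp = (pred * (1 - target)).sum()                 # |A - B|, false positives
        fn = ((1 - pred) * target).sum()                 # |B - A|, false negatives
        tversky = 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)  # L_T
        return bce + tversky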
Part two: acquire data and train the change detection model.
Step 7. Train the remote sensing image change detection network Sim-CDM-Net based on twin multi-scale difference feature fusion.
(7.1) downloading an optical remote sensing image change detection data set from a public data set webpage, and acquiring well divided training set data, verification set data and test set data;
(7.2) Perform an online augmentation operation on each pair of remote sensing image data in the training set: before inputting a pair into the network, randomly apply one or more of horizontal flipping, vertical flipping, diagonal flipping, image shifting and size scaling to it, thereby obtaining multiple pairs of image data;
(7.3) Normalize the pixel values of the augmented training set data and of the unaugmented validation set data to the [0, 1] interval, the normalization being defined as follows:

    R = (I − min(I)) / (max(I) − min(I))

where R is the optical remote sensing image after normalization, I is the optical remote sensing image before normalization, min(·) takes the minimum value, and max(·) takes the maximum value;
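Steps (7.2)-(7.3) can be sketched as follows; applying the same random flips to both time phases and to the label keeps the pair registered, and the helper names are assumptions of this sketch (diagonal flipping, shifting and scaling are omitted for brevity).

    import torch

    def normalize(img):
        """Min-max normalization of pixel values to [0, 1], per step (7.3)."""
        img = img.float()
        return (img - img.min()) / (img.max() - img.min() + 1e-12)

    def augment_pair(img1, img2, label):
        """Random online augmentation of one training pair, per step (7.2)."""
        if torch.rand(1).item() < 0.5:   # horizontal flip (last dim is width)
            img1, img2, label = [torch.flip(t, dims=[-1]) for t in (img1, img2, label)]
        if torch.rand(1).item() < 0.5:   # vertical flip
            img1, img2, label = [torch.flip(t, dims=[-2]) for t in (img1, img2, label)]
        return img1, img2, label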
(7.4) Input each pair of normalized image data into the front ends of the two encoder branches of the Sim-CDM-Net network respectively, obtain the prediction output through forward propagation, and use the loss function L_loss of step (6.2) to calculate the loss value between each prediction output and its ground-truth label;
(7.5) During training, the learning rate follows an exponential decay strategy, defined as:

    lr = lr_init × b^epoch + lr_min

where lr_init = 1 × 10^−4 is the initial learning rate, b = 0.99 is the decay base, epoch is the iteration number of the current training, and the minimum learning rate is set to lr_min = 10^−10 to prevent the learning rate from becoming too small;
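Applied per epoch, the schedule can be sketched as below; the Adam optimizer and the train_one_epoch placeholder are assumptions of this sketch, since the patent specifies only a gradient descent algorithm.

    import torch

    def train(model, train_one_epoch, num_epochs):
        """Training loop with the exponential learning-rate decay of step (7.5)."""
        lr_init, b, lr_min = 1e-4, 0.99, 1e-10
        optimizer = torch.optim.Adam(model.parameters(), lr=lr_init)  # optimizer assumed
        for epoch in range(num_epochs):
            lr = lr_init * (b ** epoch) + lr_min     # lr = lr_init x b^epoch + lr_min
            for group in optimizer.param_groups:
                group['lr'] = lr
            train_one_epoch(model, optimizer)        # forward, L_loss, backward, step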
(7.6) During back propagation, optimize all model parameters with a gradient descent algorithm, and iterate multiple times until the loss function L_loss converges; then stop training to obtain the trained remote sensing image change detection network Sim-CDM-Net based on twin multi-scale difference feature fusion.
Part three: perform change detection with the trained change detection model.
Step 8. Perform change detection using the trained remote sensing image change detection network Sim-CDM-Net based on twin multi-scale difference feature fusion.
(8.1) Normalize the pixel values of a pair of remote sensing images from the test set data acquired in step (7.1) to the [0, 1] interval, and input them respectively into the front ends of the two encoder branches of the trained Sim-CDM-Net network;
then propagate the normalized test set data forward to obtain the single-channel feature map output by the Sim-CDM-Net network through the trained encoder and decoder;
finally, convert the single-channel feature map into a change detection output prediction probability map with values in the [0, 1] interval through a Sigmoid activation function layer, the prediction probability map being expressed as follows:

    p_i = sig(x_i) = 1 / (1 + e^(−x_i))

where p_i is the value at the ith position on the output prediction probability map, sig(·) is the sigmoid function, e^(·) denotes exponentiation with the natural constant e as base, and x_i is the value at the ith position on the single-channel feature map;
(8.2) Perform binarization processing on the output prediction probability map obtained in step (8.1) to obtain the pixel-wise binary classification change detection output result map, whose pixel values are 0 or 1:

    O_i = 1 if p_i > 0.5, otherwise O_i = 0

where i is the ith position on the output prediction probability map and the output result map, p_i is the value at the ith position on the output prediction probability map, and O_i is the value at the ith position on the output result map.
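Steps (8.1)-(8.2) amount to the short inference routine sketched below; the model(x1, x2) call signature and the normalize helper from the training sketch are assumptions of this sketch.

    import torch

    @torch.no_grad()
    def detect_changes(model, img1, img2):
        """Normalize, forward-propagate, apply sigmoid, binarize (sketch)."""
        model.eval()
        x1 = normalize(img1).unsqueeze(0)       # add batch dimension
        x2 = normalize(img2).unsqueeze(0)
        logits = model(x1, x2)                  # single-channel feature map
        prob = torch.sigmoid(logits)            # prediction probability map in [0, 1]
        return (prob > 0.5).to(torch.uint8)     # binary change map, values 0 or 1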
The effects of the present invention are further illustrated by the following simulation experiments.
1. Simulation conditions:
The hardware platform of the simulation experiments is: Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz × 48, 128 GB of memory, and a GeForce RTX 2080Ti GPU with 11 GB of video memory.
The software platform of the simulation experiments is: the Ubuntu 16.04 operating system, Python 3.6 and the PyTorch deep learning framework.
The data sets used in the simulation experiments are as follows:
(1) CDD data set: a data set for remote sensing image change detection research published by Lebedev M, Vizilter Y V, Vygolov O et al. in the paper "Change detection in remote sensing images using conditional adversarial networks [J]. International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, 2018, 42(2)". This high-resolution RGB optical remote sensing image data set comprises 10000 pairs of training data, 3000 pairs of validation data and 3000 pairs of test data; the data are 3-channel images of size 256 × 256, with spatial resolution varying from 0.03 m to 1 m.
(2) BCDD data set: a data set for building change detection research of remote sensing images published in the paper "Fully convolutional networks for multisource building extraction from an open aerial and satellite imagery data set [J]. IEEE Transactions on Geoscience and Remote Sensing, 2018, 57(1): 574-586". It comprises a pair of original remote sensing images of 15343 × 32507 pixels, which are cut by a non-overlapping sliding window with step size 256 and randomly divided into 6096 pairs of training data and 1524 pairs of validation data; the data are 3-channel RGB images of size 256 × 256 with a spatial resolution of 0.3 m.
2. Simulation contents and analysis of results:
Simulation 1: change detection is performed on the CDD data set using the method of the present invention and the existing SNUNet-CD method respectively, giving the change detection results shown in FIG. 6, where FIG. 6(a) and FIG. 6(b) show a pair of two-time-phase remote sensing images, FIG. 6(c) shows the ground-truth label, FIG. 6(d) shows the change detection result of the existing SNUNet-CD method, and FIG. 6(e) shows the change detection result of the method of the present invention.
The existing SNUNet-CD method is the remote sensing image change detection method proposed by Fang S et al. in "SNUNet-CD: A Densely Connected Siamese Network for Change Detection of VHR Images. IEEE Geoscience and Remote Sensing Letters PP.99 (2021): 1-5".
As can be seen from FIG. 6, the present invention is better than the existing SNUNet-CD method in the details of the change detection segmentation: compared with the existing SNUNet-CD method, the change detection results of the invention are finer on edges, contours and narrow gaps, and more complete on fine objects such as roads and vehicles.
The CDD data set is tested with the trained model, and the overall classification accuracy OA, recall Rc, precision Pr and F1 indices are calculated for the change detection results:

    OA = (TP + TN) / (TP + TN + FP + FN)
    Rc = TP / (TP + FN)
    Pr = TP / (TP + FP)
    F1 = 2 × Pr × Rc / (Pr + Rc)

where TP, TN, FP and FN are the numbers of true positive, true negative, false positive and false negative pixels, respectively.
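These four indices follow directly from the pixel-level confusion counts, as in the sketch below; the function name is an assumption of this sketch, and degenerate cases (e.g., no positive pixels) are not handled.

    def change_metrics(pred, label):
        """OA, Rc, Pr and F1 for binary change maps of the same shape (sketch)."""
        tp = ((pred == 1) & (label == 1)).sum().item()  # true positives
        tn = ((pred == 0) & (label == 0)).sum().item()  # true negatives
        fp = ((pred == 1) & (label == 0)).sum().item()  # false positives
        fn = ((pred == 0) & (label == 1)).sum().item()  # false negatives
        oa = (tp + tn) / (tp + tn + fp + fn)
        rc = tp / (tp + fn)
        pr = tp / (tp + fp)
        f1 = 2 * pr * rc / (pr + rc)
        return oa, rc, pr, f1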
the 4 numerical indices of the present invention were compared to the prior SNUNet-CD method, as shown in Table 1.
TABLE 1 numerical index of the present invention and the existing SNUNet-CD on CDD data set
(The table is rendered as an image in the original document.)
As can be seen from Table 1, on the CDD data set the overall classification accuracy OA of the invention is 99.3%, the recall Rc is 97.1%, the precision Pr is 97.0% and the F1 index is 97.0%. All 4 indices are higher than those of the existing SNUNet-CD method, demonstrating that the invention achieves higher remote sensing image change detection accuracy.
Simulation 2: change detection is performed on the BCDD data set using the method of the present invention and the existing DASNet method respectively, giving the change detection results shown in FIG. 7, where FIG. 7(a) and FIG. 7(b) show a pair of two-time-phase remote sensing images, FIG. 7(c) shows the ground-truth label, FIG. 7(d) shows the change detection result of the existing DASNet method, and FIG. 7(e) shows the change detection result of the method of the present invention.
The existing DASNet method is the remote sensing image change detection method proposed by Chen J, Yuan Z, Peng J et al. in "DASNet: Dual attentive fully convolutional siamese networks for change detection in high-resolution satellite images [J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020, PP(99)".
As can be seen from FIG. 7, the present invention is better than the existing DASNet method at the edges of the building change detection segmentation: compared with the existing DASNet method, the method of the invention not only identifies the changed regions of buildings well but also resists interference from pseudo-changes well.
The BCDD data set is tested with the trained model, the overall classification accuracy OA, recall Rc, precision Pr and F1 indices are calculated for the change detection results, and they are compared with the 4 numerical indices of the existing DASNet method, as shown in Table 2.
TABLE 2 numerical index of the present invention and existing DASNet method on BCDD dataset
(The table is rendered as an image in the original document.)
As can be seen from Table 2, on the BCDD data set the overall classification accuracy OA of the invention is 99.5%, the recall Rc is 93.6%, the precision Pr is 95.1% and the F1 index is 94.3%. All 4 indices are higher than those of the existing DASNet method, demonstrating that the invention achieves higher remote sensing image change detection accuracy.
The above simulation experiments show that the remote sensing image change detection performance of the invention exceeds that of other advanced methods on multiple indices.

Claims (7)

1. A remote sensing image change detection method based on twin multi-scale difference feature fusion is characterized by comprising the following steps:
(1) constructing a remote sensing image change detection network Sim-CDM-Net based on twin multi-scale difference feature fusion:
(1a) constructing an encoder with a twin structure: removing the global pooling layer and the fully connected layer at the back end of a ResNet-34 network to obtain a feature extraction network, building two identical feature extraction networks into an encoder with a twin structure, and sharing the weight parameters of the two branch feature extraction networks;
(1b) introducing a dual attention mechanism module DAM formed by connecting a channel attention module CAM and a spatial attention module SAM in parallel;
(1c) constructing a twin residual multi-core pooling module SRMP consisting of a feature difference calculation unit and 4 parallel pooling layers with kernels of different sizes;
(1d) building a feature difference module FDM formed by cascading a feature difference calculation unit and a residual block;
(1e) building a single-branch decoder consisting of a cascade of 4 decoding blocks with the same structure and an output module, wherein each decoding block is a cascade of a 1 × 1 convolution with stride 1, a 3 × 3 deconvolution with stride 2 and a 1 × 1 convolution with stride 1, and the input of each decoding block is the fusion of the output of the previous decoding block and the output of the feature difference module at the same level;
(1f) connecting the twin-structure encoder, the dual attention mechanism module DAM, the twin residual multi-core pooling module SRMP, the feature difference module FDM and the single-branch decoder to obtain the remote sensing image change detection network Sim-CDM-Net based on twin multi-scale difference feature fusion, and setting the loss function L_loss of the network as the sum of a Tversky loss function and a BCE loss function;
(2) training a remote sensing image change detection network Sim-CDM-Net based on twin multi-scale difference feature fusion:
(2a) downloading an optical remote sensing image change detection data set from a public data set webpage, and acquiring well divided training set data, verification set data and test set data;
(2b) for each pair of remote sensing image data in the training set, first performing an online augmentation operation and then normalizing the pixel values to the [0, 1] interval, while also performing pixel value normalization on the validation set data;
(2c) respectively inputting the normalized training data into the front ends of the two encoder branches of the Sim-CDM-Net network, obtaining prediction outputs through forward propagation, and using the loss function L_loss to calculate the loss value between each prediction output and its ground-truth label;
(2d) optimizing model parameters by using a gradient descent algorithm during back propagation, and obtaining a trained change detection model after multiple iterations until a loss function is converged;
(3) carrying out change detection by utilizing a trained remote sensing image change detection network Sim-CDM-Net based on twin multiscale difference feature fusion:
(3a) normalizing the pixels of a pair of remote sensing images from the test set to the [0, 1] interval, respectively inputting them into the front ends of the two encoder branches of the trained Sim-CDM-Net network, outputting a single-channel feature map through the trained encoder and decoder by forward propagation, and converting the single-channel feature map into a change detection output prediction probability map with values in [0, 1] through a Sigmoid activation function;
(3b) performing binarization processing on the probability map obtained in step (3a) to obtain a pixel-wise binary classification change detection output result map whose pixel values are 0 or 1.
2. The method according to claim 1, wherein in (1f) the twin-structure encoder, the dual attention mechanism module DAM, the twin residual multi-core pooling module SRMP, the feature difference module FDM and the single-branch decoder are interconnected as follows:
the back end of each branch of the twin-structure encoder constructed in (1a) is cascaded with a dual attention mechanism module DAM constructed in (1b), the output feature map of each encoder branch being the input of its DAM;
the two parallel dual attention mechanism modules DAM of (1b) are cascaded with the twin residual multi-core pooling module SRMP constructed in (1c), the high-dimensional output feature maps of the two branches' DAMs being channel-stacked and fused, and the fused high-dimensional feature map being the input of the SRMP;
the twin residual multi-core pooling module SRMP of (1c) is cascaded with the single-branch decoder constructed in (1e), the output feature map of the SRMP being the initial input of the single-branch decoder;
the feature difference module FDM constructed in (1d) is bridged between the two branches of the encoder of (1a) and the decoder of (1e), the low-dimensional output feature maps of the encoding blocks at each level of the two encoder branches being channel-stacked and fused, and the fused low-dimensional feature maps being the input of the decoding blocks at the same level of the decoder.
3. The method according to claim 1, wherein the loss function $L_{loss}$ of the Sim-CDM-Net network set in (1f) is expressed as follows:
$$L_{loss} = L_T + L_{BCE}$$

$$L_{BCE} = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i \log p_i + (1 - y_i)\log(1 - p_i)\right]$$

$$L_T = 1 - \frac{|A \cap B|}{|A \cap B| + \alpha|A - B| + \beta|B - A|}$$
where $L_{BCE}$ represents the BCE loss function, i.e. the binary cross-entropy loss function, $N$ is the number of samples, $y_i$ represents the true label value of sample $i$, and $p_i$ represents the predicted probability value of the model output; $L_T$ represents the Tversky loss function, where $A$ is the predicted output, $B$ is the true label, $|A - B|$ represents the false positives, $|B - A|$ represents the false negatives, and $\alpha$ and $\beta$ are hyper-parameters controlling the weight of the false positives and the false negatives, respectively.
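A short sketch of this combined loss, assuming soft (probabilistic) counts for $|A \cap B|$, $|A - B|$ and $|B - A|$; the default $\alpha$, $\beta$ values and the smoothing constant are placeholders, not values from the patent:

```python
# Sketch of L_loss = L_T + L_BCE from claim 3; alpha/beta defaults and eps
# are assumptions.
import torch
import torch.nn.functional as F

def tversky_bce_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-6):
    """pred: predicted probabilities in [0, 1]; target: 0/1 ground truth."""
    bce = F.binary_cross_entropy(pred, target)      # L_BCE term
    tp = (pred * target).sum()                      # soft |A ∩ B|
    fp = (pred * (1.0 - target)).sum()              # soft |A - B| (false positives)
    fn = ((1.0 - pred) * target).sum()              # soft |B - A| (false negatives)
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1.0 - tversky_index) + bce              # L_T + L_BCE
```

Raising $\alpha$ relative to $\beta$ penalizes false positives more heavily, and vice versa, which matches the roles the claim assigns to the two hyper-parameters.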
4. The method of claim 1, wherein in (2b) the on-line augmentation operation is performed on each pair of remote sensing image data in the training set by randomly applying one or more of horizontal flipping, vertical flipping, diagonal flipping, image shifting, and size scaling to each pair of remote sensing image data before it is input into the network, so as to obtain a plurality of pairs of augmented image data.
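A minimal sketch of such paired on-line augmentation, covering only the three flips for brevity (image shifting and size scaling are omitted); the inputs are assumed to be torch tensors of shape (C, H, W), and the key point is that each random transform is applied identically to both temporal images and to the change label:

```python
# Hypothetical paired augmentation for claim 4 (flips only).
import random

def augment_pair(img1, img2, label):
    """img1/img2/label: torch tensors of shape (C, H, W); square patches assumed
    so that the diagonal flip (H/W transpose) preserves the shape."""
    if random.random() < 0.5:                       # horizontal flip
        img1, img2, label = img1.flip(-1), img2.flip(-1), label.flip(-1)
    if random.random() < 0.5:                       # vertical flip
        img1, img2, label = img1.flip(-2), img2.flip(-2), label.flip(-2)
    if random.random() < 0.5:                       # diagonal flip
        img1, img2, label = (img1.transpose(-1, -2),
                             img2.transpose(-1, -2),
                             label.transpose(-1, -2))
    return img1, img2, label
```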
5. The method of claim 1, wherein in (2b) each pair of remote sensing image data in the training set and the verification set is subjected to pixel value normalization according to the following formula:
$$R = \frac{I}{255}$$
wherein, R represents the optical remote sensing image after normalization processing, and I represents the optical remote sensing image before normalization processing.
6. The method according to claim 1, wherein the Sigmoid activation function in (3a) is defined as follows:
$$sig(x) = \frac{1}{1 + e^{-x}}$$
where $sig(\cdot)$ denotes the sigmoid function, $e^{(\cdot)}$ denotes the exponential operation with the natural constant $e$ as its base, and $x$ denotes the input of the sigmoid function.
7. The method according to claim 1, wherein the change detection output prediction probability map obtained in (3a) is subjected to binarization processing according to the following formula:
$$O_i = \begin{cases} 1, & p_i \geq 0.5 \\ 0, & p_i < 0.5 \end{cases}$$
where $p_i$ represents a pixel value of the prediction probability map output by the model, and $O_i$ represents the corresponding pixel value of the binary classification change detection output result map.
CN202110698130.7A 2021-06-23 2021-06-23 Remote sensing image change detection method based on twin multi-scale difference feature fusion Active CN113420662B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110698130.7A CN113420662B (en) 2021-06-23 2021-06-23 Remote sensing image change detection method based on twin multi-scale difference feature fusion

Publications (2)

Publication Number Publication Date
CN113420662A (en) 2021-09-21
CN113420662B (en) 2023-04-07

Family

ID=77717542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110698130.7A Active CN113420662B (en) 2021-06-23 2021-06-23 Remote sensing image change detection method based on twin multi-scale difference feature fusion

Country Status (1)

Country Link
CN (1) CN113420662B (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837931A (en) * 2021-09-27 2021-12-24 海南长光卫星信息技术有限公司 Method and device for detecting transformation of remote sensing image, electronic equipment and storage medium
CN114022793A (en) * 2021-10-28 2022-02-08 天津大学 Optical remote sensing image change detection method based on twin network
CN114078230A (en) * 2021-11-19 2022-02-22 西南交通大学 Small target detection method for self-adaptive feature fusion redundancy optimization
CN114283120A (en) * 2021-12-01 2022-04-05 武汉大学 End-to-end multi-source heterogeneous remote sensing image change detection method based on domain self-adaptation
CN114387439A (en) * 2022-01-13 2022-04-22 中国电子科技集团公司第五十四研究所 Semantic segmentation network based on fusion of optical and PolSAR (polar synthetic Aperture Radar) features
CN114419464A (en) * 2022-03-29 2022-04-29 南湖实验室 Twin network change detection model based on deep learning
CN114529457A (en) * 2022-02-22 2022-05-24 贝壳找房网(北京)信息技术有限公司 Image processing method, electronic device, and storage medium
CN114913434A (en) * 2022-06-02 2022-08-16 大连理工大学 High-resolution remote sensing image change detection method based on global relationship reasoning
CN114926512A (en) * 2022-05-31 2022-08-19 武汉大学 Twin convolution network remote sensing change detection method based on fitting exclusive or function
CN115018754A (en) * 2022-01-20 2022-09-06 湖北理工学院 Novel performance of depth twin network improved deformation profile model
CN115147760A (en) * 2022-06-27 2022-10-04 武汉大学 High-resolution remote sensing image change detection method based on video understanding and space-time decoupling
CN115331087A (en) * 2022-10-11 2022-11-11 水利部交通运输部国家能源局南京水利科学研究院 Remote sensing image change detection method and system fusing regional semantics and pixel characteristics
CN115457390A (en) * 2022-09-13 2022-12-09 中国人民解放军国防科技大学 Remote sensing image change detection method and device, computer equipment and storage medium
CN115526886A (en) * 2022-10-26 2022-12-27 中国铁路设计集团有限公司 Optical satellite image pixel level change detection method based on multi-scale feature fusion
CN115761478A (en) * 2022-10-17 2023-03-07 苏州大学 Building extraction model lightweight method based on SAR image in cross-mode
CN116012364A (en) * 2023-01-28 2023-04-25 北京建筑大学 SAR image change detection method and device
CN116091492A (en) * 2023-04-06 2023-05-09 中国科学技术大学 Image change pixel level detection method and system
CN116188701A (en) * 2023-04-27 2023-05-30 四川大学 Three-dimensional face reconstruction method and device based on speckle structured light
CN116310863A (en) * 2023-02-18 2023-06-23 广东技术师范大学 Multi-scale differential feature enhanced remote sensing image change detection method and device
CN116310851A (en) * 2023-05-26 2023-06-23 中国科学院空天信息创新研究院 Remote sensing image change detection method
CN116343052A (en) * 2023-05-30 2023-06-27 华东交通大学 Attention and multiscale-based dual-temporal remote sensing image change detection network
CN116385881A (en) * 2023-04-10 2023-07-04 北京卫星信息工程研究所 Remote sensing image ground feature change detection method and device
CN116503677A (en) * 2023-06-28 2023-07-28 武汉大学 Wetland classification information extraction method, system, electronic equipment and storage medium
CN116612076A (en) * 2023-04-28 2023-08-18 成都瑞贝英特信息技术有限公司 Cabin micro scratch detection method based on combined twin neural network
CN117173579A (en) * 2023-11-02 2023-12-05 山东科技大学 Image change detection method based on fusion of inherent features and multistage features
CN117576567A (en) * 2023-12-01 2024-02-20 石家庄铁道大学 Remote sensing image change detection method using multi-level difference characteristic self-adaptive fusion
CN117671437A (en) * 2023-10-19 2024-03-08 中国矿业大学(北京) Open stope identification and change detection method based on multitasking convolutional neural network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200026953A1 (en) * 2018-07-23 2020-01-23 Wuhan University Method and system of extraction of impervious surface of remote sensing image
CN110533631A (en) * 2019-07-15 2019-12-03 西安电子科技大学 SAR image change detection based on the twin network of pyramid pondization
CN111127493A (en) * 2019-11-12 2020-05-08 中国矿业大学 Remote sensing image semantic segmentation method based on attention multi-scale feature fusion
CN111640159A (en) * 2020-05-11 2020-09-08 武汉大学 Remote sensing image change detection method based on twin convolutional neural network
CN112668494A (en) * 2020-12-31 2021-04-16 西安电子科技大学 Small sample change detection method based on multi-scale feature extraction
CN112990112A (en) * 2021-04-20 2021-06-18 湖南大学 Edge-guided cyclic convolution neural network building change detection method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shreya Sharma et al., "Small Object Change Detection Based on Multitask Siamese Network," IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium *
Xiang Yang et al., "Change detection in mining areas of remote sensing images based on an improved UNet Siamese network," Journal of China Coal Society (《煤炭学报》) *

Also Published As

Publication number Publication date
CN113420662B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN113420662B (en) Remote sensing image change detection method based on twin multi-scale difference feature fusion
Zhang et al. Remote sensing image spatiotemporal fusion using a generative adversarial network
CN112668494A (en) Small sample change detection method based on multi-scale feature extraction
Han et al. Remote sensing image building detection method based on Mask R-CNN
CN116524361A (en) Remote sensing image change detection network and detection method based on double twin branches
CN111814607A (en) Deep learning model suitable for small sample hyperspectral image classification
CN113313180B (en) Remote sensing image semantic segmentation method based on deep confrontation learning
CN116580241B (en) Image processing method and system based on double-branch multi-scale semantic segmentation network
CN116168295B (en) Lithology remote sensing intelligent interpretation model establishment method and interpretation method
CN110991430A (en) Ground feature identification and coverage rate calculation method and system based on remote sensing image
CN115937697A (en) Remote sensing image change detection method
CN112766223A (en) Hyperspectral image target detection method based on sample mining and background reconstruction
Zhang et al. Dense haze removal based on dynamic collaborative inference learning for remote sensing images
CN111368843A (en) Method for extracting lake on ice based on semantic segmentation
Jiang et al. An Improved Semantic Segmentation Method for Remote Sensing Images Based on Neural Network.
CN110751699B (en) Color reconstruction method of optical remote sensing image based on convolutional neural network
CN116863347A (en) High-efficiency and high-precision remote sensing image semantic segmentation method and application
CN114511787A (en) Neural network-based remote sensing image ground feature information generation method and system
He et al. Bayesian temporal tensor factorization-based interpolation for time-series remote sensing data with large-area missing observations
CN115588138A (en) Semantic segmentation method for landslide detection by using medium-resolution multi-source remote sensing data
CN116958800A (en) Remote sensing image change detection method based on hierarchical attention residual unet++
CN115909077A (en) Hyperspectral image change detection method based on unsupervised spectrum unmixing neural network
CN115393717A (en) SAR image house extraction method and system based on evolution hybrid attention mechanism
CN114048823A (en) Resistivity inversion model establishment method based on full convolution network
CN115147727A (en) Method and system for extracting impervious surface of remote sensing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant