CN117933309A - Three-path neural network and method for detecting change of double-phase remote sensing image - Google Patents

Three-path neural network and method for detecting change of double-phase remote sensing image

Info

Publication number: CN117933309A; granted publication: CN117933309B
Application number: CN202410284742.5A
Authority: CN (China); original language: Chinese (zh)
Legal status: granted, active (Google's legal-status annotation is an assumption, not a legal conclusion)
Inventors: 吕志勇, 钟平东, 李军怀, 宁小娟
Assignee (original and current): Xi'an University of Technology


Classifications

    • G06N 3/045 Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/048 Activation functions
    • G06N 3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06V 10/764 Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level, of extracted features
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • G06V 20/10 Scenes; scene-specific elements; terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-path neural network and a method for detecting changes in double-phase (bi-temporal) remote sensing images. The three-path neural network comprises a first, a second and a third path neural network of identical structure; the output channels of the three path neural networks are fused, and the fused output is connected to a feature fusion module. Each path neural network is composed of three parallel branch neural networks, and each branch consists of two first combination modules and two second combination modules connected in sequence: a first combination module is two fourth convolution layers followed by one max pooling layer, and a second combination module is three fourth convolution layers followed by one max pooling layer. With this three-path neural network, the accuracy of change detection in high-resolution remote sensing images is improved.

Description

Three-path neural network and method for detecting change of double-phase remote sensing image
Technical Field
The invention belongs to the technical field of remote sensing image classification, and particularly relates to a three-path neural network and a method for detecting changes in double-phase remote sensing images.
Background
With the rapid development of satellite and aerial remote sensing technology, the temporal and spatial resolution of images has improved greatly, so remote sensing and aerial images can be analysed and processed by intelligent means and ground information can be acquired quickly and effectively. Detecting land-cover change with double-phase remote sensing images is important for monitoring geological disasters, assessing ecosystem health, assisting urban planning and development, capturing large-scale vegetation change and managing land use. Higher image resolution strengthens the ability to acquire ground-object information and provides richer ground-object detail, enabling more accurate judgments. In recent years, the rapid development of deep learning has offered new ideas and corresponding techniques for change detection in high-resolution remote sensing images. However, although remote sensing images contain rich features, the accuracy of change detection during feature extraction remains unsatisfactory and the error rate is high.
Disclosure of Invention
The invention aims to provide a three-path neural network and a method for detecting changes in double-phase remote sensing images, improving the accuracy of remote sensing image change detection.
The invention adopts the following technical scheme. The disclosed three-path neural network for double-phase remote sensing image change detection consists of a first, a second and a third path neural network of identical structure; the output channels of the three path neural networks are fused, and the fused output is connected to a feature fusion module.
Each of the first, second and third path neural networks is composed of three parallel branch neural networks. Each branch consists of two first combination modules and two second combination modules connected in sequence: a first combination module is two fourth convolution layers followed by one max pooling layer, and a second combination module is three fourth convolution layers followed by one max pooling layer.
Further, the three branch neural networks are respectively a first, a second and a third branch neural network, whose convolution kernel sizes differ.
Further, the output channels of the first branch neural networks of the three paths are fused; the output channels of the second branch neural networks of the three paths are fused; and the output channels of the third branch neural networks of the three paths are fused.
Further, the loss function of the three-path neural network is a mixed loss function combining the mean square error and the cross-entropy loss function, as follows:

L = λ1·L_MSE + λ2·L_BCE,
L_MSE = (1/N) · Σ_{i=1}^{N} (σ(ŷ_i) − y_i)²,
L_BCE = −(1/N) · Σ_{i=1}^{N} [ y_i·log σ(ŷ_i) + (1 − y_i)·log(1 − σ(ŷ_i)) ],

wherein:
λ1 and λ2 are hyperparameters;
L_MSE is the mean square error;
L_BCE is the cross-entropy loss function;
N is the total number of pixels of the image to be processed;
ŷ_i is the predicted label of the i-th pixel;
y_i is the reference true value, 1 or 0;
σ is the Sigmoid function.
Further, the feature fusion module is composed of a first module group, a second module group and a third module group which are sequentially connected, wherein:
The first module group consists of three parallel first convolution layers, and a first normalization function module and a first activation function module are sequentially connected behind each first convolution layer;
the second module group consists of two parallel second convolution layers, and a second normalization function module and a second activation function module are sequentially connected behind each second convolution layer;
The third module group is a third convolution layer, and a third normalization function module and a third activation function module are sequentially connected behind the third convolution layer.
Further, a differential module is connected before the second path neural network.
Further, a first noise reduction module is connected before the first path neural network, and a third noise reduction module is connected before the third path neural network.
Further, the first, second and third activation function modules all employ the ReLU activation function.
The invention also discloses a method for detecting the change of the double-phase remote sensing image, which is based on the three-path neural network for detecting the change of the double-phase remote sensing image and comprises the following steps:
Acquiring a pre-event image, a differential image and a post-event image;
inputting the pre-event image into the first path neural network, the differential image into the second path neural network, and the post-event image into the third path neural network;
Determining first characteristic information, second characteristic information and third characteristic information based on channel fusion of the first path neural network, the second path neural network and the third path neural network; wherein: the first characteristic information is fusion characteristic information of ground features of the pre-event image, the differential image and the post-event image under a first scale; the second characteristic information is fusion characteristic information of the ground feature of the pre-event image, the differential image and the post-event image under a second scale; the third characteristic information is fusion characteristic information of the ground feature of the pre-event image, the differential image and the post-event image under a third scale;
inputting the first, second and third feature information into the feature fusion module; and determining, based on the feature fusion module, the changes of the post-event image relative to the pre-event image, obtaining a binary change detection map.
Further, the differential image is obtained by inputting the pre-event image and the post-event image into the differential module in parallel and processing them with the differential module.
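The claimed steps can be sketched end to end with hypothetical stand-ins; every function below is a shape-preserving placeholder of ours, not the patent's implementation (the actual modules are described in the embodiments):

```python
import numpy as np

# Hypothetical stand-ins for the patented modules; each placeholder only
# preserves array shapes so the claimed data flow can be traced end to end.
def denoise(img):                            # noise reduction module (stub)
    return img

def difference(pre, post):                   # differential module (stub)
    return np.abs(pre - post)

def path_features(img):                      # one path: features at 3 kernel scales (stub)
    return [img, img, img]

def fuse_and_decode(f_pre, f_dif, f_post):   # per-scale channel fusion (stub)
    return [np.concatenate([a, b, c], axis=-1)
            for a, b, c in zip(f_pre, f_dif, f_post)]

def feature_fusion(maps):                    # feature fusion module (stub)
    return np.mean(np.stack(maps), axis=(0, -1))

def change_map(pre, post):
    pre_d, post_d = denoise(pre), denoise(post)
    dif = difference(pre_d, post_d)
    fused = fuse_and_decode(path_features(pre_d),
                            path_features(dif),
                            path_features(post_d))
    score = feature_fusion(fused)
    return score > score.mean()              # placeholder for Otsu threshold segmentation
```

Calling change_map on two (H, W, C) arrays returns an (H, W) boolean change map.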
The beneficial effects of the invention are as follows:
1. The three-path neural network effectively extracts the difference features of the ground objects of the two-phase images at the same position, i.e. the changed region; the difference features are fused with the features of the pre-event and post-event images to strengthen the changed region, improving the accuracy of change detection.
2. In each path of the three-path neural network, different convolution kernels are used to extract ground-object information, so the shapes and sizes of different ground objects are taken into account and their scale information is effectively extracted, avoiding missed detections and giving high accuracy.
3. The noise reduction modules effectively reduce the influence of noise and the false detections it causes during detection.
Drawings
FIG. 1 is a block diagram of a three-way neural network;
FIG. 2 is a schematic diagram of data in an embodiment;
FIG. 3 is a comparative graph of different test methods;
Wherein: 1. a first noise reduction module; 2. a differential module; 3. a third noise reduction module; 4. and a feature fusion module.
Detailed Description
The invention will be described in detail below with reference to the drawings and the detailed description.
The invention discloses a three-path neural network for double-phase remote sensing image change detection, composed of a first, a second and a third path neural network of identical structure; the output channels of the three path neural networks are fused, and the fused output is connected to the feature fusion module 4.
Each of the first, second and third path neural networks is composed of three parallel branch neural networks. Each branch consists of two first combination modules and two second combination modules connected in sequence: a first combination module is two fourth convolution layers followed by one max pooling layer, and a second combination module is three fourth convolution layers followed by one max pooling layer.
The three branch neural networks are respectively a first, a second and a third branch neural network;
the convolution kernel sizes of the first, second and third branch neural networks differ.
The output channels of the first branch neural networks of the three paths are fused; the output channels of the second branch neural networks of the three paths are fused; and the output channels of the third branch neural networks of the three paths are fused.
The differential features extracted by the second path neural network, i.e. the changed region, are fused with the features output by the first path neural network and the features output by the third path neural network to strengthen the changed region, thereby improving the accuracy of change detection.
Specifically: the first branch neural network consists of ten fourth convolution layers with 1×1 kernels and four max pooling layers with stride two; the second branch neural network consists of ten fourth convolution layers with 3×3 kernels and four max pooling layers with stride two; the third branch neural network consists of ten fourth convolution layers with 5×5 kernels and four max pooling layers with stride two. Using kernels of different sizes to extract ground-object information takes the shapes and sizes of different ground objects into account, effectively extracts their scale information, avoids missed detections and improves detection accuracy.
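As a consistency check on this layout (two first combination modules of two convolutions plus one pooling layer, then two second combination modules of three convolutions plus one pooling layer), the layer sequence and resulting spatial sizes can be enumerated; the helper names are ours, and 'same' convolution padding is our assumption, since the patent does not state the padding:

```python
# One branch: 2 x (2 conv + pool) followed by 2 x (3 conv + pool).
def branch_layout():
    layers = []
    for _ in range(2):                       # two first combination modules
        layers += ["conv", "conv", "pool"]
    for _ in range(2):                       # two second combination modules
        layers += ["conv", "conv", "conv", "pool"]
    return layers

def spatial_size(h, w, layers, pool_stride=2):
    # 'same'-padded convolutions keep the size; each stride-2 max pool halves it.
    for layer in layers:
        if layer == "pool":
            h, w = h // pool_stride, w // pool_stride
    return h, w
```

branch_layout() contains ten convolution layers and four pooling layers, matching the description, and a 256×256 input leaves the branch at 16×16.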
The pre-event image features at the 1×1 kernel scale, the differential image features at the 1×1 kernel scale and the post-event image features at the 1×1 kernel scale are fused along the channel dimension and then upsampled, yielding the fused ground-object feature information at the first scale, i.e. the first feature information. Likewise, the 3×3-scale features of the pre-event image, the differential image and the post-event image are channel-fused and upsampled to obtain the fused feature information at the second scale, i.e. the second feature information; and the 5×5-scale features of the three images are channel-fused and upsampled to obtain the fused feature information at the third scale, i.e. the third feature information.
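The per-scale channel fusion and upsampling can be sketched in NumPy; the (C, H, W) layout and nearest-neighbour interpolation are our assumptions, as the patent does not specify the upsampling method:

```python
import numpy as np

def channel_fuse(f_pre, f_dif, f_post):
    # Concatenate three (C, H, W) feature maps along the channel axis.
    return np.concatenate([f_pre, f_dif, f_post], axis=0)

def upsample2x(x):
    # Nearest-neighbour 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)
```

Fusing three C-channel maps gives a 3C-channel map at the same resolution, which is then decoded back toward the input resolution.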
Using different convolution kernels in the three branch neural networks allows ground objects at different scales to be taken into account. In remote sensing images, ground objects often differ in size. With a single scale, i.e. one scale of feature extraction applied to ground objects of all sizes, small ground objects cannot be effectively identified and extracted, and the extracted features cannot interpret the surface information well. With different convolution kernels, ground objects of different shapes and sizes are fully considered, which improves change detection accuracy.
The first feature information, the second feature information and the third feature information are input to the feature fusion module 4.
A differential module 2 is connected before the second path neural network. A first noise reduction module 1 is connected before the first path neural network, and a third noise reduction module 3 before the third path neural network. The noise reduction modules effectively reduce the influence of noise and the false detections it causes during detection.
The loss function of the three-path neural network is a mixed loss function combining the mean square error and the cross-entropy loss function, as follows:

L = λ1·L_MSE + λ2·L_BCE,
L_MSE = (1/N) · Σ_{i=1}^{N} (σ(ŷ_i) − y_i)²,
L_BCE = −(1/N) · Σ_{i=1}^{N} [ y_i·log σ(ŷ_i) + (1 − y_i)·log(1 − σ(ŷ_i)) ],

wherein:
λ1 and λ2 are hyperparameters, with values 0.3 and 0.7 respectively;
L_MSE is the mean square error;
L_BCE is the cross-entropy loss function;
N is the total number of pixels of the image to be processed;
ŷ_i is the predicted label of the i-th pixel;
y_i is the reference true value, 1 or 0;
σ is the Sigmoid function, used to convert the neural network output into a probability value between 0 and 1.
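A NumPy sketch of this mixed loss follows, with λ1 = 0.3 and λ2 = 0.7 as defaults; applying the sigmoid inside the mean-square term is our assumption, since the patent only defines σ as mapping the output to a probability:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mixed_loss(logits, target, lam1=0.3, lam2=0.7):
    # logits: raw per-pixel network output; target: reference truth in {0, 1}
    p = sigmoid(logits)
    mse = np.mean((p - target) ** 2)             # mean square error term
    eps = 1e-12                                  # numerical safety for log
    bce = -np.mean(target * np.log(p + eps)
                   + (1.0 - target) * np.log(1.0 - p + eps))
    return lam1 * mse + lam2 * bce
```

A confident correct prediction gives a loss near zero; a confident wrong one gives a much larger loss.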
The feature fusion module comprises a first module group, a second module group and a third module group which are sequentially connected, wherein:
The first module group consists of three parallel first convolution layers with 3×3 kernels, each followed in sequence by a first normalization function module and a first activation function module;
the second module group consists of two parallel second convolution layers with 3×3 kernels, each followed in sequence by a second normalization function module and a second activation function module;
the third module group is a third convolution layer with a 3×3 kernel, followed in sequence by a third normalization function module and a third activation function module.
The first, second and third activation function modules all adopt the ReLU activation function.
The ReLU function avoids the vanishing-gradient problem and accelerates convergence of the neural network model.
The noise reduction module is as follows: a 3×3 convolution layer is followed by three parallel pooling layers, namely an adaptive average pooling layer, an adaptive max pooling layer and a max pooling layer; each pooling layer is followed by its own 3×3 sixth convolution layer, and each sixth convolution layer by a 3×3 seventh convolution layer; after the seventh convolution layers, the three branches are combined into two paths, each followed by a 3×3 eighth convolution layer; after the two eighth convolution layers, the paths are combined into one, which is followed by a 1×1 ninth convolution layer.
The invention also discloses a method for detecting the change of the double-phase remote sensing image, which is based on the three-path neural network for detecting the change of the double-phase remote sensing image and comprises the following steps:
Acquiring a pre-event image, a differential image and a post-event image;
inputting the pre-event image into the first path neural network, the differential image into the second path neural network, and the post-event image into the third path neural network;
Determining first characteristic information, second characteristic information and third characteristic information based on channel fusion of the first path neural network, the second path neural network and the third path neural network; wherein: the first characteristic information is fusion characteristic information of ground features of the pre-event image, the differential image and the post-event image under a first scale; the second characteristic information is fusion characteristic information of the ground feature of the pre-event image, the differential image and the post-event image under a second scale; the third characteristic information is fusion characteristic information of the ground feature of the pre-event image, the differential image and the post-event image under a third scale;
inputting the first, second and third feature information into the feature fusion module 4; and determining, based on the feature fusion module 4, the changes of the post-event image relative to the pre-event image, obtaining a binary change detection map.
The differential image is obtained as follows: the pre-event image and the post-event image are input into the differential module 2 in parallel and processed by the differential module 2.
Before the pre-event image is input into the first path neural network, it is input into the first noise reduction module 1; before the post-event image is input into the third path neural network, it is input into the third noise reduction module 3. Each noise reduction module outputs the noise-reduced pre-event or post-event image, and these noise-reduced images are used as the inputs.
Therefore, the per-scale ground-object feature information obtained from the pre-event and post-event images by the convolution and pooling processing is in fact that of the noise-reduced pre-event and post-event images.
Specifically, the pre-event image is input into the three parallel branch neural networks of the first path neural network, obtaining the ground-object feature information of the pre-event image at the three scales: F_pre1, F_pre3 and F_pre5.
The differential image of the pre-event and post-event images is input into the three parallel branch neural networks of the second path neural network, obtaining the deep features of the ground-object differences: F_dif1, F_dif3 and F_dif5.
The post-event image is input into the three parallel branch neural networks of the third path neural network, obtaining the ground-object feature information of the post-event image at the three scales: F_post1, F_post3 and F_post5.
F_pre1, F_dif1 and F_post1 are fused to obtain the fused feature F1; F_pre3, F_dif3 and F_post3 are fused to obtain F3; and F_pre5, F_dif5 and F_post5 are fused to obtain F5. F1, F3 and F5 are then upsampled in sequence to obtain the three corresponding feature maps U1, U3 and U5; that is, upsampling first decodes F1, then F3, and finally F5, after which the three feature maps are obtained.
U1, U3 and U5 are input into the feature fusion module, obtaining the fused feature information F_fuse.
The fused feature information F_fuse is segmented by the threshold of Otsu's method, and the change detection result, a binary change detection map, is output.
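Otsu's method picks the threshold that maximizes the between-class variance of the fused-feature value histogram; a minimal NumPy sketch follows (the bin count and tie handling are our choices):

```python
import numpy as np

def otsu_threshold(scores, nbins=256):
    # scores: array of fused-feature values scaled to [0, 1]
    hist, edges = np.histogram(scores, bins=nbins, range=(0.0, 1.0))
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # class-0 weight up to each bin
    m0 = np.cumsum(p * centers)          # class-0 unnormalized mean
    mt = m0[-1]                          # global mean
    best_t, best_var = centers[0], -1.0
    for i in range(nbins - 1):
        w1 = 1.0 - w0[i]
        if w0[i] == 0.0 or w1 == 0.0:
            continue
        mu0, mu1 = m0[i] / w0[i], (mt - m0[i]) / w1
        var = w0[i] * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var >= best_var:                   # ties resolved toward the higher bin
            best_var, best_t = var, centers[i]
    return best_t
```

Comparing scores > otsu_threshold(scores) then yields the binary change map.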
In the feature fusion module, the fusion proceeds as follows. The three upsampled feature maps U1, U3 and U5 are fused pairwise, i.e. U1 with U3, U3 with U5, and U1 with U5, giving the fused feature maps M13, M35 and M15. Each of M13, M35 and M15 is processed by a convolution with a 3×3 kernel, normalization and the ReLU activation function, giving G13, G35 and G15 respectively. Then G13 and G35 are concatenated and processed by a 3×3 convolution, normalization and ReLU, giving H1; G35 and G15 are concatenated and processed in the same way, giving H2. Finally, H1 and H2 are concatenated and processed by a 3×3 convolution, normalization and ReLU to obtain the fused feature information F_fuse. The feature fusion module thus supplements the information lost during feature extraction.
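The three-group wiring of the feature fusion module (pairwise fusion, then two parallel fusions, then a final fusion) can be traced with a shape-level NumPy sketch in which a normalize-and-rectify placeholder stands in for each Conv3×3 + normalization + ReLU unit; the placeholder and all names are ours:

```python
import numpy as np

def conv_bn_relu(x):
    # Stand-in for Conv3x3 + normalization + ReLU: normalize, then rectify.
    x = (x - x.mean()) / (x.std() + 1e-6)
    return np.maximum(x, 0.0)

def feature_fusion(u1, u3, u5):
    # First group: pairwise fusion of the three decoded maps (channel concat).
    g13 = conv_bn_relu(np.concatenate([u1, u3], axis=0))
    g35 = conv_bn_relu(np.concatenate([u3, u5], axis=0))
    g15 = conv_bn_relu(np.concatenate([u1, u5], axis=0))
    # Second group: two parallel fusions of the intermediate maps.
    h1 = conv_bn_relu(np.concatenate([g13, g35], axis=0))
    h2 = conv_bn_relu(np.concatenate([g35, g15], axis=0))
    # Third group: final fusion.
    return conv_bn_relu(np.concatenate([h1, h2], axis=0))
```

A real implementation's convolutions would reduce the concatenated channel counts; the placeholder lets them grow, which affects only shapes, not the wiring.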
The above-mentioned differential module consists of two-dimensional convolutions and a max pooling layer. First, the change between the pre-event image and the post-event image is measured pixel by pixel with the Euclidean distance and recorded as D_e; at the same time, the pre-event image and the post-event image are each max-pooled, the pooled features being recorded as P_pre and P_post respectively. Then D_e is fused with P_pre, the result being recorded as C_pre, and D_e is fused with P_post, the result being recorded as C_post. Finally, C_pre is processed by a two-dimensional convolution with a 3×3 kernel to give E_pre, C_post is processed by a two-dimensional convolution with a 3×3 kernel to give E_post, and E_pre and E_post are concatenated and processed by a 3×3 convolution to obtain the feature-difference features of the pre-event and post-event images. The differential module effectively enhances the changed and unchanged regions.
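The pixel-wise Euclidean distance and the max pooling steps of the differential module can be sketched in NumPy (the function names and the non-overlapping 2×2 pooling window are our assumptions):

```python
import numpy as np

def pixel_euclidean_distance(img_a, img_b):
    # Per-pixel Euclidean distance across the channels of two (H, W, C) images.
    d = img_a.astype(float) - img_b.astype(float)
    return np.sqrt((d ** 2).sum(axis=-1))

def max_pool2x2(img):
    # Non-overlapping 2x2 max pooling of an (H, W, C) image (H and W even).
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))
```

The distance map highlights pixels whose spectral values differ between the two dates, which is exactly the changed region the module is meant to enhance.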
To verify the method of the invention, the following test was performed. In this embodiment, the surface-coverage change caused by a landslide is used as the change detection example, as shown in fig. 2, where fig. 2(a) is the pre-event image, fig. 2(b) is the post-event image and fig. 2(c) is the binary change detection map. The specific process is as follows:
The landslide front image, namely the pre-event image, is input into the first noise reduction module 1, and the output of the first noise reduction module 1 is input into the first path of neural network.
The image before landslide and the image after landslide are input into the differential module 2 at the same time, the differential module 2 outputs the differential image of the image before landslide and the image after landslide, and the differential image is input into the second path neural network.
And inputting the landslide post-image, namely the post-event image, into the third noise reduction module 3, and inputting the output of the third noise reduction module 3 into a third neural network.
The ground-object feature information is extracted by the convolution kernels of each path neural network, yielding three pieces of feature information, which are input into the feature fusion module 4 to obtain the corresponding fused feature information.
The fused feature information is divided into changed pixels and unchanged pixels by a threshold determined with Otsu's method, and the binary change detection map is finally output by assigning different colors to the changed and unchanged pixels.
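The thresholding step (Otsu's method, rendered as the "Ojin method" in the machine translation) can be sketched as follows: the standard maximization of between-class variance over a histogram. The bin count is an illustrative choice, not a value given in the patent.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold that maximizes between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()          # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                            # class-0 probability up to bin k
    m = np.cumsum(p * centers)                   # cumulative weighted mean
    mt = m[-1]                                   # global mean
    best_t, best_var = centers[0], -1.0
    for k in range(len(p) - 1):
        w1 = 1.0 - w0[k]
        if w0[k] == 0.0 or w1 == 0.0:
            continue
        mu0 = m[k] / w0[k]
        mu1 = (mt - m[k]) / w1
        var = w0[k] * w1 * (mu0 - mu1) ** 2      # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t
```

Pixels whose fused response exceeds the returned threshold would be marked as changed, the rest as unchanged, and the two classes colored to produce the binary change detection map.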
As shown in fig. 3, a comparison of the experimental results of the different methods is provided, with the surface coverage change caused by the landslide as input:
fig. 3(a) is the result obtained by training and detection with the comparison method BIT_CD, fig. 3(b) with the comparison method DSIFN, and fig. 3(c) with the comparison method DSAMNet; fig. 3(d) is the change map obtained by the method of the present invention. To ensure the accuracy of the comparison, all networks were trained in the same way, with the proportion of training samples set to 50%.
The precision comparison of the different methods is shown in Table 1.

Table 1 Quantitative comparison of the accuracy of the methods

Method                   OA     AA     KA    FA    MA     TE    Precision
BIT_CD                   98.91  97.16  0.92  0.82  4.86   1.09  89.33
DSIFN                    99.38  96.90  0.95  0.23  5.97   0.61  96.70
DSAMNet                  97.61  87.94  0.80  0.89  23.22  2.39  86.04
Method of the invention  99.60  97.92  0.97  0.13  4.02   0.40  98.08
As shown in Table 1, the method provided by the invention achieves higher values than the selected comparison methods on the four indexes OA, AA, Kappa coefficient and Precision; these four are positive statistical indexes, i.e. the higher the value, the better the detection result. For the three negative indexes, for which lower values are better, the method reaches the lowest values of FA and TE, indicating that it makes fewer errors when distinguishing changed from unchanged areas. The experimental results show that the method can effectively detect surface coverage change, and its detection accuracy is superior to that of similar methods.
Because the differential image is obtained from the pre-event image and the post-event image through the differential module 2, the features extracted from it are the difference features of the images of the two periods before and after the event, i.e. the changed area. The differential image is input into the second path neural network, which extracts these difference features. After the features are fully extracted, the features extracted by the second path neural network are fused with the features output by the first path neural network and the third path neural network, so that the changed area is enhanced and the accuracy of change detection is improved.
Different convolution kernels are adopted in the three parallel branch neural networks, so that ground features at different scales can be taken into account. In remote sensing images the sizes of ground features often differ, and a single scale cannot interpret the surface information well; by adopting different convolution kernels, ground features of different shapes and sizes are fully considered, which improves the change detection accuracy.
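The multi-scale idea, one branch per kernel size with the branch outputs stacked along the channel axis, can be illustrated with a toy sketch. The kernel sizes (3, 5, 7) are illustrative assumptions (the patent only states that the sizes differ), and a mean filter stands in for a learned convolution.

```python
import numpy as np

def box_filter(img, k):
    """'Same'-padded k x k mean filter, a stand-in for a learned k x k convolution."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()  # window centered on (i, j)
    return out

def multiscale_features(img, kernel_sizes=(3, 5, 7)):
    """One response map per branch; stacking mirrors the channel fusion
    of the branch outputs described in the claims."""
    return np.stack([box_filter(img, k) for k in kernel_sizes])
```

A larger kernel averages over a wider neighborhood, so each stacked channel responds to structures of a different size, which is the effect the differing convolution kernels are meant to achieve.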
The calculation formulas of the precision evaluation indexes in the table are as follows:
Overall accuracy (Overall Accuracy, OA): OA = (TP + TN) / (TP + TN + FP + FN);
Average accuracy (Average Accuracy, AA): AA = (TP / (TP + FN) + TN / (TN + FP)) / 2;
Kappa coefficient (Kappa Coefficient, KA): KA = (OA − Pe) / (1 − Pe), where Pe = [(TP + FP)(TP + FN) + (TN + FN)(TN + FP)] / (TP + TN + FP + FN)²;
False alarm rate (False Alarm, FA): FA = FP / (FP + TN);
Missed alarm rate (Missed Alarm, MA): MA = FN / (TP + FN);
Total error rate (Total Error, TE): TE = (FP + FN) / (TP + TN + FP + FN);
Precision (Precision): Precision = TP / (TP + FP);
where TP denotes truly changed pixels, FP falsely changed pixels, TN truly unchanged pixels, and FN falsely unchanged pixels.
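The index formulas can be bundled into a small helper that computes all seven indexes from the confusion-matrix counts. Note that Table 1 reports most indexes as percentages, while this sketch returns fractions, and AA is taken here as the mean of the per-class accuracies.

```python
def change_metrics(tp, fp, tn, fn):
    """Standard change-detection accuracy indexes from binary confusion counts."""
    n = tp + fp + tn + fn
    oa = (tp + tn) / n                                   # overall accuracy
    aa = 0.5 * (tp / (tp + fn) + tn / (tn + fp))         # mean per-class accuracy
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n ** 2
    ka = (oa - pe) / (1 - pe)                            # Kappa: agreement beyond chance
    fa = fp / (fp + tn)                                  # false alarm rate
    ma = fn / (tp + fn)                                  # missed alarm rate
    te = (fp + fn) / n                                   # total error rate
    precision = tp / (tp + fp)
    return {"OA": oa, "AA": aa, "KA": ka, "FA": fa,
            "MA": ma, "TE": te, "Precision": precision}
```

For example, a detector with 40 true changed, 10 false changed, 40 true unchanged and 10 missed pixels yields OA = 0.8 and KA = 0.6.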

Claims (10)

1. A three-path neural network for dual-temporal remote sensing image change detection, characterized by comprising a first path neural network, a second path neural network and a third path neural network which are identical in structure, wherein the output channels of the first path neural network, the second path neural network and the third path neural network are fused, and the fused output is connected to a feature fusion module (4);
The first path neural network, the second path neural network and the third path neural network each consist of three parallel branch neural networks; each branch neural network consists of two first combination modules and two second combination modules connected in sequence; the first combination module consists of two fourth convolution layers and one maximum pooling layer connected in sequence, and the second combination module consists of three fourth convolution layers and one maximum pooling layer connected in sequence.
2. The three-path neural network for dual-temporal remote sensing image change detection according to claim 1, wherein the three branch neural networks are a first branch neural network, a second branch neural network and a third branch neural network, respectively;
the convolution kernel sizes of the first branch neural network, the second branch neural network and the third branch neural network are different from one another.
3. The three-path neural network for dual-temporal remote sensing image change detection according to claim 2, wherein the output channels of the three first branch neural networks are fused; the output channels of the three second branch neural networks are fused; and the output channels of the three third branch neural networks are fused.
4. The three-path neural network for dual-temporal remote sensing image change detection according to claim 3, wherein the loss function of the three-path neural network is a mixed loss function combining a mean square error and a cross entropy loss function, as follows:
L = α·L_MSE + β·L_CE, wherein:
α and β are hyperparameters;
L_MSE = (1/N)·Σᵢ (yᵢ − σ(ŷᵢ))² is the mean square error;
L_CE = −(1/N)·Σᵢ [yᵢ·log σ(ŷᵢ) + (1 − yᵢ)·log(1 − σ(ŷᵢ))] is the cross entropy loss function;
N is the total number of pixels of the image to be processed;
ŷᵢ is the predicted label of the i-th pixel;
yᵢ is the reference true value, taking the value 1 or 0;
σ is the Sigmoid function.
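A minimal sketch of such a mixed loss (a weighted sum of mean square error and sigmoid cross entropy over per-pixel predictions) is shown below. The weights default to 0.5 purely for illustration, since the claim leaves α and β as hyperparameters, and the exact form used in the patent may differ.

```python
import math

def mixed_loss(pred_logits, targets, alpha=0.5, beta=0.5):
    """alpha * MSE + beta * BCE over sigmoid-activated predictions.
    pred_logits: raw per-pixel scores; targets: reference labels in {0, 1}."""
    n = len(targets)
    sig = [1.0 / (1.0 + math.exp(-z)) for z in pred_logits]  # sigmoid activation
    mse = sum((y - s) ** 2 for y, s in zip(targets, sig)) / n
    eps = 1e-12  # numerical guard for log(0)
    bce = -sum(y * math.log(s + eps) + (1 - y) * math.log(1 - s + eps)
               for y, s in zip(targets, sig)) / n
    return alpha * mse + beta * bce
```

The MSE term penalizes the magnitude of the per-pixel error while the cross entropy term sharpens the binary decision, which is why the two are combined.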
5. The three-path neural network for dual-temporal remote sensing image change detection according to claim 4, wherein the feature fusion module (4) consists of a first module group, a second module group and a third module group connected in sequence, wherein:
the first module group consists of three parallel first convolution layers, and a first normalization function module and a first activation function module are sequentially connected behind each first convolution layer;
The second module group consists of two parallel second convolution layers, and a second normalization function module and a second activation function module are sequentially connected behind each second convolution layer;
The third module group is a third convolution layer, and a third normalization function module and a third activation function module are sequentially connected behind the third convolution layer.
6. The three-path neural network for dual-temporal remote sensing image change detection according to claim 5, characterized in that a differential module (2) is connected in front of the second path neural network.
7. The three-path neural network for dual-temporal remote sensing image change detection according to claim 6, characterized in that a first noise reduction module (1) is connected in front of the first path neural network, and a third noise reduction module (3) is connected in front of the third path neural network.
8. The three-path neural network for dual-temporal remote sensing image change detection according to claim 7, wherein the first, second and third activation function modules each employ the ReLU activation function.
9. A method for dual-temporal remote sensing image change detection, based on the three-path neural network for dual-temporal remote sensing image change detection according to any one of claims 1-8, comprising the following steps:
Acquiring a pre-event image, a differential image and a post-event image;
Inputting the pre-event image into the first path neural network, the differential image into the second path neural network, and the post-event image into the third path neural network;
Determining first characteristic information, second characteristic information and third characteristic information based on channel fusion of the first path neural network, the second path neural network and the third path neural network; wherein: the first characteristic information is fusion characteristic information of ground features of the pre-event image, the differential image and the post-event image under a first scale; the second characteristic information is fusion characteristic information of the ground feature of the pre-event image, the differential image and the post-event image under a second scale; the third characteristic information is fusion characteristic information of the ground feature of the pre-event image, the differential image and the post-event image under a third scale;
Inputting the first feature information, the second feature information and the third feature information into the feature fusion module (4); and determining the change of the post-event image relative to the pre-event image based on the feature fusion module (4) to obtain a binary change detection map.
10. The method for dual-temporal remote sensing image change detection according to claim 9, wherein the differential image is obtained by: inputting the pre-event image and the post-event image simultaneously into the differential module (2), and processing them by the differential module (2) to obtain the differential image.
CN202410284742.5A 2024-03-13 2024-03-13 Three-path neural network and method for detecting change of double-phase remote sensing image Active CN117933309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410284742.5A CN117933309B (en) 2024-03-13 2024-03-13 Three-path neural network and method for detecting change of double-phase remote sensing image

Publications (2)

Publication Number Publication Date
CN117933309A true CN117933309A (en) 2024-04-26
CN117933309B CN117933309B (en) 2024-06-18

Family

ID=90753887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410284742.5A Active CN117933309B (en) 2024-03-13 2024-03-13 Three-path neural network and method for detecting change of double-phase remote sensing image

Country Status (1)

Country Link
CN (1) CN117933309B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020143323A1 (en) * 2019-01-08 2020-07-16 平安科技(深圳)有限公司 Remote sensing image segmentation method and device, and storage medium and server
CN115937677A (en) * 2022-12-05 2023-04-07 珠海欧比特宇航科技股份有限公司 Image prediction method, apparatus and medium for building change detection model
WO2023077998A1 (en) * 2021-11-05 2023-05-11 通号通信信息集团有限公司 Method and system for adaptive feature fusion in convolutional neural network
WO2023088314A1 (en) * 2021-11-16 2023-05-25 王树松 Object classification method, apparatus and device, and storage medium
CN116258953A (en) * 2022-09-08 2023-06-13 中国人民解放军战略支援部队信息工程大学 Remote sensing image target detection method
CN116363516A (en) * 2023-03-31 2023-06-30 西安电子科技大学 Remote sensing image change detection method based on edge auxiliary self-adaption
CN116563683A (en) * 2023-04-12 2023-08-08 武汉大学 Remote sensing image scene classification method based on convolutional neural network and multi-layer perceptron
CN117611996A (en) * 2023-11-15 2024-02-27 西北农林科技大学 Grape planting area remote sensing image change detection method based on depth feature fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN, Lu; GUAN, Shuangshuang: "Research on change detection methods for urban high-resolution remote sensing images based on deep learning", Application Research of Computers, no. 1, 30 June 2020 (2020-06-30) *

Also Published As

Publication number Publication date
CN117933309B (en) 2024-06-18

Similar Documents

Publication Publication Date Title
CN107527352B (en) Remote sensing ship target contour segmentation and detection method based on deep learning FCN network
CN112966684B (en) Cooperative learning character recognition method under attention mechanism
CN107239751B (en) High-resolution SAR image classification method based on non-subsampled contourlet full convolution network
CN112597985B (en) Crowd counting method based on multi-scale feature fusion
CN115063573A (en) Multi-scale target detection method based on attention mechanism
CN113139489B (en) Crowd counting method and system based on background extraction and multi-scale fusion network
CN108171119B (en) SAR image change detection method based on residual error network
CN114692509B (en) Strong noise single photon three-dimensional reconstruction method based on multi-stage degeneration neural network
CN110119726A (en) A kind of vehicle brand multi-angle recognition methods based on YOLOv3 model
CN113609896A (en) Object-level remote sensing change detection method and system based on dual-correlation attention
CN107092884A (en) Rapid coarse-fine cascade pedestrian detection method
CN114187520B (en) Building extraction model construction and application method
CN113436210B (en) Road image segmentation method fusing context progressive sampling
CN109543672A (en) Object detecting method based on dense characteristic pyramid network
CN114299383A (en) Remote sensing image target detection method based on integration of density map and attention mechanism
Song et al. PSTNet: Progressive sampling transformer network for remote sensing image change detection
Gao et al. Road extraction using a dual attention dilated-linknet based on satellite images and floating vehicle trajectory data
CN114529462A (en) Millimeter wave image target detection method and system based on improved YOLO V3-Tiny
CN115035381A (en) Lightweight target detection network of SN-YOLOv5 and crop picking detection method
CN116543165B (en) Remote sensing image fruit tree segmentation method based on dual-channel composite depth network
CN106570889A (en) Detecting method for weak target in infrared video
CN117933309B (en) Three-path neural network and method for detecting change of double-phase remote sensing image
CN117315284A (en) Image tampering detection method based on irrelevant visual information suppression
CN111353412A (en) End-to-end 3D-CapsNet flame detection method and device
CN115797684A (en) Infrared small target detection method and system based on context information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant