CN115019186A - Algorithm and system for remote sensing change detection - Google Patents

Algorithm and system for remote sensing change detection

Info

Publication number
CN115019186A
CN115019186A (application CN202210941062.7A)
Authority
CN
China
Prior art keywords
feature
map
similarity
output
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210941062.7A
Other languages
Chinese (zh)
Other versions
CN115019186B (en)
Inventor
牛威
郝磊
丁锐
鱼群
蔡文新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Xingtu Measurement and Control Technology Co.,Ltd.
Original Assignee
Zhongke Xingtu Measurement And Control Technology Hefei Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Xingtu Measurement And Control Technology Hefei Co ltd filed Critical Zhongke Xingtu Measurement And Control Technology Hefei Co ltd
Priority to CN202210941062.7A priority Critical patent/CN115019186B/en
Publication of CN115019186A publication Critical patent/CN115019186A/en
Application granted granted Critical
Publication of CN115019186B publication Critical patent/CN115019186B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G06V20/10 — Scenes; scene-specific elements: terrestrial scenes
    • G06N3/02 — Computing arrangements based on biological models: neural networks
    • G06N3/08 — Computing arrangements based on biological models: learning methods
    • G06V10/44 — Extraction of image or video features: local feature extraction by analysis of parts of the pattern (edges, contours, corners, strokes, intersections; connectivity analysis)
    • G06V10/764 — Image or video recognition using pattern recognition or machine learning: classification, e.g. of video objects
    • G06V10/806 — Image or video recognition: fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82 — Image or video recognition using pattern recognition or machine learning: neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an algorithm and a system for remote sensing change detection, wherein the method comprises the following steps: S1, extracting features of images of different time phases and outputting feature maps; S2, performing feature splicing on the feature maps to obtain a feature splicing map; S3, taking the feature maps and the feature splicing map respectively as input, and performing similarity calculation to obtain output results; S4, performing feature splicing on the output results of S3, using the spliced result as the input of feature fusion, and outputting fused feature maps after a PPM pyramid pooling module; and S5, inputting the fused feature maps into an FCN-head module and an SPP-head module to obtain the final outputs. Through this arrangement of the algorithm and the system, detailed information about the specific changes between the two time-phase images can be obtained by integrating the output results of the FCN-head module and the SPP-head module.

Description

Algorithm and system for remote sensing change detection
Technical Field
The invention relates to the technical field of remote sensing change detection, in particular to an algorithm and a system for remote sensing change detection.
Background
Remote sensing change detection is the process of finding changes on the earth's surface by comparing two or more remote sensing images acquired at different times over the same geographical area. Traditional change detection requires manually designed features, which is labor- and time-consuming work demanding strong professional knowledge, and it is difficult to design a change detection method suitable for all types of ground features. In addition, with the great improvement in computing power, deep convolutional neural network frameworks have developed rapidly; many scholars have applied deep neural networks to change detection, and a series of change detection algorithm frameworks have been proposed. However, these frameworks have shortcomings. For example, reference 1 (a master's thesis: High-resolution image urban ground-feature change detection based on a Siamese convolutional neural network [D]. Wuhan University, 2018) proposes an SCNN network: the images before and after the change (two images of the same position at different times) are input into the model, features are extracted through a weight-sharing twin (Siamese) module, and the extracted features are then compared for similarity to obtain the changed region (change mask). The proposed SCNN network only locates the changed positions and does not obtain detailed information about the change. Therefore, an algorithm framework is needed that obtains not only the changed region but also the detailed information of the change.
Disclosure of Invention
In order to solve the existing problems, the invention provides an algorithm and a system for remote sensing change detection; the specific scheme is as follows:
An algorithm for remote sensing change detection specifically comprises the following steps:
S1, extracting features of images of different time phases and outputting feature maps;
S2, performing feature splicing on the feature maps to obtain a feature splicing map;
S3, taking the feature maps and the feature splicing map respectively as input, and performing similarity calculation to obtain output results;
S4, performing feature splicing on the output results of S3, using the spliced result as the input of feature fusion, and outputting fused feature maps after a PPM pyramid pooling module;
and S5, inputting the fused feature maps into an FCN-head module and an SPP-head module to obtain the final outputs.
Preferably, in S1, based on the twin neural network, a backbone network is used to perform feature extraction on 1 group of 2 images of different time phases, and 2 feature maps are output, denoted pre_feature_map and cur_feature_map; in S2, feature splicing is performed on the 2 feature maps output in S1 to obtain a feature splicing map, denoted concat_feature_map.
Preferably, in S3, similarity calculation is performed taking the 2 feature maps obtained in S1 and the feature splicing map obtained in S2 as input, and 3 calculation results are output, denoted similarity_1, similarity_2 and similarity_3.
Preferably, the calculation results in S3 are produced by 3 calculation branches;
pre_feature_map and cur_feature_map are taken as the input of branch one, which multiplies the feature maps obtained after each passes through a convolutional layer to obtain the output of branch one, denoted similarity_1;
concat_feature_map is taken as the input of branch two, which passes it directly through a convolutional layer to obtain the output of branch two, denoted similarity_2;
and pre_feature_map and cur_feature_map are taken as the input of branch three, which subtracts the feature maps obtained after each passes through a convolutional layer to obtain the output of branch three, denoted similarity_3.
Preferably, in S4, similarity_1 and similarity_2 are feature-spliced and taken as input, and the output feature_fusion1 is obtained after PPM-A; similarly, similarity_2 and similarity_3 are feature-spliced and taken as input, and the output feature_fusion2 is obtained after PPM-B.
Preferably, the specific method for feature splicing of similarity_1 and similarity_2 is as follows: similarity_1 is a tensor of dimension B×C×H×W, and similarity_2 is likewise a tensor of dimension B×C×H×W; similarity_1 is first flattened to B×C×(H·W) and then reduced along the channel dimension, outputting a feature C_P of dimension B×(C/2)×H×W; similarity_2 undergoes the same operation as similarity_1, outputting a feature P_P of dimension B×(C/2)×H×W; C_P and P_P are concatenated to obtain a tensor C_P_F of dimension B×C×(H·W), which is reshaped to a tensor of dimension B×C×H×W and output. The specific method for feature splicing of similarity_2 and similarity_3 is the same as that of similarity_1 and similarity_2.
Preferably, the detailed processing steps of PPM-B are as follows:
SA1, the feature splicing map obtained by feature splicing of similarity_2 and similarity_3 is denoted feature_map_s2s3; feature_map_s2s3 is subjected to 4 groups of pooling operations at different scales (1/2, 1/4, 1/8 and 1/16), and 4 groups of feature maps with different dimensions are output, denoted feature_map_s2s3_p2, feature_map_s2s3_p4, feature_map_s2s3_p8 and feature_map_s2s3_p16;
SA2, the 4 groups of feature maps with different dimensions obtained in SA1 are each passed through a convolution operation, yielding feature_map_s2s3_p2_c, feature_map_s2s3_p4_c, feature_map_s2s3_p8_c and feature_map_s2s3_p16_c, which together are recorded as feature_fusion2.
Preferably, in S5, feature_fusion1 is input into the FCN-head module to obtain a final output, recorded as output1; feature_fusion2 is input into the SPP-head module to obtain a final output, recorded as output2.
Preferably, after feature_fusion2 is input into the SPP-head module, the specific processing steps of the SPP-head module are as follows:
SB1, pooling operations at scales of 1/8, 1/4, 1/2 and 1/1 are performed on feature_map_s2s3_p2_c, feature_map_s2s3_p4_c, feature_map_s2s3_p8_c and feature_map_s2s3_p16_c respectively, to obtain 4 groups of new feature maps with the same resolution, denoted feature_map_s2s3_c_p2_18, feature_map_s2s3_c_p4_14, feature_map_s2s3_c_p8_12 and feature_map_s2s3_c_p1_11;
SB2, the 4 groups of new feature maps with the same resolution obtained in SB1 are spliced, and the result is recorded as feature_map_s2s3_c_p_concat;
SB3, feature_map_s2s3_c_p_concat is upsampled to the same resolution as feature_map_s2s3, and recorded as feature_map_s2s3_p_concat_up;
SB4, feature_map_s2s3_p_concat_up is feature-spliced with feature_map_s2s3, and recorded as SPP-head_feature_fusion2;
SB5, SPP-head_feature_fusion2 is convolved to obtain the output.
Preferably, the system for the remote sensing change detection algorithm comprises a feature extraction module, a similarity calculation module, a feature fusion module and a result output module, electrically connected in sequence through corresponding ports;
the feature extraction module is used for extracting features of images of different time phases and outputting feature maps as the input of the similarity calculation module;
the similarity calculation module comprises three branches, namely a correlation operation branch, a concat feature fusion branch and a difference branch; it learns to distinguish the similarities and differences of images of different time phases after they pass through the twin neural network, and the calculated results are feature-spliced to serve as the input of the feature fusion module;
the feature fusion module comprises a PPM pyramid pooling module and is used for fusing the input feature splicing maps and outputting the fused result as the input of the result output module;
the result output module comprises an FCN-head module and an SPP-head module; the FCN-head module is used for outputting the semantic segmentation of time-phase diagram 1, and the SPP-head module is used for outputting the change of time-phase diagram 2 relative to time-phase diagram 1.
The invention has the following beneficial effects:
Through the arrangement of the algorithm and the system of the invention, the FCN-head module in the result output module outputs the semantic segmentation of time-phase diagram 1, i.e. the information represented by each region of time-phase diagram 1, and the corresponding output of the SPP-head module is the change of time-phase diagram 2 relative to time-phase diagram 1, i.e. the detailed information of the specific changes in the changed region. By integrating the output results of the FCN-head module and the SPP-head module, detailed information about the specific changes from time-phase diagram 1 to time-phase diagram 2 can be obtained.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow chart of the algorithm of the present invention;
FIG. 2 is a diagram of the PPM-A module of the present invention;
FIG. 3 is a diagram of the PPM-A + FCN-head module of the present invention;
FIG. 4 is a diagram of the PPM-B module of the present invention;
FIG. 5 is a diagram of the PPM-B + SPP-head module of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an algorithm for remote sensing change detection specifically includes the following steps:
S1, extracting features of the images of different time phases and outputting feature maps;
Specifically, the feature extraction is based on a twin neural network with shared weights. The twin network weakens the dependence on labels and has good extensibility, which in effect enlarges the usable data set, so that even a data set with a relatively small amount of data can train a deep network to good effect. A backbone network is used to perform feature extraction on 1 group of 2 images of different time phases, and 2 feature maps are output, denoted pre_feature_map and cur_feature_map.
And S2, performing feature splicing on the 2 feature maps output in S1 to obtain a feature splicing map, denoted concat_feature_map. This step enables the back-end network, i.e. the network that calculates similarity, to learn to distinguish the similarities and differences of images of different time phases after they pass through the twin neural network.
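For illustration, a minimal PyTorch-style sketch of steps S1-S2 follows. The patent does not name a specific backbone; the ResNet-18 trunk, the input size and all names in this sketch are assumptions added for illustration, not part of the original filing.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class SiameseExtractor(nn.Module):
    """Weight-sharing (twin) feature extractor: one backbone, two time phases."""
    def __init__(self):
        super().__init__()
        resnet = models.resnet18(weights=None)  # hypothetical backbone choice
        # Keep the convolutional trunk; drop the pooling/classification head.
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])

    def forward(self, img_pre, img_cur):
        # S1: the same (shared-weight) backbone processes both time phases.
        pre_feature_map = self.backbone(img_pre)
        cur_feature_map = self.backbone(img_cur)
        # S2: channel-wise feature splicing of the two feature maps.
        concat_feature_map = torch.cat([pre_feature_map, cur_feature_map], dim=1)
        return pre_feature_map, cur_feature_map, concat_feature_map

# Example: one group of 2 images of the same scene at different time phases.
pre, cur, cat = SiameseExtractor()(torch.randn(1, 3, 1024, 1024),
                                   torch.randn(1, 3, 1024, 1024))
```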
And S3, taking the feature maps and the feature splicing map respectively as input, and performing similarity calculation to obtain output results.
Specifically, the 2 feature maps obtained in S1 and the feature splicing map obtained in S2 are taken as input, similarity calculation is performed, and 3 calculation results are output, denoted similarity_1, similarity_2 and similarity_3.
The calculation results are produced by 3 calculation branches.
pre_feature_map and cur_feature_map are taken as the input of branch one, which multiplies the feature maps obtained after each passes through a convolutional layer to obtain the output of branch one, denoted similarity_1.
concat_feature_map is taken as the input of branch two, which passes it directly through a convolutional layer to obtain the output of branch two, denoted similarity_2.
pre_feature_map and cur_feature_map are taken as the input of branch three, which subtracts the feature maps obtained after each passes through a convolutional layer to obtain the output of branch three, denoted similarity_3.
The similarity calculation thus has three branches, evaluating the similarity of the feature maps in three modes: a correlation operation (multiplication) branch, a concat feature fusion branch and a difference (subtraction) branch, which makes the whole network more robust and the detection precision higher.
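A sketch of the three branches follows. The text specifies only "a convolutional layer" per branch, so kernel sizes and channel widths here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SimilarityBranches(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv_pre = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_cur = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_cat = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.conv_sub_pre = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_sub_cur = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, pre_feature_map, cur_feature_map, concat_feature_map):
        # Branch one: convolve each map, then element-wise multiply (correlation).
        similarity_1 = self.conv_pre(pre_feature_map) * self.conv_cur(cur_feature_map)
        # Branch two: convolve the spliced map directly (concat fusion).
        similarity_2 = self.conv_cat(concat_feature_map)
        # Branch three: convolve each map, then element-wise subtract (difference).
        similarity_3 = self.conv_sub_pre(pre_feature_map) - self.conv_sub_cur(cur_feature_map)
        return similarity_1, similarity_2, similarity_3
```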
And S4, performing feature splicing on the output results of S3, using the spliced result as the input of feature fusion, and outputting fused feature maps after a PPM pyramid pooling module.
Specifically, similarity_1 and similarity_2 are feature-spliced and taken as input, and the output feature_fusion1 is obtained after PPM-A; similarly, similarity_2 and similarity_3 are feature-spliced and taken as input, and the output feature_fusion2 is obtained after PPM-B. The lower layers of the network have small receptive fields: although their resolution is high and their representation of geometric detail is strong, their representation of semantic information is weak. Feature fusion combines low-level and high-level features of different scales, taking both into account, so the network achieves higher detection accuracy.
The specific method for feature splicing of similarity_1 and similarity_2 is as follows: similarity_1 is a tensor of dimension B×C×H×W, and similarity_2 is likewise a tensor of dimension B×C×H×W; similarity_1 is first flattened to B×C×(H·W) and then reduced along the channel dimension, outputting a feature C_P of dimension B×(C/2)×H×W; similarity_2 undergoes the same operation as similarity_1, outputting a feature P_P of dimension B×(C/2)×H×W; C_P and P_P are concatenated to obtain a tensor C_P_F of dimension B×C×(H·W), which is reshaped to a tensor of dimension B×C×H×W and output. The specific method for feature splicing of similarity_2 and similarity_3 is the same as that of similarity_1 and similarity_2.
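The splice thus reduces each B×C×H×W input to B×(C/2)×H×W and concatenates the halves back to B×C×H×W. The reduction operator is not fully specified in the text; the 1×1 convolutions in the sketch below are a guess, and the flatten/reshape bookkeeping is omitted because a 1×1 convolution already acts per spatial position.

```python
import torch
import torch.nn as nn

class FeatureSplice(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Hypothetical channel-halving operators; the patent states only the
        # output shape B x (C/2) x H x W, not the operation that produces it.
        self.reduce_a = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.reduce_b = nn.Conv2d(channels, channels // 2, kernel_size=1)

    def forward(self, similarity_a, similarity_b):
        c_p = self.reduce_a(similarity_a)  # B x C/2 x H x W (the feature C_P)
        p_p = self.reduce_b(similarity_b)  # B x C/2 x H x W (the feature P_P)
        # Concatenate along channels: back to B x C x H x W (the tensor C_P_F).
        return torch.cat([c_p, p_p], dim=1)
```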
As shown in fig. 2, the detailed processing steps of PPM-A include:
SC1, the feature map obtained after feature splicing of similarity_1 and similarity_2 is denoted feature_map_s1s2; feature_map_s1s2 is subjected to 4 groups of pooling operations at different scales (1/4, 1/8, 1/16 and 1/32) and outputs 4 groups of feature maps with different dimensions, denoted feature_map_s1s2_p4, feature_map_s1s2_p8, feature_map_s1s2_p16 and feature_map_s1s2_p32;
SC2, the 4 groups of feature maps with different dimensions obtained in SC1 are respectively convolved and then upsampled by 4 times, 8 times, 16 times and 32 times to obtain 4 groups of new feature maps, denoted feature_map_s1s2_p4_up4, feature_map_s1s2_p8_up8, feature_map_s1s2_p16_up16 and feature_map_s1s2_p32_up32;
SC3, a new feature map, denoted feature_fusion1, is obtained by performing normal feature concatenation (normal CONCAT) on the 4 groups of new feature maps obtained in SC2.
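A sketch of PPM-A under stated assumptions: average pooling (the pooling type is not given in the text), 1×1 convolutions that quarter the channel count, and bilinear interpolation back to the input resolution in place of fixed upsampling factors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PPMA(nn.Module):
    def __init__(self, channels, scales=(4, 8, 16, 32)):
        super().__init__()
        self.scales = scales
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels, channels // 4, kernel_size=1) for _ in scales])

    def forward(self, feature_map_s1s2):
        h, w = feature_map_s1s2.shape[2:]
        outs = []
        for scale, conv in zip(self.scales, self.convs):
            pooled = F.avg_pool2d(feature_map_s1s2, kernel_size=scale)   # SC1
            restored = F.interpolate(conv(pooled), size=(h, w),          # SC2
                                     mode="bilinear", align_corners=False)
            outs.append(restored)
        # SC3: normal channel concatenation of the four restored maps.
        return torch.cat(outs, dim=1)  # feature_fusion1
```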
As shown in fig. 4, the detailed processing steps of PPM-B include:
SA1, the feature splicing map obtained by feature splicing of similarity_2 and similarity_3 is denoted feature_map_s2s3; feature_map_s2s3 is subjected to 4 groups of pooling operations at different scales (1/2, 1/4, 1/8 and 1/16), and 4 groups of feature maps with different dimensions are output, denoted feature_map_s2s3_p2, feature_map_s2s3_p4, feature_map_s2s3_p8 and feature_map_s2s3_p16;
SA2, the 4 groups of feature maps with different dimensions obtained in SA1 are each passed through a convolution operation, yielding feature_map_s2s3_p2_c, feature_map_s2s3_p4_c, feature_map_s2s3_p8_c and feature_map_s2s3_p16_c, which together are recorded as feature_fusion2.
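A sketch of PPM-B under the same assumptions (average pooling, illustrative convolutions). Note that, unlike PPM-A, PPM-B does not upsample: the four convolved maps keep their different resolutions and are only brought to a common resolution later, inside the SPP-head.

```python
import torch.nn as nn
import torch.nn.functional as F

class PPMB(nn.Module):
    def __init__(self, channels, scales=(2, 4, 8, 16)):
        super().__init__()
        self.scales = scales
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=1) for _ in scales])

    def forward(self, feature_map_s2s3):
        # SA1: pool to 1/2, 1/4, 1/8 and 1/16 of the input resolution.
        pooled = [F.avg_pool2d(feature_map_s2s3, kernel_size=s) for s in self.scales]
        # SA2: convolve each scale; the list of four maps is feature_fusion2.
        return [conv(p) for conv, p in zip(self.convs, pooled)]
```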
And S5, inputting the fused feature maps into an FCN-head module and an SPP-head module to obtain the final outputs.
Specifically, feature_fusion1 is input into the FCN-head module to obtain the final output, recorded as output1; feature_fusion2 is input into the SPP-head module to obtain the final output, recorded as output2.
As shown in fig. 3, the specific processing step of the FCN-head module is: a convolution operation is performed on feature_fusion1 to obtain the output, i.e. the semantic segmentation, namely what each region in time-phase diagram 1 represents.
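Since the FCN-head is described only as "a convolution operation" over feature_fusion1, the simplest reading is a single 1×1 convolution mapping channels to semantic classes; num_classes is an assumed parameter.

```python
import torch.nn as nn

class FCNHead(nn.Module):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        # A single 1x1 convolution; any deeper head would also fit the text.
        self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feature_fusion1):
        # output1: per-pixel semantic labels for time-phase diagram 1.
        return self.classifier(feature_fusion1)
```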
As shown in fig. 5, the specific processing steps of the SPP-head module include:
SB1, the feature _ map _ s2s3_ p2_ c, the feature _ map _ s2s3_ p4_ c, the feature _ map _ s2s3_ p8_ c, and the feature _ map _ s2s3_ p16_ c are subjected to pooling operations in the scales of 1/8, 1/4, 1/2, and 1/1 to obtain 4 sets of new feature maps with the same resolution, which are denoted as feature _ map _ s2s3_ c _ p2_18, feature _ map _ s2s3_ c _ p4_14, feature _ map _ s2s3_ c _ p8_12, and feature _ map _ s2s3_ c _ p1_ 11;
SB2, splicing the 4 groups of new feature maps with the same resolution obtained in SB1, and recording as feature _ map _ s2s3_ c _ p _ concat;
SB3, upsampling the feature _ map _ s2s3_ c _ p _ concat to the same resolution as the feature _ map _ s2s3, and recording as feature _ map _ s2s3_ p _ concat _ up;
SB4, feature concatenating the feature _ map _ s2s3_ p _ concat _ up and the feature _ map _ s2s3, and recording as SPP-head _ feature _ fusion 2;
SB5, performing convolution operation on SPP-head _ feature _ fusion2 to obtain output, namely obtaining semantic segmentation, namely the change of the phasor diagram 2 relative to the phasor diagram 1, namely what is specifically changed in the changed area.
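A sketch of the SPP-head (steps SB1-SB5). feature_fusion2 arrives as four maps at 1/2, 1/4, 1/8 and 1/16 of the resolution of feature_map_s2s3, so pooling them by 1/8, 1/4, 1/2 and 1/1 respectively brings all four to 1/16 resolution. Pooling type, channel widths and num_classes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPHead(nn.Module):
    def __init__(self, channels, num_classes):
        super().__init__()
        # SB4 concatenates four maps of `channels` each plus feature_map_s2s3.
        self.classifier = nn.Conv2d(5 * channels, num_classes, kernel_size=1)

    def forward(self, feature_fusion2, feature_map_s2s3):
        # SB1: pool each scale so that all four maps share one resolution.
        factors = (8, 4, 2, 1)
        pooled = [F.avg_pool2d(f, k) if k > 1 else f
                  for f, k in zip(feature_fusion2, factors)]
        cat = torch.cat(pooled, dim=1)                            # SB2
        up = F.interpolate(cat, size=feature_map_s2s3.shape[2:],  # SB3
                           mode="bilinear", align_corners=False)
        fused = torch.cat([up, feature_map_s2s3], dim=1)          # SB4
        return self.classifier(fused)                             # SB5 -> output2
```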
A system for the remote sensing change detection algorithm comprises a feature extraction module, a similarity calculation module, a feature fusion module and a result output module, electrically connected in sequence through corresponding ports.
The feature extraction module is used for extracting features of images of different time phases and outputting feature maps as the input of the similarity calculation module.
The similarity calculation module comprises three branches, namely a correlation (multiplication) operation branch, a concat feature fusion branch and a difference branch; it learns to distinguish the similarities and differences of images of different time phases after they pass through the twin neural network, and the calculated results are feature-spliced to serve as the input of the feature fusion module.
The feature fusion module comprises a PPM pyramid pooling module and is used for fusing the input feature splicing maps and outputting the fused result as the input of the result output module.
The result output module comprises an FCN-head module and an SPP-head module; the FCN-head module is used for outputting the semantic segmentation of time-phase diagram 1, and the SPP-head module is used for outputting the change of time-phase diagram 2 relative to time-phase diagram 1.
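For completeness, a sketch wiring the four modules into the end-to-end system, reusing the illustrative classes from the sketches above. Channel widths follow the assumed ResNet-18 trunk (512), and with a stride-32 backbone the 1/32 pooling in PPM-A requires inputs of at least 1024×1024; all of this is illustrative, not the filed configuration.

```python
import torch
import torch.nn as nn

class ChangeDetectionSystem(nn.Module):
    def __init__(self, channels=512, num_classes=2):
        super().__init__()
        self.extractor = SiameseExtractor()             # feature extraction module
        self.similarity = SimilarityBranches(channels)  # similarity calculation module
        self.splice_12 = FeatureSplice(channels)
        self.splice_23 = FeatureSplice(channels)
        self.ppm_a = PPMA(channels)                     # feature fusion (PPM-A)
        self.ppm_b = PPMB(channels)                     # feature fusion (PPM-B)
        self.fcn_head = FCNHead(channels, num_classes)  # semantics of time-phase 1
        self.spp_head = SPPHead(channels, num_classes)  # change of phase 2 vs. phase 1

    def forward(self, img_pre, img_cur):
        pre, cur, cat = self.extractor(img_pre, img_cur)            # S1-S2
        s1, s2, s3 = self.similarity(pre, cur, cat)                 # S3
        feature_fusion1 = self.ppm_a(self.splice_12(s1, s2))        # S4, PPM-A
        feature_map_s2s3 = self.splice_23(s2, s3)
        feature_fusion2 = self.ppm_b(feature_map_s2s3)              # S4, PPM-B
        output1 = self.fcn_head(feature_fusion1)                    # S5
        output2 = self.spp_head(feature_fusion2, feature_map_s2s3)  # S5
        return output1, output2

model = ChangeDetectionSystem()
out1, out2 = model(torch.randn(1, 3, 1024, 1024), torch.randn(1, 3, 1024, 1024))
```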
Through the arrangement of the algorithm and the system of the invention, the FCN-head module in the result output module outputs the semantic segmentation of time-phase diagram 1, i.e. what each region of time-phase diagram 1 represents, and the corresponding output of the SPP-head module is the change of time-phase diagram 2 relative to time-phase diagram 1, i.e. what has specifically changed in the changed region. By integrating the output results of the FCN-head module and the SPP-head module, detailed information about the specific changes from time-phase diagram 1 to time-phase diagram 2 can be obtained.
The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. An algorithm for remote sensing change detection, characterized by specifically comprising the following steps:
S1, extracting features of images of different time phases and outputting feature maps;
S2, performing feature splicing on the feature maps to obtain a feature splicing map;
S3, taking the feature maps and the feature splicing map respectively as input, and performing similarity calculation to obtain output results;
S4, performing feature splicing on the output results of S3, using the spliced result as the input of feature fusion, and outputting fused feature maps after a PPM pyramid pooling module;
and S5, inputting the fused feature maps into an FCN-head module and an SPP-head module to obtain the final outputs.
2. The algorithm for remote sensing change detection according to claim 1, characterized in that: in S1, based on the twin neural network, a backbone network is used to perform feature extraction on 1 group of 2 images of different time phases, and 2 feature maps are output, denoted pre_feature_map and cur_feature_map; in S2, feature splicing is performed on the 2 feature maps output in S1 to obtain a feature splicing map, denoted concat_feature_map.
3. The algorithm for remote sensing change detection according to claim 2, characterized in that: in S3, the 2 feature maps obtained in S1 and the feature splicing map obtained in S2 are respectively taken as input, similarity calculation is performed, and 3 calculation results are output, denoted similarity_1, similarity_2 and similarity_3.
4. The algorithm for remote sensing change detection according to claim 3, characterized in that: the calculation results in S3 are produced by 3 calculation branches;
pre_feature_map and cur_feature_map are taken as the input of branch one, which multiplies the feature maps obtained after each passes through a convolutional layer to obtain the output of branch one, denoted similarity_1;
concat_feature_map is taken as the input of branch two, which passes it directly through a convolutional layer to obtain the output of branch two, denoted similarity_2;
and pre_feature_map and cur_feature_map are taken as the input of branch three, which subtracts the feature maps obtained after each passes through a convolutional layer to obtain the output of branch three, denoted similarity_3.
5. The algorithm for remote sensing change detection according to claim 4, characterized in that: in S4, similarity_1 and similarity_2 are feature-spliced and taken as input, and the output feature_fusion1 is obtained after PPM-A; in addition, similarity_2 and similarity_3 are feature-spliced and taken as input, and the output feature_fusion2 is obtained after PPM-B.
6. The algorithm for remote sensing change detection according to claim 5, characterized in that the specific method for feature splicing of similarity_1 and similarity_2 is as follows: similarity_1 is a tensor of dimension B×C×H×W, and similarity_2 is likewise a tensor of dimension B×C×H×W; similarity_1 is first flattened to B×C×(H·W) and then reduced along the channel dimension, outputting a feature C_P of dimension B×(C/2)×H×W; similarity_2 undergoes the same operation as similarity_1, outputting a feature P_P of dimension B×(C/2)×H×W; C_P and P_P are concatenated to obtain a tensor C_P_F of dimension B×C×(H·W), which is reshaped to a tensor of dimension B×C×H×W and output; the specific method for feature splicing of similarity_2 and similarity_3 is the same as that of similarity_1 and similarity_2.
7. The algorithm for remote sensing change detection according to claim 6, characterized in that the specific processing steps of PPM-B comprise:
SA1, the feature splicing map obtained by feature splicing of similarity_2 and similarity_3 is denoted feature_map_s2s3; feature_map_s2s3 is subjected to 4 groups of pooling operations at different scales (1/2, 1/4, 1/8 and 1/16) and outputs 4 groups of feature maps with different dimensions, denoted feature_map_s2s3_p2, feature_map_s2s3_p4, feature_map_s2s3_p8 and feature_map_s2s3_p16;
SA2, the 4 groups of feature maps with different dimensions obtained in SA1 are each passed through a convolution operation, yielding feature_map_s2s3_p2_c, feature_map_s2s3_p4_c, feature_map_s2s3_p8_c and feature_map_s2s3_p16_c, which together are recorded as feature_fusion2.
8. The algorithm for remote sensing change detection according to claim 7, characterized in that: in S5, feature_fusion1 is input into the FCN-head module to obtain the final output, recorded as output1; feature_fusion2 is input into the SPP-head module to obtain the final output, recorded as output2.
9. The algorithm for remote sensing change detection according to claim 8, characterized in that after feature_fusion2 is input into the SPP-head module, the specific processing steps of the SPP-head module are as follows:
SB1, pooling operations at scales of 1/8, 1/4, 1/2 and 1/1 are performed on feature_map_s2s3_p2_c, feature_map_s2s3_p4_c, feature_map_s2s3_p8_c and feature_map_s2s3_p16_c respectively, to obtain 4 groups of new feature maps with the same resolution, denoted feature_map_s2s3_c_p2_18, feature_map_s2s3_c_p4_14, feature_map_s2s3_c_p8_12 and feature_map_s2s3_c_p1_11;
SB2, the 4 groups of new feature maps with the same resolution obtained in SB1 are spliced, and the result is recorded as feature_map_s2s3_c_p_concat;
SB3, feature_map_s2s3_c_p_concat is upsampled to the same resolution as feature_map_s2s3, and recorded as feature_map_s2s3_p_concat_up;
SB4, feature_map_s2s3_p_concat_up is feature-spliced with feature_map_s2s3, and recorded as SPP-head_feature_fusion2;
SB5, SPP-head_feature_fusion2 is convolved to obtain the output.
10. A system for the remote sensing change detection algorithm according to any one of claims 1-9, characterized in that: the system comprises a feature extraction module, a similarity calculation module, a feature fusion module and a result output module, electrically connected in sequence through corresponding ports;
the feature extraction module is used for extracting features of images of different time phases and outputting feature maps as the input of the similarity calculation module;
the similarity calculation module comprises three branches, namely a correlation operation branch, a concat feature fusion branch and a difference branch; it learns to distinguish the similarities and differences of images of different time phases after they pass through the twin neural network, and the calculated results are feature-spliced to serve as the input of the feature fusion module;
the feature fusion module comprises a PPM pyramid pooling module and is used for fusing the input feature splicing maps and outputting the fused result as the input of the result output module;
the result output module comprises an FCN-head module and an SPP-head module; the FCN-head module is used for outputting the semantic segmentation of time-phase diagram 1, and the SPP-head module is used for outputting the change of time-phase diagram 2 relative to time-phase diagram 1.
CN202210941062.7A 2022-08-08 2022-08-08 Method and system for detecting remote sensing change Active CN115019186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210941062.7A CN115019186B (en) 2022-08-08 2022-08-08 Method and system for detecting remote sensing change

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210941062.7A CN115019186B (en) 2022-08-08 2022-08-08 Method and system for detecting remote sensing change

Publications (2)

Publication Number Publication Date
CN115019186A true CN115019186A (en) 2022-09-06
CN115019186B CN115019186B (en) 2022-11-22

Family

ID=83065799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210941062.7A Active CN115019186B (en) 2022-08-08 2022-08-08 Method and system for detecting remote sensing change

Country Status (1)

Country Link
CN (1) CN115019186B (en)

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001008098A1 (en) * 1999-07-21 2001-02-01 Obvious Technology, Inc. Object extraction in images
CN109409263A (en) * 2018-10-12 2019-03-01 武汉大学 A kind of remote sensing image city feature variation detection method based on Siamese convolutional network
CN110263705A (en) * 2019-06-19 2019-09-20 上海交通大学 Towards two phase of remote sensing technology field high-resolution remote sensing image change detecting method
CN110533631A (en) * 2019-07-15 2019-12-03 西安电子科技大学 SAR image change detection based on the twin network of pyramid pondization
CN111047551A (en) * 2019-11-06 2020-04-21 北京科技大学 Remote sensing image change detection method and system based on U-net improved algorithm
CN111681197A (en) * 2020-06-12 2020-09-18 陕西科技大学 Remote sensing image unsupervised change detection method based on Siamese network structure
CN112668494A (en) * 2020-12-31 2021-04-16 西安电子科技大学 Small sample change detection method based on multi-scale feature extraction
CN113065467A (en) * 2021-04-01 2021-07-02 中科星图空间技术有限公司 Satellite image low-coherence region identification method and device based on deep learning
CN113221997A (en) * 2021-05-06 2021-08-06 湖南中科星图信息技术股份有限公司 High-resolution image rape extraction method based on deep learning algorithm
CN113240023A (en) * 2021-05-19 2021-08-10 中国民航大学 Change detection method and device based on change image classification and feature difference value prior
CN113706482A (en) * 2021-08-16 2021-11-26 武汉大学 High-resolution remote sensing image change detection method
CN113705538A (en) * 2021-09-28 2021-11-26 黄河水利委员会黄河水利科学研究院 High-resolution remote sensing image road change detection device and method based on deep learning


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MENGYA ZHANG: "Triplet-Based Semantic Relation Learning for Aerial Remote Sensing Image Change Detection", IEEE Geoscience and Remote Sensing Letters *
ZHENCHAO ZHANG et al.: "Change Detection between Multimodal Remote Sensing Data Using Siamese CNN", arXiv *
杜俊翰 et al.: "Remote sensing image change detection based on multi-scale attention features and Siamese discrimination, and its noise robustness", Journal of Data Acquisition and Processing *
雷婷: "Research and application of remote sensing image change detection based on neural networks", China Master's Theses Full-text Database, Engineering Science and Technology II *

Also Published As

Publication number Publication date
CN115019186B (en) 2022-11-22

Similar Documents

Publication Publication Date Title
CN110853026B (en) Remote sensing image change detection method integrating deep learning and region segmentation
CN110084850B (en) Dynamic scene visual positioning method based on image semantic segmentation
CN108664981B (en) Salient image extraction method and device
CN111583097A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN104931044B (en) A kind of star sensor image processing method and system
CN107292247A (en) A kind of Human bodys' response method and device based on residual error network
CN110246148B (en) Multi-modal significance detection method for depth information fusion and attention learning
KR20200040186A (en) Learning method and testing method for object detector based on r-cnn, and learning device and testing device using the same
CN106548169A (en) Fuzzy literal Enhancement Method and device based on deep neural network
Delibasoglu et al. Improved U-Nets with inception blocks for building detection
CN116797787B (en) Remote sensing image semantic segmentation method based on cross-modal fusion and graph neural network
CN113610070A (en) Landslide disaster identification method based on multi-source data fusion
CN117876836B (en) Image fusion method based on multi-scale feature extraction and target reconstruction
CN111310767A (en) Significance detection method based on boundary enhancement
CN112991364A (en) Road scene semantic segmentation method based on convolution neural network cross-modal fusion
CN113988147A (en) Multi-label classification method and device for remote sensing image scene based on graph network, and multi-label retrieval method and device
CN111242003B (en) Video salient object detection method based on multi-scale constrained self-attention mechanism
Liu et al. DSAMNet: A deeply supervised attention metric based network for change detection of high-resolution images
CN111598841B (en) Example significance detection method based on regularized dense connection feature pyramid
CN115019186B (en) Method and system for detecting remote sensing change
Lu et al. An iterative classification and semantic segmentation network for old landslide detection using high-resolution remote sensing images
Hinojosa et al. Spectral-spatial classification from multi-sensor compressive measurements using superpixels
Ma et al. STNet: Spatial and Temporal feature fusion network for change detection in remote sensing images
Perciano et al. A hierarchical Markov random field for road network extraction and its application with optical and SAR data
CN114549958B (en) Night and camouflage target detection method based on context information perception mechanism

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 35th Floor, Building A1, Phase I, Zhongan Chuanggu Science and Technology Park, No. 900, Wangjiang West Road, High-tech Zone, Hefei City, Anhui Province, 230000

Patentee after: Zhongke Xingtu Measurement and Control Technology Co.,Ltd.

Address before: 35th Floor, Building A1, Phase I, Zhongan Chuanggu Science and Technology Park, No. 900, Wangjiang West Road, High-tech Zone, Hefei City, Anhui Province, 230000

Patentee before: Zhongke Xingtu measurement and control technology (Hefei) Co.,Ltd.