CN117671437B - Open stope identification and change detection method based on multitasking convolutional neural network - Google Patents


Info

Publication number
CN117671437B
CN117671437B
Authority
CN
China
Prior art keywords
feature map
feature
change detection
image
identification
Prior art date
Legal status
Active
Application number
CN202311359531.5A
Other languages
Chinese (zh)
Other versions
CN117671437A (en)
Inventor
李军
邢江河
杜守航
张成业
李炜
谢焱新
Current Assignee
China University of Mining and Technology Beijing CUMTB
Original Assignee
China University of Mining and Technology Beijing CUMTB
Priority date
Filing date
Publication date
Application filed by China University of Mining and Technology Beijing CUMTB filed Critical China University of Mining and Technology Beijing CUMTB
Priority to CN202311359531.5A
Publication of CN117671437A
Application granted
Publication of CN117671437B


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses an open stope identification and change detection method based on a multi-task convolutional neural network, which comprises the following steps: S1, acquiring remote sensing image data of the two time phases T1 and T2 of a study area, and constructing a multi-task convolutional neural network model; S2, the change detection network branch performs differential fusion on the feature maps obtained by the first identification network branch and the second identification network branch to obtain coding feature maps, and then performs feature fusion through skip connections to obtain feature maps D_t-5, D_t-4, D_t-3 and D_t-2; S3, the change detection network branch performs differential fusion on feature map D_t1-2 and feature map D_t2-2 to obtain feature map D_t1-t2; S4, feature map D_t-2 is multiplied by the channel attention weight and the spatial attention weight respectively to obtain feature map D'_t-2, and the change detection result is then obtained through an up-sampling operation. The invention builds the multi-task convolutional neural network model on a twin (Siamese) VGG-16 network structure and can be applied rapidly and efficiently to open-pit stope identification and automatic detection of change areas.

Description

Open stope identification and change detection method based on multitasking convolutional neural network
Technical Field
The invention relates to the field of open stope remote sensing image processing and change detection, and in particular to an open stope identification and change detection method based on a multi-task convolutional neural network.
Background
Although the exploitation of coal resources promotes economic development, it causes serious ecological and environmental problems, specifically: as the intensity of coal resource exploitation increases, severe surface damage occurs, the normal land-use pattern is disturbed, and ecological elements such as surface/ground water, soil, vegetation and the atmosphere are affected, bringing ecological environment risks. Conventional open stope identification and change detection usually rely on manual field investigation, and such manually surveyed data suffer from low accuracy, heavy consumption of manpower and material resources, and untimely monitoring. A method that can accurately identify the spatial extent of open stopes in mining areas and detect their changes helps the relevant national departments in ecological environment monitoring, management and land-use control of open stopes.
Existing methods for automatically identifying open stopes and detecting their changes from remote sensing images are mainly traditional automatic remote sensing interpretation methods, such as membership functions and random forests; although their identification accuracy keeps improving, the resulting accuracy still cannot meet application requirements. Most existing studies treat identification and change detection as two separate tasks, and the change detection does not reach deep into the encoding process of feature identification, so change detection accuracy is low; in particular, for low-detail feature identification, change detection is disturbed by pseudo-change information (the pseudo-change problem is especially serious in the open stope change detection task because the open stopes of coal mining areas are continuously mined). The ground objects of coal mining areas are complex in type and highly heterogeneous, so the identification and change detection tasks of open stopes, both of which focus on details, face great challenges.
Disclosure of Invention
The invention aims to solve the technical problems pointed out in the background art and provides an open stope identification and change detection method based on a multi-task convolutional neural network. It constructs a multi-task convolutional neural network model on a twin VGG-16 network structure, realizes open stope spatial-extent identification and change detection in an end-to-end manner, can be applied rapidly and efficiently to open stope identification and automatic detection of change areas, and provides a network detection model and data support for the ecological environment protection of open stopes and the monitoring of illegal mining activities in mining areas.
The aim of the invention is achieved by the following technical scheme:
An open stope identification and change detection method based on a multi-task convolutional neural network, the method comprising:
s1, determining a research area, and collecting remote sensing image data of two time phases of the research area T1 and the research area T2; constructing a multi-task convolutional neural network model, wherein the multi-task convolutional neural network model comprises a first identification network branch, a second identification network branch and a change detection network branch;
The encoding process of the first identification network branch uses a VGG-16 network to extract five levels of feature maps from the T1 image data, denoted E_t1-1, E_t1-2, E_t1-3, E_t1-4 and E_t1-5; the decoding process of the first identification network branch applies skip connections and feature fusion to feature maps E_t1-2, E_t1-3 and E_t1-4 to obtain feature maps D_t1-5, D_t1-4, D_t1-3 and D_t1-2, and the identification result of the T1 image data is then obtained through an up-sampling operation;
The encoding process of the second identification network branch uses a VGG-16 network to extract five levels of feature maps from the T2 image data, denoted E_t2-1, E_t2-2, E_t2-3, E_t2-4 and E_t2-5; the decoding process of the second identification network branch applies skip connections and feature fusion to feature maps E_t2-2, E_t2-3 and E_t2-4 to obtain feature maps D_t2-5, D_t2-4, D_t2-3 and D_t2-2, and the identification result of the T2 image data is then obtained through an up-sampling operation;
S2, the encoding process of the change detection network branch performs differential fusion on feature maps E_t1-2, E_t1-3, E_t1-4, E_t1-5 and feature maps E_t2-2, E_t2-3, E_t2-4, E_t2-5 to obtain coding feature maps E_t-2, E_t-3, E_t-4, E_t-5; the decoding process of the change detection network branch applies skip connections and feature fusion to coding feature maps E_t-2, E_t-3 and E_t-4 to obtain feature maps D_t-5, D_t-4, D_t-3 and D_t-2;
S3, the change detection network branch performs differential fusion on feature map D_t1-2 and feature map D_t2-2 to obtain feature map D_t1-t2; the change detection network branch processes feature map D_t1-t2 with a convolutional block attention module to obtain a channel attention weight and a spatial attention weight;
S4, feature map D_t-2 is multiplied by the channel attention weight and the spatial attention weight respectively to obtain feature map D'_t-2 with enhanced information in the channel and spatial directions, and the change detection result of the study area is then obtained through an up-sampling operation.
In order to better implement the present invention, in step S1 the multi-task convolutional neural network model is trained with the following method:
S11, preparing remote sensing image sample data of the two time phases T1 and T2 of the study area, comprising: collecting remote sensing image samples of the two time phases T1 and T2, extracting the corresponding boundary vectors, and performing vector-to-raster conversion to convert the remote sensing image samples of the two time phases into sample raster images; the remote sensing image sample of phase T1 and the sample raster image of phase T1 together form the T1 image sample data, and the remote sensing image sample of phase T2 and the sample raster image of phase T2 together form the T2 image sample data;
S12, cutting the T1 image sample data and the T2 image sample data into image blocks and dividing them into a training set, a validation set and a test set in the ratio 6:2:2; the training set and validation set are used to train the model, and the test set is used to test the precision and generalization capability of the model.
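The 6:2:2 split in step S12 can be sketched as follows; this is an illustrative sketch only, and the block IDs, shuffling and seed are our assumptions rather than details from the patent.

```python
# Hypothetical sketch of the 6:2:2 train/validation/test split of step S12.
import random

def split_blocks(block_ids, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle image-block IDs and split them into train/val/test sets."""
    ids = list(block_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

# e.g. 100 co-registered T1/T2 block pairs -> 60 / 20 / 20
train, val, test = split_blocks(range(100))
```

Because the T1 and T2 blocks are cut over the same regions, splitting by block ID keeps each two-phase pair in the same subset.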
In order to better implement the present invention, the decoding process of the first identification network branch is as follows: feature map E_t1-5 is first convolved to obtain feature map D_t1-5; D_t1-5 is then up-sampled, skip-connected with feature map E_t1-4 and feature-fused to obtain D_t1-4; D_t1-4 is up-sampled, skip-connected with E_t1-3 and feature-fused to obtain D_t1-3; D_t1-3 is up-sampled, skip-connected with E_t1-2 and feature-fused to obtain D_t1-2; and the identification result of the T1 image data is obtained by up-sampling D_t1-2. The decoding process of the second identification network branch is as follows: feature map E_t2-5 is first convolved to obtain feature map D_t2-5; D_t2-5 is then up-sampled, skip-connected with E_t2-4 and feature-fused to obtain D_t2-4; D_t2-4 is up-sampled, skip-connected with E_t2-3 and feature-fused to obtain D_t2-3; D_t2-3 is up-sampled, skip-connected with E_t2-2 and feature-fused to obtain D_t2-2; and the identification result of the T2 image data is obtained by up-sampling D_t2-2. The decoding process of the change detection network branch is as follows: coding feature map E_t-5 is first convolved to obtain feature map D_t-5; D_t-5 is then up-sampled, skip-connected with E_t-4 and feature-fused to obtain D_t-4; D_t-4 is up-sampled, skip-connected with E_t-3 and feature-fused to obtain D_t-3; and D_t-3 is up-sampled, skip-connected with E_t-2 and feature-fused to obtain D_t-2.
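One decoder step (up-sample, skip-connect, fuse) can be sketched as below. This is a minimal NumPy stand-in: the shapes are illustrative, nearest-neighbour up-sampling is assumed, and the averaging "fusion" merely stands in for the learned convolution that the patent's decoder would apply after concatenation.

```python
# Minimal sketch of one decoder step: upsample -> skip connection -> fusion.
import numpy as np

def upsample2(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def decode_step(d_deep, e_skip):
    """Upsample the deeper decoder map and fuse it with the encoder skip map."""
    up = upsample2(d_deep)                      # (C, 2H, 2W)
    cat = np.concatenate([up, e_skip], axis=0)  # channel concatenation (skip connection)
    # stand-in for the learned fusion convolution: mean over the two halves
    return 0.5 * (cat[:up.shape[0]] + cat[up.shape[0]:])

# e.g. D_t1-4 from D_t1-5 and E_t1-4 (toy shapes)
d = decode_step(np.ones((4, 8, 8)), np.zeros((4, 16, 16)))
```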
Preferably, the remote sensing image data of the two time phases T1 and T2 in step S1 and the remote sensing image samples in step S11 are subjected to image preprocessing, which includes radiometric calibration, atmospheric correction, orthorectification and/or image fusion.
Preferably, the skip connections further include processing by an edge information enhancement module. The edge information enhancement module comprises channel-dimension pooling and Sobel convolution: the module first compresses the channels of the input feature map through channel-dimension pooling; the Sobel convolution comprises a horizontal operator Sobel_x and a vertical operator Sobel_y; the channel-compressed feature map is Sobel-convolved to obtain edge information in the horizontal and vertical directions, the two results are added, and the sum is then multiplied by the original input feature map to obtain the feature map with enhanced edge information.
Preferably, the loss function of the multi-task convolutional neural network model consists of a contrastive loss function L_CT and a cross-entropy loss function L_CE, calculated as follows:
L = ω_1·L_CT + ω_2·L_CE; where ω_1 is the weight of the contrastive loss function L_CT, ω_2 is the weight of the cross-entropy loss function L_CE, and L is the total loss value of the multi-task convolutional neural network model.
Preferably, the contrastive loss function L_CT is formulated as follows:
L_CT = (1 / (2N)) · Σ_{n=1..N} [ y_n · d_n² + (1 − y_n) · max(margin − d_n, 0)² ]
where d_n is the Euclidean distance between the two features of the n-th feature pair; y_n = 1 if the two features of pair n are similar, otherwise y_n = 0; margin is a set threshold; and N is the total number of feature pairs.
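The contrastive loss above can be sketched numerically as below; this is a minimal sketch under the stated convention (y_n = 1 for similar pairs), and the variable names and default margin are ours, not the patent's.

```python
# Sketch of the contrastive loss L_CT over N feature pairs.
import numpy as np

def contrastive_loss(d, y, margin=2.0):
    """d: Euclidean distances of the N feature pairs; y: 1 if similar, else 0."""
    d = np.asarray(d, dtype=float)
    y = np.asarray(y, dtype=float)
    n = d.size
    similar_term = y * d**2                                      # pulls similar pairs together
    dissimilar_term = (1 - y) * np.maximum(margin - d, 0.0)**2   # pushes changed pairs apart
    return (similar_term + dissimilar_term).sum() / (2 * n)

# a coincident similar pair plus a changed pair already beyond the margin
loss = contrastive_loss([0.0, 3.0], [1, 0])
```

The loss is zero exactly when similar pairs coincide (d = 0) and dissimilar pairs are separated by at least the margin.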
Preferably, the cross-entropy loss function L_CE is formulated as follows:
L_CE = −(1 / p) · Σ_{i=1..p} [ ŷ_i · log(y_i) + (1 − ŷ_i) · log(1 − y_i) ]
where ŷ_i is the true class corresponding to pixel i, y_i is the model-predicted result for pixel i, and p is the total number of pixels.
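A per-pixel binary cross-entropy of this form can be sketched as below; the clipping constant is our numerical-stability assumption, not part of the patent's formula.

```python
# Sketch of the per-pixel binary cross-entropy L_CE.
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-7):
    """y_true: ground-truth labels in {0, 1}; y_pred: predicted probabilities."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)  # avoid log(0)
    p = y_true.size
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)).sum() / p

v = cross_entropy([1.0], [0.5])  # an uncertain prediction costs log 2 per pixel
```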
Compared with the prior art, the invention has the following advantages:
(1) The invention builds a multi-task convolutional neural network model on a twin VGG-16 network structure, can synchronously realize open stope spatial-extent identification and change detection end to end, can be applied rapidly and efficiently to open stope identification and automatic change-area detection, and provides a network detection model and data support for the ecological environment protection of open stopes and the monitoring of illegal mining activities in mining areas.
(2) The multi-task convolutional neural network model uses the contrastive loss function to constrain the feature-extraction process in the feature-encoding part, which strengthens the ability to recognize the differences of changed samples while effectively ignoring unchanged samples, addresses the complex heterogeneity of mining areas and improves change detection precision. An edge information enhancement module that strengthens the edge features of the open stope is built into the skip connections, improving the separability of open stope details. In the feature-decoding layer, the features decoded by the identification branches are fused by absolute difference and input into the attention-mechanism module to obtain spatial and channel attention features, which helps to further improve the precision of the change detection task.
(3) The invention uses the twin VGG-16 network structure to extract features separately from the earlier- and later-phase remote sensing images for the subsequent feature decoding of the identification branches and open stope identification; at the same time, under the supervision of the change detection ground truth, the contrastive loss function supervises and constrains the feature extraction from the two-phase images. This strengthens the model's learning of substantially changed image features, suppresses features irrelevant to substantial change, improves detail feature discriminability, reduces interference from pseudo-change information, and thereby raises the network's sensitivity to changed pixels.
(4) The invention performs an absolute-difference operation on the multi-dimensional features that the twin VGG-16 network extracts from the two-phase remote sensing images, for the subsequent feature decoding of the change detection branch; it performs an absolute-difference operation on the identification-branch features and inputs the result into a convolutional block attention module to obtain the channel attention weight and the spatial attention weight; finally, the two attention weights are fused with the decoded features of the change detection branch to obtain a high-precision change detection result.
(5) The invention builds an edge information enhancement module that strengthens the edge features of the open stope into the skip connections. In the feature-encoding stage, the continuous convolution and pooling operations gradually blur the edge structure information of ground objects in the features, which hinders the decoding part of the open stope identification and change detection tasks; the edge information enhancement module counteracts this blurring.
Drawings
FIG. 1 is a schematic diagram of a multi-tasking convolutional neural network model of the present invention;
FIG. 2 is a schematic diagram of the principle structure of a VGG-16 network according to an embodiment;
FIG. 3 is a schematic diagram of the principle structure of an edge information enhancement module according to an embodiment;
FIG. 4 is a schematic diagram of the attention module of CBAM in an embodiment;
FIG. 5 is a schematic diagram of a channel attention module according to an embodiment;
FIG. 6 is a schematic structural diagram of a spatial attention module according to an embodiment.
Detailed Description
The invention is further illustrated by the following examples:
Examples
As shown in FIG. 1, an open stope identification and change detection method based on a multi-task convolutional neural network comprises:
S1, determining a study area, collecting remote sensing image data (high-resolution remote sensing image data) of the two time phases T1 and T2 of the study area, and constructing a multi-task convolutional neural network model comprising a first identification network branch, a second identification network branch and a change detection network branch. In this embodiment, the first and second identification network branches have the same structure and both adopt VGG-16 networks; as shown in FIG. 1, they form a twin VGG-16 network structure. As shown in FIG. 2, the VGG-16 network of the invention preferably comprises 13 convolutional layers and 4 pooling layers for the 5 levels of feature extraction.
The encoding process of the first identification network branch uses the VGG-16 network to extract five levels of feature maps from the T1 image data, denoted E_t1-1, E_t1-2, E_t1-3, E_t1-4 and E_t1-5; the decoding process of the first identification network branch applies skip connections and feature fusion to feature maps E_t1-2, E_t1-3 and E_t1-4 to obtain feature maps D_t1-5, D_t1-4, D_t1-3 and D_t1-2, and the identification result of the T1 image data is then obtained through an up-sampling operation. Preferably, the decoding process of the first identification network branch is as follows: feature map E_t1-5 is first convolved to obtain feature map D_t1-5; D_t1-5 is up-sampled, skip-connected with E_t1-4 and feature-fused to obtain D_t1-4; D_t1-4 is up-sampled, skip-connected with E_t1-3 and feature-fused to obtain D_t1-3; D_t1-3 is up-sampled, skip-connected with E_t1-2 and feature-fused to obtain D_t1-2; and the identification result of the T1 image data is obtained by up-sampling D_t1-2.
The encoding process of the second identification network branch uses the VGG-16 network to extract five levels of feature maps from the T2 image data, denoted E_t2-1, E_t2-2, E_t2-3, E_t2-4 and E_t2-5; the decoding process of the second identification network branch applies skip connections and feature fusion to feature maps E_t2-2, E_t2-3 and E_t2-4 to obtain feature maps D_t2-5, D_t2-4, D_t2-3 and D_t2-2, and the identification result of the T2 image data is then obtained through an up-sampling operation. Preferably, the decoding process of the second identification network branch is as follows: feature map E_t2-5 is first convolved to obtain feature map D_t2-5; D_t2-5 is up-sampled, skip-connected with E_t2-4 and feature-fused to obtain D_t2-4; D_t2-4 is up-sampled, skip-connected with E_t2-3 and feature-fused to obtain D_t2-3; D_t2-3 is up-sampled, skip-connected with E_t2-2 and feature-fused to obtain D_t2-2; and the identification result of the T2 image data is obtained by up-sampling D_t2-2.
In some embodiments, in step S1 the multi-task convolutional neural network model is trained with the following method:
S11, remote sensing image sample data of the two time phases T1 and T2 of the study area (comprising T1 image sample data and T2 image sample data) are produced as follows: remote sensing image samples (high-resolution remote sensing image data) of the two time phases T1 and T2 are collected; the corresponding boundary vectors are extracted (the boundary of the open stope can be measured and positioned by field investigation, and the boundary positioning data facilitates vector extraction of the boundary); vector-to-raster conversion is performed to convert the remote sensing image samples of the two phases into sample raster images; the remote sensing image sample of phase T1 and the sample raster image of phase T1 together form the T1 image sample data, and the remote sensing image sample of phase T2 and the sample raster image of phase T2 together form the T2 image sample data. Preferably, the remote sensing image data of the two time phases in step S1 and the remote sensing image samples in step S11 are subjected to image preprocessing, which includes radiometric calibration, atmospheric correction, orthorectification and/or image fusion.
S12, the T1 image sample data and the T2 image sample data are cut into image blocks (cut over the same regions so that blocks of the same region correspond; each block has size C × H × W, where C is the number of channels, H is the number of image rows and W is the number of image columns). In this embodiment, data enhancement may be applied to the image blocks to strengthen the generalization capability of the model; the data enhancement comprises flipping, translation, scale change, contrast change and Gaussian noise. The blocks are divided into a training set, a validation set and a test set in the ratio 6:2:2; the training set and validation set are used to train the model, and the test set is used to test the precision and generalization capability of the model.
Taking the Shendong mining area as an example, Gaofen-6 (high-resolution six-number) remote sensing images of the Shendong mining area from 2019 (phase T1) and 2020 (phase T2) are collected, preprocessed and made into remote sensing image sample data of phases T1 and T2, so that the high-resolution remote sensing images meet the input format specified for the multi-task convolutional neural network model, as follows: the boundary of the open stope is measured and positioned by field investigation to obtain boundary positioning data that facilitates vector extraction of the boundary; Gaofen-6 remote sensing images of the Shendong mining area from 2019 and 2020 are collected or downloaded and preprocessed, the preprocessing including radiometric calibration, atmospheric correction, orthorectification, image fusion, study-area clipping and the like; geographic information software (such as ArcMap) is used to label the open stope spatial extent and the inter-annual change region in the 2019 and 2020 remote sensing images and to convert the labels into sample raster images; the remote sensing image data and sample raster images of the Shendong mining area for 2019 and 2020 (i.e., the remote sensing image sample of phase T1 and the sample raster image of phase T1 forming the T1 image sample data, and those of phase T2 forming the T2 image sample data) are cut with ArcMap into 256 × 256-pixel image blocks over a number of identical regions.
The cut data are divided into a training set, a validation set and a test set in the ratio 6:2:2; the training set and validation set are used to train the model, and the test set is used to test the precision and generalization capability of the final model. Only the training set is subjected to data enhancement to strengthen the generalization capability of the model; the data enhancement comprises flipping, translation, scale change, contrast change and Gaussian noise.
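The training-set augmentation described above can be sketched as below. This is an illustrative sketch only: the flip probability, shift range and noise scale are our assumptions, and the same geometric transform is applied to the image block and its label raster so they stay aligned.

```python
# Illustrative data-augmentation sketch for training blocks:
# random flip, small translation, and Gaussian noise (image only).
import numpy as np

rng = np.random.default_rng(0)

def augment(img, label):
    """Apply the same random flip/shift to an image block and its label raster."""
    if rng.random() < 0.5:                       # horizontal flip (both rasters)
        img, label = img[..., ::-1], label[..., ::-1]
    shift = int(rng.integers(-8, 9))             # small translation along width
    img = np.roll(img, shift, axis=-1)
    label = np.roll(label, shift, axis=-1)
    img = img + rng.normal(0, 0.01, img.shape)   # Gaussian noise on the image only
    return img, label

img, lab = augment(np.zeros((3, 256, 256)), np.zeros((256, 256)))
```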
S2, the encoding process of the change detection network branch performs differential fusion on feature maps E_t1-2, E_t1-3, E_t1-4, E_t1-5 and feature maps E_t2-2, E_t2-3, E_t2-4, E_t2-5 to obtain coding feature maps E_t-2, E_t-3, E_t-4, E_t-5; in this embodiment the differential fusion preferably adopts absolute-difference fusion (i.e., a feature absolute-difference fusion module): E_t-i = |E_t1-i − E_t2-i|, i = 2, 3, 4, 5. The decoding process of the change detection network branch applies skip connections and feature fusion to coding feature maps E_t-2, E_t-3 and E_t-4 to obtain feature maps D_t-5, D_t-4, D_t-3 and D_t-2. Preferably, the decoding process of the change detection network branch is as follows: coding feature map E_t-5 is first convolved to obtain feature map D_t-5; D_t-5 is up-sampled, skip-connected with E_t-4 and feature-fused to obtain D_t-4; D_t-4 is up-sampled, skip-connected with E_t-3 and feature-fused to obtain D_t-3; and D_t-3 is up-sampled, skip-connected with E_t-2 and feature-fused to obtain D_t-2.
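The absolute-difference fusion E_t-i = |E_t1-i − E_t2-i| is a one-line elementwise operation; the sketch below uses tiny stand-in feature maps rather than real VGG-16 features.

```python
# Absolute-difference fusion of two co-registered, same-shaped feature maps.
import numpy as np

def abs_diff_fuse(e_t1, e_t2):
    """Fuse two-phase encoder features into a change-coding feature map."""
    return np.abs(e_t1 - e_t2)

e1 = np.array([[1.0, 2.0], [3.0, 4.0]])
e2 = np.array([[4.0, 2.0], [1.0, 0.0]])
fused = abs_diff_fuse(e1, e2)  # large where the two phases differ, 0 where unchanged
```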
S3, the change detection network branch performs differential fusion on feature map D_t1-2 and feature map D_t2-2 to obtain feature map D_t1-t2; in this embodiment the differential fusion preferably adopts absolute-difference fusion, i.e., D_t1-t2 = |D_t1-2 − D_t2-2|. The change detection network branch processes feature map D_t1-t2 with a Convolutional Block Attention Module (CBAM) to obtain the channel attention weight and the spatial attention weight. As shown in FIG. 4, the convolutional attention module comprises a channel attention module and a spatial attention module; feature map D_t1-t2 (its shape is expressed as C × H × W, where C is the number of channels, H is the number of rows and W is the number of columns of the feature map) is input into the convolutional attention module, the channel attention weight is obtained through the channel attention module, and the spatial attention weight is obtained through the spatial attention module. As shown in FIG. 5, the channel attention module comprises two parallel pooling layers (adaptive global max pooling and adaptive global average pooling), a multi-layer perceptron and a Sigmoid activation module: feature map D_t1-t2 generates two feature maps through the two parallel pooling layers, each map is passed through the same multi-layer perceptron, the outputs are added, and the channel attention weight is then obtained through the Sigmoid activation. As shown in FIG. 6, the spatial attention module comprises two parallel pooling layers (adaptive global max pooling and adaptive global average pooling), a channel concatenation module, a convolution module and a Sigmoid activation module: feature map D_t1-t2 generates two feature maps through the two parallel pooling layers, and the spatial attention weight is obtained by channel concatenation, convolution and Sigmoid activation of the two maps.
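The two CBAM-style attention branches can be sketched in NumPy as below. This is a minimal sketch: the tiny shared-MLP weights are illustrative, and a simple addition stands in for the learned 7 × 7 convolution over the concatenated channel-wise poolings of the spatial branch.

```python
# Sketch of CBAM-style channel and spatial attention weights for a (C, H, W) map.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(f, w1, w2):
    """f: (C, H, W). Returns per-channel weights of shape (C, 1, 1)."""
    mx = f.max(axis=(1, 2))                        # global max pooling -> (C,)
    av = f.mean(axis=(1, 2))                       # global average pooling -> (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # shared two-layer perceptron
    return sigmoid(mlp(mx) + mlp(av)).reshape(-1, 1, 1)

def spatial_attention(f):
    """f: (C, H, W). Returns per-pixel weights of shape (1, H, W)."""
    mx = f.max(axis=0, keepdims=True)              # channel-wise max pooling
    av = f.mean(axis=0, keepdims=True)             # channel-wise average pooling
    return sigmoid(mx + av)                        # stand-in for the learned conv

C = 4
f = np.random.default_rng(1).normal(size=(C, 8, 8))   # stand-in for D_t1-t2
w1 = np.eye(C // 2, C)                                # toy MLP weights (C -> C/2 -> C)
w2 = np.eye(C, C // 2)
ca = channel_attention(f, w1, w2)
sa = spatial_attention(f)
d_prime = f * ca * sa    # step S4: both weights applied to the decoded map
```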
S4, feature map D_t-2 is multiplied by the channel attention weight and the spatial attention weight respectively to obtain feature map D'_t-2 with enhanced information in the channel and spatial directions, and the change detection result of the study area, corresponding to the predicted change detection result, is then obtained through an up-sampling operation.
In this embodiment, as a further preferred implementation: the skip connections further include processing by an edge information enhancement module, because ground objects in the coal mining area are complexly distributed, the surface is highly heterogeneous, and the repeated convolution and pooling operations of the first and second identification network branches easily blur ground-object boundary information in the coding features. As shown in FIG. 3, the Edge Information Enhancement Module (EIEM) comprises channel-dimension pooling and Sobel convolution. The module first compresses the channels of the input feature map through channel-dimension pooling (for example, a C × H × W feature map is pooled to 1 × H × W); the Sobel convolution comprises a horizontal operator Sobel_x and a vertical operator Sobel_y; the channel-compressed feature map is Sobel-convolved to obtain edge information in the horizontal and vertical directions, the two results are added, and the sum is then multiplied by the original input feature map to obtain the feature map with enhanced edge information. In the invention, the skip connections of the first identification network branch, the second identification network branch and the change detection network branch are all processed by the edge information enhancement module, so that the feature processing of the three branches is more refined and the edge identification and change-edge detection capabilities of the subsequent feature maps are strengthened.
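The edge enhancement steps above can be sketched as below. This is a minimal sketch under our own assumptions: mean pooling is used for the channel-dimension pooling, and a naive zero-padded correlation stands in for the Sobel convolution layer.

```python
# Sketch of the edge information enhancement module:
# channel pooling -> Sobel_x + Sobel_y -> product with the input map.
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, k):
    """Naive 'same' 2-D correlation with zero padding (3x3 kernel)."""
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (p[i:i + 3, j:j + 3] * k).sum()
    return out

def edge_enhance(f):
    """f: (C, H, W) feature map -> edge-enhanced map of the same shape."""
    pooled = f.mean(axis=0)                                  # channel-dimension pooling -> (H, W)
    edges = conv2d(pooled, SOBEL_X) + conv2d(pooled, SOBEL_Y)  # add horizontal + vertical edges
    return f * edges                                         # product with the original input

out = edge_enhance(np.ones((2, 6, 6)))  # constant map: interior edge response is zero
```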
In some embodiments, the training set is used to train the multi-task convolutional neural network model, and the verification set is used to check the model accuracy after each training iteration; the loss function used by the multi-task convolutional neural network model during iterative training is composed jointly of a contrast loss function L CT and a cross-entropy loss function L CE, calculated as follows:
L = ω1·L CT + ω2·L CE, where ω1 represents the weight of the contrast loss function L CT, ω2 represents the weight of the cross-entropy loss function L CE, and L is the total loss value of the multi-task convolutional neural network model. The multi-task convolutional neural network model applies the contrast loss function L CT as a loss constraint during encoding. As shown in fig. 1, the first recognition network branch applies a 1×1 convolution to the last four hierarchical feature maps of the T1 image data (namely the feature maps E t1-2, E t1-3, E t1-4, E t1-5) to extract the feature maps E′ t1-2, E′ t1-3, E′ t1-4, E′ t1-5; the second recognition network branch applies a 1×1 convolution to the last four hierarchical feature maps of the T2 image data (namely the feature maps E t2-2, E t2-3, E t2-4, E t2-5) to extract the feature maps E′ t2-2, E′ t2-3, E′ t2-4, E′ t2-5. The feature maps E′ t1-2, E′ t1-3, E′ t1-4, E′ t1-5 and the feature maps E′ t2-2, E′ t2-3, E′ t2-4, E′ t2-5 correspond in sequence as feature pairs (E′ t1-2 and E′ t2-2 form a feature pair, E′ t1-3 and E′ t2-3 form a feature pair, E′ t1-4 and E′ t2-4 form a feature pair, E′ t1-5 and E′ t2-5 form a feature pair), and the pairs are supervised by the change detection true result corresponding to the T1 image data and the T2 image data (for model training, for example, the change detection true result corresponds to the change detection result annotated on the remote sensing image sample data of the two phases T1 and T2 of the research area).
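The weighted combination of the two losses can be expressed directly; the weight values and the two loss values below are placeholders, not values from this embodiment:

```python
# Sketch of the combined loss L = ω1·L_CT + ω2·L_CE described above;
# w1, w2 and the example loss values are made-up placeholders.
def total_loss(l_ct, l_ce, w1=0.5, w2=1.0):
    # weighted sum of the contrast loss and the cross-entropy loss
    return w1 * l_ct + w2 * l_ce

print(total_loss(0.4, 0.6))  # 0.8
```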
The formula of the contrast loss function L CT constraining the three network branches of the multi-task convolutional neural network model during encoding is as follows:
L CT = (1/(2N)) Σ(n=1 to N) [ y n·d² + (1 − y n)·max(margin − d, 0)² ]
where d represents the Euclidean distance between the two features forming the feature pair n (for example, the Euclidean distance between the feature map E′ t1-2 and the feature map E′ t2-2); y n = 1 when the two features in feature pair n are similar (the corresponding change detection true result is that the open stope has not changed between the two phases), otherwise y n = 0 (the corresponding change detection true result is that the open stope has changed between the two phases); margin is a set threshold, and N is the total number of feature pairs.
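A sketch of this contrast loss over a batch of feature pairs; the feature dimensions, the pooled-feature representation and the margin value are illustrative assumptions:

```python
import numpy as np

def contrastive_loss(feats_t1, feats_t2, labels, margin=2.0):
    # feats_*: (N, dim) features of each pair; labels: (N,) with 1 = unchanged (similar)
    d = np.linalg.norm(feats_t1 - feats_t2, axis=1)                   # Euclidean distance per pair
    loss = labels * d**2 + (1 - labels) * np.maximum(margin - d, 0.0)**2
    return loss.mean() / 2.0                                          # 1/(2N) normalization

a = np.zeros((2, 4)); b = np.ones((2, 4))   # each pair has distance d = 2
y = np.array([1.0, 0.0])                    # first pair unchanged, second changed
print(contrastive_loss(a, b, y))  # 1.0
```

With margin = 2, the changed pair contributes nothing (d already reaches the margin) while the unchanged pair contributes d² = 4, so the averaged, halved loss is 1.0.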
The multi-task convolutional neural network model applies the cross-entropy loss function L CE as a loss constraint on the output results. As shown in fig. 1, the prediction result and the true result of the T1 image data form one group of loss-function inputs, the prediction result and the true result of the T2 image data form a second group, and the change detection prediction result and the change detection true result output by the multi-task convolutional neural network model form a third group; the cross-entropy loss function L CE constrains each of them. The cross-entropy loss function L CE is formulated as follows:
L CE = −(1/p) Σ(i=1 to p) [ ŷ i·log(y i) + (1 − ŷ i)·log(1 − y i) ]
where ŷ i is the true category of pixel i (the true result corresponding to the T1 image data, the true result corresponding to the T2 image data, or the change detection true result), y i is the pixel result predicted by the model (the prediction result corresponding to the T1 image data, the prediction result corresponding to the T2 image data, or the change detection prediction result), and p is the total number of pixels.
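A pixel-wise sketch of this cross-entropy loss; the probability values are made-up illustrations, and the clipping epsilon is an assumption added for numerical safety:

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-7):
    # y_true: true class per pixel (0 or 1); y_pred: predicted probability per pixel
    y_pred = np.clip(y_pred, eps, 1 - eps)          # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

t = np.array([1.0, 0.0, 1.0, 0.0])   # true pixel categories
p = np.array([0.9, 0.1, 0.8, 0.2])   # predicted probabilities
print(round(cross_entropy(t, p), 4))  # 0.1643
```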
The multi-task convolutional neural network model of the invention sets hyperparameters such as the number of training iterations, the learning rate, the batch size and the optimizer, and performs repeated iterative training; in each training iteration, a gradient descent algorithm is used to reduce the model loss value while optimizing and updating the inter-layer connection weights of the network.
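The iterative weight update can be illustrated with a one-parameter toy loss; the quadratic loss, learning rate and iteration count below are placeholders, not the embodiment's hyperparameters:

```python
# Toy illustration of iterative training: each iteration computes the loss
# gradient and updates the weight with a fixed learning rate (plain SGD).
def train(w, lr=0.1, iterations=50):
    for _ in range(iterations):
        grad = 2 * (w - 3.0)     # gradient of the toy loss (w - 3)^2
        w = w - lr * grad        # gradient-descent weight update
    return w

print(round(train(0.0), 3))  # 3.0
```

The weight converges to the loss minimum at w = 3, which is what the repeated gradient-descent updates in the training loop accomplish for the real network weights.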
In this embodiment, taking the eastern mining area as an example, the network parameter settings of the multitasking convolutional neural network model are shown in table 1, and the server performance is shown in table 2.
TABLE 1
TABLE 2
The multi-task convolutional neural network model is trained with the training set, and the model accuracy is checked after each training iteration with the verification set; in the experiment, the model accuracy is evaluated with three classification indices: Precision, Recall and F1-score. After multiple training iterations, the multi-task convolutional neural network model with the highest accuracy and best visual effect is selected, and the remote sensing image data of the two phases T1 and T2 of the research area to be detected are input into the trained multi-task convolutional neural network model for open stope identification and change detection.
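The three evaluation indices can be computed from pixel-level confusion counts; the true-positive, false-positive and false-negative counts below are made-up numbers for illustration:

```python
def prf(tp, fp, fn):
    # Precision, Recall and F1-score from confusion counts
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = prf(tp=80, fp=20, fn=20)
print(p, r, round(f1, 2))  # 0.8 0.8 0.8
```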
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (7)

1. An open stope identification and change detection method based on a multitasking convolutional neural network is characterized in that: the method comprises the following steps:
s1, determining a research area, and collecting remote sensing image data of two time phases of the research area T1 and the research area T2; constructing a multi-task convolutional neural network model, wherein the multi-task convolutional neural network model comprises a first identification network branch, a second identification network branch and a change detection network branch;
The encoding process of the first recognition network branch uses a VGG-16 network to extract five hierarchical feature maps of the T1 image data, denoted E t1-1, E t1-2, E t1-3, E t1-4, E t1-5; the decoding process of the first recognition network branch obtains the feature maps D t1-5, D t1-4, D t1-3, D t1-2 from these feature maps through skip connection and feature fusion, and then obtains the identification result of the T1 image data through an up-sampling operation; the decoding process of the first recognition network branch is as follows: first, the feature map E t1-5 is convolved to obtain the feature map D t1-5; the feature map D t1-5 is then up-sampled and fused with the feature map E t1-4 through skip connection to obtain the feature map D t1-4; the feature map D t1-4 is then up-sampled and fused with the feature map E t1-3 through skip connection to obtain the feature map D t1-3; the feature map D t1-3 is then up-sampled and fused with the feature map E t1-2 through skip connection to obtain the feature map D t1-2; the identification result of the T1 image data is obtained by performing an up-sampling operation on the feature map D t1-2;
The encoding process of the second recognition network branch uses a VGG-16 network to extract five hierarchical feature maps of the T2 image data, denoted E t2-1, E t2-2, E t2-3, E t2-4, E t2-5; the decoding process of the second recognition network branch obtains the feature maps D t2-5, D t2-4, D t2-3, D t2-2 from these feature maps through skip connection and feature fusion, and then obtains the identification result of the T2 image data through an up-sampling operation; the decoding process of the second recognition network branch is as follows: first, the feature map E t2-5 is convolved to obtain the feature map D t2-5; the feature map D t2-5 is then up-sampled and fused with the feature map E t2-4 through skip connection to obtain the feature map D t2-4; the feature map D t2-4 is then up-sampled and fused with the feature map E t2-3 through skip connection to obtain the feature map D t2-3; the feature map D t2-3 is then up-sampled and fused with the feature map E t2-2 through skip connection to obtain the feature map D t2-2; the identification result of the T2 image data is obtained by performing an up-sampling operation on the feature map D t2-2;
S2, the encoding process of the change detection network branch performs differential fusion on the feature maps E t1-1, E t1-2, E t1-3, E t1-4, E t1-5 and the feature maps E t2-1, E t2-2, E t2-3, E t2-4, E t2-5 to obtain the coding feature maps E t-1, E t-2, E t-3, E t-4, E t-5; the decoding process of the change detection network branch obtains the feature maps D t-5, D t-4, D t-3, D t-2 from the coding feature maps through skip connection and feature fusion; the decoding process of the change detection network branch is as follows: first, the coding feature map E t-5 is convolved to obtain the feature map D t-5; the feature map D t-5 is then up-sampled and fused with the coding feature map E t-4 through skip connection to obtain the feature map D t-4; the feature map D t-4 is then up-sampled and fused with the coding feature map E t-3 through skip connection to obtain the feature map D t-3; the feature map D t-3 is then up-sampled and fused with the coding feature map E t-2 through skip connection to obtain the feature map D t-2;
S3, the change detection network branch performs differential fusion on the feature map D t1-2 and the feature map D t2-2 to obtain the feature map D a-t2; the change detection network branch processes the feature map D a-t2 with a convolutional attention module to obtain a channel attention weight and a spatial attention weight;
S4, multiplying the feature map D t-2 by the channel attention weight and the spatial attention weight respectively to obtain a feature map D′ t-2 with enhanced information in the channel and spatial directions, and then obtaining the change detection result of the research area through an up-sampling operation.
2. The open stope identification and change detection method based on a multitasking convolutional neural network of claim 1, wherein: in step S1, the multitasking convolutional neural network model performs model training using the following method:
S11, preparing remote sensing image sample data of the two phases T1 and T2 of the research area, comprising: collecting remote sensing image samples of the two phases T1 and T2 of the research area; extracting the corresponding boundary vectors and performing vector-to-raster conversion to convert the remote sensing image samples of the two phases into sample raster images; the remote sensing image sample of phase T1 and its sample raster image correspondingly form the T1 image sample data, and the remote sensing image sample of phase T2 and its sample raster image correspondingly form the T2 image sample data;
S12, cutting the T1 image sample data and the T2 image sample data into image blocks respectively, dividing the image blocks into a training set, a verification set and a test set according to the ratio of 6:2:2, wherein the training set and the verification set are used for training a model, and the test set is used for testing the precision and generalization capability of the model.
3. The open stope identification and change detection method based on a multitasking convolutional neural network of claim 2, wherein: the remote sensing image data of the two phases T1 and T2 of the research area in step S1 and the remote sensing image samples in step S11 are subjected to image preprocessing, wherein the image preprocessing comprises radiometric calibration, atmospheric correction, orthographic correction and/or image fusion.
4. The open stope identification and change detection method based on a multitasking convolutional neural network of claim 1, wherein: the skip connection further comprises processing by an edge information enhancement module; the edge information enhancement module comprises channel-dimension pooling and Sobel convolution; the edge information enhancement module performs channel compression on the input feature map through the channel-dimension pooling; the Sobel convolution comprises an operator Sobelx in the horizontal direction and an operator Sobely in the vertical direction; the channel-compressed feature map is subjected to Sobel convolution to obtain edge information in the horizontal and vertical directions, which is combined by an addition operation, and the result is multiplied with the original input feature map to obtain the feature map with enhanced edge information.
5. The open stope identification and change detection method based on a multitasking convolutional neural network of claim 1, wherein: the loss function of the multitasking convolutional neural network model is composed jointly of a contrast loss function L CT and a cross-entropy loss function L CE, and is calculated as follows: L = ω1·L CT + ω2·L CE, where ω1 represents the weight of the contrast loss function L CT, ω2 represents the weight of the cross-entropy loss function L CE, and L is the total loss value of the multitasking convolutional neural network model.
6. The open stope identification and change detection method based on a multitasking convolutional neural network of claim 5, wherein: the contrast loss function L CT is formulated as follows: L CT = (1/(2N)) Σ(n=1 to N) [ y n·d² + (1 − y n)·max(margin − d, 0)² ], where d represents the Euclidean distance between the two features forming the feature pair n; the first recognition network branch applies a 1×1 convolution to the last four hierarchical feature maps of the T1 image data to extract the feature maps E′ t1-2, E′ t1-3, E′ t1-4, E′ t1-5; the second recognition network branch applies a 1×1 convolution to the last four hierarchical feature maps of the T2 image data to extract the feature maps E′ t2-2, E′ t2-3, E′ t2-4, E′ t2-5; the feature maps E′ t1-2, E′ t1-3, E′ t1-4, E′ t1-5 and the feature maps E′ t2-2, E′ t2-3, E′ t2-4, E′ t2-5 correspond in sequence as feature pairs; y n = 1 when the two features in feature pair n are similar, otherwise y n = 0; margin is a set threshold, and N is the total number of feature pairs.
7. The open stope identification and change detection method based on a multitasking convolutional neural network of claim 5, wherein: the cross-entropy loss function L CE is formulated as follows: L CE = −(1/p) Σ(i=1 to p) [ ŷ i·log(y i) + (1 − ŷ i)·log(1 − y i) ], where ŷ i is the true category of pixel i, being the true result corresponding to the T1 image data, the true result of the T2 image data or the change detection true result; y i is the pixel result predicted by the model, corresponding to the prediction result of the T1 image data, the prediction result of the T2 image data or the change detection prediction result; and p is the total number of pixels.
CN202311359531.5A 2023-10-19 2023-10-19 Open stope identification and change detection method based on multitasking convolutional neural network Active CN117671437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311359531.5A CN117671437B (en) 2023-10-19 2023-10-19 Open stope identification and change detection method based on multitasking convolutional neural network

Publications (2)

Publication Number Publication Date
CN117671437A CN117671437A (en) 2024-03-08
CN117671437B true CN117671437B (en) 2024-06-18

Family

ID=90075968


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant