CN111259955B - Reliable quality inspection method and system for geographical national condition monitoring result - Google Patents


Info

Publication number
CN111259955B
CN111259955B (application CN202010040770.4A)
Authority
CN
China
Prior art keywords
data
neural network
convolutional neural
deep convolutional
network model
Prior art date
Legal status
Active
Application number
CN202010040770.4A
Other languages
Chinese (zh)
Other versions
CN111259955A (en)
Inventor
沈晶
张继贤
张莉
韩文立
章力博
葛娟
卢遥
周进
Current Assignee
National Surveying And Mapping Product Quality Inspection And Testing Center
Original Assignee
National Surveying And Mapping Product Quality Inspection And Testing Center
Priority date
Filing date
Publication date
Application filed by National Surveying And Mapping Product Quality Inspection And Testing Center
Priority to CN202010040770.4A
Publication of CN111259955A
Application granted
Publication of CN111259955B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds

Abstract

The invention discloses a reliability quality inspection method and system for geographical national condition monitoring results. First, a sample data set is established from existing orthophoto data T and mask data, the sample data are augmented, and on this basis a deep convolutional neural network (CNN) model is built, enhanced, and fused. Second, the new time phase image T2 is subjected to super-pixel segmentation to obtain characteristic pattern spots, which are mapped onto the old time phase earth surface coverage data to obtain the geometric information of the change pattern spots. Third, the new time phase image T2 data are input into the trained deep convolutional neural network model to obtain the semantic information of the change pattern spots, and thereby the change information of the earth surface coverage. Finally, the change information is compared with the earth surface coverage data under inspection to obtain suspected error regions, which are verified against the old and new time phase images T1 and T2 to obtain the final reliability quality inspection result of the earth surface coverage change. The invention enables rapid inspection of the reliability of remote sensing image change detection results, can be used for quality inspection and acceptance of geographical national condition monitoring projects, and greatly improves quality inspection efficiency.

Description

Reliable quality inspection method and system for geographical national condition monitoring result
Technical Field
The invention relates to the field of geographic information technology, and in particular to a reliability quality inspection method and system for geographical national condition monitoring results.
Background
Geographical national condition monitoring is a new practice and important task through which the surveying, mapping, and geographic information department serves economic, social, and scientific development in the new period. Earth surface coverage data are essential basic information for geographical national condition monitoring, global change research, ecological resource management, and related work. At present, earth surface coverage change detection is widely applied in land cover and land use monitoring, urban development research, resource management, disaster assessment, ecosystem monitoring, military applications, and other fields; change detection is also a key technology for earth surface coverage updating, and the earth surface coverage used in geographical national condition monitoring has a large and complex classification system and standard. Large-area earth surface coverage change detection involves a complex classification system, diverse change types, large time phase differences between images, and complex, variable textures and structures of ground features, all of which increase the difficulty of earth surface coverage change detection and reliability research.
The existing reliability inspection method for earth surface coverage change detection relies mainly on manual comparison: a quality inspector overlays earth surface coverage classification information with the image source, sample data, thematic information, and other references for manual comparison and inspection. Pattern spots whose classification cannot be confirmed are marked on the map and then checked in the field. Deep learning models, at the forefront of machine learning, can to some extent simulate the human brain's cognition and judgment of images, and deep networks can express abstract hidden features.
The existing quality inspection methods for geographical national condition monitoring results have many problems. Inspection results are mainly compared manually, so inspection efficiency is low and results depend heavily on the subjective knowledge and accumulated experience of quality inspectors, making their consistency and reliability low; field checks are limited by traffic, weather, and other conditions, so it is difficult to reach the locations of all suspected error pattern spots. In summary, the existing quality inspection methods for earth surface coverage classification can hardly satisfy the quality requirements for earth surface coverage change detection precision, classification accuracy, and integrity under the new normal, so it is urgent to study an efficient and highly reliable quality inspection method for earth surface coverage change detection.
Disclosure of Invention
The invention aims to provide a reliability quality inspection method for geographical national condition monitoring results. The method solves, to a certain extent, the problems of poor reliability, low efficiency, and low automation in the reliability inspection of earth surface coverage change detection.
The aim of the invention is realized by the following technical scheme:
the invention provides a reliability quality inspection method for geographical national condition monitoring results, comprising the following steps:
s1: rasterizing the existing earth surface coverage data, giving each region class a designated code, and generating mask data;
s2: establishing a sample data set from the orthophoto data and the mask data, and amplifying the sample data in four ways, and on the basis, enhancing and fusing the deep convolutional neural network model;
s3: dividing the new time phase image T2 by using a super-pixel multi-scale dividing method, mapping the divided result onto old time phase earth surface coverage data, and extracting geometric information of an earth surface coverage change area;
s4: semantic marking is carried out on the earth surface coverage change area by using a deep convolutional neural network model;
s5: and (3) superposing the variation pattern spots obtained in the step S4 with the surface coverage data to be detected.
Further, the method also comprises the following steps:
s6: generating a suspected error checking and quality testing report:
and performing geometric and attribute data comparison analysis on the obtained earth surface coverage change detection result and earth surface coverage data to be detected, superposing new and old phase images T2 and T1 with the detection result for verification, and finally counting the missing detection rate and the false detection rate of the change detection to generate a quality inspection report.
Further, the deep convolutional neural network model in the step S2 is built according to the following steps:
1) Sample selection and division stage: the existing orthophoto T and mask data are first divided into a plurality of subareas, each subarea corresponding to one tile; the whole sample set is then divided into a training sample set, a verification sample set, and a test sample set using a tile-based mechanism;
2) Performing data augmentation operation on the training samples;
3) And inputting the training samples subjected to the data augmentation operation into the deep convolutional neural network model, performing forward inference and backward learning, and training the parameter values of the model.
Further, the segmentation mapping strategy of the super-pixel combined area adjacency graph in the step S3 is performed according to the following steps:
s31: the super-pixel multi-scale segmentation method is used for segmenting a new time phase image T2 to obtain a characteristic image spot, and on the basis, a watershed segmentation algorithm with space constraint is used for super-pixel segmentation;
s32: constructing a region adjacency graph; the regional adjacency graph abstracts each super pixel in the initial segmentation result into a node, and the adjacent super pixels, namely the representative nodes are communicated, and then a line segment with weight is used for connecting the communicated nodes;
s33: region merging; and according to the sorting of the merging cost, circulating adjacent areas with minimum merging cost function values until the minimum merging cost function value meets the condition.
Further, the merging cost is calculated according to the following function:
H(m,n) = w1*D_S(m,n) + w2*D_T(m,n) + w3*D_F(m,n)  (2)
wherein C(m,n) represents the merging cost function of the adjacent super pixels;
A_m, A_n represent the areas of super pixels m and n, respectively;
l represents the common boundary length of the adjacent super pixels;
λ represents a shape factor;
H(m,n) represents the heterogeneity of the adjacent super pixels;
w1, w2, w3 represent the weights of spectral heterogeneity, texture heterogeneity, and characteristic factor heterogeneity, respectively;
D_S(m,n), D_T(m,n), D_F(m,n) represent spectral heterogeneity, texture heterogeneity, and characteristic factor heterogeneity, respectively; symbols with subscripts f and a represent the characteristic values of the front and rear time phases, respectively.
Further, the deep convolutional neural network model in the step S4 adopts a rotation transformation data enhancement method, and the final prediction result is obtained by fusion; the optimal configuration scheme is obtained by comparing the marking precision at different rotation intervals; the specific steps are as follows:
inputting the existing orthophoto T and mask data; rotating the samples at different rotation intervals; inputting them into the deep convolutional neural network model for training; generating semantic marking results for the different rotation intervals; comparing the results to determine the rotation interval of the images; rotating all samples at the selected interval; and enhancing the deep convolutional neural network model.
Further, the deep convolutional neural network model in the step S4 adopts an aspect ratio transformation data enhancement method, and the final prediction result is obtained by fusion; the specific steps are as follows:
inputting the existing orthophoto T and mask data; randomly selecting a scaling scale in each of the two directions from the set [0.75, 1.0, 1.25]; the scale combinations form 9 augmented images per input image; and enhancing the deep convolutional neural network model and fusing to obtain the final prediction result.
Further, the deep convolutional neural network model in step S4 adopts the following multi-dataset cross-scene learning, with fusion to obtain the final prediction result:
inputting the existing orthophoto T and mask data; inputting cross-scene image data and resampling it to the same resolution as the original training set; inputting the processed cross-scene image data into the deep neural network model for training; then feeding the result into the enhanced deep convolutional neural network model; and fusing to obtain the final prediction result.
Further, the deep convolutional neural network model in the step S4 adopts the following multi-source data enhancement method and performs fusion to obtain the final prediction result:
S44: acquiring the R, G, B three channels of the new time phase image T2 data; converting them, together with DSM and NDSM, into five-channel data; and fusing at the input layer of the deep convolutional neural network model to obtain the final prediction result.
The invention also provides a reliable quality inspection system for the geographical national condition monitoring result, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes the following steps when executing the program:
s1: rasterizing the existing earth surface coverage data, giving each region class a designated code, and generating mask data;
s2: establishing a sample data set from the orthophoto data and the mask data, and amplifying the sample data in four ways, and on the basis, enhancing and fusing the deep convolutional neural network model;
s3: dividing the new time phase image T2 by using a super-pixel multi-scale dividing method, mapping the divided result onto old time phase earth surface coverage data, and extracting geometric information of an earth surface coverage change area;
s4: semantic marking is carried out on the earth surface coverage change area by using a deep convolutional neural network model;
s5: and (3) superposing the variation pattern spots obtained in the step S4 with the surface coverage data to be detected.
S6: generating the suspected-error check and quality inspection report: superposing the new and old time phase images T2 and T1 to verify the earth surface coverage change detection result, and finally computing the missed detection rate and the false detection rate of the change detection.
Due to the adoption of the technical scheme, the invention has the following advantages:
the new method for quality inspection of the geographical national condition monitoring achievements is characterized in that a sample data set is established through rasterization and tiling treatment by the existing historical orthographic images and corresponding earth surface coverage data. Then, a super-pixel image is obtained by dividing a new time phase image by utilizing a super-pixel combined area adjacent image, the super-pixel image is mapped to old time phase surface coverage data to obtain a change image spot, meanwhile, a deep convolutional neural network model (CNN) is established, the deep convolutional neural network model is trained by utilizing a sample data set, and on the basis, the enhancement of the deep convolutional neural network model is realized by the augmentation of the sample data set; and inputting the new time phase image data T2 into the model to obtain the semantic mark of the change pattern. Finally, the obtained earth surface coverage change information is compared with the detected earth surface coverage data to obtain a suspected error region, and the suspected error region is subjected to superposition verification with front and back time phase images T1 and T2 to obtain a final quality inspection result. The method can realize the rapid inspection of the change detection result of the remote sensing image, has high efficiency if the change misleakage information exists, can be used for quality inspection and acceptance of the geographic national condition monitoring project, and greatly improves the quality inspection efficiency.
A sample data set is generated from the existing orthophotos and earth surface coverage data, and the samples are augmented in four ways, so that an enhanced deep convolutional neural network model with higher classification accuracy is constructed, providing high-precision semantic marks for earth surface coverage change detection and finally forming an automatic reliability quality inspection method for earth surface coverage change detection in high-resolution remote sensing images. These research results solve, to a certain extent, the problems of poor reliability, low efficiency, and low automation in the reliability inspection of earth surface coverage change detection.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objects and other advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out in the specification.
Drawings
The drawings of the present invention are described below.
FIG. 1 is a flow chart of quality inspection for earth surface coverage variation.
FIG. 2 is a diagram of the multi-core hole convolution structure.
FIG. 3 is a flow chart of a semantic labeling algorithm based on a deep convolutional neural network.
FIG. 4 is a flow chart of a method for partitioning a super pixel junction region adjacency graph.
FIG. 5 is a schematic diagram of model enhancement and fusion.
FIG. 6 is a rotational transformation model enhancement flow chart.
FIG. 7 is an enhancement flow chart based on an aspect ratio transformation model.
FIG. 8 is a flow chart of a multi-dataset based cross-scene learning method.
FIG. 9 is a diagram of the data fusion method based on multi-source data.
Detailed Description
The invention is further described below with reference to the drawings and examples.
As shown in fig. 1, the new reliability quality inspection method for geographical national condition monitoring results provided by this embodiment applies a deep convolutional neural network model to the quality inspection of earth surface coverage change. It establishes an augmented sample data set based on orthophotos and mask data, builds a deep convolutional neural network model for semantic marking of change regions, performs super-pixel segmentation on the new time phase image T2 to obtain characteristic pattern spots, and maps these onto the old time phase earth surface coverage data to obtain the geometric information of the change regions, finally forming an automatic quality inspection method for geographical national condition monitoring results based on deep convolutional neural networks. The research result not only improves the theoretical system of earth surface coverage quality inspection but also has great practical significance for improving quality inspection efficiency, innovating quality inspection methods, and guaranteeing the quality of earth surface coverage products. The method specifically comprises the following steps, as shown in fig. 1:
s1: sample set construction:
the method comprises the steps of rasterizing existing earth surface coverage data, giving each region an appointed code, generating mask data, performing tiling on the orthophoto data and the mask data, dividing the orthophoto data and the mask data into a plurality of subareas, wherein each subarea corresponds to one tile, and dividing the whole sample set into a training sample set, a verification sample set and a test sample set by adopting a tile-based mechanism.
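The tiling and tile-based splitting described above can be sketched as follows; the tile size of 256 pixels and the 70/15/15 split ratios are illustrative assumptions, since the patent does not fix these values:

```python
import numpy as np

def make_tiles(image, mask, tile=256):
    # Cut an orthophoto (H, W, C) and its rasterized class-code mask (H, W)
    # into aligned, non-overlapping tiles; edge remainders are dropped.
    h, w = mask.shape
    tiles = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            tiles.append((image[r:r + tile, c:c + tile],
                          mask[r:r + tile, c:c + tile]))
    return tiles

def split_tiles(tiles, ratios=(0.7, 0.15, 0.15), seed=0):
    # Tile-based split into training / verification / test sample sets.
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(tiles))
    n_train = int(ratios[0] * len(tiles))
    n_val = int(ratios[1] * len(tiles))
    train = [tiles[i] for i in order[:n_train]]
    val = [tiles[i] for i in order[n_train:n_train + n_val]]
    test = [tiles[i] for i in order[n_train + n_val:]]
    return train, val, test
```

Shuffling before splitting avoids spatially clustered validation tiles; a production pipeline would also keep georeferencing per tile.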
S2: Deep convolutional neural network construction:
The established training sample set is used to fully exploit the potential of the CNN algorithm and adapt it better to high-resolution remote sensing image data; the VGG-16 and ResNet-101 network structures are tested and optimized in depth. The two basic network models, VGG-16 and ResNet-101, are each enhanced by four different methods, and the enhanced models are fused for the final comprehensive decision.
An ensemble learning strategy of enhancing before fusing is adopted, and integration among models with good diversity is reasonably selected to improve precision during ensemble learning.
As shown in fig. 2, by changing the fully connected layers into convolution layers, any well-designed image classification network can be adapted into an end-to-end fully convolutional network for semantic marking. Based on this criterion, the VGG-16 and ResNet-101 network structures, which perform well and are easy to apply in image classification tasks, are adapted to suit the classification task of high-resolution remote sensing images. To reduce the resolution loss of the FCN at the pooling layers and improve the computational efficiency of the network, a hole algorithm is adopted, and multi-core hole convolution is implemented on this basis.
S21: By adapting a CNN model for image classification into a fully convolutional form, a corresponding fully convolutional network (FCN) can conveniently be obtained for classification. However, after multiple pooling operations in the network, the spatial resolution of the final feature map is severely reduced; for example, for a classical VGG network with 5 pooling operations, the final output feature map resolution is only 1/32 of the input image resolution. Using additional deconvolution layers to counteract the downsampling effect of the pooling layers through nonlinear upsampling requires extra computing resources and memory. For this reason, a hole algorithm is introduced to alleviate the resolution loss caused by pooling. By introducing an additional sampling rate parameter r, the receptive field of the convolution operator can be controlled without increasing the number of weight parameters in the convolution kernel.
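A minimal numpy sketch of the hole (atrous) convolution idea: the sampling rate parameter r spaces the kernel taps apart, enlarging the receptive field without adding weight parameters. The valid-padding, single-channel, stride-1 form below is a simplification of what a real FCN layer computes:

```python
import numpy as np

def dilated_conv2d(x, kernel, rate=1):
    # "Hole" (atrous/dilated) convolution: taps are spaced `rate` pixels
    # apart, so a k x k kernel covers a (k-1)*rate + 1 receptive field
    # with the same number of weights. Valid padding, stride 1.
    kh, kw = kernel.shape
    eh, ew = (kh - 1) * rate + 1, (kw - 1) * rate + 1  # effective extent
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eh:rate, j:j + ew:rate]
            out[i, j] = np.sum(patch * kernel)
    return out
```

With rate=2 a 3x3 kernel sees a 5x5 window, which is exactly the receptive-field control described above.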
S22: Downsampling control strategy. To avoid the downsampling effect of a pooling layer, the hole algorithm is applied to the subsequent convolution layers: the stride of the convolution operation is set to 1, and the sampling rate parameter of the hole algorithm is doubled. If this operation is applied to all pooling layers in the network, a fully convolutional network without downsampling effect is obtained that outputs a full-resolution semantic marking result consistent with the original image. In view of the extremely high computation and memory consumption this entails, the resolution of the final feature map is allowed to decrease via the hole convolution algorithm, but is guaranteed to be no lower than 1/8 of the original resolution. The final result is then upsampled to the original image resolution by a fast bilinear interpolation algorithm.
S23: Training the network model. Thanks to the flexibility of the hole algorithm, the receptive field of each pixel to be marked can conveniently be controlled by adjusting the sampling rate r of a particular hole convolution layer. The category prediction performance of the whole FCN model is further improved by fusing the prediction results of several different receptive fields. Specifically, 4 hole convolution branches with different sampling rate parameters are trained synchronously and finally fused before the softmax layer to improve model performance.
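The branch fusion of step S23 can be sketched as follows: per-class score maps from the parallel hole-convolution branches are summed and passed through a single softmax. Plain summation (rather than, say, learned fusion weights) is an assumption, as are the example sampling rates mentioned in the comment:

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_branches(branch_scores):
    # branch_scores: (n_branches, H, W, n_classes) score maps from the
    # parallel hole-convolution branches (e.g. 4 branches with different
    # sampling rates r). Scores are summed before one shared softmax.
    fused = np.sum(branch_scores, axis=0)
    return softmax(fused, axis=-1)
```

Fusing before the softmax keeps the branches in the same log-score space, so each branch's confidence contributes additively.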
S3: Segmentation mapping strategy of super pixels combined with a region adjacency graph:
The new time phase image T2 is segmented with the super-pixel multi-scale segmentation method to obtain characteristic pattern spots, which are mapped onto the T1 image to obtain the change regions. To keep the segmentation of each time phase image consistent, this embodiment adopts a multi-time-phase image segmentation mapping strategy that directly maps the segmentation result of a single time phase onto the other time phases, avoiding errors caused by segmenting a multi-time-phase composite image.
S31: Exploiting the high spatial resolution of high-resolution remote sensing images, adjacent pixels with similar texture, color, brightness, and other characteristics are grouped into irregular, meaningful pixel blocks: pixels are grouped by the similarity of their features, and a small number of super pixels replace a large number of pixels to express image characteristics. Compared with individual pixels, super pixels greatly accelerate the subsequent processing of remote sensing images, are more malleable, and can yield better segmentation results through further merging. Super-pixel segmentation is performed with a spatially constrained watershed segmentation algorithm to obtain compactly shaped super pixels and generate ground objects with regular boundaries.
S32: Constructing a region adjacency graph. Each super pixel in the initial segmentation result is abstracted into a node; adjacent super pixels are represented by connected nodes, which are joined by a weighted edge. The weight is the merging cost of the adjacent super pixels: the more similar their features, the smaller the merging cost and the more they tend to be merged. A multi-feature-fusion remote sensing image segmentation method is proposed that comprehensively considers the shape features, spectral features, texture features, and characteristic factors of adjacent super pixels; the merging cost function is calculated as follows:
H(m,n) = w1*D_S(m,n) + w2*D_T(m,n) + w3*D_F(m,n)  (2)
wherein:
C(m,n) represents the merging cost function of adjacent super pixels;
A_m, A_n represent the areas of super pixels m and n, respectively;
l represents the common boundary length of the adjacent super pixels;
λ represents a shape factor;
H(m,n) represents the heterogeneity of the adjacent super pixels;
w1, w2, w3 represent the weights of spectral heterogeneity, texture heterogeneity, and characteristic factor heterogeneity, respectively;
D_S(m,n) represents spectral heterogeneity;
D_T(m,n) represents texture heterogeneity;
D_F(m,n) represents characteristic factor heterogeneity;
symbols with subscripts f and a represent the characteristic values of the front and rear time phases, respectively.
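Equation (2) reduces to a weighted sum, as in this minimal sketch; the weight values and the particular distance used for D_S are illustrative assumptions, since the patent leaves them unspecified:

```python
import numpy as np

def heterogeneity(d_s, d_t, d_f, w=(0.5, 0.3, 0.2)):
    # Equation (2): H(m, n) = w1*D_S + w2*D_T + w3*D_F.
    # The weights (0.5, 0.3, 0.2) are illustrative only.
    w1, w2, w3 = w
    return w1 * d_s + w2 * d_t + w3 * d_f

def spectral_heterogeneity(mean_m, mean_n):
    # One plausible D_S: Euclidean distance between the per-band mean
    # vectors of two super pixels (an assumption; the patent does not
    # specify the distance measure).
    return float(np.linalg.norm(np.asarray(mean_m) - np.asarray(mean_n)))
```

Texture and characteristic-factor heterogeneities would be computed analogously from their own feature vectors.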
s33: and (3) region merging, namely, the region merging is the adjacent region with the minimum merging cost function value in a circulating way according to the sorting of the merging cost until the minimum merging cost function value meets the condition.
S4: Enhancement and fusion of the deep convolutional neural network model:
To increase the diversity among CNN models and thereby obtain a good ensemble effect, in addition to fusing two models with different network structures, model enhancement and model differentiation are combined to further exploit the role of diversity in ensemble learning. Specifically, on the basis of the VGG-16 and ResNet-101 network structures, four different enhancement methods are used to improve each of them, and then all the improved models are fused to obtain the final prediction result.
S41: Data broadening based on rotation transformation. The optimal configuration is selected by comparing the marking accuracy at different rotation intervals. Experiments show that different rotation broadening schemes all improve the final semantic marking precision, and that a 30-degree rotation interval gives better marking results. Therefore, with this interval as the reference, each image yields 12 augmented images, and model enhancement is finally achieved through angular augmentation.
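The 30-degree rotation broadening of step S41 can be sketched as follows; the rotation operator itself is left abstract (e.g. an image resampling routine), since the patent does not name one:

```python
def rotation_angles(interval_deg=30):
    # Angles for rotation broadening: a 30-degree interval yields
    # 12 augmented copies per sample.
    return list(range(0, 360, interval_deg))

def augment_by_rotation(sample, rotate_fn, interval_deg=30):
    # `rotate_fn(sample, angle)` is a caller-supplied rotation operator;
    # this sketch only fixes the angle enumeration.
    return [rotate_fn(sample, a) for a in rotation_angles(interval_deg)]
```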
S42: Data broadening based on aspect ratio transformation. To obtain a model with better generalization, the flexibility of scaling is further increased by allowing different scaling in the two directions, i.e., model enhancement through aspect-ratio-transformed data broadening. Consistent with the selection procedure for the angular transformation, experimental comparison of different aspect ratio configurations led to the following scheme: for each image to be transformed, a scaling scale is randomly selected for each direction from the set [0.75, 1.0, 1.25], and the scale combinations in the two directions finally generate 9 augmented images.
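The aspect-ratio broadening of step S42 enumerates the scale combinations like this:

```python
from itertools import product

SCALES = [0.75, 1.0, 1.25]

def scale_pairs(scales=SCALES):
    # All (sx, sy) combinations for aspect-ratio broadening:
    # 3 scales per direction -> 9 augmented images per input.
    return list(product(scales, scales))
```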
S43: a multi-dataset based cross-scene learning method. Cross-scene or cross-database learning has proven to be an effective method in the field of computer vision to significantly improve the generalization ability of CNN models. However, no attention is paid to the semantic marking of remote sensing data. Specifically, besides fine tuning by using the remote sensing data set to be classified on the basis of the network model trained by the visual database as reported in other documents, the remote sensing data sets of other scenes are collected together for cross-scene learning.
S44: a data fusion method based on multi-source data. In addition to the two-phase images of the three wave bands, reference data such as a sample library, topographic map data and the like are fused together to form multi-source data. The two types of data are combined, and the method based on multi-source data fusion is used as another model enhancement mode. Firstly, changing an input layer of an original network structure from 3 channels to 5 channels, correspondingly adjusting the connection relation of a convolution layer which follows the input layer, and then combining the two types of data into 5-channel data as input of a new network structure to carry out subsequent training and testing.
S5: extracting change information: and overlapping the pattern spots segmented by the super pixels with old time phase earth surface coverage data to extract changed earth boundaries, and inputting a new time phase image T2 corresponding to the pattern spots into the enhanced deep convolutional neural network model to obtain semantic marks of the changed areas.
S6: generating a suspected error checking and quality testing report: and comparing and analyzing the obtained change detection result with the detected earth surface coverage data in geometric and attribute data, overlapping the obtained change detection result with the new and old phase images to verify the correctness of suspected error, and finally counting the error detection rate and the like of the change detection to obtain a report of earth surface coverage change detection quality inspection.
As shown in fig. 3, the semantic marking algorithm based on the deep convolutional neural network provided in this embodiment proceeds according to the following steps: an FCN model is established and a multi-kernel hole (atrous) convolution algorithm is adopted. In the figure, Softmax is the normalized exponential function; Sum is an aggregation function; Pooling is a pooling layer; Atrous is a hole convolution layer with sampling interval r; FC is a fully convolutional connection. Multi-source reference data and old-phase earth surface coverage data are input; the input data undergo segmentation and augmentation; the FCN is then trained and used for classification; finally, merging and FCCRF refinement are performed.
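A dependency-free sketch of the hole (atrous) convolution and the multi-branch fusion described above; the 3 × 3 kernel, the example rates (1, 2, 3, 4) standing in for the four branches, and summation as the fusion rule are assumptions for illustration:

```python
import numpy as np

def atrous_conv2d(x, kernel, rate):
    """'Hole' convolution: the 3x3 kernel taps are spaced `rate` pixels
    apart, enlarging the receptive field to (2*rate+1)^2 without adding
    weights. Zero padding keeps the output the same size as x."""
    k = kernel.shape[0]
    pad = rate * (k // 2)
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            di, dj = i * rate, j * rate
            out += kernel[i, j] * xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out

def multi_rate_fusion(x, kernel, rates=(1, 2, 3, 4)):
    """Sum the responses of several atrous branches before the softmax
    layer, mimicking the multi-kernel fusion of the FCN."""
    return sum(atrous_conv2d(x, kernel, r) for r in rates)

x = np.zeros((9, 9)); x[4, 4] = 1.0   # unit impulse
k = np.ones((3, 3))
resp = atrous_conv2d(x, k, rate=2)    # taps land on a 5x5 dilated grid
```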
The deep convolutional neural network model in the step S2 is built according to the following steps:
1) Sample selection and division: the existing groups of orthophotos T and the raster earth surface coverage data corresponding to them are first divided into multiple sub-regions, each sub-region corresponding to one tile; the whole sample set is then divided into a training sample set, a validation sample set and a test sample set using a tile-based mechanism;
2) A data augmentation operation is performed on the training samples;
3) The training samples after data augmentation are input into the deep convolutional neural network model, forward inference and backward learning are performed, and the hyper-parameter values of the model are trained and computed.
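The tile-based splitting in step 1) can be sketched as below; the 70/15/15 proportions and the shuffling seed are assumptions, as the patent does not give the ratios:

```python
import random

def split_tiles(tile_ids, pct=(70, 15, 15), seed=0):
    """Partition tile identifiers into training / validation / test sets.
    A tile-based split keeps all pixels of one sub-region in the same
    set, so validation scores are not inflated by spatial overlap."""
    ids = sorted(tile_ids)
    random.Random(seed).shuffle(ids)   # deterministic with a fixed seed
    n = len(ids)
    n_train = n * pct[0] // 100
    n_val = n * pct[1] // 100
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_tiles(range(100))
```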
As shown in fig. 4, the segmentation method combining super-pixels with a region adjacency graph proceeds according to the following steps:
S31: The new-phase image T2 is segmented by the super-pixel multi-scale segmentation method to obtain feature pattern spots; super-pixel segmentation is performed using a spatially constrained watershed segmentation algorithm;
S32: Constructing a region adjacency graph: each super-pixel in the initial segmentation result is abstracted as a node, adjacent super-pixels, i.e. their representative nodes, are connected, and the connected nodes are joined by weighted line segments;
S33: Region merging: ordered by merging cost, the pair of adjacent regions with the minimum merging cost function value is merged iteratively until the minimum merging cost function value satisfies the termination condition.
The merging cost is calculated according to the following functions:
H(m,n) = w1·DS(m,n) + w2·DT(m,n) + w3·DF(m,n)    (2)
wherein C(m,n) represents the merging cost function of adjacent super-pixels; Am and An represent the areas of super-pixels m and n, respectively; L represents the common boundary length of the adjacent super-pixels; λ represents a shape factor; H(m,n) represents the heterogeneity of adjacent super-pixels; w1, w2, w3 represent the weights of spectral heterogeneity, texture heterogeneity, and feature-factor heterogeneity, respectively; DS(m,n), DT(m,n), DF(m,n) represent the spectral heterogeneity, texture heterogeneity, and feature-factor heterogeneity, respectively; and the symbols subscripted f and a represent the feature values of the preceding and following phases, respectively.
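The heterogeneity term H(m,n) and the greedy minimum-cost merging loop can be sketched as below; the weights, the absolute-difference distances, and the feature-averaging rule for merged regions are illustrative assumptions, and the area/shape terms of the full cost C(m,n) are omitted:

```python
import numpy as np

def heterogeneity(feat_m, feat_n, weights=(0.4, 0.3, 0.3)):
    """H(m,n) = w1*DS + w2*DT + w3*DF, with each D taken here as the
    absolute difference of mean spectral / texture / factor features."""
    w = np.asarray(weights)
    d = np.abs(np.asarray(feat_m) - np.asarray(feat_n))
    return float((w * d).sum())

def merge_regions(features, edges, threshold):
    """Greedy merging on a region adjacency graph: repeatedly merge the
    edge with the lowest heterogeneity until the minimum exceeds the
    threshold. `features` maps region id -> (spectral, texture, factor);
    merged features are averaged (a simplification)."""
    edges = {frozenset(e) for e in edges}
    while edges:
        costs = {e: heterogeneity(*[features[r] for r in e]) for e in edges}
        best = min(costs, key=costs.get)
        if costs[best] > threshold:
            break
        keep, drop = sorted(best)
        features[keep] = tuple((a + b) / 2 for a, b in
                               zip(features[keep], features.pop(drop)))
        edges = {frozenset(keep if r == drop else r for r in e)
                 for e in edges if e != best}
        edges = {e for e in edges if len(e) == 2}  # drop self-edges
    return features

feats = {0: (1.0, 1.0, 1.0), 1: (1.1, 1.0, 1.0), 2: (5.0, 5.0, 5.0)}
result = merge_regions(feats, [(0, 1), (1, 2)], threshold=0.5)
```

Regions 0 and 1 are similar and merge; the very different region 2 survives because its cost exceeds the threshold.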
As shown in fig. 5, the enhancement and fusion of the deep convolutional neural network model in step S4 is performed as follows:
S41: a data augmentation method based on rotation transformation; the optimal configuration scheme is obtained by comparing the semantic marking accuracy at different rotation intervals. As shown in fig. 6, the rotation-transformation model enhancement proceeds as follows: the existing orthophoto T and the corresponding earth surface coverage data are input; the samples are rotated at different rotation intervals; the rotated samples are input into the deep convolutional neural network model for training; semantic marking results for the different rotation intervals are generated; the results are compared to determine the rotation interval; all samples are rotated at the selected interval; and an enhanced deep convolutional neural network model is obtained.
S42: a data augmentation method based on aspect ratio transformation; the image is allowed to be scaled differently in the two directions, and model enhancement is realized through aspect-ratio data augmentation to generate augmented images. As shown in fig. 7, the aspect-ratio-transformation model enhancement proceeds as follows: the existing orthophoto T and the corresponding earth surface coverage data are input; a scaling factor is randomly selected for each of the two directions from the set [0.75, 1.0, 1.25]; the scale combinations form 9 augmented images per input image; and the deep convolutional neural network model is enhanced and fused to obtain the final prediction result.
S43: a cross-scene learning method based on multiple datasets. As shown in fig. 8, the method proceeds as follows: the existing orthophoto T and the corresponding earth surface coverage data are input; cross-scene image data are input and resampled to the same resolution as the original training set; the processed cross-scene image data are input into the deep neural network model for training; finally the model is incorporated into the enhanced deep convolutional neural network model, and fusion yields the final prediction result.
S44: a data fusion method based on multi-source data. As shown in fig. 9, the method proceeds as follows: the R, G, B three channels of the new-phase image T2 data are acquired; they are further converted into R, G, B, DSM, NDSM five-channel data; and the data are fused at the input layer of the deep convolutional neural network model to obtain the final prediction result.
Finally, it is noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that the technical solution may be modified or equivalently substituted without departing from its spirit and scope, and such modifications and substitutions are intended to be covered by the scope of the present invention.

Claims (6)

1. A reliable quality inspection method for geographical national condition monitoring results, characterized in that the method comprises the following steps:
s1: rasterizing the existing earth surface coverage data, giving each region class a designated code, and generating mask data;
s2: establishing a sample data set from the orthophoto data and the mask data, and amplifying the sample data in four ways, and on the basis, enhancing and fusing the deep convolutional neural network model;
s3: dividing the new time phase image T2 by using a super-pixel multi-scale dividing method, mapping the divided result onto old time phase earth surface coverage data, and extracting geometric information of an earth surface coverage change area;
s4: semantic marking is carried out on the earth surface coverage change area by using a deep convolutional neural network model;
s5: superposing the variation pattern spots obtained in the step S4 with the surface coverage data to be detected;
s6: generating a suspected error checking and quality testing report:
performing geometric and attribute data comparison analysis on the obtained earth surface coverage change detection result and earth surface coverage data to be detected, superposing new and old phase images T2 and T1 with the detection result for verification, and finally counting the missing detection rate and the false detection rate of the change detection to generate a quality inspection report;
the deep convolutional neural network model in the step S2 is built according to the following steps:
1) The sample selection and division stage comprises the steps of firstly dividing the existing orthophoto T and mask data into a plurality of subareas, wherein each subarea corresponds to one tile, and then dividing the whole sample set into a training sample set, a verification sample set and a test sample set by adopting a tile-based mechanism;
2) Performing data augmentation operation on the training samples;
3) Inputting the training sample subjected to the data augmentation operation into a deep convolutional neural network model, performing forward reasoning and backward learning, and training the super-parameter value of a calculation model;
the deep convolutional neural network is adapted from the two network structures VGG-16 and ResNet-101 so as to be suitable for the classification task of high-resolution remote sensing images; it adopts a hole algorithm to realize multi-kernel hole convolution, specifically comprising the following steps:
s21: a CNN model for image classification is adapted into a fully convolutional form to conveniently obtain a corresponding fully convolutional network FCN for classification; a hole algorithm is introduced to alleviate the resolution loss caused by pooling operations, and by introducing an additional sampling rate parameter r, the receptive field of the convolution operator is controlled without increasing the number of weight parameters in the convolution kernel;
s22: a down-sampling reduction control strategy is adopted: the stride of the convolution operation is set to 1 and the sampling rate parameter of the hole algorithm is doubled, ensuring by means of the hole convolution algorithm that the reduced resolution is not lower than 1/8 of the original; the final result is up-sampled to the resolution of the original image by a fast bilinear interpolation algorithm;
s23: the sampling rate r of a specific hole convolution layer is adjusted to conveniently control the receptive field of each pixel to be marked, and the class prediction performance of the whole fully convolutional network FCN model is further improved by fusing the prediction results of multiple different receptive fields; specifically, 4 hole convolution branches with different sampling rate parameters are trained synchronously and finally fused before the softmax layer to improve model performance;
the segmentation mapping strategy of the super-pixel combined area adjacency graph in the step S3 is carried out according to the following steps:
s31: dividing the new time phase image T2 by using a super-pixel multi-scale dividing method, and on the basis, performing super-pixel division by using a watershed dividing algorithm with space constraint;
s32: constructing a region adjacency graph: each super-pixel in the initial segmentation result is abstracted as a node, adjacent super-pixels, i.e. their representative nodes, are connected, and the connected nodes are joined by weighted line segments;
s33: region merging: ordered by merging cost, the pair of adjacent regions with the minimum merging cost function value is merged iteratively until the minimum merging cost function value satisfies the termination condition;
the merging cost is calculated according to the following functions:
wherein, C(m,n) represents the merging cost function of the adjacent super-pixels;
Am, An represent the areas of super-pixels m and n, respectively;
L represents the common boundary length of adjacent super-pixels;
λ represents a shape factor;
H(m,n) represents the heterogeneity of adjacent super-pixels;
w1, w2, w3 represent the weights of spectral heterogeneity, texture heterogeneity, and feature-factor heterogeneity, respectively;
DS(m,n), DT(m,n), DF(m,n) represent spectral heterogeneity, texture heterogeneity, and feature-factor heterogeneity, respectively, and the symbols subscripted f and a represent the feature values of the preceding and following phases, respectively.
2. The reliable quality inspection method for geographical national condition monitoring results as recited in claim 1, characterized in that: the deep convolutional neural network model in step S4 adopts the following rotation-transformation data enhancement method and obtains the final prediction result by fusion; the optimal configuration scheme is obtained by comparing the semantic marking accuracy at different rotation intervals; the specific steps are as follows:
inputting existing orthophoto T and mask data; rotating the sample at different rotational intervals; inputting the model into a deep convolutional neural network model for training; generating semantic mark results of different rotation intervals; comparing and determining the rotation interval of the images; rotating all samples at selected intervals; a deep convolutional neural network model is enhanced.
3. The reliable quality inspection method for geographical national condition monitoring results as recited in claim 1, characterized in that: the deep convolutional neural network model in step S4 adopts the following aspect-ratio-transformation data enhancement method and obtains the final prediction result by fusion; the specific steps are as follows:
inputting existing orthophoto T and mask data; randomly selecting a scaling scale from the set [0.75,1.0,1.25] in two directions; each image scale combination forms 9 augmented images; and enhancing the deep convolutional neural network model, and fusing to obtain a final prediction result.
4. The method for reliably detecting the quality of the geographical national condition monitoring result as recited in claim 1, wherein the method comprises the following steps: the deep convolutional neural network model in the step S4 adopts the following multi-data-set cross-scene learning and fusion to obtain a final prediction result:
inputting existing orthophoto T and mask data; inputting cross-scene image data, and resampling to be the same as the resolution of the original training set; inputting the processed cross-scene image data into a deep convolutional neural network model for training; finally, inputting the model into an enhanced deep convolutional neural network model; and fusing to obtain a final prediction result.
5. The method for reliably detecting the quality of the geographical national condition monitoring result as recited in claim 1, wherein the method comprises the following steps: the deep convolutional neural network model in the step S4 adopts the following multi-source data enhancement method and is subjected to fusion to obtain a final prediction result:
acquiring R, G, B three channels of new-time-phase image T2 data; and performing R, G, B, DSM, NDSM five-channel conversion treatment; and fusing the input layer in the deep convolutional neural network model to obtain a final prediction result.
6. A reliable quality inspection system for geographical national condition monitoring results, comprising a memory, a processor and a computer program stored on the memory and operable on the processor, characterized in that: when executing the program, the processor implements the reliable quality inspection method for geographical national condition monitoring results according to any one of claims 1 to 5.
CN202010040770.4A 2020-01-15 2020-01-15 Reliable quality inspection method and system for geographical national condition monitoring result Active CN111259955B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010040770.4A CN111259955B (en) 2020-01-15 2020-01-15 Reliable quality inspection method and system for geographical national condition monitoring result


Publications (2)

Publication Number Publication Date
CN111259955A CN111259955A (en) 2020-06-09
CN111259955B true CN111259955B (en) 2023-12-08

Family

ID=70953131


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111898503B (en) * 2020-07-20 2021-02-26 中国农业科学院农业资源与农业区划研究所 Crop identification method and system based on cloud coverage remote sensing image and deep learning
CN112101464B (en) * 2020-09-17 2024-03-15 西安锐思数智科技股份有限公司 Deep learning-based image sample data acquisition method and device
CN112148829B (en) * 2020-09-30 2023-05-16 重庆市规划设计研究院 GIS algorithm optimization method applied to broken pattern spot elimination
CN112733745A (en) * 2021-01-14 2021-04-30 北京师范大学 Cultivated land image extraction method and system
CN113077458B (en) * 2021-04-25 2023-09-19 北京艾尔思时代科技有限公司 Cloud and shadow detection method and system in remote sensing image
CN114154040B (en) * 2022-02-07 2022-06-10 自然资源部国土卫星遥感应用中心 Construction method and device of remote sensing reference data set

Citations (7)

Publication number Priority date Publication date Assignee Title
CN101976437A (en) * 2010-09-29 2011-02-16 中国资源卫星应用中心 High-resolution remote sensing image variation detection method based on self-adaptive threshold division
CN103971115A (en) * 2014-05-09 2014-08-06 中国科学院遥感与数字地球研究所 Automatic extraction method for newly-increased construction land image spots in high-resolution remote sensing images based on NDVI and PanTex index
CN104298734A (en) * 2014-09-30 2015-01-21 东南大学 Mobile updating method for change pattern spots in land use status change survey
CN107767380A (en) * 2017-12-06 2018-03-06 电子科技大学 A kind of compound visual field skin lens image dividing method of high-resolution based on global empty convolution
CN109409315A (en) * 2018-11-07 2019-03-01 浩云科技股份有限公司 A kind of ATM machine panel zone remnant object detection method and system
CN110135354A (en) * 2019-05-17 2019-08-16 武汉大势智慧科技有限公司 A kind of change detecting method based on outdoor scene threedimensional model
CN110472661A (en) * 2019-07-10 2019-11-19 北京吉威数源信息技术有限公司 Method for detecting automatic variation and system based on history background and current remote sensing image


Non-Patent Citations (7)

Title
党宇, 张继贤, 邓喀中, 赵有松, 余凡. Evaluation of remote sensing image land cover classification based on the deep learning AlexNet. 地球信息科学学报 (Journal of Geo-information Science), 2017, 19(11), p. 1532 right column para. 2, p. 1533 right column paras. 2-3. *
冷顺绿. Development and application of a land cover quality inspection tool in basic geographical national condition monitoring projects. 湖北农业科学 (Hubei Agricultural Sciences), 2019, 58, p. 156 para. 1. *
刘仲民, 王阳, 李战明, 胡文瑾. Image segmentation algorithm based on simple linear iterative clustering and fast nearest-neighbor region merging. 吉林大学学报(工学版) (Journal of Jilin University, Engineering and Technology Edition), 2018(06), full text. *
刘文萍, 赵磊, 周焱, 宗世祥, 骆有庆. UAV land cover image segmentation method based on deep learning. 农业机械学报 (Transactions of the Chinese Society for Agricultural Machinery), (02). *
张志强, 张新长, 辛秦川, 杨晓羚. Building change detection in high-resolution remote sensing images combining pixel-level and object-level analysis. 测绘学报 (Acta Geodaetica et Cartographica Sinica), 2018(01), full text. *
彭超, 魏雪云. Superpixel merging method for coastline segmentation of SAR images. 电光与控制 (Electronics Optics & Control), 2019, 26, pp. 12-15. *
熊松. Research on inspection methods for manually classified pattern spots in the geographical national condition census. 科技创新与应用 (Technology Innovation and Application), 2016(25), p. 76. *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant