Change detection method based on live-action three-dimensional model
Technical Field
The invention relates to the technical field of three-dimensional digitization, in particular to a change detection method based on a live-action three-dimensional model.
Background
Change detection is one of the key technologies in fields such as land cover monitoring, land use monitoring, disaster assessment, disaster prediction, and geographic information data updating, and has long attracted attention. Change detection comprises change area detection and change type identification. The traditional change detection process consists of generating a difference map and classifying the changed areas; difference map acquisition methods include image differencing, image ratioing, and the like. These pixel-based methods are only applicable to large-scale satellite images or low-resolution aerial images; for the increasingly common high-resolution images, they easily produce a large number of fragments, generating excessive pseudo-change regions and hindering subsequent data processing. Traditional classification methods are divided into supervised and unsupervised classification; however, they are image-based, use only the color information of the images, rely on an overly single classification basis, and achieve low classification accuracy.
With the rapid development of unmanned aerial vehicle (UAV) technology, UAV images are increasingly applied to the acquisition of geographic information source data owing to advantages such as low acquisition cost, high efficiency, and high resolution. Live-action three-dimensional model data generated from UAV images have also become an important type of geographic information data. Such model data carry both color and geometric information and can be applied to change detection. Moreover, given the high resolution of UAV images, applying an object-oriented method that performs change detection with segmented objects as the basic unit can greatly improve change detection precision.
Deep learning is a new field in machine learning research. Its motivation lies in establishing and simulating neural networks that analyze and learn like the human brain, imitating the mechanisms of the human brain to interpret data such as images, sounds, and texts. Deep learning aims to learn better features by constructing machine learning models with many hidden layers and massive training data, thereby improving classification accuracy. Tracing it to its roots, the concept of deep learning derives from research on artificial neural networks: more abstract high-level representations of attribute categories or features are formed by combining low-level features, so as to discover distributed feature representations of the data. The convolutional neural network (CNN), which specifically addresses image classification and recognition, is a deep learning network with a convolutional structure. A CNN can automatically extract spatial features from images, taking the pixel to be classified together with its neighborhood pixels as the network input and converting them into features that a machine learning task can exploit effectively. In recent years, image classification with neural network methods has matured steadily and its application fields have expanded. Compared with traditional classification methods, deep learning has a strong capability of learning the essential characteristics of a data set from a small number of samples, and can greatly improve the recognition accuracy of changed ground object types.
In summary, the invention provides a change detection method based on a live-action three-dimensional model, which utilizes live-action three-dimensional model data, detects change areas with an object-oriented method, and identifies change types with a deep learning method, thereby greatly improving the change detection precision and the change type identification accuracy.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a change detection method based on a live-action three-dimensional model, which solves the problem that the traditional change detection method based on images only utilizes the color information of the images and is difficult to achieve satisfactory effect on change detection precision and change type identification accuracy.
(II) technical scheme
In order to achieve the purpose, the invention is realized by the following technical scheme: a change detection method based on a live-action three-dimensional model specifically comprises the following steps:
s1, calculating the overlapping area before and after the change according to the range of the live-action three-dimensional model;
s2, performing texture resampling and elevation resampling on the multiple three-dimensional models to generate a digital orthophoto map (DOM) and a digital surface model (DSM);
s3, carrying out image processing on the digital orthophoto map DOM and the digital surface model DSM in the step S2, and then carrying out object-oriented segmentation to generate a patch object set;
s4, judging whether each patch is a change region according to the elevation change within each patch object;
s5, respectively collecting sample data of different ground feature types;
s6, training a sample;
s7, generating a classifier;
and S8, inputting the color and elevation information of the patches to obtain the types of the changed ground features.
Preferably, the step S1 of calculating the overlap area specifically includes: firstly, reading in the live-action three-dimensional model data before and after the change, and calculating the boundary ranges of the two areas to obtain the overlapping area before and after the change; then setting the length and width of a block, re-dividing the overlapping area into blocks, and carrying out the subsequent processing on each block.
Preferably, the model resampling in step S2 is divided into texture resampling and elevation resampling. A horizontal sampling grid is generated according to the boundary range of the block, and the grid spacings Δx and Δy are set according to the resolution of the model. Given the initial grid coordinate (x0, y0), the horizontal coordinate of grid point (i, j) is obtained as: x = x0 + i·Δx, y = y0 + j·Δy.
Preferably, the digital orthophoto map (DOM) is generated by taking the texture color values of the model points at the corresponding positions on the model as z values according to the horizontal coordinates of the grid points, and the digital surface model (DSM) is then generated by taking the elevation values of the model points at the corresponding positions as z values according to the same horizontal coordinates.
Preferably, in the step S3, the DOM and the DSM are segmented to generate the patch object set. Using the generated digital orthophoto map, the image is segmented with an efficient graph-based segmentation method into a number of specific regions with unique properties; details in low-variation regions are maintained while details in high-variation regions are ignored, which reduces the generation of fine fragments and yields a good segmentation effect. Using the digital surface model DSM, the elevation values of the grid points are stretched to the range 0-255 to generate a gray image, which is divided into a number of non-overlapping regions by a threshold segmentation method. The two segmentation results are then combined to obtain the final patch object set.
Preferably, in step S4, whether a patch is a change region is determined as follows: for each patch object, the mean of the elevation differences within the patch is computed and a threshold is set; if the mean elevation difference is higher than the threshold, the patch is regarded as a candidate change region, otherwise as an unchanged region, and the initial change region is thus generated.
Preferably, the classifier in step S7 is generated by deep learning. A deep learning network is formed by gathering a plurality of neural units together into a hierarchical structure; the simplest network consists of an input layer, an output layer, and a hidden layer, each layer has a plurality of neurons, each neuron in one layer is connected to the neurons in the next layer, and the output of the previous layer serves as the input of the next layer. Such a network is also referred to as a fully connected network.
Preferably, the prediction of the changed surface feature type in step S8 consists of inputting a surface feature patch into the classifier, which computes and outputs the probability of each class; the class with the highest probability is the type of the changed surface feature.
(III) advantageous effects
The invention provides a change detection method based on a live-action three-dimensional model. Compared with the prior art, the method has the following beneficial effects. The change detection method specifically comprises the following steps: S1, calculating the overlapping area before and after the change according to the range of the live-action three-dimensional model; S2, performing texture resampling and elevation resampling on the three-dimensional models to generate a digital orthophoto map (DOM) and a digital surface model (DSM); S3, carrying out image processing on the DOM and DSM of step S2 and then performing object-oriented segmentation to generate a patch object set; S4, judging whether each patch is a change region according to the elevation change within each patch object; S5, respectively collecting sample data of different ground feature types; S6, training the samples; S7, generating a classifier; and S8, inputting the color and elevation information of the patches to obtain the types of the changed ground features. By using the color and geometric information of the live-action three-dimensional model, segmentation is realized with an object-oriented method and change detection is performed with the object as the basic unit to determine the change regions; the ground feature change types are then recognized with a deep learning method. This greatly improves the change detection precision and the change type recognition accuracy, and the combined use of color and geometric information provides favorable conditions for further rejecting pseudo-change regions, improving the detection precision and enriching the classification basis.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a technical solution: a change detection method based on a live-action three-dimensional model specifically comprises the following steps:
s1, calculating the overlapping area before and after the change according to the range of the live-action three-dimensional model;
s2, performing texture resampling and elevation resampling on the multiple three-dimensional models to generate a digital orthophoto map (DOM) and a digital surface model (DSM);
s3, carrying out image processing on the digital orthophoto map DOM and the digital surface model DSM in the step S2, and then carrying out object-oriented segmentation to generate a patch object set;
s4, judging whether each patch is a change region according to the elevation change within each patch object;
s5, respectively collecting sample data of different ground feature types;
s6, training a sample;
s7, generating a classifier;
and S8, inputting the color and elevation information of the patches to obtain the types of the changed ground features.
In the present invention, the step S1 of calculating the overlap area specifically includes: firstly, reading in the live-action three-dimensional model data before and after the change, and calculating the boundary ranges of the two areas to obtain the overlapping area before and after the change; then setting the length and width of a block, re-dividing the overlapping area into blocks, and carrying out the subsequent processing on each block.
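As a minimal illustrative sketch of step S1 (the `Box`, `intersect`, and `tile_overlap` names are assumptions for illustration, not part of the invention), the overlap can be computed as a bounding-box intersection of the two model extents and then tiled into blocks:

```python
from dataclasses import dataclass

@dataclass
class Box:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def intersect(a: Box, b: Box):
    """Return the overlapping box of two model extents, or None if disjoint."""
    xmin, ymin = max(a.xmin, b.xmin), max(a.ymin, b.ymin)
    xmax, ymax = min(a.xmax, b.xmax), min(a.ymax, b.ymax)
    if xmin >= xmax or ymin >= ymax:
        return None
    return Box(xmin, ymin, xmax, ymax)

def tile_overlap(box: Box, block_w: float, block_h: float):
    """Re-divide the overlap into block_w x block_h blocks (edge blocks clipped)."""
    tiles = []
    y = box.ymin
    while y < box.ymax:
        x = box.xmin
        while x < box.xmax:
            tiles.append(Box(x, y,
                             min(x + block_w, box.xmax),
                             min(y + block_h, box.ymax)))
            x += block_w
        y += block_h
    return tiles
```

Each returned block would then be passed to the subsequent resampling and segmentation steps.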
In the invention, the model resampling in step S2 is divided into texture resampling and elevation resampling. A horizontal sampling grid is generated according to the boundary range of the block, and the grid spacings Δx and Δy are set according to the resolution of the model. Given the initial grid coordinate (x0, y0), the horizontal coordinate of grid point (i, j) is obtained as: x = x0 + i·Δx, y = y0 + j·Δy.
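The grid formula above translates directly into code; `make_grid` is an illustrative name, not from the invention:

```python
def make_grid(x0, y0, dx, dy, ncols, nrows):
    """Horizontal sampling grid: point (i, j) lies at x = x0 + i*dx, y = y0 + j*dy."""
    return [[(x0 + i * dx, y0 + j * dy) for i in range(ncols)]
            for j in range(nrows)]
```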
According to the horizontal coordinates of the grid points, the texture color values of the model points at the corresponding positions on the model are taken as z values to generate the digital orthophoto map (DOM); then, according to the same horizontal coordinates, the elevation values of the model points at the corresponding positions are taken as z values to generate the digital surface model (DSM).
In the invention, the DOM and DSM are segmented in step S3 to generate the patch object set. Using the generated digital orthophoto map, the image is segmented with an efficient graph-based segmentation method into a number of specific regions with unique properties; details in low-variation regions are maintained while details in high-variation regions are ignored, which reduces the generation of fine fragments and yields a good segmentation effect. Using the digital surface model DSM, the elevation values of the grid points are stretched to the range 0-255 to generate a gray image, which is divided into a number of non-overlapping regions by a threshold segmentation method. The two segmentation results are then combined to obtain the final patch object set.
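A minimal sketch of the DSM half of this step, assuming a simple linear stretch and a single global threshold (function names are illustrative; the graph-based segmentation of the orthophoto is a separate algorithm and is omitted here):

```python
def stretch_to_gray(dsm):
    """Linearly stretch DSM elevation values to the 0-255 gray range."""
    flat = [v for row in dsm for v in row]
    lo, hi = min(flat), max(flat)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return [[int(round((v - lo) * scale)) for v in row] for row in dsm]

def threshold_segment(gray, t):
    """Divide the gray image into non-overlapping regions by threshold t."""
    return [[1 if v >= t else 0 for v in row] for row in gray]
```

In practice a multi-level threshold (or per-block thresholds) would give more than two regions; the two-level case shown here is the simplest instance.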
In the present invention, in step S4, whether a patch is a change region is determined as follows: for each patch object, the mean of the elevation differences within the patch is computed and a threshold is set; if the mean elevation difference is higher than the threshold, the patch is regarded as a candidate change region, otherwise as an unchanged region, and the initial change region is thus generated.
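The per-patch test can be sketched as follows, assuming the elevation differences (post-change DSM minus pre-change DSM) inside the patch have already been gathered:

```python
def is_candidate_change(height_diffs, threshold):
    """True if the mean absolute elevation difference in the patch exceeds the threshold."""
    mean = sum(abs(d) for d in height_diffs) / len(height_diffs)
    return mean > threshold
```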
The method for eliminating pseudo-change regions comprises the following criteria: 1) unimportant change regions such as vegetation are removed by means of the vegetation index; the vegetation index EGI = 2G - R - B, or the normalized index nEGI = (2G - R - B)/(2G + R + B), of a patch object is calculated from the RGB values of the image, and if the vegetation index before and after the change exceeds a threshold, the region is regarded as a pseudo-change region; 2) since a genuine change region generally has a larger area, a fine isolated region can be regarded as a pseudo change; 3) a patch with irregular geometric characteristics, such as a long, narrow, or highly concave shape, is regarded as a pseudo-change region. If a patch is determined to be a change region, step S8 is executed; otherwise, the process ends.
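The vegetation indices of criterion 1) follow directly from the formulas in the text (function names are illustrative):

```python
def egi(r, g, b):
    """Excess green index: EGI = 2G - R - B."""
    return 2 * g - r - b

def negi(r, g, b):
    """Normalized excess green index: nEGI = (2G - R - B) / (2G + R + B)."""
    denom = 2 * g + r + b
    return egi(r, g, b) / denom if denom else 0.0
```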
In the present invention, the classifier in step S7 is generated by a deep learning method. A deep learning network is formed by gathering a plurality of neural units together into a hierarchical structure; the simplest network consists of an input layer, an output layer, and a hidden layer, each layer has a plurality of neurons, each neuron in one layer is connected to the neurons in the next layer, and the output of the previous layer serves as the input of the next layer; such a network is also referred to as a fully connected network. Deep learning generally uses a multi-layer neural network composed of three parts: 1) the input layer, responsible for data acquisition; 2) the feature extraction part, a combination of n convolutional layers and pooling layers, i.e., the hidden layers invisible from the outside; 3) the output layer, composed of a fully connected multi-layer perceptron classifier.
The last layer of the classification model is usually a Softmax regression model, whose working principle is to sum up the features supporting the judgment that an input belongs to a certain class and then convert that sum into the probability of the class. The features are described as:
features_i = Σ_j W_{i,j} · x_j + b_i
where i denotes the i-th class, j denotes pixel j of an image, b_i is the bias (representing the tendency of the data itself), W denotes the weight parameters, and x denotes the input image data.
Next, softmax is computed over all the features:
softmax(x)=normalize(exp(x))
the probability of the i-th class being determined can be obtained by the following equation.
In order to train the model, a loss function needs to be defined to describe the classification precision of the model on the problem; the smaller the loss, the smaller the deviation of the model's classification result from the true value, i.e., the more accurate the model. For multi-classification problems, cross-entropy is usually used as the loss function, defined as H_{y'}(y) = -Σ_i y'_i · log(y_i), where y is the predicted probability distribution and y' is the true probability distribution (i.e., the one-hot code of the label); it judges how accurately the model estimates the true distribution.
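A minimal sketch of the cross-entropy loss (the small epsilon guarding against log(0) is an implementation detail, not part of the definition):

```python
import math

def cross_entropy(y_true, y_pred, eps=1e-12):
    """H_{y'}(y) = -sum_i y'_i * log(y_i); y_true is the one-hot label."""
    return -sum(t * math.log(p + eps) for t, p in zip(y_true, y_pred))
```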
Applying stochastic gradient descent (SGD) to neural networks yields the back propagation algorithm. A common stochastic gradient descent optimization algorithm is used to optimize the loss function: the gradient descent method determines a new search direction along the negative gradient at each iteration, so that the objective function to be optimized decreases step by step, and the most appropriate weight parameters of the perceptron are solved from the known input values (images) and real output values (prediction probabilities).
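A single gradient-descent update is one step along the negative gradient; the sketch below assumes the gradient of the loss with respect to the weights has already been computed (e.g., by back propagation), and the learning rate value is illustrative:

```python
def sgd_step(w, grad, lr=0.1):
    """One update: w_new = w - lr * grad, moving along the negative gradient."""
    return [wi - lr * gi for wi, gi in zip(w, grad)]
```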
Thus, applying deep learning to change detection mainly comprises three steps: 1) sample collection: according to ground feature types such as roads, buildings, land, and vegetation, a certain number of ground feature patches with different resolutions, different viewing angles, and different degrees of brightness are selected on the map and placed into a sample library; 2) sample training: a classification algorithm formula and a loss function are defined, then an optimization algorithm is defined, and iterative training is carried out, updating the parameters to reduce the loss at each iteration until globally optimal parameters are reached; to better complete the task, the method adopts two different network structures, the Google Inception Net V3 network structure and the SegNet network structure, to recognize and segment the images; 3) classifier generation: the model parameters output by the training are saved so that they can be loaded at prediction time.
In the invention, the step S8 of predicting the changed ground feature type consists of inputting the ground feature patch into the classifier, which computes and outputs the probability of each class; the class with the highest probability is the type of the changed ground feature.
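Step S8 thus reduces to an argmax over the class probabilities output by the classifier; a minimal sketch with illustrative class names:

```python
def predict(probs, classes):
    """Return the class with the highest predicted probability."""
    best = max(range(len(probs)), key=probs.__getitem__)
    return classes[best]
```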
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.