CN111738972A - Building detection system, method and device - Google Patents

Building detection system, method and device

Info

Publication number
CN111738972A
Authority
CN
China
Prior art keywords
newly added building, building, moment, image data
Prior art date
Legal status
Granted
Application number
CN201910211703.1A
Other languages
Chinese (zh)
Other versions
CN111738972B (en)
Inventor
陈伟涛
王洪彬
李�昊
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201910211703.1A
Publication of CN111738972A
Application granted
Publication of CN111738972B
Legal status: Active
Anticipated expiration


Classifications

    • G06T 7/0002 — Image analysis; Inspection of images, e.g. flaw detection
    • G06T 7/11 — Segmentation; Region-based segmentation
    • G06T 2207/10032 — Image acquisition modality; Satellite or aerial image; Remote sensing
    • G06T 2207/20081 — Special algorithmic details; Training; Learning
    • G06T 2207/20084 — Special algorithmic details; Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a building detection system, method, device and apparatus. The building detection method comprises the following steps: acquiring first-moment image data and second-moment image data of an area to be detected; extracting, through a newly added building feature extraction sub-network included in a newly added building detection model, the newly added building features of the area to be detected at the first moment relative to the second moment from the first-moment image data and the second-moment image data; and determining, through a newly added building prediction sub-network included in the model, the position of the newly added building in the area to be detected at the first moment relative to the second moment according to the newly added building features. With this processing mode, newly added buildings in remote sensing images are detected on the basis of a deep learning method, so newly added buildings can be detected automatically from the various complex scenes in remote sensing images covering a wide area; the detection recall rate and accuracy for newly added buildings, as well as the detection efficiency, can thereby be effectively improved.

Description

Building detection system, method and device
Technical Field
The application relates to the technical field of remote sensing image processing, and in particular to a building detection system, method and device, and to a newly added building detection apparatus.
Background
Remote sensing image analysis is used to understand how land is being used and is of great significance to land-administration departments. In particular, detecting newly added buildings in a timely manner makes it possible to determine the positions of illegal buildings while ensuring timeliness, which matters greatly for the protection of cultivated land and for urban construction management.
At present, the commonly used methods for detecting newly added buildings in a given area are manual detection and automatic detection with intelligent image-analysis software. Detection with intelligent image-analysis software proceeds as follows. First, based on a vector classification method, the original buildings are extracted using an original-building vector file, for example with an assign-class algorithm. Next, all unclassified objects higher than 2 meters are extracted using the DTM and DSM files and named elevated objects; the elevated objects include newly added buildings as well as trees higher than 2 meters. The elevated objects are then subdivided into newly added buildings and trees by NDVI using a classification algorithm, for example with an NDVI threshold of 0.03: objects above 0.03 are treated as trees and objects below 0.03 as newly added buildings. The newly added buildings and trees are further processed with merging, small-area removal and area-normalization algorithms to extract the newly added buildings; finally, the extracted newly added buildings may also be exported as a vector file.
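The NDVI-based splitting step can be illustrated with a short sketch. The band arrays, function names and mask handling below are illustrative assumptions; only the 0.03 threshold comes from the description above.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index from near-infrared and red bands."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)   # small epsilon avoids division by zero

def split_elevated_objects(nir, red, elevated_mask, threshold=0.03):
    """Within the elevated objects (>2 m), treat NDVI > threshold as trees
    and the remaining elevated pixels as candidate newly added buildings."""
    index = ndvi(nir, red)
    tree_mask = elevated_mask & (index > threshold)
    new_building_mask = elevated_mask & (index <= threshold)
    return tree_mask, new_building_mask
```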
However, in the course of implementing the present invention, the inventors found that the above technical solutions have at least the following problems: 1) manual detection requires a large amount of time and manpower to compare and analyze remote sensing images, so the labor cost is high and the detection efficiency is low; moreover, the detection result depends on the professional experience of the inspector, so the detection recall rate and accuracy cannot be guaranteed; 2) the automatic detection approach likewise leaves the detection recall rate and accuracy in need of improvement.
Disclosure of Invention
The application provides a building detection method, aiming to solve the problem of low recall and accuracy of newly added building detection in the prior art. The application additionally relates to a building detection device and system, and to a building detection apparatus.
The application provides a building detection method, which comprises the following steps:
acquiring first moment image data and second moment image data of a to-be-detected area;
extracting the newly added building features of the area to be detected at the first moment relative to the second moment from the first moment image data and the second moment image data through a newly added building feature extraction sub-network included in a newly added building detection model;
and determining the position of the newly added building of the area to be detected at the first moment relative to the second moment according to the characteristics of the newly added building through a newly added building prediction sub-network included by the model.
Optionally, the newly added building detection model includes: at least one newly added building feature extraction sub-network, at least one first newly added building prediction sub-network, and a second newly added building prediction sub-network;
extracting at least one newly added building feature of the to-be-detected area in at least one depth grade relative to the second moment at the first moment from the first moment image data and the second moment image data through the at least one newly added building feature extraction sub-network;
acquiring newly added building scores respectively corresponding to at least one depth level according to the newly added building characteristics of the at least one depth level through the at least one first newly added building prediction sub-network;
and determining the positions of the newly added buildings according to the newly added building scores respectively corresponding to at least one depth level through the second newly added building prediction subnetwork.
Optionally, the input data of the newly-added building feature extraction sub-network includes a newly-added building feature of a previous depth level output by a previous newly-added building feature extraction sub-network adjacent to the newly-added building feature extraction sub-network.
Optionally, the method further includes:
dividing the first moment image data and the second moment image data into first moment sub-image data and second moment sub-image data which respectively correspond to a plurality of sub-areas;
for each sub-region, extracting the newly added building features of the at least one depth level from the first-moment sub-image data and the second-moment sub-image data through the at least one newly added building feature extraction sub-network;
acquiring newly added building scores respectively corresponding to at least one depth level according to the newly added building characteristics of the at least one depth level through the at least one first newly added building prediction sub-network;
and determining the positions of the newly added buildings corresponding to the sub-regions according to the newly added building scores respectively corresponding to at least one depth level through the second newly added building prediction sub-network.
Optionally, the method further includes:
acquiring a training data set comprising newly added building position marking information;
constructing a neural network; the neural network comprises the newly added building feature extraction sub-network and a newly added building prediction sub-network;
training the neural network according to the training data set.
The present application further provides a building detection system, including:
the client is used for sending a newly added building detection request aiming at the target area to the webpage service module; receiving the image including the newly added building position identification of the target area returned by the webpage service module, and displaying the image;
the webpage service module is used for receiving the request and sending a new building detection instruction to the new building detection module; receiving newly added building position data sent by the file transmission service module; generating the image according to the newly added building position data, and returning the image to the client;
the newly added building detection module is used for acquiring first moment image data and second moment image data of the target area according to the instruction; extracting the newly added building features of the target area at the first time relative to the second time from the first time image data and the second time image data through a newly added building feature extraction sub-network included in a newly added building detection model; determining the position of the newly added building of the target area at a first moment relative to a second moment according to the characteristics of the newly added building through a newly added building prediction sub-network included in the model; sending a newly added building position data file to a file transmission service module;
and the file transmission service module is used for receiving the newly added building position data file and sending the newly added building position data to the webpage service module.
Optionally, the newly added building detection module is specifically configured to divide the first time image data and the second time image data into first time sub-image data and second time sub-image data respectively corresponding to the multiple sub-areas; aiming at each subregion, extracting at least one newly added building feature of at least one depth grade of the subregion at a first time relative to a second time from the first time subimage data and the second time subimage data through at least one newly added building feature extraction subnetwork included by a newly added building detection model; acquiring newly added building scores respectively corresponding to at least one depth level according to the newly added building characteristics of the at least one depth level through at least one first newly added building prediction sub-network included by the model; and determining the positions of the newly added buildings corresponding to the sub-regions according to the scores of the newly added buildings respectively corresponding to at least one depth level through a second newly added building prediction sub-network included by the model.
The present application further provides a building detection apparatus, including:
the webpage service module is used for receiving a newly added building detection request aiming at a target area and sent by a client and sending a newly added building detection instruction to the newly added building detection module; receiving the data of the newly added building position sent by the file transmission service module; generating an image of the target area including a newly added building position identifier according to the newly added building position data, and returning the image to the client;
the newly added building detection module is used for acquiring first moment image data and second moment image data of the target area according to the instruction; extracting the newly added building features of the target area at the first time relative to the second time from the first time image data and the second time image data through a newly added building feature extraction sub-network included in a newly added building detection model; determining the position of the newly added building of the target area at a first moment relative to a second moment according to the characteristics of the newly added building through a newly added building prediction sub-network included in the model; sending a newly added building position data file to a file transmission service module;
and the file transmission service module is used for receiving the newly added building position data file and sending the newly added building position data to the webpage service module.
The application also provides a building detection device, comprising:
the image data acquisition unit is used for acquiring first moment image data and second moment image data of the area to be detected;
the feature extraction unit is used for extracting the newly added building features of the area to be detected at the first moment relative to the second moment from the first moment image data and the second moment image data through a newly added building feature extraction sub-network included in the newly added building detection model;
and the newly added building position determining unit is used for determining, through a newly added building prediction sub-network included in the model, the newly added building position of the area to be detected at the first moment relative to the second moment according to the newly added building features.
Optionally, the newly added building detection model includes: at least one newly-added building feature extraction sub-network, at least one first newly-added building prediction sub-network and a second newly-added building prediction sub-network which respectively correspond to the at least one newly-added building feature extraction sub-network;
the feature extraction unit is specifically configured to extract, through the at least one newly added building feature extraction sub-network, the newly added building feature of the at least one hierarchy level from the first time image data and the second time image data;
the newly added building position determining unit comprises a first subunit and a second subunit;
the first subunit is configured to, through the at least one first newly added building prediction sub-network, obtain, according to the newly added building features of the at least one hierarchy, newly added building scores respectively corresponding to the at least one hierarchy;
and the second subunit is configured to determine, through the second newly-added building prediction sub-network, the newly-added building position according to the newly-added building scores respectively corresponding to the at least one hierarchy.
Optionally, the method further includes:
and the image segmentation unit is used for segmenting the first moment image data and the second moment image data into first moment sub-image data and second moment sub-image data which respectively correspond to the plurality of sub-areas.
The present application also provides a computer-readable storage medium having stored therein instructions, which when run on a computer, cause the computer to perform the various methods described above.
The present application also provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the various methods described above.
Compared with the prior art, the method has the following advantages:
According to the building detection method provided by the embodiments of the application, first-moment image data and second-moment image data of the area to be detected are obtained; the newly added building features of the area to be detected at the first moment relative to the second moment are extracted from the first-moment image data and the second-moment image data through a newly added building feature extraction sub-network included in a newly added building detection model; and the position of the newly added building in the area to be detected at the first moment relative to the second moment is determined according to the newly added building features through a newly added building prediction sub-network included in the model. With this processing mode, newly added buildings in remote sensing images are detected on the basis of a deep learning method, and newly added buildings can be detected automatically from the various complex scenes in remote sensing images covering a wide area; the detection recall rate and accuracy for newly added buildings, as well as the detection efficiency, can thereby be effectively improved.
Drawings
FIG. 1 is a flow chart of an embodiment of a building detection method provided herein;
FIG. 2a is a first-time image from an embodiment of the building detection method provided by the present application;
FIG. 2b is a second-time image from an embodiment of the building detection method provided by the present application;
FIG. 3 is a schematic model diagram of an embodiment of a building detection method provided herein;
FIG. 4a is a graph showing the comparison results of an embodiment of the building detection method provided in the present application;
fig. 4b is a graph illustrating a visualization result of a comparison of an embodiment of a building detection method provided in the present application;
FIG. 5 is a graph comparing model effects of an embodiment of a building detection method provided by the present application;
FIG. 6 is a detailed flow chart of an embodiment of a building detection method provided herein;
FIG. 7 is a flow chart of model generation for an embodiment of a building detection method provided herein;
FIG. 8 is a schematic view of an embodiment of a building detection apparatus provided herein;
FIG. 9 is a detailed schematic view of an embodiment of a building detection apparatus provided herein;
FIG. 10 is a schematic view of an embodiment of a building detection system provided herein;
FIG. 11 is an interactive schematic of an embodiment of a building detection system provided herein;
FIG. 12 is a schematic view of an embodiment of a building detection apparatus provided herein.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from its spirit; the application is therefore not limited to the specific implementations disclosed below.
The application provides a building detection system, a building detection method and a building detection device, as well as a newly added building detection apparatus. Each scheme is described in detail in the following embodiments.
The core technical idea of the technical solution provided by the application is to solve the problem of automatically detecting newly added buildings in remote sensing images on the basis of deep learning. Specifically, through a neural-network-based newly added building detection model, the newly added building features of the area to be detected at a first moment relative to a second moment are extracted from the first-moment image data and second-moment image data of that area, and the newly added building position of the area at the first moment relative to the second moment is determined from the extracted features. Because the method detects newly added buildings in remote sensing images with a deep learning approach, newly added buildings can be detected automatically from the various complex scenes in remote sensing images covering a wide area; the detection recall rate and accuracy for newly added buildings, as well as the detection efficiency, can thereby be effectively improved.
First embodiment
Please refer to fig. 1, which is a flowchart of an embodiment of a building detection method according to the present application; the execution subject of the method includes a building detection apparatus. The application provides a building detection method comprising the following steps:
step S101: and acquiring first moment image data and second moment image data of the area to be detected.
The image data comprise remote sensing data obtained by observing the area to be detected with remote sensing technology; remote sensing images rendered from these data are shown in fig. 2a and fig. 2b. In a specific implementation, the satellite data can be acquired from a remote sensing platform, which combines remote sensing instruments with the receiving, processing and analysis of the information.
The first time is a later time relative to the second time, such as 2018/12/1 for the first time, 2018/11/1 for the second time, and so on. Fig. 2a and 2b show first-time image data and second-time image data of the present embodiment, respectively.
In this embodiment, the first time image data and the second time image data are respectively downloaded according to the respective URL addresses corresponding to the first time image data and the second time image data.
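A minimal sketch of this download step follows; the requests-based helper, its name and the example file names are assumptions, not part of the application.

```python
import requests

def download_image(url: str, save_path: str) -> str:
    """Fetch one moment's remote sensing image from its URL and store it locally."""
    response = requests.get(url, timeout=60)
    response.raise_for_status()               # fail loudly on HTTP errors
    with open(save_path, "wb") as f:
        f.write(response.content)
    return save_path

# e.g. download_image(first_moment_url, "t1.png"); download_image(second_moment_url, "t2.png")
```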
After the first moment image data and the second moment image data of the area to be detected are obtained, the next step can be carried out to extract the characteristics of the newly added building through the newly added building detection model.
Step S103: and extracting the newly added building features of the area to be detected at the first moment relative to the second moment from the first moment image data and the second moment image data through a newly added building feature extraction sub-network included in the newly added building detection model.
The newly added building detection model can have an encoding-decoding structure, comprising a newly added building feature extraction sub-network as the encoding part and a newly added building prediction sub-network as the decoding part. The newly added building feature extraction sub-network is used for extracting, from the first-moment image data and the second-moment image data, the newly added building features of the area to be detected at the first moment relative to the second moment; the newly added building prediction sub-network determines, from those features, the position of the newly added building in the area to be detected at the first moment relative to the second moment.
A newly added building can be a building completed at some moment after the second moment and before the first moment, in which case it is already an existing building by the first moment; it can be a building completed exactly at the first moment, or even a building still under construction at the first moment. Whichever case applies, the newly added building is absent from the image data at the second moment but present in the image data at the first moment, and thus reflects the change in buildings at the first moment relative to the second moment.
The input data of the model include the first-moment image data and the second-moment image data. For example, if the image data are RGB images of 112 × 112 pixels, the dimensions of the model input are 112 × 112 × 3 × 2, where 3 represents the three color channels and 2 represents the two moments. In this embodiment, the comparison images from the two moments are concatenated along the channel dimension as input, and the number of input channels of the corresponding first convolutional layer is increased from the channel count of a single image to that of two images. Through the newly added building feature extraction sub-network, the two images can be transformed non-linearly to obtain newly added building features, whose dimensionality can be lower than that of the image data at the input layer of the sub-network. The newly added building feature extraction sub-network can comprise several convolutional layers, pooling layers and the like.
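A minimal sketch of stacking the two comparison tiles along the channel dimension into a single 6-channel input is shown below. PyTorch is used here purely for illustration; the framework choice and the helper name are assumptions.

```python
import numpy as np
import torch

def build_model_input(img_t1: np.ndarray, img_t2: np.ndarray) -> torch.Tensor:
    """Concatenate the first-moment and second-moment 112x112 RGB tiles along
    the channel axis: (112, 112, 3) + (112, 112, 3) -> (1, 6, 112, 112)."""
    assert img_t1.shape == (112, 112, 3) and img_t2.shape == (112, 112, 3)
    stacked = np.concatenate([img_t1, img_t2], axis=-1)           # (112, 112, 6)
    tensor = torch.from_numpy(stacked).float().permute(2, 0, 1)   # channels first
    return tensor.unsqueeze(0)                                    # add batch dimension
```

The resulting 6-channel tensor is what the first 1 × 1 convolutional layer described below would consume.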
Please refer to fig. 3, which is a schematic diagram of the newly added building detection model in an embodiment of the building detection method provided by the present application. In this embodiment, the newly added building detection model includes: at least one newly added building feature extraction sub-network, at least one first newly added building prediction sub-network corresponding to the at least one feature extraction sub-network, and a second newly added building prediction sub-network. Through the at least one newly added building feature extraction sub-network, newly added building features of the area to be detected at the first moment relative to the second moment are extracted at at least one depth level from the first-moment image data and the second-moment image data. The input data of each feature extraction sub-network include the newly added building features of the previous depth level output by the adjacent preceding feature extraction sub-network. The newly added building features of a later depth level are a deeper representation than those of the preceding depth level and may also be called deep newly added building features. In this way, the application detects newly added buildings in remote sensing images with a deep learning algorithm.
In this embodiment, the first layer of the encoding part is a convolutional layer with a 1 × 1 kernel, and its number of input channels is the sum of the channels of the two comparison images (the first-moment image data and the second-moment image data). It is followed by 5 similar modules, i.e. 5 newly added building feature extraction sub-networks, which extract newly added building features at different depths. Each feature extraction sub-network consists of several 3 × 3 convolutional layers with batch normalization, activation and pooling, but the number of output channels differs between sub-networks: the first consists of 2 convolutional layers with 32 output channels in series, the second of 2 convolutional layers with 64 output channels, the third of 3 convolutional layers with 128 output channels, the fourth of 3 convolutional layers with 256 output channels, and the fifth of 3 convolutional layers with 512 output channels. The input of each feature extraction sub-network is the output of the pooling layer of the previous one, so newly added building features of progressively greater depth are extracted; determining the position of the newly added building from these deep representations can effectively improve the recall rate and accuracy of newly added building detection.
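The encoding part described above could be sketched roughly as follows. This is a PyTorch approximation under the stated channel and layer counts; padding, pooling size, the entry layer's output channel count and other details the description does not fix are assumptions.

```python
import torch
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # padding assumed to keep spatial size
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class EncoderBlock(nn.Module):
    """One feature extraction sub-network: stacked 3x3 conv-BN-ReLU layers
    followed by 2x2 max pooling (pool size assumed)."""
    def __init__(self, in_ch, out_ch, n_convs):
        super().__init__()
        layers = [conv_bn_relu(in_ch, out_ch)]
        layers += [conv_bn_relu(out_ch, out_ch) for _ in range(n_convs - 1)]
        self.convs = nn.Sequential(*layers)
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feat = self.convs(x)            # pre-pooling feature, kept for the decoding part
        return feat, self.pool(feat)    # pooled output feeds the next block

class Encoder(nn.Module):
    """Entry 1x1 conv over the 6 stacked channels, then five blocks with
    (32, 64, 128, 256, 512) output channels and (2, 2, 3, 3, 3) conv layers."""
    def __init__(self):
        super().__init__()
        self.entry = nn.Conv2d(6, 6, kernel_size=1)   # output channel count assumed
        cfg = [(6, 32, 2), (32, 64, 2), (64, 128, 3), (128, 256, 3), (256, 512, 3)]
        self.blocks = nn.ModuleList([EncoderBlock(i, o, n) for i, o, n in cfg])

    def forward(self, x):                # x: (N, 6, 112, 112)
        x = self.entry(x)
        feats = []
        for block in self.blocks:
            feat, x = block(x)
            feats.append(feat)           # one pre-pooling feature map per depth level
        return feats
```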
It should be noted that the number of newly added building feature extraction sub-networks may be chosen according to the specific requirements on detection accuracy and recall: the fewer the feature extraction sub-networks, i.e. the fewer the depth levels, the lower the detection accuracy and recall.
After the newly added building features of the area to be detected at the first time relative to the second time are extracted, the next step can be entered to use the newly added building prediction sub-network, and the newly added building position of the area to be detected at the first time relative to the second time is determined according to the newly added building features.
Step S105: and determining the position of the newly added building of the area to be detected at the first moment relative to the second moment according to the characteristics of the newly added building through a newly added building prediction sub-network included by the model.
The newly added building detection model can predict the probability that each pixel point in the image data of the to-be-detected area belongs to the newly added building. The input data of the newly added building prediction sub-network comprises newly added building features extracted through the newly added building feature extraction sub-network, the newly added building prediction sub-network can predict the probability that each point in the image belongs to the newly added building according to the features, namely the newly added building score, and the newly added building position of the area to be detected at the first moment relative to the second moment is determined accordingly.
The output layer size of the new building prediction sub-network may be the number of image pixel points, for example, for a 112 × 112 image data, the sub-network outputs a 112 × 112 matrix, and the value of each element in the matrix may indicate whether the pixel point is the new building location.
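As a simple illustration of turning such a per-pixel output matrix into newly added building locations, consider the sketch below; the 0.5 threshold and the function name are assumptions, not values specified above.

```python
import numpy as np

def score_map_to_positions(score_map: np.ndarray, threshold: float = 0.5):
    """Binarize a 112x112 per-pixel score matrix and list the (row, col)
    pixel coordinates predicted as newly added building."""
    mask = score_map > threshold
    rows, cols = np.nonzero(mask)
    return mask, list(zip(rows.tolist(), cols.tolist()))
```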
As shown in fig. 3, in this embodiment, the new building detection model includes at least one first new building prediction sub-network and a second new building prediction sub-network corresponding to the at least one new building feature extraction sub-network. The input data of different first newly-added building prediction sub-networks are newly-added building features of different depth levels, each first newly-added building prediction sub-network can predict newly-added building scores of different depth levels, and then the newly-added building scores of multiple depth levels are fused through the second newly-added building prediction sub-network to obtain the final newly-added building score, so that the position of a newly-added building is determined. By adopting the processing mode, the newly added building scores respectively corresponding to the plurality of depth levels are comprehensively considered to determine the newly added building position; therefore, the recall rate and the accuracy can be effectively improved.
As can be seen from fig. 3, the decoding part of this embodiment also comprises 5 modules corresponding to the encoding part, i.e. 5 first newly added building prediction sub-networks, and the input of each module is the output of the corresponding encoding module before pooling. Each first newly added building prediction sub-network comprises four convolutional layers. The first convolutional layer has a 1 × 1 kernel, and its number of output channels differs per module: 32 for the first module, 64 for the second, 128 for the third, 256 for the fourth and 512 for the fifth. The second convolutional layer has a 3 × 3 kernel and 1 output channel and is activated after batch normalization; the third convolutional layer has a 3 × 3 kernel and 1 output channel and is activated after batch normalization; the fourth convolutional layer has a 1 × 1 kernel and 1 output channel with no activation. Except for the first module, every module has an up-sampling layer after its first convolutional layer: the output width and height are up-sampled to 2 times the original in the 2nd module, 4 times in the 3rd, 8 times in the 4th and 16 times in the 5th, and each decoding module yields one output map. After encoding and decoding, the network concatenates the output maps of the 5 decoding modules along the channel dimension, fuses the result through a convolutional layer with a 1 × 1 kernel and 1 output channel, and finally applies a sigmoid function to obtain the final segmentation result. Fig. 4a and fig. 4b show, respectively, the comparison result of this embodiment and the comparison-result visualization displayed to the user.
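A PyTorch-style sketch of the decoding part under the description above is given below; it builds on the `Encoder` sketch shown earlier. The interpolation mode, padding and class names are assumptions where the description does not fix them.

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """One first prediction sub-network: 1x1 conv, optional up-sampling,
    two 3x3 conv-BN-ReLU layers with 1 output channel, then a 1x1 conv
    producing one score map at full 112x112 resolution."""
    def __init__(self, in_ch, scale):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, in_ch, kernel_size=1)
        self.up = (nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False)
                   if scale > 1 else nn.Identity())
        self.conv2 = nn.Sequential(nn.Conv2d(in_ch, 1, 3, padding=1), nn.BatchNorm2d(1), nn.ReLU(inplace=True))
        self.conv3 = nn.Sequential(nn.Conv2d(1, 1, 3, padding=1), nn.BatchNorm2d(1), nn.ReLU(inplace=True))
        self.conv4 = nn.Conv2d(1, 1, kernel_size=1)   # no activation

    def forward(self, feat):
        x = self.up(self.reduce(feat))
        return self.conv4(self.conv3(self.conv2(x)))

class Decoder(nn.Module):
    """Five per-level heads plus the second prediction sub-network:
    channel-wise concatenation, 1x1 fusion, sigmoid."""
    def __init__(self):
        super().__init__()
        cfg = [(32, 1), (64, 2), (128, 4), (256, 8), (512, 16)]
        self.heads = nn.ModuleList([DecoderBlock(c, s) for c, s in cfg])
        self.fuse = nn.Conv2d(5, 1, kernel_size=1)

    def forward(self, feats):                                   # feats from Encoder, shallow to deep
        scores = [h(f) for h, f in zip(self.heads, feats)]      # each (N, 1, 112, 112)
        return torch.sigmoid(self.fuse(torch.cat(scores, dim=1)))

class NewBuildingDetectionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()        # from the earlier encoder sketch
        self.decoder = Decoder()

    def forward(self, x):               # x: (N, 6, 112, 112)
        return self.decoder(self.encoder(x))   # per-pixel newly added building probabilities
```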
It should be noted that, in the implementation, the new building location may be determined only according to the new building feature with the highest depth level, but since only the new building feature with one depth level is referred to and the new building features with other lower depth levels are not combined, the detection accuracy is slightly low.
In this embodiment, with the model structure shown in fig. 3, processing a large image (the first-moment image data and the second-moment image data of the area to be detected) of size 2048 × 2048 takes 1 to 2 minutes, which is better than the roughly 6 minutes of manual processing found in preliminary statistics, and requires no expert knowledge.
Please refer to fig. 5, which compares model effects for an embodiment of the building detection method provided in the present application. In this embodiment, the offline evaluation results are shown as PR (precision-recall) curves: the best detection result is obtained by the model of fig. 3, followed by the fusion of the VGG and UNet networks, then the UNet network alone, and finally the VGG network alone. It can be seen that the curve of the model provided in this embodiment of the application is better than the results of the other single models and of their fusion.
Please refer to fig. 6, which is a flowchart illustrating an embodiment of a building detection method according to the present application. In this embodiment, the method further includes the steps of:
step S601: and dividing the first-time image data and the second-time image data into first-time sub-image data and second-time sub-image data respectively corresponding to the plurality of sub-areas.
Since building objects are mostly smaller and do not require too large a field of view, and increasing the size of the image input to the model network causes increased use of video memory, the input image size of the network is set to a smaller size, e.g., 112 × 112 pixels.
Correspondingly, steps S103 and S105 are respectively executed for each sub-region, and the newly added building position of each sub-region is detected until all sub-regions are detected, so that the newly added building position of the whole to-be-detected region is obtained.
This embodiment divides the remote sensing image covering a wide area into smaller sub-images and uses small convolution kernels (e.g. 3 × 3). This processing mode effectively reduces the dimensionality of the model's input data and keeps the number of model parameters under control, giving a neural network structure suited to embedded devices: the newly added building detection model adapts to the computing power and resources of an embedded device, and the detection speed can be effectively improved.
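A sketch of the tiling step is shown below; padding and overlap handling are omitted, and apart from the 112-pixel tile size taken from the embodiment, the details are assumptions.

```python
import numpy as np

def tile_image(image: np.ndarray, tile: int = 112):
    """Split one moment's scene (H, W, C) into non-overlapping tile x tile
    sub-images; applying the same grid to both moments keeps tiles aligned."""
    h, w = image.shape[:2]
    tiles, positions = [], []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(image[y:y + tile, x:x + tile])
            positions.append((y, x))                 # top-left corner of the tile
    return tiles, positions
```

The per-tile detection masks can then be placed back at their recorded (y, x) offsets to assemble the result for the whole area to be detected.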
Please refer to fig. 7, which is a flowchart illustrating a model generation process according to an embodiment of a building detection method provided in the present application.
Step S701: and acquiring a training data set comprising the newly added building position marking information.
The training data comprises first moment image data, second moment image data and newly added building position marking information of the detection area. The training data set includes a plurality of pieces of training data.
Step S703: and constructing a neural network.
The neural network comprises the newly added building feature extraction sub-network and the newly added building prediction sub-network. The neural network may employ a deep neural network.
In one example, the neural network includes at least one added building feature extraction sub-network, at least one first added building prediction sub-network corresponding to the at least one added building feature extraction sub-network, respectively, and a second added building prediction sub-network.
Step S705: training the neural network according to the training data set.
After the training data set is obtained, the newly added building detection model can be obtained by learning from the training data set through a machine learning algorithm.
In the present embodiment, focal loss is adopted as the loss function. Unlike the conventional cross-entropy loss, focal loss helps to address sample imbalance: in a wide-range remote sensing image, newly added buildings, which serve as the positive samples, make up only a small portion, so this loss function is adopted to improve the situation. Focal loss is defined as follows:
FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t)
where p = sigmoid(x); p_t equals p when the label is positive and 1 - p when the label is negative, and x is the prediction output by the network.
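A direct sketch of this loss follows; the alpha and gamma defaults below are common choices and are assumptions, not values given in the description.

```python
import torch

def focal_loss(logits: torch.Tensor, labels: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """FL(p_t) = -alpha * (1 - p_t)**gamma * log(p_t), with p = sigmoid(x),
    p_t = p for positive pixels and p_t = 1 - p for negative pixels."""
    p = torch.sigmoid(logits)
    p_t = torch.where(labels > 0.5, p, 1.0 - p)
    loss = -alpha * (1.0 - p_t).pow(gamma) * torch.log(p_t.clamp(min=1e-7))
    return loss.mean()
```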
As can be seen from the foregoing embodiments, the building detection method provided in the embodiments of the application obtains first-moment image data and second-moment image data of the area to be detected; extracts, through a newly added building feature extraction sub-network included in a newly added building detection model, the newly added building features of the area to be detected at the first moment relative to the second moment from the first-moment image data and the second-moment image data; and determines, through a newly added building prediction sub-network included in the model, the position of the newly added building in the area to be detected at the first moment relative to the second moment according to the newly added building features. With this processing mode, newly added buildings in remote sensing images are detected on the basis of a deep learning method, and newly added buildings can be detected automatically from the various complex scenes in remote sensing images covering a wide area; the detection recall rate and accuracy for newly added buildings, as well as the detection efficiency, can thereby be effectively improved.
In the above embodiment, a building detection method is provided, and correspondingly, the application also provides a building detection device. The apparatus corresponds to an embodiment of the method described above.
Second embodiment
Please refer to fig. 8, which is a schematic diagram of an embodiment of the building detection apparatus of the present application. Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The device embodiments described below are merely illustrative.
The present application additionally provides a building detection device, comprising:
an image data acquiring unit 801, configured to acquire first time image data and second time image data of a region to be detected;
a feature extraction unit 802, configured to extract, from the first-time image data and the second-time image data, a new building feature of the to-be-detected region at a first time relative to a second time through a new building feature extraction sub-network included in a new building detection model;
and a newly added building position determining unit 803, configured to determine, through a newly added building prediction sub-network included in the model, the newly added building position of the area to be detected at the first moment relative to the second moment according to the newly added building features.
Optionally, the newly added building detection model includes: at least one newly-added building feature extraction sub-network, at least one first newly-added building prediction sub-network and a second newly-added building prediction sub-network which respectively correspond to the at least one newly-added building feature extraction sub-network;
the feature extraction unit 802 is specifically configured to extract, through the at least one newly added building feature extraction sub-network, the newly added building features of the at least one hierarchy level from the first time image data and the second time image data;
the newly added building position determination unit 803 includes a first subunit and a second subunit;
the first subunit is configured to, through the at least one first newly added building prediction sub-network, obtain, according to the newly added building features of the at least one hierarchy, newly added building scores respectively corresponding to the at least one hierarchy;
and the second subunit is configured to determine, through the second newly-added building prediction sub-network, the newly-added building position according to the newly-added building scores respectively corresponding to the at least one hierarchy.
Optionally, the input data of the newly-added building feature extraction sub-network includes a newly-added building feature of a previous depth level output by a previous newly-added building feature extraction sub-network adjacent to the newly-added building feature extraction sub-network.
Please refer to fig. 9, which is a detailed schematic diagram of an embodiment of the building detection apparatus of the present application. Optionally, the method further includes:
an image dividing unit 901, configured to divide the first time image data and the second time image data into first time sub-image data and second time sub-image data corresponding to a plurality of sub-areas, respectively;
the feature extraction unit 802 is specifically configured to extract, for each sub-region, the at least one newly added building feature of the depth level from the first-time sub-image data and the second-time sub-image data through the at least one newly added building feature extraction sub-network;
acquiring newly added building scores respectively corresponding to at least one depth level according to the newly added building characteristics of the at least one depth level through the at least one first newly added building prediction sub-network;
and determining the positions of the newly added buildings corresponding to the sub-regions according to the newly added building scores respectively corresponding to at least one depth level through the second newly added building prediction sub-network.
Optionally, the method further includes:
the training data acquisition unit is used for acquiring a training data set comprising newly added building position marking information;
the network construction unit is used for constructing a neural network; the neural network comprises the newly added building feature extraction sub-network and a newly added building prediction sub-network;
and the network training unit is used for training the neural network according to the training data set.
Third embodiment
Please refer to fig. 10, which is a schematic diagram of an embodiment of a building detection system according to the present application. Since the system embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for relevant points. The system embodiments described below are merely illustrative.
A building detection system of the present embodiment, the system comprising: the system comprises a client 1000, a web service module 1001, a newly added building detection module 1002 and a file transmission service module 1003.
A client 1000, which is usually deployed in a terminal device such as a personal computer, for use by a user; the web service module 1001, which may also be referred to as an HTTP module, is generally disposed in a server, but is not limited to the server, and may be any device capable of implementing the corresponding function; the newly added building detection module 1002 is generally deployed in an embedded device, but is not limited to the embedded device, and may also be directly deployed in a server or the like; the file transfer service module 1003, which may also be referred to as an FTP module, is usually deployed in a server, but is not limited to the server, and may be any device capable of implementing the corresponding functions.
Please refer to fig. 11, which is an interaction diagram of an embodiment of the building detection system according to the present application. In this embodiment, the client 1000 is configured to send a new building detection request for a target area to the web service module 1001; the web service module 1001 is configured to receive a new building detection request for a target area sent by the client 1000, where the request may include URL addresses of first-time image data and second-time image data of the target area, and send a new building detection instruction to the new building detection module 1002; the newly added building detection module 1002 is configured to download, according to the URL address carried by the instruction, first time image data and second time image data of the target area; extracting the newly added building features of the target area at the first moment relative to the second moment from the first moment image data and the second moment image data through a newly added building feature extraction sub-network included in a newly added building detection model; determining the position of the newly added building of the target area at a first moment relative to a second moment according to the characteristics of the newly added building through a newly added building prediction sub-network included in the model; sending the new building location data file to the file transfer service module 1003; the file transmission service module 1003 is configured to receive the newly added building location data file, and send the newly added building location data to the web service module 1001; the web service module 1001 receives the newly added building location data sent by the file transmission service module 1003, generates an image including a newly added building location identifier of the target area according to the newly added building location data, and returns the image to the client, so that the client can display the image for a user to view.
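A minimal sketch of the request path on the web-service side is given below. Flask is used purely for illustration; the route, request field names and the `submit_to_detection_module` helper are assumptions rather than part of the application.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/new-building-detection", methods=["POST"])
def new_building_detection():
    """Receive the client's detection request carrying the two image URLs and
    forward a detection instruction to the detection module; the position data
    comes back later through the file transmission service."""
    body = request.get_json()
    task = {"first_moment_url": body["first_moment_url"],
            "second_moment_url": body["second_moment_url"]}
    submit_to_detection_module(task)        # hypothetical dispatch to module 1002
    return jsonify({"status": "accepted"})
```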
In an example, the newly added building detection module is specifically configured to divide the first time image data and the second time image data into first time sub-image data and second time sub-image data respectively corresponding to a plurality of sub-areas; aiming at each subregion, extracting at least one newly added building feature of at least one depth grade of the subregion at a first time relative to a second time from the first time subimage data and the second time subimage data through at least one newly added building feature extraction subnetwork included by a newly added building detection model; acquiring newly added building scores respectively corresponding to at least one depth level according to the newly added building characteristics of the at least one depth level through at least one first newly added building prediction sub-network included by the model; and determining the positions of the newly added buildings corresponding to the sub-regions according to the scores of the newly added buildings respectively corresponding to at least one depth level through a second newly added building prediction sub-network included by the model.
In this embodiment, the web service module 1001 and the file transfer service module 1003 are deployed in a server, the newly added building detection module 1002 is deployed in an embedded device, and the embedded device is installed in the server to form an all-in-one machine system. The embedded device has advantages in cost and portability. By dividing the image of the target area into images of a plurality of sub-areas, the input dimensions of the newly added building detection model are limited to a small value (such as 112 × 112 × 3 × 2), and only small 1 × 1 and 3 × 3 convolutions are used, so the parameters of the newly added building detection model are kept within 100M; finally, the features of each layer are fused by a 1 × 1 convolution, so the model can efficiently reuse the newly added building features of every depth level with few parameters. In both model size and model effect, the model can therefore be deployed on the embedded device.
In a specific implementation, the all-in-one machine system may include: 1) an x86-architecture server, which mainly provides the FTP service, the HTTP service and a visualization module; the FTP service receives the processed result from the newly added building detection module 1002 and sends it to the HTTP service, and the final result is displayed to the client user after processing by the visualization module within the HTTP service; the HTTP service receives the user request and submits it to the algorithm service (the newly added building detection module 1002) for processing; the visualization module mainly overlays the mask produced by the newly added building detection module 1002 onto the original remote sensing image of the target area, and also parses the longitude and latitude information of the original image; 2) an ARM64 embedded device with a GPU, which provides the specific newly added building detection service described in the first embodiment and returns the processing result to the server, the building detection method being a neural-network segmentation method adapted to the hardware conditions of the embedded device.
In a specific implementation, the server and the embedded device can communicate through callback results. The newly added building detection module 1002 provides a callback module that interacts with the server and returns a success callback when the newly added building detection result has been transmitted to the server successfully, or a failure callback otherwise. In this embodiment, the service provided by the newly added building detection module 1002 mainly comprises two modules, pack and dispatch; the dispatch module is used to invoke and access the pack module, the execution subject of the building detection method is encapsulated into three modules, namely an initialization module, a processing module and a release module, and the whole can be called by pack as a plug-in. The calling behaviour is as follows: initialization is performed first and the newly added building detection model is loaded; after initialization and before release, the processing module continuously responds to newly added building detection requests; and the release module is called when the service is stopped.
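The initialize/process/release lifecycle could look roughly like the sketch below; the class name and the loader and inference helpers are hypothetical, and only the three-stage calling behaviour comes from the description.

```python
class NewBuildingDetectionPlugin:
    """Plug-in wrapper: load the model once, answer detection requests until
    the service stops, then release resources."""

    def initialize(self, model_path: str) -> None:
        self.model = load_detection_model(model_path)    # hypothetical model loader

    def process(self, request: dict) -> dict:
        result = run_detection(self.model, request)      # hypothetical inference call
        return {"status": "success", "result": result}

    def release(self) -> None:
        self.model = None                                # free model resources
```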
In a specific implementation, the algorithm library file can be generated on the embedded device with secure compilation to prevent decompilation, and a high-level security policy is adopted for the embedded device that provides the newly added building detection service.
In a specific implementation, the newly added building detection model file is converted into a UFF file and loaded with the TensorRT library; the forward pass is accelerated by rewriting the network through TensorRT, and at the same time the model parameters are reduced from full precision to half precision for a further speed-up.
In the system provided by this embodiment, the web service module 1001, the newly added building detection module 1002 and the file transmission service module 1003 are combined into an all-in-one machine in hardware, with the newly added building detection module 1002 deployed separately in the embedded device. This not only detects the newly added buildings in remote sensing images with a deep learning algorithm, but also reduces the GPU (graphics processing unit) hardware cost of the server; and because the newly added building detection module 1002 and the web service module 1001 are decoupled in hardware, the web service module 1001 does not need to be redeployed when the detection model is updated. The system can therefore be brought to specific application requirements through practical engineering deployment, enabling rapid deployment of the system.
In this embodiment, the client 1000 may be further specifically configured to send a new building detection request for multiple target areas to the web service module 1001, so as to implement batch detection of new buildings. The newly added building detection request can include URL addresses of the first-time image data and the second-time image data of the multiple areas, the first-time image data and the second-time image data of the multiple areas are downloaded from the URL addresses, and newly added building detection is performed on each area one by one.
The building detection system provided by the embodiment of the application obtains the first moment image data and the second moment image data of the area to be detected; extracting the newly added building features of the area to be detected at the first moment relative to the second moment from the first moment image data and the second moment image data through a newly added building feature extraction sub-network included in a newly added building detection model; determining the position of the newly added building of the area to be detected at the first moment relative to the second moment according to the characteristics of the newly added building through a newly added building prediction sub-network included by the model; by the processing mode, newly added buildings in the remote sensing images are detected based on a deep learning method, and the newly added buildings can be automatically detected from various complex scenes in the remote sensing images related to a wide area; therefore, the recall rate and the accuracy of the detection of the newly added building can be effectively improved, and the detection efficiency of the newly added building can be improved.
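As one way to picture the newly added building feature extraction sub-network and prediction sub-network mentioned above, the following is a minimal two-branch (Siamese-style) sketch in PyTorch with a single depth level; the layer sizes, the difference-based fusion and the per-pixel scoring head are illustrative assumptions, not the actual network design of this application.

```python
# Hypothetical sketch: a shared encoder extracts features from both moments; their
# difference feeds a small head that scores newly added buildings per pixel.
import torch
import torch.nn as nn

class NewBuildingFeatureExtractor(nn.Module):
    """Shared-weight encoder applied to the first-moment and second-moment images."""
    def __init__(self, in_ch: int = 3, feat_ch: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        # Newly added building feature: here simply the difference of encoded features.
        return self.encoder(x1) - self.encoder(x2)

class NewBuildingPredictor(nn.Module):
    """Maps the newly added building features to a per-pixel score map."""
    def __init__(self, feat_ch: int = 32):
        super().__init__()
        self.head = nn.Conv2d(feat_ch, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(feats))   # new-building score in [0, 1] per pixel

extractor, predictor = NewBuildingFeatureExtractor(), NewBuildingPredictor()
t1 = torch.randn(1, 3, 256, 256)   # first-moment image (random placeholder)
t2 = torch.randn(1, 3, 256, 256)   # second-moment image (random placeholder)
scores = predictor(extractor(t1, t2))   # (1, 1, 256, 256) score map
```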
In the above embodiment, a building detection system is provided; correspondingly, the application also provides a building detection device. The device corresponds to the embodiment of the system described above.
Fourth embodiment
Please refer to fig. 12, which illustrates an embodiment of a building detection apparatus according to the present application. Since the device embodiment is largely based on the method embodiment of the first embodiment, the description is relatively simple, and for related points reference may be made to the partial description of the method embodiment. The device embodiment described below is merely illustrative.
The building detection device provided by the present application includes:
the web service module 1201, configured to: receive a newly added building detection request for a target area sent by a client, and send a newly added building detection instruction to the newly added building detection module 1202; receive the newly added building position data sent by the file transmission service module 1203; and generate an image of the target area including a newly added building position identifier according to the newly added building position data, and return the image to the client;
a newly added building detection module 1202, configured to: obtain first time image data and second time image data of the target area according to the instruction; extract the newly added building features of the target area at the first time relative to the second time from the first time image data and the second time image data through a newly added building feature extraction sub-network included in a newly added building detection model; determine the position of the newly added building of the target area at the first time relative to the second time according to the newly added building features through a newly added building prediction sub-network included in the model; and send the newly added building position data file to the file transmission service module 1203;
the file transmission service module 1203, configured to receive the newly added building position data file and send the newly added building position data to the web service module 1201.
In this embodiment, the web service module 1201 and the file transmission service module 1203 are deployed in a server, the newly added building detection module 1202 is deployed in an embedded device, and the embedded device is installed in the server to form the building detection device, which may also be referred to as an all-in-one machine system.
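As one way to picture how the three modules of the device could be wired together, the following is a minimal sketch of the web service module's request flow, assuming an HTTP endpoint and injected callables for the detection step and the result rendering; the route, parameter names and helpers are assumptions and not the actual interfaces of this application.

```python
# Hypothetical web service flow: receive request -> instruct detection ->
# receive position data -> overlay position identifiers -> return image to the client.
from typing import Callable
from flask import Flask, request, send_file

def create_app(run_detection: Callable[[str, str, str], str],
               render_result: Callable[[str, str], str]) -> Flask:
    app = Flask(__name__)

    @app.route("/detect", methods=["POST"])
    def detect():
        body = request.get_json()
        # Instruct the newly added building detection module (e.g. on the embedded device).
        position_file = run_detection(body["area"], body["first_url"], body["second_url"])
        # Overlay newly added building position identifiers on the target-area image.
        image_path = render_result(body["area"], position_file)
        return send_file(image_path, mimetype="image/png")

    return app
```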
As can be seen from the foregoing embodiments, the first time image data and the second time image data of the region to be detected are obtained in the embodiments of the present application; extracting the newly added building features of the area to be detected at the first moment relative to the second moment from the first moment image data and the second moment image data through a newly added building feature extraction sub-network included in a newly added building detection model; determining the position of the newly added building of the area to be detected at the first moment relative to the second moment according to the characteristics of the newly added building through a newly added building prediction sub-network included by the model; by the processing mode, newly added buildings in the remote sensing images are detected based on a deep learning method, and the newly added buildings can be automatically detected from various complex scenes in the remote sensing images related to a wide area; therefore, the recall rate and the accuracy of the detection of the newly added building can be effectively improved, and the detection efficiency of the newly added building can be improved.
Although the present application has been described with reference to the preferred embodiments, they are not intended to limit the present application. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the scope of protection of the present application should be determined by the claims that follow.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (10)

1. A building detection method, comprising:
acquiring first moment image data and second moment image data of a to-be-detected area;
extracting the newly added building features of the area to be detected at the first moment relative to the second moment from the first moment image data and the second moment image data through a newly added building feature extraction sub-network included in a newly added building detection model;
and determining the position of the newly added building of the area to be detected at the first moment relative to the second moment according to the characteristics of the newly added building through a newly added building prediction sub-network included by the model.
2. The method of claim 1, wherein
the newly-added building detection model comprises: at least one newly added building feature extraction sub-network, at least one first newly added building prediction sub-network, and a second newly added building prediction sub-network;
extracting at least one newly added building feature of the to-be-detected area in at least one depth grade relative to the second moment at the first moment from the first moment image data and the second moment image data through the at least one newly added building feature extraction sub-network;
acquiring newly added building scores respectively corresponding to at least one depth level according to the newly added building characteristics of the at least one depth level through the at least one first newly added building prediction sub-network;
and determining the positions of the newly added buildings according to the newly added building scores respectively corresponding to at least one depth level through the second newly added building prediction subnetwork.
3. The method of claim 2, wherein the input data of the newly added building feature extraction sub-network comprises the newly added building features of the previous depth level output by the previous newly added building feature extraction sub-network adjacent to the newly added building feature extraction sub-network.
4. The method of claim 2, further comprising:
dividing the first moment image data and the second moment image data into first moment sub-image data and second moment sub-image data which respectively correspond to a plurality of sub-areas;
aiming at each subregion, extracting at least one newly added building feature of a depth level from the first time subimage data and the second time subimage data through the at least one newly added building feature extraction sub-network;
acquiring newly added building scores respectively corresponding to at least one depth level according to the newly added building characteristics of the at least one depth level through the at least one first newly added building prediction sub-network;
and determining the positions of the newly added buildings corresponding to the sub-regions according to the newly added building scores respectively corresponding to at least one depth level through the second newly added building prediction sub-network.
5. The method of claim 1, further comprising:
acquiring a training data set comprising newly added building position marking information;
constructing a neural network; the neural network comprises the newly added building feature extraction sub-network and a newly added building prediction sub-network;
training the neural network according to the training data set.
6. A building detection system, comprising:
the client is used for sending a newly added building detection request aiming at the target area to the webpage service module; receiving the image including the newly added building position identification of the target area returned by the webpage service module, and displaying the image;
the webpage service module is used for receiving the request and sending a new building detection instruction to the new building detection module; receiving newly added building position data sent by the file transmission service module; generating the image according to the newly added building position data, and returning the image to the client;
the newly added building detection module is used for acquiring first moment image data and second moment image data of the target area according to the instruction; extracting the newly added building features of the target area at the first time relative to the second time from the first time image data and the second time image data through a newly added building feature extraction sub-network included in a newly added building detection model; determining the position of the newly added building of the target area at a first moment relative to a second moment according to the characteristics of the newly added building through a newly added building prediction sub-network included in the model; sending a newly added building position data file to a file transmission service module;
and the file transmission service module is used for receiving the newly added building position data file and sending the newly added building position data to the webpage service module.
7. The system of claim 6, wherein
the newly added building detection module is specifically configured to divide the first time image data and the second time image data into first time sub-image data and second time sub-image data corresponding to the multiple sub-areas, respectively; aiming at each subregion, extracting at least one newly added building feature of at least one depth grade of the subregion at a first time relative to a second time from the first time subimage data and the second time subimage data through at least one newly added building feature extraction subnetwork included by a newly added building detection model; acquiring newly added building scores respectively corresponding to at least one depth level according to the newly added building characteristics of the at least one depth level through at least one first newly added building prediction sub-network included by the model; and determining the positions of the newly added buildings corresponding to the sub-regions according to the scores of the newly added buildings respectively corresponding to at least one depth level through a second newly added building prediction sub-network included by the model.
8. A building detection apparatus, comprising:
the webpage service module is used for receiving a newly added building detection request aiming at a target area and sent by a client and sending a newly added building detection instruction to the newly added building detection module; receiving the data of the newly added building position sent by the file transmission service module; generating an image of the target area including a newly added building position identifier according to the newly added building position data, and returning the image to the client;
the newly added building detection module is used for acquiring first moment image data and second moment image data of the target area according to the instruction; extracting the newly added building features of the target area at the first time relative to the second time from the first time image data and the second time image data through a newly added building feature extraction sub-network included in a newly added building detection model; determining the position of the newly added building of the target area at a first moment relative to a second moment according to the characteristics of the newly added building through a newly added building prediction sub-network included in the model; sending a newly added building position data file to a file transmission service module;
and the file transmission service module is used for receiving the newly added building position data file and sending the newly added building position data to the webpage service module.
9. A building detection apparatus, comprising:
the image data acquisition unit is used for acquiring first moment image data and second moment image data of the area to be detected;
the feature extraction unit is used for extracting the newly added building features of the area to be detected at the first moment relative to the second moment from the first moment image data and the second moment image data through a newly added building feature extraction sub-network included in the newly added building detection model;
and the newly added building position determining unit is used for determining, through the newly added building prediction sub-network included in the model, the position of the newly added building of the area to be detected at the first moment relative to the second moment according to the newly added building features.
10. The apparatus of claim 9, wherein
the newly added building detection model comprises: at least one newly added building feature extraction sub-network, at least one first newly added building prediction sub-network respectively corresponding to the at least one newly added building feature extraction sub-network, and a second newly added building prediction sub-network;
the feature extraction unit is specifically configured to extract, through the at least one newly added building feature extraction sub-network, the newly added building feature of the at least one hierarchy level from the first time image data and the second time image data;
the newly added building position determining unit comprises a first subunit and a second subunit;
the first subunit is configured to, through the at least one first newly added building prediction sub-network, obtain, according to the newly added building features of the at least one hierarchy, newly added building scores respectively corresponding to the at least one hierarchy;
and the second subunit is configured to determine, through the second newly-added building prediction sub-network, the newly-added building position according to the newly-added building scores respectively corresponding to the at least one hierarchy.
CN201910211703.1A 2019-03-19 2019-03-19 Building detection system, method and device Active CN111738972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910211703.1A CN111738972B (en) 2019-03-19 2019-03-19 Building detection system, method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910211703.1A CN111738972B (en) 2019-03-19 2019-03-19 Building detection system, method and device

Publications (2)

Publication Number Publication Date
CN111738972A (en) 2020-10-02
CN111738972B CN111738972B (en) 2024-05-28

Family

ID=72645632

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910211703.1A Active CN111738972B (en) 2019-03-19 2019-03-19 Building detection system, method and device

Country Status (1)

Country Link
CN (1) CN111738972B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077515A (en) * 2012-12-29 2013-05-01 北方工业大学 Multi-spectral image building change detection method
CN108197583A (en) * 2018-01-10 2018-06-22 武汉大学 The building change detecting method of optimization and image structure feature is cut based on figure
CN108447057A (en) * 2018-04-02 2018-08-24 西安电子科技大学 SAR image change detection based on conspicuousness and depth convolutional network
CN108681692A (en) * 2018-04-10 2018-10-19 华南理工大学 Increase Building recognition method in a kind of remote sensing images based on deep learning newly
CN109063569A (en) * 2018-07-04 2018-12-21 北京航空航天大学 A kind of semantic class change detecting method based on remote sensing image
CN109409263A (en) * 2018-10-12 2019-03-01 武汉大学 A kind of remote sensing image city feature variation detection method based on Siamese convolutional network

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651931A (en) * 2020-12-15 2021-04-13 浙江大华技术股份有限公司 Building deformation monitoring method and device and computer equipment
CN112651931B (en) * 2020-12-15 2024-04-26 浙江大华技术股份有限公司 Building deformation monitoring method and device and computer equipment
CN112819753A (en) * 2021-01-12 2021-05-18 香港理工大学深圳研究院 Building change detection method and device, intelligent terminal and storage medium
CN112801109A (en) * 2021-04-14 2021-05-14 广东众聚人工智能科技有限公司 Remote sensing image segmentation method and system based on multi-scale feature fusion

Also Published As

Publication number Publication date
CN111738972B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN112580439B (en) Large-format remote sensing image ship target detection method and system under small sample condition
CN113780296B (en) Remote sensing image semantic segmentation method and system based on multi-scale information fusion
CN110598784B (en) Machine learning-based construction waste classification method and device
Xu et al. Object‐based mapping of karst rocky desertification using a support vector machine
CN111738972B (en) Building detection system, method and device
CN111291826B (en) Pixel-by-pixel classification method of multisource remote sensing image based on correlation fusion network
CN112084923B (en) Remote sensing image semantic segmentation method, storage medium and computing device
US20220215656A1 (en) Method, apparatus, device for image processing, and storage medium
CN111062903A (en) Automatic processing method and system for image watermark, electronic equipment and storage medium
CN112101386B (en) Text detection method, device, computer equipment and storage medium
CN112418345B (en) Method and device for quickly identifying small targets with fine granularity
Ejimuda et al. Using deep learning and computer vision techniques to improve facility corrosion risk management systems 2.0
CN115131634A (en) Image recognition method, device, equipment, storage medium and computer program product
CN114612402A (en) Method, device, equipment, medium and program product for determining object quantity
KR102521565B1 (en) Apparatus and method for providing and regenerating augmented reality service using 3 dimensional graph neural network detection
CN112668675B (en) Image processing method and device, computer equipment and storage medium
CN116630302A (en) Cell image segmentation method and device and electronic equipment
CN111881996A (en) Object detection method, computer device and storage medium
CN115019218B (en) Image processing method and processor
CN112395924A (en) Remote sensing monitoring method and device
CN116071557A (en) Long tail target detection method, computer readable storage medium and driving device
CN112347976B (en) Region extraction method and device for remote sensing satellite image, electronic equipment and medium
CN115239590A (en) Sample image generation method, device, equipment, medium and program product
CN116229280B (en) Method and device for identifying collapse sentry, electronic equipment and storage medium
CN117523345B (en) Target detection data balancing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant