CN109255334B - Remote sensing image ground feature classification method based on deep learning semantic segmentation network - Google Patents

Remote sensing image ground feature classification method based on deep learning semantic segmentation network

Info

Publication number
CN109255334B
CN109255334B (application CN201811130333.0A)
Authority
CN
China
Prior art keywords
remote sensing
layer
classification
deep learning
sensing image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811130333.0A
Other languages
Chinese (zh)
Other versions
CN109255334A (en)
Inventor
楚博策
帅通
高峰
王士成
陈金勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 54 Research Institute filed Critical CETC 54 Research Institute
Priority to CN201811130333.0A
Publication of CN109255334A
Application granted
Publication of CN109255334B
Legal status: Active

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/40 - Analysis of texture
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 - Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 - Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a remote sensing image ground feature classification method based on a deep learning semantic segmentation network. A multi-scale feature map group is constructed from texture and structural features and combined with the original image as the input of the deep learning network. In addition, an improved full convolution network structure is designed according to the DeepLab algorithm, and its parameters are trained through convolution and deconvolution. Finally, wide remote sensing images are segmented with overlap, classified, and merged to obtain the final wide-area ground feature classification result. The method efficiently and rapidly realizes pixel-level classification of various ground objects in high-resolution remote sensing images, simplifies the complex pipeline of traditional classification methods, and achieves good segmentation and classification performance.

Description

Remote sensing image ground feature classification method based on deep learning semantic segmentation network
Technical Field
The invention belongs to the technical field of intelligent classification of remote sensing images, and particularly relates to a remote sensing ground object classification method based on a full convolution semantic segmentation network, oriented to ground object interpretation requirements.
Background
Remote sensing image ground feature classification is currently widely used in military and civil applications such as land survey, satellite imagery law enforcement and regional investigation, where it has achieved good results and shows considerable market potential. As satellite payloads and data volumes grow, high-precision classification research, particularly large-area (national or global scale) surface classification, can no longer be supported by traditional manual calibration methods, whose task and demand workloads grow explosively. How to apply artificial intelligence methods to realize intelligent automatic processing of remote sensing images is therefore important work of profound significance.
The existing ground feature classification methods are as follows:
(1) Most traditional methods first segment the remote sensing image into regions by methods such as superpixel segmentation, then extract traditional features such as morphology and texture from the regions, and finally classify and merge the regions with classifiers such as SVM to form the classification result.
(2) Recent deep learning research on ground feature classification mainly performs an initial segmentation of the original image by methods such as superpixels and then classifies the segmented patches with neural networks such as CNN and DBN, thereby achieving pixel-level ground feature classification.
(3) The present invention innovatively proposes a multi-scale feature description map of image texture and structure, which is combined with the original 3-channel image to generate a high-dimensional image, enhancing the descriptive power of texture and similar features. A full convolution deep network for semantic segmentation is then improved and applied to terrain classification of high-resolution remote sensing images, integrating the segmentation and classification processes and achieving a good classification effect.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a remote sensing image ground feature classification method based on a deep learning semantic segmentation network, further improving classification precision and efficiency and reducing classification error.
The purpose of the invention is realized as follows:
a remote sensing image surface feature classification method based on a deep learning semantic segmentation network comprises the following steps:
(1) collecting high-resolution visible light remote sensing images from different payloads, labeling the ground objects in each image pixel by pixel to form a grayscale label image, and packaging each original remote sensing image with its corresponding label image to form a training set and a test set;
(2) extracting two-dimensional entropy, roughness and contrast texture features from the original remote sensing images in the training set using multi-scale windows to form a multi-scale feature map group, and extracting ground object edges from the same images with a Canny operator to form a structural feature map;
(3) constructing a deep learning full convolution semantic segmentation model based on the DeepLab idea;
(4) combining the multi-scale feature map group and structural feature map generated in step (2) with the original remote sensing image to form an input map group, using it as the input of the deep learning full convolution semantic segmentation model of step (3) for model training, and finally obtaining a model with stable parameters;
(5) segmenting the original remote sensing images to be classified in the test set, classifying the sub-images with the trained model from step (4), and merging the classification results into a wide-area detection result; when the detection results in an overlapping area conflict, the non-background pixel classification is retained to obtain the final merged result.
The full convolution semantic segmentation model in step (3) is specifically as follows:
The model is divided into a downward section and an upward section. In the downward path, the original 13 convolution layers of VGGNet are reduced to 6 according to the classification granularity, and after 6 layers of convolution and pooling of the input layer, a 16×16 feature heat map is obtained as layer 7. In the upward path, layer 7 is upsampled by interpolated deconvolution to restore it to the same size as layer 6, and the upsampled layer 7 is fused with the atrous convolution output of layer 5 to generate layer 8; layer 8 is then upsampled and fused with the atrous convolution output of layer 6 to generate layer 9, whose output is resized back to the original remote sensing image size to obtain the final classification result.
Compared with the background art, the invention has the following advantages:
1. The invention proposes a multi-scale feature map group to replace a pure RGB image as input, enhancing feature representation and guiding the feature extraction direction of the neural network.
2. A deep learning full convolution neural network is adopted, realizing an end-to-end ground feature classification task and avoiding the error accumulation caused by the multi-step pipeline of traditional methods.
3. A DeepLab network structure is applied to remote sensing ground object classification. Atrous convolution effectively solves the problem of insufficient receptive field for large-scale remote sensing ground objects, while a conditional random field (CRF) effectively optimizes boundaries and improves edge classification.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a texture feature map of the present invention.
FIG. 3 is a schematic diagram of the construction of a multi-scale feature map group according to the present invention.
FIG. 4 is a structural feature map of the present invention.
FIG. 5 is a full convolution network design diagram according to the present invention.
FIG. 6 is a diagram of the improved DeepLab network design of the present invention.
FIG. 7 is a diagram of the DeepLab network architecture of the present invention.
FIG. 8 is a narrow high-resolution image of the present invention.
FIG. 9 is a graph comparing the accuracy of the classification method of the present invention with other methods.
FIG. 10 is a comparison graph of the classification effect of the method of the present invention and other methods.
FIG. 11 is a diagram illustrating the effect of classifying wide images according to the present invention.
Detailed Description
The following describes embodiments of the present invention with reference to the accompanying drawings so that those skilled in the art can better understand the invention. It is expressly noted that in the following description, detailed descriptions of known functions and designs are omitted where they might obscure the subject matter of the invention.
FIG. 1 is a schematic block diagram of a remote sensing image surface feature classification method based on a deep semantic segmentation network according to a specific implementation of the present invention.
In this embodiment, the method for classifying the remote sensing image surface features based on the deep learning semantic segmentation network shown in fig. 1 includes the following steps:
1. data preparation
Data preparation comprises collecting and labeling images. High-resolution visible light remote sensing images from different payloads are collected, and the ground objects in each image are labeled pixel by pixel to form a grayscale label image, in which gray value 0 represents background pixels and gray values 1-6 represent six ground object classes such as buildings and grassland. Each remote sensing image is packaged with its corresponding label image to form a training set and a test set.
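As an illustrative sketch only (the class names and the helper function below are hypothetical, not from the patent), the labeling convention above, 0 for background and 1-6 for six ground object classes, can be expressed as a single-channel mask:

```python
import numpy as np

# Hypothetical class-id mapping; the patent only names buildings and
# grassland among the six classes, the rest are assumptions.
CLASS_IDS = {"background": 0, "building": 1, "grass": 2, "road": 3,
             "water": 4, "forest": 5, "bare_soil": 6}

def encode_label_mask(class_rasters):
    """Merge per-class boolean rasters (dict: name -> HxW bool array)
    into one grayscale label mask; later entries overwrite earlier ones."""
    shapes = {r.shape for r in class_rasters.values()}
    assert len(shapes) == 1, "all rasters must share one shape"
    h, w = shapes.pop()
    mask = np.zeros((h, w), dtype=np.uint8)  # 0 = background
    for name, raster in class_rasters.items():
        mask[raster] = CLASS_IDS[name]
    return mask
```

The original image and this mask would then be stored together as one training (or test) sample.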
2. feature map set extraction
Feature map group extraction comprises extraction of a multi-scale texture feature map group and a structural feature map. Different ground objects differ little in some texture features (such as directionality) but differ considerably in others (such as density, complexity and brightness variation), so the invention adopts two-dimensional entropy, roughness and contrast as texture features; the extraction effect is shown in fig. 2. In addition, because the window size affects how well a texture feature describes global versus local information, the invention extracts features with multi-scale windows to form a multi-scale feature map group, ensuring comprehensive descriptive power, as shown in fig. 3. For structural features, the structure is not characterized directly; instead, the edges of the ground objects in the image are extracted with a Canny operator to raise the proportion of structural information in the network input set, as shown in fig. 4. This prevents the structural features from being buried by redundant information during deep learning training and allows abstract structural features to be extracted better in subsequent deep network feature training and extraction.
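The two-dimensional entropy channel of the multi-scale texture feature map group can be sketched roughly as follows. This is a plain-NumPy illustration, not the patent's implementation; the window sizes are placeholders, and the roughness and contrast channels would be stacked in the same way:

```python
import numpy as np

def local_entropy(img, win):
    """Two-dimensional (local) entropy of an 8-bit grayscale image over a
    win x win sliding window; one channel of the texture feature maps.
    Naive loop for clarity; real pipelines would vectorise this."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            hist = np.bincount(patch.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

def multiscale_feature_group(img, windows=(3, 7, 15)):
    """Stack the entropy map at several window sizes to form the
    multi-scale feature map group described above (window sizes are
    illustrative assumptions)."""
    return np.stack([local_entropy(img, w) for w in windows], axis=0)
```

A constant patch yields zero entropy, while textured regions such as vegetation or built-up areas yield high values, which is what makes the channel discriminative.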
3. Model training
The deep learning full convolution semantic segmentation model is designed following the idea of DeepLab, a derivative improvement of the full convolution network; the network design is shown in FIG. 5. The model is divided into two sections, corresponding to the downward path in DeepLab (atrous convolution extracts features while gradual downsampling extracts semantic features) and the upward path (gradual upsampling of features recovers detail). The downward path is modified from VGGNet, and its structure is shown in fig. 6. Atrous convolution is applied to the multi-channel image and feature map group, and after 6 layers of convolution and pooling, a 16×16 feature heat map is obtained as layer 7. Layer 7 is upsampled through a deconvolution layer, converting the low-resolution map to high resolution by interpolation, to restore it to the size of layer 6; the upsampled layer 7 is fused with the atrous convolution output of layer 5 to generate layer 8. Layer 8 is upsampled and then fused with the atrous convolution output of layer 6 to generate layer 9, the final classification result, as shown in fig. 7.
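The "porous convolution" in the text is commonly known as atrous (dilated) convolution. A minimal NumPy sketch of the operation (illustrative only, single channel, 'valid' padding) shows how a dilation rate enlarges the receptive field without adding parameters or reducing resolution:

```python
import numpy as np

def atrous_conv2d(x, kernel, rate):
    """Minimal single-channel 2-D atrous (dilated) convolution.
    A rate r samples the input with r-1 skipped pixels between kernel
    taps, so a k x k kernel covers an effective (k-1)*r+1 extent."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * rate + 1          # effective kernel height
    eff_w = (kw - 1) * rate + 1          # effective kernel width
    oh = x.shape[0] - eff_h + 1
    ow = x.shape[1] - eff_w + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[i:i + eff_h:rate, j:j + eff_w:rate]
            out[i, j] = (patch * kernel).sum()
    return out
```

With rate 1 this reduces to ordinary convolution; larger rates give the enlarged receptive field the invention relies on for large-scale ground objects.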
The per-pixel loss of the classification result is calculated with the multi-class softmax regression model, yielding a loss value for each target. The loss values are sorted, the first B (an empirical value) targets with the largest losses are selected as hard examples, the hard-example losses are fed back into the full convolution neural network model, and its parameters are updated by stochastic gradient descent. For each labeled remote sensing ground feature classification image, the network parameters are continuously updated according to this training process, yielding a full convolution neural network model for ground feature classification that is used in subsequent classification tasks.
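The per-pixel softmax cross-entropy and the selection of the top-B hard examples described above can be sketched as follows (a simplified NumPy illustration; the network forward pass and the SGD update itself are omitted, and B remains an empirical value):

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def hard_example_losses(logits, labels, B):
    """Per-pixel softmax cross-entropy, then keep the B pixels with the
    largest loss (the hard examples); only those losses would be fed
    back through stochastic gradient descent."""
    n = logits.shape[0]                          # n pixels
    p = softmax(logits, axis=1)
    losses = -np.log(p[np.arange(n), labels] + 1e-12)
    hard = np.argsort(losses)[::-1][:B]          # indices of top-B losses
    return losses, hard
```

Confidently misclassified pixels receive the largest losses and therefore dominate the parameter update.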
4. Ground object classification
First, the original wide remote sensing image shown in fig. 8 is segmented: assuming the resolution is X, the narrow-image size L is set and an overlapping segmentation method is adopted with 0.5×L of the shorter narrow-image side as the number of overlapping pixels. The segmented images are then classified by the depth model, and finally the classification results are merged into a wide-area detection result. When the detection results in an overlapping region conflict, the result classified as a non-background pixel is retained in the final merged result, as shown in fig. 9.
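The overlapping tiling and the non-background merge rule can be sketched as follows. This is an assumed, simplified NumPy version (square L×L tiles, a 0.5×L stride, image at least one tile in each dimension), not the patent's code:

```python
import numpy as np

def tile_coords(h, w, L):
    """Top-left corners of overlapping LxL tiles with a 0.5*L stride,
    i.e. 0.5*L overlapping pixels between adjacent tiles."""
    step = L // 2
    ys = list(range(0, h - L + 1, step))
    xs = list(range(0, w - L + 1, step))
    if ys[-1] != h - L: ys.append(h - L)   # cover the bottom edge
    if xs[-1] != w - L: xs.append(w - L)   # cover the right edge
    return [(y, x) for y in ys for x in xs]

def merge_tiles(h, w, L, tile_results):
    """Merge classified tiles ((y, x), LxL label array) into the wide
    result; where overlapping tiles disagree, a non-background
    (non-zero) label wins over background."""
    out = np.zeros((h, w), dtype=np.uint8)
    for (y, x), res in tile_results:
        region = out[y:y + L, x:x + L]
        out[y:y + L, x:x + L] = np.where(res > 0, res, region)
    return out
```

A tile that sees a ground object only partially may label it background; the merge rule lets the overlapping tile that saw it fully supply the label instead.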
In order to verify the effectiveness of the method, a model is first trained on a self-built data set, and comparative verification of the ground object classification effect is then carried out on remote sensing images acquired in complex scenes. In this embodiment, the TensorFlow framework is selected to implement the DeepLab network architecture; model parameters are initialized and trained according to the size of the data set and the classes of the ground feature classification task, finally yielding a model for ground feature classification.
The invention realizes end-to-end remote sensing ground feature classification and uses the kappa coefficient and the intersection-over-union (IoU) as evaluation metrics. The kappa coefficient is a comprehensive consistency measure that accounts for pixels of one ground feature class being assigned to other classes, and the IoU is the ratio of the number of correctly classified pixels of a class to the union of the pixels predicted as that class and the pixels actually belonging to it. The final kappa coefficient of the invention is 83% and the IoU is 81%, a large improvement over other deep learning methods such as VGG (kappa 75%, IoU 71%), ResNet-50 (kappa 81%, IoU 78%), ResNet-101 (kappa 77%, IoU 72%) and ResNet-152 (kappa 79%, IoU 75%), as shown in FIG. 10. A comparison of the classification effect of the different model structures is shown in FIG. 11.
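Both metrics can be computed from a pixel-level confusion matrix; a NumPy sketch (not the patent's code) is:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Pixel-level confusion matrix: rows are true labels, columns
    are predicted labels."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def kappa(cm):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance."""
    n = cm.sum()
    po = np.trace(cm) / n
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / (n * n)
    return (po - pe) / (1 - pe)

def iou_per_class(cm):
    """Per-class intersection-over-union: TP / (TP + FP + FN)."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    return tp / (tp + fp + fn)
```

The reported figures would correspond to these quantities averaged over the evaluated classes.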
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand the invention, the invention is not limited to the scope of these embodiments. Various changes apparent to those skilled in the art remain within the protection of the invention as long as they fall within the spirit and scope of the invention as defined by the appended claims, and all subject matter utilizing the inventive concept is protected.

Claims (2)

1. A remote sensing image ground feature classification method based on a deep learning semantic segmentation network, characterized by comprising the following steps:
(1) collecting high-resolution visible light remote sensing images from different payloads, labeling the ground objects in each image pixel by pixel to form a grayscale label image, and packaging each original remote sensing image with its corresponding label image to form a training set and a test set;
(2) extracting two-dimensional entropy, roughness and contrast texture features from the original remote sensing images in the training set using multi-scale windows to form a multi-scale feature map group, and extracting ground object edges from the same images with a Canny operator to form a structural feature map;
(3) constructing a deep learning full convolution semantic segmentation model based on the DeepLab idea;
(4) combining the multi-scale feature map group and structural feature map generated in step (2) with the original remote sensing image to form an input map group, using it as the input of the deep learning full convolution semantic segmentation model of step (3) for model training, and finally obtaining a model with stable parameters;
(5) segmenting the original remote sensing images to be classified in the test set, classifying the sub-images with the trained model from step (4), and merging the classification results into a wide-area detection result; when the detection results in an overlapping area conflict, the non-background pixel classification is retained to obtain the final merged result.
2. The remote sensing image ground feature classification method based on the deep learning semantic segmentation network according to claim 1, characterized in that the full convolution semantic segmentation model in step (3) is specifically as follows:
The model is divided into a downward section and an upward section. In the downward path, the original 13 convolution layers of VGGNet are reduced to 6 according to the classification granularity, and after 6 layers of convolution and pooling of the input layer, a 16×16 feature heat map is obtained as layer 7. In the upward path, layer 7 is upsampled by interpolated deconvolution to restore it to the same size as layer 6, and the upsampled layer 7 is fused with the atrous convolution output of layer 5 to generate layer 8; layer 8 is then upsampled and fused with the atrous convolution output of layer 6 to generate layer 9, whose output is resized back to the original remote sensing image size to obtain the final classification result.
CN201811130333.0A 2018-09-27 2018-09-27 Remote sensing image ground feature classification method based on deep learning semantic segmentation network Active CN109255334B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811130333.0A CN109255334B (en) 2018-09-27 2018-09-27 Remote sensing image ground feature classification method based on deep learning semantic segmentation network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811130333.0A CN109255334B (en) 2018-09-27 2018-09-27 Remote sensing image ground feature classification method based on deep learning semantic segmentation network

Publications (2)

Publication Number Publication Date
CN109255334A CN109255334A (en) 2019-01-22
CN109255334B true CN109255334B (en) 2021-12-07

Family

ID=65047834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811130333.0A Active CN109255334B (en) 2018-09-27 2018-09-27 Remote sensing image ground feature classification method based on deep learning semantic segmentation network

Country Status (1)

Country Link
CN (1) CN109255334B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101587587A (en) * 2009-07-14 2009-11-25 武汉大学 Synthetic aperture radar image segmentation method considering multi-scale Markov random fields
CN102521595A (en) * 2011-12-07 2012-06-27 中南大学 Method for extracting image region of interest based on eye movement data and bottom-layer features
CN106529447A (en) * 2016-11-03 2017-03-22 河北工业大学 Small-sample face recognition method
CN107527352A (en) * 2017-08-09 2017-12-29 中国电子科技集团公司第五十四研究所 Remote sensing Ship Target contours segmentation and detection method based on deep learning FCN networks
CN107564025A (en) * 2017-08-09 2018-01-09 浙江大学 Power equipment infrared image semantic segmentation method based on deep neural network
CN108154192A (en) * 2018-01-12 2018-06-12 西安电子科技大学 High-resolution SAR terrain classification method based on multi-scale convolution and feature fusion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10878708B2 (en) * 2017-03-03 2020-12-29 Farrokh Mohamadi Drone terrain surveillance with camera and radar sensor fusion for collision avoidance


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
From FCN to DeepLab; ZhangJunior; CSDN; 2016-09-22; pp. 1-11 *

Also Published As

Publication number Publication date
CN109255334A (en) 2019-01-22

Similar Documents

Publication Publication Date Title
CN109255334B (en) Remote sensing image ground feature classification method based on deep learning semantic segmentation network
CN110119728B (en) Remote sensing image cloud detection method based on multi-scale fusion semantic segmentation network
CN107527352B (en) Remote sensing ship target contour segmentation and detection method based on deep learning FCN network
CN109886066B (en) Rapid target detection method based on multi-scale and multi-layer feature fusion
CN110533084B (en) Multi-scale target detection method based on self-attention mechanism
CN107239751B (en) High-resolution SAR image classification method based on non-subsampled contourlet full convolution network
CN111915592B (en) Remote sensing image cloud detection method based on deep learning
CN111428781A (en) Remote sensing image ground object classification method and system
CN113887459B (en) Open-pit mining area stope change area detection method based on improved Unet +
CN107067405B (en) Remote sensing image segmentation method based on scale optimization
CN107564009B (en) Outdoor scene multi-target segmentation method based on deep convolutional neural network
CN111461083A (en) Rapid vehicle detection method based on deep learning
CN112016436A (en) Remote sensing image change detection method based on deep learning
CN113435411B (en) Improved DeepLabV3+ based open pit land utilization identification method
CN112232328A (en) Remote sensing image building area extraction method and device based on convolutional neural network
CN107944353B (en) SAR image change detection method based on contour wave BSPP network
CN112906662B (en) Method, device and equipment for detecting change of remote sensing image and storage medium
CN113239736B (en) Land coverage classification annotation drawing acquisition method based on multi-source remote sensing data
Jiang et al. Intelligent image semantic segmentation: a review through deep learning techniques for remote sensing image analysis
CN113436210B (en) Road image segmentation method fusing context progressive sampling
WO2020232942A1 (en) Method for constructing farmland image-based convolutional neural network model, and system thereof
CN116630971B (en) Wheat scab spore segmentation method based on CRF_Resunate++ network
CN114943902A (en) Urban vegetation unmanned aerial vehicle remote sensing classification method based on multi-scale feature perception network
CN115049640A (en) Road crack detection method based on deep learning
CN112861869A (en) Sorghum lodging image segmentation method based on lightweight convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant