CN110211138A - Remote sensing image segmentation method based on confidence point - Google Patents

Remote sensing image segmentation method based on confidence points

Info

Publication number
CN110211138A
CN110211138A (application CN201910494015.0A)
Authority
CN
China
Prior art keywords
image
pixel
rgb
ground object
ground object class
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910494015.0A
Other languages
Chinese (zh)
Other versions
CN110211138B (en)
Inventor
焦李成
张梦璇
黄钟键
冯雨歆
陈悉儿
屈嵘
丁静怡
张丹
李玲玲
郭雨薇
唐旭
冯志玺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201910494015.0A priority Critical patent/CN110211138B/en
Publication of CN110211138A publication Critical patent/CN110211138A/en
Application granted granted Critical
Publication of CN110211138B publication Critical patent/CN110211138B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a remote sensing image segmentation method based on confidence points, which mainly addresses the low segmentation accuracy of the prior art on high spatial resolution remote sensing images. The specific steps of the invention are as follows: (1) construct a convolutional neural network; (2) generate two training sets; (3) generate two test sets; (4) predict ground-object class labels for the test sets; (5) mark each confidence-point pixel; (6) correct the building class; (7) update the vegetation class; (8) obtain the final ground-object class labels. The invention improves the segmentation accuracy of the vegetation and building classes, and achieves good segmentation results not only on low-resolution remote sensing images but also on high-resolution remote sensing images.

Description

Remote sensing image segmentation method based on confidence points
Technical field
The invention belongs to the technical field of image processing, and further relates to a remote sensing image segmentation method based on confidence points in the field of remote sensing image processing. The invention can be used to segment high-resolution multi-band remote sensing images acquired by satellites, obtaining segmentation maps with ground-object class labels.
Background technique
Remote sensing image segmentation is the technique and process of dividing a remote sensing image into several specific regions with unique properties. In current engineering practice, deep neural network techniques such as U-Net, PSPNet, and DeepLab are widely used. Deep-learning-based segmentation methods extract remote sensing image features with a neural network and predict the class of each pixel with the trained network, finally obtaining a segmentation map with class labels. Prior-art methods achieve good segmentation results on common lower-resolution data sets, but their results on high-resolution remote sensing images, such as WorldView-3 satellite imagery, are unsatisfactory.
The patent application of Chang'an University, "Region-based multi-feature-fusion high-resolution remote sensing image segmentation method" (application number 201610643629.7, publication number CN106296680A), discloses a region-based multi-feature-fusion segmentation method for high-resolution remote sensing images. The method first performs an initial segmentation of the remote sensing image, then computes the texture-feature distance, spectral-feature distance, and shape-feature distance between each segmented region and its neighbors, and finally merges related regions using RAG and NNG methods. Although this method achieves high segmentation accuracy and high efficiency on Gaofen-2 remote sensing images with 1-meter resolution, it has a shortcoming: it segments the image using only spectral-, texture-, and shape-feature distances and does not exploit the information of all spectral bands, so the segmentation accuracy of the vegetation class is low. In addition, because of building shadows and occlusion, the segmentation accuracy of the building class is also low.
The patent application of Hohai University, "Multi-band high-resolution remote sensing image segmentation method based on gray-level co-occurrence matrices" (application number 201310563019.2, publication number CN103578110A), discloses a multi-band segmentation method based on gray-level co-occurrence matrices. The method segments each band image separately using a rainfall watershed transform and then superimposes the per-band segmentation results. Finally, fragmented regions in the over-segmented result are merged with a region-merging strategy based on multi-band spectral information. Although this method can overcome over-segmentation and under-segmentation and offers good accuracy and stability, it has shortcomings: it is tailored to three-band pan-sharpened remote sensing images of the Shanghai area of China at 2.5-meter resolution and generalizes poorly to higher-resolution imagery; moreover, because it relies on gray-level co-occurrence matrices and ignores building shadows and occlusion, the segmentation accuracy of the building class is low.
Summary of the invention
The purpose of the invention is to address the above deficiencies of the prior art by proposing a remote sensing image segmentation method based on confidence points. The method is not only efficient, but also achieves good segmentation results on high-resolution remote sensing images, especially for the vegetation and building classes.
The idea for realizing the purpose of the invention is as follows: first construct two training sets and two test sets, build LinkNet networks, set the parameters of each layer, and train the two networks; then feed the two test sets into the two trained networks, which predict ground-object class labels for the test sets; finally, mark the confidence-point pixels, update the vegetation class, and correct the building class to obtain the ground-object class labels of the segmented remote sensing image.
To achieve the above purpose, the specific steps of the present invention are as follows:
(1) Construct the LinkNet neural network:
(1a) Build a 14-layer LinkNet neural network;
(1b) Set the parameters of each layer of the LinkNet neural network;
(2) Generate two training sets:
(2a) Select at least 20,000 images from a remote sensing image data set to form a basic training set, of which 10,000 are red-green-blue (RGB) images and the remaining 10,000 are multispectral images; the ground-object class label of each pixel of every image in the basic training set is one of five labels: road, overpass, building, vegetation, and ground; each RGB image corresponds to one satellite imaging area, each multispectral image contains eight bands of information, and the satellite imaging area of each multispectral image is identical to that of the corresponding RGB image;
(2b) Using the synthesis method, synthesize all RGB images in the basic training set into enhanced RGB images, which form the first training set;
(2c) Using the synthesis method, synthesize all multispectral images in the basic training set into multi-band pseudo-color images, which form the second training set;
(3) Train the neural networks:
(3a) Input the first training set into a LinkNet network for training, obtaining the trained first segmentation network;
(3b) Input the second training set into a LinkNet network for training, obtaining the trained second segmentation network;
(4) Generate two test sets:
(4a) Select at least 6,000 images from the remote sensing image data set to form a basic test set, of which 3,000 are RGB images and the remaining 3,000 are multispectral images; the ground-object class label of each pixel of every image in the basic test set is one of five labels: road, overpass, building, vegetation, and ground; each RGB image corresponds to one satellite imaging area, each multispectral image contains eight bands of information, and the satellite imaging area of each multispectral image is identical to that of the corresponding RGB image;
(4b) Using the synthesis method, synthesize the RGB images in the basic test set into enhanced RGB images, which form the first test set;
(4c) Using the synthesis method, synthesize the multispectral images in the basic test set into multi-band pseudo-color images, which form the second test set;
(5) Predict ground-object class labels for the test sets:
(5a) Input each image of the first test set into the first segmentation network in turn, and output the ground-object class label and the ground-object class label probability value of each pixel of each image in the first test set;
(5b) Input each image of the second test set into the second segmentation network in turn, and output the ground-object class label and the ground-object class label probability value of each pixel of each image in the second test set;
(6) Mark the confidence-point pixels:
Traverse each pixel of every image in the first test set, and mark every pixel whose ground-object class label probability value is greater than 0.9 as a confidence-point pixel;
(7) Correct the building class:
Traverse each pixel in the first test set whose ground-object class label is building; within the 5 × 5 pixel neighborhood of each such pixel, change the ground-object class label of every confidence-point pixel to building, and change the ground-object class labels of the remaining pixels to ground; this yields the ground-object class label of each pixel of each image in the updated first test set;
(8) Update the vegetation class:
(8a) Multiply the mean of the Coastal-band pixel values of all pixels labeled vegetation in the second test set by 0.8 to obtain the Coastal-band threshold; multiply the mean of the Yellow-band pixel values of those pixels by 0.8 to obtain the Yellow-band threshold; multiply the mean of the NIR2-band pixel values of those pixels by 0.2 to obtain the NIR2-band threshold;
(8b) Traverse each pixel of each image in the second test set, and update to vegetation the ground-object class label of every pixel whose ground-object class label probability value is less than or equal to 0.5, whose Coastal-band pixel value is below the Coastal-band threshold, whose Yellow-band pixel value is below the Yellow-band threshold, and whose NIR2-band pixel value is above the NIR2-band threshold; the remaining pixels keep their original ground-object class labels; this yields the ground-object class label of each pixel of each image in the updated second test set;
(9) Obtain the final ground-object class labels:
(9a) Replace the vegetation class labels of the pixels of each image in the updated first test set with the vegetation class labels of the corresponding pixels in the updated second test set, obtaining the ground-object class label of each pixel of each image in the vegetation-replaced first test set;
(9b) Take the ground-object class labels of the pixels of each image in the vegetation-replaced first test set as the ground-object class labels of the segmented remote sensing image.
Compared with the prior art, the present invention has the following advantages:
First, the invention updates the vegetation ground-object class labels output by the segmentation network with three thresholds, on the Coastal, Yellow, and NIR2 bands. This overcomes the prior-art defect of segmenting remote sensing images with only spectral-feature, texture-feature, and shape-feature distances, which yields low vegetation segmentation accuracy; with the invention even scattered vegetation can be segmented, improving the segmentation accuracy of the vegetation class.
Second, the invention trains LinkNet neural networks on enhanced RGB images and multi-band pseudo-color images. This overcomes the prior-art defect of performing well only on lower-resolution common data sets while generalizing poorly to high-resolution imagery, so the invention achieves good segmentation results not only on low-resolution remote sensing images but also on high-resolution remote sensing images.
Third, the invention corrects the building ground-object class labels output by the segmentation network using confidence-point pixels. This overcomes the prior-art defect that building shadows and occlusion lower the segmentation accuracy of the building class, so the invention improves the segmentation accuracy of the building class.
Detailed description of the invention
Fig. 1 is the overall flow chart of the present invention.
Specific embodiments:
Step 1: construct the neural network.
Build a 14-layer LinkNet neural network.
The structure of the 14-layer LinkNet neural network is, in order: input layer → 1st convolutional layer → 1st max-pooling layer → 1st encoder → 2nd encoder → 3rd encoder → 4th encoder → 4th decoder → 3rd decoder → 2nd decoder → 1st decoder → 1st full convolutional layer → 2nd convolutional layer → 2nd full convolutional layer. In addition, the output of the 1st encoder is connected to the input of the 1st decoder, the output of the 2nd encoder to the input of the 2nd decoder, and the output of the 3rd encoder to the input of the 3rd decoder.
The structure of each encoder is, in order: input layer → 1st coding convolutional layer → 2nd coding convolutional layer → 3rd coding convolutional layer → 4th coding convolutional layer, where the input layer is also connected to the output of the 2nd coding convolutional layer, and the input of the 3rd coding convolutional layer is also connected to the output of the 4th coding convolutional layer. The structure of each decoder is: input layer → 1st decoding convolutional layer → 1st decoding full convolutional layer → 2nd decoding convolutional layer.
Set the parameters of each layer of the LinkNet neural network.
The parameters of each layer are set as follows: the kernel size of the 1st convolutional layer is 7×7 with stride 2; the window size of the 1st max-pooling layer is 3×3 with stride 2; the kernel sizes of the 1st and 2nd full convolutional layers are 3×3 and 2×2 respectively, both with stride 2; the kernel size of the 2nd convolutional layer is 3×3 with stride 1.
The encoder parameters are set as follows: the kernel sizes of the 1st to 4th coding convolutional layers are all 3×3, with strides 2, 1, 1, 1 respectively. The decoder parameters are set as follows: the kernel sizes of the 1st and 2nd decoding convolutional layers are both 1×1 with stride 1; the kernel size of the 1st decoding full convolutional layer is 3×3 with stride 2.
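The sketch below (in PyTorch) is one minimal reading of the architecture just described; it is not the patent's own code. The channel widths (64/128/256/512), the ReLU activations, the 1×1 shortcut used to match shapes in the encoder residual links, and the interpretation of "full convolutional layer" as a transposed convolution are all assumptions, since the patent fixes only the layer order, kernel sizes, and strides.

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Four coding conv layers (3x3, strides 2,1,1,1) with the two residual
    links named above: input -> output of conv2, input of conv3 -> output of conv4."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1)
        self.conv3 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1)
        self.conv4 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1)
        self.skip = nn.Conv2d(in_ch, out_ch, 1, stride=2)  # assumed shape-matching shortcut
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.relu(self.conv2(self.relu(self.conv1(x))) + self.skip(x))
        return self.relu(self.conv4(self.relu(self.conv3(y))) + y)

class DecoderBlock(nn.Module):
    """1x1 conv -> 3x3 transposed ('full') conv with stride 2 -> 1x1 conv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, in_ch // 4, 1)
        self.deconv = nn.ConvTranspose2d(in_ch // 4, in_ch // 4, 3, stride=2,
                                         padding=1, output_padding=1)
        self.conv2 = nn.Conv2d(in_ch // 4, out_ch, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.conv2(self.relu(self.deconv(self.relu(self.conv1(x))))))

class LinkNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=5):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, 64, 7, stride=2, padding=3)    # 1st conv: 7x7, stride 2
        self.pool = nn.MaxPool2d(3, stride=2, padding=1)              # 1st max pool: 3x3, stride 2
        self.enc1, self.enc2 = EncoderBlock(64, 64), EncoderBlock(64, 128)
        self.enc3, self.enc4 = EncoderBlock(128, 256), EncoderBlock(256, 512)
        self.dec4, self.dec3 = DecoderBlock(512, 256), DecoderBlock(256, 128)
        self.dec2, self.dec1 = DecoderBlock(128, 64), DecoderBlock(64, 64)
        self.fconv1 = nn.ConvTranspose2d(64, 32, 3, stride=2,         # 1st full conv: 3x3, stride 2
                                         padding=1, output_padding=1)
        self.conv2 = nn.Conv2d(32, 32, 3, stride=1, padding=1)        # 2nd conv: 3x3, stride 1
        self.fconv2 = nn.ConvTranspose2d(32, n_classes, 2, stride=2)  # 2nd full conv: 2x2, stride 2

    def forward(self, x):
        e1 = self.enc1(self.pool(self.conv1(x)))
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        d3 = self.dec4(e4) + e3   # encoder 3 -> decoder 3 link
        d2 = self.dec3(d3) + e2   # encoder 2 -> decoder 2 link
        d1 = self.dec2(d2) + e1   # encoder 1 -> decoder 1 link
        return self.fconv2(self.conv2(self.fconv1(self.dec1(d1))))
```

Under these assumptions, `LinkNet(in_ch=3, n_classes=5)(torch.randn(1, 3, 256, 256))` returns a (1, 5, 256, 256) tensor of per-class scores; the input side length should be a multiple of 64 so that the skip additions line up.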
Step 2: generate two training sets.
Select at least 20,000 images from a remote sensing image data set to form a basic training set, of which 10,000 are red-green-blue (RGB) images and the remaining 10,000 are multispectral images. The ground-object class label of each pixel of every image in the basic training set is one of five labels: road, overpass, building, vegetation, and ground. Each RGB image corresponds to one satellite imaging area; each multispectral image contains eight bands of information, and the satellite imaging area of each multispectral image is identical to that of the corresponding RGB image.
The eight bands of information are: the coastal band (Coastal), blue band (Blue), green band (Green), yellow band (Yellow), red band (Red), red-edge band (Red Edge), near-infrared band 1 (NIR1), and near-infrared band 2 (NIR2).
Using the synthesis method, synthesize all RGB images in the basic training set into enhanced RGB images, which form the first training set.
The enhanced RGB image combines the information of the original RGB image with the NIR2 band, strengthening the contrast and fine texture of the remote sensing image, which improves the detection of classes such as ground, building, and overpass.
Using the synthesis method, synthesize all multispectral images in the basic training set into multi-band pseudo-color images, which form the second training set.
The multi-band pseudo-color image combines three bands of the multispectral image, Coastal, Yellow, and NIR2, which highlights the vegetation class information.
The steps of the synthesis method are as follows (a code sketch follows these five steps):
First step: for each RGB image in the basic training set or basic test set, synthesize the green channel of its enhanced RGB image from the image's green channel and the corresponding NIR2 band information according to the following formula:
N = G*w + R*(1-w)
where N denotes the green channel of the enhanced RGB image, G denotes the green channel of the RGB image in the basic training set or basic test set, * denotes multiplication, w denotes the weight of the green-channel blend, which controls the degree of enhancement and is set to 0.8, and R denotes the NIR2 band of the multispectral image corresponding to the RGB image.
Second step: replace the green channel of the corresponding RGB image in the basic training set or basic test set with the green channel of each enhanced RGB image, obtaining the updated enhanced RGB image.
Third step: all enhanced RGB images form the first training set or the first test set.
Fourth step: for each multispectral image, normalize the values of the Coastal, Yellow, and NIR2 bands separately, multiply each of the three normalized values by 255, and substitute the products for the red channel R, green channel G, and blue channel B of the corresponding RGB image in the basic training set or basic test set; the updated R, G, and B channels form the multi-band pseudo-color image.
Fifth step: all multi-band pseudo-color images form the second training set or the second test set.
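A minimal NumPy sketch of the two synthesis procedures above (not the patent's code). The band order of the 8-band array, the min-max normalization, and the assumption that the multispectral bands are co-registered with the RGB image and already scaled to the 8-bit range are all assumptions for illustration.

```python
import numpy as np

COASTAL, YELLOW, NIR2 = 0, 3, 7  # assumed positions of the bands in the 8-band stack

def enhance_rgb(rgb, ms, w=0.8):
    """N = G*w + R*(1-w): blend the green channel with the NIR2 band."""
    out = rgb.astype(np.float32)
    out[..., 1] = out[..., 1] * w + ms[..., NIR2].astype(np.float32) * (1.0 - w)
    return np.clip(out, 0, 255).astype(np.uint8)

def pseudo_color(ms):
    """Normalize Coastal/Yellow/NIR2, scale by 255, and use them as R/G/B."""
    channels = []
    for b in (COASTAL, YELLOW, NIR2):
        v = ms[..., b].astype(np.float32)
        v = (v - v.min()) / max(float(v.max() - v.min()), 1e-6)  # min-max scaling assumed
        channels.append((v * 255.0).astype(np.uint8))
    return np.stack(channels, axis=-1)
```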
Step 3: train the neural networks.
Input the first training set into a LinkNet network for training, obtaining the trained first segmentation network.
Input the second training set into a LinkNet network for training, obtaining the trained second segmentation network.
LinkNet is the network structure proposed in the 2017 paper "LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation"; it mainly addresses the speed of semantic segmentation while still achieving good segmentation quality.
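A minimal training sketch for either branch, reusing the LinkNet class from the step 1 sketch. The optimizer, learning rate, loss, and batch shapes are assumptions; the patent does not specify a training recipe.

```python
import torch

model = LinkNet(in_ch=3, n_classes=5)            # as sketched in step 1
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# stand-in loader: batches of (B,3,H,W) images and (B,H,W) integer label maps
loader = [(torch.randn(2, 3, 256, 256), torch.randint(0, 5, (2, 256, 256)))]

for epoch in range(10):
    for images, masks in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), masks)      # per-pixel 5-class cross entropy
        loss.backward()
        opt.step()
```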
Step 4: generate two test sets.
Select at least 6,000 images from the remote sensing image data set to form a basic test set, of which 3,000 are RGB images and the remaining 3,000 are multispectral images. The ground-object class label of each pixel of every image in the basic test set is one of five labels: road, overpass, building, vegetation, and ground. Each RGB image corresponds to one satellite imaging area; each multispectral image contains eight bands of information, and the satellite imaging area of each multispectral image is identical to that of the corresponding RGB image.
Using the synthesis method, synthesize the RGB images in the basic test set into enhanced RGB images, which form the first test set.
Using the synthesis method, synthesize the multispectral images in the basic test set into multi-band pseudo-color images, which form the second test set.
The synthesis method is as described in step 2.
Step 5: predict ground-object class labels for the test sets.
Input each image of the first test set into the first segmentation network in turn, and output the ground-object class label and the ground-object class label probability value of each pixel of each image in the first test set.
Input each image of the second test set into the second segmentation network in turn, and output the ground-object class label and the ground-object class label probability value of each pixel of each image in the second test set.
The ground-object class label probability value is defined as follows: for each image input to a segmentation network, the network outputs for each pixel five values corresponding to road, overpass, building, vegetation, and ground; these five values are normalized, and the maximum of the five normalized values is taken as the ground-object class label probability value of that pixel.
Step 6: mark the confidence-point pixels.
Traverse each pixel of every image in the first test set, and mark every pixel whose ground-object class label probability value is greater than 0.9 as a confidence-point pixel.
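A sketch of how steps 5 and 6 could be computed per image, assuming the network's raw five-class output is available as an H×W×5 array; the patent says only that the five values are "normalized", so a softmax is assumed here.

```python
import numpy as np

def predict_with_confidence(logits, thresh=0.9):
    """logits: HxWx5 raw outputs for road/overpass/building/vegetation/ground."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # assumed softmax normalization
    prob = e / e.sum(axis=-1, keepdims=True)
    labels = prob.argmax(axis=-1)             # ground-object class label of each pixel
    conf_value = prob.max(axis=-1)            # ground-object class label probability value
    confidence_points = conf_value > thresh   # step 6: confidence-point mask
    return labels, conf_value, confidence_points
```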
Step 7: correct the building class.
Traverse each pixel in the first test set whose ground-object class label is building; within the 5 × 5 pixel neighborhood of each such pixel, change the ground-object class label of every confidence-point pixel to building, and change the ground-object class labels of the remaining pixels to ground. This yields the ground-object class label of each pixel of each image in the updated first test set.
Because the solar elevation and imaging angle differ between satellite acquisitions, shadows and occlusion appear around buildings, making the segmentation of these difficult regions inaccurate; the segmentation network therefore produces scattered results in such regions, so the building class is corrected using confidence-point pixels.
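A sketch of this correction under one plausible reading: within each 5×5 window centered on a building pixel, confidence points become building and the other window pixels become ground. The integer class codes are hypothetical.

```python
import numpy as np

BUILDING, GROUND = 2, 4  # hypothetical class encoding

def correct_buildings(labels, conf):
    """labels: HxW label map; conf: HxW boolean confidence-point mask from step 6."""
    out = labels.copy()
    h, w = labels.shape
    for y, x in zip(*np.nonzero(labels == BUILDING)):
        win = (slice(max(y - 2, 0), min(y + 3, h)),
               slice(max(x - 2, 0), min(x + 3, w)))  # 5x5 window around the pixel
        out[win][conf[win]] = BUILDING    # confidence points -> building
        out[win][~conf[win]] = GROUND     # remaining window pixels -> ground
    return out
```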
Step 8: update the vegetation class.
Multiply the mean of the Coastal-band pixel values of all pixels labeled vegetation in the second test set by 0.8 to obtain the Coastal-band threshold; multiply the mean of the Yellow-band pixel values of those pixels by 0.8 to obtain the Yellow-band threshold; multiply the mean of the NIR2-band pixel values of those pixels by 0.2 to obtain the NIR2-band threshold.
Traverse each pixel of each image in the second test set, and update to vegetation the ground-object class label of every pixel whose ground-object class label probability value is less than or equal to 0.5, whose Coastal-band pixel value is below the Coastal-band threshold, whose Yellow-band pixel value is below the Yellow-band threshold, and whose NIR2-band pixel value is above the NIR2-band threshold; the remaining pixels keep their original ground-object class labels. This yields the ground-object class label of each pixel of each image in the updated second test set.
Because segmenting vegetation areas with only the three band thresholds can be affected by noise, for example by targets whose color is similar to vegetation, the ground-object class label probability value is introduced to avoid this problem.
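A sketch of the vegetation update, reusing the assumed band indices from the synthesis sketch and a hypothetical vegetation class code; `labels` and `prob` are the per-image outputs of the second segmentation network.

```python
import numpy as np

VEGETATION = 3                   # hypothetical class code
COASTAL, YELLOW, NIR2 = 0, 3, 7  # assumed band indices, as in the synthesis sketch

def update_vegetation(labels, prob, ms):
    """labels/prob: HxW outputs of the second network; ms: HxWx8 multispectral image."""
    veg = labels == VEGETATION
    t_coastal = ms[..., COASTAL][veg].mean() * 0.8   # step 8a thresholds
    t_yellow = ms[..., YELLOW][veg].mean() * 0.8
    t_nir2 = ms[..., NIR2][veg].mean() * 0.2
    update = ((prob <= 0.5) &
              (ms[..., COASTAL] < t_coastal) &
              (ms[..., YELLOW] < t_yellow) &
              (ms[..., NIR2] > t_nir2))              # step 8b conditions
    out = labels.copy()
    out[update] = VEGETATION     # the rest keep their original labels
    return out
```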
Step 9: obtain the final ground-object class labels.
Replace the vegetation class labels of the pixels of each image in the updated first test set with the vegetation class labels of the corresponding pixels in the updated second test set, obtaining the ground-object class label of each pixel of each image in the vegetation-replaced first test set.
Take the ground-object class labels of the pixels of each image in the vegetation-replaced first test set as the ground-object class labels of the segmented remote sensing image.
Because the vegetation segmentation accuracy of each image in the second test set is higher, while the first test set segments the other classes (such as ground, building, and overpass) better, the two segmentation results are fused to obtain the final, accurate ground-object class labels of the segmented remote sensing image.
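One plausible reading of this fusion, with a hypothetical vegetation class code: wherever the second (multispectral) result says vegetation, the first (RGB) result is overwritten with vegetation, and all other pixels keep the first result's label.

```python
def fuse_vegetation(labels_rgb, labels_ms, vegetation=3):
    # labels_rgb: updated first-test-set labels; labels_ms: updated second-test-set labels
    out = labels_rgb.copy()
    out[labels_ms == vegetation] = vegetation  # adopt the second network's vegetation
    return out
```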

Claims (6)

1. A remote sensing image segmentation method based on confidence points, characterized in that ground-object class labels are predicted with neural networks, the class labels obtained from multiple views and two data modalities are fused, and the building class in the segmentation map is corrected using confidence points; the method comprises the following steps:
(1) Construct the LinkNet neural network:
(1a) Build a 14-layer LinkNet neural network;
(1b) Set the parameters of each layer of the LinkNet neural network;
(2) Generate two training sets:
(2a) Select at least 20,000 images from a remote sensing image data set to form a basic training set, of which 10,000 are red-green-blue (RGB) images and the remaining 10,000 are multispectral images; the ground-object class label of each pixel of every image in the basic training set is one of five labels: road, overpass, building, vegetation, and ground; each RGB image corresponds to one satellite imaging area, each multispectral image contains eight bands of information, and the satellite imaging area of each multispectral image is identical to that of the corresponding RGB image;
(2b) Using the synthesis method, synthesize all RGB images in the basic training set into enhanced RGB images, which form the first training set;
(2c) Using the synthesis method, synthesize all multispectral images in the basic training set into multi-band pseudo-color images, which form the second training set;
(3) Train the neural networks:
(3a) Input the first training set into a LinkNet network for training, obtaining the trained first segmentation network;
(3b) Input the second training set into a LinkNet network for training, obtaining the trained second segmentation network;
(4) Generate two test sets:
(4a) Select at least 6,000 images from the remote sensing image data set to form a basic test set, of which 3,000 are RGB images and the remaining 3,000 are multispectral images; the ground-object class label of each pixel of every image in the basic test set is one of five labels: road, overpass, building, vegetation, and ground; each RGB image corresponds to one satellite imaging area, each multispectral image contains eight bands of information, and the satellite imaging area of each multispectral image is identical to that of the corresponding RGB image;
(4b) Using the synthesis method, synthesize the RGB images in the basic test set into enhanced RGB images, which form the first test set;
(4c) Using the synthesis method, synthesize the multispectral images in the basic test set into multi-band pseudo-color images, which form the second test set;
(5) Predict ground-object class labels for the test sets:
(5a) Input each image of the first test set into the first segmentation network in turn, and output the ground-object class label and the ground-object class label probability value of each pixel of each image in the first test set;
(5b) Input each image of the second test set into the second segmentation network in turn, and output the ground-object class label and the ground-object class label probability value of each pixel of each image in the second test set;
(6) Mark the confidence-point pixels:
Traverse each pixel of every image in the first test set, and mark every pixel whose ground-object class label probability value is greater than 0.9 as a confidence-point pixel;
(7) Correct the building class:
Traverse each pixel in the first test set whose ground-object class label is building; within the 5 × 5 pixel neighborhood of each such pixel, change the ground-object class label of every confidence-point pixel to building, and change the ground-object class labels of the remaining pixels to ground; this yields the ground-object class label of each pixel of each image in the updated first test set;
(8) Update the vegetation class:
(8a) Multiply the mean of the Coastal-band pixel values of all pixels labeled vegetation in the second test set by 0.8 to obtain the Coastal-band threshold; multiply the mean of the Yellow-band pixel values of those pixels by 0.8 to obtain the Yellow-band threshold; multiply the mean of the NIR2-band pixel values of those pixels by 0.2 to obtain the NIR2-band threshold;
(8b) Traverse each pixel of each image in the second test set, and update to vegetation the ground-object class label of every pixel whose ground-object class label probability value is less than or equal to 0.5, whose Coastal-band pixel value is below the Coastal-band threshold, whose Yellow-band pixel value is below the Yellow-band threshold, and whose NIR2-band pixel value is above the NIR2-band threshold; the remaining pixels keep their original ground-object class labels; this yields the ground-object class label of each pixel of each image in the updated second test set;
(9) Obtain the final ground-object class labels:
(9a) Replace the vegetation class labels of the pixels of each image in the updated first test set with the vegetation class labels of the corresponding pixels in the updated second test set, obtaining the ground-object class label of each pixel of each image in the vegetation-replaced first test set;
(9b) Take the ground-object class labels of the pixels of each image in the vegetation-replaced first test set as the ground-object class labels of the segmented remote sensing image.
2. The remote sensing image segmentation method based on confidence points according to claim 1, characterized in that the structure of the 14-layer LinkNet neural network in step (1a) is, in order: input layer → 1st convolutional layer → 1st max-pooling layer → 1st encoder → 2nd encoder → 3rd encoder → 4th encoder → 4th decoder → 3rd decoder → 2nd decoder → 1st decoder → 1st full convolutional layer → 2nd convolutional layer → 2nd full convolutional layer; the output of the 1st encoder is connected to the input of the 1st decoder, the output of the 2nd encoder to the input of the 2nd decoder, and the output of the 3rd encoder to the input of the 3rd decoder;
The structure of each encoder is, in order: input layer → 1st coding convolutional layer → 2nd coding convolutional layer → 3rd coding convolutional layer → 4th coding convolutional layer, where the input layer is also connected to the output of the 2nd coding convolutional layer and the input of the 3rd coding convolutional layer is also connected to the output of the 4th coding convolutional layer; the structure of each decoder is: input layer → 1st decoding convolutional layer → 1st decoding full convolutional layer → 2nd decoding convolutional layer.
3. The remote sensing image segmentation method based on confidence points according to claim 1, characterized in that the parameters of each layer of the LinkNet neural network set in step (1b) are as follows: the kernel size of the 1st convolutional layer is 7×7 with stride 2; the window size of the 1st max-pooling layer is 3×3 with stride 2; the kernel sizes of the 1st and 2nd full convolutional layers are 3×3 and 2×2 respectively, both with stride 2; the kernel size of the 2nd convolutional layer is 3×3 with stride 1;
The encoder parameters are set as follows: the kernel sizes of the 1st to 4th coding convolutional layers are all 3×3, with strides 2, 1, 1, 1 respectively; the decoder parameters are set as follows: the kernel sizes of the 1st and 2nd decoding convolutional layers are both 1×1 with stride 1, and the kernel size of the 1st decoding full convolutional layer is 3×3 with stride 2.
4. The remote sensing image segmentation method based on confidence points according to claim 1, characterized in that the eight bands of information in steps (2a) and (4a) are: the coastal band (Coastal), blue band (Blue), green band (Green), yellow band (Yellow), red band (Red), red-edge band (Red Edge), near-infrared band 1 (NIR1), and near-infrared band 2 (NIR2).
5. The remote sensing image segmentation method based on confidence points according to claim 1, characterized in that the synthesis method in steps (2b), (2c), (4b), and (4c) comprises the following steps:
First step: for each RGB image in the basic training set or basic test set, synthesize the green channel of its enhanced RGB image from the image's green channel and the corresponding NIR2 band information according to the following formula:
N = G*w + R*(1-w)
where N denotes the green channel of the enhanced RGB image, G denotes the green channel of the RGB image in the basic training set or basic test set, * denotes multiplication, w denotes the weight of the green-channel blend, which controls the degree of enhancement and is set to 0.8, and R denotes the NIR2 band of the multispectral image corresponding to the RGB image;
Second step: replace the green channel of the corresponding RGB image in the basic training set or basic test set with the green channel of each enhanced RGB image, obtaining the updated enhanced RGB image;
Third step: all enhanced RGB images form the first training set or the first test set;
Fourth step: for each multispectral image, normalize the values of the Coastal, Yellow, and NIR2 bands separately, multiply each of the three normalized values by 255, and substitute the products for the red channel R, green channel G, and blue channel B of the corresponding RGB image in the basic training set or basic test set; the updated R, G, and B channels form the multi-band pseudo-color image;
Fifth step: all multi-band pseudo-color images form the second training set or the second test set.
6. The remote sensing image segmentation method based on confidence points according to claim 1, characterized in that the ground-object class label probability value in steps (5a) and (5b) refers to the following: for each image input to a segmentation network, the network outputs for each pixel five values corresponding to road, overpass, building, vegetation, and ground; these five values are normalized, and the maximum of the five normalized values is taken as the ground-object class label probability value of that pixel.
CN201910494015.0A 2019-06-08 2019-06-08 Remote sensing image segmentation method based on confidence points Active CN110211138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910494015.0A CN110211138B (en) 2019-06-08 2019-06-08 Remote sensing image segmentation method based on confidence points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910494015.0A CN110211138B (en) 2019-06-08 2019-06-08 Remote sensing image segmentation method based on confidence points

Publications (2)

Publication Number Publication Date
CN110211138A true CN110211138A (en) 2019-09-06
CN110211138B CN110211138B (en) 2022-12-02

Family

ID=67791544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910494015.0A Active CN110211138B (en) 2019-06-08 2019-06-08 Remote sensing image segmentation method based on confidence points

Country Status (1)

Country Link
CN (1) CN110211138B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401275A (en) * 2020-03-20 2020-07-10 内蒙古工业大学 Information processing method and device for identifying grassland edge
CN112348823A (en) * 2020-09-22 2021-02-09 陕西土豆数据科技有限公司 Object-oriented high-resolution remote sensing image segmentation algorithm
CN113139550A (en) * 2021-03-29 2021-07-20 山东科技大学 Remote sensing image coastline extraction method based on deep semantic segmentation network
CN113284171A (en) * 2021-06-18 2021-08-20 成都天巡微小卫星科技有限责任公司 Vegetation height analysis method and system based on satellite remote sensing stereo imaging
CN113538280A (en) * 2021-07-20 2021-10-22 江苏天汇空间信息研究院有限公司 Remote sensing image splicing method for removing color lines and incomplete images based on matrix binarization
EP3968286A3 (en) * 2021-01-20 2022-07-13 Beijing Baidu Netcom Science Technology Co., Ltd. Method, apparatus, electronic device and storage medium for detecting change of building

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130343641A1 (en) * 2012-06-22 2013-12-26 Google Inc. System and method for labelling aerial images
CN103971115A (en) * 2014-05-09 2014-08-06 中国科学院遥感与数字地球研究所 Automatic extraction method for newly-increased construction land image spots in high-resolution remote sensing images based on NDVI and PanTex index
CN105427309A (en) * 2015-11-23 2016-03-23 中国地质大学(北京) Multiscale hierarchical processing method for extracting object-oriented high-spatial resolution remote sensing information
CN106683112A (en) * 2016-10-10 2017-05-17 中国交通通信信息中心 High-resolution image-based road region building change extraction method
CN106709948A (en) * 2016-12-21 2017-05-24 浙江大学 Quick binocular stereo matching method based on superpixel segmentation
CN107392130A (en) * 2017-07-13 2017-11-24 西安电子科技大学 Classification of Multispectral Images method based on threshold adaptive and convolutional neural networks
CN107392925A (en) * 2017-08-01 2017-11-24 西安电子科技大学 Remote sensing image terrain classification method based on super-pixel coding and convolutional neural networks
CN108573276A (en) * 2018-03-12 2018-09-25 浙江大学 A kind of change detecting method based on high-resolution remote sensing image

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130343641A1 (en) * 2012-06-22 2013-12-26 Google Inc. System and method for labelling aerial images
CN103971115A (en) * 2014-05-09 2014-08-06 中国科学院遥感与数字地球研究所 Automatic extraction method for newly-increased construction land image spots in high-resolution remote sensing images based on NDVI and PanTex index
CN105427309A (en) * 2015-11-23 2016-03-23 中国地质大学(北京) Multiscale hierarchical processing method for extracting object-oriented high-spatial resolution remote sensing information
CN106683112A (en) * 2016-10-10 2017-05-17 中国交通通信信息中心 High-resolution image-based road region building change extraction method
CN106709948A (en) * 2016-12-21 2017-05-24 浙江大学 Quick binocular stereo matching method based on superpixel segmentation
CN107392130A (en) * 2017-07-13 2017-11-24 西安电子科技大学 Classification of Multispectral Images method based on threshold adaptive and convolutional neural networks
CN107392925A (en) * 2017-08-01 2017-11-24 西安电子科技大学 Remote sensing image terrain classification method based on super-pixel coding and convolutional neural networks
CN108573276A (en) * 2018-03-12 2018-09-25 浙江大学 A kind of change detecting method based on high-resolution remote sensing image

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401275A (en) * 2020-03-20 2020-07-10 内蒙古工业大学 Information processing method and device for identifying grassland edge
CN111401275B (en) * 2020-03-20 2022-11-25 内蒙古工业大学 Information processing method and device for identifying grassland edge
CN112348823A (en) * 2020-09-22 2021-02-09 陕西土豆数据科技有限公司 Object-oriented high-resolution remote sensing image segmentation algorithm
EP3968286A3 (en) * 2021-01-20 2022-07-13 Beijing Baidu Netcom Science Technology Co., Ltd. Method, apparatus, electronic device and storage medium for detecting change of building
CN113139550A (en) * 2021-03-29 2021-07-20 山东科技大学 Remote sensing image coastline extraction method based on deep semantic segmentation network
CN113284171A (en) * 2021-06-18 2021-08-20 成都天巡微小卫星科技有限责任公司 Vegetation height analysis method and system based on satellite remote sensing stereo imaging
CN113538280A (en) * 2021-07-20 2021-10-22 江苏天汇空间信息研究院有限公司 Remote sensing image splicing method for removing color lines and incomplete images based on matrix binarization

Also Published As

Publication number Publication date
CN110211138B (en) 2022-12-02

Similar Documents

Publication Publication Date Title
CN110211138A (en) Remote sensing image segmentation method based on confidence point
KR101728137B1 (en) Method for land-cover item images classification by using satellite picture and GIS
CN107392130B (en) Multispectral image classification method based on threshold value self-adaption and convolutional neural network
CN108229425A (en) A kind of identifying water boy method based on high-resolution remote sensing image
CN109919206A (en) A kind of remote sensing image ground mulching classification method based on complete empty convolutional neural networks
CN105046648A (en) Method for constructing high temporal-spatial remote sensing data
Witharana et al. Understanding the synergies of deep learning and data fusion of multispectral and panchromatic high resolution commercial satellite imagery for automated ice-wedge polygon detection
CN102012528A (en) Hyperspectral remote sensing oil-gas exploration method for vegetation sparse area
CN107341795A (en) A kind of high spatial resolution remote sense image method for detecting automatic variation of Knowledge driving
CN110097101A (en) A kind of remote sensing image fusion and seashore method of tape sorting based on improvement reliability factor
CN112991351B (en) Remote sensing image semantic segmentation method and device and storage medium
CN113239830A (en) Remote sensing image cloud detection method based on full-scale feature fusion
CN103927558A (en) Winter wheat remote sensing recognition method based on hardness change detection
CN107992856A (en) High score remote sensing building effects detection method under City scenarios
CN110879992A (en) Grassland surface covering object classification method and system based on transfer learning
CN110458208A (en) Hyperspectral image classification method based on information measure
CN107688776A (en) A kind of urban water-body extracting method
CN105512619A (en) Layered-knowledge-based impervious surface information extraction method
CN114119617A (en) Method for extracting inland salt lake artemia zone of multispectral satellite remote sensing image
Wang et al. Simultaneous extracting area and quantity of agricultural greenhouses in large scale with deep learning method and high-resolution remote sensing images
CN102231190B (en) Automatic extraction method for alluvial-proluvial fan information
CN116343058A (en) Global collaborative fusion-based multispectral and panchromatic satellite image earth surface classification method
Moosavi et al. Application of Taguchi method to satellite image fusion for object-oriented mapping of Barchan dunes
Guo et al. Object-Level Hybrid Spatiotemporal Fusion: Reaching a Better Trade-Off Among Spectral Accuracy, Spatial Accuracy and Efficiency
Mahmoud Machine learning and pan-sharpening of Sentinel-2 data for land use mapping in arid regions: a case study in Fayoum, Egypt

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant