CN107239775A - Terrain classification method and device - Google Patents


Info

Publication number
CN107239775A
CN107239775A
Authority
CN
China
Prior art keywords
image
layer
terrain classification
attribute
profile features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710628776.1A
Other languages
Chinese (zh)
Inventor
李树涛 (Li Shutao)
郝乔波 (Hao Qiaobo)
康旭东 (Kang Xudong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University filed Critical Hunan University
Priority to CN201710628776.1A priority Critical patent/CN107239775A/en
Publication of CN107239775A publication Critical patent/CN107239775A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/176 Urban or other man-made structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of remote sensing technology and provides a terrain classification method and device. The method includes: extracting multiple attribute profile features from a hyperspectral image to obtain a first image; extracting attribute profile features from a laser scanning image to obtain a second image; fusing the first image and the second image to obtain a third image; and performing feature extraction and classification on the third image with a preset convolutional neural network to obtain a terrain classification result. The invention makes the rich spectral information of the hyperspectral image and the accurate elevation information of the laser scanning image complement each other, solving the problem that terrain classification is limited when spectral information alone is inaccurate. In addition, using a convolutional neural network for feature extraction and classification reduces the number of training samples required while improving the accuracy of terrain classification.

Description

Terrain classification method and device
Technical field
The present invention relates to the field of remote sensing technology, and in particular to a terrain classification method and device.
Background technology
Hyperspectral imaging is a cutting-edge technology in the remote sensing field; it can acquire hundreds of contiguous spectral bands. Compared with panchromatic and multispectral remote sensing images, hyperspectral images have higher spectral resolution and can provide richer ground-object information, so that ground objects can be recognized better. However, hyperspectral images cannot overcome problems such as building shadows and cloud cover in complex urban areas. In addition, when classifying ground objects in more complex urban areas, hyperspectral images cannot effectively distinguish different objects made of the same material. It is therefore necessary to extract spatial-spectral features with greater separability.
Spatial structure features based on morphological attribute profiles can effectively extract multi-scale structural information from hyperspectral images. However, owing to the complexity and diversity of hyperspectral images, a single feature provides only a limited description of the image; moreover, in recognition and classification of large-scene hyperspectral images it is difficult to obtain enough training samples, and the computational cost is high.
Summary of the invention
It is an object of the present invention to provide a terrain classification method and device to address the above problems.
To achieve these goals, the technical scheme adopted by the embodiments of the present invention is as follows:
In a first aspect, the invention provides a terrain classification method. The method includes: extracting multiple attribute profile features from a hyperspectral image to obtain a first image; extracting attribute profile features from a laser scanning image to obtain a second image; fusing the first image and the second image to obtain a third image; and performing feature extraction and classification on the third image with a preset convolutional neural network to obtain a terrain classification result.
In a second aspect, the invention provides a terrain classification device. The device includes a first extraction module, a second extraction module, an image fusion module and a terrain classification module. The first extraction module extracts multiple attribute profile features from a hyperspectral image to obtain a first image; the second extraction module extracts attribute profile features from a laser scanning image to obtain a second image; the image fusion module fuses the first image and the second image to obtain a third image; and the terrain classification module performs feature extraction and classification on the third image with a preset convolutional neural network to obtain a terrain classification result.
Compared with the prior art, the invention has the following advantages. In the terrain classification method and device provided by the present invention, the multiple attribute profile features of the hyperspectral image and the attribute profile features of the laser scanning image are fused, so that the rich spectral information of the hyperspectral image and the accurate elevation information of the laser scanning image complement each other; this solves the prior-art problem that terrain classification is limited when spectral information is inaccurate. In addition, using a convolutional neural network for feature extraction and classification reduces the number of training samples required while improving the accuracy of terrain classification.
To make the above objects, features and advantages of the present invention more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments are briefly introduced below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and should not be regarded as limiting its scope; those of ordinary skill in the art may derive other related drawings from these drawings without creative effort.
Fig. 1 shows a block diagram of the electronic equipment provided by an embodiment of the present invention.
Fig. 2 shows a flow chart of the terrain classification method provided by an embodiment of the present invention.
Fig. 3 is a flow chart of the sub-steps of step S101 shown in Fig. 2.
Fig. 4 is a flow chart of the sub-steps of sub-step S1013 shown in Fig. 3.
Fig. 5 is a flow chart of the sub-steps of step S104 shown in Fig. 2.
Fig. 6 is a flow chart of the sub-steps of sub-step S1042 shown in Fig. 5.
Fig. 7 is a flow chart of the sub-steps of sub-step S1043 shown in Fig. 5.
Fig. 8 shows a block diagram of the terrain classification device provided by an embodiment of the present invention.
Fig. 9 is a block diagram of the first extraction module in the terrain classification device shown in Fig. 8.
Fig. 10 is a block diagram of the execution unit in the first extraction module shown in Fig. 9.
Fig. 11 is a block diagram of the terrain classification module in the terrain classification device shown in Fig. 8.
Fig. 12 is a block diagram of the image feature extraction unit in the terrain classification module shown in Fig. 11.
Fig. 13 is a block diagram of the image feature classification unit in the terrain classification module shown in Fig. 11.
Reference numerals: 100 - electronic equipment; 101 - memory; 102 - storage controller; 103 - processor; 200 - terrain classification device; 201 - first extraction module; 2011 - image acquisition unit; 2012 - principal component analysis unit; 2013 - execution unit; 20131 - feature acquiring unit; 20132 - feature superposition unit; 2014 - first image obtaining unit; 202 - second extraction module; 203 - image fusion module; 204 - terrain classification module; 2041 - image block acquiring unit; 2042 - image feature extraction unit; 20421 - first sub-execution unit; 20422 - second sub-execution unit; 20423 - image feature obtaining unit; 2043 - image feature classification unit; 20431 - feature vector obtaining unit; 20432 - probability value computing unit; 20433 - object class obtaining unit; 2044 - terrain classification result obtaining unit.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention rather than all of them. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of configurations. Therefore, the following detailed description of the embodiments of the invention provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings. In the description of the present invention, the terms "first", "second" and the like are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
Referring to Fig. 1, Fig. 1 shows a block diagram of the electronic equipment 100 provided by a preferred embodiment of the present invention. The electronic equipment 100 may be, but is not limited to, a desktop computer, a notebook computer, a smart phone, a tablet computer, a laptop portable computer, a vehicle-mounted computer, a personal digital assistant (PDA), a wearable mobile terminal, or the like. The electronic equipment 100 includes a terrain classification device 200, a memory 101, a storage controller 102 and a processor 103.
The memory 101, the storage controller 102 and the processor 103 are electrically connected to one another, directly or indirectly, to realize data transmission or interaction. For example, these elements may be electrically connected to one another through one or more communication buses or signal lines. The terrain classification device 200 includes at least one software function module that may be stored in the memory 101 in the form of software or firmware, or solidified in the operating system (OS) of the electronic equipment 100. The processor 103 is configured to execute the executable modules stored in the memory 101, such as the software function modules or computer programs included in the terrain classification device 200.
The memory 101 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or the like. The memory 101 is used to store a program, and the processor 103 executes the program after receiving an execution instruction. The method performed by the server defined by the flow disclosed in any embodiment of the present invention may be applied to, or implemented by, the processor 103.
The processor 103 may be an integrated circuit chip with signal processing capability. The processor 103 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), a speech processor, a video processor and the like; it may also be a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It can implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or the processor 103 may be any conventional processor.
First embodiment
Referring to Fig. 2, Fig. 2 shows a flow chart of the terrain classification method provided by a preferred embodiment of the present invention. The terrain classification method comprises the following steps:
Step S101: extract multiple attribute profile features from the hyperspectral image to obtain a first image.
In the embodiments of the present invention, the first image may be the extended morphological multi-attribute profile features of the hyperspectral image, which can be extracted using the morphological attribute profile operation. The extended morphological multi-attribute profiles can be computed from the hyperspectral image for several morphological attributes, including an area attribute, a moment-of-inertia attribute, a standard-deviation attribute, and so on.
As one embodiment, the method of extracting multiple attribute profile features from the hyperspectral image to obtain the first image may comprise the following steps:
First, the first three principal component images of the hyperspectral image are extracted using principal component analysis and denoted I1, I2, I3.
Second, an ordered set of threshold values λ is chosen, λ ∈ {100, 500, 1000, 5000}, and compared with the area attribute of the hyperspectral image. Opening and closing operations are applied to each of the first three principal component images I1, I2, I3, yielding the attribute profile features of the area attribute of each principal component image Ij (j = 1, 2, 3), which can be obtained according to the following formula:
AP(Ij) = {φ^{λ4}(Ij), φ^{λ3}(Ij), φ^{λ2}(Ij), φ^{λ1}(Ij), Ij, γ^{λ1}(Ij), γ^{λ2}(Ij), γ^{λ3}(Ij), γ^{λ4}(Ij)}
where γ^{λ} denotes the opening operation, which can be a thinning operation, and φ^{λ} denotes the closing operation, which can be a thickening operation.
Third, the attribute profile features of the first three principal component images I1, I2, I3 are stacked to obtain the attribute profile features of the area attribute of the hyperspectral image according to the formula EAP = {AP(I1), AP(I2), AP(I3)}.
Fourth, the attribute profile features of the area attribute, the moment-of-inertia attribute and the standard-deviation attribute are stacked to obtain the extended morphological multi-attribute profile features of the hyperspectral image, i.e. the first image, using the formula EMAP = {EAP1, EAP'2, …, EAP'n}, where EAP'k denotes the extended attribute profile of the k-th attribute with the duplicated principal component images removed.
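The four steps above can be sketched in Python. This is a minimal illustration under stated assumptions, not the patented implementation: it uses a plain greyscale opening/closing built from min/max filters as a stand-in for true attribute (area) filters, window sizes in place of the area thresholds λ, and a small random cube in place of a real hyperspectral image; all names are illustrative.

```python
import numpy as np

def min_filter(img, k):
    """Greyscale erosion with a k x k window (edge-padded, odd k)."""
    p = np.pad(img, k // 2, mode="edge")
    h, w = img.shape
    return np.min([p[i:i + h, j:j + w] for i in range(k) for j in range(k)], axis=0)

def max_filter(img, k):
    """Greyscale dilation with a k x k window (edge-padded, odd k)."""
    p = np.pad(img, k // 2, mode="edge")
    h, w = img.shape
    return np.max([p[i:i + h, j:j + w] for i in range(k) for j in range(k)], axis=0)

def opening(img, k):   # thinning-style operation: erode then dilate
    return max_filter(min_filter(img, k), k)

def closing(img, k):   # thickening-style operation: dilate then erode
    return min_filter(max_filter(img, k), k)

def attribute_profile(pc, sizes=(3, 5, 7, 9)):
    """AP(I) = {closings (coarse to fine), I, openings (fine to coarse)}."""
    closings = [closing(pc, k) for k in reversed(sizes)]
    openings = [opening(pc, k) for k in sizes]
    return np.stack(closings + [pc] + openings, axis=-1)

def pca(cube, n_components=3):
    """PCA on an (H, W, B) cube via eigendecomposition of the band covariance."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(float)
    flat -= flat.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(flat, rowvar=False))
    pcs = flat @ vecs[:, ::-1][:, :n_components]   # leading components first
    return pcs.reshape(h, w, n_components)

rng = np.random.default_rng(0)
cube = rng.random((32, 32, 50))                    # toy "hyperspectral" cube
pcs = pca(cube, 3)                                 # I1, I2, I3
eap = np.concatenate([attribute_profile(pcs[..., j]) for j in range(3)], axis=-1)
print(eap.shape)                                   # (32, 32, 27): 3 PCs x 9 profiles
```

Stacking such an EAP per attribute and concatenating (dropping duplicated PCs) would give the EMAP of the formula above.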
Referring to Fig. 3, step S101 may include the following sub-steps:
Sub-step S1011: obtain the hyperspectral image.
Sub-step S1012: perform principal component analysis on the hyperspectral image to obtain multiple principal component images.
Sub-step S1013: obtain the attribute profile features of the multiple principal component images according to any one morphological attribute, where the morphological attributes include an area attribute, a moment-of-inertia attribute and a standard-deviation attribute.
Referring to Fig. 4, sub-step S1013 may include the following sub-steps:
Sub-step S10131: apply opening and closing operations to each principal component image according to any one morphological attribute to obtain the attribute profile features of that principal component image.
Sub-step S10132: stack the attribute profile features of the principal component images to obtain the attribute profile features of the multiple principal component images.
Sub-step S1014: stack the attribute profile features of the multiple morphological attributes to obtain the first image.
Step S102: extract the attribute profile features of the laser scanning image to obtain a second image.
In the embodiments of the present invention, the second image may be the morphological attribute profile features of the laser scanning image, which can be extracted using the morphological attribute profile operation. As one embodiment, the laser scanning image can be reconstructed according to the morphological attribute thickening and thinning operations over the ordered set of thresholds, so as to obtain the attribute profile features of the laser scanning image, i.e. the second image, according to the following formula:
AP(XL) = {φ^{λ4}(XL), φ^{λ3}(XL), φ^{λ2}(XL), φ^{λ1}(XL), XL, γ^{λ1}(XL), γ^{λ2}(XL), γ^{λ3}(XL), γ^{λ4}(XL)}
where γ^{λ} denotes the opening operation, which can be a thinning operation, and φ^{λ} denotes the closing operation, which can be a thickening operation.
Step S103: fuse the first image and the second image to obtain a third image.
In the embodiments of the present invention, the extended morphological multi-attribute profile features of the hyperspectral image obtained in step S101 and the attribute profile features of the laser scanning image obtained in step S102 may be fused at the pixel level, and the fused third image can be obtained according to the formula χ = (EMAP(XH); AP(XL)).
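Pixel-level fusion of co-registered feature stacks amounts to stacking them along the channel axis, χ = (EMAP(XH); AP(XL)). A minimal sketch with random stand-ins for the two feature stacks (the shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
emap_h = rng.random((32, 32, 27))   # EMAP(XH): hyperspectral attribute profiles
ap_l = rng.random((32, 32, 9))      # AP(XL): laser-scanning attribute profiles

# Both stacks are co-registered, so fusing = concatenating per-pixel feature vectors.
chi = np.concatenate([emap_h, ap_l], axis=-1)
print(chi.shape)                    # (32, 32, 36)
```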
Step S104: perform feature extraction and classification on the third image with the preset convolutional neural network to obtain a terrain classification result.
In the embodiments of the present invention, the method of performing feature extraction and classification on the third image may comprise the following steps. First, multiple image blocks of size n × n are obtained, one centered on each pixel of the third image. Second, the preset convolutional neural network performs feature extraction and classification: convolution filters with different weights extract abstract features from the n × n image blocks through local receptive fields; pooling layers then reduce the dimension of the feature vectors output by the convolutional layers; next, a fully connected layer is attached to the pooled feature maps and flattens them into feature vectors; finally, the output layer is a multi-class logistic regression layer that outputs values between 0 and 1 representing the probability that each computed feature vector belongs to each preset terrain class.
As one embodiment, the architecture of the preset convolutional neural network is shown in Table 1.
Table 1. Architecture of the convolutional neural network

Convolutional layer | ReLU layer | Pooling layer | Dropout
11×11×40 | yes | none | none
11×11×40 | yes | 2×2 | none
5×5×80 | yes | none | none
5×5×80 | yes | 2×2 | none
3×3×100 | yes | none | none
3×3×100 | yes | none | none
3×3×100 | yes | 2×2 | 50%
The first convolutional layer has kernels of size 11 × 11 × 40; the second convolutional layer, 11 × 11 × 40; the third layer is a pooling layer with a 2 × 2 receptive field; the fourth convolutional layer, 5 × 5 × 80; the fifth convolutional layer, 5 × 5 × 80; the sixth layer is a pooling layer with a 2 × 2 receptive field; the seventh convolutional layer, 3 × 3 × 100; the eighth convolutional layer, 3 × 3 × 100; the ninth convolutional layer, 3 × 3 × 100; the tenth layer is a pooling layer with a 2 × 2 receptive field. A random loss layer (also called a Dropout layer) is arranged after the convolutional and pooling layers, with a random loss value of 50%.
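The spatial sizes implied by Table 1 can be checked with a short calculation. This assumes 'same'-padded convolutions (the table's 11×11 kernels on 21×21 inputs only preserve a usable size with padding) and floor-division 2×2 pooling; both are assumptions, not stated in the text.

```python
# Layer spec: (type, channels) following Table 1; "pool" halves the spatial size.
layers = [
    ("conv", 40), ("conv", 40), ("pool", None),
    ("conv", 80), ("conv", 80), ("pool", None),
    ("conv", 100), ("conv", 100), ("conv", 100), ("pool", None),
]

def propagate(size, layers):
    """Track (spatial size, channels) through same-padded convs and 2x2 pools."""
    channels = None
    for kind, ch in layers:
        if kind == "conv":
            channels = ch          # same padding: spatial size unchanged
        else:
            size //= 2             # 2x2 max pooling, stride 2
    return size, channels

size, channels = propagate(21, layers)
print(size, channels, size * size * channels)   # 2 100 400
```

Under these assumptions a 21 × 21 input block leaves the first network as a 2 × 2 × 100 feature map, i.e. 400 features for the fully connected layer to flatten.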
Referring to Fig. 5, step S104 may include the following sub-steps:
Sub-step S1041: obtain multiple image blocks of size n × n, each centered on a pixel of the third image, where n is an integer greater than 1.
In the embodiments of the present invention, an image block of size 21 × 21 centered on each pixel of the third image is taken as the input of the convolutional neural network.
Sub-step S1042: input the multiple image blocks into the convolutional neural network and perform deep feature learning and image feature extraction with the first network of the convolutional neural network, where the first network includes the convolutional layers and the pooling layers.
In the embodiments of the present invention, the multiple image blocks of size n × n serve as the input of the convolutional neural network; the first network includes 7 convolutional layers and 3 pooling layers, and the method of performing deep feature learning and extracting image features with the first network may comprise the following steps:
First, the input image blocks are convolved and summed with the different convolution kernels of the 7 convolutional layers, a bias is added, and the result is passed through a ReLU activation function, forming the neurons of the current layer.
As one embodiment, the value of the neuron v_{ij}^{xyz} at position (x, y, z) of the j-th feature map of the i-th layer can be obtained according to the following formula:
v_{ij}^{xyz} = b_{ij} + Σ_m Σ_{p=0}^{P_i−1} Σ_{q=0}^{Q_i−1} Σ_{r=0}^{R_i−1} w_{ijm}^{pqr} v_{(i−1)m}^{(x+p)(y+q)(z+r)}
where m indexes the feature maps of layer (i−1) connected to the current j-th feature map, P_i and Q_i are the height and width of the spatial convolution kernel, R_i is the size of the kernel in the spectral dimension, w_{ijm}^{pqr} is the weight connecting position (p, q, r) of the m-th feature map, and b_{ij} is the bias of the j-th feature map of the i-th layer.
As one embodiment, the ReLU activation function is a nonlinear activation function used to induce feature sparsity; its expression is f(x) = max(0, x).
Second, the maximum pooling method is applied to each neuron v_{ij}^{xyz} of the i-th layer's feature maps: downsampling is performed with a fixed-size window, which reduces the sensitivity of the feature maps to translation, scaling and rotation. Maximum pooling can be expressed as a_j = max_{N×N}(a_i · u(n, 1)), where u(n, 1) is the window function used and a_j is the maximum value in the neighborhood. In the embodiments of the present invention, the pooling window size is 2 × 2.
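Max pooling with a 2 × 2 window and stride 2 keeps the largest activation in each non-overlapping neighborhood. A minimal NumPy sketch (illustrative only, not the patent's implementation):

```python
import numpy as np

def max_pool_2x2(fmap):
    """2x2 max pooling, stride 2, on an (H, W) feature map (H, W even)."""
    h, w = fmap.shape
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1., 3., 2., 0.],
                 [4., 2., 1., 1.],
                 [0., 1., 5., 6.],
                 [2., 2., 7., 8.]])
print(max_pool_2x2(fmap))   # [[4. 2.] [2. 8.]]
```

Each output value is the maximum of one 2 × 2 block, so a one-pixel shift of a strong activation inside its block leaves the pooled map unchanged, which is the translation insensitivity the text describes.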
Third, a random loss layer (also called a Dropout layer) randomly deactivates the weights of some hidden-layer nodes in the convolutional and pooling layers. With the random loss ratio set to 50%, each training pass trains the hidden-layer nodes that survive the random 50% drop. This prevents all filters from jointly amplifying or attenuating certain features in every training pass, and thus prevents over-fitting.
Referring to Fig. 6, step S1042 may include the following sub-steps:
Sub-step S10421: input the multiple image blocks into the convolutional neural network, perform convolution and summation with the multilayer convolutional layers, add a bias, and pass the result through the ReLU activation function to obtain the output feature map of each convolutional layer.
Sub-step S10422: downsample the output feature map of each convolutional layer with the pooling layer that follows it to obtain a feature map.
In the embodiments of the present invention, the output feature map of the first convolutional layer serves as the input feature map of the second convolutional layer; the second convolutional layer performs convolution and summation, adds a bias, and passes the result through the ReLU activation function to obtain the output feature map of the second convolutional layer. The output feature map of the second convolutional layer is input into the pooling layer that follows it, which downsamples it to obtain a feature map; this feature map is input into the third convolutional layer, which performs convolution and summation, adds a bias, and passes the result through the ReLU activation function to obtain the output feature map of the third convolutional layer, and so on, until the output feature map of the last convolutional layer is obtained.
Sub-step S10423: downsample the output feature map of the last convolutional layer with the pooling layer that follows it to obtain the image features.
Sub-step S1043: input the image features extracted by the first network into the second network and classify them to obtain the object class of each image feature, where the second network includes a fully connected layer and a multi-class logistic regression layer.
In the embodiments of the present invention, after the image features are obtained with the first network, they are input into the second network and classified to obtain the object class of each image feature. This can be realized with the following steps:
First, the fully connected layer is attached to the feature maps after the last pooling layer and flattens the image features into feature vectors. The number of neurons of the fully connected layer equals the number of image features; it directly affects the fitting effect and training speed of the convolutional neural network.
Second, the final output layer of the convolutional neural network is a multi-class logistic regression layer that outputs values between 0 and 1 representing the probability that each feature vector belongs to each preset terrain class. Given an input R, the probability that the feature vector belongs to the j-th class can be obtained by the formula
P(y = j | R) = e^{R_j} / Σ_{k=1}^{K} e^{R_k}.
As one embodiment, the preset terrain classes may include, but are not limited to, soil, road, railway, parking lot, residential area, shopping center, and so on.
Third, the parameter training of the convolutional neural network is divided into a forward propagation stage and a back-propagation stage. Forward propagation computes the output for a given input: the training samples are fed into the network and transformed layer by layer, features are extracted, and the activation responses are obtained. During back propagation, the convolutional neural network computes gradients from the loss: with the obtained loss function, the weights and biases are updated by gradient descent, and automatic differentiation combines the gradients of each layer backwards to compute the gradient of the whole network. The loss function can be calculated according to the following formula:
J = −(1/m) Σ_{i=1}^{m} Σ_{j=1}^{k} 1{y^{(i)} = j} log( e^{R_j^{(i)}} / Σ_{l=1}^{k} e^{R_l^{(i)}} )
where m is the number of image blocks, k is the number of classes, and 1{·} is the indicator function whose value rule is: 1{a true expression} = 1, 1{a false expression} = 0.
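The multi-class logistic regression probability and the loss above can be checked numerically. A small NumPy sketch: the scores R and the labels are made-up illustrations, and the max-shift in the softmax is a standard numerical-stability trick, not part of the text.

```python
import numpy as np

def softmax(r):
    """P(y = j | R) = exp(R_j) / sum_k exp(R_k), shifted by the max for stability."""
    e = np.exp(r - r.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(scores, labels):
    """J = -(1/m) * sum_i log p(y_i); the 1{.} indicator picks out the true class."""
    m = scores.shape[0]
    probs = softmax(scores)
    return -np.log(probs[np.arange(m), labels]).mean()

scores = np.array([[2.0, 0.5, 0.1],    # m = 2 image blocks, k = 3 classes
                   [0.2, 0.1, 3.0]])
labels = np.array([0, 2])
p = softmax(scores)
print(p.sum(axis=1))                   # each row sums to 1
print(cross_entropy(scores, labels))   # small loss: both blocks score their true class highest
```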
Referring to Fig. 7, step S1043 may include the following sub-steps:
Sub-step S10431: flatten the image features output by the last pooling layer with the fully connected layer to obtain feature vectors, where each neuron of the fully connected layer corresponds to one image feature.
Sub-step S10432: calculate, with the multi-class logistic regression layer, the probability that each feature vector belongs to each preset terrain class.
Sub-step S10433: obtain the preset terrain class corresponding to the maximum probability value of each feature vector, and take this preset terrain class as the object class of the image feature corresponding to the feature vector.
Sub-step S1044: merge the object classes of the image features to obtain the terrain classification result.
In the embodiments of the present invention, the terrain classification method provided by the present invention is compared with classification methods based on EMP morphological transformation, on support vector machines, on EP feature extraction and on EPF, as shown in Table 2.
Table 2. Comparison of several classification methods
As can be seen from Table 2, compared with the classification results obtained by the other methods, the terrain classification method proposed by the present invention achieves higher classification accuracy and thus has greater practical value.
In the embodiments of the present invention: first, the multiple attribute profile features of the hyperspectral image and the attribute profile features of the laser scanning image are fused, so that the rich spectral information of the hyperspectral image and the accurate elevation information of the laser scanning image complement each other, avoiding the problem that terrain classification based only on a hyperspectral image is limited by building shadows, cloud cover and other inaccuracies in object spectral information. Second, the feature extraction method based on morphological attribute profiles can integrate attribute features of several different types, so that the spatial geometric features of the image are described more comprehensively. Third, feature extraction and classification based on a convolutional neural network overcome the shortcomings of conventional classification methods, which ignore the spatial structure features of images and lack generalization ability, and improve the overall classification accuracy. Therefore, the terrain classification method proposed by the present invention is invariant to geometric transformation, deformation and illumination to a certain degree, while effectively solving the building-shadow problem of urban areas and the region misclassification problem caused by weather; it is therefore significant for the subsequent analysis and processing of images and has practical value in real applications.
Second embodiment
Referring to Fig. 8, Fig. 8 shows a block diagram of a terrain classification device 200 provided in an embodiment of the present invention. The terrain classification device 200 includes a first extraction module 201, a second extraction module 202, an image fusion module 203, and a terrain classification module 204.
The first extraction module 201 is configured to extract multiple attribute profile features of a hyperspectral image to obtain a first image.
In an embodiment of the present invention, the first extraction module 201 may be used to perform step S101.
Referring to Fig. 9, Fig. 9 is a block diagram of the first extraction module 201 in the terrain classification device 200 shown in Fig. 8. The first extraction module 201 includes an image acquisition unit 2011, a principal component analysis unit 2012, an execution unit 2013, and a first image acquisition unit 2014.
The image acquisition unit 2011 is configured to acquire a hyperspectral image.
In an embodiment of the present invention, the image acquisition unit 2011 may be used to perform sub-step S1011.
The principal component analysis unit 2012 is configured to perform principal component analysis on the hyperspectral image to obtain multiple principal component images.
In an embodiment of the present invention, the principal component analysis unit 2012 may be used to perform sub-step S1012.
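The principal component analysis step above can be sketched in a few lines of NumPy. This is only an illustration of reducing a hyperspectral cube to principal-component score images; the cube shape, the number of components, and the SVD-based computation are assumptions, not the patent's implementation.

```python
import numpy as np

def principal_component_images(hsi, n_components=4):
    """Project a hyperspectral cube of shape (H, W, B) onto its first
    n_components principal components, giving one score image each."""
    h, w, b = hsi.shape
    flat = hsi.reshape(-1, b).astype(float)
    flat -= flat.mean(axis=0)                  # centre every band
    # right singular vectors = eigenvectors of the band covariance
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    scores = flat @ vt[:n_components].T        # per-pixel PC scores
    return scores.reshape(h, w, n_components)

# toy cube: 8 x 8 pixels, 20 spectral bands
rng = np.random.default_rng(0)
pc_images = principal_component_images(rng.normal(size=(8, 8, 20)))
print(pc_images.shape)  # (8, 8, 4)
```

Because the singular values are returned in descending order, the first score image carries the most variance, which is why only a few principal component images need to be kept.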
The execution unit 2013 is configured to obtain the attribute profile features of the multiple principal component images according to any one morphological attribute, where the morphological attributes include an area attribute, a moment-of-inertia attribute, and a standard-deviation attribute.
In an embodiment of the present invention, the execution unit 2013 may be used to perform sub-step S1013.
Referring to Fig. 10, Fig. 10 is a block diagram of the execution unit 2013 in the first extraction module 201 shown in Fig. 9. The execution unit 2013 includes a feature acquisition unit 20131 and a feature superposition unit 20132.
The feature acquisition unit 20131 is configured to perform opening and closing operations on each principal component image according to any one morphological attribute, to obtain the attribute profile features of each principal component image.
In an embodiment of the present invention, the feature acquisition unit 20131 may be used to perform sub-step S10131.
The feature superposition unit 20132 is configured to superpose the attribute profile features of each principal component image, to obtain the attribute profile features of the multiple principal component images.
In an embodiment of the present invention, the feature superposition unit 20132 may be used to perform sub-step S10132.
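The opening/closing-and-superposing mechanics above can be sketched as follows. Note the hedge: the patent filters by area, moment-of-inertia, and standard-deviation attributes, whereas this sketch substitutes plain grayscale opening/closing with square structuring elements of several sizes, which demonstrates only how the profile layers are built and stacked, not the exact attribute filters.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def attribute_profile(pc_image, sizes=(3, 5, 7)):
    """Profile for one principal-component image: one opening and one
    closing per structuring-element size, stacked as channels."""
    layers = []
    for s in sizes:
        layers.append(grey_opening(pc_image, size=s))   # removes bright detail
        layers.append(grey_closing(pc_image, size=s))   # removes dark detail
    return np.stack(layers, axis=-1)                    # (H, W, 2 * len(sizes))

def stacked_profiles(pc_images, sizes=(3, 5, 7)):
    """Superpose the profiles of all principal-component images."""
    return np.concatenate(
        [attribute_profile(pc_images[..., i], sizes)
         for i in range(pc_images.shape[-1])], axis=-1)
```

Opening is anti-extensive and closing is extensive, so each pair of layers brackets the original image from below and above; the stack therefore encodes structures at several spatial scales.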
The first image acquisition unit 2014 is configured to superpose the attribute profile features of the multiple morphological attributes, to obtain the first image.
In an embodiment of the present invention, the first image acquisition unit 2014 may be used to perform sub-step S1014.
The second extraction module 202 is configured to extract the attribute profile features of a laser scanning image to obtain a second image.
In an embodiment of the present invention, the second extraction module 202 may be used to perform step S102.
The image fusion module 203 is configured to fuse the first image and the second image to obtain a third image.
In an embodiment of the present invention, the image fusion module 203 may be used to perform step S103.
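Since the method elsewhere superposes feature layers by stacking, one plausible reading of the fusion step is concatenation of the two profile stacks along the channel axis. The patent does not specify the fusion operator, so treat this as an assumed minimal sketch:

```python
import numpy as np

def fuse(first_image, second_image):
    """Fuse two feature stacks that cover the same pixels by
    concatenating them along the channel (feature) axis."""
    assert first_image.shape[:2] == second_image.shape[:2], \
        "the two images must have the same spatial size"
    return np.concatenate([first_image, second_image], axis=-1)
```

The resulting third image carries both the spectral-profile channels and the elevation-profile channels for every pixel, which is what the later convolutional network consumes.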
The terrain classification module 204 is configured to perform feature extraction and classification on the third image using a preset convolutional neural network, to obtain a terrain classification result.
In an embodiment of the present invention, the terrain classification module 204 may be used to perform step S104.
Referring to Fig. 11, Fig. 11 is a block diagram of the terrain classification module 204 in the terrain classification device 200 shown in Fig. 8. The terrain classification module 204 includes an image block acquisition unit 2041, an image feature extraction unit 2042, an image feature classification unit 2043, and a terrain classification result acquisition unit 2044.
The image block acquisition unit 2041 is configured to obtain multiple image blocks of size n × n centered on each pixel of the third image, where n is an integer greater than 1.
In an embodiment of the present invention, the image block acquisition unit 2041 may be used to perform sub-step S1041.
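Cutting one n × n block centered on every pixel can be sketched as below. The reflective border padding is an assumption (the patent does not say how edge pixels are handled), and n is taken to be odd so that a block has a well-defined center pixel:

```python
import numpy as np

def extract_patches(image, n=5):
    """Cut one n-by-n patch centred on every pixel of an (H, W, C)
    image; borders are handled with reflective padding."""
    assert n > 1 and n % 2 == 1, "assume an odd n so patches have a centre"
    r = n // 2
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="reflect")
    h, w, c = image.shape
    patches = np.empty((h * w, n, n, c), dtype=image.dtype)
    k = 0
    for i in range(h):
        for j in range(w):
            patches[k] = padded[i:i + n, j:j + n]   # centre = image[i, j]
            k += 1
    return patches
```

Each patch gives the network the local spatial context of one pixel, which is what lets the classifier exploit spatial structure rather than spectra alone.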
The image feature extraction unit 2042 is configured to input the multiple image blocks into the convolutional neural network and to perform deep feature learning and extract image features using a first network of the convolutional neural network, where the first network includes convolutional layers and pooling layers.
In an embodiment of the present invention, the image feature extraction unit 2042 may be used to perform sub-step S1042.
Referring to Fig. 12, Fig. 12 is a block diagram of the image feature extraction unit 2042 in the terrain classification module 204 shown in Fig. 11. The image feature extraction unit 2042 includes a first sub-execution unit 20421, a second sub-execution unit 20422, and an image feature acquisition unit 20423.
The first sub-execution unit 20421 is configured to input the multiple image blocks into the convolutional neural network, perform convolution summation and bias addition using multiple convolutional layers, and apply a ReLU activation function, to obtain the output feature map of each convolutional layer.
In an embodiment of the present invention, the first sub-execution unit 20421 may be used to perform sub-step S10421.
The second sub-execution unit 20422 is configured to down-sample the output feature map of each convolutional layer using the pooling layer that follows it, to obtain a feature map.
In an embodiment of the present invention, the second sub-execution unit 20422 may be used to perform sub-step S10422.
The image feature acquisition unit 20423 is configured to down-sample the output feature map of the last convolutional layer using the pooling layer that follows it, to obtain the image features.
In an embodiment of the present invention, the image feature acquisition unit 20423 may be used to perform sub-step S10423.
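The convolution-plus-bias-plus-ReLU step followed by pooling can be sketched for a single-channel map as below. This is a didactic NumPy illustration of the operations the units above perform; kernel sizes, pooling window, and single-channel input are assumptions, and a real network would use learned kernels and an optimized library.

```python
import numpy as np

def conv_relu(x, kernels, biases):
    """One convolutional layer: valid 2-D correlation of an (H, W) map
    with each (k, k) kernel, plus a bias, passed through a ReLU."""
    k = kernels.shape[-1]
    h, w = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.empty((h, w, kernels.shape[0]))
    for m, (kern, b) in enumerate(zip(kernels, biases)):
        for i in range(h):
            for j in range(w):
                out[i, j, m] = np.sum(x[i:i + k, j:j + k] * kern) + b
    return np.maximum(out, 0.0)                # ReLU activation

def max_pool(x, p=2):
    """Down-sample every channel of an (H, W, C) map with p-by-p
    non-overlapping max pooling."""
    h, w, c = x.shape[0] // p, x.shape[1] // p, x.shape[2]
    return x[:h * p, :w * p].reshape(h, p, w, p, c).max(axis=(1, 3))
```

Stacking several such conv/pool pairs shrinks each image block into a compact feature map, the "image features" passed on to the second network.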
The image feature classification unit 2043 is configured to input the image features extracted from the first network into a second network and classify them, to obtain the terrain category of each image feature, where the second network includes a fully connected layer and a multinomial logistic regression layer.
In an embodiment of the present invention, the image feature classification unit 2043 may be used to perform sub-step S1043.
Referring to Fig. 13, Fig. 13 is a block diagram of the image feature classification unit 2043 in the terrain classification module 204 shown in Fig. 11. The image feature classification unit 2043 includes a feature vector acquisition unit 20431, a probability value calculation unit 20432, and a terrain category acquisition unit 20433.
The feature vector acquisition unit 20431 is configured to flatten the image features output by the last pooling layer using the fully connected layer, to obtain feature vectors, where each neuron of the fully connected layer corresponds to one image feature.
In an embodiment of the present invention, the feature vector acquisition unit 20431 may be used to perform sub-step S10431.
The probability value calculation unit 20432 is configured to calculate, using the multinomial logistic regression layer, the probability that each feature vector belongs to each preset terrain category.
In an embodiment of the present invention, the probability value calculation unit 20432 may be used to perform sub-step S10432.
The terrain category acquisition unit 20433 is configured to obtain the preset terrain category corresponding to the maximum probability value of each feature vector, and to take that preset terrain category as the terrain category of the image feature corresponding to the feature vector.
In an embodiment of the present invention, the terrain category acquisition unit 20433 may be used to perform sub-step S10433.
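The flatten, fully connected, and multinomial logistic regression (softmax) steps above can be sketched together. The weights and bias here are placeholders, since the patent does not disclose trained parameters:

```python
import numpy as np

def classify(feature_maps, weights, bias):
    """Flatten the last pooling layer's output into a feature vector,
    apply a fully connected layer, and convert the class scores into
    probabilities with softmax; the predicted category is the arg-max."""
    vec = feature_maps.ravel()                      # flatten to a vector
    scores = weights @ vec + bias                   # fully connected layer
    scores -= scores.max()                          # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()   # multinomial logistic
    return probs, int(np.argmax(probs))
```

The per-pixel predicted categories, reshaped back to the image grid, then form the terrain classification map that the result acquisition unit assembles.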
The terrain classification result acquisition unit 2044 is configured to merge the terrain categories of the image features, to obtain the terrain classification result.
In an embodiment of the present invention, the terrain classification result acquisition unit 2044 may be used to perform sub-step S1044.
In summary, the present invention provides a terrain classification method and device. The method includes: extracting multiple attribute profile features of a hyperspectral image to obtain a first image; extracting the attribute profile features of a laser scanning image to obtain a second image; fusing the first image and the second image to obtain a third image; and performing feature extraction and classification on the third image using a preset convolutional neural network to obtain a terrain classification result. In the terrain classification method proposed by the present invention, first, the multiple attribute profile features of the hyperspectral image are fused with the attribute profile features of the laser scanning image, so that the rich spectral information of the hyperspectral image and the accurate elevation information of the laser scanning image complement each other, avoiding the limitations that arise when classification relies only on a hyperspectral image, where building shadows, cloud cover, and other inaccuracies in object spectral information restrict the result. Second, the method of extracting features from morphological attribute profiles can integrate several different types of attribute features, so that the spatial and geometric characteristics of the image are described more completely. Third, feature extraction and classification based on a convolutional neural network overcome the shortcomings of conventional classification methods, which ignore the spatial structure of the image and lack generalization ability, and improve overall classification accuracy. The method is invariant to a certain degree under geometric transformation, deformation, and illumination changes, and at the same time effectively addresses both the building-shadow problem in urban areas and the misclassification of regions strongly affected by weather; it is therefore of great significance and practical value for subsequent image analysis and processing in practical applications.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may also be implemented in other ways. The device embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions, and operations of devices, methods, and computer program products according to multiple embodiments of the present invention. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should further be noted that each block in the block diagrams and/or flowcharts, and each combination of such blocks, may be implemented by a dedicated hardware-based system that performs the specified function or action, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

It should be noted that, in this document, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
The foregoing is only the preferred embodiments of the present invention and is not intended to limit the present invention; for those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent substitution, improvement, or the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention. It should be noted that similar reference numbers and letters represent similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.

Claims (10)

1. A terrain classification method, characterized in that the method includes:
extracting multiple attribute profile features of a hyperspectral image to obtain a first image;
extracting attribute profile features of a laser scanning image to obtain a second image;
fusing the first image and the second image to obtain a third image; and
performing feature extraction and classification on the third image using a preset convolutional neural network, to obtain a terrain classification result.
2. The method according to claim 1, characterized in that the step of extracting multiple attribute profile features of a hyperspectral image to obtain a first image includes:
acquiring a hyperspectral image;
performing principal component analysis on the hyperspectral image to obtain multiple principal component images;
obtaining the attribute profile features of the multiple principal component images according to any one morphological attribute, wherein the morphological attributes include an area attribute, a moment-of-inertia attribute, and a standard-deviation attribute; and
superposing the attribute profile features of the multiple morphological attributes to obtain the first image.
3. The method according to claim 2, characterized in that the step of obtaining the attribute profile features of the multiple principal component images according to any one morphological attribute includes:
performing opening and closing operations on each principal component image according to any one of the morphological attributes, to obtain the attribute profile features of each principal component image; and
superposing the attribute profile features of each principal component image, to obtain the attribute profile features of the multiple principal component images.
4. The method according to claim 1, characterized in that the step of performing feature extraction and classification on the third image using a preset convolutional neural network to obtain a terrain classification result includes:
obtaining multiple image blocks of size n × n centered on each pixel of the third image, wherein n is an integer greater than 1;
inputting the multiple image blocks into the convolutional neural network, and performing deep feature learning and extracting image features using a first network of the convolutional neural network, wherein the first network includes convolutional layers and pooling layers;
inputting the image features extracted from the first network into a second network and classifying them, to obtain the terrain category of each image feature, wherein the second network includes a fully connected layer and a multinomial logistic regression layer; and
merging the terrain categories of the image features to obtain the terrain classification result.
5. The method according to claim 4, characterized in that the step of inputting the multiple image blocks into the convolutional neural network and performing deep feature learning and extracting image features using the first network of the convolutional neural network includes:
inputting the multiple image blocks into the convolutional neural network, performing convolution summation and bias addition using multiple convolutional layers, and applying a ReLU activation function, to obtain the output feature map of each convolutional layer;
down-sampling the output feature map of each convolutional layer using the pooling layer that follows it, to obtain a feature map; and
down-sampling the output feature map of the last convolutional layer using the pooling layer that follows it, to obtain the image features.
6. The method according to claim 5, characterized in that the step of inputting the image features extracted from the first network into the second network and classifying them to obtain the terrain category of each image feature includes:
flattening the image features output by the last pooling layer using the fully connected layer, to obtain feature vectors, wherein each neuron of the fully connected layer corresponds to one image feature;
calculating, using the multinomial logistic regression layer, the probability that each feature vector belongs to each preset terrain category; and
obtaining the preset terrain category corresponding to the maximum probability value of each feature vector, and taking the preset terrain category as the terrain category of the image feature corresponding to the feature vector.
7. A terrain classification device, characterized in that the device includes:
a first extraction module, configured to extract multiple attribute profile features of a hyperspectral image to obtain a first image;
a second extraction module, configured to extract attribute profile features of a laser scanning image to obtain a second image;
an image fusion module, configured to fuse the first image and the second image to obtain a third image; and
a terrain classification module, configured to perform feature extraction and classification on the third image using a preset convolutional neural network, to obtain a terrain classification result.
8. The device according to claim 7, characterized in that the first extraction module includes:
an image acquisition unit, configured to acquire a hyperspectral image;
a principal component analysis unit, configured to perform principal component analysis on the hyperspectral image to obtain multiple principal component images;
an execution unit, configured to obtain the attribute profile features of the multiple principal component images according to any one morphological attribute, wherein the morphological attributes include an area attribute, a moment-of-inertia attribute, and a standard-deviation attribute; and
a first image acquisition unit, configured to superpose the attribute profile features of the multiple morphological attributes to obtain the first image.
9. The device according to claim 8, characterized in that the terrain classification module includes:
an image block acquisition unit, configured to obtain multiple image blocks of size n × n centered on each pixel of the third image, wherein n is an integer greater than 1;
an image feature extraction unit, configured to input the multiple image blocks into the convolutional neural network and to perform deep feature learning and extract image features using a first network of the convolutional neural network, wherein the first network includes convolutional layers and pooling layers;
an image feature classification unit, configured to input the image features extracted from the first network into a second network and classify them, to obtain the terrain category of each image feature, wherein the second network includes a fully connected layer and a multinomial logistic regression layer; and
a terrain classification result acquisition unit, configured to merge the terrain categories of the image features to obtain the terrain classification result.
10. The device according to claim 9, characterized in that the image feature extraction unit includes:
a first sub-execution unit, configured to input the multiple image blocks into the convolutional neural network, perform convolution summation and bias addition using multiple convolutional layers, and apply a ReLU activation function, to obtain the output feature map of each convolutional layer;
a second sub-execution unit, configured to down-sample the output feature map of each convolutional layer using the pooling layer that follows it, to obtain a feature map; and
an image feature acquisition unit, configured to down-sample the output feature map of the last convolutional layer using the pooling layer that follows it, to obtain the image features.
CN201710628776.1A 2017-07-28 2017-07-28 Terrain classification method and device Pending CN107239775A (en)


Publication number: CN107239775A; publication date: 2017-10-10
Family ID: 59988374

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107918776A (en) * 2017-11-01 2018-04-17 中国科学院深圳先进技术研究院 A kind of plan for land method, system and electronic equipment based on machine vision
CN108427969A (en) * 2018-03-27 2018-08-21 陕西科技大学 A kind of paper sheet defect sorting technique of Multiscale Morphological combination convolutional neural networks
CN108711150A (en) * 2018-05-22 2018-10-26 电子科技大学 A kind of end-to-end pavement crack detection recognition method based on PCA
CN109325395A (en) * 2018-04-28 2019-02-12 二十世纪空间技术应用股份有限公司 The recognition methods of image, convolutional neural networks model training method and device
CN110298396A (en) * 2019-06-25 2019-10-01 北京工业大学 Hyperspectral image classification method based on deep learning multiple features fusion
CN110569751A (en) * 2019-08-23 2019-12-13 南京信息工程大学 High-resolution remote sensing image building extraction method
CN111539403A (en) * 2020-07-13 2020-08-14 航天宏图信息技术股份有限公司 Agricultural greenhouse identification method and device and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1760889A (en) * 2005-11-03 2006-04-19 复旦大学 Method for sorting characters of ground object through interfusion of satellite carried microwave and infrared remote sensing
CN103632160A (en) * 2012-08-24 2014-03-12 孙琤 Combination-kernel-function RVM (Relevance Vector Machine) hyperspectral classification method integrated with multi-scale morphological characteristics
CN105320965A (en) * 2015-10-23 2016-02-10 西北工业大学 Hyperspectral image classification method based on spectral-spatial cooperation of deep convolutional neural network
CN106295714A (en) * 2016-08-22 2017-01-04 中国科学院电子学研究所 A kind of multi-source Remote-sensing Image Fusion based on degree of depth study


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
康旭东 (Kang Xudong), "高光谱遥感影像空谱特征提取与分类方法研究" (Research on spatial–spectral feature extraction and classification methods for hyperspectral remote sensing images), China Doctoral Dissertations Full-text Database, Information Science and Technology Series. *



Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2017-10-10)