CN107169450A - Scene classification method and system for high-resolution remote sensing images - Google Patents
- Publication number
- CN107169450A (application number CN201710340257.5A)
- Authority
- CN
- China
- Prior art keywords
- scene
- classification
- remote sensing
- subgraph
- dbn
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
Abstract
The invention provides a scene classification method for high-resolution remote sensing images, comprising: generating a set of scene sub-images from a given remote sensing image and establishing a scene classification training set; extracting a first semantic feature from every sub-image in the training set; establishing a first scene classification model from the first semantic features; computing a first classification-result probability vector with the first model; extracting a second semantic feature from every sub-image; establishing a second scene classification model from the second semantic features; computing a second classification-result probability vector with the second model; fusing the first and second classification-result probability vectors with a score-level fusion method to obtain a third classification-result probability vector; establishing a third scene classification model from the third probability vector; and deciding the scene category of a remote sensing image to be classified with the third model. The invention exploits the multi-level semantic features of high-resolution scene images simultaneously and fuses them organically, improving the accuracy of scene classification.
Description
Technical field
The present invention relates to the technical field of remote sensing image scene classification, and in particular to a classification method and system based on high-resolution remote sensing images.
Background art
Under the same conditions (illumination, terrain, etc.), ground objects of the same class should exhibit identical or similar spectral and spatial characteristics, while objects of different classes differ. The process of assigning all pixels of a remote sensing image to one of several categories according to these differences is called remote sensing image classification.
In recent years, with further advances in imaging technology, the number of available high-resolution remote sensing satellite images has grown steadily, and their spatial resolution can reach the sub-meter level. As spatial resolution keeps improving, complex multi-object scene categories shaped by human activity and covering many ground-object classes (such as industrial complexes, airports, harbours and parking lots) can now be clearly observed in remote sensing imagery. In this new situation, exploiting the scene information contained in remote sensing images offers a new way of thinking for the classification of high-resolution remote sensing imagery, and scene classification of high-resolution remote sensing images has become a popular research direction.
Over the past decade, the bag-of-visual-words (BOVW) model that emerged in the machine vision field has provided a feasible approach to scene classification of high-resolution remote sensing images. The BOVW model is a mid-level feature representation method: a second abstraction of low-level features that carries semantic information, which has gained wide recognition in natural-image scene classification research. It does not analyse the individual objects that compose a scene image but uses the overall statistics of the image: quantised low-level image features are treated as visual words, and the distribution of the visual words of an image expresses the scene content. The BOVW model first extracts the low-level features of local image patches, then quantises the obtained features with the K-means clustering algorithm, treats each cluster centre as a visual word to build a visual dictionary, and finally counts the frequency histogram of the visual words contained in the image as the feature vector expressing the image content. This strength in feature expression makes the model well suited to expressing complex scene categories with multiple objects and many ground-object classes.
With the continuing development of deep learning technology, deep convolutional neural network (CNN) models can effectively extract the deep semantic features of images and show outstanding feature-expression ability. As a high-level feature extraction method, CNNs are applied ever more widely in the remote sensing field.
However, current methods for high-resolution remote sensing image scene classification rely on either mid-level or high-level semantic features alone for feature expression; they do not fuse the two organically, and thus exploit multi-level features insufficiently.
Summary of the invention
In view of this, the invention provides a scene classification method for high-resolution remote sensing images, comprising:
generating a set of scene sub-images from a given remote sensing image, and establishing a scene classification training set;
extracting the first semantic feature of every sub-image in the scene classification training set;
establishing a first scene classification model from the first semantic features of the sub-images;
computing a first classification-result probability vector with the first scene classification model;
extracting the second semantic feature of every sub-image in the scene classification training set;
establishing a second scene classification model from the second semantic features of the sub-images;
computing a second classification-result probability vector with the second scene classification model;
fusing the first and second classification-result probability vectors with a score-level fusion method to obtain a third classification-result probability vector;
establishing a third scene classification model from the third classification-result probability vector;
deciding the scene category of the remote sensing image to be classified with the third scene classification model.
Further, establishing the scene classification training set comprises: defining and numbering several scene categories according to the scene type of the given remote sensing image, and randomly selecting a number of sub-images for each category as the scene classification training set.
Further, extracting the first semantic feature of every sub-image in the scene classification training set comprises: computing, for each sub-image, its histogram feature with the BOVW method to obtain the first semantic feature.
Further, establishing the first scene classification model comprises: taking the first semantic feature of every sub-image together with the scene category number of that sub-image as training data, designing a DBN network with the DBN algorithm, and then training it; the trained DBN network is the first scene classification model.
Further, computing the first classification-result probability vector comprises: performing class prediction on the training data with the trained DBN network, and taking the result of the DBN output layer as the first classification-result probability vector.
Further, extracting the second semantic feature of every sub-image in the scene classification training set comprises: using the output of the last fully connected layer before the output layer of a CNN network as a feature extractor, and extracting the second semantic feature from each sub-image.
Further, establishing the second scene classification model comprises: taking the second semantic feature of every sub-image together with the scene category number of that sub-image as training data, designing a DBN network with the DBN algorithm, and then training it; the trained DBN network is the second scene classification model.
Further, computing the second classification-result probability vector comprises: performing class prediction on the training data with the trained DBN network, and taking the result of the DBN output layer as the second classification-result probability vector.
Further, establishing the third scene classification model comprises: taking the third classification-result probability vector of every sub-image together with the scene category number of that sub-image as training data, designing a DBN network with the DBN algorithm, and then training it; the trained DBN network is the third scene classification model.
Further, deciding the scene category of the remote sensing image to be classified with the third scene classification model comprises:
generating a set of sub-images to be classified from the remote sensing image to be classified, and computing, for each sub-image, its histogram feature with the BOVW method to obtain the first semantic feature of the sub-image to be classified;
taking the first semantic feature of every sub-image to be classified together with the scene category number of that sub-image as training data, designing and training a DBN network with the DBN algorithm, performing class prediction on the training data with the trained network, and taking the result of the DBN output layer as the first classification-result probability vector of the image to be classified;
using the output of the last fully connected layer before the output layer of a CNN network as a feature extractor, and extracting the second semantic feature from each sub-image to be classified;
taking the second semantic feature of every sub-image to be classified together with the scene category number of that sub-image as training data, designing and training a DBN network with the DBN algorithm, performing class prediction on the training data with the trained network, and taking the result of the DBN output layer as the second classification-result probability vector of the image to be classified;
fusing the first and second classification-result probability vectors of the image to be classified with the score-level fusion method to obtain its third classification-result probability vector;
judging the third classification-result probability vector of the image to be classified with the third scene classification model to obtain the scene classification result of the image to be classified.
The invention also proposes a scene classification system implementing the above method, comprising:
a sub-image generation module, for generating a set of scene sub-images from a given remote sensing image;
a scene classification training set establishment module, for establishing the scene classification training set;
a first semantic feature extraction module, for extracting the first semantic feature of every sub-image in the scene classification training set;
a first scene classification model establishment module, for generating the first scene classification model from the first semantic features of the sub-images;
a first computing module, for computing the first classification-result probability vector with the first scene classification model;
a second semantic feature extraction module, for extracting the second semantic feature of every sub-image in the scene classification training set;
a second scene classification model establishment module, for generating the second scene classification model from the second semantic features of the sub-images;
a second computing module, for computing the second classification-result probability vector with the second scene classification model;
a third computing module, for fusing the first and second classification-result probability vectors with the score-level fusion method to obtain the third classification-result probability vector;
a third scene classification model establishment module, for generating the third scene classification model from the third classification-result probability vector;
a decision module, for deciding the scene category of the remote sensing image to be classified with the third scene classification model.
With the above technical solution, the invention offers the following advantage: the scene classification method and system for remote sensing images provided by the invention can exploit the mid-level and high-level semantic features of a high-resolution scene image simultaneously, fuse the two organically, and improve the accuracy of scene classification.
The above summary is for illustration only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments and features described above, further aspects, embodiments and features of the invention will become readily apparent by reference to the accompanying drawings and the following detailed description.
Brief description of the drawings
In the drawings, unless otherwise specified, the same reference numbers denote the same or similar parts or elements throughout. The drawings are not necessarily drawn to scale. It should be understood that they depict only some embodiments disclosed according to the invention and should not be taken as limiting its scope.
Fig. 1 is a flow chart of the scene classification method for remote sensing images provided by an embodiment of the invention.
Fig. 2 shows the DBN network structure used in the embodiment of the invention.
Fig. 3 shows the RBM network structure used in the embodiment of the invention.
Fig. 4 is a structural diagram of the scene classification system for remote sensing images provided by an embodiment of the invention.
Detailed description of the embodiments
In the following, only some exemplary embodiments are briefly described. As those skilled in the art will recognise, the described embodiments may be modified in various different ways without departing from the spirit or scope of the invention. Accordingly, the drawings and description are to be regarded as illustrative in nature rather than restrictive.
The high-resolution remote sensing scene images used in this embodiment may be any remote sensing data with a spatial resolution finer than 5 meters.
As shown in Fig. 1, the scene classification method for high-resolution remote sensing images of this embodiment comprises:
Step S01: generating a set of scene sub-images from the given remote sensing image, and establishing a scene classification training set.
In this embodiment, the scene classification training set is built as follows: several scene categories are defined and numbered according to the scene type of the given remote sensing image, and a number of sub-images are randomly selected for each category as the scene classification training set. For example, if the scene type of the given image is urban, 10 scene categories such as building, road, water body and green space are defined and numbered 1, 2, 3, ..., 10; for each scene category, 200 sub-images are randomly selected, and the resulting 2000 sub-images form the scene classification training sample set.
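The patent does not fix how the sub-image set is generated from the source image; the sketch below assumes a non-overlapping regular-grid tiling, which is one common choice. The function name and tile size are illustrative, not taken from the patent.

```python
import numpy as np

def generate_subimages(image, size):
    """Tile an image into non-overlapping size x size sub-images
    over a regular grid (border strips that do not fit are dropped)."""
    h, w = image.shape[:2]
    tiles = []
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            tiles.append(image[r:r + size, c:c + size])
    return tiles

# A 512 x 512 single-band image yields an 8 x 8 grid of 64 x 64 tiles.
img = np.zeros((512, 512))
tiles = generate_subimages(img, 64)
assert len(tiles) == 64 and tiles[0].shape == (64, 64)
```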
Step S02: extracting the first semantic feature of every sub-image in the scene classification training set.
The first semantic feature of this embodiment is a mid-level semantic feature, obtained by computing the histogram feature of each sub-image with the BOVW method, as follows:
For each sub-image in the scene classification training set, a principal component analysis (PCA) transform is applied, and the first principal component image after the PCA transform is sampled on a regular grid, yielding image patches of uniform size N × N (N = 16 in this embodiment) at a patch interval of M × M (M = 8 in this embodiment), where N is a positive integer and an integer power of 2, and M is a positive integer. For each sampled patch, its scale-invariant feature transform (SIFT) descriptor is extracted; this feature is computed from the gradient directions over the whole patch region. The SIFT feature vectors of all sub-images in the training set are clustered with K-means; each cluster centre obtained is treated as a visual word, and the values of the K cluster centres (K = 250 in this embodiment) together with their corresponding visual word numbers form the visual vocabulary, K being a positive integer. With the visual word mapping method, the SIFT feature of every patch contained in the PCA first principal component image is mapped to its corresponding visual word: the Euclidean distance between the SIFT feature of each patch and the feature value of every visual word in the vocabulary is computed, and the number of the word with the minimum Euclidean distance is taken as the mapping result of that patch. Finally, the number of occurrences of each vocabulary word within the scene image region is counted, and the vector [f1, ..., fj, ..., fK] representing the visual-word-bag histogram feature of the scene image is the first semantic feature.
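The BOVW chain described above (dense local descriptors → K-means vocabulary → nearest-word histogram) can be sketched as follows. Random vectors stand in for the dense SIFT descriptors, and K = 10 replaces the embodiment's K = 250 for speed; `build_vocabulary` and `bovw_histogram` are illustrative names, not patent terminology.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptors, k, seed=0):
    """Cluster local descriptors (n, d) into k visual words."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    km.fit(descriptors)
    return km.cluster_centers_

def bovw_histogram(descriptors, vocabulary):
    """Map each descriptor to its nearest visual word (minimum
    Euclidean distance) and count word occurrences: the BOVW feature."""
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    return np.bincount(words, minlength=len(vocabulary))

rng = np.random.default_rng(0)
descs = rng.normal(size=(300, 128))   # stand-in for 128-dim dense SIFT
vocab = build_vocabulary(descs, k=10)
h = bovw_histogram(descs, vocab)
assert h.sum() == 300 and h.shape == (10,)
```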
Step S03: establishing the first scene classification model from the first semantic features of the sub-images.
Step S04: computing the first classification-result probability vector with the first scene classification model.
In this embodiment the first scene classification model is a DBN network model. Specifically, the first semantic feature of every sub-image together with the scene category number of that sub-image is taken as training data, a DBN network is designed with the DBN algorithm and then trained, and the trained DBN network is the first scene classification model.
Steps S03 and S04 are described in detail with reference to Fig. 2 and Fig. 3.
With the first semantic feature of every sub-image and its corresponding scene category number as training data, a DBN network is designed. The DBN consists of several layers of unsupervised restricted Boltzmann machines (RBM) and one layer of supervised back-propagation (BP) network. An RBM is composed of a visible layer (usually denoted v) and a hidden layer (usually denoted h); only visible-layer nodes and hidden-layer nodes are connected by weights, and nodes within the same layer are not connected. The initial DBN structure is obtained by fixing structural parameters such as the network depth, i.e. the number of RBM layers in the middle (3 in this embodiment), the number of input nodes (determined by the feature vector dimension, 250 in this embodiment) and the number of output nodes (determined by the number of scene categories, 10 in this embodiment). The DBN model is then trained in two stages, "pre-training" and "fine-tuning". In the pre-training stage the RBMs are trained layer by layer in an unsupervised fashion, the hidden-layer output of the lower RBM serving as the visible-layer input of the next, which ensures an optimal feature-vector mapping. In the fine-tuning stage the top BP layer is trained with supervised learning, and the error between the actual and the expected output is back-propagated layer by layer to fine-tune the weights of the whole DBN. Finally, class prediction is performed on the training data with the trained DBN, and the output-layer result [M1, M2, ..., Mm] is taken as the first classification-result probability vector.
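As a rough stand-in for the DBN of steps S03 and S04, greedy layer-wise RBM pre-training followed by a supervised output layer can be approximated with scikit-learn's `BernoulliRBM` stacked in a pipeline. This is only an approximation of the described model: the pipeline does not back-propagate the fine-tuning error through the RBM layers as the patent's DBN does, and the logistic-regression output stands in for the BP layer; layer widths are illustrative.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((100, 250))     # stand-in 250-dim BOVW histograms, in [0, 1]
y = np.arange(100) % 10        # 10 scene-category labels

# Three stacked RBMs (greedy layer-wise pretraining) plus a supervised
# output layer standing in for the BP fine-tuning stage of the DBN.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, n_iter=5, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, n_iter=5, random_state=0)),
    ("rbm3", BernoulliRBM(n_components=32, n_iter=5, random_state=0)),
    ("out", LogisticRegression(max_iter=200)),
])
dbn.fit(X, y)
proba = dbn.predict_proba(X)   # rows play the role of [M1, ..., Mm]
assert proba.shape == (100, 10)
assert np.allclose(proba.sum(axis=1), 1.0)
```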
Step S05: extracting the second semantic feature of every sub-image in the scene classification training set.
The second semantic feature of this embodiment is a high-level semantic feature and must be extracted with a CNN network model. The output of the last fully connected layer before the CNN output layer is used as a feature extractor, and the second semantic feature is extracted from every sub-image in the scene classification training set. The CNN model is an existing, publicly released and pre-trained network model such as AlexNet or GoogLeNet; this embodiment selects GoogLeNet.
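The "penultimate fully connected layer as feature extractor" idea of step S05 is framework-agnostic. The toy numpy network below (random weights, nothing like GoogLeNet) only illustrates reading out the activations of the last layer before the output layer as the feature vector; in practice one would load a pre-trained GoogLeNet in a deep-learning framework and take its 1024-dimensional penultimate activation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
# Toy stand-in for a pretrained network: two hidden layers and a
# 10-way output layer. All weights are random for illustration only.
W1 = rng.normal(size=(64, 32))
W2 = rng.normal(size=(32, 16))
W3 = rng.normal(size=(16, 10))

def forward(x, return_features=False):
    h1 = relu(x @ W1)
    h2 = relu(h1 @ W2)      # last layer before the output layer
    if return_features:
        return h2           # the "second semantic feature"
    return h2 @ W3          # class scores (output layer)

x = rng.normal(size=(1, 64))         # stand-in for one sub-image
feat = forward(x, return_features=True)
assert feat.shape == (1, 16)
```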
Step S06: establishing the second scene classification model from the second semantic features of the sub-images.
Step S07: computing the second classification-result probability vector with the second scene classification model.
In this embodiment the second scene classification model is again a DBN network model. Specifically, the second semantic feature of every sub-image in the scene classification training set together with the scene category number of that sub-image is taken as training data, and a DBN structure is designed by fixing structural parameters such as the number of middle RBM layers (3 in this embodiment), the number of input nodes (1024 in this embodiment) and the number of output nodes (10 in this embodiment). The DBN model is trained with the procedure described in step S03; class prediction is performed on the training data with the trained DBN network, and the output-layer result [H1, H2, ..., Hh] is taken as the second classification-result probability vector.
Step S08: fusing the first and second classification-result probability vectors with the score-level fusion method to obtain the third classification-result probability vector.
Specifically, the first classification-result probability vector [M1, M2, ..., Mm] and the second classification-result probability vector [H1, H2, ..., Hh] are concatenated directly, realising score-level fusion of the feature vectors and yielding the third classification-result probability vector [M1, M2, ..., Mm, H1, H2, ..., Hh].
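Score-level fusion as described is plain concatenation of the two branch probability vectors; a minimal sketch with 3-class toy vectors:

```python
import numpy as np

p1 = np.array([0.7, 0.1, 0.2])   # first-branch (BOVW/DBN) probabilities
p2 = np.array([0.6, 0.3, 0.1])   # second-branch (CNN/DBN) probabilities
p3 = np.concatenate([p1, p2])    # score-level fusion: [M1..Mm, H1..Hh]
assert p3.shape == (6,)
```

Concatenation (rather than, say, averaging) preserves the full score profile of both branches, leaving it to the third DBN — whose input layer in the embodiment has 10 + 10 = 20 nodes — to learn how to weight the two branches.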
Step S09: establishing the third scene classification model from the third classification-result probability vector.
In this embodiment the third scene classification model is again a DBN network model. Specifically, the third classification-result probability vector of every sub-image together with the scene category number of that sub-image is taken as training data, and a DBN network is designed with the DBN algorithm (in this embodiment the network depth is 3, the number of input nodes is 10 + 10 = 20, and the number of output nodes is 10). The DBN model is trained with the procedure described in step S03; the trained DBN network model is the third scene classification model.
Step S10: deciding the scene category of the remote sensing image to be classified with the third scene classification model.
Specifically, the decision method comprises:
Step S1001: the remote sensing image to be classified is split into a set of sub-images to be classified; for each sub-image, its histogram feature is computed with the BOVW method to obtain the first semantic feature of the sub-image to be classified (see step S02 for details).
Step S1002: the first semantic feature of every sub-image to be classified, together with the scene category number of that sub-image, is taken as training data; a DBN network is designed with the DBN algorithm and trained; class prediction is performed on the training data with the trained DBN network, and the output-layer result is taken as the first classification-result probability vector of the image to be classified (see steps S03 and S04).
Step S1003: the output of the last fully connected layer before the CNN output layer is used as the second-semantic-feature extractor, and the second semantic feature is extracted from every sub-image to be classified (see step S05).
Step S1004: the second semantic feature of every sub-image to be classified, together with the scene category number of that sub-image, is taken as training data; a DBN network is designed with the DBN algorithm and trained; class prediction is performed on the training data with the trained DBN network, and the output-layer result is taken as the second classification-result probability vector of the image to be classified (see steps S06 and S07).
Step S1005: the first and second classification-result probability vectors of the image to be classified are fused with the score-level fusion method to obtain its third classification-result probability vector (see step S08).
Step S1006: the third classification-result probability vector of the image to be classified is fed, as a feature vector, into the third scene classification model of step S09 for judgment, yielding the scene classification result of the remote sensing image to be classified.
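The decision chain of steps S1001 to S1006 can be sketched as a composition of the three trained models. All callables below are toy stand-ins with 3 classes; the real ones would be the trained DBNs and the BOVW and CNN feature extractors.

```python
import numpy as np

def classify_scene(subimage, bovw_feat, cnn_feat, model1, model2, model3):
    """Steps S1001-S1006 in miniature: each branch model maps its
    feature to a class-probability vector, the two vectors are fused
    by concatenation, and the third model judges the fused vector."""
    p1 = model1(bovw_feat(subimage))   # first classification-result vector
    p2 = model2(cnn_feat(subimage))    # second classification-result vector
    p3 = np.concatenate([p1, p2])      # score-level fusion (step S1005)
    return int(np.argmax(model3(p3)))  # judgment by the third model

# Toy stand-ins: identity "extractors" and softmax "models".
softmax = lambda z: np.exp(z) / np.exp(z).sum()
m1 = lambda f: softmax(f[:3])
m2 = lambda f: softmax(f[:3])
m3 = lambda p: p[:3] + p[3:]           # toy scorer over the fused vector
label = classify_scene(np.arange(6.0), lambda s: s, lambda s: s, m1, m2, m3)
assert label == 2
```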
As shown in Fig. 4, the invention also provides a system that can implement the above method, comprising:
a sub-image generation module 101, for generating a set of scene sub-images from a given remote sensing image;
a scene classification training set establishment module 102, for establishing the scene classification training set;
a first semantic feature extraction module 103, for extracting the first semantic feature of every sub-image in the scene classification training set;
a first scene classification model establishment module 104, for generating the first scene classification model from the first semantic features of the sub-images;
a first computing module 105, for computing the first classification-result probability vector with the first scene classification model;
a second semantic feature extraction module 106, for extracting the second semantic feature of every sub-image in the scene classification training set;
a second scene classification model establishment module 107, for generating the second scene classification model from the second semantic features of the sub-images;
a second computing module 108, for computing the second classification-result probability vector with the second scene classification model;
a third computing module 109, for fusing the first and second classification-result probability vectors with the score-level fusion method to obtain the third classification-result probability vector;
a third scene classification model establishment module 1010, for generating the third scene classification model from the third classification-result probability vector;
a decision module 1011, for deciding the scene category of the remote sensing image to be classified with the third scene classification model.
Further, the scene classification training set establishment module 102 also comprises: a category definition module, for defining and numbering several scene categories according to the scene type of the given remote sensing image; and a sample acquisition module, for randomly selecting a number of sub-images for each scene category as the samples of the scene classification training set.
Further, the first semantic feature is the histogram feature obtained with the BOVW method, a mid-level semantic feature. The first scene classification model is the DBN network model trained on the first semantic feature of every sub-image together with the scene category number of that sub-image. The second semantic feature is a high-level semantic feature, obtained by using the output of the last fully connected layer before the CNN output layer as a feature extractor. The second scene classification model is the DBN network model trained on the second semantic feature of every sub-image together with the scene category number of that sub-image.
Further, the decision module 1011 also comprises:
an image acquisition module 10111, for obtaining the remote sensing image to be classified;
a feature vector acquisition module 10112, for passing the image to be classified through the first semantic feature extraction module 103, the first scene classification model establishment module 104, the first computing module 105, the second semantic feature extraction module 106, the second scene classification model establishment module 107, the second computing module 108 and the third computing module 109, obtaining the third classification-result probability vector of the image to be classified as its feature vector;
and a comparison module 10113, for feeding the feature vector of the image to be classified into the third scene classification model for comparison, obtaining the final scene category.
The scene classification method and system for remote sensing images provided by the embodiments of the present invention can exploit both the mid-level and high-level semantic features of a high-resolution scene image, fusing the two organically and improving the accuracy of scene classification.
The foregoing is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any person skilled in the art can readily conceive of various changes or replacements within the technical scope disclosed by the present invention, and these should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the protection scope of the claims.
Claims (11)
1. A scene classification method for high-resolution remote sensing images, characterized by comprising:
generating a scene sub-image set from a given remote sensing image, and establishing a scene classification training set;
extracting the first semantic feature of each sub-image in the scene classification training set;
establishing a first scene classification model according to the first semantic feature of each sub-image;
calculating a first classification result probability vector based on the first scene classification model;
extracting the second semantic feature of each sub-image in the scene classification training set;
establishing a second scene classification model according to the second semantic feature of each sub-image;
calculating a second classification result probability vector based on the second scene classification model;
fusing the first classification result probability vector and the second classification result probability vector using a score-level fusion method, obtaining a third classification result probability vector;
establishing a third scene classification model according to the third classification result probability vector;
and determining the scene category of a remote sensing image to be classified according to the third scene classification model.
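The patent does not specify how the scene sub-image set is generated from the large remote sensing image; a straightforward reading is a regular tiling into fixed-size patches. The sketch below illustrates that assumption; the tile size and stride are illustrative choices, not values from the patent:

```python
import numpy as np

def generate_subimages(image, tile=64, stride=64):
    """Cut a large remote sensing image (H x W [x C] array) into a set of
    fixed-size scene sub-images by sliding a window over it."""
    h, w = image.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            tiles.append(image[y:y + tile, x:x + tile])
    return tiles

# A synthetic 256 x 256 single-band image yields a 4 x 4 grid of 64 x 64 tiles
img = np.zeros((256, 256), dtype=np.uint8)
subimages = generate_subimages(img, tile=64, stride=64)
```

A stride smaller than the tile size would give overlapping sub-images, which increases the number of training samples at the cost of redundancy.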
2. The remote sensing image scene classification method according to claim 1, characterized in that said establishing a scene classification training set comprises:
defining and numbering several scene categories according to the scene types of the given remote sensing image, and randomly selecting a number of sub-images for each category to form the scene classification training set.
3. The remote sensing image scene classification method according to claim 2, characterized in that said extracting the first semantic feature of each sub-image in the scene classification training set comprises:
for each sub-image in the scene classification training set, calculating its histogram feature with the BOVW method to obtain the first semantic feature.
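A BOVW histogram is built by quantizing local descriptors against a learned codebook of visual words and counting the assignments. The snippet below is a simplified numpy illustration; the descriptor dimensionality, codebook size, and random data are assumptions standing in for real local descriptors (e.g. SIFT) and a k-means codebook:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantize local descriptors against a visual-word codebook and return
    an L1-normalized word-frequency histogram (the mid-level 'first
    semantic feature')."""
    # Euclidean distance from every descriptor to every codeword
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)  # hard assignment to nearest visual word
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 16))       # 8 visual words, 16-D descriptors
descriptors = rng.normal(size=(100, 16))  # local descriptors of one sub-image
feature = bovw_histogram(descriptors, codebook)
```

The resulting fixed-length histogram is what makes sub-images of different content comparable as inputs to a single classifier.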
4. The remote sensing image scene classification method according to claim 3, characterized in that said establishing a first scene classification model comprises:
taking the first semantic feature of each sub-image together with the scene category number corresponding to that sub-image as training data, designing a DBN network with the DBN algorithm, and then training the DBN network; the trained DBN network is the first scene classification model.
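One common DBN construction is greedy layer-wise pretraining of stacked restricted Boltzmann machines followed by a supervised softmax output layer. The sketch below approximates this with scikit-learn's `BernoulliRBM` layers feeding a logistic-regression output; the layer sizes, hyperparameters, and synthetic training data are illustrative assumptions, not the patent's configuration:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training data: BOVW-style features in [0, 1] plus scene
# category numbers, standing in for the first semantic features and labels.
rng = np.random.default_rng(0)
X = rng.random((60, 32))
y = np.arange(60) % 3  # three scene categories, evenly represented

# Stacked RBMs (greedy layer-wise feature learning) + softmax output layer.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=24, learning_rate=0.05, n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=10, random_state=0)),
    ("out", LogisticRegression(max_iter=500)),
])
dbn.fit(X, y)

# The output-layer result for each sample is a classification probability
# vector, as used by claim 5.
proba = dbn.predict_proba(X)
```

Note that scikit-learn's RBMs are only pretrained as feature transformers; a full DBN implementation would also fine-tune the stacked weights with backpropagation.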
5. The remote sensing image scene classification method according to claim 4, characterized in that said calculating a first classification result probability vector comprises:
performing classification prediction on the training data with the trained DBN network, and taking the result of the DBN network output layer as the first classification result probability vector.
6. The remote sensing image scene classification method according to claim 5, characterized in that said extracting the second semantic feature of each sub-image in the scene classification training set comprises:
using the output of the last fully connected layer before the output layer of a CNN network as a feature extractor, extracting the second semantic feature of each sub-image.
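Using a CNN as a feature extractor means running the image through a classification network but reading off the activations of the last fully connected layer instead of the output layer. In practice this is done by truncating a pretrained network; the numpy sketch below only illustrates the truncation idea on a toy two-layer tail with random weights (all shapes and weights are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the tail of a CNN: flattened conv features -> last fully
# connected layer -> output (softmax) layer. Random weights here; in
# practice they come from a pretrained network.
W_fc, b_fc = rng.normal(size=(512, 128)), np.zeros(128)  # last FC layer
W_out, b_out = rng.normal(size=(128, 10)), np.zeros(10)  # output layer

def relu(x):
    return np.maximum(x, 0.0)

def extract_feature(conv_features):
    """Second semantic feature: the last-FC-layer activation, ignoring the
    output layer entirely."""
    return relu(conv_features @ W_fc + b_fc)

def classify(conv_features):
    """Full forward pass, shown only to contrast with extract_feature."""
    h = extract_feature(conv_features)
    logits = h @ W_out + b_out
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = rng.normal(size=512)      # flattened conv-feature vector of one sub-image
feature = extract_feature(x)  # 128-D high-level semantic feature
```

The penultimate-layer activation is used because it summarizes high-level image content while remaining class-agnostic, making it reusable as a generic semantic descriptor.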
7. The remote sensing image scene classification method according to claim 6, characterized in that said establishing a second scene classification model comprises:
taking the second semantic feature of each sub-image together with the scene category number corresponding to that sub-image as training data, designing a DBN network with the DBN algorithm, and then training the DBN network; the trained DBN network is the second scene classification model.
8. The remote sensing image scene classification method according to claim 7, characterized in that said calculating a second classification result probability vector comprises:
performing classification prediction on the training data with the trained DBN network, and taking the result of the DBN network output layer as the second classification result probability vector.
9. The remote sensing image scene classification method according to claim 8, characterized in that said establishing a third scene classification model comprises:
taking the third classification result probability vector of each sub-image together with the scene category number corresponding to that sub-image as training data, designing a DBN network with the DBN algorithm, and then training the DBN network; the trained DBN network is the third scene classification model.
10. The remote sensing image scene classification method according to claim 9, characterized in that said determining the scene category of a remote sensing image to be classified according to the third scene classification model comprises:
generating a sub-image set to be classified from the remote sensing image to be classified, and, for each sub-image, calculating its histogram feature with the BOVW method to obtain the first semantic feature of the sub-image to be classified;
taking the first semantic feature of each sub-image to be classified together with the scene category number corresponding to that sub-image as training data, designing a DBN network with the DBN algorithm, training the DBN network, performing classification prediction on the training data with the trained DBN network, and taking the result of the DBN network output layer as the first classification result probability vector of the remote sensing image to be classified;
using the output of the last fully connected layer before the output layer of the CNN network as a feature extractor, extracting the second semantic feature of each of the above sub-images to be classified;
taking the second semantic feature of each sub-image to be classified together with the scene category number corresponding to that sub-image as training data, designing a DBN network with the DBN algorithm, training the DBN network, performing classification prediction on the training data with the trained DBN network, and taking the result of the DBN network output layer as the second classification result probability vector of the remote sensing image to be classified;
fusing the first classification result probability vector and the second classification result probability vector of the remote sensing image to be classified using the score-level fusion method, obtaining the third classification result probability vector of the remote sensing image to be classified;
and evaluating the third classification result probability vector of the remote sensing image to be classified with the third scene classification model, obtaining the scene classification result of the remote sensing image to be classified.
11. A scene classification system for high-resolution remote sensing images, characterized by comprising:
a sub-image generation module, configured to generate a scene sub-image set from a given remote sensing image;
a scene classification training set establishing module, configured to establish a scene classification training set;
a first semantic feature extraction module, configured to extract the first semantic feature of each sub-image in the scene classification training set;
a first scene classification model building module, configured to generate a first scene classification model from the first semantic feature of each sub-image;
a first computing module, configured to calculate a first classification result probability vector based on the first scene classification model;
a second semantic feature extraction module, configured to extract the second semantic feature of each sub-image in the scene classification training set;
a second scene classification model building module, configured to generate a second scene classification model from the second semantic feature of each sub-image;
a second computing module, configured to calculate a second classification result probability vector based on the second scene classification model;
a third computing module, configured to fuse the first classification result probability vector and the second classification result probability vector using a score-level fusion method, obtaining a third classification result probability vector;
a third scene classification model building module, configured to generate a third scene classification model from the third classification result probability vector;
and a determination module, configured to determine the scene category of a remote sensing image to be classified according to the third scene classification model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710340257.5A CN107169450A (en) | 2017-05-15 | 2017-05-15 | The scene classification method and system of a kind of high-resolution remote sensing image |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107169450A true CN107169450A (en) | 2017-09-15 |
Family
ID=59816302
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710340257.5A Pending CN107169450A (en) | 2017-05-15 | 2017-05-15 | The scene classification method and system of a kind of high-resolution remote sensing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107169450A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110059594A (en) * | 2019-04-02 | 2019-07-26 | 北京旷视科技有限公司 | A kind of environment sensing adapting to image recognition methods and device |
WO2020062191A1 (en) * | 2018-09-29 | 2020-04-02 | 华为技术有限公司 | Image processing method, apparatus and device |
CN111815529A (en) * | 2020-06-30 | 2020-10-23 | 上海电力大学 | Low-quality image classification enhancement method based on model fusion and data enhancement |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103984963A (en) * | 2014-05-30 | 2014-08-13 | 中国科学院遥感与数字地球研究所 | Method for classifying high-resolution remote sensing image scenes |
CN104657751A (en) * | 2015-03-12 | 2015-05-27 | 华北电力大学(保定) | Mainline direction feature based deep belief network image classification method |
CN105844296A (en) * | 2016-03-22 | 2016-08-10 | 西安电子科技大学 | CDCP (complete double-cross pattern) local descriptor-based remote sensing image scene classification method |
CN105956610A (en) * | 2016-04-22 | 2016-09-21 | 中国人民解放军军事医学科学院卫生装备研究所 | Remote sensing image landform classification method based on multi-layer coding structure |
Non-Patent Citations (2)
Title |
---|
JEAN-NICOLA BLANCHET等: "Automated Annotation of Corals in Natural Scene Images Using Multiple Texture Representations", 《PEERJ PREPRINTS》 * |
LIJUN ZHAO等: "Feature significance-based multibag-of-visual-words model for remote sensing image scene classification", 《APPLIED REMOTE SENSING》 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110135267B (en) | Large-scene SAR image fine target detection method | |
CN109344736B (en) | Static image crowd counting method based on joint learning | |
CN102622607B (en) | Remote sensing image classification method based on multi-feature fusion | |
CN106815604B (en) | Method for viewing points detecting based on fusion of multi-layer information | |
CN106408030B (en) | SAR image classification method based on middle layer semantic attribute and convolutional neural networks | |
CN103605972B (en) | Non-restricted environment face verification method based on block depth neural network | |
CN109255790A (en) | A kind of automatic image marking method of Weakly supervised semantic segmentation | |
Lucchi et al. | Are spatial and global constraints really necessary for segmentation? | |
CN104484681B (en) | Hyperspectral Remote Sensing Imagery Classification method based on spatial information and integrated study | |
CN106991382A (en) | A kind of remote sensing scene classification method | |
CN108549893A (en) | A kind of end-to-end recognition methods of the scene text of arbitrary shape | |
CN109376603A (en) | A kind of video frequency identifying method, device, computer equipment and storage medium | |
CN104680173B (en) | A kind of remote sensing images scene classification method | |
CN110083700A (en) | A kind of enterprise's public sentiment sensibility classification method and system based on convolutional neural networks | |
CN103679191B (en) | An automatic fake-licensed vehicle detection method based on static state pictures | |
Zhang et al. | Unsupervised difference representation learning for detecting multiple types of changes in multitemporal remote sensing images | |
CN107346328A (en) | A kind of cross-module state association learning method based on more granularity hierarchical networks | |
CN103955702A (en) | SAR image terrain classification method based on depth RBF network | |
CN105005789B (en) | A kind of remote sensing images terrain classification method of view-based access control model vocabulary | |
WO2012032788A1 (en) | Image recognition apparatus for objects in general and method therefor, using exclusive classifier | |
CN107480620A (en) | Remote sensing images automatic target recognition method based on heterogeneous characteristic fusion | |
CN113160062B (en) | Infrared image target detection method, device, equipment and storage medium | |
CN105184298A (en) | Image classification method through fast and locality-constrained low-rank coding process | |
CN111079374B (en) | Font generation method, apparatus and storage medium | |
CN109033944A (en) | A kind of all-sky aurora image classification and crucial partial structurtes localization method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170915 |