CN111191618A - KNN scene classification method and system based on matrix group - Google Patents

KNN scene classification method and system based on matrix group

Info

Publication number
CN111191618A
CN111191618A
Authority
CN
China
Prior art keywords
samples
sample
training
matrix
test
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010002529.2A
Other languages
Chinese (zh)
Inventor
徐承俊
朱国宾
舒静倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202010002529.2A priority Critical patent/CN111191618A/en
Publication of CN111191618A publication Critical patent/CN111191618A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a KNN scene classification method and system based on a matrix group. A remote sensing data set to be processed is obtained and divided proportionally into a training data file and a test data file; the training samples are projected to the Lie group manifold space; the inner mean of the Lie group samples of each class is calculated; a sample is randomly taken from the test data and projected to obtain its Lie group inner mean; the distances between this mean and the inner means of all known classes are calculated; the distances are sorted in ascending order, and the class samples corresponding to the first K smallest distances are taken; the class with the largest number of these samples is assigned to the test sample. The invention has the following advantages: (1) the method handles both vector samples and matrix samples, and has good extensibility; (2) the method uses the norm distance in place of the Euclidean distance and therefore resists noise well; the sample dimensionality is low, and the computational performance is good.

Description

KNN scene classification method and system based on matrix group
Technical Field
The invention relates to the field of remote sensing image processing and scene classification, in particular to a KNN scene classification method and system based on a matrix group.
Background
Driven by advances in optical remote sensing imaging technology and the rapid development of various sensors, high-resolution remote sensing images can now be obtained more quickly and easily than before, giving a better understanding of the living environment. High Spatial Resolution (HSR) remote sensing images provide a great deal of meaningful information about the structure, texture, shape, and contour of objects, which helps improve the recognition accuracy of remote sensing images. Meanwhile, the number of remote sensing image data sets also keeps growing. However, existing remote sensing image classification methods find it difficult to achieve high accuracy while maintaining good storage and computational performance. Therefore, the classification of remote sensing images remains an important research task.
Remote sensing images play a prominent role in urban land cover detection, urban green space detection, target detection, environmental monitoring, gas pollution, and other fields. The commonly used classification methods are: (1) the KNN classification method: it requires no training, and the class of a test sample is decided by computing the Euclidean distances between the test sample and K samples of known classes. (2) Deep learning methods: a Convolutional Neural Network (CNN) is composed of different types of layers, such as convolutional layers, pooling layers, and fully connected (FC) layers, and uses weight sharing; each layer converts its input into an output through neuron activations.
The above classification methods have the following disadvantages: (1) traditional KNN is only suitable for computations in a vector sample space and cannot handle a matrix sample space. Furthermore, traditional KNN uses the Euclidean distance between the test sample and known samples, which treats every sample equally and is sensitive to individual noise samples, causing misclassification. (2) Deep learning methods need large data sets for complex computation; they are time-consuming, have huge numbers of parameters, and their training and learning process, largely carried out by a framework, cannot easily be intervened in, so their interpretability and comprehensibility are weak; the feature dimensionality of images is high, GPU-assisted computation is generally required, and the hardware requirements are high.
Disclosure of Invention
The invention provides a KNN scene classification method based on a matrix group, which is used to solve the problems identified in the background art: the limitations of vector space image representation and Euclidean distance computation, high feature dimensionality, poor comprehensibility and interpretability, and complex computation.
In order to achieve the above object, the technical scheme of the KNN scene classification method based on the matrix group of the present invention comprises the following specific steps:
Step1, acquire the remote sensing image data set to be processed, and divide it into a training set and a test set.
Step2, convert the training set and the test set into a training data file and a test data file respectively.
Step3, project the image set in the training data file to the Lie group manifold space to obtain a Lie group sample set.
Step4, calculate the inner mean X̄_i of the Lie group samples of each class.
Step5, randomly take a sample T_test from the test data file, project T_test to the Lie group manifold space, and calculate its Lie group inner mean X̄_test.
Step6, calculate the distance d(X̄_test, X̄_i) between X̄_test and the inner means X̄_i of all known classes of Lie group samples.
Step7, sort in ascending order of distance and take the class samples corresponding to the first K smallest distances.
Step8, assign to the test sample the class that occurs most frequently among those class samples.
Further, in the KNN scene classification method based on the matrix group of the present invention, Step3 specifically comprises:
Step31, perform the Lie group mapping on each sample: x_ij = exp(M_ij), where M_ij denotes the j-th sample of the i-th class in the training data file and x_ij denotes the j-th sample of the i-th class in the Lie group training sample set.
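The Lie group mapping x_ij = exp(M_ij) above can be sketched with the matrix exponential. A minimal illustrative implementation, assuming (as the description later states) that the feature matrices are real symmetric, so the exponential can be computed through an eigendecomposition:

```python
import numpy as np

def lie_group_map(M):
    """Project a real symmetric feature matrix M to the Lie group
    manifold via the matrix exponential x = exp(M).
    Illustrative sketch of the patent's mapping x_ij = exp(M_ij);
    the eigendecomposition route assumes M is symmetric."""
    w, V = np.linalg.eigh(M)          # M = V diag(w) V^T
    return (V * np.exp(w)) @ V.T      # exp(M) = V diag(exp(w)) V^T

# exp of the zero matrix is the identity
print(np.allclose(lie_group_map(np.zeros((3, 3))), np.eye(3)))  # True
```

For non-symmetric matrix samples, a general matrix exponential (e.g. a Padé-approximation routine) would be used instead; the eigendecomposition form above is only valid for the symmetric case.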
Further, in the KNN scene classification method based on the matrix group of the present invention, Step4 specifically comprises:
Step41, calculate the inner mean X̄_i of each class of Lie group samples, where x_ij denotes the j-th sample in the i-th class of the Lie group training sample set, n_i denotes the number of training samples in the i-th class, and c denotes the total number of classes. The resulting inner mean X̄_i is a matrix, i.e. X̄_i = (t_kl), where t_kl denotes the entry in the k-th row and l-th column, with k = 1, …, m and l = 1, …, n; X̄_i is an m × n matrix.
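The patent's defining formula for the inner mean is rendered as an image in the original and is not recoverable here. One standard intrinsic mean on a matrix Lie group, offered purely as an assumed sketch, is the log-Euclidean mean exp((1/n_i) Σ_j log(x_ij)), again specialized to symmetric positive-definite samples:

```python
import numpy as np

def sym_logm(X):
    # matrix logarithm of a symmetric positive-definite matrix
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def sym_expm(A):
    # matrix exponential of a symmetric matrix
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def inner_mean(samples):
    """Log-Euclidean intrinsic mean of Lie group samples x_i1..x_in_i:
    exp( (1/n_i) * sum_j log(x_ij) ).  Assumed form only; the
    patent's own inner-mean formula appears as an image."""
    logs = [sym_logm(x) for x in samples]
    return sym_expm(np.mean(logs, axis=0))
```

As the description states, the result is itself a matrix of the same shape as the samples, so it can be compared against a projected test sample with a matrix norm.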
Further, in the KNN scene classification method based on the matrix group of the present invention, Step6 specifically comprises:
calculate the norm distance d(X̄_test, X̄_i) between X̄_test and the inner mean X̄_i of every known class of Lie group samples, for i = 1, …, c.
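The patent specifies a norm distance in place of the Euclidean distance, but the exact norm is given as an image. A minimal sketch assuming the Frobenius norm, one natural choice for matrix-valued means:

```python
import numpy as np

def norm_distance(A, B):
    """Frobenius-norm distance between two inner-mean matrices.
    Assumed choice of norm; the patent's formula is not recoverable
    from the source."""
    return np.linalg.norm(A - B, ord="fro")

# distance between the 2x2 identity and the zero matrix is sqrt(2)
print(norm_distance(np.eye(2), np.zeros((2, 2))))
```

Because the distance is computed between one small inner-mean matrix per class rather than between the test sample and every training sample, only c distance evaluations are needed per test sample.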
The invention also provides a KNN scene classification system based on the matrix group, which comprises the following modules:
a remote sensing image data set reading and processing module, for acquiring the remote sensing image data set to be processed and dividing it into a training set and a test set;
a data conversion module, for converting the training set and the test set into a training data file and a test data file respectively;
a data set projection module, for projecting the samples in the training data file to the Lie group manifold space to obtain a Lie group sample set;
a sample Lie group inner mean module, for calculating the inner mean X̄_i of each class of Lie group samples;
a test sample inner mean calculation module, for randomly taking a sample T_test from the test data file, projecting T_test to the Lie group manifold space, and calculating its Lie group inner mean X̄_test;
a distance calculation module, for calculating the distance d(X̄_test, X̄_i) between X̄_test and the inner means of all known classes of Lie group samples;
a sorting and searching module, for sorting in ascending order of distance and taking the class samples corresponding to the first K smallest distances;
a judging module, for assigning to the test sample the class that occurs most frequently among those class samples.
Further, the data set projection module specifically comprises:
performing the Lie group mapping on each sample: x_ij = exp(M_ij), where M_ij denotes the j-th sample of the i-th class in the training data file and x_ij denotes the j-th sample of the i-th class in the Lie group training sample set.
Further, the sample Lie group inner mean module specifically comprises:
calculating the inner mean X̄_i of each class of Lie group samples, where x_ij denotes the j-th sample in the i-th class of the Lie group training sample set, n_i denotes the number of training samples in the i-th class, and c denotes the total number of classes. The resulting inner mean X̄_i is a matrix, i.e. X̄_i = (t_kl), where t_kl denotes the entry in the k-th row and l-th column, with k = 1, …, m and l = 1, …, n; X̄_i is an m × n matrix.
Further, the distance calculation module specifically comprises:
calculating the norm distance d(X̄_test, X̄_i) between X̄_test and the inner mean X̄_i of every known class of Lie group samples, for i = 1, …, c.
Compared with the prior art, the invention has the following beneficial effects. The method represents the features of the data set samples by the sample inner mean, which gives good self-explanatory capability, enhances comprehensibility, and addresses the poor interpretability and comprehensibility of deep learning. The method adapts to different application scenarios and experimental hardware environments and is robust. The features are represented by a real symmetric matrix, which has low dimensionality and fast computation, addressing the high dimensionality, large feature count, and low computational efficiency of deep learning. The feature matrix representation captures not only the features themselves (on the main diagonal) but also the spatial relationships between a feature and its neighbors (in the off-diagonal entries), which addresses the loss of spatial information. In addition, the method supports spatial feature computation for both vector samples and matrix samples, and using the norm distance in place of the Euclidean distance mitigates the influence of noisy data on classification. The method maintains high accuracy and good computational performance and can serve as a reference for similar research.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the present invention is further described below with reference to the accompanying drawings and the embodiments.
FIG. 1 is a simplified flow chart of the remote sensing image scene classification method of the present invention;
FIG. 2 is a schematic diagram of the confusion matrix on the SIRI-WHU data set;
FIG. 3 is a schematic diagram of the confusion matrix on the UC Merced data set.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The detailed description of the embodiments of the present invention generally described and illustrated in the figures herein is not intended to limit the scope of the invention, which is claimed, but is merely representative of selected embodiments of the invention.
It should be noted that: like reference symbols in the following drawings indicate like items, and thus, once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings.
Referring to fig. 1, fig. 1 is a simplified flow chart of the remote sensing image scene classification method provided by the present invention. The embodiment is particularly suitable for classifying high-resolution remote sensing image scenes and is executed in a Lie group machine learning manifold space development environment.
Step1, this embodiment downloads the UC Merced data set from Google Earth; the data set contains 21 categories, each containing 100 pictures, each a 256 × 256 (pixels) high-resolution remote sensing image. The SIRI-WHU data set was downloaded from the official website of the State Key Laboratory at Wuhan University; it contains 12 categories, each containing 200 pictures, each a 200 × 200 (pixels) high-resolution remote sensing image. For the classification test, the two data sets were each divided, by a MATLAB program, into two mutually exclusive sets: a random 70% of the images in each remote sensing data set are used to train the model, and the remaining 30% serve as the test set to verify the accuracy of the model.
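The 70/30 split into mutually exclusive sets can be sketched as follows. This is an illustrative Python equivalent of the embodiment's MATLAB split; the function name and seed are assumptions:

```python
import random

def split_dataset(images, train_frac=0.7, seed=42):
    """Randomly split a list of images into mutually exclusive
    training and test subsets (70%/30% in the embodiment).
    Illustrative sketch; the embodiment performs this step in MATLAB."""
    rng = random.Random(seed)       # fixed seed for reproducibility
    shuffled = list(images)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

train, test = split_dataset(list(range(100)))
print(len(train), len(test))  # 70 30
```

Shuffling before cutting ensures the two subsets are disjoint and together cover the whole data set, so no image leaks from training into testing.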
It should be noted that the picture data in this embodiment have the following advantages: (1) the data volume is large and the categories are many, which is necessary for Lie group machine learning; (2) the sample images are diverse: this embodiment uses standard data sets covering different scenes in multiple countries and regions, and the data set images were strictly screened for different weather, seasons, angles, illumination, and definition, so that the observation angles and other properties of the images in each category differ considerably.
In addition, a comparison of the UC Merced and SIRI-WHU data sets with existing high-resolution remote sensing image data sets is shown in the table below; it can be seen that the data sets selected in this embodiment comprehensively consider the image types and the number of categories. The two data sets allow a more objective evaluation of the relevant models and algorithms, which benefits work in remote sensing image scene classification, as shown in Table 1.
Table 1 data set detailed information table
(The table content is rendered as an image in the original publication.)
Step2, converting the training set and the test set into a training data file and a test data file respectively;
step3, the embodiment of the invention constructs the mapping of data samples to lie group samples.
And (3) carrying out lith group mapping on each sample: x is the number ofij=exp(Mij) Wherein M isijJ-th lie group sample, x, representing the i-th class in the training data fileijRepresenting the jth sample in the ith class in the training sample set of the lie group.
Step4, calculate the inner mean X̄_i of each class of Lie group samples, where x_ij denotes the j-th sample in the i-th class of the Lie group training sample set, c denotes the total number of classes, and n_i denotes the number of training samples in the i-th class. The resulting inner mean X̄_i is a matrix, i.e. X̄_i = (t_kl), where t_kl denotes the entry in the k-th row and l-th column, with k = 1, …, m and l = 1, …, n; X̄_i is an m × n matrix.
Step5, randomly take a sample T_test from the test data file, project it to the Lie group manifold space, and calculate its Lie group inner mean X̄_test.
Step6, calculate the norm distance d(X̄_test, X̄_i) between X̄_test and the inner mean X̄_i of every known class of Lie group samples, for i = 1, …, c.
Step7, sort the samples in ascending order of distance and take the class samples corresponding to the first K smallest distances, where K = 5 in this invention.
Step8, assign to the test sample the class that occurs most frequently among those class samples.
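Steps 7-8 above can be sketched as a standard K-nearest majority vote. Function and variable names are illustrative, not the patent's:

```python
from collections import Counter

def knn_classify(distances, labels, k=5):
    """Steps 7-8: sort sample distances in ascending order, take the
    K smallest, and return the majority label among them.
    Sketch of the voting rule; K = 5 as in the embodiment."""
    order = sorted(range(len(distances)), key=distances.__getitem__)
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# three of the five nearest samples are class 'a', so 'a' wins
print(knn_classify([0.1, 0.2, 0.3, 0.9, 0.8, 0.05],
                   ['a', 'a', 'b', 'b', 'b', 'a'], k=5))  # a
```

Note that in this method the distances are computed to per-class inner means rather than to raw training samples, so in the degenerate case where each class contributes one mean, the vote reduces to picking the nearest classes directly.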
TABLE 2 comparison table of classification accuracy of traditional KNN and the method of the present invention
(The table content is rendered as an image in the original publication.)
Table 2 compares the method of the present invention with the traditional KNN algorithm; the advantage of the present method is evident. Referring to figs. 2-3, fig. 2 shows the confusion matrix on the SIRI-WHU data set and fig. 3 shows the confusion matrix on the UC Merced data set for this embodiment. A confusion matrix is a table that analyzes all errors and confusions between different classes; it is created by counting the correct and incorrect classifications of the test samples for each class and accumulating the results into a table. Here the SIRI-WHU and UC Merced data sets have the same number of images in every class, so the overall accuracy equals the average accuracy. The abscissa is the actual category and the ordinate is the predicted category; the larger the values on the main diagonal (the deeper the color), the higher the accuracy. As is apparent from figs. 2 and 3, each category is classified with high accuracy and a low misclassification rate, and the average accuracy reaches 97%.
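The confusion-matrix construction described above (accumulating correct and incorrect test classifications into a table, then reading accuracy off the main diagonal) can be sketched as; helper names are illustrative:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Accumulate (actual, predicted) class pairs into a table,
    as described for figs. 2-3. Rows index the actual class,
    columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    # mass on the main diagonal over all predictions
    return np.trace(cm) / cm.sum()

cm = confusion_matrix([0, 0, 1, 1], [0, 1, 1, 1], n_classes=2)
print(overall_accuracy(cm))  # 0.75
```

When every class has the same number of test images, as in the SIRI-WHU and UC Merced splits used here, this overall accuracy coincides with the per-class average accuracy.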
The invention also provides a KNN scene classification system based on the matrix group, which comprises the following modules:
a remote sensing image data set reading and processing module, for acquiring the remote sensing image data set to be processed and dividing it into a training set and a test set;
a data conversion module, for converting the training set and the test set into a training data file and a test data file respectively;
a data set projection module, for projecting the samples in the training data file to the Lie group manifold space to obtain a Lie group sample set;
a sample Lie group inner mean module, for calculating the inner mean X̄_i of each class of Lie group samples;
a test sample inner mean calculation module, for randomly taking a sample T_test from the test data file, projecting T_test to the Lie group manifold space, and calculating its Lie group inner mean X̄_test;
a distance calculation module, for calculating the distance d(X̄_test, X̄_i) between X̄_test and the inner means of all known classes of Lie group samples;
a sorting and searching module, for sorting in ascending order of distance and taking the class samples corresponding to the first K smallest distances;
a judging module, for assigning to the test sample the class that occurs most frequently among those class samples.
Further, the data set projection module specifically comprises:
performing the Lie group mapping on each sample: x_ij = exp(M_ij), where M_ij denotes the j-th sample of the i-th class in the training data file and x_ij denotes the j-th sample of the i-th class in the Lie group training sample set.
Further, the sample Lie group inner mean module specifically comprises:
calculating the inner mean X̄_i of each class of Lie group samples, where x_ij denotes the j-th sample in the i-th class of the Lie group training sample set, n_i denotes the number of training samples in the i-th class, and c denotes the total number of classes. The resulting inner mean X̄_i is a matrix, i.e. X̄_i = (t_kl), where t_kl denotes the entry in the k-th row and l-th column, with k = 1, …, m and l = 1, …, n; X̄_i is an m × n matrix.
Further, the distance calculation module specifically comprises:
calculating the norm distance d(X̄_test, X̄_i) between X̄_test and the inner mean X̄_i of every known class of Lie group samples, for i = 1, …, c.
The specific implementation of each module corresponds to the respective method step and is not repeated here.
The above description covers only some of the embodiments of the present invention and is not intended to limit it; it will be apparent to those skilled in the art that various modifications can be made to the present invention. Any changes, equivalent substitutions, or improvements made within the spirit and principle of the present invention shall be included within its scope.

Claims (8)

1. A KNN scene classification method based on a matrix group, characterized by comprising the following steps:
Step1, acquiring the remote sensing image data set to be processed, and dividing it into a training set and a test set;
Step2, converting the training set and the test set into a training data file and a test data file respectively;
Step3, projecting the image set in the training data file to the Lie group manifold space to obtain a Lie group sample set;
Step4, calculating the inner mean X̄_i of the Lie group samples of each class;
Step5, randomly taking a sample T_test from the test data file, projecting T_test to the Lie group manifold space, and calculating its Lie group inner mean X̄_test;
Step6, calculating the distance d(X̄_test, X̄_i) between X̄_test and the inner means X̄_i of all known classes of Lie group samples;
Step7, sorting in ascending order of distance and taking the class samples corresponding to the first K smallest distances;
Step8, assigning to the test sample the class that occurs most frequently among those class samples.
2. The KNN scene classification method based on a matrix group according to claim 1, characterized in that Step3 specifically comprises:
performing the Lie group mapping on each sample: x_ij = exp(M_ij), where M_ij denotes the j-th sample of the i-th class in the training data file and x_ij denotes the j-th sample of the i-th class in the Lie group training sample set.
3. The KNN scene classification method based on a matrix group according to claim 2, characterized in that Step4 specifically comprises:
calculating the inner mean X̄_i of each class of Lie group samples, where x_ij denotes the j-th sample in the i-th class of the Lie group training sample set, n_i denotes the number of training samples in the i-th class, and c denotes the total number of classes; the resulting inner mean X̄_i is a matrix, i.e. X̄_i = (t_kl), where t_kl denotes the entry in the k-th row and l-th column, with k = 1, …, m and l = 1, …, n, and X̄_i is an m × n matrix.
4. The KNN scene classification method based on a matrix group according to claim 3, characterized in that Step6 specifically comprises:
calculating the norm distance d(X̄_test, X̄_i) between X̄_test and the inner mean X̄_i of every known class of Lie group samples, for i = 1, …, c.
5. A KNN scene classification system based on a matrix group, characterized by comprising the following modules:
a remote sensing image data set reading and processing module, for acquiring the remote sensing image data set to be processed and dividing it into a training set and a test set;
a data conversion module, for converting the training set and the test set into a training data file and a test data file respectively;
a data set projection module, for projecting the samples in the training data file to the Lie group manifold space to obtain a Lie group sample set;
a sample Lie group inner mean module, for calculating the inner mean X̄_i of each class of Lie group samples;
a test sample inner mean calculation module, for randomly taking a sample T_test from the test data file, projecting T_test to the Lie group manifold space, and calculating its Lie group inner mean X̄_test;
a distance calculation module, for calculating the distance d(X̄_test, X̄_i) between X̄_test and the inner means of all known classes of Lie group samples;
a sorting and searching module, for sorting in ascending order of distance and taking the class samples corresponding to the first K smallest distances;
a judging module, for assigning to the test sample the class that occurs most frequently among those class samples.
6. The KNN scene classification system based on a matrix group according to claim 5, characterized in that the data set projection module specifically comprises:
performing the Lie group mapping on each sample: x_ij = exp(M_ij), where M_ij denotes the j-th sample of the i-th class in the training data file and x_ij denotes the j-th sample of the i-th class in the Lie group training sample set.
7. The KNN scene classification system based on matrix groups of claim 6, characterized in that the sample Lie group inner mean module specifically comprises: calculating the inner mean of each class of Lie group samples as

X̄_i = (1/n_i) Σ_{j=1}^{n_i} x_ij, i = 1, 2, …, c,

wherein x_ij represents the j-th sample in the i-th class of the Lie group training sample set, n_i represents the number of training samples in the i-th class, and c represents the total number of classes; the resulting inner mean X̄_i is a matrix, i.e.

X̄_i = (t_kl), k = 1, …, m, l = 1, …, n,

wherein t_kl represents the entry in the k-th row and l-th column, so that X̄_i is an m × n matrix.
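A minimal sketch of the per-class inner mean, assuming (as the t_kl entries in the claim suggest) that it is the entrywise average of the n_i Lie group sample matrices of class i; the function name is illustrative:

```python
import numpy as np

def class_inner_mean(samples):
    """Entrywise mean of a stack of n_i sample matrices
    (shape: n_i x m x n), returning the m x n matrix of entries t_kl."""
    samples = np.asarray(samples, dtype=float)
    return samples.mean(axis=0)

# Two 2x2 Lie group samples of one class; their inner mean is 2x2.
x_bar = class_inner_mean([[[1.0, 2.0], [3.0, 4.0]],
                          [[3.0, 4.0], [5.0, 6.0]]])
print(x_bar)
```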
8. The KNN scene classification system based on matrix groups of claim 7, characterized in that the distance calculation module specifically comprises: calculating the distance d(T̄, X̄_i) between the test-sample inner mean T̄ and the inner mean X̄_i of each known class of Lie group samples, i = 1, 2, …, c.
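The remaining modules (distance computation, ascending sort, first-K selection, majority vote) can be sketched as follows; the Frobenius norm is an assumed stand-in for the manifold distance, since the metric formula appears only as an image in the source, and `knn_classify` is a hypothetical name:

```python
import numpy as np

def knn_classify(t_bar, class_means, labels, k=1):
    """Distances from the test inner mean to each class inner mean
    (Frobenius norm assumed), sorted ascending; majority vote over
    the first K class labels."""
    dists = [np.linalg.norm(t_bar - m) for m in class_means]
    nearest = np.argsort(dists)[:k]       # indices of the K smallest distances
    votes = [labels[i] for i in nearest]
    return max(votes, key=votes.count)    # most frequent class label

# Two known class means; the test mean lies closer to the all-ones class.
means = [np.zeros((2, 2)), np.ones((2, 2))]
label = knn_classify(np.full((2, 2), 0.9), means, ["forest", "urban"], k=1)
print(label)
```

With one inner mean per class, as in claims 5–8, the vote over the K nearest means reduces to nearest-class-mean assignment when K = 1.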
CN202010002529.2A 2020-01-02 2020-01-02 KNN scene classification method and system based on matrix group Pending CN111191618A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010002529.2A CN111191618A (en) 2020-01-02 2020-01-02 KNN scene classification method and system based on matrix group

Publications (1)

Publication Number Publication Date
CN111191618A true CN111191618A (en) 2020-05-22

Family

ID=70709791

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010002529.2A Pending CN111191618A (en) 2020-01-02 2020-01-02 KNN scene classification method and system based on matrix group

Country Status (1)

Country Link
CN (1) CN111191618A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722713A (en) * 2012-02-22 2012-10-10 苏州大学 Handwritten numeral recognition method based on lie group structure data and system thereof
CN108389171A (en) * 2018-03-08 2018-08-10 深圳市唯特视科技有限公司 A kind of light field deblurring and depth estimation method based on Combined estimator fuzzy variable

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Yang Mengduo et al.: "Ten Years of Research Progress in Lie Group Machine Learning", Chinese Journal of Computers (《计算机学报》) *
Wang Bangjun et al.: "Lie-KNN Classification Algorithm Based on Improved Covariance Features", Pattern Recognition and Artificial Intelligence (《模式识别与人工智能》) *

Similar Documents

Publication Publication Date Title
CN108108657B (en) Method for correcting locality sensitive Hash vehicle retrieval based on multitask deep learning
CN109639739B (en) Abnormal flow detection method based on automatic encoder network
CN109949317B (en) Semi-supervised image example segmentation method based on gradual confrontation learning
CN113657450B (en) Attention mechanism-based land battlefield image-text cross-modal retrieval method and system
CN111027576B (en) Cooperative significance detection method based on cooperative significance generation type countermeasure network
CN112347970B (en) Remote sensing image ground object identification method based on graph convolution neural network
CN114972213A (en) Two-stage mainboard image defect detection and positioning method based on machine vision
CN109871454B (en) Robust discrete supervision cross-media hash retrieval method
CN113139512B (en) Depth network hyperspectral image classification method based on residual error and attention
CN112580480B (en) Hyperspectral remote sensing image classification method and device
CN113255793A (en) Fine-grained ship identification method based on contrast learning
CN114255403A (en) Optical remote sensing image data processing method and system based on deep learning
CN113705641A (en) Hyperspectral image classification method based on rich context network
CN111639697B (en) Hyperspectral image classification method based on non-repeated sampling and prototype network
CN115311502A (en) Remote sensing image small sample scene classification method based on multi-scale double-flow architecture
CN116342894A (en) GIS infrared feature recognition system and method based on improved YOLOv5
CN114399674A (en) Hyperspectral image technology-based shellfish toxin nondestructive rapid detection method and system
CN117237733A (en) Breast cancer full-slice image classification method combining self-supervision and weak supervision learning
Reyes et al. Enhanced rotational invariant convolutional neural network for supernovae detection
CN117036781A (en) Image classification method based on tree comprehensive diversity depth forests
CN116704241A (en) Full-channel 3D convolutional neural network hyperspectral remote sensing image classification method
CN116630700A (en) Remote sensing image classification method based on introduction channel-space attention mechanism
CN111191618A (en) KNN scene classification method and system based on matrix group
CN113313185B (en) Hyperspectral image classification method based on self-adaptive spatial spectrum feature extraction
CN114818945A (en) Small sample image classification method and device integrating category adaptive metric learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200522)