CN108846338B - Polarization feature selection and classification method based on object-oriented random forest - Google Patents
- Publication number
- CN108846338B (application CN201810561139.1A)
- Authority
- CN
- China
- Prior art keywords
- decomposition
- classification
- random forest
- polarization
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/211—Selection of the most significant subset of features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a polarization feature selection and classification method based on an object-oriented random forest, which addresses feature selection and image classification when many polarization features participate in classification. The method performs multi-scale segmentation on the feature set using an object-oriented approach, builds a random forest model on the segmented sample objects, calculates the importance of each feature, and optimizes the feature set with a sequential forward selection algorithm. The object-oriented random forest improves both model training efficiency and classification accuracy. An optimal feature subset is constructed by the sequential forward selection algorithm with highest accuracy as the iteration termination condition, avoiding entrapment in a local optimum. The algorithm improves classification accuracy and provides a quantitative reference for reasonably optimizing the feature set.
Description
Technical Field
The invention belongs to the field of image processing, mainly relates to polarimetric SAR image feature extraction and classification, and particularly relates to a polarization feature selection and classification method based on an object-oriented random forest.
Background
In recent years, polarimetric SAR images have been increasingly used for the extraction of surface information, and target decomposition is an important means of analyzing such images and extracting information from them. In a relatively complex geomorphic environment, it is difficult to distinguish all surface types effectively with a single polarization decomposition method or a single characteristic parameter. Integrating multiple polarization decomposition algorithms and combining multiple polarization characteristic parameters is therefore an effective way to address this problem. In practice, however, interdependence among too many characteristic parameters easily causes a series of problems: excessive time for feature analysis and model training, the curse of dimensionality, model complexity, increased computation, and reduced classification accuracy. How to select, from many characteristic parameters, the features that classify ground-object targets well is therefore a key problem for reducing computation and information redundancy while preserving classification accuracy.
Random Forest (RF) is a high-performance classifier developed in the field of machine learning in recent years, formed by an ensemble of decision trees. The random forest algorithm provides a method for calculating variable importance, but how to quantitatively select optimal parameters according to that importance requires further consideration. On this basis, the present method selects samples in units of segmentation objects, extracts multiple polarization characteristic parameters from the samples, and optimizes the feature set with a Sequential Forward Selection (SFS) algorithm, adding features in order of their importance values and terminating the iteration at the highest accuracy.
Disclosure of Invention
The invention aims to provide a feature extraction method based on multiple polarization decomposition algorithms. It combines object-oriented segmentation with random forest modeling for polarization feature selection, providing a quantitative reference for reasonably optimizing the feature set; an optimal feature subset is constructed with a sequential forward selection algorithm, using highest accuracy as the iteration termination condition, so as to avoid local optima and improve classification accuracy.
In order to solve the technical problems, the invention adopts the following technical scheme. The invention provides a polarization feature selection and classification method based on an object-oriented random forest, comprising the following steps:
step 1, preprocessing the fully polarimetric SAR image, applying multi-look processing and a filtering algorithm to remove speckle noise and improve the visual quality of the image;
step 2, decomposing the preprocessed polarimetric image based on 20 target decomposition algorithms;
step 3, performing multi-scale segmentation on the feature set with an object-oriented method, selecting a number of training samples in units of objects, and building a random forest model on the training samples;
step 4, calculating the feature importance from the samples and sorting the importance values;
step 5, adding the feature with the highest importance value to the target subset using a sequential forward selection algorithm;
step 6, classifying with the features of the target subset based on the random forest method, and calculating the overall accuracy;
step 7, selecting the optimal polarization characteristic parameters according to the overall classification accuracy calculated at each iteration, forming the optimal feature subset.
Further, in step 2, in order to fully mine the scattering information of the polarimetric SAR image and obtain more polarization feature parameters, an initial polarization feature set is constructed by applying 20 target decomposition algorithms: Pauli, Krogager, Huynen, Barnes1, Barnes2, Holm1, Holm2, VanZyl3, Cloude, H/A/Alpha, Freeman2, Freeman3, Yamaguchi3, Yamaguchi4, Neumann, Touzi, An_Yang3, An_Yang4, Arii3_NNED, and Arii3_ANNED decomposition. These decomposition algorithms yield 93 polarization characteristic parameters; adding the 3 scattering matrix elements S11, S12, and S22 gives a set of 96 polarization features.
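As a hedged illustration of how such a feature set might be assembled, the sketch below stacks per-pixel feature maps from several decompositions into one feature cube. The array names and the tiny image size are hypothetical stand-ins, not the patent's data; with all 20 decompositions the same concatenation would give the 93 + 3 = 96 features.

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 4, 4  # tiny image, for illustration only

# Hypothetical stand-ins for decomposition outputs (each decomposition
# yields a few per-pixel parameters, e.g. three Pauli components):
pauli   = rng.random((h, w, 3))   # Pauli decomposition parameters
cloude  = rng.random((h, w, 3))   # Cloude decomposition parameters
s_elems = rng.random((h, w, 3))   # |S11|, |S12|, |S22| matrix elements

# Concatenate along the feature axis to form one per-pixel feature cube.
feature_cube = np.concatenate([pauli, cloude, s_elems], axis=2)
```

Each pixel then carries one feature vector (here of length 9), which is the unit the later segmentation and classification steps operate on.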
Further, in step 3, the optimal segmentation scale is selected through repeated experiments: segmentation results at different scales are obtained and compared by visual interpretation. Random forest modeling is then performed on the selected samples. First, k bootstrap sample sets are drawn randomly, with replacement, from the original training data set using the bootstrap method, and k decision trees are built from these k sample sets; in each draw, the samples not extracted constitute the out-of-bag (OOB) data, giving k OOB sets. Then, given N features, n features (n ≤ N) are drawn at random at each node of each tree, and the feature with the strongest classification ability, determined by calculating the information content of each feature, is selected for splitting; splitting continues until a leaf node can no longer be split or all samples at the node belong to the same class. No tree is pruned, so each tree grows to its maximum extent. Finally, all decision trees form the random forest; after the random forest is constructed, a new sample is input to the classifier, each decision tree votes on the class of the sample, and the classification result is determined by the majority of the votes.
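The bootstrap-and-vote procedure described above can be sketched as follows. This is a minimal illustration with a toy vote matrix standing in for trained trees, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_split(n, rng):
    """Draw one bootstrap sample with replacement; the samples never
    drawn form the out-of-bag (OOB) set for that tree."""
    boot = rng.integers(0, n, size=n)
    oob = np.setdiff1d(np.arange(n), boot)
    return boot, oob

def majority_vote(tree_preds):
    """Each of the k trees votes a class per sample; the final class is
    the one receiving the most votes."""
    preds = np.asarray(tree_preds)              # shape (k trees, n samples)
    return np.array([np.bincount(col).argmax() for col in preds.T])

boot, oob = bootstrap_split(10, rng)
votes = [[0, 1, 1],   # tree 1's class votes for 3 samples
         [0, 1, 0],   # tree 2
         [1, 1, 0]]   # tree 3
final = majority_vote(votes)   # -> [0, 1, 0]
```

On average about a third of the samples end up out-of-bag in each draw, which is what makes the OOB error estimate of the next step possible without a separate validation set.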
Further, in step 4, each characteristic parameter is numbered from 1 to 96, and the importance of each feature is calculated by a method based on the out-of-bag (OOB) error. Let b = 1, 2, ..., B index the bootstrap samples, and denote the OOB-error-based importance of variable Xj by VIj. The calculation proceeds as follows. First, for b = 1, find the out-of-bag data Lb^OOB, classify Lb^OOB with the tree Tb, and record the number of correctly classified samples Rb. For each variable Xj, j = 1, 2, ..., N, perturb (randomly permute) the values of Xj in Lb^OOB, record the perturbed data set as Lb,j^OOB, classify Lb,j^OOB with the tree Tb, and record the number of correctly classified samples Rb,j. The above process is repeated for b = 2, 3, ..., B. The variable importance of Xj based on the OOB error is then:

VIj = (1/B) Σ (b = 1 to B) (Rb − Rb,j)
the importance of each characteristic parameter is calculated through the formula, and the characteristic parameters are sorted from large to small according to the value of the characteristic parameters.
By adopting the technical scheme, compared with the prior art, the invention has the following technical effects:
the invention adopts an object-oriented random forest method to improve the model training efficiency and the classification precision. And (3) constructing an optimal feature subset by adopting a sequence forward selection algorithm and combining the iteration termination condition of highest precision, so as to avoid trapping in a local optimal solution. The algorithm can improve the classification precision and provide quantitative reference for reasonably optimizing feature sets.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a polarization feature parameter importance ranking;
FIG. 3 is a relationship between the number of polarization characteristic parameters participating in classification and the overall classification accuracy;
FIG. 4 shows the final classification results obtained with the classification method of the present invention and with other methods.
Detailed Description
The technical scheme of the invention is explained in further detail below with reference to the accompanying drawings. The following examples only illustrate the technical solution more clearly and do not limit the protection scope of the invention.
The method for selecting and classifying polarization features based on the object-oriented random forest as shown in fig. 1 comprises the following steps:
To analyze the effectiveness of the method of the invention qualitatively and quantitatively, the experimental data used were an ALOS PALSAR fully polarimetric image of Japan, with an incidence angle of 23.858°, a range resolution of 9.37 m, and an azimuth resolution of 3.57 m. The image is first preprocessed by filtering and multi-look processing: a refined Lee filter with a 3 × 3 window is used for filtering, and the multi-look ratio of azimuth to range is 6:1.
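Multi-look processing at a 6:1 azimuth-to-range ratio amounts to averaging blocks of 6 × 1 pixels. A minimal intensity-domain sketch follows; the tiny array size is illustrative, not the experiment's image dimensions.

```python
import numpy as np

def multilook(img, az=6, rg=1):
    """Average az x rg pixel blocks (multi-look processing): speckle is
    reduced at the cost of spatial resolution."""
    h, w = img.shape
    h2, w2 = h - h % az, w - w % rg          # crop to whole blocks
    blocks = img[:h2, :w2].reshape(h2 // az, az, w2 // rg, rg)
    return blocks.mean(axis=(1, 3))

img = np.arange(24.0).reshape(6, 4)          # toy 6 x 4 intensity image
out = multilook(img)                          # one 6:1 look -> shape (1, 4)
```

The averaging also roughly equalizes the pixel spacing when, as here, the azimuth resolution is much finer than the range resolution.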
It should be noted that the method of the invention is applicable not only to the ALOS PALSAR fully polarimetric data used in the experiments, but also to other airborne or spaceborne fully polarimetric data; only parameters such as the filtering window size and the multi-look ratio differ and must be chosen according to the data source.
In step 3, multi-scale segmentation is performed on the feature set with an object-oriented method, a number of training samples are selected in units of objects, and a random forest model is built on the training samples. The optimal segmentation scale is selected through repeated experiments: segmentation results at different scales are obtained and compared by visual interpretation. For the ALOS PALSAR data used in this experiment, repeated comparison showed that a segmentation scale of 15 gave the best result, so the subsequent experiments are based on the segmentation at this scale factor. Random forest modeling is then performed on the selected samples: k bootstrap sample sets are drawn randomly, with replacement, from the original training data set using the bootstrap method, and k decision trees are built from them; in each draw, the samples not extracted constitute the out-of-bag (OOB) data. Given N features, n features (n ≤ N) are drawn at random at each node of each tree, and the feature with the strongest classification ability, determined by calculating the information content of each feature, is selected for splitting; splitting continues until a leaf node can no longer be split or all samples at the node belong to the same class, and no tree is pruned, so each tree grows to its maximum extent. Finally, all decision trees form the random forest; a new sample input to the constructed classifier receives one vote per decision tree, and its class is determined by the majority of the votes.
In step 4, the importance of each feature parameter is calculated by the formula given above, and the parameters are sorted from largest to smallest importance value (FIG. 2).
In step 5, a sequential forward selection algorithm is used: at each iteration, the feature with the largest importance value among those not yet selected is added to the target feature subset, classification is performed with the features in the target subset, and the overall classification accuracy is calculated. One feature is added per iteration.
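The greedy loop of steps 5 through 7 can be sketched as follows: features are added in descending importance order and the subset with the highest overall accuracy is kept. The evaluator here is a hypothetical accuracy curve mimicking the shape of FIG. 3, not a real classifier.

```python
def sfs_by_importance(ranked_features, evaluate):
    """Sequential forward selection: add the next most important feature,
    score the subset, and remember the subset with the best accuracy."""
    subset, best_subset, best_acc = [], [], -1.0
    for f in ranked_features:
        subset.append(f)
        acc = evaluate(subset)
        if acc > best_acc:
            best_acc, best_subset = acc, list(subset)
    return best_subset, best_acc

# Hypothetical accuracy-vs-subset-size curve: accuracy peaks at 3 features,
# then degrades as redundant features are added (cf. FIG. 3).
acc_curve = {1: 0.80, 2: 0.86, 3: 0.91, 4: 0.90, 5: 0.88}
best, acc = sfs_by_importance([17, 4, 42, 8, 23],
                              lambda s: acc_curve[len(s)])
# best == [17, 4, 42], acc == 0.91
```

Because the best-scoring subset over the whole sweep is returned rather than stopping at the first dip, a temporary drop in accuracy does not trap the selection in a local optimum.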
In step 6, after each iteration, the features of the target subset are classified based on the random forest method, and the relationship between the classification accuracy calculated at each iteration and the number of polarization feature parameters is analyzed (FIG. 3). To facilitate quantitative comparison, the invention uses Overall Accuracy (OA), Producer's Accuracy (PA), and User's Accuracy (UA) (Tables 1 and 2) to evaluate the classification results of the present method (FIG. 4(a)) against those of the QUEST decision tree method (FIG. 4(b)).
TABLE 1 Classification accuracy of the method of the invention
TABLE 2 Classification accuracy of QUEST decision Tree method
As can be seen from Table 1, Table 2, and FIG. 4, the method of the invention performs better than the decision tree algorithm: compared with ground truth data, the classification result of the proposed method reduces the misclassification of ground objects (the elliptical and rectangular areas in the figure), effectively distinguishes areas with similar scattering mechanisms, and is closer to the real surface; the overall classification accuracy is improved by more than 11%, and the Kappa coefficient is improved by 0.14.
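OA, PA, UA, and the Kappa coefficient reported in Tables 1 and 2 can all be computed from a confusion matrix. A minimal sketch with a made-up 2-class matrix follows; the numbers are illustrative, not the patent's results.

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, per-class producer's/user's accuracy, and Kappa
    from a confusion matrix (rows = reference, columns = classified)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    oa = np.trace(cm) / total                       # overall accuracy
    pa = np.diag(cm) / cm.sum(axis=1)               # producer's accuracy
    ua = np.diag(cm) / cm.sum(axis=0)               # user's accuracy
    pe = (cm.sum(axis=1) @ cm.sum(axis=0)) / total**2   # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, pa, ua, kappa

cm = [[50, 5],    # reference class 1: 50 correct, 5 confused
      [10, 35]]   # reference class 2: 35 correct, 10 confused
oa, pa, ua, kappa = accuracy_metrics(cm)   # oa == 0.85
```

Kappa discounts the agreement expected by chance, which is why an 11% gain in OA can correspond to a 0.14 gain in Kappa.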
The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and changes can be made without departing from the principle of the present invention, and these modifications and changes should be also considered as the protection scope of the present invention.
Claims (3)
1. A polarization feature selection and classification method based on an object-oriented random forest, characterized by comprising the following steps:
step 1, preprocessing a fully polarimetric SAR image, applying multi-look processing and a filtering algorithm to remove speckle noise in the image and improve its visual quality;
step 2, decomposing the preprocessed polarized image based on 20 target decomposition algorithms;
step 3, performing multi-scale segmentation on the feature set by adopting an object-oriented method, selecting a certain number of training samples by taking each object as a unit, and performing random forest modeling on the training samples;
step 4, calculating the importance of the features according to the samples, and sequencing the importance values;
step 5, adding the features with the highest importance values into the target subset by adopting a sequence forward selection algorithm;
step 6, classifying the characteristics of the target subset based on a random forest method, and calculating the overall precision;
step 7, selecting the optimal polarization characteristic parameters according to the classification overall precision calculated each time to form an optimal characteristic subset;
in step 4, each characteristic parameter is numbered from 1 to 96, and the feature importance is calculated by a method based on the out-of-bag (OOB) error: let b = 1, 2, ..., B index the bootstrap samples, and denote the OOB-error-based importance of variable Xj by VIj; first, for b = 1, the out-of-bag data Lb^OOB is found, Lb^OOB is classified by the tree Tb, and the number of correctly classified samples Rb is recorded; for each variable Xj, j = 1, 2, ..., N, the values of Xj in Lb^OOB are perturbed, the perturbed data set is recorded as Lb,j^OOB, Lb,j^OOB is classified by the tree Tb, and the number of correctly classified samples Rb,j is recorded; the above process is repeated for b = 2, 3, ..., B, and the variable importance of Xj based on the OOB error is calculated as:

VIj = (1/B) Σ (b = 1 to B) (Rb − Rb,j);
the importance of each characteristic parameter is calculated through the formula, and the characteristic parameters are sorted from large to small according to the value of the characteristic parameters.
2. The object-oriented random forest-based polarization feature selection and classification method as claimed in claim 1, wherein:
in step 2, in order to fully mine the scattering information of the polarimetric SAR image and obtain more polarization feature parameters, an initial polarization feature set is constructed by applying 20 target decomposition algorithms, namely Pauli, Krogager, Huynen, Barnes1, Barnes2, Holm1, Holm2, VanZyl3, Cloude, H/A/Alpha, Freeman2, Freeman3, Yamaguchi3, Yamaguchi4, Neumann, Touzi, An_Yang3, An_Yang4, Arii3_NNED, and Arii3_ANNED decomposition; 93 polarization feature parameters are obtained by these decomposition algorithms, and the 3 scattering matrix elements S11, S12, and S22 are added to obtain a polarization feature set containing 96 features.
3. The object-oriented random forest-based polarization feature selection and classification method as claimed in claim 1, wherein:
in step 3, the optimal segmentation scale is selected through repeated experiments: segmentation results at different scales are obtained and compared by visual interpretation; random forest modeling is then performed on the selected samples: k bootstrap sample sets are randomly drawn, with replacement, from the original training data set using the bootstrap method, and k decision trees are built from the k bootstrap sample sets; in this process, the samples not extracted in each draw constitute k sets of out-of-bag (OOB) data; then, given N features, n features are randomly drawn at each node of each tree, where n ≤ N, and the feature with the strongest classification ability, determined by calculating the information content of each feature, is selected for splitting; splitting continues until a leaf node of a decision tree can no longer be split or all samples at the node belong to the same class, and no tree is pruned, so each tree grows to its maximum extent; finally, all decision trees form the random forest; after the random forest is constructed, a new sample is input to the classifier, each decision tree votes on the class of the sample, and the classification result is determined by the majority of the votes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810561139.1A CN108846338B (en) | 2018-05-29 | 2018-05-29 | Polarization feature selection and classification method based on object-oriented random forest |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810561139.1A CN108846338B (en) | 2018-05-29 | 2018-05-29 | Polarization feature selection and classification method based on object-oriented random forest |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108846338A CN108846338A (en) | 2018-11-20 |
CN108846338B true CN108846338B (en) | 2022-04-15 |
Family
ID=64210205
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810561139.1A Active CN108846338B (en) | 2018-05-29 | 2018-05-29 | Polarization feature selection and classification method based on object-oriented random forest |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108846338B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109472321B (en) * | 2018-12-03 | 2023-01-31 | 北京工业大学 | Time series type earth surface water quality big data oriented prediction and evaluation model construction method |
CN109726826B (en) * | 2018-12-19 | 2021-08-13 | 东软集团股份有限公司 | Training method and device for random forest, storage medium and electronic equipment |
CN110096967A (en) * | 2019-04-10 | 2019-08-06 | 同济大学 | A kind of road anger driver's hazardous act characteristic variable screening technique based on random forests algorithm |
EP3787229A1 (en) | 2019-09-02 | 2021-03-03 | Siemens Aktiengesellschaft | Method and device for automatically selecting analysis strings for feature extraction |
CN112446522A (en) * | 2019-09-02 | 2021-03-05 | 中国林业科学研究院资源信息研究所 | Grass yield estimation method and device facing multi-scale segmentation and storage medium |
CN110717495B (en) * | 2019-09-30 | 2024-01-26 | 北京工业大学 | Solid waste incineration working condition identification method based on multi-scale color moment characteristics and random forest |
CN111107092A (en) * | 2019-12-23 | 2020-05-05 | 深圳供电局有限公司 | Attack recognition method based on random forest algorithm and energy storage coordination control device |
CN113095426B (en) * | 2021-04-22 | 2023-03-31 | 西安交通大学 | Encrypted traffic classification method, system, equipment and readable storage medium |
CN114187533B (en) * | 2022-02-15 | 2022-05-03 | 西南交通大学 | GB-InSAR (GB-InSAR) atmospheric correction method based on random forest time sequence classification |
CN116206203B (en) * | 2023-03-08 | 2023-08-18 | 中国石油大学(华东) | Oil spill detection method based on SAR and Dual-EndNet |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778836A (en) * | 2016-11-29 | 2017-05-31 | 天津大学 | A kind of random forest proposed algorithm based on constraints |
CN107563425A (en) * | 2017-08-24 | 2018-01-09 | 长安大学 | A kind of method for building up of the tunnel operation state sensor model based on random forest |
CN107766883A (en) * | 2017-10-13 | 2018-03-06 | 华中师范大学 | A kind of optimization random forest classification method and system based on weighted decision tree |
- 2018-05-29: CN application CN201810561139.1A filed; patent CN108846338B granted (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106778836A (en) * | 2016-11-29 | 2017-05-31 | 天津大学 | A kind of random forest proposed algorithm based on constraints |
CN107563425A (en) * | 2017-08-24 | 2018-01-09 | 长安大学 | A kind of method for building up of the tunnel operation state sensor model based on random forest |
CN107766883A (en) * | 2017-10-13 | 2018-03-06 | 华中师范大学 | A kind of optimization random forest classification method and system based on weighted decision tree |
Non-Patent Citations (1)
Title |
---|
A. Marcano-Cedeño et al., "Feature selection using Sequential Forward Selection and classification applying Artificial Metaplasticity Neural Network," IECON 2010 – 36th Annual Conference on IEEE Industrial Electronics Society, 2010, pp. 2845-2850. *
Also Published As
Publication number | Publication date |
---|---|
CN108846338A (en) | 2018-11-20 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||