CN109583519A - Semi-supervised classification method based on p-Laplacian graph convolutional neural networks - Google Patents
Semi-supervised classification method based on p-Laplacian graph convolutional neural networks
- Publication number
- CN109583519A CN109583519A CN201811608273.9A CN201811608273A CN109583519A CN 109583519 A CN109583519 A CN 109583519A CN 201811608273 A CN201811608273 A CN 201811608273A CN 109583519 A CN109583519 A CN 109583519A
- Authority
- CN
- China
- Prior art keywords
- sample
- laplacian
- layer
- network
- matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Evolutionary Biology (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses a semi-supervised classification method based on p-Laplacian graph convolutional neural networks, belonging to the technical field of semi-supervised classification. The method comprises: 1: extracting the features of the training samples; 2: computing their p-Laplacian matrix; 3: computing the sample structure information matrix; 4: building the p-Laplacian graph convolutional neural network model; 5: performing a convolution operation on the training-sample features to obtain the output of the first network layer; 6: using the output of each network layer as the input of the next layer; 7: feeding the output of the last convolutional layer into a softmax classifier to obtain the model parameters; 8: computing the cross-entropy loss on the validation samples and selecting the model parameters; 9: extracting features from the test samples; 10: running the trained model and sending the output of the last convolutional layer to the softmax classifier for classification. By applying multiple convolution operations, the present application can substantially improve the classification performance of the model.
Description
Technical field
The present invention relates to semi-supervised classification methods, and in particular to a semi-supervised classification method based on p-Laplacian graph convolutional neural networks.
Background art
With the rapid development of the big-data era, massive data are generated every day in science, engineering, and social life. How to mine valuable information from a large number of unlabelled samples together with a small number of labelled samples has therefore become one of the important research directions in machine learning and pattern recognition. In recent years, combining deep learning with algorithms based on the manifold assumption in semi-supervised classification has made it possible to extract more representative data features and thereby improve the classification performance of models.
The most representative current method is the semi-supervised classification algorithm based on graph convolutional neural networks (GCN). As an effective variant of convolutional neural networks (CNN), GCN successfully generalizes CNN to efficiently handle data with arbitrary structure. GCN represents the manifold structure of the data with the graph Laplacian matrix and, through a first-order approximation of spectral graph convolution, proposes a new stacked linear model formulation. Moreover, GCN can learn sample feature information and structure information simultaneously and fuse the two to extract more comprehensive data features. However, because the geodesic function in the Laplacian kernel is a constant function, it cannot be smoothly extrapolated to unseen data, so the Laplacian matrix cannot well preserve the local topology information between samples.
Summary of the invention
To remedy these deficiencies of the prior art, the present invention provides a semi-supervised classification method based on p-Laplacian graph convolutional neural networks. Compared with the Laplacian matrix, the p-Laplacian is a nonlinear generalization of the Laplacian; it can reflect finer manifold structure and is better at discovering the structure information hidden in the data, thereby achieving better classification performance.
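For reference, a standard definition of the graph p-Laplacian from the literature (the patent itself does not spell the operator out) is:

```latex
(\Delta_p f)(v_i) = \sum_{j} w_{ij}\,\lvert f(v_i) - f(v_j)\rvert^{p-2}\,\bigl(f(v_i) - f(v_j)\bigr)
```

For p = 2 this reduces to the ordinary graph Laplacian L = D - W, while other values of p change how strongly large and small neighbour differences are penalized, which is what allows finer manifold structure to be captured.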
To solve the above technical problems, the present invention provides the following technical solution:
The present invention provides a semi-supervised classification method based on p-Laplacian graph convolutional neural networks, comprising:
Step 1: extract the features of the training samples, representing each sample with a feature vector;
Step 2: compute the p-Laplacian matrix of the feature-extracted samples;
Step 3: compute the sample structure information matrix based on the p-Laplacian matrix;
Step 4: build the graph convolutional neural network model based on p-Laplacian (pLapGCN) on the basis of the structure information matrix;
Step 5: perform a convolution operation on the training-sample features with the pLapGCN model to obtain the output of the first network layer;
Step 6: use the output of each network layer as the input of the next layer; repeating step 5 yields a multi-layer pLapGCN network;
Step 7: use the output of the last convolutional layer as the input of a softmax classifier to obtain the predicted label of each training sample and the model parameters used in training;
Step 8: then compute the cross-entropy loss on the validation samples and select the best model parameters;
Step 9: extract sample features from the test samples in the same way as for the training samples, and perform convolution operations on the test samples with the learned optimal convolutional network; after the output of the last convolutional layer, each test sample likewise obtains a feature vector;
Step 10: send the feature vectors output by the last convolutional layer to the softmax classifier for classification.
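The steps above can be sketched as a two-layer forward pass. This is a minimal illustration assuming the standard GCN-style propagation rule H(l+1) = sigma(A_hat H(l) W), since the patent's own formulas are rendered only as images; `A_hat`, `W1`, and `W2` are illustrative names for the structure information matrix and the two weight matrices:

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x), the linear rectifier activation used in step 5
    return np.maximum(0.0, x)

def softmax(z):
    # row-wise softmax; subtracting the row max keeps exp() numerically stable
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def two_layer_plapgcn(A_hat, X, W1, W2):
    """Two-layer forward pass: H1 = ReLU(A_hat X W1), H2 = A_hat H1 W2,
    followed by a softmax over the class logits (steps 5 to 7)."""
    H1 = relu(A_hat @ X @ W1)    # first convolutional layer
    H2 = A_hat @ H1 @ W2         # second convolutional layer
    return softmax(H2)           # per-sample class probabilities
```

Each row of the returned matrix is the probability distribution of one sample over the classes, from which step 7 takes the arg-max as the predicted label.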
Further, in step 3, the structure information matrix of the samples is computed from the p-Laplacian matrix Lp, where λmax is the maximum eigenvalue of Lp.
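As an illustrative sketch of step 3: the exact formula for the structure information matrix appears only as an image in the source, so the normalisation A_hat = I - Lp/λmax below is an assumption (it maps the eigenvalues of the positive semi-definite Lp into [0, 1]), and the |f_i - f_j|^(p-2) reweighting is one standard way to linearise the graph p-Laplacian around a reference signal:

```python
import numpy as np

def p_laplacian_matrix(W, f, p=2.0, eps=1e-8):
    """Linearised graph p-Laplacian around a reference signal f.
    For p = 2 this reduces to the ordinary graph Laplacian L = D - W."""
    diff = np.abs(f[:, None] - f[None, :]) + eps  # pairwise |f_i - f_j|
    Wp = W * diff ** (p - 2.0)                    # p-reweighted edge weights
    return np.diag(Wp.sum(axis=1)) - Wp           # D_p - W_p

def structure_matrix(Lp):
    """Assumed normalisation A_hat = I - Lp / lambda_max."""
    lam_max = np.linalg.eigvalsh(Lp).max()        # largest eigenvalue of Lp
    return np.eye(Lp.shape[0]) - Lp / lam_max
```

Here `W` is the sample affinity matrix and `f` a reference signal on the samples; both names are hypothetical, as the patent does not disclose its exact construction.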
Further, step 5 specifically comprises: first, initializing the first-layer network weight W1 with the Xavier method; then performing the first-layer convolution operation on the initial sample feature-vector matrix with the pLapGCN model to obtain the feature-vector matrix H(1) extracted by the first layer; wherein RELU is the rectified linear activation function, a common activation function in artificial neural networks, f(x) = max(0, x), and H(0) = X denotes the initial sample feature-vector matrix.
Further, step 6 specifically comprises: first, using the output H(1) of the first convolutional layer as the sample feature-vector matrix input to the second convolutional layer; secondly, initializing the second-layer network weight W2 with the Xavier method; then performing the second-layer convolution operation on the second-layer sample feature-vector matrix with the pLapGCN model to obtain the feature-vector matrix H(2) extracted by the second layer.
Further, step 7 specifically comprises: first, using the output of the last convolutional layer as the input of the softmax classifier; computing, via the softmax function, the probability distribution of each sample over the classes; then assigning each sample the label of the class with the largest probability.
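The classification rule of step 7, softmax probabilities followed by picking the most probable class, can be sketched as follows (function names are illustrative):

```python
import numpy as np

def softmax(z):
    # softmax(z)_k = exp(z_k) / sum_j exp(z_j), computed row-wise;
    # subtracting the row max avoids overflow without changing the result
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def predict_labels(H_last):
    """Assign each sample the label of its highest-probability class."""
    return softmax(H_last).argmax(axis=1)
```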
Further, step 8 specifically comprises: first, evaluating the various model parameters obtained on the training samples on the validation samples; then selecting the optimal model parameters according to the cross-entropy loss function C = -Σk yk log ZK, where yk denotes the true label and ZK denotes the probability distribution matrix output by the softmax function.
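A minimal sketch of the cross-entropy model selection in step 8 (the one-hot label matrix `Y` and the candidate-loss list are illustrative names):

```python
import numpy as np

def cross_entropy(Y, Z, eps=1e-12):
    """C = -sum_k y_k log Z_k, averaged over samples; Y is one-hot,
    Z is the softmax output. eps guards against log(0)."""
    return float(-np.mean(np.sum(Y * np.log(Z + eps), axis=1)))

def select_best(validation_losses):
    """Index of the candidate parameter set with the smallest validation loss."""
    return int(np.argmin(validation_losses))
```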
Further, the first-layer network weight W1 and the second-layer network weight W2 both follow the uniform distribution of the Xavier method.
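The Xavier initialisation referenced above can be sketched as follows; the Glorot uniform bound sqrt(6/(n_in + n_out)) is an assumption, since the patent's exact bound is rendered only as an image:

```python
import numpy as np

def xavier_uniform(n_in, n_out, rng=None):
    """W ~ U[-limit, +limit] with limit = sqrt(6 / (n_in + n_out)),
    i.e. Glorot and Bengio's uniform initialisation."""
    rng = np.random.default_rng() if rng is None else rng
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))
```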
In the present application, the data storage device holds training data, validation data, and test data. First, the features of the training samples are extracted and each sample is represented by a feature vector. Then the p-Laplacian matrix of the feature-extracted samples is computed; the p-Laplacian matrix contains finer local geometric information of the data than the Laplacian matrix. The sample structure information matrix is computed based on the p-Laplacian matrix, so the resulting structure information matrix contains the local geometric information the samples require. The graph convolutional neural network model based on p-Laplacian (pLapGCN) is built on the basis of the structure information matrix; this model captures local geometric information more finely, and hence more representatively, than the GCN model. The pLapGCN model performs a convolution operation on the training-sample features to obtain the output of the first network layer; the output of each layer serves as the input of the next, and repeating this yields a multi-layer pLapGCN network. The output of the last convolutional layer is used as the input of the softmax classifier to obtain the predicted label of each training sample and the model parameters used in training; the cross-entropy loss on the validation samples is then computed, and selecting the best model parameters reduces the cross-entropy loss and improves classification performance. Sample features are extracted from the test samples in the same way as for the training samples, and convolution operations are performed on the test samples with the learned optimal convolutional network; after the output of the last convolutional layer, each test sample likewise obtains a feature vector, which is sent to the softmax classifier for classification.
Compared with the prior art, the present invention has the following advantages:
On the basis of the graph convolutional neural network model based on p-Laplacian (pLapGCN), the present invention applies multiple convolution operations and uses the p-Laplacian-based structure information matrix in each convolutional layer, which can substantially improve the classification performance of the model.
Brief description of the drawings
Fig. 1 is a structural diagram of the semi-supervised classification method based on p-Laplacian graph convolutional neural networks of the present invention;
Fig. 2 is a flowchart of the semi-supervised classification method based on p-Laplacian graph convolutional neural networks of the present invention;
Fig. 3 is a flowchart of the first convolutional layer of the present invention;
Fig. 4 is a flowchart of the second convolutional layer of the present invention;
Fig. 5 is a flowchart of the classifier layer and model-parameter optimization of the present invention;
Fig. 6 is the algorithm flowchart of the present invention when two convolutional layers are used;
Fig. 7 shows the experimental results of Embodiment 1 of the present invention and Comparative Examples 1-4 on the Citeseer database;
Fig. 8 shows the experimental results of Embodiment 1 of the present invention and Comparative Examples 1-4 on the Cora database;
Fig. 9 shows the experimental results of Embodiment 1 of the present invention and Comparative Examples 1-4 on the Pubmed database.
Specific embodiment
To make the technical problems to be solved, the technical solutions, and the advantages of the present invention clearer, they are described in detail below with reference to the accompanying drawings and specific embodiments.
Embodiment 1
A semi-supervised classification method based on p-Laplacian graph convolutional neural networks is shown in Figs. 1 to 6. First, data information is input to the data storage device; the data here comprise training, validation, and test data. The class labels of one part of the training data are known, while those of the other part are unknown; the class labels of the validation and test data are known. Furthermore, the test data may be input by the user or collected from a database. Next, the features of each data item are extracted, converting each item into a feature vector. The designed method is then used to train on the training data, yielding different sets of network model parameters; the learned parameter sets are evaluated on the validation data, and the optimal model parameters are selected by reducing the cross-entropy loss. After the test-sample features are processed by the optimal model obtained above, the most representative feature vectors can be extracted. Finally, the feature vectors are classified and recognized with the softmax classifier.
The method designed by the present invention is shown in Fig. 2. Step 10 is the start. In step 11, the p-Laplacian matrix is computed from the feature-vector data, and the sample structure information matrix is then computed on its basis. Step 12 initializes the weight parameter W1. Step 13 performs the convolution operation on the sample feature vectors with the pLapGCN model to obtain the output of the first network layer; steps 11, 12, and 13 are detailed in Fig. 3. This completes the first convolutional layer; the second layer is processed similarly. In step 14 the output of the first layer is used as the input of the second layer, forming the sample feature-vector matrix of the second layer. Step 15 initializes the weight parameter W2. Step 16 performs the convolution operation on the second-layer sample feature-vector matrix with the pLapGCN model; steps 14, 15, and 16 are detailed in Fig. 4. This completes the second convolutional layer. Step 17 is the classification process: the sample feature-vector matrix output by the last convolutional layer is used as the input of the softmax classifier, which then performs the classification. Step 18 is the process by which the pLapGCN model selects the optimal model parameters by reducing the cross-entropy loss; steps 17 and 18 are detailed in Fig. 5. Step 19 is the end state.
Fig. 3 details steps 11, 12, and 13 of Fig. 2, namely how to compute the sample structure information matrix, initialize the weight parameter W1, and compute the first-layer convolution. Step 1100 is the start state. Step 1101 computes the p-Laplacian matrix Lp of the samples and then computes the sample structure information matrix from it, where λmax denotes the maximum eigenvalue of the p-Laplacian matrix Lp. Step 1201 initializes the first-layer network weight W1 with the Xavier method, following its uniform distribution. Step 1301 performs the first-layer convolution operation on the initial sample feature-vector matrix with the pLapGCN model to obtain the feature-vector matrix H(1) extracted by the first layer. Here RELU is the rectified linear activation function, a common activation function in artificial neural networks, f(x) = max(0, x), and H(0) = X denotes the initial sample feature-vector matrix. Step 1302 is the end state.
Fig. 4 details steps 14, 15, and 16 of Fig. 2, namely how to initialize the weight parameter W2 and compute the second-layer convolution. Step 1400 is the start state. Step 1401 uses the output H(1) of the first convolutional layer as the sample feature-vector matrix input to the second convolutional layer. Step 1501 likewise initializes the second-layer network weight W2 with the Xavier method, following its uniform distribution. Step 1601 performs the second-layer convolution operation on the second-layer sample feature-vector matrix with the pLapGCN model to obtain the feature-vector matrix H(2) extracted by the second layer. Step 1602 is the end state.
Fig. 5 details steps 17 and 18 of Fig. 2, namely how to classify with the softmax classifier and select the optimal model parameters. Step 1700 is the start state. Step 1701 uses the output H(2) of the second convolutional layer as the input of the softmax classifier. Step 1702 computes, via the softmax function, the probability distribution of each sample over the classes. Step 1703 assigns each sample the label of the class with the largest probability. Step 1801 is the process of selecting the best parameters of the model on the validation data; the test data do not require this process. The cross-entropy loss function C = -Σk yk log ZK measures the error between the true labels and the predicted labels; the smaller the error, the higher the recognition rate of the model. Here yk denotes the true label and ZK denotes the probability distribution matrix output by the softmax function. The different model parameters learned on the training data are evaluated on the validation data, and the optimal model parameters are selected by reducing the cross-entropy loss. The weight values corresponding to the optimal model parameters then replace the original W1 and W2 for subsequent testing. Step 1802 is the end state.
To verify the validity of the p-Laplacian graph convolutional neural network (pLapGCN) algorithm, the following comparative examples are constructed against the above embodiment.
Comparative example 1
Classification is performed with the existing GCN model.
Comparative example 2
This comparative example uses pLapGCN-1 with p = 2.
Comparative example 3
This comparative example uses pLapGCN with p = 2.
Comparative example 4
This comparative example uses pLapGCN-1 with other, better p values.
pLapGCN-1 (p = 2) and pLapGCN (p = 2) construct the manifold structure with the p-Laplacian at p = 2. pLapGCN-1 (other better p values) and pLapGCN (other better p values) construct the manifold structure with the p-Laplacian at other, better p values. pLapGCN-1 (p = 2) and pLapGCN-1 (other better p values) correspond to the method in the case where the structure information matrix is not optimal.
To verify the validity of the p-Laplacian graph convolutional neural network (pLapGCN) algorithm, we tested on the Citeseer, Cora, and Pubmed databases. Owing to hardware limitations, we used only 5000 samples of the Pubmed database. We used publicly available pre-extracted features for the experiments. In each experiment, we chose 1000 labelled samples as the test set and randomly chose 500 labelled samples as the validation set; the remaining samples of the data set served as the training set. In addition, within the training sets of the Citeseer and Cora databases, 20%, 30%, 40%, 50%, and 60% of the samples were randomly chosen as labelled samples, the rest being unlabelled; within the training set of the Pubmed database, 10%, 15%, 20%, 25%, and 30% of the samples were randomly chosen as labelled, the rest being unlabelled. Under identical experimental settings, we measured the recognition rates of GCN, pLapGCN-1 (p = 2), pLapGCN (p = 2), pLapGCN-1 (other better p values), and pLapGCN (other better p values). Figs. 7, 8, and 9 show the experimental results: the abscissa denotes the label rate of the selected training samples and the ordinate the recognition rate. The experiment was repeated five times for each number of training labels, with the training labels randomly re-sampled each time. As can be seen from the error-bar histograms in Figs. 7, 8, and 9, in terms of average recognition rate (the middle line in each error bar), pLapGCN (other better p values) outperforms the preceding four algorithms and achieves the best classification performance.
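The experimental protocol above (1000 test samples, 500 validation samples, the remainder training, with a given label rate inside the training set) can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def split_indices(n, n_test=1000, n_val=500, label_rate=0.2, seed=0):
    """Shuffle n sample indices into test / validation / labelled-train /
    unlabelled-train, mirroring the protocol described above."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    test = idx[:n_test]
    val = idx[n_test:n_test + n_val]
    train = idx[n_test + n_val:]
    n_lab = int(round(label_rate * train.size))   # labelled fraction of training set
    return test, val, train[:n_lab], train[n_lab:]
```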
In summary, the semi-supervised classification method based on p-Laplacian graph convolutional neural networks in the present application applies multiple convolution operations and uses the p-Laplacian-based structure information matrix in each convolutional layer, which can substantially improve the classification performance of the model.
The above are preferred embodiments of the present invention. It should be noted that, for those skilled in the art, several improvements and modifications can also be made without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (7)
1. A semi-supervised classification method based on p-Laplacian graph convolutional neural networks, characterized by comprising:
Step 1: extract the features of the training samples, representing each sample with a feature vector;
Step 2: compute the p-Laplacian matrix of the feature-extracted samples;
Step 3: compute the sample structure information matrix based on the p-Laplacian matrix;
Step 4: build the graph convolutional neural network model based on p-Laplacian (pLapGCN) on the basis of the structure information matrix;
Step 5: perform a convolution operation on the training-sample features with the pLapGCN model to obtain the output of the first network layer;
Step 6: use the output of each network layer as the input of the next layer; repeating step 5 yields a multi-layer pLapGCN network;
Step 7: use the output of the last convolutional layer as the input of a softmax classifier to obtain the predicted label of each training sample and the model parameters used in training;
Step 8: then compute the cross-entropy loss on the validation samples and select the best model parameters;
Step 9: extract sample features from the test samples in the same way as for the training samples, and perform convolution operations on the test samples with the learned optimal convolutional network; after the output of the last convolutional layer, each test sample likewise obtains a feature vector;
Step 10: send the feature vectors output by the last convolutional layer to the softmax classifier for classification.
2. The semi-supervised classification method based on p-Laplacian graph convolutional neural networks according to claim 1, characterized in that, in step 3, the structure information matrix of the samples is computed from the p-Laplacian matrix Lp, where λmax is the maximum eigenvalue of Lp.
3. The semi-supervised classification method based on p-Laplacian graph convolutional neural networks according to claim 1, characterized in that step 5 specifically comprises: first, initializing the first-layer network weight W1 with the Xavier method; then performing the first-layer convolution operation on the initial sample feature-vector matrix with the pLapGCN model to obtain the feature-vector matrix H(1) extracted by the first layer; wherein RELU is the rectified linear activation function, a common activation function in artificial neural networks, f(x) = max(0, x), and H(0) = X denotes the initial sample feature-vector matrix.
4. The semi-supervised classification method based on p-Laplacian graph convolutional neural networks according to claim 1, characterized in that step 6 specifically comprises: first, using the output H(1) of the first convolutional layer as the sample feature-vector matrix input to the second convolutional layer; secondly, initializing the second-layer network weight W2 with the Xavier method; then performing the second-layer convolution operation on the second-layer sample feature-vector matrix with the pLapGCN model to obtain the feature-vector matrix H(2) extracted by the second layer.
5. The semi-supervised classification method based on p-Laplacian graph convolutional neural networks according to claim 1, characterized in that step 7 specifically comprises: first, using the output of the last convolutional layer as the input of the softmax classifier; computing, via the softmax function, the probability distribution of each sample over the classes; then assigning each sample the label of the class with the largest probability.
6. The semi-supervised classification method based on p-Laplacian graph convolutional neural networks according to claim 1, characterized in that step 8 specifically comprises: first, evaluating the various model parameters obtained on the training samples on the validation samples; then selecting the optimal model parameters according to the cross-entropy loss function C = -Σk yk log ZK, where yk denotes the true label and ZK denotes the probability distribution matrix output by the softmax function.
7. The semi-supervised classification method based on p-Laplacian graph convolutional neural networks according to claim 3 or 4, characterized in that the first-layer network weight W1 and the second-layer network weight W2 both follow the uniform distribution of the Xavier method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811608273.9A CN109583519A (en) | 2018-12-27 | 2018-12-27 | Semi-supervised classification method based on p-Laplacian graph convolutional neural networks
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811608273.9A CN109583519A (en) | 2018-12-27 | 2018-12-27 | Semi-supervised classification method based on p-Laplacian graph convolutional neural networks
Publications (1)
Publication Number | Publication Date |
---|---|
CN109583519A true CN109583519A (en) | 2019-04-05 |
Family
ID=65933020
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811608273.9A Withdrawn CN109583519A (en) | 2018-12-27 | 2018-12-27 | Semi-supervised classification method based on p-Laplacian graph convolutional neural networks
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109583519A (en) |
- 2018: 2018-12-27 CN CN201811608273.9A patent/CN109583519A/en not_active Withdrawn
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110210330A * | 2019-05-13 | 2019-09-06 | 清华大学 | Electromagnetic signal recognition method and device based on tacit-knowledge structure graph convolutional network |
CN111612046A (en) * | 2020-04-29 | 2020-09-01 | 杭州电子科技大学 | Characteristic pyramid graph convolutional neural network and application thereof in 3D point cloud classification |
CN111612046B (en) * | 2020-04-29 | 2023-10-20 | 杭州电子科技大学 | Feature pyramid graph convolution neural network and application thereof in 3D point cloud classification |
CN111985520A (en) * | 2020-05-15 | 2020-11-24 | 南京智谷人工智能研究院有限公司 | Multi-mode classification method based on graph convolution neural network |
CN111965476A (en) * | 2020-06-24 | 2020-11-20 | 国网江苏省电力有限公司淮安供电分公司 | Low-voltage diagnosis method based on graph convolution neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112308158B (en) | Multi-source field self-adaptive model and method based on partial feature alignment | |
CN109583519A (en) | Semi-supervised classification method based on p-Laplacian graph convolutional neural networks | |
CN109299741B (en) | Network attack type identification method based on multi-layer detection | |
CN110472817A (en) | A kind of XGBoost of combination deep neural network integrates credit evaluation system and its method | |
CN109766935A (en) | Semi-supervised classification method based on hypergraph p-Laplacian graph convolutional neural networks | |
WO2019179403A1 (en) | Fraud transaction detection method based on sequence width depth learning | |
WO2018052587A1 (en) | Method and system for cell image segmentation using multi-stage convolutional neural networks | |
CN111914728B (en) | Hyperspectral remote sensing image semi-supervised classification method and device and storage medium | |
CN110413924A (en) | A kind of Web page classification method of semi-supervised multiple view study | |
CN110287983A (en) | Based on maximal correlation entropy deep neural network single classifier method for detecting abnormality | |
CN110363253A (en) | A kind of Surfaces of Hot Rolled Strip defect classification method based on convolutional neural networks | |
CN106326288A (en) | Image search method and apparatus | |
CN110827260B (en) | Cloth defect classification method based on LBP characteristics and convolutional neural network | |
CN109582782A (en) | A kind of Text Clustering Method based on Weakly supervised deep learning | |
CN105320967A (en) | Multi-label AdaBoost integration method based on label correlation | |
CN109948742A (en) | Handwritten form picture classification method based on quantum nerve network | |
CN112613536A (en) | Near infrared spectrum diesel grade identification method based on SMOTE and deep learning | |
CN114553475A (en) | Network attack detection method based on network flow attribute directed topology | |
CN111222545B (en) | Image classification method based on linear programming incremental learning | |
CN112163450A | High-frequency ground wave radar ship target detection method based on the S3D learning algorithm | |
CN107392155A (en) | The Manuscripted Characters Identification Method of sparse limited Boltzmann machine based on multiple-objection optimization | |
CN114330516A (en) | Small sample logo image classification based on multi-graph guided neural network model | |
CN104598898B (en) | A kind of Aerial Images system for rapidly identifying and its method for quickly identifying based on multitask topology learning | |
CN115050022A (en) | Crop pest and disease identification method based on multi-level self-adaptive attention |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20190405 |