CN115828100A - Mobile phone radiation source spectrogram class-incremental learning method based on a deep neural network


Info

Publication number
CN115828100A
Authority
CN
China
Prior art keywords
category
learning
class
radiation source
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211569854.2A
Other languages
Chinese (zh)
Inventor
邓建华
吴春江
周锦霆
朱帮瑞
孙晋鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Chengdian Fuzhi Technology Co ltd
Original Assignee
Shanghai Chengdian Fuzhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Chengdian Fuzhi Technology Co ltd
Priority to CN202211569854.2A
Publication of CN115828100A
Legal status: Pending


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a deep neural network-based class-incremental learning method for mobile phone radiation source spectrograms. The method comprises: obtaining a data set of mobile phone radiation source spectrograms and determining tasks t_1~t_N, with the N tasks corresponding to training data sets D_1~D_N and class sets C_1~C_N; first learning task t_1 to obtain a network model; then learning tasks t_2~t_N in sequence and updating the network model, reconstructing the classification layer at each incremental learning stage by adding classification-layer parameters to accommodate the growing number of classes, and training the network model with a total loss function composed of a cross-entropy loss, a distance-based metric loss, and a distillation loss. The method can complete class-incremental learning of the model on mobile phone radiation source spectrograms without old-class training data, and can initialize the new-class classification parameters by fully utilizing the knowledge the model has already learned, thereby promoting the learning of new classes.

Description

Mobile phone radiation source spectrogram class-incremental learning method based on a deep neural network
Technical Field
The invention relates to image class-incremental recognition methods, and in particular to a class-incremental learning method for mobile phone radiation source spectrograms based on a deep neural network.
Background
A mobile phone radiation source spectrogram is a spectrogram obtained by processing the signals of mobile phones of different models. Existing class-incremental recognition models usually retain part of the data of the learned classes and combine it with the data of the new classes for learning, avoiding forgetting old-class knowledge through knowledge distillation. Existing image class-incremental recognition models also generally adopt a random initialization strategy for the newly added parameters of the classification layer. These methods have the following drawbacks:
1. When part of the old samples are retained, keeping too many imposes a storage burden, while keeping too few causes an imbalance between new and old training data; neither meets the requirements of incremental learning.
2. Randomly initializing the newly added classification parameters fails to exploit the knowledge the model has already learned to promote the learning of new classes.
Disclosure of Invention
The invention aims to solve these problems: the model can complete class-incremental learning on mobile phone radiation source spectrograms without old training data, and the new classification parameters can be initialized by fully utilizing the knowledge the model has already learned, thereby promoting the learning of new classes.
In order to achieve this purpose, the technical scheme adopted by the invention is as follows: a class-incremental learning method for mobile phone radiation source spectrograms based on a deep neural network, comprising the following steps:
(1) Acquire a data set of mobile phone radiation source spectrograms, and determine N tasks t_1~t_N according to the number of classes in the data set, where t_1 is used to learn B1 classes, each of t_2~t_N is used to learn B2 classes, and the classes of different tasks do not overlap;
Obtain the training data sets of the N tasks from the mobile phone radiation source spectrogram data set, forming a data stream D = {D_1, D_2, …, D_N}; the training data set of t_1 is D_1 with class set C_1, and the training data set of t_n is D_n with class set C_n, n = 2~N; the n-th class-incremental learning stage can obtain only the data of task t_n; each training data set comprises a plurality of samples;
(2) In the first class-incremental learning stage, learn task t_1: taking the samples in D_1 as input and the class of each sample as the expected output, train a ResNet-18 network to obtain a network model; the ResNet-18 network comprises a feature extractor and a classification layer, and after learning t_1 the feature extractor obtained is f_1(·) and the classification layer is C_1(·);
(3) In each subsequent incremental learning stage, learn tasks t_2~t_N in sequence, updating the network model once after each task is learned; after learning t_n, the feature extractor obtained is f_n(·) and the classification layer is C_n(·), capable of identifying B1 + (n-1)×B2 classes;
Updating the network model corresponding to t_n specifically comprises:
(31) Suppose task t_n contains B2 classes, each class corresponding to S samples in D_n;
(32) For one of these classes, use the feature extractor f_{n-1}(·) obtained from t_{n-1} to extract the features of its S samples, where the feature of the k-th sample x_k is f_{n-1}(x_k), k = 1~S, and calculate the center value of the class as the mean of these features (see the sketch following step (37) below):

c = (1/S) Σ_{k=1}^{S} f_{n-1}(x_k)
(33) Following step (32), obtain the center values of all the classes in t_n, B2 center values in total;
(34) Add classification parameters for the B2 classes of t_n to the classification layer obtained from t_{n-1}, and replace the B2 newly added classification parameters with the B2 center values obtained in step (33);
(35) Calculate the distillation loss L_D according to the following formula:

L_D = (1/U) Σ_{i=1}^{U} ‖ f_n(x_i) - f_{n-1}(x_i) ‖

where U is the total number of samples in D_n, U = B2 × S;
(36) Calculate the total loss function loss_n of task t_n:

loss_n = L_CE + α·L_m + β·L_D

where L_CE is the cross-entropy loss function, L_m is the distance-based metric loss function, α is the weight of L_m, and β is the weight of L_D;
(37) Train the network model obtained from t_{n-1} based on the total loss function to obtain the network model of t_n.
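For illustration, the following PyTorch sketch gives one possible reading of steps (32) and (33): the center value of each new class is the mean of its sample features under the frozen f_{n-1}. All names are illustrative assumptions; the patent does not prescribe an implementation, and the replacement of the newly added classification parameters in step (34) is sketched with Example 3 below.

```python
import torch

@torch.no_grad()
def compute_class_centers(f_prev, loader, num_new, first_new, feat_dim):
    """Steps (32)-(33): the center value of each new class is the mean of
    its sample features under the previous feature extractor f_{n-1}."""
    sums = torch.zeros(num_new, feat_dim)
    counts = torch.zeros(num_new, 1)
    for x, y in loader:                  # loader yields samples and labels of D_n
        feats = f_prev(x)                # f_{n-1}(x_k), shape (batch, feat_dim)
        for c in range(num_new):
            mask = y == (first_new + c)  # samples of the c-th new class
            sums[c] += feats[mask].sum(dim=0)
            counts[c] += mask.sum()
    return sums / counts                 # B2 center values; step (34) writes these
                                         # over the newly added classifier rows
```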
Preferably, in said step (36):
the cross entropy loss function L is calculated according to the following equation CE
Figure BDA0003987558410000033
In the formula, i is D n I =1 to U for the ith sample; m is C 1 ~C n C is 1 category in M; y is a sample x i When y = category c, the value is 1, otherwise 0,
Figure BDA0003987558410000034
for the current network model pair x i A predicted probability of belonging to class c;
the distance-based metric loss function L_m is calculated according to the following formula:

L_m = max(0, d(f_n(x_i), f_n(x_p)) - d(f_n(x_i), f_n(x_q)) + D)

where x_p is a sample in D_n belonging to the same class as x_i (there are S-1 such samples), x_q is a sample in D_n not belonging to the same class as x_i (there are U-S such samples), d(·,·) is the Euclidean distance between two features, and D is a preset distance threshold.
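To make the three loss terms concrete, here is a minimal PyTorch sketch of loss_n = L_CE + α·L_m + β·L_D computed over one batch. It is a sketch under stated assumptions, not the patent's implementation: the hardest in-batch positive and negative stand in for x_p and x_q, each batch is assumed to mix classes, and the distillation term follows the feature-difference reading given later in the description.

```python
import torch
import torch.nn.functional as F

def total_loss(logits, feats_new, feats_old, labels,
               alpha=0.001, beta=0.05, margin=1.0):
    """loss_n = L_CE + alpha * L_m + beta * L_D for one batch."""
    # L_CE: cross-entropy over all classes in M = C_1 ... C_n
    l_ce = F.cross_entropy(logits, labels)

    # L_m: max(0, d(anchor, positive) - d(anchor, negative) + D), with the
    # farthest same-class and nearest other-class batch samples as x_p, x_q
    dist = torch.cdist(feats_new, feats_new)        # pairwise Euclidean d(.,.)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    d_pos = (dist * same.float()).max(dim=1).values
    d_neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    l_m = F.relu(d_pos - d_neg + margin).mean()

    # L_D: distillation loss, feature difference before/after the update
    l_d = torch.norm(feats_new - feats_old, dim=1).mean()

    return l_ce + alpha * l_m + beta * l_d
```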
Preferably, α = 0.001 and β = 0.05.
Preferably, when learning task t_1, the loss function used to train the ResNet-18 network is loss_1 = L_CE + α·L_m.
Preferably, step (37) back-propagates the total loss value, reduces it using a gradient descent algorithm, and updates the model parameters.
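A minimal sketch of this update loop follows, assuming a model that returns both logits and features, the total_loss sketch above, and illustrative optimizer settings (the patent does not specify them):

```python
import torch

def train_task(model, f_prev, loader, epochs=30, lr=0.01):
    """Step (37): back-propagate loss_n and reduce it by gradient descent."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for x, y in loader:
            logits, feats_new = model(x)   # current model: logits and features
            with torch.no_grad():
                feats_old = f_prev(x)      # frozen f_{n-1} features
            loss = total_loss(logits, feats_new, feats_old, y)
            opt.zero_grad()
            loss.backward()                # back-propagate the total loss value
            opt.step()                     # gradient-descent parameter update
    return model
```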
In the invention, when learning task t_1 the loss function is loss_1 = L_CE + α·L_m, and when learning t_2~t_N each training uses the loss function loss_n = L_CE + α·L_m + β·L_D. Because the training data sets differ between learning stages, the values of L_CE, L_m, and L_D differ as well.
When learning task t_1, classification parameters for the B1 classes are added to the classification layer of the ResNet-18 network; when learning task t_2, B2 classification parameters need to be added to the classification layer of the network model obtained from t_1.
Compared with the prior art, the invention has the following advantages. The knowledge the model has already learned is fully utilized to promote its learning of unknown new classes, improving its recognition accuracy on those classes. Class-incremental learning is realized without old data and without large storage resources. A distillation loss function is introduced, forming the total loss function of the model together with the cross-entropy loss function and the distance-based metric loss function; the distillation loss mitigates the model's forgetting of information learned on old tasks by reducing the difference between the features of new-class samples before and after the model update.
In conclusion, the average accuracy of class-incremental recognition of mobile phone radiation source spectrograms can exceed 80%.
Drawings
FIG. 1 is a schematic diagram of class-incremental recognition in the prior art;
FIG. 2 is a flow chart of the present invention;
FIG. 3 is a schematic diagram of updating the network model of t_2 based on t_1 according to the present invention.
Detailed Description
The invention will be further explained with reference to the drawings.
Example 1: referring to fig. 1, a schematic diagram of a class increment identification method in the prior art is shown. At task t 1 In, using training data set D 1 Training the model by adopting cross entropy loss to obtain a model 1, and then obtaining a model D 1 Extracting and selecting partial samples to obtain a data set R 1 For continuing training in the next task to preserve the model's memory of old class data. At task t 2 In the data set R reserved for the last task 1 Data set D associated with the current task 2 Merging to obtain a task t 2 Of the final training data set
Figure BDA0003987558410000051
For training and updating model 1. Model 1 at update time, for dataset D 2 Calculating cross entropy loss for the samples in (1), for the data set R 1 And (3) calculating distillation loss of the medium sample independently, and finally combining the two losses to be used as the total loss of the model for training to obtain a model 2. Then from
Figure BDA0003987558410000052
The selected partial sample is reserved to obtain a data set R 2 For the next task training. And by analogy, the network model obtained by the last task is updated in the new task, and the network model obtained by the Nth task correspondingly isAnd (4) model N.
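For contrast with the invention, this prior-art flow can be summarized in a short sketch; R1, D1, D2, and exemplar_indices are hypothetical names, and torch.utils.data.ConcatDataset performs the D_2 ∪ R_1 merge:

```python
from torch.utils.data import ConcatDataset, Subset

# Prior art: retain a few old-class exemplars and replay them in the next task
R1 = Subset(D1, exemplar_indices)        # exemplars selected from D_1
train_set_t2 = ConcatDataset([D2, R1])   # D_2 ∪ R_1, final training set of t_2
# model 2 is then trained with cross-entropy on D_2 plus distillation on R_1
```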
Example 2: referring to fig. 2 and fig. 3, a method for incrementally learning the class of a spectrogram of a radiation source of a mobile phone based on a deep neural network comprises the following steps;
(1) Acquire a data set of mobile phone radiation source spectrograms, and determine N tasks t_1~t_N according to the number of classes in the data set, where t_1 is used to learn B1 classes, each of t_2~t_N is used to learn B2 classes, and the classes of different tasks do not overlap;
Obtain the training data sets of the N tasks from the mobile phone radiation source spectrogram data set, forming a data stream D = {D_1, D_2, …, D_N}; the training data set of t_1 is D_1 with class set C_1, and the training data set of t_n is D_n with class set C_n, n = 2~N; the n-th class-incremental learning stage can obtain only the data of task t_n; each training data set comprises a plurality of samples;
(2) In the first class-incremental learning stage, learn task t_1: taking the samples in D_1 as input and the class of each sample as the expected output, train a ResNet-18 network to obtain a network model; the ResNet-18 network comprises a feature extractor and a classification layer, and after learning t_1 the feature extractor obtained is f_1(·) and the classification layer is C_1(·);
(3) In each subsequent incremental learning stage, learn tasks t_2~t_N in sequence, updating the network model once after each task is learned; after learning t_n, the feature extractor obtained is f_n(·) and the classification layer is C_n(·), capable of identifying B1 + (n-1)×B2 classes;
Updating the network model corresponding to t_n specifically comprises:
(31) Suppose task t_n contains B2 classes, each class corresponding to S samples in D_n;
(32) For one of these classes, use the feature extractor f_{n-1}(·) obtained from t_{n-1} to extract the features of its S samples, where the feature of the k-th sample x_k is f_{n-1}(x_k), k = 1~S, and calculate the center value of the class as the mean of these features:

c = (1/S) Σ_{k=1}^{S} f_{n-1}(x_k)
(33) Following step (32), obtain the center values of all the classes in t_n, B2 center values in total;
(34) Add classification parameters for the B2 classes of t_n to the classification layer obtained from t_{n-1}, and replace the B2 newly added classification parameters with the B2 center values obtained in step (33);
(35) Calculate the distillation loss L_D according to the following formula:

L_D = (1/U) Σ_{i=1}^{U} ‖ f_n(x_i) - f_{n-1}(x_i) ‖

where U is the total number of samples in D_n, U = B2 × S;
(36) Calculate the total loss function loss_n of task t_n:

loss_n = L_CE + α·L_m + β·L_D

where L_CE is the cross-entropy loss function, L_m is the distance-based metric loss function, α is the weight of L_m, and β is the weight of L_D;
(37) Train the network model obtained from t_{n-1} based on the total loss function to obtain the network model of t_n.
In said step (36):
the cross-entropy loss function L_CE is calculated according to the following formula:

L_CE = -(1/U) Σ_{i=1}^{U} Σ_{c∈M} y · log p̂_c(x_i)

where i indexes the samples in D_n, i = 1~U; M is the set of all classes in C_1~C_n; c is one class in M; y is the label of sample x_i, equal to 1 when y = class c and 0 otherwise; and p̂_c(x_i) is the probability predicted by the current network model that x_i belongs to class c;
the distance-based metric loss function L_m is calculated according to the following formula:

L_m = max(0, d(f_n(x_i), f_n(x_p)) - d(f_n(x_i), f_n(x_q)) + D)

where x_p is a sample in D_n belonging to the same class as x_i (there are S-1 such samples), x_q is a sample in D_n not belonging to the same class as x_i (there are U-S such samples), d(·,·) is the Euclidean distance between two features, and D is a preset distance threshold.
α = 0.001 and β = 0.05.
When learning task t_1, the loss function used to train the ResNet-18 network is loss_1 = L_CE + α·L_m.
Step (37) back-propagates the total loss value, reduces it using a gradient descent algorithm, and updates the model parameters.
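Putting Example 2 together, a hypothetical end-to-end flow over the task stream might look as follows. make_model, train_initial_task, and loaders are assumed helpers (make_model is assumed to return a model exposing backbone and classifier attributes); compute_class_centers and train_task refer to the sketches above, and widen_classifier to the sketch given with Example 3 below.

```python
import copy

B1, B2, N = 6, 4, 5                          # the values used in Example 3 below

model = make_model(num_classes=B1)           # ResNet-18 backbone + linear head
train_initial_task(model, loaders[1])        # t_1: loss_1 = L_CE + alpha * L_m

for n in range(2, N + 1):
    f_prev = copy.deepcopy(model.backbone).eval()    # frozen f_{n-1}
    for p in f_prev.parameters():
        p.requires_grad_(False)
    num_old = B1 + (n - 2) * B2                      # classes known before t_n
    centers = compute_class_centers(f_prev, loaders[n], num_new=B2,
                                    first_new=num_old, feat_dim=512)
    model.classifier = widen_classifier(model.classifier, centers)  # step (34)
    train_task(model, f_prev, loaders[n])    # loss_n = L_CE + a*L_m + b*L_D
```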
Example 3: referring to fig. 2 and fig. 3, a specific example is given on the basis of embodiment 2.
Step (1) is the same as step (1) of Example 2. The data set of mobile phone radiation source spectrograms obtained by the user comprises 22 classes in total, and the user determines N = 5 tasks, t_1~t_5, where t_1 is used to identify 6 classes and each of t_2~t_5 identifies 4 classes; the classes of different tasks do not overlap. The spectrograms in the data set are collected by the user; a mobile phone radiation source spectrogram is a spectrogram obtained by processing the signals of mobile phones of different models. The training data sets of the 5 tasks t_1~t_5 are D_1~D_5 with class sets C_1~C_5; C_1 contains 6 classes, and each of C_2~C_5 contains 4 classes.
Step (2) is the same as step (2) of Example 2; its purpose is to train the initial task t_1 and obtain a network model. Here t_1 may be called the initial task, and the network model the initial network model. When training the initial task t_1, the input of the ResNet-18 network is a picture P1 from D_1, and the output is the probability that P1 belongs to each of the 6 classes in C_1, a 6-dimensional vector, e.g., [0.1, 0.9, 0.4, 0.3, 0.8, 0.6], meaning the probability that P1 belongs to class 1 is 0.1, the probability that it belongs to class 2 is 0.9, and so on. The output is compared with the class label corresponding to P1, also a 6-dimensional vector; e.g., the label of class 1 is [1,0,0,0,0,0], the label of class 2 is [0,1,0,0,0,0], and so on. If P1 corresponds to class 2, the expected output is [0,1,0,0,0,0], and the model parameters are adjusted and updated according to the difference between the model's actual output and the expected output. Finally, the network model of t_1 is obtained.
When learning task t_1, the loss function used to train the ResNet-18 network is loss_1 = L_CE + α·L_m. When actually training the network model, the data set is divided into a training set and a test set: training uses the training-set data and testing uses the test-set data. For example, for an input test sample P2, the model outputs its predicted probabilities, which are compared with the labels of all the classes the model can recognize; P2 is assigned to the class whose label the output is closest to.
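For a test sample, comparing the predicted probabilities with the one-hot class labels reduces to taking the argmax, as in this minimal sketch (the model interface matches the training sketches above and is an assumption):

```python
import torch

@torch.no_grad()
def predict(model, p2):
    """Classify test sample P2: the class whose one-hot label the
    output probabilities are closest to is simply the argmax."""
    logits, _ = model(p2.unsqueeze(0))
    return logits.softmax(dim=1).argmax(dim=1).item()
```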
And (3): like step (3) in embodiment 2, the step is a process of updating the network model once, and the network model has the capability of identifying more than B2 classes every time the network model is updated.
Take n = 2 as an example, updating the network model corresponding to t_2. In step (31), task t_2 contains B2 = 4 classes, each class corresponding to S = 250 samples in D_2, so D_2 contains 1000 samples in total;
through the steps (32) and (33), 4 central values corresponding to 4 categories are obtained;
in step (35), U = B2 × S =4 × 250=1000.
Note that D_2 contains only the 4 new classes, so the input samples used when calculating the total loss value also cover only these 4 classes; but when the model judges the class to which an input sample belongs, it judges among the classes of both t_1 and t_2, i.e., M is the set of all 10 classes.
(37) Train the network model obtained from t_{n-1} based on the total loss function to obtain the network model of t_n.
After learning the network model of t_1, the invention can identify the 6 classes in C_1; after learning the network model of t_2, it can identify the 6 + 4 = 10 classes in C_1 and C_2; and so on, until finally, after learning t_5, 22 classes can be identified.
Each layer of a neural network model has parameters; the parameters of the classification layer are generally called classification parameters. For example, if the feature dimension extracted by the feature extractor is 512 and t_1 recognizes 6 classes, the classification-layer parameters are 512 × 6, each class corresponding to a 512 × 1 classification parameter. Classification parameters are generally initialized randomly, and the invention may adopt random initialization when learning t_1. t_2 adds 4 classes, i.e., 4 classification parameters, so the classification-layer parameters become 512 × 10: 4 randomly initialized 512 × 1 classification parameters are added, each new class corresponding to one classification parameter, and the 4 center values are used to replace the 4 randomly initialized classification parameters. Since a center value is the mean of features, it is also 512 × 1.
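A minimal PyTorch sketch of this widening, under the assumption that the classification layer is an nn.Linear (which stores its weight as (out_features, in_features)):

```python
import torch
import torch.nn as nn

def widen_classifier(old_fc: nn.Linear, centers: torch.Tensor) -> nn.Linear:
    """Grow the classification layer (e.g. 512x6 -> 512x10 parameters) and
    initialize the new 512x1 classification parameters with the center values."""
    feat_dim, num_old = old_fc.in_features, old_fc.out_features   # 512, 6
    num_new = centers.size(0)                                     # 4
    new_fc = nn.Linear(feat_dim, num_old + num_new,
                       bias=old_fc.bias is not None)
    with torch.no_grad():
        new_fc.weight[:num_old] = old_fc.weight   # keep learned old-class rows
        new_fc.weight[num_old:] = centers         # centers replace random init
        if old_fc.bias is not None:
            new_fc.bias[:num_old] = old_fc.bias
            new_fc.bias[num_old:] = 0.0
    return new_fc
```

With old_fc = nn.Linear(512, 6) and centers of shape (4, 512), the returned layer has 512 × 10 parameters, each new class owning one 512 × 1 classification parameter initialized to its center value.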
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (5)

1. A class-incremental learning method for mobile phone radiation source spectrograms based on a deep neural network, characterized by comprising the following steps:
(1) Acquire a data set of mobile phone radiation source spectrograms, and determine N tasks t_1~t_N according to the number of classes in the data set, where t_1 is used to learn B1 classes, each of t_2~t_N is used to learn B2 classes, and the classes of different tasks do not overlap;
Obtain the training data sets of the N tasks from the mobile phone radiation source spectrogram data set, forming a data stream D = {D_1, D_2, …, D_N}; the training data set of t_1 is D_1 with class set C_1, and the training data set of t_n is D_n with class set C_n, n = 2~N; the n-th class-incremental learning stage can obtain only the data of task t_n; each training data set comprises a plurality of samples;
(2) In the first class-incremental learning stage, learn task t_1: taking the samples in D_1 as input and the class of each sample as the expected output, train a ResNet-18 network to obtain a network model; the ResNet-18 network comprises a feature extractor and a classification layer, and after learning t_1 the feature extractor obtained is f_1(·) and the classification layer is C_1(·);
(3) In each subsequent incremental learning stage, learn tasks t_2~t_N in sequence, updating the network model once after each task is learned; after learning t_n, the feature extractor obtained is f_n(·) and the classification layer is C_n(·), capable of identifying B1 + (n-1)×B2 classes;
Updating the network model corresponding to t_n specifically comprises:
(31) Suppose task t_n contains B2 classes, each class corresponding to S samples in D_n;
(32) For one of these classes, use the feature extractor f_{n-1}(·) obtained from t_{n-1} to extract the features of its S samples, where the feature of the k-th sample x_k is f_{n-1}(x_k), k = 1~S, and calculate the center value of the class as the mean of these features:

c = (1/S) Σ_{k=1}^{S} f_{n-1}(x_k)
(33) Following step (32), obtain the center values of all the classes in t_n, B2 center values in total;
(34) Add classification parameters for the B2 classes of t_n to the classification layer obtained from t_{n-1}, and replace the B2 newly added classification parameters with the B2 center values obtained in step (33);
(35) Calculate the distillation loss L_D according to the following formula:

L_D = (1/U) Σ_{i=1}^{U} ‖ f_n(x_i) - f_{n-1}(x_i) ‖

where U is the total number of samples in D_n, U = B2 × S;
(36) Calculate the total loss function loss_n of task t_n:

loss_n = L_CE + α·L_m + β·L_D

where L_CE is the cross-entropy loss function, L_m is the distance-based metric loss function, α is the weight of L_m, and β is the weight of L_D;
(37) Train the network model obtained from t_{n-1} based on the total loss function to obtain the network model of t_n.
2. The mobile phone radiation source spectrogram class-incremental learning method based on a deep neural network as claimed in claim 1, wherein in said step (36):
the cross-entropy loss function L_CE is calculated according to the following formula:

L_CE = -(1/U) Σ_{i=1}^{U} Σ_{c∈M} y · log p̂_c(x_i)

where i indexes the samples in D_n, i = 1~U; M is the set of all classes in C_1~C_n; c is one class in M; y is the label of sample x_i, equal to 1 when y = class c and 0 otherwise; and p̂_c(x_i) is the probability predicted by the current network model that x_i belongs to class c;
the distance-based metric loss function L_m is calculated according to the following formula:

L_m = max(0, d(f_n(x_i), f_n(x_p)) - d(f_n(x_i), f_n(x_q)) + D)

where x_p is a sample in D_n belonging to the same class as x_i (there are S-1 such samples), x_q is a sample in D_n not belonging to the same class as x_i (there are U-S such samples), d(·,·) is the Euclidean distance between two features, and D is a preset distance threshold.
3. The mobile phone radiation source spectrogram class-incremental learning method based on a deep neural network as claimed in claim 1 or 2, wherein α = 0.001 and β = 0.05.
4. The mobile phone radiation source spectrogram class-incremental learning method based on a deep neural network as claimed in claim 1 or 2, wherein when learning task t_1, the loss function used to train the ResNet-18 network is loss_1 = L_CE + α·L_m.
5. The mobile phone radiation source spectrogram class-incremental learning method based on a deep neural network as claimed in claim 1, wherein step (37) back-propagates the total loss value, reduces it using a gradient descent algorithm, and updates the model parameters.
CN202211569854.2A 2022-12-08 2022-12-08 Mobile phone radiation source spectrogram category increment learning method based on deep neural network Pending CN115828100A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211569854.2A CN115828100A (en) 2022-12-08 2022-12-08 Mobile phone radiation source spectrogram category increment learning method based on deep neural network


Publications (1)

Publication Number Publication Date
CN115828100A true CN115828100A (en) 2023-03-21

Family

ID=85544561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211569854.2A Pending CN115828100A (en) 2022-12-08 2022-12-08 Mobile phone radiation source spectrogram category increment learning method based on deep neural network

Country Status (1)

Country Link
CN (1) CN115828100A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116306875A (en) * 2023-05-18 2023-06-23 成都理工大学 Drainage pipe network sample increment learning method based on space pre-learning and fitting
CN116306875B (en) * 2023-05-18 2023-08-01 成都理工大学 Drainage pipe network sample increment learning method based on space pre-learning and fitting


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination