CN108449295A - Combined modulation recognition method based on RBM network and BP neural network - Google Patents

Combined modulation recognition method based on RBM network and BP neural network

Info

Publication number
CN108449295A
Authority
CN
China
Prior art keywords
layer
rbm
neural network
parameter
hidden layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810113576.7A
Other languages
Chinese (zh)
Inventor
李文刚
艾灿
王屹伟
钱天蓉
黄辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Innovation Institute of Xidian University
Original Assignee
Kunshan Innovation Institute of Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Innovation Institute of Xidian University filed Critical Kunshan Innovation Institute of Xidian University
Priority to CN201810113576.7A priority Critical patent/CN108449295A/en
Publication of CN108449295A publication Critical patent/CN108449295A/en
Pending legal-status Critical Current


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 27/00 - Modulated-carrier systems
    • H04L 27/0012 - Modulated-carrier systems; arrangements for identifying the type of modulation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/084 - Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a combined modulation recognition method based on an RBM network and a BP neural network, including: (1) preprocessing the modulated signal; (2) extracting characteristic parameters; (3) randomly generating training and test samples for each modulation class; (4) training the input layer and the first hidden layer of the BP network as an RBM; (5) initializing the RBM parameter set; (6) training the RBM to obtain its parameter set and the hidden-layer output features; (7) taking the first hidden layer of the BP network and the next layer as the visible and hidden layers of a second RBM, with the output of the first RBM as the input of the second, and repeating (5)(6)(7) until the parameter sets of all RBMs are obtained; (8) retraining the BP network until the optimal solution state is reached; (9) normalizing the test data, inputting it into the trained BP network and calculating the modulation recognition rate. The beneficial effects of the invention are: the input dimensionality is reduced and the modulation recognition rate is improved.

Description

Combined modulation recognition method based on RBM network and BP neural network
Technical field
The present invention relates to modulation recognition methods, and in particular to a combined modulation recognition method based on an RBM network and a BP neural network, belonging to the field of communication technology.
Background technology
With the development of communication technology, signal modulation recognition has found application across almost the entire field of civilian and military communications. It plays an important role in signal verification, interference identification, electronic countermeasures and other areas, and has broad application value and prospects.
Modulation recognition refers to identifying the modulation scheme of a received signal before demodulation. In general, there are three types of modulation recognition methods: decision tree classifiers, cluster analysis, and neural network classifiers.
Nandi and Azzouz proposed a decision tree algorithm that classifies signals based on characteristic parameters. Decision tree classification is a low-complexity and intuitive method, but it is easily affected by noise, so in practice it is usually combined with other methods. Cluster analysis is a multivariate statistical classification method that performs blind classification according to pattern similarity among unlabeled samples, but clustering is also vulnerable to noise, and the choice of extracted characteristic parameters strongly affects recognition performance. As the most common methods, back-propagation (BP) and radial basis function (RBF) neural networks, with their self-learning and abstraction abilities, are well suited to classification problems in which a latent nonlinear mapping exists between input and output; however, neural network classifiers are prone to becoming trapped in local optima. In addition, their convergence slows considerably near the optimal solution, their generalization ability is limited, and the recognition rate is low at low signal-to-noise ratio (SNR).
The restricted Boltzmann machine (RBM) is a network without intra-layer connections, proposed on the basis of the Boltzmann machine (BM) introduced by Hinton and Sejnowski; with enough hidden units it can fit an arbitrary discrete distribution. After Hinton proposed the fast learning algorithm for RBMs, contrastive divergence (CD), in 2002, RBMs have been successfully applied to various machine learning problems such as classification, regression and dimensionality reduction.
Summary of the invention
To overcome the deficiencies of the prior art, the object of the present invention is to provide a combined modulation recognition method, based on an RBM network and a BP neural network, that can improve the modulation recognition rate.
In order to achieve the above object, the present invention adopts the following technical scheme:
A combined modulation recognition method based on an RBM network and a BP neural network, characterized in that it comprises the following steps:
Step 1: preprocess the modulated signal to be classified;
Step 2: extract characteristic parameters from the preprocessed signal, the characteristic parameters being time-domain, frequency-domain and statistical characteristic parameters;
Step 3: randomly generate training samples and test samples for each modulation class according to the characteristic parameters extracted in Step 2, obtaining a training sample set, a test data set and the corresponding class label sets;
Step 4: set the relevant parameters of the BP neural network, and train the input layer and the first hidden layer of the BP neural network as the first RBM network;
Step 5: initialize the RBM parameter set θ, θ = (W, a, b), where W is the weight matrix between the hidden layer and the visible layer of the RBM network, a is the bias vector of the visible layer, and b is the bias vector of the hidden layer;
Step 6: divide the training samples generated in Step 3 into mini-batches, normalize them and train the RBM network, obtaining the RBM parameter set θ and the hidden-layer output features p(h_j = 1 | v), where h is the state vector of the hidden layer, h_j denotes the state of the j-th neuron in the hidden layer, and v is the state vector of the visible layer;
Step 7: take the first hidden layer of the BP neural network and the next layer as the visible layer and hidden layer of the second RBM network and train them, the output p(h_j = 1 | v) of the first RBM network serving as the input of the second RBM network; repeat Step 5, Step 6 and Step 7 until the parameter sets θ of all RBM networks are obtained;
Step 8: set the initial parameters θ' of the BP neural network to the parameter set θ of the trained RBM networks, retrain the BP neural network with supervision, and fine-tune θ' to reach the optimal solution state;
Step 9: normalize the test data, input it into the trained BP neural network, and compute the modulation recognition rate.
In the foregoing combined modulation recognition method based on an RBM network and a BP neural network, in Step 1 the preprocessing includes: zero-mean processing and normalization.
In the foregoing combined modulation recognition method based on an RBM network and a BP neural network, in Step 2 the characteristic parameters include: the instantaneous amplitude feature, the spectral feature of the signal and the higher-order spectral feature of the signal.
In the foregoing combined modulation recognition method based on an RBM network and a BP neural network, in Step 4 the relevant parameters include: the number of nodes in each layer and the number of hidden layers of the BP neural network.
In the foregoing combined modulation recognition method based on an RBM network and a BP neural network, in Step 5 the detailed process of initializing the RBM parameter set θ is as follows:
(5a) divide the training samples generated in Step 3 into mini-batches containing 20 samples per class; set the number of visible units n_v to the number of input-layer nodes of the BP neural network, and set the number of hidden units n_h to the number of nodes of the first hidden layer of the BP neural network;
(5b) initialize the weight matrix W with random numbers drawn from the normal distribution N(0, 0.01); initialize a_i and b_j to 0; initialize the approximations Δw_ij, Δa_i and Δb_j of the partial derivatives of the objective function with respect to w_ij, a_i and b_j at each iteration to 0; set the learning rate ε_w for the weights between the visible layer and the hidden layer, the learning rate ε_a for the visible-layer biases and the learning rate ε_b for the hidden-layer biases all to 0.01; set the initial momentum learning rate and the final momentum learning rate to 0.5 and 0.9, respectively; set the weight-decay coefficient λ to any value between 0.01 and 0.0001; and set the parameter k in the k-step contrastive divergence algorithm to k = 1.
In the foregoing combined modulation recognition method based on an RBM network and a BP neural network, in Step 6 the detailed process of training the RBM network is as follows:
(6a) during the forward pass of the modulated signal to be classified, v_i^(0) denotes the input value of the first forward pass, and the output of the hidden layer of the RBM network is
p(h_j^(0) = 1 | v^(0)) = σ(b_j + Σ_i w_ij · v_i^(0))   (1)
where σ(x) = 1/(1 + e^(-x))   (2) is the activation function of the RBM network, used to compute the output probability from the input v_i^(0);
(6b) binarize p(h_j^(0) = 1 | v^(0)) to obtain the value of h_j^(0): if a randomly generated number between 0 and 1 is less than p(h_j^(0) = 1 | v^(0)), then h_j^(0) takes the value 1, otherwise h_j^(0) takes the value 0;
(6c) during the backward pass of the modulated signal to be classified, the h_j^(0) obtained above is used as input, and the output of the visible layer of the RBM network is
v_i^(1) = p(v_i = 1 | h^(0)) = σ(a_i + Σ_j w_ij · h_j^(0))   (3)
(6d) using the v_i^(1) computed in the backward pass, repeat step (6a) and step (6b) to obtain the values of p(h_j^(1) = 1 | v^(1)) and h_j^(1) for the second iteration;
(6e) using the CD-k algorithm, perform k steps of alternating Gibbs sampling to obtain the approximations of the partial derivatives of the objective function with respect to w_ij, a_i and b_j at each iteration; since k = 1, Δw_ij, Δa_i and Δb_j are expressed respectively as:
Δw_ij ≈ p(h_j^(0) = 1 | v^(0)) · v_i^(0) - p(h_j^(1) = 1 | v^(1)) · v_i^(1)   (4)
Δa_i = v_i^(0) - v_i^(1)   (5)
Δb_j = p(h_j^(0) = 1 | v^(0)) - p(h_j^(1) = 1 | v^(1))   (6)
(6f) at the (l+1)-th iteration, update w_ij, a_i and b_j by gradient ascent, with update formulas:
w_ij^(l+1) = w_ij^(l) + Δw_ij^(l+1)   (7)
a_i^(l+1) = a_i^(l) + Δa_i^(l+1)   (8)
b_j^(l+1) = b_j^(l) + Δb_j^(l+1)   (9)
where Δw_ij, Δa_i and Δb_j are computed from the mini-batch gradient approximations together with a momentum term and a weight-decay term (formulas (10)-(12)); ρ is the momentum learning rate, n_block is the number of samples in a mini-batch, and λ·w_ij is the weight-decay term.
In the foregoing combined modulation recognition method based on an RBM network and a BP neural network, in Step 8 the process of training the BP neural network and fine-tuning the parameters θ' to the optimal solution state is as follows:
(8a) initialize the BP neural network parameters θ' with the parameter set θ of the trained RBM networks, and then set the transfer function of the BP neural network to the sigmoid function;
(8b) during forward propagation of the modulated signal to be classified, form a new training set from the original training samples and their class labels and feed it in through the input layer; after layer-by-layer processing in each hidden layer it is passed to the output layer; if the error between the actual output of the output layer and the desired output is too large, the error back-propagation stage is entered;
(8c) during back-propagation of the modulated signal to be classified, propagate the output error back layer by layer from the hidden layers to the input layer and distribute the error to all units of each layer, so as to obtain the error signal of every unit; this error signal is used to correct the weight parameters of each layer; when the error is less than the minimum error, training is complete.
The beneficial effects of the invention are:
1. the modulated signal to be classified is preprocessed by zero-mean (zero-centering) processing and normalization, and characteristic parameters are extracted, which reduces the input dimensionality;
2. problems such as the BP neural network becoming trapped in local minima during training and converging too slowly near the optimal solution are effectively avoided, improving the modulation recognition rate of the system.
Description of the drawings
Fig. 1 is the overall flow chart of the combined modulation recognition method of the present invention;
Fig. 2 is a schematic diagram of the combination of the RBM networks with the BP neural network;
Fig. 3 is the training flow chart of an RBM network.
Detailed description of embodiments
The present invention preprocesses the modulated signal to be classified by zero-mean processing, normalization and the like, and extracts characteristic parameters, thereby reducing the input dimensionality; the input layer, the hidden layers and the output layer of the BP neural network are then trained in the manner of stacked RBM networks to obtain initial values for the weights and bias parameters of the BP neural network; finally, the BP neural network is trained to fine-tune the parameters, so as to carry out signal modulation recognition.
The present invention is described in detail below with reference to the drawings and specific embodiments.
Referring to Fig. 1, the combined modulation recognition method of the invention based on an RBM network and a BP neural network is implemented as follows:
Step 1: preprocess the modulated signal x(n) to be classified by zero-mean processing and normalization.
The signal sequence s(n) obtained by zero-mean and normalization preprocessing is expressed in terms of s_z(n), the sequence of the modulated signal x(n) to be classified after zero-mean processing, HT(s_z), the Hilbert transform of s_z(n), and N, the length of the signal sequence.
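By way of illustration, a minimal Python/NumPy sketch of the Step 1 preprocessing is given below. The patent's normalization formula is referred to only symbolically above, so the Hilbert-envelope scaling used here, and the function name preprocess, are assumptions rather than the patent's exact expression.

import numpy as np
from scipy.signal import hilbert

def preprocess(x):
    # Step 1 (sketch): zero-mean ("zero centering") removes the DC offset of the raw samples.
    s_z = x - np.mean(x)
    # Normalization: the patent's exact formula is not reproduced here; one plausible
    # choice, consistent with the use of the Hilbert transform HT(s_z) and the sequence
    # length N, is to divide by the mean instantaneous amplitude of the analytic signal.
    envelope = np.abs(hilbert(s_z))      # |s_z(n) + j*HT(s_z)(n)|
    return s_z / np.mean(envelope)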
Step 2: extract characteristic parameters from the preprocessed signal; these are mainly characteristic parameters based on the time domain, the frequency domain and statistics, where:
(1) the time-domain characteristic parameter is the instantaneous amplitude feature, used to distinguish MASK and MQAM modulation schemes;
(2) the frequency-domain characteristic parameter is the spectral feature of the signal, used to distinguish MFSK modulation schemes;
(3) the statistical characteristic parameter is the higher-order spectral feature of the signal, used to distinguish MPSK modulation schemes; an illustrative feature-extraction sketch is given after this list.
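A minimal Python sketch of such feature extraction follows. The patent names only the three feature families, so the concrete statistics computed here (an instantaneous-amplitude standard deviation, the spectral peak gamma_max, and a fourth-order moment ratio) are common illustrative choices and assumptions, not the patent's exact parameters.

import numpy as np
from scipy.signal import hilbert

def extract_features(s):
    # Step 2 (sketch): one time-domain, one frequency-domain and one statistical feature.
    a = np.abs(hilbert(s))               # instantaneous amplitude
    a_cn = a / np.mean(a) - 1.0          # centred, normalised amplitude
    sigma_aa = np.std(np.abs(a_cn))      # amplitude feature (separates MASK / MQAM)
    gamma_max = np.max(np.abs(np.fft.fft(a_cn))) ** 2 / len(s)   # spectral feature (MFSK)
    c = s - np.mean(s)
    kurt = np.mean(c ** 4) / np.mean(c ** 2) ** 2                # higher-order feature (MPSK)
    return np.array([sigma_aa, gamma_max, kurt])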
Step 3: randomly generate training samples and test samples for each modulation class according to the characteristic parameters extracted in Step 2, obtaining a training sample set, a test data set and the corresponding class label sets.
Steps 4 to 7 constitute the combined training of the RBM networks and the BP neural network, with reference to Fig. 2.
Step 4: set the relevant parameters of the BP neural network, and train the input layer and the first hidden layer of the BP neural network as the first RBM network.
Step 5: initialize the RBM parameter set θ, θ = (W, a, b), where W is the weight matrix between the hidden layer and the visible layer of the RBM network, W = {w_ij}, with w_ij the connection weight between the i-th neuron in the visible layer and the j-th neuron in the hidden layer; a is the bias vector of the visible layer, a = {a_i}, with a_i the bias of the i-th neuron in the visible layer; and b is the bias vector of the hidden layer, b = {b_j}, with b_j the bias of the j-th neuron in the hidden layer.
The detailed process of initializing the RBM parameter set θ is as follows:
(5a) divide the training samples generated in Step 3 into mini-batches containing 20 samples per class (to improve computational efficiency); set the number of visible units n_v to the number of input-layer nodes of the BP neural network, and set the number of hidden units n_h to the number of nodes of the first hidden layer of the BP neural network.
(5b) initialize the weight matrix W with random numbers drawn from the normal distribution N(0, 0.01); initialize a_i and b_j to 0; initialize the approximations Δw_ij, Δa_i and Δb_j of the partial derivatives of the objective function with respect to w_ij, a_i and b_j at each iteration to 0, i.e. Δw_ij = 0, Δa_i = 0, Δb_j = 0; set the learning rate ε_w for the weights between the visible layer and the hidden layer, the learning rate ε_a for the visible-layer biases and the learning rate ε_b for the hidden-layer biases all to 0.01, i.e. ε_w = ε_a = ε_b = 0.01; set the initial momentum learning rate and the final momentum learning rate to 0.5 and 0.9, respectively; set the weight-decay coefficient λ to any value between 0.01 and 0.0001; and set the parameter k in the k-step contrastive divergence algorithm to k = 1.
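The following Python sketch collects the parameter set θ = (W, a, b) and the hyper-parameters of (5a)-(5b) in one place; the class and attribute names are illustrative, and N(0, 0.01) is read here as a standard deviation of 0.01.

import numpy as np

class RBMParams:
    # Parameter set θ = (W, a, b) of one RBM plus the hyper-parameters of (5a)-(5b).
    def __init__(self, n_visible, n_hidden, rng=None):
        rng = rng or np.random.default_rng()
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))   # W ~ N(0, 0.01)
        self.a = np.zeros(n_visible)                                  # a_i = 0
        self.b = np.zeros(n_hidden)                                   # b_j = 0
        self.dW = np.zeros_like(self.W)                               # Δw_ij = 0
        self.da = np.zeros_like(self.a)                               # Δa_i = 0
        self.db = np.zeros_like(self.b)                               # Δb_j = 0
        self.eps_w = self.eps_a = self.eps_b = 0.01                   # ε_w = ε_a = ε_b = 0.01
        self.momentum_initial, self.momentum_final = 0.5, 0.9         # momentum schedule
        self.weight_decay = 0.001                                     # λ, any value in [0.0001, 0.01]
        self.cd_k = 1                                                 # k-step contrastive divergence, k = 1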
Step 6: divide the training samples generated in Step 3 into mini-batches, normalize them and train the RBM network, obtaining the RBM parameter set θ and the hidden-layer output features p(h_j = 1 | v), where h_j is the state of the j-th neuron in the hidden layer, h_j ∈ h, h is the state vector of the hidden layer, v is the state vector of the visible layer, v = {v_i}, and v_i is the state of the i-th neuron in the visible layer.
With reference to Fig. 3, the detailed process of training the RBM network is as follows:
(6a) During the forward pass of the modulated signal to be classified, v_i^(0) denotes the input value of the first forward pass, and the output of the hidden layer of the RBM network is
p(h_j^(0) = 1 | v^(0)) = σ(b_j + Σ_i w_ij · v_i^(0))   (1)
where σ(x) = 1/(1 + e^(-x))   (2) is the activation function of the RBM network, used to compute the output probability from the input v_i^(0).
(6b) Binarize p(h_j^(0) = 1 | v^(0)) to obtain the value of h_j^(0): randomly generate a number between 0 and 1; if this number is less than p(h_j^(0) = 1 | v^(0)), then h_j^(0) takes the value 1, otherwise h_j^(0) takes the value 0.
(6c) During the backward pass of the modulated signal to be classified, the h_j^(0) obtained above is used as input, and the output of the visible layer of the RBM network is
v_i^(1) = p(v_i = 1 | h^(0)) = σ(a_i + Σ_j w_ij · h_j^(0))   (3)
Since the input v_i^(0) lies in [0, 1], the output probability of the visible layer likewise does not need to be binarized and can be used directly as the input v_i^(1) of the next forward pass.
(6d) Using the v_i^(1) computed in the backward pass, repeat step (6a) and step (6b) to obtain the values of p(h_j^(1) = 1 | v^(1)) and h_j^(1) for the second iteration.
(6e) Using the CD-k algorithm, perform k steps of alternating Gibbs sampling to obtain the approximations of the partial derivatives of the objective function with respect to w_ij, a_i and b_j at each iteration; since k = 1, Δw_ij, Δa_i and Δb_j are expressed respectively as:
Δw_ij ≈ p(h_j^(0) = 1 | v^(0)) · v_i^(0) - p(h_j^(1) = 1 | v^(1)) · v_i^(1)   (4)
Δa_i = v_i^(0) - v_i^(1)   (5)
Δb_j = p(h_j^(0) = 1 | v^(0)) - p(h_j^(1) = 1 | v^(1))   (6)
(6f) At the (l+1)-th iteration, update w_ij, a_i and b_j by gradient ascent, with update formulas:
w_ij^(l+1) = w_ij^(l) + Δw_ij^(l+1)   (7)
a_i^(l+1) = a_i^(l) + Δa_i^(l+1)   (8)
b_j^(l+1) = b_j^(l) + Δb_j^(l+1)   (9)
where Δw_ij, Δa_i and Δb_j are computed from the mini-batch gradient approximations together with a momentum term and a weight-decay term (formulas (10)-(12)); ρ is the momentum learning rate, n_block is the number of samples in a mini-batch, and λ·w_ij is the weight-decay term.
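A compact Python sketch of one CD-1 mini-batch update, following (6a)-(6f), is given below; it assumes the RBMParams container sketched above, and the exact form of the momentum and weight-decay terms of formulas (10)-(12) is an assumption.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(p, v0, momentum, rng=None):
    # One CD-1 mini-batch update; p is an RBMParams object as sketched above,
    # v0 is a (batch, n_visible) array of normalised samples.
    rng = rng or np.random.default_rng()
    n_block = v0.shape[0]                             # number of samples in the mini-batch
    ph0 = sigmoid(v0 @ p.W + p.b)                     # (6a) p(h_j = 1 | v^(0))
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # (6b) binarise against a uniform random number
    v1 = sigmoid(h0 @ p.W.T + p.a)                    # (6c) reconstruction, kept as probabilities
    ph1 = sigmoid(v1 @ p.W + p.b)                     # (6d) p(h_j = 1 | v^(1))
    # (6e)-(6f): gradient-ascent updates (4)-(9), with momentum and weight-decay terms
    # whose exact form around formulas (10)-(12) is assumed here.
    p.dW = momentum * p.dW + p.eps_w * ((v0.T @ ph0 - v1.T @ ph1) / n_block - p.weight_decay * p.W)
    p.da = momentum * p.da + p.eps_a * np.mean(v0 - v1, axis=0)
    p.db = momentum * p.db + p.eps_b * np.mean(ph0 - ph1, axis=0)
    p.W += p.dW
    p.a += p.da
    p.b += p.db
    return ph0                                        # hidden-layer output p(h_j = 1 | v)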
Step 7: take the first hidden layer of the BP neural network and the next layer as the visible layer and hidden layer of the second RBM network and train them, the output p(h_j = 1 | v) of the first RBM network serving as the input of the second RBM network; repeat Step 5, Step 6 and Step 7 until the parameter sets θ of all RBM networks are obtained.
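Building on the two sketches above, the greedy layer-wise loop of Steps 4 to 7 can be outlined as follows; layer_sizes, the epoch count and the momentum switch point are illustrative assumptions.

import numpy as np

def pretrain_stack(X, layer_sizes, epochs=50, batch_size=20, rng=None):
    # Greedy layer-wise pretraining, reusing RBMParams and cd1_update from the sketches above;
    # layer_sizes = [n_input, n_hidden1, n_hidden2, ...].
    rng = rng or np.random.default_rng(0)
    stack, inp = [], X
    for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
        p = RBMParams(n_vis, n_hid, rng)
        for epoch in range(epochs):
            # momentum schedule of (5b): initial value 0.5, final value 0.9 (switch point assumed)
            momentum = p.momentum_initial if epoch < 5 else p.momentum_final
            order = rng.permutation(inp.shape[0])
            for start in range(0, inp.shape[0], batch_size):
                cd1_update(p, inp[order[start:start + batch_size]], momentum, rng)
        stack.append(p)
        # the output p(h_j = 1 | v) of this RBM becomes the input of the next RBM (Step 7)
        inp = 1.0 / (1.0 + np.exp(-(inp @ p.W + p.b)))
    return stack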
Step 8: set the initial parameters θ' of the BP neural network to the parameter set θ of the trained RBM networks, retrain the BP neural network with supervision, and fine-tune θ' to reach the optimal solution state; the whole process is as follows:
(8a) initialize the BP neural network parameters θ' with the parameter set θ = (W, a, b) of the trained RBM networks, i.e. initialize θ' with the trained weight matrix W and the bias vectors a and b, and then set the transfer function of the BP neural network to the sigmoid function σ(x) = 1/(1 + e^(-x)).
(8b) During forward propagation of the modulated signal to be classified, form a new training set from the original training samples and their class labels and feed it in through the input layer; after layer-by-layer processing in each hidden layer it is passed to the output layer; if the error between the actual output of the output layer and the desired output is too large, the error back-propagation stage is entered.
(8c) During back-propagation of the modulated signal to be classified, propagate the output error back layer by layer from the hidden layers to the input layer and distribute the error to all units of each layer, so as to obtain the error signal of every unit; this error signal is used to correct the weight parameters of each layer; when the error is less than the minimum error, training is complete.
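A minimal Python sketch of this supervised fine-tuning is given below; the random initialisation of the output layer (which is not covered by RBM pretraining) and the use of a squared-error objective are assumptions of the sketch, and X and T denote the normalised feature matrix and one-hot class labels.

import numpy as np

def finetune_bp(rbm_stack, X, T, n_classes, epochs=200, lr=0.1, rng=None):
    # Step 8 (sketch): initialise the BP network with the pretrained W and b of each RBM,
    # then refine all parameters by error back-propagation with sigmoid units.
    rng = rng or np.random.default_rng(0)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))                      # sigmoid transfer function
    Ws = [p.W.copy() for p in rbm_stack]
    bs = [p.b.copy() for p in rbm_stack]
    Ws.append(rng.normal(0.0, 0.01, size=(Ws[-1].shape[1], n_classes)))   # assumed output layer
    bs.append(np.zeros(n_classes))
    for _ in range(epochs):
        acts = [X]                                                # (8b) forward propagation
        for W, b in zip(Ws, bs):
            acts.append(sig(acts[-1] @ W + b))
        delta = (acts[-1] - T) * acts[-1] * (1.0 - acts[-1])      # (8c) output-layer error signal
        for i in range(len(Ws) - 1, -1, -1):                      # back-propagate layer by layer
            grad_W = acts[i].T @ delta / X.shape[0]
            grad_b = np.mean(delta, axis=0)
            if i > 0:
                delta = (delta @ Ws[i].T) * acts[i] * (1.0 - acts[i])
            Ws[i] -= lr * grad_W                                  # correct the weight parameters
            bs[i] -= lr * grad_b
    return Ws, bs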
Step 9: normalize the test data, input it into the trained BP neural network, and compute the modulation recognition rate.
Because the parameters trained by the RBM networks are used as the initial parameters of the BP neural network, the initial parameters of the BP neural network are chosen close to the optimal solution; problems such as the BP neural network trained by the method of the invention becoming trapped in local minima and converging too slowly near the optimal solution are thus effectively avoided, improving the modulation recognition rate of the system.
The combined modulation recognition method based on an RBM network and a BP neural network of the present invention can be used for modulation recognition during communication signal transmission.
It should be noted that the invention is not limited in any way to the above-described embodiment; all technical solutions obtained by equivalent substitution or equivalent transformation fall within the protection scope of the present invention.

Claims (7)

1. A combined modulation recognition method based on an RBM network and a BP neural network, characterized in that it comprises the following steps:
Step 1: preprocessing the modulated signal to be classified;
Step 2: extracting characteristic parameters from the preprocessed signal, the characteristic parameters being time-domain, frequency-domain and statistical characteristic parameters;
Step 3: randomly generating training samples and test samples for each modulation class according to the characteristic parameters extracted in Step 2, to obtain a training sample set, a test data set and the corresponding class label sets;
Step 4: setting the relevant parameters of the BP neural network, and training the input layer and the first hidden layer of the BP neural network as the first RBM network;
Step 5: initializing the RBM parameter set θ, θ = (W, a, b), where W is the weight matrix between the hidden layer and the visible layer of the RBM network, a is the bias vector of the visible layer, and b is the bias vector of the hidden layer;
Step 6: dividing the training samples generated in Step 3 into mini-batches, normalizing them and training the RBM network, to obtain the RBM parameter set θ and the hidden-layer output features p(h_j = 1 | v), where h is the state vector of the hidden layer, h_j denotes the state of the j-th neuron in the hidden layer, and v is the state vector of the visible layer;
Step 7: taking the first hidden layer of the BP neural network and the next layer as the visible layer and hidden layer of the second RBM network and training them, the output p(h_j = 1 | v) of the first RBM network serving as the input of the second RBM network, and repeating Step 5, Step 6 and Step 7 until the parameter sets θ of all RBM networks are obtained;
Step 8: setting the initial parameters θ' of the BP neural network to the parameter set θ of the trained RBM networks, retraining the BP neural network with supervision, and fine-tuning θ' to reach the optimal solution state;
Step 9: normalizing the test data, inputting it into the trained BP neural network, and computing the modulation recognition rate.
2. The combined modulation recognition method based on an RBM network and a BP neural network according to claim 1, characterized in that, in Step 1, the preprocessing comprises: zero-mean processing and normalization.
3. The combined modulation recognition method based on an RBM network and a BP neural network according to claim 1, characterized in that, in Step 2, the characteristic parameters comprise: the instantaneous amplitude feature, the spectral feature of the signal and the higher-order spectral feature of the signal.
4. The combined modulation recognition method based on an RBM network and a BP neural network according to claim 1, characterized in that, in Step 4, the relevant parameters comprise: the number of nodes in each layer and the number of hidden layers of the BP neural network.
5. The combined modulation recognition method based on an RBM network and a BP neural network according to claim 1, characterized in that, in Step 5, the detailed process of initializing the RBM parameter set θ is as follows:
(5a) dividing the training samples generated in Step 3 into mini-batches containing 20 samples per class, setting the number of visible units n_v to the number of input-layer nodes of the BP neural network, and setting the number of hidden units n_h to the number of nodes of the first hidden layer of the BP neural network;
(5b) initializing the weight matrix W with random numbers drawn from the normal distribution N(0, 0.01); initializing a_i and b_j to 0; initializing the approximations Δw_ij, Δa_i and Δb_j of the partial derivatives of the objective function with respect to w_ij, a_i and b_j at each iteration to 0; setting the learning rate ε_w for the weights between the visible layer and the hidden layer, the learning rate ε_a for the visible-layer biases and the learning rate ε_b for the hidden-layer biases all to 0.01; setting the initial momentum learning rate and the final momentum learning rate to 0.5 and 0.9, respectively; setting the weight-decay coefficient λ to any value between 0.01 and 0.0001; and setting the parameter k in the k-step contrastive divergence algorithm to k = 1.
6. The combined modulation recognition method based on an RBM network and a BP neural network according to claim 1, characterized in that, in Step 6, the detailed process of training the RBM network is as follows:
(6a) during the forward pass of the modulated signal to be classified, v_i^(0) denotes the input value of the first forward pass, and the output of the hidden layer of the RBM network is
p(h_j^(0) = 1 | v^(0)) = σ(b_j + Σ_i w_ij · v_i^(0))   (1)
where σ(x) = 1/(1 + e^(-x))   (2) is the activation function of the RBM network, used to compute the output probability from the input v_i^(0);
(6b) binarizing p(h_j^(0) = 1 | v^(0)) to obtain the value of h_j^(0): if a randomly generated number between 0 and 1 is less than p(h_j^(0) = 1 | v^(0)), then h_j^(0) takes the value 1, otherwise h_j^(0) takes the value 0;
(6c) during the backward pass of the modulated signal to be classified, using the obtained h_j^(0) as input, the output of the visible layer of the RBM network being
v_i^(1) = p(v_i = 1 | h^(0)) = σ(a_i + Σ_j w_ij · h_j^(0))   (3)
(6d) using the v_i^(1) computed in the backward pass, repeating step (6a) and step (6b) to obtain the values of p(h_j^(1) = 1 | v^(1)) and h_j^(1) for the second iteration;
(6e) using the CD-k algorithm, performing k steps of alternating Gibbs sampling to obtain the approximations of the partial derivatives of the objective function with respect to w_ij, a_i and b_j at each iteration; since k = 1, Δw_ij, Δa_i and Δb_j are expressed respectively as:
Δw_ij ≈ p(h_j^(0) = 1 | v^(0)) · v_i^(0) - p(h_j^(1) = 1 | v^(1)) · v_i^(1)   (4)
Δa_i = v_i^(0) - v_i^(1)   (5)
Δb_j = p(h_j^(0) = 1 | v^(0)) - p(h_j^(1) = 1 | v^(1))   (6)
(6f) at the (l+1)-th iteration, updating w_ij, a_i and b_j by gradient ascent, with update formulas:
w_ij^(l+1) = w_ij^(l) + Δw_ij^(l+1)   (7)
a_i^(l+1) = a_i^(l) + Δa_i^(l+1)   (8)
b_j^(l+1) = b_j^(l) + Δb_j^(l+1)   (9)
where Δw_ij, Δa_i and Δb_j are computed from the mini-batch gradient approximations together with a momentum term and a weight-decay term (formulas (10)-(12)); ρ is the momentum learning rate, n_block is the number of samples in a mini-batch, and λ·w_ij is the weight-decay term.
7. The combined modulation recognition method based on an RBM network and a BP neural network according to claim 1, characterized in that, in Step 8, the process of training the BP neural network and fine-tuning the parameters θ' to the optimal solution state is as follows:
(8a) initializing the BP neural network parameters θ' with the parameter set θ of the trained RBM networks, and then setting the transfer function of the BP neural network to the sigmoid function σ(x) = 1/(1 + e^(-x));
(8b) during forward propagation of the modulated signal to be classified, forming a new training set from the original training samples and their class labels and feeding it in through the input layer; after layer-by-layer processing in each hidden layer it is passed to the output layer; if the error between the actual output of the output layer and the desired output is too large, the error back-propagation stage is entered;
(8c) during back-propagation of the modulated signal to be classified, propagating the output error back layer by layer from the hidden layers to the input layer and distributing the error to all units of each layer, so as to obtain the error signal of every unit; this error signal is used to correct the weight parameters of each layer; when the error is less than the minimum error, training is complete.
CN201810113576.7A 2018-02-05 2018-02-05 Combined modulation recognition methods based on RBM networks and BP neural network Pending CN108449295A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810113576.7A CN108449295A (en) 2018-02-05 2018-02-05 Combined modulation recognition methods based on RBM networks and BP neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810113576.7A CN108449295A (en) 2018-02-05 2018-02-05 Combined modulation recognition methods based on RBM networks and BP neural network

Publications (1)

Publication Number Publication Date
CN108449295A true CN108449295A (en) 2018-08-24

Family

ID=63191728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810113576.7A Pending CN108449295A (en) 2018-02-05 2018-02-05 Combined modulation recognition methods based on RBM networks and BP neural network

Country Status (1)

Country Link
CN (1) CN108449295A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109787927A (en) * 2019-01-03 2019-05-21 荆门博谦信息科技有限公司 Modulation Identification method and apparatus based on deep learning
CN109918794A (en) * 2019-03-11 2019-06-21 哈尔滨理工大学 A kind of blade analysis method for reliability based on RBMBP extreme value response phase method
CN110120926A (en) * 2019-05-10 2019-08-13 哈尔滨工程大学 Modulation mode of communication signal recognition methods based on evolution BP neural network
CN110224956A (en) * 2019-05-06 2019-09-10 安徽继远软件有限公司 Modulation Identification method based on interference cleaning and two stages training convolutional neural networks model
CN110472501A (en) * 2019-07-10 2019-11-19 南京邮电大学 A kind of fingerprint pore coding specification method neural network based
CN110536257A (en) * 2019-08-21 2019-12-03 成都电科慧安科技有限公司 A kind of indoor orientation method based on depth adaptive network
CN112115821A (en) * 2020-09-04 2020-12-22 西北工业大学 Multi-signal intelligent modulation mode identification method based on wavelet approximate coefficient entropy
CN112132191A (en) * 2020-09-01 2020-12-25 兰州理工大学 Intelligent evaluation and identification method for early damage state of rolling bearing
CN112288020A (en) * 2020-10-30 2021-01-29 江南大学 Digital modulation identification method based on discriminant limited Boltzmann machine
CN114626635A (en) * 2022-04-02 2022-06-14 北京乐智科技有限公司 Steel logistics cost prediction method and system based on hybrid neural network
CN117933499A (en) * 2024-03-22 2024-04-26 中国铁建电气化局集团有限公司 Invasion risk prediction method, device and storage medium for high-speed railway catenary

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021900A (en) * 2007-03-15 2007-08-22 上海交通大学 Method for making human face posture estimation utilizing dimension reduction method
US20120065976A1 (en) * 2010-09-15 2012-03-15 Microsoft Corporation Deep belief network for large vocabulary continuous speech recognition
CN103795592A (en) * 2014-01-21 2014-05-14 中国科学院信息工程研究所 Online water navy detection method and device
CN104166548A (en) * 2014-08-08 2014-11-26 同济大学 Deep learning method based on motor imagery electroencephalogram data
CN106991372A (en) * 2017-03-02 2017-07-28 北京工业大学 A kind of dynamic gesture identification method based on interacting depth learning model
CN107256393A (en) * 2017-06-05 2017-10-17 四川大学 The feature extraction and state recognition of one-dimensional physiological signal based on deep learning

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101021900A (en) * 2007-03-15 2007-08-22 上海交通大学 Method for making human face posture estimation utilizing dimension reduction method
US20120065976A1 (en) * 2010-09-15 2012-03-15 Microsoft Corporation Deep belief network for large vocabulary continuous speech recognition
CN103795592A (en) * 2014-01-21 2014-05-14 中国科学院信息工程研究所 Online water navy detection method and device
CN104166548A (en) * 2014-08-08 2014-11-26 同济大学 Deep learning method based on motor imagery electroencephalogram data
CN106991372A (en) * 2017-03-02 2017-07-28 北京工业大学 A kind of dynamic gesture identification method based on interacting depth learning model
CN107256393A (en) * 2017-06-05 2017-10-17 四川大学 The feature extraction and state recognition of one-dimensional physiological signal based on deep learning

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109787927A (en) * 2019-01-03 2019-05-21 荆门博谦信息科技有限公司 Modulation Identification method and apparatus based on deep learning
CN109918794A (en) * 2019-03-11 2019-06-21 哈尔滨理工大学 A kind of blade analysis method for reliability based on RBMBP extreme value response phase method
CN110224956A (en) * 2019-05-06 2019-09-10 安徽继远软件有限公司 Modulation Identification method based on interference cleaning and two stages training convolutional neural networks model
CN110120926B (en) * 2019-05-10 2022-01-07 哈尔滨工程大学 Communication signal modulation mode identification method based on evolution BP neural network
CN110120926A (en) * 2019-05-10 2019-08-13 哈尔滨工程大学 Modulation mode of communication signal recognition methods based on evolution BP neural network
CN110472501A (en) * 2019-07-10 2019-11-19 南京邮电大学 A kind of fingerprint pore coding specification method neural network based
CN110472501B (en) * 2019-07-10 2022-08-30 南京邮电大学 Neural network-based fingerprint sweat pore coding classification method
CN110536257A (en) * 2019-08-21 2019-12-03 成都电科慧安科技有限公司 A kind of indoor orientation method based on depth adaptive network
CN110536257B (en) * 2019-08-21 2022-02-08 成都电科慧安科技有限公司 Indoor positioning method based on depth adaptive network
CN112132191A (en) * 2020-09-01 2020-12-25 兰州理工大学 Intelligent evaluation and identification method for early damage state of rolling bearing
CN112115821A (en) * 2020-09-04 2020-12-22 西北工业大学 Multi-signal intelligent modulation mode identification method based on wavelet approximate coefficient entropy
CN112115821B (en) * 2020-09-04 2022-03-11 西北工业大学 Multi-signal intelligent modulation mode identification method based on wavelet approximate coefficient entropy
CN112288020A (en) * 2020-10-30 2021-01-29 江南大学 Digital modulation identification method based on discriminant limited Boltzmann machine
CN114626635A (en) * 2022-04-02 2022-06-14 北京乐智科技有限公司 Steel logistics cost prediction method and system based on hybrid neural network
CN117933499A (en) * 2024-03-22 2024-04-26 中国铁建电气化局集团有限公司 Invasion risk prediction method, device and storage medium for high-speed railway catenary

Similar Documents

Publication Publication Date Title
CN108449295A (en) Combined modulation recognition methods based on RBM networks and BP neural network
Kim et al. Champion-challenger analysis for credit card fraud detection: Hybrid ensemble and deep learning
CN102915445A (en) Method for classifying hyperspectral remote sensing images of improved neural network
Sengur Multiclass least-squares support vector machines for analog modulation classification
CN108768907A (en) A kind of Modulation Identification method based on temporal characteristics statistic and BP neural network
Sayjadah et al. Credit card default prediction using machine learning techniques
CN110120926A (en) Modulation mode of communication signal recognition methods based on evolution BP neural network
CN111914919A (en) Open set radiation source individual identification method based on deep learning
CN104732244A (en) Wavelet transform, multi-strategy PSO (particle swarm optimization) and SVM (support vector machine) integrated based remote sensing image classification method
CN112347844B (en) LID-based signal countermeasure sample detector design method
Badr et al. A novel evasion attack against global electricity theft detectors and a countermeasure
CN115422537A (en) Method for resisting turnover attack of federal learning label
Wozniak et al. Some remarks on chosen methods of classifier fusion based on weighted voting
CN111144500A (en) Differential privacy deep learning classification method based on analytic Gaussian mechanism
Pereira et al. A robust fingerprint presentation attack detection method against unseen attacks through adversarial learning
CN104794499A (en) Method for designing interval gray correlation classifier based on self-adaptive entropy coefficient
CN114980122A (en) Small sample radio frequency fingerprint intelligent identification system and method
CN113109782A (en) Novel classification method directly applied to radar radiation source amplitude sequence
CN110995631B (en) Communication signal modulation mode identification method and system based on LSTM and SVM
Gai et al. Spectrum sensing method based on residual cellular network
Samui Vector machine techniques for modeling of seismic liquefaction data
CN114912482A (en) Method and device for identifying radiation source
Hwang et al. An efficient domain-adaptation method using GAN for fraud detection
CN108121912A (en) A kind of malice cloud tenant recognition methods and device based on neutral net
Zhang et al. A neural rejection system against universal adversarial perturbations in radio signal classification

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20180824