CN109523023A - Deep learning network and system for underwater communication modulation recognition - Google Patents

Deep learning network and system for underwater communication modulation recognition

Info

Publication number
CN109523023A
Authority
CN
China
Prior art keywords
deep learning
learning network
data
network layer
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811364726.8A
Other languages
Chinese (zh)
Inventor
王岩
张喆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taishan University
Original Assignee
Taishan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taishan University filed Critical Taishan University
Priority to CN201811364726.8A
Publication of CN109523023A publication Critical patent/CN109523023A/en
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks


Abstract

This application discloses a deep learning network and system for underwater communication modulation recognition, comprising: a data preprocessing layer for preprocessing data of different modulation schemes transmitted through the underwater communication channel; a data feature extraction layer comprising four deep learning network layers, which respectively generate a feature extraction set and a first data feature set; a data feature classification layer comprising four deep learning network layers, which respectively generate a second data feature set and perform data classification and recognition, the feature accuracy of the second data feature set being higher than that of the first data feature set; and a data classification result output layer for determining and outputting the final modulation scheme. By stacking multiple deep learning network layers and simulating practical operating conditions, the network improves performance in real underwater communication, completes underwater modulation recognition more conveniently and efficiently, and raises the accuracy of modulation recognition decisions during underwater communication.

Description

Deep learning network and system for underwater communication modulation recognition
Technical field
This application relates to the field of deep learning, and in particular to a deep learning network and system for underwater communication modulation recognition.
Background technique
Underwater wireless communication is considered one of the most challenging forms of wireless communication. The characteristics of the underwater wireless channel, such as narrow bandwidth, long delay spread and severe intersymbol interference, make the communication process extremely difficult. These characteristics seriously degrade the stability of underwater communication systems and are a major obstacle to high-speed underwater wireless communication.
Modulation recognition plays a decisive role in signal classification within a communication system: at the receiver, correct signal demodulation depends on correct identification of the modulation class. Owing to the complexity and instability of underwater wireless communication systems, correctly identifying the modulation scheme during practical underwater communication is difficult.
To overcome these problems, several underwater modulation recognition methods have been proposed, such as sparse adaptive methods and time-domain or frequency-domain turbo equalization. However, these methods still suffer from high computational complexity and low modulation classification success rates. Accurately identifying the modulation scheme of underwater communication therefore remains a challenging problem.
Summary of the invention
To solve the above technical problems, the present application provides the following technical solutions.
In a first aspect, an embodiment of the present application provides a deep learning network for underwater communication modulation recognition, comprising: a data preprocessing layer for preprocessing data of different modulation schemes transmitted through the underwater communication channel; a data feature extraction layer comprising a first, a second, a third and a fourth deep learning network layer, wherein the first and second deep learning network layers generate a two-layer feature extraction set from the preprocessed data produced by the data preprocessing layer, and the third and fourth deep learning network layers generate a two-layer first data feature set from the feature extraction set; a data feature classification layer comprising a fifth, a sixth, a seventh and an eighth deep learning network layer, wherein the fifth and sixth deep learning network layers generate a second data feature set from the two-layer first data feature set, the feature accuracy of the second data feature set being higher than that of the first data feature set, and the seventh and eighth deep learning network layers perform data classification and recognition on the second data feature set; and a data classification result output layer for determining and outputting the final modulation scheme.
With the above implementation, training data and test data for the deep learning network model are fed into the network independently. Because the deep learning network for underwater communication modulation recognition provided by this application contains multiple deep learning network layers, and the recognition capability of the layers increases with network depth, the network improves performance when applied to real underwater communication, completes underwater modulation recognition more conveniently and efficiently, and raises the accuracy of modulation recognition decisions during underwater communication.
According to the first aspect, in a first possible implementation of the first aspect, the data of different modulation schemes received by the data preprocessing layer are modulation data simulated for training the deep learning network, and the underwater modulation data used for training the deep learning network have the same characteristics as the underwater modulation data used for testing.
According to the first possible implementation of the first aspect, in a second possible implementation of the first aspect, preprocessing the data of different modulation schemes transmitted through the underwater channel comprises: a format conversion function that converts the complex-valued data of the different modulation schemes into two-channel real-valued data; a format adjustment function that adjusts the data format to match the input format of the data feature extraction layer; and normalization of the training and test underwater modulation data.
According to the first aspect, in a third possible implementation of the first aspect, the first and second deep learning network layers have the same number of neurons, and their convolution kernels operate over the rows and columns of the preprocessed data; the second deep learning network layer additionally applies a pooling function compared with the first deep learning network layer. The third and fourth deep learning network layers have the same number of neurons, and their convolution kernels operate over the rows and columns of the feature extraction set; the fourth deep learning network layer additionally applies a pooling function compared with the third deep learning network layer.
According to the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, the number of neurons of the fifth deep learning network layer is determined by the amount of data to be processed; the fifth deep learning network layer is a fully connected layer and uses Dropout to improve its data classification performance. The sixth deep learning network layer cooperates with the fifth deep learning network layer to perform classification decisions.
According to the fourth possible implementation of the first aspect, in a fifth possible implementation of the first aspect, the number of neurons of the seventh deep learning network layer is determined by the amount of data to be processed; the seventh deep learning network layer is a fully connected layer and uses Dropout to improve its data classification performance. The eighth deep learning network layer cooperates with the seventh deep learning network layer to perform classification decisions.
According to the fifth possible implementation of the first aspect, in a sixth possible implementation of the first aspect, the data classification result output layer comprises a ninth deep learning network layer, which determines and outputs the final modulation scheme from the data classification results output by the seventh and eighth deep learning network layers.
According to the sixth possible implementation of the first aspect, in a seventh possible implementation of the first aspect, the first, second, third and fourth deep learning network layers are convolutional neural network layers, the fifth and seventh layers are fully connected neural network layers, and the sixth and eighth layers are LeakyReLU layers.
In a second aspect, a system for underwater communication modulation recognition comprises the deep learning network of the first aspect or any possible implementation thereof. The deep learning network comprises: a data preprocessing layer and first to ninth deep learning network layers. The data preprocessing layer preprocesses the data of different modulation schemes transmitted through the underwater channel; the first and second deep learning network layers generate a two-layer feature extraction set from the preprocessed data; the third and fourth deep learning network layers generate a two-layer first data feature set from the feature extraction set; the fifth and sixth deep learning network layers generate a second data feature set from the two-layer first data feature set, the second data feature set characterizing the data more accurately than the first data feature set; the seventh and eighth deep learning network layers perform data classification and recognition on the second data feature set; and the ninth deep learning network layer determines and outputs the final modulation scheme from the classification results output by the seventh and eighth deep learning network layers.
Brief description of the drawings
The application is further described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of a deep learning network for underwater communication modulation recognition provided by an embodiment of the present application;
Fig. 2 is a schematic diagram of an underwater wireless communication system model provided by an embodiment of the present application;
Fig. 3 is a schematic diagram of a simple neural network structure provided by an embodiment of the present application;
Fig. 4 is a schematic diagram of a convolution process provided by an embodiment of the present application;
Fig. 5 is a schematic diagram of max pooling provided by an embodiment of the present application;
Fig. 6 is a schematic diagram of average pooling provided by an embodiment of the present application;
Fig. 7 is a schematic diagram of performance during training of the deep learning network provided by an embodiment of the present application;
Fig. 8 is a schematic diagram of the recognition performance of the deep learning network from -20 dB to 20 dB SNR provided by an embodiment of the present application;
Fig. 9 is a schematic diagram of the recognition performance of the deep learning network at an SNR of -10 dB provided by an embodiment of the present application;
Fig. 10 is a schematic diagram of the recognition performance of the deep learning network at an SNR of -4 dB provided by an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a system for underwater communication modulation recognition provided by an embodiment of the present application;
Fig. 12 is a schematic structural diagram of a terminal provided by an embodiment of the present application.
Detailed description of the embodiments
To clarify the technical features of the invention, the solution is explained below with reference to the accompanying drawings and specific embodiments.
Deep learning (DL) is a relatively new machine learning approach that has shown significant improvements over conventional machine learning methods in many application areas. In recent years, driven by advanced big-data analysis and processing capabilities, it has become very popular in many fields. As one of the most powerful classification tools, deep learning networks have been applied to a variety of domains such as computer vision, natural language processing and speech recognition. A main reason for this rapid progress is that deep learning can automatically extract features from data sets and perform representation learning through multi-level networks. Deep learning networks have outperformed other classical models in image classification tasks by a large margin. In particular, over the past two years most industrial speech recognition products have adopted deep learning networks, and these successes have triggered a new round of research on deep learning algorithms and architectures for automatic speech recognition that is still ongoing.
Deep learning relies on large amounts of data for study and application, and such data are readily available in communication systems. In addition, unlike conventional machine learning, deep learning does not require manually designed feature extraction functions, which greatly improves classification efficiency. Deep learning methods have recently been introduced into the communication field, for example using convolutional neural networks (CNN) for communication system detection, and using deep learning for orthogonal frequency division multiplexing (OFDM) signal detection and for channel estimation in multiple-input multiple-output (MIMO) systems.
At present, deep learning is widely used in image recognition and speech processing, where CNNs in particular play an important role. Through their multi-layer structure, CNNs achieve recognition accuracy surpassing other algorithms, demonstrating their advantage in feature recognition. From a neurological point of view, the design of convolutional neural networks is inspired by the way the visual cortex of the human brain perceives the outside world: the human eye transmits perceived objects to the brain in the form of images, and the brain abstracts the images level by level, extracting edges and other higher-level features, which helps the brain make accurate judgements.
To accurately identify underwater communication modulation schemes, this application provides a deep learning network for underwater communication modulation recognition as shown in Fig. 1. Referring to Fig. 1, the deep learning network 10 provided by this application comprises: a data preprocessing layer 101, a data feature extraction layer 102, a data feature classification layer 103 and a data classification result output layer 104.
The data preprocessing layer 101 preprocesses data of different modulation schemes transmitted through the underwater channel. In this embodiment, the data of different modulation schemes received by the data preprocessing layer 101 are modulation data simulated for training the deep learning network, and the underwater modulation data used for training have the same characteristics as the underwater modulation data used for testing.
Preprocessing the data of different modulation schemes transmitted through the underwater channel comprises: a format conversion function that converts the complex-valued data of the different modulation schemes into two-channel real-valued data; a format adjustment function that adjusts the data format to match the input format of the data feature extraction layer; and normalization of the training and test underwater modulation data. A preprocessing sketch is given below.
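The following is a minimal sketch of this preprocessing step in Python/NumPy. The array shapes, the per-example peak normalization and the helper name `preprocess` are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def preprocess(iq_samples):
    """Convert complex baseband samples into the two-channel real format
    expected by the feature extraction layers (assumed shape: N x 2 x L x 1)."""
    # iq_samples: complex array of shape (num_examples, samples_per_example)
    real = np.real(iq_samples)
    imag = np.imag(iq_samples)
    # stack I and Q as two real-valued rows per example
    x = np.stack([real, imag], axis=1)                 # (N, 2, L)
    # normalize each example to unit peak amplitude (one possible choice)
    peak = np.max(np.abs(x), axis=(1, 2), keepdims=True) + 1e-12
    x = x / peak
    # add a trailing channel axis for the convolutional layers
    return x[..., np.newaxis]                          # (N, 2, L, 1)

# usage: x_train = preprocess(train_iq); x_test = preprocess(test_iq)
```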
The data feature extraction layer 102 comprises a first, a second, a third and a fourth deep learning network layer. The first and second deep learning network layers generate a two-layer feature extraction set from the preprocessed data produced by the data preprocessing layer, and the third and fourth deep learning network layers generate a two-layer first data feature set from the feature extraction set.
The first deep learning network layer generates a first feature extraction set of the underwater modulation signal. The second deep learning network layer obtains a second feature extraction set from the first feature extraction set generated by the first layer. The third deep learning network layer generates a data feature set of the underwater modulation signal processed by the second layer. The fourth deep learning network layer generates, from the output of the third layer, the data feature set that finally characterizes the modulated signal.
The first and second deep learning network layers have the same number of neurons, and their convolution kernels operate over the rows and columns of the preprocessed data; the second layer additionally applies a pooling function compared with the first layer. The third and fourth deep learning network layers have the same number of neurons, and their convolution kernels operate over the rows and columns of the feature extraction set; the fourth layer additionally applies a pooling function compared with the third layer.
The data feature classification layer 103 comprises a fifth, a sixth, a seventh and an eighth deep learning network layer. The fifth and sixth deep learning network layers generate a second data feature set from the two-layer first data feature set, the feature accuracy of the second data feature set being higher than that of the first data feature set; the seventh and eighth deep learning network layers perform data classification and recognition on the second data feature set.
The fifth deep learning network layer (a Dense layer) pre-classifies the generated feature set data. The seventh deep learning network layer (a Dense layer) classifies again the pre-classification results produced by the preceding layers, while the sixth and eighth deep learning network layers (LeakyReLU layers) improve the classification output of the fully connected layers.
The number of neurons of the fifth deep learning network layer is determined by the amount of data to be processed; the fifth layer is a fully connected layer and uses Dropout to improve its data classification performance. The sixth deep learning network layer cooperates with the fifth layer to perform classification decisions.
The number of neurons of the seventh deep learning network layer is determined by the amount of data to be processed; the seventh layer is a fully connected layer and uses Dropout to improve its data classification performance. The eighth deep learning network layer cooperates with the seventh layer to perform classification decisions.
The data classification result output layer 104 determines and outputs the final modulation scheme. It comprises a ninth deep learning network layer, which determines and outputs the final modulation scheme from the data classification results output by the seventh and eighth deep learning network layers; that is, it generates the final classification decision for each possible underwater modulation scheme from the classification feature sequence of the preceding layers.
In this embodiment, the first, second, third and fourth deep learning network layers are convolutional neural network layers, the fifth and seventh layers are fully connected neural network layers, and the sixth and eighth layers are LeakyReLU layers.
In this embodiment, the training underwater modulation data and the test underwater modulation data are used independently as inputs to the deep learning network. That is, recognition of the test underwater modulation data can also draw on training underwater modulation data that passed through a channel with the same characteristics. Here, the test underwater modulation data are the underwater communication data whose transmitted modulation scheme needs to be determined, while the training underwater modulation data are data from a channel with the same characteristics as that of the test data. Using training and test underwater modulation data together in this way simulates the actual underwater wireless communication process, thereby improving the accuracy of judging the underwater modulation scheme.
In this embodiment, automatic modulation recognition is a necessary part of modulation classification and demodulation at the receiver of a communication system. Unlike other common machine learning methods, no hand-crafted feature extraction is needed when deep learning is used for modulation classification; without degrading (and possibly improving) the classification performance, this simplifies the processing chain and improves recognition efficiency. Fig. 2 shows the simplified underwater wireless communication system model considered in this design, which focuses on three factors affecting the channel: multipath fading, Doppler effect and Gaussian noise. The channel is modeled as a convolution with additive noise. For ease of analysis, the received signal is modeled as
y(t) = x(t) ⊗ h(t, δ) + n(t),
where x(t) is the transmitted modulated signal and y(t) is the received signal after the impaired underwater wireless channel. Considering the complexity of the underwater wireless channel, the channel is represented by the parameterized model h(t, δ), in which α_i(t) denotes the attenuation coefficient of each of the N multipath components and δ(t) denotes the random time delay. The additive noise n(t) is assumed to have Gaussian statistics, and the operator ⊗ denotes convolution.
In this embodiment, the common modulation methods used in underwater wireless communication are M-ary PSK and QAM schemes such as MQAM and MPSK, both of which are commonly used over underwater wireless channels. The expression of the MQAM signal differs slightly from that of the MPSK signal. The MQAM signal is expressed as x_MQAM(t) = A_i·cos(w_c·t) + B_i·sin(w_c·t), i = 1, 2, …, M, where A_i·cos(w_c·t) and B_i·sin(w_c·t) are the two different carriers of the MQAM modulation, and A_i and B_i represent the two different sequences to be transmitted. MPSK is the most common modulation method: the phase shift is the modulated variable while the amplitude and frequency are kept constant. The MPSK signal can be expressed as x_MPSK(t) = A·cos(w_c·t + θ_m), where A is the amplitude, w_c is the angular frequency and the phase θ_m is taken from the evenly spaced set of phase angles θ_m = 2π(m − 1)/M, m = 1, 2, …, M, with M the number of symbols. The phase interval between two adjacent signals of the modulated signal is 2π/M; for example, the phase interval of QPSK is π/2. A sketch of generating such signals and passing them through the channel model is given below.
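The following Python/NumPy sketch illustrates, under stated assumptions, how training data of the kind described above might be simulated: MPSK symbols drawn from the evenly spaced phase set, passed through a multipath channel with additive Gaussian noise. The tap values, symbol counts and helper names are illustrative and are not specified in the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def mpsk_symbols(m_order, num_symbols):
    """Complex baseband MPSK symbols with phases 2*pi*(m-1)/M."""
    m = rng.integers(0, m_order, size=num_symbols)
    return np.exp(1j * 2 * np.pi * m / m_order)

def multipath_awgn_channel(x, taps, snr_db):
    """Convolve the signal with multipath taps and add complex Gaussian noise."""
    y = np.convolve(x, taps, mode="same")
    signal_power = np.mean(np.abs(y) ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(y.shape)
                                        + 1j * rng.standard_normal(y.shape))
    return y + noise

# example: QPSK (M = 4) through an assumed 3-tap fading channel at -4 dB SNR
tx = mpsk_symbols(4, 1024)
taps = np.array([1.0, 0.5, 0.2])      # assumed attenuation coefficients
rx = multipath_awgn_channel(tx, taps, snr_db=-4)
```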
The task of the deep learning network of this embodiment is to identify the modulation scheme of signals received from the above underwater wireless communication system. Received signals suffer from various impairments during transmission and are difficult to identify correctly at the receiver, whereas the deep learning network of the invention can make this decision well. Previous studies have shown that a neural network with at least one hidden layer is a universal function approximator; as the number of layers increases, it can likewise approximate any continuous function. As shown in Fig. 3, a simple neural network consists of an input layer, three hidden layers and an output layer. The intermediate hidden part may contain three or more layers, and the number of neurons in each layer can be set arbitrarily according to the object to be fitted. The number of layers of a neural network referred to here does not include the input layer; that is, it is the count from the first hidden layer to the output layer. In theory, with enough hidden layers and a sufficiently large data set, any desired function can be fitted. In practice, however, too many hidden layers may lead to overfitting: a network trained on the training set is highly effective only on the training data, performs poorly when applied to real data, and cannot fit data sets other than the training set.
In this design, ideally, as long as the network does not overfit, the deeper the neural network the better. In practice, however, when layers are added continuously a degradation problem appears: the accuracy first rises, then saturates and cannot be improved further, and adding still more layers causes the accuracy of the deep learning model to decrease. This is not overfitting, because the error increases not only on the test set but also on the training set itself. When a network model has a large number of layers, some of the information in the input data may be discarded as forward propagation proceeds. Techniques such as activation functions and random deactivation are therefore included in each layer to give the model good general expressive power.
In the specific implementation of this design, following the development trend of CNN structures, convolutional layers with small kernels are used to obtain more data information, and multiple fully connected layers are used to improve classification capability. The kernel size refers to the amount of data framed by the kernel over the input; after the data enter the input layer and undergo the convolution operation, the result becomes the value of the corresponding neuron of the first convolutional layer, a process equivalent to extracting information from the data.
In this design, common convolution operations include 1D, 2D and 3D operations. For example, Fig. 4 shows a CNN convolution operation applying 2D kernels to an image sequence whose time dimension is 2, i.e. the convolution is performed over two consecutive data frames. The frames are stacked to form a plane and a 2D convolution kernel is then applied over that plane. In this configuration, each feature map of the convolutional layer is connected to multiple adjacent consecutive frames of the previous layer, thereby capturing information. On the left of Fig. 4, the value at a position in the grid is obtained by convolving the current receptive field with the same position in the two consecutive frames of the layer above. A 2D kernel can only extract one type of feature from a plane, because the kernel weights are shared over the entire plane (in the figure, connecting lines of the same shade indicate the same weight). Multiple kernels are therefore usually used to extract multiple features. 1D convolution is commonly used in natural language processing, while 2D and 3D convolutions are commonly used for image data: 2D convolution processes a single image, 3D convolution processes multiple images, and compared with 2D convolution, 3D convolution also takes the time dimension into account. A small illustration follows.
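As an illustration, the following Python/NumPy sketch computes a valid 2D cross-correlation (the operation most deep learning frameworks call "convolution") with a single shared kernel; the sizes and values are arbitrary and do not come from the patent.

```python
import numpy as np

def conv2d_valid(x, k):
    """Slide one shared 2D kernel k over input x (no padding, stride 1)."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # the same kernel weights are reused at every position (weight sharing)
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.array([[1.0, 0.0], [0.0, -1.0]])
print(conv2d_valid(x, k))   # 3 x 3 feature map
```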
In this design, the activation function determines whether a neuron is activated, i.e. whether the information received by the neuron is useful and should be kept or discarded. The nonlinear transformation of the activation function can fit various classification boundaries, enabling the neural network to handle extremely complex tasks. As the number of layers increases, the ability to fit the training data becomes stronger, which may cause overfitting. To avoid this, a comparatively shallow network architecture and random deactivation are chosen in the CNN design here to prevent overfitting. Random deactivation means preventing overfitting by randomly dropping part of the data; here it is the Dropout technique. After Dropout, some neurons are deactivated and become independent of the other neurons, so overfitting can be effectively reduced. The deactivation is random, meaning that a neuron dropped in one pass can be reactivated in the next.
In the specific embodiment, a typical CNN structure includes pooling layers, which this application also adds to the designed DL structure. The pooling layer, also called the down-sampling layer, generally follows a convolutional layer; its function is to remove redundant information and reduce computational complexity. In the max pooling operation (MaxPooling) of Fig. 5, the information around the maximum is regarded as useless and is deleted; the filter size is set to 2 × 2. The similar average pooling operation (AveragePooling) of Fig. 6 averages the data within the filter to improve the classification of the overall data. During modulation classification, not all information comes from the transmitted symbols to be identified, and, as in image processing, the peripheral information can be discarded. A small comparison of the two operations follows.
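The following Python/NumPy sketch contrasts 2 × 2 max pooling with 2 × 2 average pooling on a toy feature map; the input values are made up for illustration.

```python
import numpy as np

def pool2x2(x, mode="max"):
    """Non-overlapping 2x2 pooling; keeps the max or the mean of each window."""
    h, w = x.shape
    blocks = x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))

fm = np.array([[1., 3., 2., 0.],
               [5., 6., 1., 2.],
               [0., 2., 4., 4.],
               [1., 1., 3., 7.]])
print(pool2x2(fm, "max"))    # [[6. 2.] [2. 7.]]
print(pool2x2(fm, "mean"))   # [[3.75 1.25] [1.   4.5 ]]
```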
In the specific embodiment of the invention, the deep learning architecture proposed for modulation recognition is as shown in the figure of the Abstract. It consists of an input layer, multiple hidden convolutional layers and multiple fully connected layers. Feature extraction is performed by the four intermediate hidden layers: the first two feature extraction layers consist of 32 neurons with 1 × 1 filters, and the latter two feature extraction layers consist of 64 neurons with 1 × 1 filters. Intuitively, more neurons fit better, and a 64-neuron structure may achieve higher accuracy than a 32-neuron structure, but to prevent overfitting no more neurons are chosen. The convolutional layers of the designed architecture output to the next layer through their kernel size to extract the information of the preceding layer.
In this implementation, the activation of the convolutional layers is defined as
C_n = ρ(Σ_j W_n^j ⊗ Z_n^j + b_n^j),
where C_n and Z_n are the output and input feature maps of the n-th layer, W_n^j and b_n^j are respectively the weights and bias of the j-th convolution kernel in the n-th layer (n = 1, 2, …, 8), ρ(·) is the ReLU activation function, and the operator ⊗ denotes convolution.
In this implementation, the fully connected layers map the final output into a linearly separable space, and the modulation classification is performed according to the features learned by the preceding convolutional part. The fully connected part comprises two Dense layers with 128 and 256 neurons respectively, and the output Dense layer contains 5 neurons, corresponding to the 5 possible modulation schemes under test: BPSK, QPSK, 8PSK, 16QAM and 64QAM. All hidden layers use ReLU as the activation function, and the final output layer uses the Softmax function. The ReLU function is defined as g(x) = max(0, x), where x is the input and g(x) is the activation result after the max operation. The essence of the Softmax function is to map an arbitrary K-dimensional real vector to another K-dimensional real vector whose elements all lie in (0, 1). Softmax is defined as
ρ(z)_m = exp(z_m) / Σ_{k=1}^{K} exp(z_k),
where z_m is the input of the m-th neuron, ρ(z)_m is the output of the m-th neuron, the denominator sums over the inputs of all neurons in the layer, and K is the number of neurons.
In this embodiment, to improve performance a LeakyReLU layer is added after each fully connected layer; LeakyReLU is a special version of ReLU. When a neuron is not activated, LeakyReLU still has a non-zero output, producing a small gradient and avoiding the dead-neuron problem that ReLU may cause. The LeakyReLU function is defined as f(x) = x for x > 0 and f(x) = α·x for x ≤ 0, where α is a floating-point number greater than 0 that represents the slope of the activation function for negative inputs.
In practical embodiments, deep learning networks easily overfit during training, which leads to poor performance in actual use. Overfitting can be avoided by using Dropout, i.e. temporarily discarding neural network units from the network with a certain probability during training. After cross-validation, the effect is best when the Dropout rate of the hidden nodes equals 0.5, because at 0.5 Dropout generates the largest number of random network structures. The Dropout factor here is therefore set to 0.5. A sketch of the overall model described above is given below.
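The following Keras (Python) sketch assembles a network with the layer types and sizes described above: four 1 × 1 convolutional layers (32, 32, 64, 64 neurons) with pooling after the second and fourth, two fully connected layers (128 and 256 neurons) each followed by Dropout 0.5 and LeakyReLU, and a 5-way Softmax output. It is one plausible reading of the description, not the patent's reference implementation; the input shape, the pooling size along the I/Q axis and the LeakyReLU slope are assumptions.

```python
from tensorflow.keras import layers, models

def build_model(input_shape=(2, 128, 1), num_classes=5):
    """Sketch of the nine-layer architecture: 4 conv layers, 2 dense blocks
    with Dropout and LeakyReLU, and a Softmax output layer."""
    model = models.Sequential([
        layers.Conv2D(32, (1, 1), activation="relu",
                      input_shape=input_shape),        # 1st layer
        layers.Conv2D(32, (1, 1), activation="relu"),  # 2nd layer
        layers.MaxPooling2D(pool_size=(1, 2)),         # pooling added in 2nd layer
        layers.Conv2D(64, (1, 1), activation="relu"),  # 3rd layer
        layers.Conv2D(64, (1, 1), activation="relu"),  # 4th layer
        layers.MaxPooling2D(pool_size=(1, 2)),         # pooling added in 4th layer
        layers.Flatten(),
        layers.Dense(128),                             # 5th layer (fully connected)
        layers.Dropout(0.5),                           # Dropout rate 0.5
        layers.LeakyReLU(0.1),                         # 6th layer
        layers.Dense(256),                             # 7th layer (fully connected)
        layers.Dropout(0.5),
        layers.LeakyReLU(0.1),                         # 8th layer
        layers.Dense(num_classes, activation="softmax"),  # 9th layer, Softmax output
    ])
    return model
```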
In the practical implementation, during the training stage the categorical cross-entropy is used as the loss function of the output layer. The cross-entropy describes the distance between the actual output and the desired output: the smaller the cross-entropy, the closer the two probability distributions. Assuming that the probability distribution p(θ) is the expected output, the probability distribution q(θ) is the actual output, and θ is a sample variable in the sample space, the cross-entropy H(p, q) can be expressed as H(p, q) = −Σ_θ p(θ)·log q(θ), which reflects the similarity between the distributions p(θ) and q(θ).
In this embodiment, the training data set is generally very large, and because of the limits of computer memory and GPU memory it is impossible to load all data into the model at once for training. If the data set is large enough, the gradient computed with half (or even less) of the data set is almost the same as the gradient obtained by training with all the data. The gradient is essentially a slope; machine learning uses it to find the optimum (the minimum of the cost curve). The gradient makes it possible to find suitable weights and biases so that the network output approximates the desired output for all training inputs. To measure how far the current outputs are from the target outputs, the cost function is defined as
D(w, α) = (1 / 2λ) · Σ_k ||ε(k) − C(k)||²,
where w is the set of all weights, α is the set of all biases, λ is the number of training examples, ε(k) is the desired output vector for training input k, C(k) is the corresponding output of the network, the sum runs over all training inputs k, and the symbol ||…|| denotes the norm of a vector. D is also called the quadratic cost function, sometimes referred to as squared error or mean squared error (MSE). If the learning algorithm can find weights and biases such that D(w, α) ≈ 0, it is a good learning algorithm. The goal of training is therefore to minimize the cost function D(w, α) by adjusting the weights and biases, which is achieved with an algorithm known as gradient descent. A minimal illustration is given below.
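As a minimal illustration of gradient descent on a quadratic cost, the following Python/NumPy sketch fits a single linear unit by repeatedly stepping the weights and bias against the gradient of the MSE; the data, learning rate and iteration count are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))          # 200 training inputs, 3 features
true_w, true_b = np.array([0.5, -1.0, 2.0]), 0.3
y = X @ true_w + true_b                    # desired outputs

w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(500):
    pred = X @ w + b
    err = pred - y
    # gradients of D(w, b) = (1 / 2*lambda) * sum ||err||^2
    grad_w = X.T @ err / len(X)
    grad_b = err.mean()
    w -= lr * grad_w                       # step against the gradient
    b -= lr * grad_b

print(w, b)                                # approaches true_w and true_b
```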
In a specific embodiment, when choosing how the model updates its weights and bias parameters, an optimization algorithm can make the model produce better and faster results; examples include gradient descent, stochastic gradient descent and adaptive moment estimation (Adam). In practice the Adam method works well and computes an adaptive learning rate for each parameter. Compared with other adaptive learning rate algorithms, Adam converges faster and learns better, and it corrects shortcomings of other optimization techniques such as vanishing learning rates and slow convergence. The better performance of Adam comes from the fact that it stores not only an exponentially decaying average of past squared gradients but also an exponentially decaying average of past gradients.
In this implementation, generalization refers to the ability of the deep learning model to learn the behavior of unknown samples when it is used to judge them. Since hardware limitations do not allow the entire data set to be input at once, a suitable batch size has to be selected so that training accuracy approaches that obtained with the complete data set. The data set size is the total amount of data to be fed into the model, and the batch size is the portion of the data set used in one step. If conditions permit, the entire data set could be fed into the model for training, but in general limited hardware does not satisfy this condition, so large data sets are divided into batches for training. The batch size determines the number of samples in one training step and affects the optimization and speed of the model. A reasonable batch size is chosen according to the memory of the computer and the required training accuracy; the correct choice of batch size finds the best balance between training accuracy and memory efficiency.
In the training experiments of this design, different batch sizes sent to the deep learning network lead to different modulation recognition rates, because the final convergence accuracy falls into different local extrema; different batch sizes are therefore compared to obtain the best final convergence accuracy. For example, batch size = 1 corresponds to a single sample, also called a vector, and the batch size is the number of such vectors, so batch size = 64 means 64 vectors. The training batch size selected here is 64, with which the highest accuracy is reached; with other values the classification accuracy is lower than with batch size = 64. This shows that different batch sizes affect the training performance of the model. For a normal data set, if the batch is too small the training becomes hard to converge; among the tested values, batch size = 32 performs poorly.
During training, only the model with the best performance on the validation set is saved, at the minimum of the train loss and val_error curves shown in Fig. 7. The vertical axis (loss rate) shows the result of the loss function H(p, q) = −Σ_θ p(θ)·log q(θ). The train loss is the loss function previously defined on the final output layer of the network structure; in other words, it is the loss between the predicted distribution produced by the network on the training set and the true data distribution, computed with the categorical cross-entropy defined above. Val_error is obtained by using the trained model to predict the validation data after each epoch, verifying the prediction performance of the model on the validation set. Val_error is computed with the same loss function as the training loss but on a different data set, the validation set. The minimum val_error is selected and the corresponding weight parameters are saved; these parameters are then used to predict the test data. The model starts to converge at about 30 epochs. Although val_error fluctuates, it is almost consistent with the training loss, with a loss of about 0.7. Convergence within relatively few epochs illustrates the effectiveness of the deep learning model of the invention for this modulation classification task. A training sketch follows.
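The following Keras (Python) sketch shows one way to train the model of the earlier sketch with the choices described here: Adam optimizer, categorical cross-entropy loss, batch size 64, and saving only the weights with the lowest validation loss. The file name, epoch count, validation split and placeholder data are assumptions for illustration.

```python
import numpy as np
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.utils import to_categorical

model = build_model()                              # from the earlier model sketch
model.compile(optimizer="adam",                    # Adam, as described above
              loss="categorical_crossentropy",     # classification cross-entropy
              metrics=["accuracy"])

# placeholder arrays with the assumed shapes; in practice these come from the
# channel simulator and preprocess() sketches above
x_train = np.random.randn(1000, 2, 128, 1).astype("float32")
y_train = np.random.randint(0, 5, size=1000)       # 5 classes: BPSK ... 64QAM

checkpoint = ModelCheckpoint("best.weights.h5",
                             monitor="val_loss",   # keep the lowest val_error
                             save_best_only=True,
                             save_weights_only=True)

history = model.fit(x_train, to_categorical(y_train, 5),
                    batch_size=64,                 # batch size found to work best
                    epochs=50,
                    validation_split=0.2,
                    callbacks=[checkpoint])
```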
In the testing process of this design, increasing the number of neurons in the first fully connected (fc) layer increases the number of network model parameters, so the first fc layer should be as small as possible without affecting network performance. When fc layers with many neurons and fc layers with few neurons behave similarly, using fewer neurons reduces the number of parameters, reduces the computational complexity and reduces memory usage. With 128 or 256 neurons in the first fc layer the performance is almost identical, but 128 neurons reduce the parameters by nearly a factor of two compared with 256. To better balance the computational complexity and accuracy of the model, 128 is selected as the number of neurons of the first fc layer.
Table 1: Parameter count of the first fully connected layer for different numbers of neurons
In the testing of this embodiment, pooling mainly includes average pooling and max pooling. With average pooling the filter size had to be set to 1 × 1, otherwise the network model could not converge; with max pooling the filter size is set to 2 × 2, and max pooling achieves better performance than average pooling.
In the specific test, Fig. 8 summarizes all modulation classification results from -20 dB to 20 dB and shows the final classification performance (where QAM16 denotes 16QAM and QAM64 denotes 64QAM). The total amount of data for the five trained modulation schemes is 50,000, with 10,000 samples per modulation type, averaged over 10 training runs. Taking 64QAM as an example, the vertical axis is the actual modulation and the horizontal axis is the predicted modulation; the intersection of the horizontal and vertical 64QAM entries is an accurate prediction, with darker colors indicating a higher prediction success rate, lighter colors a lower probability of misjudgement, and light pink almost no misjudgement. With different amounts of data in Fig. 8 the DL model shows basically stable behavior and can recognize the five modulation methods, namely MPSK (BPSK, QPSK and 8PSK) and MQAM (16QAM and 64QAM). In terms of overall performance, the difference between the two QAM types is obvious when the amount of data differs. The classification differences caused by different amounts of data are analyzed in detail below for specific SNRs.
In this implementation, Fig. 9 shows that at SNR = -10 dB the recognition rates of BPSK and 16QAM are relatively high and they can be identified correctly, while the other modulation methods are difficult to distinguish. When the SNR rises to -4 dB in Fig. 10, 8PSK, BPSK and 16QAM can be identified almost accurately; 64QAM and QPSK are still not easy to distinguish, but their recognition rates improve by more than 10% compared with SNR = -10 dB.
In the implementation, the modulation recognition results of different DL methods are compared, including conventional DL methods such as ANN (artificial neural network), MLP (multilayer perceptron), 4-layer DNN and 8-layer DNN (deep neural networks), CNN (convolutional neural network, where CNN3 denotes a 3-layer CNN network), and the DL method used in this application. When the SNR is between -20 dB and -15 dB, the recognition abilities of the various neural networks are comparable, with CNN and the DL method of the invention having a slight advantage. When the SNR rises to between -15 dB and -5 dB, CNN and the DL method of the invention have a clear advantage over the other forms of neural network, and CNN has a slight edge over the DL method of the invention in this SNR range. When the SNR exceeds -5 dB, the DL method used by the invention has a higher recognition rate than CNN, showing the advantage of the deep network architecture.
In this embodiment, a deep neural network system can be formed by combining deep learning network feature extraction function parts M1, M2, M3, …, Mk (k parts in total, k ≥ 2), each denoted Mi (1 ≤ i ≤ k), with a fully connected judgment layer. That is, the deep neural network system may include multiple deep learning network feature extraction function parts (the aforementioned M1, M2, M3, …, Mk) and a fully connected judgment layer.
In this embodiment, the input of each deep learning network feature extraction function part Mi (1 ≤ i ≤ k) can be different reference modulation data and target modulation data from the same underwater wireless communication channel.
In addition, in an exemplary embodiment, each deep learning network feature extraction function part Mi (1 ≤ i ≤ k) may adopt the deep learning network feature extraction function part described above; that is, the parts Mi (1 ≤ i ≤ k) may be different deep neural networks operating on modulation schemes from the same underwater wireless channel.
The fully connected judgment layer processes the output results of the above multiple deep learning network feature extraction function parts Mi (1 ≤ i ≤ k) and outputs the final judgment of the modulation scheme. In other words, the outputs of the multiple parts Mi (1 ≤ i ≤ k) are connected to the fully connected judgment layer, which outputs the final judgment result by jointly processing the outputs of these deep neural networks.
In an exemplary embodiment, the fully connected judgment layer can output a judgment of which modulation scheme is present. In other examples, the fully connected judgment layer can also output whether the modulation scheme needs to be judged further, or whether a certain modulation scheme should preferably be classified as some other specific modulation scheme.
In an exemplary embodiment, the fully connected judgment layer can make the final decision by outputting probabilities. In some other examples, the fully connected judgment layer can also use various nonlinear or linear classifiers, such as random forest, decision tree or support vector machine (SVM). In some examples, the fully connected judgment layer can even use simple numerical methods such as maximum-value or average-value decisions. A sketch of such a combination step is given below.
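The following Python/NumPy sketch shows, under stated assumptions, two simple ways of fusing the class-probability outputs of k feature extraction parts: averaging the probabilities, and a maximum-value decision. The number of parts and the probability values are invented for the example.

```python
import numpy as np

def fuse_by_average(prob_list):
    """Average the softmax outputs of the k sub-networks and pick the class."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return int(np.argmax(avg)), avg

def fuse_by_max(prob_list):
    """Maximum-value decision: take the single most confident prediction."""
    stacked = np.stack(prob_list, axis=0)            # shape (k, num_classes)
    part, cls = np.unravel_index(np.argmax(stacked), stacked.shape)
    return int(cls), stacked[part]

# outputs of k = 3 hypothetical parts for classes [BPSK, QPSK, 8PSK, 16QAM, 64QAM]
p1 = np.array([0.10, 0.60, 0.10, 0.10, 0.10])
p2 = np.array([0.05, 0.70, 0.10, 0.10, 0.05])
p3 = np.array([0.20, 0.40, 0.20, 0.10, 0.10])
print(fuse_by_average([p1, p2, p3]))   # -> class 1 (QPSK)
print(fuse_by_max([p1, p2, p3]))       # -> class 1 (QPSK)
```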
Referring to Fig. 11, an embodiment of the present application also provides a system for underwater communication modulation recognition. The system 20 for underwater communication modulation recognition provided by this embodiment includes the deep learning network 10 provided in the above embodiments, the deep learning network comprising: a data preprocessing layer, a first deep learning network layer, a second deep learning network layer, a third deep learning network layer, a fourth deep learning network layer, a fifth deep learning network layer, a sixth deep learning network layer, a seventh deep learning network layer, an eighth deep learning network layer and a ninth deep learning network layer.
The data preprocessing layer preprocesses the data of different modulation schemes transmitted through the underwater channel; the first and second deep learning network layers generate a two-layer feature extraction set from the preprocessed data produced by the data preprocessing layer; the third and fourth deep learning network layers generate a two-layer first data feature set from the feature extraction set; the fifth and sixth deep learning network layers generate a second data feature set from the two-layer first data feature set, the second data feature set characterizing the data more accurately than the first data feature set; the seventh and eighth deep learning network layers perform data classification and recognition on the second data feature set; and the ninth deep learning network layer determines and outputs the final modulation scheme from the classification results output by the seventh and eighth deep learning network layers.
The present application also provides a terminal. As shown in Figure 12, the terminal 30 includes: a processor 301, a memory 302, and a communication interface 303.
In Figure 12, the processor 301, the memory 302, and the communication interface 303 may be interconnected by a bus; the bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in Figure 12, but this does not mean that there is only one bus or only one type of bus.
The processor 301 generally controls the overall functions of the terminal 30, such as starting the terminal and, after start-up, identifying the underwater communication modulation mode. In addition, the processor 301 may be a general-purpose processor, for example a central processing unit (CPU), a network processor (NP), or a combination of a CPU and an NP. The processor may also be a microcontroller unit (MCU). The processor may further include a hardware chip; the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), or the like.
The memory 302 is configured to store computer-executable instructions to support the operation of the terminal 30. The memory 302 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
After the terminal 30 is started, the processor 301 and the memory 302 are powered on, and the processor 301 reads and executes the computer-executable instructions stored in the memory 302 to complete all or part of the steps in the above-described embodiments of the deep-neural-network-based underwater communication modulation mode identification method.
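For illustration only, a minimal sketch of the inference step such a terminal might execute after start-up is given below; the model file path, the label list, and the expectation that the input frame has already been preprocessed are assumptions introduced here, not details from this application.

```python
import numpy as np
import tensorflow as tf

MODULATION_LABELS = ["BPSK", "QPSK", "8PSK", "16QAM", "64QAM", "OFDM"]  # assumed label set

def identify_modulation(model_path, frame):
    """Load a trained network and decide the modulation mode of one
    preprocessed frame (frame shape assumed to match the training input)."""
    model = tf.keras.models.load_model(model_path)
    probs = model.predict(frame[np.newaxis, ...], verbose=0)[0]
    return MODULATION_LABELS[int(np.argmax(probs))], probs
```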
The communication interface 303 is used by the terminal 30 to transmit data, for example to implement data communication between underwater communication devices. The communication interface 303 includes a wired communication interface and may also include a wireless communication interface. The wired communication interface may include a USB interface or a Micro USB interface, and may also include an Ethernet interface. The wireless communication interface may be a WLAN interface, a cellular network communication interface, a combination thereof, or the like.
In an exemplary embodiment, the terminal 30 provided by the embodiments of the present application further includes a power supply component that provides power to the various components of the terminal 30. The power supply component may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the terminal 30.
The terminal 30 may also include a communication component configured to facilitate wired or wireless communication between the terminal 30 and other devices. The terminal 30 may access a wireless network based on a communication standard, such as WiFi, 2G, 3G, or a combination thereof. The communication component may receive broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. The communication component may further include a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal 30 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, processors, or other electronic components.
Identical or similar parts of the embodiments in this specification may be referred to one another. In particular, for the system and terminal embodiments, since the deep learning network therein is substantially similar to that of the deep learning network embodiments, their description is relatively brief; for relevant details, reference may be made to the description of the deep learning network embodiments.
It should be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or further includes elements inherent to such a process, method, article, or device. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
Of course, the above description is not limited to the examples given above; technical features of the present application that are not described may be implemented by or using the prior art and are not described in detail here. The above embodiments and the accompanying drawings are merely intended to illustrate the technical solutions of the present application, not to limit it (for example, by equivalent substitutions); the present application has been described in detail only in conjunction with and with reference to the preferred embodiments. Those skilled in the art should understand that variations, modifications, additions, or substitutions made by those skilled in the art within the essential scope of the present application, without departing from the purpose of the present application, shall also fall within the protection scope of the claims of the present application.

Claims (9)

1. A deep learning network for underwater communication modulation identification, characterized by comprising:
a data preprocessing layer, configured to preprocess data of different modulation modes transmitted via underwater communication;
a data feature extraction layer, wherein the data feature extraction layer includes a first deep learning network layer, a second deep learning network layer, a third deep learning network layer, and a fourth deep learning network layer; the first deep learning network layer and the second deep learning network layer are configured to generate two layers of feature extraction sets from the preprocessed data output by the data preprocessing layer, and the third deep learning network layer and the fourth deep learning network layer are configured to generate two layers of first data feature sets from the feature extraction sets;
a data feature classification layer, wherein the data feature classification layer includes a fifth deep learning network layer, a sixth deep learning network layer, a seventh deep learning network layer, and an eighth deep learning network layer; the fifth deep learning network layer and the sixth deep learning network layer are configured to generate a second data feature set from the two layers of first data feature sets, the data feature accuracy of the second data feature set being higher than that of the first data feature sets; and the seventh deep learning network layer and the eighth deep learning network layer are configured to perform data classification and identification on the second data feature set; and
a data classification result output layer, configured to determine and output the final modulation mode.
2. The deep learning network for underwater communication modulation identification according to claim 1, characterized in that the different-modulation-mode data received by the data preprocessing layer are modulation mode data simulated for training the deep learning network, and the underwater communication modulation mode data used for training the deep learning network have the same features as the underwater communication modulation mode data used for testing.
3. The deep learning network for underwater communication modulation identification according to claim 2, characterized in that preprocessing the data of different modulation modes transmitted via underwater communication comprises:
a format conversion function part that correspondingly converts the complex-format data of the different modulation modes into a two-channel real-valued data format;
a data format adjustment function part that adjusts the data format into a format suitable for input to the data feature extraction layer;
and normalizing the underwater communication modulation mode data used for training and the underwater communication modulation mode data used for testing.
4. The deep learning network for underwater communication modulation identification according to claim 1, characterized in that the first deep learning network layer and the second deep learning network layer have the same number of neurons, and their convolution kernels perform convolution over the numbers of data points in the rows and columns of the preprocessed data; wherein, compared with the first deep learning network layer, the second deep learning network layer additionally includes a pooling function;
the third deep learning network layer and the fourth deep learning network layer have the same number of neurons, and their convolution kernels perform convolution over the numbers of data points in the rows and columns of the feature extraction sets; wherein, compared with the third deep learning network layer, the fourth deep learning network layer additionally includes a pooling function.
5. The deep learning network for underwater communication modulation identification according to claim 4, characterized in that the number of neurons of the fifth deep learning network layer is determined by the amount of data to be processed, the fifth deep learning network layer is a fully connected layer, and Dropout is applied in the fifth deep learning network layer to improve the data classification performance of the fifth deep learning network layer;
and the sixth deep learning network layer cooperates with the fifth deep learning network layer to perform classification decision processing.
6. The deep learning network for underwater communication modulation identification according to claim 5, characterized in that the number of neurons of the seventh deep learning network layer is determined by the amount of data to be processed, the seventh deep learning network layer is a fully connected layer, and Dropout is applied in the seventh deep learning network layer to improve the data classification performance of the seventh deep learning network layer;
and the eighth deep learning network layer cooperates with the seventh deep learning network layer to perform classification decision processing.
7. The deep learning network for underwater communication modulation identification according to claim 6, characterized in that the data classification result output layer includes a ninth deep learning network layer, and the ninth deep learning network layer determines and outputs the final modulation mode according to the data classification and identification results output by the seventh deep learning network layer and the eighth deep learning network layer.
8. The deep learning network for underwater communication modulation identification according to claim 7, characterized in that the first deep learning network layer, the second deep learning network layer, the third deep learning network layer, and the fourth deep learning network layer are convolutional neural networks; the fifth and seventh deep learning network layers are fully connected neural networks; and the sixth and eighth deep learning network layers are LeakyReLU layers.
9. A system for underwater communication modulation identification, characterized by comprising the deep learning network according to any one of claims 1 to 8, wherein the deep learning network includes: a data preprocessing layer, a first deep learning network layer, a second deep learning network layer, a third deep learning network layer, a fourth deep learning network layer, a fifth deep learning network layer, a sixth deep learning network layer, a seventh deep learning network layer, an eighth deep learning network layer, and a ninth deep learning network layer;
wherein the data preprocessing layer is configured to preprocess data of different modulation modes transmitted via underwater communication; the first deep learning network layer and the second deep learning network layer are configured to generate two layers of feature extraction sets from the preprocessed data output by the data preprocessing layer; the third deep learning network layer and the fourth deep learning network layer are configured to generate two layers of first data feature sets from the feature extraction sets; the fifth deep learning network layer and the sixth deep learning network layer are configured to generate a second data feature set from the two layers of first data feature sets, the data features of the second data feature set being more accurate than those of the first data feature sets; the seventh deep learning network layer and the eighth deep learning network layer are configured to perform data classification and identification on the second data feature set; and the ninth deep learning network layer determines and outputs the final modulation mode according to the data classification and identification results output by the seventh deep learning network layer and the eighth deep learning network layer.
CN201811364726.8A 2018-11-16 2018-11-16 A kind of deep learning network and system for subsurface communication Modulation Identification Pending CN109523023A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811364726.8A CN109523023A (en) 2018-11-16 2018-11-16 A kind of deep learning network and system for subsurface communication Modulation Identification

Publications (1)

Publication Number Publication Date
CN109523023A true CN109523023A (en) 2019-03-26

Family

ID=65778015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811364726.8A Pending CN109523023A (en) 2018-11-16 2018-11-16 A kind of deep learning network and system for subsurface communication Modulation Identification

Country Status (1)

Country Link
CN (1) CN109523023A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10003483B1 (en) * 2017-05-03 2018-06-19 The United States Of America, As Represented By The Secretary Of The Navy Biologically inspired methods and systems for automatically determining the modulation types of radio signals using stacked de-noising autoencoders
CN108234370A (en) * 2017-12-22 2018-06-29 西安电子科技大学 Modulation mode of communication signal recognition methods based on convolutional neural networks
CN108038471A (en) * 2017-12-27 2018-05-15 哈尔滨工程大学 A kind of underwater sound communication signal type Identification method based on depth learning technology
CN108616470A (en) * 2018-03-26 2018-10-02 天津大学 Modulation Signals Recognition method based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAN WANG ET AL.: "Modulation Classification of Underwater Communication with Deep Learning Network", COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110048978A (en) * 2019-04-09 2019-07-23 西安电子科技大学 A kind of signal modulate method
CN112329524A (en) * 2020-09-25 2021-02-05 泰山学院 Signal classification and identification method, system and equipment based on deep time sequence neural network

Similar Documents

Publication Publication Date Title
CN109299697A (en) Deep neural network system and method based on underwater sound communication Modulation Mode Recognition
CN112418014B (en) Modulated signal identification method based on wavelet transformation and convolution long-term and short-term memory neural network
CN108234370B (en) Communication signal modulation mode identification method based on convolutional neural network
CN106059972B (en) A kind of Modulation Identification method under MIMO correlated channels based on machine learning algorithm
Ozpoyraz et al. Deep learning-aided 6G wireless networks: A comprehensive survey of revolutionary PHY architectures
CN107979554B (en) Radio signal Modulation Identification method based on multiple dimensioned convolutional neural networks
CN111314257B (en) Modulation mode identification method based on complex value neural network
CN109379120A (en) Chain circuit self-adaptive method, electronic device and computer readable storage medium
CN109361635A (en) Subsurface communication Modulation Mode Recognition method and system based on depth residual error network
CN112235023B (en) MIMO-SCFDE self-adaptive transmission method based on model-driven deep learning
Wang et al. Modulation classification of underwater communication with deep learning network
Li-Da et al. Modulation classification of underwater acoustic communication signals based on deep learning
CN113378644B (en) Method for defending signal modulation type recognition attack based on generation type countermeasure network
CN109523023A (en) A kind of deep learning network and system for subsurface communication Modulation Identification
Lin et al. DL-CFAR: A novel CFAR target detection method based on deep learning
CN113298031B (en) Signal modulation identification method and system considering signal physical and time sequence characteristics
CN109462564A (en) Subsurface communication Modulation Mode Recognition method and system based on deep neural network
CN110190909A (en) A kind of signal equalizing method and device for optic communication
CN110460359A (en) A kind of mimo system signal acceptance method neural network based
KR102064301B1 (en) Signal detection apparatus using ensemble machine learning based on MIMO system and method thereof
CN101414378A (en) Hidden blind detection method for image information with selective characteristic dimensionality
CN109547374A (en) A kind of depth residual error network and system for subsurface communication Modulation Identification
Ali et al. Modulation format identification using supervised learning and high-dimensional features
Le-Tran et al. Deep learning-based collaborative constellation design for visible light communication
El-Khoribi et al. Automatic digital modulation recognition using artificial neural network in cognitive radio

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Wang Yan

Inventor after: Xiao Jing

Inventor after: Zhang Lian

Inventor after: Yang Hongfang

Inventor after: Zhang Zhe

Inventor before: Wang Yan

Inventor before: Zhang Zhe

CB03 Change of inventor or designer information
RJ01 Rejection of invention patent application after publication

Application publication date: 20190326

RJ01 Rejection of invention patent application after publication