CN115935172A - Signal identification method based on integrated deep learning - Google Patents
Signal identification method based on integrated deep learning
- Publication number
- CN115935172A CN115935172A CN202310016915.0A CN202310016915A CN115935172A CN 115935172 A CN115935172 A CN 115935172A CN 202310016915 A CN202310016915 A CN 202310016915A CN 115935172 A CN115935172 A CN 115935172A
- Authority
- CN
- China
- Prior art keywords
- signal
- signal identification
- characteristic image
- data
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a signal identification method based on integrated deep learning, which comprises the following steps: determining a suitable characteristic image set according to a specific signal identification task, and converting acquired signal data into signal characteristic images; performing category labeling on the signal characteristic images to obtain a training set containing labeling information, and expanding the training set by using a data enhancement technique; constructing a signal identification model based on a deep neural network; inputting the training set into the deep neural network to train the signal identification model; improving the performance of the signal identification model based on an ensemble learning algorithm; and identifying signals by using the trained model. By adding an ensemble learning algorithm on top of deep learning, the complementary advantages of deep learning and ensemble learning further improve the signal identification capability of the model, and in the ensemble learning process a user can select different types of characteristic images and base models as required, making the method targeted and practical.
Description
Technical Field
The invention belongs to the technical field of radio signal identification, and particularly relates to a signal identification method based on integrated deep learning.
Background
With the popularization of the Internet of Things and the rise of 5G technology, radio waves have become an important carrier for interconnecting all things and play an irreplaceable role in both military and civil fields. Radio signal identification has become an important guarantee for the safe use of radio, with significant practical value and urgent demand.
In the existing signal modulation identification technology, various radio receivers collect over-the-air signals, which after processing are displayed visually as oscillograms, spectrograms, waterfall graphs, constellation maps and the like; a professional then determines the modulation mode from the characteristics of the signals in the images. Such manual identification demands extensive knowledge and experience from the operator, and its efficiency and accuracy fall sharply as monitoring time lengthens or the number of radio signals grows. Machine learning algorithms have now been applied to radio modulation identification and greatly improve efficiency and accuracy compared with manual identification. However, existing machine learning algorithms depend on the selection of key characteristic parameters, are not fully automated, and still require a strong professional background and prior knowledge.
Deep learning has unique advantages in pattern recognition tasks and has achieved great success in recent years in fields such as image recognition and natural language processing. Radio signal identification is in essence a special form of pattern recognition; introducing deep learning into the field brings the ability to integrate massive, multi-source, dynamic big data, and ensemble learning can improve this ability further. This coincides with the trend toward unmanned, automated, intelligent and precise radio signal identification and brings new development opportunities to the field.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a signal identification method based on integrated deep learning, which uses the strong feature extraction capability of deep learning to automatically extract the pattern features of signal images and realize signal identification. It aims to solve the problems of slow identification, low accuracy and heavy dependence on expert experience in the prior art, and to achieve efficient and accurate intelligent signal identification without prior information.
The signal identification method based on the integrated deep learning provided by the invention comprises the following steps:
S10, determining a suitable characteristic image set according to a specific signal identification task, and converting acquired signal data into signal characteristic images;
S20, performing category labeling on the signal characteristic images to obtain a training set containing labeling information, and expanding the training set by using a data enhancement technique;
S30, constructing a signal identification model based on a deep neural network;
S40, inputting the training set into the deep neural network to train the signal identification model;
S50, improving the performance of the signal identification model based on an ensemble learning algorithm;
S60, identifying signals by using the trained model.
Further, step S10 includes the following sub-steps:
S11, determining a suitable feature image set according to the specific signal identification task; the signal data comprise BPSK, QPSK, 8PSK, 16QAM, 64QAM, BFSK, CPFSK, PAM4, WB-FM, AM-SSB and AM-DSB; each modulation mode comprises 1000 signal samples, each signal sample comprises two signal paths (I and Q), each path contains 128 sampling points, and each point corresponds to the signal amplitude at a certain moment; the characteristic image set comprises 5 types of images: the oscillogram, the spectrogram, the constellation diagram, the constellation trajectory diagram and the waterfall diagram;
s12, converting the acquired signal data into a two-dimensional simple characteristic image;
S13, generating a high-dimensional complex characteristic image based on the two-dimensional simple characteristic image.
Furthermore, the constellation diagram is generated by taking the I-path data as the horizontal axis and the Q-path data as the vertical axis; the constellation trajectory diagram is formed by superimposing a plurality of constellation diagrams; the oscillogram is generated by taking time as the horizontal axis and the I-path or Q-path data as the vertical axis, with time values calculated from the sampling frequency of the IQ data; the spectrogram is generated by the discrete Fourier transform;
the generation mode of the frequency spectrum diagram is as follows:
IQ two-path time-domain data generate a group of complex values through the discrete Fourier transform, each complex value representing the frequency intensity at the corresponding frequency value, calculated by the following formula:

X(nF) = Σₘ x(mT) · e^(−i2πnF·mT)

where x(mT) represents the discrete IQ signal, T > 0 and F > 0 are the sampling periods of the time variable and the frequency variable respectively, and m and n take integer values, representing sampling at equally spaced time and frequency grid points (mT, nF); the frequency value is plotted on the horizontal axis and the frequency intensity on the vertical axis to generate the spectrogram;
the waterfall plot generation mode is as follows:
the frequency value is represented by the horizontal axis and the frequency intensity by color, converting the two-dimensional spectrogram into a colored line; the lines of a plurality of consecutive time points are stacked into a plane in time order to draw a colored waterfall graph, in which the horizontal axis represents frequency, the vertical axis represents time, and the color of each point corresponds to the frequency intensity.
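As a concrete illustration of the generation steps above, the following numpy sketch derives a spectrum and a waterfall matrix from IQ frames. The unit sampling period and the function names are assumptions of this sketch, not part of the invention, and plotting/coloring is omitted:

```python
import numpy as np

def spectrum(i_path, q_path, T=1.0):
    """Magnitude spectrum of one IQ frame via the discrete Fourier transform."""
    x = i_path + 1j * q_path                       # discrete complex baseband signal x(mT)
    mags = np.abs(np.fft.fftshift(np.fft.fft(x)))  # frequency intensity per bin
    freqs = np.fft.fftshift(np.fft.fftfreq(len(x), d=T))
    return freqs, mags

def waterfall(frames, T=1.0):
    """Stack the spectra of consecutive frames: rows = time, columns = frequency."""
    return np.vstack([spectrum(i, q, T)[1] for i, q in frames])

# Example: a 128-point frame containing a single tone at 10/128 cycles per sample.
m = np.arange(128)
i_path = np.cos(2 * np.pi * 10 / 128 * m)
q_path = np.sin(2 * np.pi * 10 / 128 * m)
freqs, mags = spectrum(i_path, q_path)
# A constellation diagram would simply scatter i_path against q_path.
```

The spectrum peak lands on the tone frequency, and stacking consecutive frames row by row yields the time-frequency matrix that the colored waterfall graph visualizes.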
Further, step S20 includes the following sub-steps:
s21, carrying out category marking on the characteristic image;
S22, performing data enhancement with a GAN network to generate a large number of labeled characteristic images, as follows: first, the generator and discriminator of the GAN network are constructed, with the labeled characteristic images as input and synthesized characteristic images as output; then the GAN network is trained with the generator as the focus, and through adversarial training the generator produces synthetic characteristic images realistic enough that the discriminator cannot distinguish them from real images;
S23, mixing the original characteristic images and the generated characteristic images together to form an enhanced training set.
Further, step S30 includes the following sub-steps:
s31, selecting a convolutional neural network to construct a signal identification model;
S32, designing the convolutional neural network structure; the convolutional neural network comprises 2 convolutional layers, 2 pooling layers and 2 fully connected layers; the convolutional layers use 1 × 3 and 2 × 3 convolution kernels to compress the number of model parameters and perform edge padding with a 1 × 1 stride, with 64 convolution kernels in the first layer and 16 in the second; each convolutional layer is followed by a pooling layer that down-samples the image, reducing the size of the image data and the resources consumed; the output of each pooling layer undergoes batch normalization, a dropout layer is added after each normalization layer to prevent overfitting, and the ReLU nonlinear activation function is used to reduce the probability of neuron death during training; finally, the image data are flattened into one-dimensional data by a flatten layer, 11 values are output through the two fully connected layers, and a softmax activation function ensures that the 11 values sum to 1;
and S33, designing a deep neural network loss function, and taking the multi-class cross entropy loss function as a loss function of the convolutional neural network.
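A minimal numpy sketch of the two ingredients named in steps S32 and S33 — the softmax output layer and the multi-class cross-entropy loss — may make the design concrete. The 11-class shape follows the text; everything else is illustrative:

```python
import numpy as np

def softmax(logits):
    """Map a (batch, 11) array of logits to probabilities that sum to 1 per row."""
    z = logits - logits.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def categorical_cross_entropy(probs, labels):
    """Multi-class cross-entropy; labels are integer class ids in [0, 10]."""
    eps = 1e-12                                      # guard against log(0)
    picked = probs[np.arange(len(labels)), labels]
    return -np.mean(np.log(picked + eps))

probs = softmax(np.zeros((2, 11)))                   # uniform logits: every class gets 1/11
loss = categorical_cross_entropy(probs, np.array([0, 5]))
```

With uniform logits the loss equals log 11 ≈ 2.398, the usual sanity check before training starts.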
Further, step S40 includes the following sub-steps:
S41, adjusting the size of the signal characteristic image according to the network input requirement, converting the characteristic image into an image of size 100 × 100 × 1;
S42, initializing the network weights with the Glorot method and training the convolutional network iteratively by stochastic gradient descent; using an early-stopping regularization strategy with the early-stop patience set to 10; adjusting the network weight parameters by back-propagating the loss-function error, and obtaining the trained network model when the network loss value converges;
S43, storing the trained network parameters.
Further, step S50 includes the following sub-steps:
S51, constructing a base classifier for each type of feature image generated in step S10, according to steps S30 and S40; the output of each base classifier is a sequence of M = 11 values, each value representing the probability that the signal belongs to a certain class of signals;
S52, combining all the base classifiers with an ensemble learning algorithm to form a strong classifier and improve the recognition capability of the model; when forming the new training sample set, the prediction results of the different feature images of the same signal sample are merged to obtain one sample, and the final output is a sequence of M = 11 values.
Further, merging the prediction results of different feature images of the same signal sample to obtain one sample includes:
generating N characteristic images of different categories from one signal sample, and inputting each characteristic image into its corresponding base classifier to obtain a prediction result, namely a sequence containing M values; the N images yield N sequences of M values in total, which are regarded as a matrix with N rows and M columns and form a new training sample, in which the j-th element of the i-th row represents the probability, predicted by the i-th classifier, that the sample belongs to the j-th signal class.
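The N × M matrix construction just described can be sketched in a few lines of numpy. The shapes follow the text (N = 5 base classifiers, M = 11 classes); the function names are illustrative:

```python
import numpy as np

N, M = 5, 11  # 5 feature-image types, 11 signal classes (as in the example)

def stack_predictions(per_classifier_probs):
    """per_classifier_probs: list of N length-M probability sequences, one per
    base classifier; returns the N x M matrix that forms one meta-level sample."""
    return np.vstack(per_classifier_probs)

def build_meta_training_set(all_samples):
    """all_samples: list over signal samples, each a list of N probability rows."""
    return np.stack([stack_predictions(s) for s in all_samples])

# Two signal samples, each predicted by all 5 base classifiers.
uniform = [np.full(M, 1.0 / M) for _ in range(N)]
meta = build_meta_training_set([uniform, uniform])
```

Element [i][j] of one stacked matrix is the probability that base classifier i assigns the sample to class j, exactly the layout the strong classifier trains on.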
Further, step S60 includes the following sub-steps:
S61, converting the signals into characteristic images according to step S10;
S62, inputting the characteristic images into the signal identification model and outputting the modulation mode type of the signal.
Compared with the prior art, the invention has the following advantages:
Firstly, the invention uses a trained deep neural network model in place of manual signal processing, enabling uninterrupted all-weather signal identification, advancing signal identification from manual work to automation, reducing its cost, and giving the processing flow the advantages of automation and intelligence.
Secondly, the invention identifies signal characteristic images with state-of-the-art deep neural network models, replacing the manual mode based on expert experience. It overcomes the traditional method's heavy reliance on expert experience and its many restrictive assumptions, lowers the professional-skill requirements on signal identification personnel, and greatly improves efficiency and accuracy compared with manual identification.
Thirdly, because an ensemble learning algorithm is added on top of deep learning, the complementary advantages of deep learning and ensemble learning further improve the signal recognition capability of the model; moreover, in the ensemble learning process the user can select different types of feature images and base models as required, making the method targeted and practical.
Drawings
Fig. 1 is a framework of a signal recognition algorithm based on depth image learning and ensemble learning.
Fig. 2 is a step of generating a signal feature image by using the spectrum data.
FIG. 3 is an ensemble learning algorithm framework.
Detailed Description
The invention is further described with reference to the accompanying drawings, but is not limited thereby in any way; any alteration or substitution based on the teaching of the invention falls within the scope of the invention.
The following is a preferred embodiment of the present invention, and the implementation steps of the present invention are further described with reference to fig. 1.
S10, determining a suitable characteristic image set according to the specific signal identification task, and converting the acquired signal data into signal characteristic images.
The example input signal data is an IQ data set, which includes 11 modulation schemes widely used in wireless communication systems, i.e., 8 digital modulation schemes (BPSK, QPSK, 8PSK, 16QAM, 64QAM, BFSK, CPFSK, and PAM4) and 3 analog modulation schemes (WB-FM, AM-SSB, and AM-DSB). Each modulation mode comprises 1000 signal samples, each signal sample comprises an I-path signal and a Q-path signal, each path contains 128 sampling points, and each point corresponds to the signal amplitude at a certain moment.
When characteristic images are used for signal identification, image types suited to the modulation principle of the signal must be chosen to bring out the characteristic differences between modulation modes. In most cases a single type of characteristic image is far from sufficient, so multiple types are combined into a characteristic image set to improve accuracy. According to the modulation modes to be identified in the example, the feature image set comprises 5 types of images: the waveform image, the spectrum image, the constellation image, the constellation trajectory image and the waterfall image.
After determining the feature image set, as shown in fig. 2, preprocessing operations such as filtering, classifying, transforming, clipping, etc. are performed on the IQ data, and then different types of feature images are generated. Specifically, a constellation diagram can be generated by using the I-path data as a horizontal axis and the Q-path data as a vertical axis. And superposing the plurality of constellation diagrams together to form a constellation locus diagram. The time is taken as a horizontal axis, the time value is calculated according to the sampling frequency of IQ data, and an I path or Q path data is taken as a vertical axis to generate a waveform diagram. The spectrogram is generated by discrete fourier transform. IQ two-path time domain data generate a group of complex values through discrete Fourier transform, each complex value represents the frequency intensity of a corresponding frequency value, and the complex values are calculated by the following formula:
X(nF) = Σₘ x(mT) · e^(−i2πnF·mT)

where x(mT) represents the discrete IQ signal, T > 0 and F > 0 are the sampling periods of the time variable and the frequency variable respectively, and m and n take integer values, representing sampling at equally spaced time and frequency grid points (mT, nF). The frequency value is plotted on the horizontal axis and the frequency intensity on the vertical axis to generate a spectrogram. For the waterfall graph, the frequency value is represented by the horizontal axis and the frequency intensity by color, converting the two-dimensional spectrogram into a colored "line". The "lines" of a plurality of consecutive time points are stacked into "one surface" in time order to draw a colored waterfall graph, in which the horizontal axis represents frequency, the vertical axis represents time, and the color of each point corresponds to the frequency intensity.
After step S10, the signal data is converted into 5 classes of feature image sets, where one signal sample corresponds to 5 different classes of feature images.
S20, performing category labeling on the signal characteristic images to obtain a training set containing labeling information, and expanding the training set by using a data enhancement technique.
The signal data in the example are generated by a software radio platform, and the correspondence between signal and type is established during generation, so the modulation-type labeling of the characteristic images can be completed directly. To obtain more training data, data enhancement can be performed on the characteristic images with a GAN network. First, the generator and discriminator of the GAN network are constructed, with the labeled characteristic images as input and synthesized characteristic images as output. Then the GAN network is trained with the generator as the focus; through adversarial training the generator produces synthetic characteristic images realistic enough that the discriminator cannot distinguish them from real images. With this method, a large number of characteristic images can be generated. Finally, the original and synthesized characteristic images are mixed together to form an enhanced training set.
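The adversarial training loop described above can be illustrated with a deliberately tiny sketch. The example below trains a one-parameter linear generator against a logistic discriminator on scalar stand-in data instead of real characteristic images; it only demonstrates the alternating update scheme, not the networks actually used by the invention:

```python
import numpy as np

rng = np.random.default_rng(1)
real = lambda n: rng.normal(4.0, 1.0, size=n)    # stand-in "real" samples
noise = lambda n: rng.normal(0.0, 1.0, size=n)   # latent noise z
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

wg, bg = 1.0, 0.0   # generator G(z) = wg*z + bg
wd, bd = 0.1, 0.0   # discriminator D(x) = sigmoid(wd*x + bd)
lr, n = 0.05, 32

for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    xr, z = real(n), noise(n)
    xf = wg * z + bg
    pr, pf = sigmoid(wd * xr + bd), sigmoid(wd * xf + bd)
    wd -= lr * (-(1 - pr) * xr + pf * xf).mean()
    bd -= lr * (-(1 - pr) + pf).mean()
    # Generator step: push D(fake) toward 1 (the adversarial objective).
    z = noise(n)
    xf = wg * z + bg
    pf = sigmoid(wd * xf + bd)
    dxf = -(1 - pf) * wd            # d(-log D(xf)) / d xf
    wg -= lr * (dxf * z).mean()
    bg -= lr * dxf.mean()

synthetic = wg * noise(1000) + bg   # synthetic samples to mix into the training set
```

In the method itself the generator and discriminator are networks over images; the alternating discriminator/generator update structure is the same.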
S30, constructing a signal identification model based on the deep neural network.
The invention adopts a convolutional neural network to construct the signal identification model. Experimental debugging shows that once the height and width of the image exceed 100, further increases in image size yield smaller and smaller gains in recognition capability while greatly increasing training time. Therefore, the height and width of the feature image are both set to 100, i.e., the input of the convolutional neural network is a 100 × 100 × 1 signal feature image. The experimental data contain 11 different types of signals, so the output of the convolutional neural network is a sequence of 11 values. As shown in Table 1, the convolutional neural network comprises 2 convolutional layers (conv), 2 pooling layers (maxpool) and 2 fully connected layers (dense). The convolutional layers use 1 × 3 and 2 × 3 convolution kernels to compress the number of model parameters and perform edge padding with a 1 × 1 stride, with 64 convolution kernels in the first layer and 16 in the second. Each convolutional layer is followed by a pooling layer that down-samples the image, reducing the size of the image data and the resources consumed. The output of each pooling layer undergoes batch normalization, which improves the robustness and convergence of the model and accelerates training. Normalization promotes rapid convergence but can also cause overfitting as a side effect, so a dropout layer with a drop rate of 0.5 is added after each normalization layer. The ReLU nonlinear activation function is used to reduce the probability of neuron death during training.
And finally flattening the image data into one-dimensional data through a flatten layer, outputting 11 numerical values through two full-connection layers, and ensuring that the sum of the 11 numerical values is 1 by adopting a softmax activation function. And adopting a multi-class cross entropy loss function as a loss function of the convolutional neural network.
TABLE 1 convolutional neural network architecture
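Since the contents of Table 1 are not reproduced in this text, the tensor shapes implied by the description can be checked with a short calculation. The sketch assumes 'same' edge padding, stride-1 convolutions and 2 × 2 max pooling, none of which the text states explicitly:

```python
def conv_same(h, w, c_out):
    """'Same'-padded, stride-1 convolution keeps height/width, sets channels."""
    return h, w, c_out

def maxpool2(h, w, c):
    """2 x 2 max pooling halves height and width."""
    return h // 2, w // 2, c

shape = (100, 100, 1)                                 # input signal feature image
shape = maxpool2(*conv_same(shape[0], shape[1], 64))  # conv1 (64 kernels) + pool
shape = maxpool2(*conv_same(shape[0], shape[1], 16))  # conv2 (16 kernels) + pool
flat = shape[0] * shape[1] * shape[2]                 # flatten layer before the dense layers
```

Under these assumptions the flatten layer sees 25 × 25 × 16 = 10000 values, which the two fully connected layers reduce to the 11-value softmax output.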
S40, inputting the training set into the deep neural network to train the signal recognition model.
During training, the feature maps are first resized to 100 × 100 × 1 and input into the network. The network weights are initialized with the Glorot method, and the convolutional network is trained iteratively by stochastic gradient descent, with the maximum number of training epochs set to 200, the batch size per update set to 8, and an early-stopping regularization strategy with the early-stop patience set to 10. The network weight parameters are adjusted by back-propagating the loss-function error; when the network loss value converges, the trained network model is obtained and all trained parameter weights are stored.
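The training recipe — Glorot initialization, mini-batch stochastic gradient descent, and early stopping with a patience of 10 — can be demonstrated on a toy model. The numpy sketch below substitutes logistic regression for the convolutional network, so all data and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                    # stand-in for the image tensors
y = (X @ rng.normal(size=20) > 0).astype(float)   # synthetic binary labels
X_tr, y_tr, X_va, y_va = X[:160], y[:160], X[160:], y[160:]

limit = np.sqrt(6.0 / (20 + 1))                   # Glorot-uniform bound
w = rng.uniform(-limit, limit, size=20)
b = 0.0

def val_loss(w, b):
    p = 1.0 / (1.0 + np.exp(-(X_va @ w + b)))
    eps = 1e-12
    return -np.mean(y_va * np.log(p + eps) + (1 - y_va) * np.log(1 - p + eps))

init_loss = val_loss(w, b)
best, wait, patience, lr, batch = np.inf, 0, 10, 0.5, 8
for epoch in range(200):                          # max 200 training cycles
    idx = rng.permutation(len(X_tr))
    for s in range(0, len(X_tr), batch):          # mini-batch SGD updates
        j = idx[s:s + batch]
        p = 1.0 / (1.0 + np.exp(-(X_tr[j] @ w + b)))
        g = p - y_tr[j]                           # gradient of the loss w.r.t. logits
        w -= lr * X_tr[j].T @ g / len(j)
        b -= lr * g.mean()
    v = val_loss(w, b)
    if v < best - 1e-6:
        best, wait = v, 0                         # improvement: reset patience
    else:
        wait += 1
        if wait >= patience:                      # early stop after 10 stale epochs
            break
```

The early-stopping rule keeps the best validation loss seen so far and halts after 10 epochs without improvement, exactly the patience value stated in the text.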
S50, improving the performance of the signal identification model based on an ensemble learning algorithm.
The invention adopts the stacking ensemble learning algorithm and trains the model in two stages, as shown in fig. 3, where N = 5 indicates that 5 feature images are generated in step S10 in the example, and M = 11 indicates that the data in the example contain 11 signal categories.
The first stage constructs and trains a deep neural network as a signal recognition submodel, i.e. a base classifier, for each feature image according to steps S30 and S40. The output of the base classifier is a sequence of M =11 values, each value representing the likelihood of the signal belonging to a class of signals. And in the second stage, the integrated learning algorithm is adopted to combine the base classifiers together to form a strong classifier so as to improve the model identification capability. Specifically, in the second stage, firstly, the trained base classifier is used for predicting samples, and prediction results of different classes are combined together to form a new training set. The strong classifier is then trained based on the new training set, outputting a sequence of M =11 values, each value more accurately predicting the likelihood that a signal belongs to a certain class of signals.
It should be noted that, when a new training sample set is constructed in the second stage, one sample is obtained by combining the prediction results of different feature images of the same signal sample. For example, a signal sample generates N characteristic images of different types, and each input corresponding base classifier predicts to obtain a prediction result, namely a sequence containing M numerical values; n images obtain N sequences containing M numerical values in total, and the N sequences can be regarded as a matrix with N rows and M columns to form a new training sample, wherein the jth element in the ith row represents the possibility that the ith classifier predicts that the sample belongs to the jth signal. With this approach, the prediction result set is converted into a new training sample set.
S60, identifying the signal by using the trained model.
The characteristic images of the signal to be recognized are generated according to step S10 and input into the trained recognition network model, which outputs the modulation mode of the signal.
Compared with the prior art, the invention has the following advantages:
Firstly, because the invention uses a trained deep neural network model in place of manual signal processing, uninterrupted all-weather signal identification can be realized, advancing signal identification from manual work to automation, reducing its cost, and giving the processing flow the advantages of automation and intelligence.
Secondly, the invention identifies signal characteristic images with state-of-the-art deep neural network models, replacing the manual mode based on expert experience. It overcomes the traditional method's heavy reliance on expert experience and its many restrictive assumptions, lowers the professional-skill requirements on signal identification personnel, and greatly improves efficiency and accuracy compared with manual identification.
Thirdly, because an ensemble learning algorithm is added on top of deep learning, the complementary advantages of deep learning and ensemble learning further improve the signal recognition capability of the model; moreover, in the ensemble learning process the user can select different types of feature images and base models as required, making the method targeted and practical.
The word "preferred" is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as advantageous over other aspects or designs; rather, use of the word "preferred" is intended to present concepts in a concrete fashion. The term "or" as used in this application is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise or clear from context, "X employs A or B" is intended to mean any of the natural inclusive permutations: if X employs A, X employs B, or X employs both A and B, then "X employs A or B" is satisfied in any of the foregoing instances.
Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon reading and understanding this specification and the annexed drawings. The present disclosure includes all such modifications and alterations and is limited only by the scope of the appended claims. In particular regard to the various functions performed by the above-described components (e.g., elements, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (i.e., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the exemplary implementations illustrated herein. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such a feature may be combined with one or more features of the other implementations as may be desired and advantageous for a given or particular application. Furthermore, to the extent that the terms "includes," "has," "contains," or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Each functional unit in the embodiments of the present invention may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module. The integrated module, if implemented as a software functional module and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium. The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disk, or the like. Each apparatus or system described above may execute the method in the corresponding method embodiment.
In summary, the above-mentioned embodiment is one implementation of the present invention, but the implementation of the present invention is not limited thereto; any other changes, modifications, substitutions, combinations, and simplifications that do not depart from the spirit and principle of the present invention shall be regarded as equivalent replacements within the protection scope of the present invention.
Claims (9)
1. A signal identification method based on integrated deep learning is characterized by comprising the following steps:
s10, determining a proper characteristic image set according to a specific signal identification task, and converting acquired signal data into a signal characteristic image;
step S20, carrying out category labeling on the signal characteristic image to obtain a training set containing labeling information, and expanding the training set by using a data enhancement technology;
s30, constructing a signal identification model based on a deep neural network;
s40, inputting a training set into a deep neural network to train a signal recognition model;
s50, improving the performance of the signal identification model based on an integrated learning algorithm;
and S60, identifying the signal by using the trained model.
2. The signal identification method based on the integrated deep learning of claim 1, wherein the step S10 comprises the following sub-steps:
s11, determining a suitable feature image set according to a specific signal identification task; the signal data comprise BPSK, QPSK, 8PSK, 16QAM, 64QAM, BFSK, CPFSK, PAM4, WB-FM, AM-SSB and AM-DSB, each modulation mode comprises 1000 signal samples, each signal sample comprises two paths of signals of I path and Q path, each path of signal comprises 128 sampling points, and one point corresponds to the amplitude of a signal at a certain moment; setting a characteristic image set comprising 5 types of images of a oscillogram, a frequency spectrum graph, a constellation locus graph and a waterfall graph;
s12, converting the acquired signal data into a two-dimensional simple characteristic image;
and S13, generating a high-dimensional complex characteristic image based on the two-dimensional simple characteristic image.
3. The signal identification method based on the integrated deep learning of claim 2, wherein the constellation diagram is generated by using the I-path data as the horizontal axis and the Q-path data as the vertical axis; the constellation locus diagram is formed by superposing a plurality of constellation diagrams; the waveform diagram is generated by taking time as the horizontal axis and the I-path or Q-path data as the vertical axis, the time value being calculated according to the sampling frequency of the IQ data; the spectrogram is generated by a discrete Fourier transform;
the generation mode of the frequency spectrum diagram is as follows:
the IQ two-path time-domain data are converted by a discrete Fourier transform into a group of complex values, each complex value representing the intensity of the corresponding frequency, calculated by the following formula:

X(nF) = \sum_{m} x(mT)\, e^{-j 2\pi (nF)(mT)}

wherein x(mT) represents the discrete IQ signal, T > 0 and F > 0 are the sampling periods of the time variable and the frequency variable respectively, and m and n take integer values, representing sampling at equally spaced time and frequency grid points (mT, nF); the frequency value is represented on the horizontal axis and the frequency intensity on the vertical axis, generating the spectrogram;
the waterfall plot generation mode is as follows:
the frequency value is represented on the horizontal axis and the frequency intensity by color, so that a two-dimensional spectrogram is compressed into one colored line; the lines of a plurality of consecutive time points are stacked into a plane in time order to draw a colored waterfall diagram, in which the horizontal axis represents frequency, the vertical axis represents time, and the color of each point corresponds to the frequency intensity.
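As an illustrative sketch only (not part of the claims), the feature-image conversions described above can be prototyped with NumPy; the function names, bin counts, and the random test signal below are hypothetical choices for demonstration:

```python
import numpy as np

def constellation(i_data, q_data, bins=64):
    # Constellation-style image: 2-D histogram of I (horizontal axis) vs Q (vertical axis)
    img, _, _ = np.histogram2d(i_data, q_data, bins=bins)
    return img

def spectrum_line(i_data, q_data):
    # One spectral line: magnitude of the DFT of the complex baseband signal x = I + jQ
    x = i_data + 1j * q_data
    return np.abs(np.fft.fftshift(np.fft.fft(x)))

def waterfall(frames):
    # Stack the spectral lines of consecutive frames: rows = time, columns = frequency
    return np.vstack([spectrum_line(i, q) for i, q in frames])

rng = np.random.default_rng(0)
i_sig = rng.normal(size=128)   # 128 sampling points per path, as in claim 2
q_sig = rng.normal(size=128)

const_img = constellation(i_sig, q_sig)
wf = waterfall([(i_sig, q_sig)] * 4)   # 4 consecutive time frames
```

A waterfall image would then color-map each row of `wf` by intensity, as the claim describes.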
4. The signal identification method based on the integrated deep learning of claim 1, wherein the step S20 comprises the following sub-steps:
s21, carrying out category marking on the characteristic image;
s22, adopting a GAN network to enhance data, and generating a large number of labeled characteristic images, wherein the method comprises the following steps: firstly, a generator and a discriminator of the GAN network are constructed, the marked characteristic images are input and output as synthesized characteristic images; then training a GAN network, wherein a generator is taken as a side point in the training process, and through confrontation training, the generator generates a vivid synthesized characteristic image, so that a discriminator cannot distinguish the synthesized characteristic image from a real image;
and S23, mixing the original characteristic image and the generated characteristic image together to form an enhanced training set.
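The mixing step S23 can be sketched as follows; `fake_generator` is a hypothetical stand-in for a trained GAN generator (a real implementation would be a trained network), and all sizes and labels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def fake_generator(z):
    # Stand-in for a trained GAN generator: maps latent noise to 64x64 "images".
    # (hypothetical placeholder, not the patent's actual generator)
    w = rng.standard_normal((z.shape[1], 64 * 64))
    return np.tanh(z @ w).reshape(-1, 64, 64)

real_images = rng.random((100, 64, 64))       # original labeled feature images
real_labels = rng.integers(0, 11, size=100)   # 11 modulation classes

z = rng.standard_normal((50, 16))             # latent codes for the generator
synth_images = fake_generator(z)
synth_labels = np.full(50, 3)                 # e.g. generator trained on class 3

# S23: mix original and generated images into one enhanced training set
X = np.concatenate([real_images, synth_images])
y = np.concatenate([real_labels, synth_labels])
perm = rng.permutation(len(X))
X, y = X[perm], y[perm]
```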
5. The signal identification method based on the integrated deep learning of claim 1, wherein the step S30 comprises the following sub-steps:
s31, selecting a convolutional neural network to construct a signal identification model;
s32, designing a convolutional neural network structure; the convolutional neural network comprises 2 convolutional layers, 2 pooling layers and 2 full-connection layers; the convolution layer uses convolution kernels of 1 × 3 and 2 × 3 to compress the number of model parameters, 1 × 1 stepping is adopted for edge filling, 64 convolution kernels are configured in the first layer, and 16 convolution kernels are configured in the second layer; each convolution layer is followed by a pooling layer, and the images are sampled down, so that the size of image data is reduced, and consumed resources are reduced; the output of each pooling layer is subjected to batch normalization processing, a dropout layer is added after each normalizing layer to prevent overfitting, a ReLU nonlinear activation function is used to reduce the probability of neuron death in training, finally, image data are flattened into one-dimensional data through a flatten layer, 11 numerical values are finally output through two full-connected layers, and the sum of the 11 numerical values is ensured to be 1 by adopting a softmax activation function;
and S33, designing a deep neural network loss function, and taking the multi-class cross entropy loss function as a loss function of the convolutional neural network.
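As a rough shape-arithmetic sketch of the structure in step S32 (the 100 × 100 single-channel input and the 2 × 2 pooling size are assumptions; the claim does not specify the pooling size), one can trace the tensor sizes through the network:

```python
import numpy as np

def conv_same(h, w, c_out):
    # Convolution with stride 1 and edge padding: spatial size is preserved
    return h, w, c_out

def pool(h, w, c, k=2):
    # k x k down-sampling pooling layer (k = 2 assumed here)
    return h // k, w // k, c

h, w, c = 100, 100, 1          # assumed 100 x 100 single-channel input image
h, w, c = conv_same(h, w, 64)  # conv1: 64 kernels of size 1 x 3
h, w, c = pool(h, w, c)        # pool1 (followed by batch norm, dropout, ReLU)
h, w, c = conv_same(h, w, 16)  # conv2: 16 kernels of size 2 x 3
h, w, c = pool(h, w, c)        # pool2 (followed by batch norm, dropout, ReLU)
flat = h * w * c               # flatten layer: 25 * 25 * 16 features

logits = np.random.default_rng(0).standard_normal(11)  # after the two dense layers
probs = np.exp(logits - logits.max())
probs /= probs.sum()           # softmax: 11 class scores summing to 1
```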
6. The signal identification method based on the integrated deep learning of claim 1, wherein the step S40 comprises the following sub-steps:
s41, adjusting the size of the signal characteristic image according to the network input requirement, and converting the characteristic image into an image with the size of 100 multiplied by 1;
s42, initializing a network weight by adopting a Glorot method, and performing iterative training and learning on the convolution network by utilizing a random gradient descent method; using an early stop regularization strategy, setting the number of early stop cycles to be 10;
adjusting network weight parameters through a loss function back propagation error, and obtaining a trained network model when a network loss function value is converged;
and S43, storing the trained network parameters.
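The training loop of step S40 can be sketched with a toy softmax classifier standing in for the CNN (the data, learning rate, and convergence threshold are illustrative assumptions; Glorot-uniform initialization and patience-10 early stopping follow the claim):

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot(fan_in, fan_out):
    # Glorot (Xavier) uniform initialization
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

X = rng.standard_normal((200, 20))        # toy features standing in for images
y = rng.integers(0, 11, size=200)         # 11 modulation classes
W = glorot(20, 11)

best_loss, patience, wait = np.inf, 10, 0
for epoch in range(500):
    p = softmax(X @ W)
    # multi-class cross-entropy loss (step S33)
    loss = -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))
    grad = X.T @ (p - np.eye(11)[y]) / len(y)   # back-propagated error
    W -= 0.5 * grad                             # gradient-descent weight update
    if loss < best_loss - 1e-4:
        best_loss, wait = loss, 0
    else:
        wait += 1
        if wait >= patience:                    # early stop after 10 stale epochs
            break
```

After convergence or early stopping, the weights `W` would be stored (step S43).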
7. The signal identification method based on the integrated deep learning of claim 1, wherein the step S50 comprises the following sub-steps:
s51, constructing a base classifier for each feature image generated in the step S10 according to the steps S30 and S40, wherein the output of the base classifier is a sequence containing M =11 numerical values, and each numerical value represents the possibility that the signal belongs to a certain class of signals;
and S52, combining all the base classifiers by an ensemble learning algorithm into a strong classifier to improve the recognition capability of the model; when forming a new training sample set, the prediction results of the different feature images of the same signal sample are combined to obtain one sample, and the final output is a sequence of M = 11 numerical values.
8. The signal identification method based on the integrated deep learning of claim 7, wherein the one sample is obtained by combining the prediction results of different feature images of the same signal sample, and the method comprises:
generating N characteristic images of different categories from one signal sample, and inputting each characteristic image into the corresponding base classifier for prediction, obtaining a prediction result, namely a sequence containing M numerical values; the N sequences of M numerical values obtained from the N images are regarded as a matrix with N rows and M columns, forming one new training sample, wherein the element in row i and column j represents the possibility, predicted by the i-th base classifier, that the sample belongs to the j-th signal class.
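The N × M combination described in this claim can be sketched as follows; N = 5 matches the five feature-image types of claim 2, and the averaging combiner at the end is an illustrative assumption, not the patent's specified ensemble rule:

```python
import numpy as np

N, M = 5, 11   # N feature-image types, M = 11 signal classes
rng = np.random.default_rng(2)

# Each base classifier emits a softmax sequence of M scores for the same sample
base_outputs = rng.random((N, M))
base_outputs /= base_outputs.sum(axis=1, keepdims=True)

# One new training sample for the ensemble: an N x M matrix where
# element (i, j) is classifier i's score for class j
meta_sample = base_outputs

# Simple combiner (assumed): average the rows into a fused M-score sequence
fused = meta_sample.mean(axis=0)
pred = int(np.argmax(fused))
```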
9. The signal identification method based on the integrated deep learning of claim 1, wherein the step S60 comprises the following sub-steps:
s61, converting the signals into characteristic images according to the step S10;
and S62, inputting the characteristic image into the signal identification model and outputting the modulation mode type of the signal.
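End to end, the identification of claim 9 can be sketched with stubs; `to_feature_images` and `ensemble_predict` are hypothetical placeholders for the trained pipeline of steps S10 and S50, and only the class list comes from claim 2:

```python
import numpy as np

rng = np.random.default_rng(3)
CLASSES = ["BPSK", "QPSK", "8PSK", "16QAM", "64QAM", "BFSK",
           "CPFSK", "PAM4", "WB-FM", "AM-SSB", "AM-DSB"]

def to_feature_images(i_data, q_data):
    # Stand-in for step S10: here only an I/Q histogram image; a real pipeline
    # would also produce waveform, spectrum, constellation-locus and waterfall images.
    img, _, _ = np.histogram2d(i_data, q_data, bins=32)
    return [img]

def ensemble_predict(images):
    # Stand-in for the trained ensemble model: returns 11 softmax-like scores.
    scores = rng.random(len(CLASSES))
    return scores / scores.sum()

i_sig = rng.normal(size=128)
q_sig = rng.normal(size=128)
scores = ensemble_predict(to_feature_images(i_sig, q_sig))
modulation = CLASSES[int(np.argmax(scores))]   # predicted modulation mode type
```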
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310016915.0A CN115935172A (en) | 2023-01-06 | 2023-01-06 | Signal identification method based on integrated deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115935172A true CN115935172A (en) | 2023-04-07 |
Family
ID=86697960
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190072601A1 (en) * | 2017-01-23 | 2019-03-07 | DGS Global Systems, Inc. | Systems, methods, and devices for automatic signal detection based on power distribution by frequency over time within a spectrum |
CN110414554A (en) * | 2019-06-18 | 2019-11-05 | 浙江大学 | One kind being based on the improved Stacking integrated study fish identification method of multi-model |
CN111178260A (en) * | 2019-12-30 | 2020-05-19 | 山东大学 | Modulation signal time-frequency diagram classification system based on generation countermeasure network and operation method thereof |
CN111259798A (en) * | 2020-01-16 | 2020-06-09 | 西安电子科技大学 | Modulation signal identification method based on deep learning |
CN111444832A (en) * | 2020-03-25 | 2020-07-24 | 哈尔滨工程大学 | Whale cry classification method based on convolutional neural network |
CN112308133A (en) * | 2020-10-29 | 2021-02-02 | 成都明杰科技有限公司 | Modulation identification method based on convolutional neural network |
CN113536919A (en) * | 2021-06-10 | 2021-10-22 | 重庆邮电大学 | Signal modulation recognition algorithm based on data enhancement and convolutional neural network |
CN113872904A (en) * | 2021-09-18 | 2021-12-31 | 北京航空航天大学 | Multi-classification communication signal automatic modulation identification method based on ensemble learning |
CN115273237A (en) * | 2022-08-01 | 2022-11-01 | 中国矿业大学 | Human body posture and action recognition method based on integrated random configuration neural network |
Non-Patent Citations (2)
Title |
---|
CUI TIANSHU: "Deep Learning Method for Space-Based Electromagnetic Signal Recognition", China Master's Theses Full-text Database, Basic Sciences Series * |
WANG YONGSHI: "Research on Recognition Algorithms for Digital Modulation Signals Based on Deep Learning", China Master's Theses Full-text Database, Information Science and Technology Series * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107979554B (en) | Radio signal Modulation Identification method based on multiple dimensioned convolutional neural networks | |
CN110070101B (en) | Plant species identification method and device, storage medium and computer equipment | |
CN106372648A (en) | Multi-feature-fusion-convolutional-neural-network-based plankton image classification method | |
CN115249332B (en) | Hyperspectral image classification method and device based on space spectrum double-branch convolution network | |
CN111627080B (en) | Gray level image coloring method based on convolution nerve and condition generation antagonistic network | |
CN114663685B (en) | Pedestrian re-recognition model training method, device and equipment | |
CN109872326B (en) | Contour detection method based on deep reinforced network jump connection | |
Hou et al. | Electromagnetic signal feature fusion and recognition based on multi-modal deep learning | |
CN115270872A (en) | Radar radiation source individual small sample learning and identifying method, system, device and medium | |
Yang et al. | One-dimensional deep attention convolution network (ODACN) for signals classification | |
CN111986275A (en) | Inverse halftoning method for multi-modal halftone image | |
Chi et al. | Plant species recognition based on bark patterns using novel Gabor filter banks | |
CN113902095A (en) | Automatic modulation identification method, device and system for wireless communication | |
CN102737232B (en) | Cleavage cell recognition method | |
CN115935172A (en) | Signal identification method based on integrated deep learning | |
CN116935226A (en) | HRNet-based improved remote sensing image road extraction method, system, equipment and medium | |
CN114581470B (en) | Image edge detection method based on plant community behaviors | |
CN113656754B (en) | Fixed star spectrum data enhancement method and system | |
CN109768944A (en) | A kind of signal modulation identification of code type method based on convolutional neural networks | |
CN115731172A (en) | Crack detection method, device and medium based on image enhancement and texture extraction | |
CN112580598A (en) | Radio signal classification method based on multi-channel Diffpool | |
CN112906783A (en) | Electroencephalogram emotion recognition method and device suitable for cross-test | |
CN111935043A (en) | Phase modulation signal modulation mode identification method based on phase statistical chart | |
Gros et al. | Joint use of bivariate empirical mode decomposition and convolutional neural networks for automatic modulation recognition | |
CN112232430A (en) | Neural network model testing method and device, storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20230407 |