CN108616471B - Signal modulation identification method and device based on convolutional neural network - Google Patents

Signal modulation identification method and device based on convolutional neural network

Info

Publication number
CN108616471B
Authority
CN
China
Prior art keywords
neural network
convolutional neural
signal
modulation type
maximum value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810427110.4A
Other languages
Chinese (zh)
Other versions
CN108616471A (en)
Inventor
郑仕链
杨小牛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CETC 36 Research Institute
Original Assignee
CETC 36 Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CETC 36 Research Institute filed Critical CETC 36 Research Institute
Priority to CN201810427110.4A priority Critical patent/CN108616471B/en
Publication of CN108616471A publication Critical patent/CN108616471A/en
Application granted granted Critical
Publication of CN108616471B publication Critical patent/CN108616471B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 27/00: Modulated-carrier systems
    • H04L 27/0012: Modulated-carrier systems; arrangements for identifying the type of modulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Digital Transmission Methods That Use Modulated Carrier Waves (AREA)

Abstract

The invention discloses a signal modulation identification method and apparatus based on a convolutional neural network. The method comprises the following steps: comparing whether the length N of the received signal equals the length L of the input signal supported by the convolutional neural network; when N equals L, directly inputting the received signal into the convolutional neural network to obtain the signal modulation type output by the convolutional neural network; and when N does not equal L, padding or segmenting the received signal accordingly and then inputting it into the convolutional neural network to obtain the signal modulation type output by the convolutional neural network. Higher recognition accuracy is thus obtained at low signal-to-noise ratio than with conventional feature-extraction-based methods. The method also adapts to different lengths of the signal to be identified, can fully utilize the complete information of the signal to be identified, and avoids wasting information.

Description

Signal modulation identification method and device based on convolutional neural network
Technical Field
The invention relates to the technical field of signal modulation identification, in particular to a signal modulation identification method and device based on a convolutional neural network.
Background
Traditional signal modulation identification is mainly realized by extracting signal features. Researchers have proposed a modulation type identification method that avoids a dedicated feature extraction step and takes radio signal sample data directly as the input of a convolutional neural network to identify the signal modulation mode, achieving higher identification accuracy at low signal-to-noise ratio than traditional feature-extraction-based methods.
However, the length of the signal to be identified in this modulation identification method is fixed (for example, 128 sampling points), whereas in practical applications the length of the signal to be identified is not fixed and may be longer or shorter than the signal input length for which the convolutional neural network is designed. When the length of the signal to be recognized is larger than the designed input length, a simple approach is to intercept from it a signal segment that matches the convolutional neural network input and recognize that segment; however, this wastes the signal information that is never input into the convolutional neural network and also affects the recognition accuracy, so improvement is urgently needed.
Disclosure of Invention
The invention provides a signal modulation identification method and apparatus based on a convolutional neural network, so as to solve the problems that existing signal modulation identification technology cannot fully utilize the information of the signal to be identified and has low identification accuracy.
According to one aspect of the application, a signal modulation identification method based on a convolutional neural network is provided, and comprises the following steps:
comparing whether the length N of the received signal is equal to the length L of the input signal supported by the convolutional neural network;
when N is equal to L, directly inputting the received signal into the convolutional neural network to obtain a signal modulation type output by the convolutional neural network;
and when N is not equal to L, performing corresponding filling or segmentation processing on the received signal, and inputting the signal into the convolutional neural network to obtain a signal modulation type output by the convolutional neural network.
Optionally, when N is not equal to L, performing corresponding padding or segmentation processing on the received signal, and inputting the signal into the convolutional neural network includes:
if N is less than L, according to the input signal length L supported by the convolutional neural network, zero filling is carried out on the received signal, and then the signal is input into the convolutional neural network;
and if N is larger than L, dividing the received signal into signal segments with the length of L, and inputting the signal segments into the convolutional neural network.
Optionally, after zero padding is performed on the received signal, inputting the signal into the convolutional neural network includes:
after L-N zeros are appended to the tail of the sample sequence x(n) of the received signal, inputting the sequence x(n), n = 0, 1, 2, ..., L-1, into the convolutional neural network according to its input format so as to identify the modulation type;
dividing the received signal into signal segments with length L, and inputting the signal segments into the convolutional neural network comprises:
for the sample sequence x(n) of the received signal, selecting signal segments y_i(m) of length L by sliding with interval P, and inputting each segment y_i(m) into the convolutional neural network according to its input format so as to identify the modulation type;
wherein n = 0, 1, 2, ..., N-1; y_i(m) = x((i-1)P + m), m = 0, 1, 2, ..., L-1, i = 1, 2, ..., I;
I = ⌊(N-L+1)/P⌋, the largest integer not greater than (N-L+1)/P; and 1 ≤ P ≤ N-L+1.
Optionally, after each signal segment y_i(m) is input into the convolutional neural network according to its input format and the modulation type is identified, a fusion decision is made either according to the identification results or according to the confidences.
Optionally, the performing a fusion decision according to the recognition result includes:
step S1, initializing the number of times v_j that a signal segment y_i(m) is identified as the j-th modulation type to v_j = 0, wherein j = 1, 2, ..., M, and M represents the number of modulation types the convolutional neural network supports identifying;
step S2, inputting y_i(m) into the convolutional neural network for modulation type recognition to obtain an original recognition result o_i, wherein o_i ∈ {1, 2, ..., M};
step S3, if o_i = j, adding 1 to the count v_j of the j-th modulation type, and repeating steps S2 to S3 until all signal segments y_i(m) have been recognized;
step S4, finding the maximum value of v_j; when only one maximum value exists, outputting the modulation type represented by the corresponding subscript j as the final modulation type identification result; when two or more maximum values exist, randomly selecting one of them and outputting the modulation type represented by its subscript j as the final modulation type identification result.
Optionally, the making of the fusion decision according to the confidence includes:
step A1, inputting each signal segment y_i(m) into the convolutional neural network for modulation type recognition and obtaining the confidence vector p_i = [p_i,1, p_i,2, ..., p_i,M],
wherein p_i,j represents the confidence that signal segment y_i(m) is identified as the j-th modulation type, j = 1, 2, ..., M, and M represents the number of modulation types the convolutional neural network supports identifying;
step A2, calculating the mean confidence of the j-th modulation type, w_j = (1/I) Σ_{i=1..I} p_i,j, j = 1, 2, ..., M;
step A3, finding the maximum of the mean values w_j; when only one maximum value exists, outputting the modulation type represented by the corresponding subscript j as the final modulation type identification result; when two or more maximum values exist, randomly selecting one of them and outputting the modulation type represented by its subscript j as the final modulation type identification result.
According to another aspect of the present invention, there is provided a convolutional neural network-based signal modulation identification apparatus, including:
the comparison module is used for comparing whether the length N of the received signal is equal to the length L of the input signal supported by the convolutional neural network or not;
the identification module is used for directly inputting the received signal into the convolutional neural network when N is equal to L so as to obtain a signal modulation type output by the convolutional neural network; and when N is not equal to L, performing corresponding filling or segmentation processing on the received signal, and inputting the signal into the convolutional neural network to obtain a signal modulation type output by the convolutional neural network.
Optionally, the identification module is specifically configured to, if N is less than L, perform zero padding on the received signal according to an input signal length L supported by the convolutional neural network, and then input the signal into the convolutional neural network; and if N is larger than L, dividing the received signal into signal segments with the length of L, and inputting the signal segments into the convolutional neural network.
Optionally, the identification module is configured to: after L-N zeros are appended to the tail of the sample sequence x(n) of the received signal, input the sequence x(n), n = 0, 1, 2, ..., L-1, into the convolutional neural network according to its input format for modulation type identification; and, for the sample sequence x(n) of the received signal, select signal segments y_i(m) of length L by sliding with interval P and input each segment y_i(m) into the convolutional neural network according to its input format for modulation type identification; wherein n = 0, 1, 2, ..., N-1; y_i(m) = x((i-1)P + m), m = 0, 1, 2, ..., L-1, i = 1, 2, ..., I; I = ⌊(N-L+1)/P⌋, the largest integer not greater than (N-L+1)/P; and 1 ≤ P ≤ N-L+1.
Optionally, the identification module is further configured to, after each signal segment y_i(m) is input into the convolutional neural network according to its input format and the modulation type is identified, perform a fusion decision according to the identification results or according to the confidences;
specifically, the fusion judgment according to the recognition result includes:
step S1, initializing the number of times v_j that a signal segment y_i(m) is identified as the j-th modulation type to v_j = 0, wherein j = 1, 2, ..., M, and M represents the number of modulation types the convolutional neural network supports identifying;
step S2, inputting y_i(m) into the convolutional neural network for modulation type recognition to obtain an original recognition result o_i, wherein o_i ∈ {1, 2, ..., M};
step S3, if o_i = j, adding 1 to the count v_j of the j-th modulation type, and repeating steps S2 to S3 until all signal segments y_i(m) have been recognized;
step S4, finding the maximum value of v_j; when only one maximum value exists, outputting the modulation type represented by the corresponding subscript j as the final modulation type identification result; when two or more maximum values exist, randomly selecting one of them and outputting the modulation type represented by its subscript j as the final modulation type identification result;
the fusion judgment according to the confidence degree comprises the following steps:
step A1, inputting each signal segment y_i(m) into the convolutional neural network for modulation type recognition and obtaining the confidence vector p_i = [p_i,1, p_i,2, ..., p_i,M],
wherein p_i,j represents the confidence that signal segment y_i(m) is identified as the j-th modulation type, j = 1, 2, ..., M, and M represents the number of modulation types the convolutional neural network supports identifying;
step A2, calculating the mean confidence of the j-th modulation type, w_j = (1/I) Σ_{i=1..I} p_i,j, j = 1, 2, ..., M;
step A3, finding the maximum of the mean values w_j; when only one maximum value exists, outputting the modulation type represented by the corresponding subscript j as the final modulation type identification result; when two or more maximum values exist, randomly selecting one of them and outputting the modulation type represented by its subscript j as the final modulation type identification result.
The embodiment of the invention has the following beneficial effects: according to the convolutional-neural-network-based signal modulation identification scheme, the length N of the received signal is compared with the length L of the input signal supported by the convolutional neural network; if N equals L, the received signal is input directly into the convolutional neural network, and if N does not equal L, the received signal is padded or segmented accordingly before being input into the convolutional neural network, so as to obtain the signal modulation type output by the network. The technical scheme of this embodiment therefore adapts to signals to be identified of different lengths, makes full use of the complete information of the signal to be identified, avoids wasting information, and improves the identification accuracy of the signal modulation type.
Drawings
FIG. 1 is a schematic diagram of a convolutional neural network structure according to an embodiment of the present invention;
FIG. 2 is a flow chart of a convolutional neural network-based signal modulation identification method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a signal modulation identification method based on a convolutional neural network according to another embodiment of the present invention;
fig. 4 is a block diagram of a signal modulation identification apparatus based on a convolutional neural network according to an embodiment of the present invention.
Detailed Description
In a dynamic spectrum access network, an unauthorized user can communicate in spectrum holes not currently used by an authorized user (also called a primary user), which improves the utilization of wireless spectrum resources. One key requirement of dynamic spectrum access is detecting authorized users so as to avoid harmful interference to them. Modulation type identification helps identify the primary user by determining which modulation type the received radio signal uses, and is therefore important for judging the type of the primary user. On this basis, the present embodiment provides a signal modulation identification method and apparatus based on a convolutional neural network, which improve the accuracy of identifying the signal modulation mode.
For ease of understanding, the signal modulation type and convolutional neural network are briefly described herein.
To ensure communication quality, overcome the problems of long-distance signal transmission, and improve spectrum utilization, the signal spectrum is shifted to a high-frequency channel for transmission through modulation. This process of loading the signal to be transmitted onto a high-frequency carrier is called modulation. The three basic modulation methods for digital signals are amplitude modulation, frequency modulation, and phase modulation; other modulation methods are variations or combinations of these. Convolutional Neural Networks (CNN) occupy an important position in deep learning because of their outstanding results in the image field, and are among the most widely used neural networks. Compared with traditional neural networks, they have two distinctive characteristics. First, convolutional neural networks use weight sharing: the same filter is applied to the input to obtain the output of the corresponding channel. Second, convolutional neural networks provide pooling operations, which give a degree of translation invariance and reduce the computational complexity of deeper layers through downsampling. Most convolutional neural network structures contain four basic layers: a convolutional layer, a normalization layer, a nonlinear activation layer, and a pooling layer.
The invention takes the I component (the real part of the received complex baseband signal) and the Q component (the imaginary part of the received complex baseband signal) as the two columns of a matrix that serves as the input of the convolutional neural network. The structure of the convolutional neural network is shown in Fig. 1; the supported input signal length is L, and the supported input format is a matrix of L rows and 2 columns (i.e. L x 2). In Fig. 1, conv denotes a convolutional layer, the number before conv (i.e. 21x1) denotes the size of the convolution kernel, and the number after conv (128) denotes the number of convolution kernels; ReLU denotes rectified linear activation; Dropout denotes a dropout layer, and the number in parentheses (0.5) denotes the dropout probability; fc denotes the fully connected layer, and the number (i.e. 256) denotes the number of neurons; SoftMax denotes a SoftMax layer whose number of neurons is M, the total number of modulation type categories; the final output is the classification label, which uses one-hot coding. A batch normalization layer between the convolutional layer and the nonlinear activation layer is also included but, for simplicity, is not shown in Fig. 1. In one embodiment, the network training objective function may be a cross-entropy loss function, and the training method may be stochastic gradient descent.
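For illustration only, the following is a minimal PyTorch sketch of a network with the structure described above. The exact number of convolutional blocks in Fig. 1 is not reproduced here (a single block is assumed), and the class name ModulationCNN and the example values L=128 and M=11 are illustrative rather than taken from the patent.

```python
import torch
import torch.nn as nn

class ModulationCNN(nn.Module):
    def __init__(self, L: int, M: int):
        super().__init__()
        # The input is treated as a 1-channel "image" of L rows (samples) x 2 columns (I, Q).
        self.features = nn.Sequential(
            nn.Conv2d(1, 128, kernel_size=(21, 1), padding=(10, 0)),  # 21x1 conv, 128 kernels
            nn.BatchNorm2d(128),   # batch normalization between conv and activation
            nn.ReLU(),             # rectified linear activation
            nn.Dropout(0.5),       # dropout probability 0.5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * L * 2, 256),  # fully connected layer, 256 neurons
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, M),            # M output neurons, one per modulation type
        )

    def forward(self, x):  # x: (batch, 1, L, 2)
        return self.classifier(self.features(x))  # logits; softmax is applied inside the loss

# Cross-entropy objective trained by stochastic gradient descent, as the text suggests.
model = ModulationCNN(L=128, M=11)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```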
Fig. 2 is a flowchart of a signal modulation identification method based on a convolutional neural network according to an embodiment of the present invention, and referring to fig. 2, the signal modulation identification method based on the convolutional neural network according to the present embodiment includes the following steps:
step S201, comparing whether the length N of the received signal is equal to the length L of the input signal supported by the convolutional neural network;
step S202, when N is equal to L, directly inputting the received signal into the convolutional neural network to obtain a signal modulation type output by the convolutional neural network;
and step S203, when N is not equal to L, performing corresponding filling or segmentation processing on the received signal, and inputting the signal into the convolutional neural network to obtain a signal modulation type output by the convolutional neural network.
As shown in Fig. 2, the signal modulation identification method of this embodiment compares the length of the received signal with the length of the input signal supported by the convolutional neural network and applies different processing according to the comparison result, so that signals to be identified of different lengths can be handled, the information of the signal to be identified is fully utilized, and the identification accuracy of the signal modulation type is improved.
Fig. 3 is a schematic flow chart of a signal modulation identification method based on a convolutional neural network according to another embodiment of the present invention, and the following description focuses on implementation steps of the signal modulation identification method based on the convolutional neural network according to the present embodiment with reference to fig. 3.
Referring to fig. 3, the process starts, and step S301 is executed first, to determine the relationship between the length N of the received signal and the length L of the input signal supported by the convolutional neural network;
it is understood that the received signal is the signal to be identified. There are no more than three possible relationships between the length of the signal to be identified and the length of the input signal supported by the convolutional neural network: less than, greater than, or equal to. This embodiment adopts different processing steps for the three relationships. The equal case is the simplest: the signal is directly input into the convolutional neural network for identification. When the signal length N is not equal to the length L supported by the convolutional neural network, this embodiment pads or segments the received signal before inputting it into the convolutional neural network. Specifically, if N is less than L, the received signal is zero-padded to the input signal length L supported by the convolutional neural network and then input into the network; and if N is greater than L, the received signal is divided into signal segments of length L, which are input into the network.
Referring to fig. 3, steps S302 and S305 are executed when the aforementioned first relationship is present.
Specifically, in step S302, if N is less than L, the signal is zero-padded, where the number of zeros padded is L-N.
In this embodiment, after L-N zeros are appended to the tail of the sample sequence x(n) of the received signal, the sequence x(n), n = 0, 1, 2, ..., L-1, is input into the convolutional neural network according to its input format to identify the modulation type. That is, L-N zeros are appended after the signal sample sequence x(n) to be recognized, i.e., x(n) = 0 for n = N, ..., L-1.
Then step S305 is executed: the padded sequence is input into the convolutional neural network for identification, and the identification result is the modulation type with the highest confidence. Here, x(n), n = 0, 1, 2, ..., L-1, is input into the convolutional neural network according to the input format it supports for modulation pattern recognition, and the recognition result is the modulation type with the highest confidence.
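A minimal sketch of this zero-padding branch, assuming the received samples are held in a complex NumPy array and the network expects an L x 2 [I, Q] matrix; the function name pad_to_length is illustrative.

```python
import numpy as np

def pad_to_length(x: np.ndarray, L: int) -> np.ndarray:
    """Append L - N zeros to a complex sample sequence of length N <= L and
    format it as the L x 2 [I, Q] matrix the network expects."""
    N = len(x)
    padded = np.concatenate([x, np.zeros(L - N, dtype=x.dtype)])  # x(n) = 0 for n = N..L-1
    return np.stack([padded.real, padded.imag], axis=1)           # shape (L, 2)
```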
Under the second relationship (N greater than L), steps S303 and S306 are executed.
Step S303, if N is larger than L, segmenting the signal, wherein the length of each segment is L.
Here, dividing the received signal into signal segments of length L and inputting them into the convolutional neural network includes: for the sample sequence x(n) of the received signal, selecting signal segments y_i(m) of length L by sliding with interval P, and inputting each segment y_i(m) into the convolutional neural network according to its input format to identify the modulation type; wherein n = 0, 1, 2, ..., N-1; y_i(m) = x((i-1)P + m), m = 0, 1, 2, ..., L-1, i = 1, 2, ..., I; I = ⌊(N-L+1)/P⌋, the largest integer not greater than (N-L+1)/P; and 1 ≤ P ≤ N-L+1.
That is, for the signal sample sequence x(n), n = 0, 1, ..., N-1, to be identified, signal segments y_i(m) of length L are selected by sliding with interval P, where y_i(m) = x((i-1)P + m), m = 0, 1, 2, ..., L-1, i = 1, 2, ..., I, and I = ⌊(N-L+1)/P⌋ is the largest integer not greater than (N-L+1)/P, with 1 ≤ P ≤ N-L+1.
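A minimal sketch of the sliding segmentation, again assuming a complex NumPy array of samples; the function name segment_signal is illustrative and the index arithmetic follows the formulas above.

```python
import numpy as np

def segment_signal(x: np.ndarray, L: int, P: int) -> list:
    """Slide a window of length L with interval P over x(n), n = 0..N-1, and
    return the I = floor((N-L+1)/P) segments y_i(m) = x((i-1)P + m)."""
    N = len(x)
    assert 1 <= P <= N - L + 1, "interval P must satisfy 1 <= P <= N-L+1"
    I = (N - L + 1) // P  # largest integer not greater than (N-L+1)/P
    return [x[(i - 1) * P:(i - 1) * P + L] for i in range(1, I + 1)]
```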
And S306, inputting each section of signal into a convolutional neural network for recognition, and determining a final recognition result according to the fusion of the recognition results or the fusion of confidence degrees.
The signal segments y_i(m) obtained in step S303 are input into the convolutional neural network according to the input format it supports for recognition, and a fusion decision is made according to the recognition results or the confidences.
The fusion judgment according to the recognition result in the embodiment includes:
step S1, initializing the number of times v_j that a signal segment y_i(m) is identified as the j-th modulation type to v_j = 0, wherein j = 1, 2, ..., M, and M represents the number of modulation types the convolutional neural network supports identifying;
step S2, inputting y_i(m) into the convolutional neural network for modulation type recognition to obtain an original recognition result o_i, wherein o_i ∈ {1, 2, ..., M};
step S3, if o_i = j, adding 1 to the count v_j of the j-th modulation type, and repeating steps S2 to S3 until all signal segments y_i(m) have been recognized;
step S4, finding the maximum value of v_j; when only one maximum value exists, outputting the modulation type represented by the corresponding subscript j as the final modulation type identification result; when two or more maximum values exist, randomly selecting one of them and outputting the modulation type represented by its subscript j as the final modulation type identification result.
That is, first let the number of times a signal segment y_i(m) is identified as the j-th modulation type be v_j = 0, j = 1, 2, ..., M, where M is the number of modulation types the designed convolutional neural network supports identifying. Input y_i(m) into the convolutional neural network according to its input format for modulation recognition; the recognition result is o_i, o_i ∈ {1, 2, ..., M}. If o_i = j, the count of the j-th modulation type is increased by 1, i.e. v_j = v_j + 1. Repeat this process until all y_i(m) have been recognized. Finally, find the maximum of v_j (j = 1, 2, ..., M): if only one maximum exists, its index j gives the final modulation type identification result (the j-th modulation type); if several maxima exist, randomly select one of them and use its index j as the final modulation type identification result (the j-th modulation type).
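A minimal sketch of fusion by recognition results, assuming a classify(segment) routine (an assumed wrapper around the trained network) that returns the recognized type index j in 1..M:

```python
import random
from collections import Counter

def fuse_by_vote(segments, classify) -> int:
    # Count v_j only for the types that actually occur; absent types have v_j = 0
    # and can never be the maximum, so the result matches steps S1-S4.
    votes = Counter(classify(y) for y in segments)
    best = max(votes.values())
    return random.choice([j for j, v in votes.items() if v == best])  # random tie-break
```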
The fusion judgment according to the confidence degree comprises the following steps:
step A1, inputting each signal segment y_i(m) into the convolutional neural network for modulation type recognition and obtaining the confidence vector p_i = [p_i,1, p_i,2, ..., p_i,M],
wherein p_i,j represents the confidence that signal segment y_i(m) is identified as the j-th modulation type, j = 1, 2, ..., M, and M represents the number of modulation types the convolutional neural network supports identifying;
step A2, calculating the mean confidence of the j-th modulation type, w_j = (1/I) Σ_{i=1..I} p_i,j, j = 1, 2, ..., M;
step A3, finding the maximum of the mean values w_j; when only one maximum value exists, outputting the modulation type represented by the corresponding subscript j as the final modulation type identification result; when two or more maximum values exist, randomly selecting one of them and outputting the modulation type represented by its subscript j as the final modulation type identification result.
That is, first input y_i(m) into the convolutional neural network according to its input format for modulation recognition to obtain the confidence vector (i.e. the output of the SoftMax layer of the convolutional neural network) p_i = [p_i,1, p_i,2, ..., p_i,M], where p_i,j denotes the confidence that y_i(m) is identified as the j-th modulation type, j = 1, 2, ..., M, and M is the number of modulation types the designed convolutional neural network supports identifying. Then compute the mean confidence of the j-th modulation type, w_j = (1/I) Σ_{i=1..I} p_i,j, j = 1, 2, ..., M. Finally, find the maximum of w_j (j = 1, 2, ..., M): if only one maximum exists, its index j gives the final modulation type identification result (the j-th modulation type); if several maxima exist, randomly select one of them and use its index j as the final modulation type identification result (the j-th modulation type).
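A minimal sketch of fusion by confidence, assuming a classify_proba(segment) routine (an assumed wrapper around the trained network) that returns the SoftMax confidence vector for one segment:

```python
import numpy as np

def fuse_by_confidence(segments, classify_proba) -> int:
    probs = np.stack([classify_proba(y) for y in segments])  # I x M matrix of p_i,j
    w = probs.mean(axis=0)                                    # w_j = (1/I) * sum_i p_i,j
    candidates = np.flatnonzero(w == w.max())                 # all j attaining the maximum
    return int(np.random.choice(candidates)) + 1              # +1 because types are indexed 1..M
```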
Under the third relationship, step S304 is executed: if N is equal to L, the signal is input directly into the convolutional neural network for identification, and the identification result is the modulation type with the highest confidence.
From the above, this embodiment discloses a convolutional neural network modulation identification method that adapts to different signal lengths: when the length of the signal sample sequence to be identified is smaller than the input length supported by the convolutional neural network, the sequence is adapted to the network input format by zero padding; when it is larger than the supported input length, the sequence is input into the convolutional neural network in segments for modulation pattern identification, and the final modulation pattern identification result is obtained by fusing the recognition results or fusing the confidences. The complete information of the signal to be identified is thus fully utilized, and the accuracy of modulation type identification is improved.
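Putting the branches together, the following sketch shows how the three cases might be dispatched, reusing the helper functions sketched above; classify and classify_proba are assumed wrappers around the trained network that accept the L x 2 matrix produced by pad_to_length.

```python
def identify_modulation(x, L, P, classify, classify_proba=None):
    N = len(x)
    if N <= L:
        # N == L: no zeros are added, the sequence is only reformatted to L x 2;
        # N < L: L - N zeros are appended first (step S302), then classified (step S305).
        return classify(pad_to_length(x, L))
    # N > L: segment with interval P (step S303), then fuse the per-segment results (step S306).
    segments = [pad_to_length(y, L) for y in segment_signal(x, L, P)]
    if classify_proba is not None:
        return fuse_by_confidence(segments, classify_proba)
    return fuse_by_vote(segments, classify)
```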
Fig. 4 is a block diagram of a convolutional neural network based signal modulation identification apparatus according to an embodiment of the present invention, and referring to fig. 4, the convolutional neural network based signal modulation identification apparatus 400 of the present embodiment includes:
a comparing module 401, configured to compare whether the length N of the received signal is equal to the length L of the input signal supported by the convolutional neural network;
an identifying module 402, configured to directly input the received signal into the convolutional neural network when N is equal to L, so as to obtain a signal modulation type output by the convolutional neural network; and when N is not equal to L, performing corresponding filling or segmentation processing on the received signal, and inputting the signal into the convolutional neural network to obtain a signal modulation type output by the convolutional neural network.
In an embodiment of the present invention, the identifying module 402 is specifically configured to, if N is less than L, perform zero padding on the received signal according to an input signal length L supported by the convolutional neural network, and then input the signal into the convolutional neural network; and if N is larger than L, dividing the received signal into signal segments with the length of L, and inputting the signal segments into the convolutional neural network.
In an embodiment of the present invention, after L-N zeros are appended to the tail of the sample sequence x(n) of the received signal, the identification module 402 inputs the sequence x(n), n = 0, 1, 2, ..., L-1, into the convolutional neural network according to its input format to identify the modulation type; and, for the sample sequence x(n) of the received signal, it selects signal segments y_i(m) of length L by sliding with interval P and inputs each segment y_i(m) into the convolutional neural network according to its input format to identify the modulation type; wherein n = 0, 1, 2, ..., N-1; y_i(m) = x((i-1)P + m), m = 0, 1, 2, ..., L-1, i = 1, 2, ..., I; I = ⌊(N-L+1)/P⌋, the largest integer not greater than (N-L+1)/P; and 1 ≤ P ≤ N-L+1.
In one embodiment of the invention, the identification module 402 is further configured to, after each signal segment y_i(m) is input into the convolutional neural network according to its input format and the modulation type is identified, perform a fusion decision according to the identification results or according to the confidences. Specifically, the fusion decision according to the identification results includes:
step S1, initializing the number of times v_j that a signal segment y_i(m) is identified as the j-th modulation type to v_j = 0, wherein j = 1, 2, ..., M, and M represents the number of modulation types the convolutional neural network supports identifying; step S2, inputting y_i(m) into the convolutional neural network for modulation type recognition to obtain an original recognition result o_i, wherein o_i ∈ {1, 2, ..., M}; step S3, if o_i = j, adding 1 to the count v_j of the j-th modulation type, and repeating steps S2 to S3 until all signal segments y_i(m) have been recognized; step S4, finding the maximum value of v_j; when only one maximum value exists, outputting the modulation type represented by the corresponding subscript j as the final modulation type identification result; when two or more maximum values exist, randomly selecting one of them and outputting the modulation type represented by its subscript j as the final modulation type identification result;
the fusion decision according to the confidences includes: step A1, inputting each signal segment y_i(m) into the convolutional neural network for modulation type recognition and obtaining the confidence vector p_i = [p_i,1, p_i,2, ..., p_i,M], wherein p_i,j represents the confidence that signal segment y_i(m) is identified as the j-th modulation type, j = 1, 2, ..., M, and M represents the number of modulation types the convolutional neural network supports identifying; step A2, calculating the mean confidence of the j-th modulation type, w_j = (1/I) Σ_{i=1..I} p_i,j, j = 1, 2, ..., M; step A3, finding the maximum of the mean values w_j; when only one maximum value exists, outputting the modulation type represented by the corresponding subscript j as the final modulation type identification result; when two or more maximum values exist, randomly selecting one of them and outputting the modulation type represented by its subscript j as the final modulation type identification result.
It should be noted that the signal modulation identification apparatus based on the convolutional neural network of this embodiment corresponds to the signal modulation identification method based on the convolutional neural network, and therefore, the content that is not described in the signal modulation identification apparatus based on the convolutional neural network of this embodiment may refer to the description in the foregoing method embodiment, and is not described here again.
An embodiment of the present invention provides an electronic device, including: a processor, and a memory for storing processor-executable instructions. Wherein the processor is configured to execute instructions stored in the memory corresponding to the steps of the convolutional neural network-based signal modulation identification method in the foregoing embodiments.
In addition, the logic instructions in the memory may be implemented in the form of software functional units and may be stored in a computer readable storage medium when sold or used as a stand-alone product.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium.
It is to be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In the description of the present invention, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description. Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
While the foregoing is directed to embodiments of the present invention, other modifications and variations may be devised by those skilled in the art in light of the above teachings. It should be understood by those skilled in the art that the foregoing detailed description is for the purpose of illustrating the invention rather than limiting it, and that the scope of the invention is defined by the claims.

Claims (8)

1. A signal modulation identification method based on a convolutional neural network is characterized by comprising the following steps:
comparing whether the length N of the received signal is equal to the length L of the input signal supported by the convolutional neural network;
when N is equal to L, directly inputting the received signal into the convolutional neural network to obtain a signal modulation type output by the convolutional neural network;
when N is not equal to L, the received signal is input into the convolutional neural network after being subjected to corresponding completion or segmentation processing, so that a signal modulation type output by the convolutional neural network is obtained;
if N is less than L, according to the input signal length L supported by the convolutional neural network, zero filling is carried out on the received signal, and then the signal is input into the convolutional neural network;
and if N is larger than L, dividing the received signal into signal segments with the length of L, and inputting the signal segments into the convolutional neural network.
2. The method of claim 1, wherein inputting the received signal into the convolutional neural network after zero-filling the received signal comprises:
after L-N zeros are appended to the tail of the sample sequence x(n) of the received signal, inputting the sequence x(n), n = 0, 1, 2, ..., L-1, into the convolutional neural network according to its input format so as to identify the modulation type;
dividing the received signal into signal segments with length L, and inputting the signal segments into the convolutional neural network comprises:
for the sample sequence x(n) of the received signal, selecting signal segments y_i(m) of length L by sliding with interval P, and inputting each segment y_i(m) into the convolutional neural network according to its input format so as to identify the modulation type;
wherein n = 0, 1, 2, ..., N-1; y_i(m) = x((i-1)P + m), m = 0, 1, 2, ..., L-1, i = 1, 2, ..., I;
I = ⌊(N-L+1)/P⌋, the largest integer not greater than (N-L+1)/P; and 1 ≤ P ≤ N-L+1.
3. The method according to claim 2, wherein after each signal segment y_i(m) is input into the convolutional neural network according to its input format and the modulation type is identified, a fusion decision is made either according to the identification results or according to the confidences.
4. The method of claim 3, wherein making a fusion decision based on the recognition result comprises:
step S1, initializing the number of times v_j that a signal segment y_i(m) is identified as the j-th modulation type to v_j = 0, wherein j = 1, 2, ..., M, and M represents the number of modulation types the convolutional neural network supports identifying;
step S2, inputting y_i(m) into the convolutional neural network for modulation type recognition to obtain an original recognition result o_i, wherein o_i ∈ {1, 2, ..., M};
step S3, if o_i = j, adding 1 to the count v_j of the j-th modulation type, and repeating steps S2 to S3 until all signal segments y_i(m) have been recognized;
step S4, finding the maximum value of v_j; when only one maximum value exists, outputting the modulation type represented by the corresponding subscript j as the final modulation type identification result; when two or more maximum values exist, randomly selecting one of them and outputting the modulation type represented by its subscript j as the final modulation type identification result.
5. The method of claim 3, wherein making a fusion decision based on the confidence level comprises:
step A1, inputting each signal segment y_i(m) into the convolutional neural network for modulation type recognition and obtaining the confidence vector p_i = [p_i,1, p_i,2, ..., p_i,M],
wherein p_i,j represents the confidence that signal segment y_i(m) is identified as the j-th modulation type, j = 1, 2, ..., M, and M represents the number of modulation types the convolutional neural network supports identifying;
step A2, calculating the mean confidence of the j-th modulation type, w_j = (1/I) Σ_{i=1..I} p_i,j, j = 1, 2, ..., M;
step A3, finding the maximum of the mean values w_j; when only one maximum value exists, outputting the modulation type represented by the corresponding subscript j as the final modulation type identification result; when two or more maximum values exist, randomly selecting one of them and outputting the modulation type represented by its subscript j as the final modulation type identification result.
6. A convolutional neural network-based signal modulation identification apparatus, comprising:
the comparison module is used for comparing whether the length N of the received signal is equal to the length L of the input signal supported by the convolutional neural network or not;
the identification module is used for directly inputting the received signal into the convolutional neural network when N is equal to L so as to obtain a signal modulation type output by the convolutional neural network; when N is not equal to L, the received signal is input into the convolutional neural network after being subjected to corresponding completion or segmentation processing, so that a signal modulation type output by the convolutional neural network is obtained;
if N is less than L, according to the input signal length L supported by the convolutional neural network, zero filling is carried out on the received signal, and then the signal is input into the convolutional neural network; and if N is larger than L, dividing the received signal into signal segments with the length of L, and inputting the signal segments into the convolutional neural network.
7. The apparatus of claim 6, wherein the identification module is configured to,
after L-N zeros are appended to the tail of the sample sequence x(n) of the received signal, input the sequence x(n), n = 0, 1, 2, ..., L-1, into the convolutional neural network according to its input format so as to identify the modulation type;
and, for the sample sequence x(n) of the received signal, select signal segments y_i(m) of length L by sliding with interval P, and input each segment y_i(m) into the convolutional neural network according to its input format so as to identify the modulation type; wherein n = 0, 1, 2, ..., N-1; y_i(m) = x((i-1)P + m), m = 0, 1, 2, ..., L-1, i = 1, 2, ..., I; I = ⌊(N-L+1)/P⌋, the largest integer not greater than (N-L+1)/P; and 1 ≤ P ≤ N-L+1.
8. The apparatus of claim 7, wherein the identification module is further configured to, after each signal segment y_i(m) is input into the convolutional neural network according to its input format and the modulation type is identified, perform a fusion decision according to the identification results or according to the confidences;
specifically, the fusion judgment according to the recognition result includes:
step S1, initializing the number of times v_j that a signal segment y_i(m) is identified as the j-th modulation type to v_j = 0, wherein j = 1, 2, ..., M, and M represents the number of modulation types the convolutional neural network supports identifying;
step S2, inputting y_i(m) into the convolutional neural network for modulation type recognition to obtain an original recognition result o_i, wherein o_i ∈ {1, 2, ..., M};
step S3, if o_i = j, adding 1 to the count v_j of the j-th modulation type, and repeating steps S2 to S3 until all signal segments y_i(m) have been recognized;
step S4, finding the maximum value of v_j; when only one maximum value exists, outputting the modulation type represented by the corresponding subscript j as the final modulation type identification result; when two or more maximum values exist, randomly selecting one of them and outputting the modulation type represented by its subscript j as the final modulation type identification result;
the fusion judgment according to the confidence degree comprises the following steps:
step A1, inputting each signal segment y_i(m) into the convolutional neural network for modulation type recognition and obtaining the confidence vector p_i = [p_i,1, p_i,2, ..., p_i,M],
wherein p_i,j represents the confidence that signal segment y_i(m) is identified as the j-th modulation type, j = 1, 2, ..., M, and M represents the number of modulation types the convolutional neural network supports identifying;
step A2, calculating the mean confidence of the j-th modulation type, w_j = (1/I) Σ_{i=1..I} p_i,j, j = 1, 2, ..., M;
step A3, finding the maximum of the mean values w_j; when only one maximum value exists, outputting the modulation type represented by the corresponding subscript j as the final modulation type identification result; when two or more maximum values exist, randomly selecting one of them and outputting the modulation type represented by its subscript j as the final modulation type identification result.
CN201810427110.4A 2018-05-07 2018-05-07 Signal modulation identification method and device based on convolutional neural network Active CN108616471B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810427110.4A CN108616471B (en) 2018-05-07 2018-05-07 Signal modulation identification method and device based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810427110.4A CN108616471B (en) 2018-05-07 2018-05-07 Signal modulation identification method and device based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN108616471A CN108616471A (en) 2018-10-02
CN108616471B (en) 2020-11-20

Family

ID=63662061

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810427110.4A Active CN108616471B (en) 2018-05-07 2018-05-07 Signal modulation identification method and device based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN108616471B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109274624B (en) * 2018-11-07 2021-04-27 中国电子科技集团公司第三十六研究所 Carrier frequency offset estimation method based on convolutional neural network
CN109787929A (en) * 2019-02-20 2019-05-21 深圳市宝链人工智能科技有限公司 Signal modulate method, electronic device and computer readable storage medium
CN110048980A (en) * 2019-04-19 2019-07-23 中国电子科技集团公司第三十六研究所 A kind of blind demodulation method of digital communication and device
CN110266620B (en) * 2019-07-08 2020-06-30 电子科技大学 Convolutional neural network-based 3D MIMO-OFDM system channel estimation method
CN112039820B (en) * 2020-08-14 2022-06-21 哈尔滨工程大学 Communication signal modulation and identification method for quantum image group mechanism evolution BP neural network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103067325A (en) * 2013-01-31 2013-04-24 南京邮电大学 Cooperative modulation identification method based on multi-class characteristic parameters and evidence theory
CN103220241A (en) * 2013-03-29 2013-07-24 南京信息职业技术学院 Method for extracting box-dimension features from signals at low signal-to-noise ratio condition
US9530400B2 (en) * 2014-09-29 2016-12-27 Nuance Communications, Inc. System and method for compressed domain language identification
CN106778594A (en) * 2016-12-12 2017-05-31 燕山大学 Mental imagery EEG signal identification method based on LMD entropys feature and LVQ neutral nets
CN107808138A (en) * 2017-10-31 2018-03-16 电子科技大学 A kind of communication signal recognition method based on FasterR CNN

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103067325A (en) * 2013-01-31 2013-04-24 南京邮电大学 Cooperative modulation identification method based on multi-class characteristic parameters and evidence theory
CN103220241A (en) * 2013-03-29 2013-07-24 南京信息职业技术学院 Method for extracting box-dimension features from signals at low signal-to-noise ratio condition
US9530400B2 (en) * 2014-09-29 2016-12-27 Nuance Communications, Inc. System and method for compressed domain language identification
CN106778594A (en) * 2016-12-12 2017-05-31 燕山大学 Mental imagery EEG signal identification method based on LMD entropys feature and LVQ neutral nets
CN107808138A (en) * 2017-10-31 2018-03-16 电子科技大学 A kind of communication signal recognition method based on FasterR CNN

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Deep architectures for modulation recognition; Nathan West; IEEE International Symposium on Dynamic Spectrum Access Networks; 2017-03-09; full text *
Research on Key Technologies and Theory of Modulation Recognition of Wireless Communication Signals; Yang Faquan (杨发权); China Doctoral Dissertations Database; 2016-03-31; full text *

Also Published As

Publication number Publication date
CN108616471A (en) 2018-10-02

Similar Documents

Publication Publication Date Title
CN108616471B (en) Signal modulation identification method and device based on convolutional neural network
Zeng et al. Large-scale JPEG image steganalysis using hybrid deep-learning framework
CN107480707B (en) Deep neural network method based on information lossless pooling
CN109711426B (en) Pathological image classification device and method based on GAN and transfer learning
CN110223292B (en) Image evaluation method, device and computer readable storage medium
CN107835496B (en) Spam short message identification method and device and server
CN110222760B (en) Quick image processing method based on winograd algorithm
CN109033780B (en) A kind of edge calculations access authentication method based on wavelet transformation and neural network
CN112633420B (en) Image similarity determination and model training method, device, equipment and medium
CN109887047B (en) Signal-image translation method based on generation type countermeasure network
CN115937655B (en) Multi-order feature interaction target detection model, construction method, device and application thereof
CN111814744A (en) Face detection method and device, electronic equipment and computer storage medium
CN110968845B (en) Detection method for LSB steganography based on convolutional neural network generation
CN113806746A (en) Malicious code detection method based on improved CNN network
CN114553648A (en) Wireless communication modulation mode identification method based on space-time diagram convolutional neural network
Ding et al. Noise-resistant network: a deep-learning method for face recognition under noise
CN115941112B (en) Portable hidden communication method, computer equipment and storage medium
CN111860834B (en) Neural network tuning method, system, terminal and storage medium
CN111353514A (en) Model training method, image recognition method, device and terminal equipment
CN113919401A (en) Modulation type identification method and device based on constellation diagram characteristics and computer equipment
CN114896887A (en) Frequency-using equipment radio frequency fingerprint identification method based on deep learning
CN113920516A (en) Calligraphy character skeleton matching method and system based on twin neural network
Qiu et al. Deepsig: A hybrid heterogeneous deep learning framework for radio signal classification
Yu et al. A multi-task learning CNN for image steganalysis
CN110110120B (en) Image retrieval method and device based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant