CN112688772A - Machine learning superimposed training sequence frame synchronization method - Google Patents


Info

Publication number
CN112688772A
CN112688772A (application CN202011498196.3A)
Authority
CN
China
Prior art keywords
frame synchronization
sequence
network
net
signal
Prior art date
Legal status
Granted
Application number
CN202011498196.3A
Other languages
Chinese (zh)
Other versions
CN112688772B (en)
Inventor
卿朝进
饶川贵
余旺
唐书海
郭奕
Current Assignee
Xihua University
Original Assignee
Xihua University
Priority date
Filing date
Publication date
Application filed by Xihua University filed Critical Xihua University
Priority to CN202011498196.3A priority Critical patent/CN112688772B/en
Publication of CN112688772A publication Critical patent/CN112688772A/en
Application granted granted Critical
Publication of CN112688772B publication Critical patent/CN112688772B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Synchronisation In Digital Transmission Systems (AREA)

Abstract

The invention discloses a machine learning superimposed training sequence frame synchronization method. It includes: preprocessing the transmitted frame signal, generated by the transmitter in a superimposed-training-sequence mode and received by the receiver, to obtain its normalized metric vector; inputting the normalized metric vector into a trained frame synchronization network FSN-Net to obtain a frame synchronization estimate and thereby achieve frame synchronization; inputting the frame synchronization estimation signal obtained from the frame synchronization estimate into a trained estimation and equalization sub-network EstEqu-Net to obtain an estimate of the transmitted frame signal; and, using the transmitted-frame-signal estimate and the superposition mode, eliminating the superimposed training sequence and demodulating the data in the transmitted frame signal. The frame synchronization network FSN-Net is constructed on an ELM network model, and the estimation and equalization sub-network EstEqu-Net is constructed on a deep neural network. The invention reduces the occupation of spectrum resources and improves frame synchronization performance, particularly under nonlinear distortion.

Description

Machine learning superimposed training sequence frame synchronization method
Technical Field
The invention relates to the technical field of wireless communication frame synchronization.
Background
Frame synchronization is one of the important components of a wireless communication system, and its performance directly affects the performance of the whole system. However, nonlinear distortion inevitably exists in wireless communication systems; it destroys the orthogonality of the training sequence on which conventional frame synchronization methods (such as the correlation method) rely, so the synchronization performance degrades greatly and such methods are difficult to apply under nonlinear distortion.
Disclosure of Invention
The invention aims to provide a machine learning superimposed training sequence frame synchronization method which, compared with the traditional correlation synchronization method, markedly reduces the occupation of spectrum resources and effectively improves the frame synchronization error probability performance in systems with nonlinear distortion.
The technical scheme of the invention is as follows:
A machine learning superimposed training sequence frame synchronization method, comprising:
preprocessing the transmitted frame signal, generated by the transmitter in a superimposed-training-sequence mode and received by the receiver, to obtain its normalized metric vector;
inputting the normalized metric vector into a trained frame synchronization network FSN-Net to obtain a frame synchronization estimate and realize frame synchronization;
inputting the synchronization signal obtained from the frame synchronization estimate into a trained estimation and equalization sub-network EstEqu-Net to obtain an estimate of the transmitted frame signal;
eliminating the superimposed training sequence and demodulating the data in the transmitted frame signal, using the transmitted-frame-signal estimate and the superposition mode;
wherein the frame synchronization network FSN-Net is constructed based on an ELM network model, and the estimation and equalization sub-network EstEqu-Net is constructed based on a deep neural network.
According to some preferred embodiments of the present invention, the transmitted frame signal is obtained by superimposing a training sequence, as follows:
x = αs + (1-α)c;
where α denotes the superposition factor, s ∈ ℂ^(M×1) denotes a training sequence of length M, c ∈ ℂ^(M×1) denotes a modulated data sequence of length M, and ℂ^(M×1) denotes the M-dimensional complex field.
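The superposition step above can be sketched in a few lines; the function name and the BPSK data choice below are illustrative, not from the patent.

```python
import numpy as np

def build_superimposed_frame(s, c, alpha):
    """Superimpose a training sequence s on modulated data c.

    Implements x = alpha*s + (1-alpha)*c from the patent; the function
    name and the modulation used in the demo are assumptions.
    """
    s = np.asarray(s, dtype=complex)
    c = np.asarray(c, dtype=complex)
    assert s.shape == c.shape, "training sequence and data must share length M"
    return alpha * s + (1 - alpha) * c

# Toy usage with M = 4 and alpha = 0.2 (the patent's worked example uses M = 64).
rng = np.random.default_rng(0)
M, alpha = 4, 0.2
s = np.exp(2j * np.pi * rng.random(M))               # unit-modulus training symbols
c = (2 * rng.integers(0, 2, M) - 1).astype(complex)  # BPSK data, hypothetical
x = build_superimposed_frame(s, c, alpha)
```

Because the data is scaled by (1-α), no extra time or frequency slots are consumed by the training sequence, which is the spectrum-saving point the patent makes.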
According to some preferred embodiments of the invention, the obtaining of the normalized cross-correlation metric vector comprises:
S21, splicing two frames of the same training sequence s into a double training sequence s̃ ∈ ℂ^(2M×1) of length 2M, as follows:
s̃ = [s^T, s^T]^T;
S22, truncating length-M sequences from the double training sequence s̃ in order, generating the truncated sequences s_t, t = 0, 1, …, M-1, as follows:
s_t = [s̃_t, s̃_(t+1), …, s̃_(t+M-1)]^T;
S23, obtaining by cross-correlation processing the cross-correlation metric Γ_t between the truncated sequence s_t and the received signal vector y, as follows:
Γ_t = |s_t^H y|;
S24, collecting the M cross-correlation metrics Γ_t to construct the cross-correlation metric vector γ, as follows:
γ = [Γ_0, Γ_1, …, Γ_(M-1)]^T, where γ ∈ ℝ^(M×1) and ℝ^(M×1) denotes the M-dimensional real field;
S25, normalizing the cross-correlation metric vector γ to obtain the normalized cross-correlation metric vector γ̄, as follows:
γ̄ = γ/‖γ‖;
where the superscript T denotes transposition, the superscript H denotes conjugate transposition, and ‖γ‖ denotes the Frobenius norm of the metric vector γ.
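A minimal numpy sketch of steps S21-S25. Taking the magnitude of s_t^H y as the real-valued metric Γ_t is one consistent reading of the patent (γ is stated to be real); treat that detail as an assumption.

```python
import numpy as np

def normalized_metric_vector(y, s):
    """Steps S21-S25: normalized cross-correlation metric of y with s.

    Each window s_t of the doubled sequence [s; s] is a cyclic shift of s;
    Gamma_t = |s_t^H y| (absolute value assumed), then gamma is normalized.
    """
    s = np.asarray(s, dtype=complex)
    y = np.asarray(y, dtype=complex)
    M = len(s)
    s2 = np.concatenate([s, s])              # S21: double sequence, length 2M
    gamma = np.empty(M)
    for t in range(M):                       # S22-S23: windows and metrics
        s_t = s2[t:t + M]
        gamma[t] = abs(np.vdot(s_t, y))      # vdot conjugates its first argument
    return gamma / np.linalg.norm(gamma)     # S25: Euclidean (Frobenius) norm

# Sanity check: for a noiseless received window that starts tau symbols into
# the frame (a cyclic shift of s), the metric peaks at tau.
M = 8
rng = np.random.default_rng(1)
s = np.exp(2j * np.pi * rng.random(M))
tau = 3
y = np.roll(s, -tau)
g = normalized_metric_vector(y, s)
```

The peak location of γ̄ is exactly the feature the FSN-Net is later trained to read out, so this preprocessing turns raw samples into a near-one-hot synchronization signature.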
According to some preferred embodiments of the invention, the frame synchronization network FSN-Net comprises:
1 input layer, 1 hidden layer, and 1 output layer; the numbers of input-layer and output-layer nodes both equal the training sequence length M, and the number of hidden-layer nodes is Ñ = mM, where the value of m is set according to engineering experience; the activation function of the hidden layer is a sigmoid function.
According to some preferred embodiments of the present invention, the training of the frame synchronization network FSN-Net comprises:
S31, collecting N_t received-signal sample sequences y_i ∈ ℂ^(M×1) of length M, i = 1, 2, …, N_t, and constructing the sample sequence set {y_i};
S32, preprocessing each signal sequence y_i of the sample sequence set according to steps S21-S25 to obtain the normalized cross-correlation metric vectors γ̄_i, which form the normalized cross-correlation metric set {γ̄_i};
S33, obtaining from the synchronization offset values τ_i, i = 1, 2, …, N_t, by one-hot encoding, the label sequences T_i, i = 1, 2, …, N_t, corresponding to the sample sequences, forming the label set T; τ_i can be obtained from a statistical channel model or from the actual scene, in combination with existing methods or equipment; the one-hot label T_i is the length-M vector whose τ_i-th entry is 1 and whose other entries are 0;
S34, generating from a Gaussian random distribution the input weights W ∈ ℝ^(Ñ×M) and bias b ∈ ℝ^(Ñ×1) applied to each normalized cross-correlation metric vector, and forming, with γ̄_i as the input of the FSN-Net input layer, the corresponding hidden-layer output h_i, as follows:
h_i = σ(Wγ̄_i + b);
where σ(·) denotes the activation function;
S35, collecting the N_t hidden-layer outputs to form the output matrix H, namely:
H = [h_1, h_2, …, h_(N_t)]^T;
S36, obtaining the output weights β from the hidden-layer output matrix H and the label set T by:
β = H†T;
where H† denotes the Moore-Penrose pseudoinverse of H;
S37, saving the model parameters W, b and β to obtain the trained frame synchronization network FSN-Net.
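The ELM training above is a single least-squares solve, not iterative backpropagation. A sketch under assumed shapes (metric vectors as matrix rows, labels as one-hot rows); the helper names are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_elm(G, T, n_hidden, seed=0):
    """ELM training (S31-S37): random W, b; beta solved by pseudoinverse.

    G: (N_t, M) rows are normalized metric vectors; T: (N_t, M) one-hot labels.
    """
    rng = np.random.default_rng(seed)
    n_t, m_len = G.shape
    W = rng.standard_normal((n_hidden, m_len))   # S34: Gaussian input weights
    b = rng.standard_normal((n_hidden, 1))       # S34: Gaussian bias
    H = sigmoid(G @ W.T + b.T)                   # S35: (N_t, n_hidden)
    beta = np.linalg.pinv(H) @ T                 # S36: Moore-Penrose solution
    return W, b, beta

def elm_predict(W, b, beta, g):
    """Inference: network output for one metric vector, offset = argmax |o|^2."""
    h = sigmoid(W @ g + b.ravel())
    o = h @ beta
    return int(np.argmax(np.abs(o) ** 2))

# Toy problem: noiseless one-hot metric vectors (an idealized peak at the
# true offset), M = 8, which the least-squares fit reproduces exactly.
M, N_t = 8, 400
rng = np.random.default_rng(2)
taus = rng.integers(0, M, N_t)
G = np.eye(M)[taus]
T = np.eye(M)[taus]                              # S33: one-hot labels
W, b, beta = train_elm(G, T, n_hidden=4 * M)
pred = elm_predict(W, b, beta, G[0])
```

The closed-form solve is why ELM training is fast: only β is learned, while W and b stay at their random draws.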
According to some preferred embodiments of the present invention, the estimation and equalization sub-network EstEqu-Net comprises:
1 input layer, r_H hidden layers, and 1 output layer; the numbers of input-layer and output-layer nodes both equal the training sequence length M, and the numbers of nodes of the hidden layers are, in order, l_1M, l_2M, …, l_(r_H)M, with l_i ≥ 2, i = 1, …, r_H, where r_H ≥ 2; the hidden layers all use the Leaky ReLU function as activation function, and the loss function of the estimation and equalization sub-network EstEqu-Net is the mean square error loss function.
According to some preferred embodiments of the present invention, the frame synchronization estimation signal x̂_est and the transmitted frame signal vector x are used as training input and training label to form the training set {(x̂_est, x)}; the network is trained on this set, and after the error converges the network model and parameters are saved to obtain the trained network.
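For orientation, a forward pass of an EstEqu-Net-style network with two Leaky ReLU hidden layers. Stacking real and imaginary parts so real-valued layers can process complex samples is an assumption; the patent does not state how complex inputs are fed to the network.

```python
import numpy as np

def leaky_relu(z, slope=0.01):
    return np.where(z > 0, z, slope * z)

def estequ_forward(params, x_est):
    """Forward pass: complex input -> stacked reals -> layers -> complex output."""
    v = np.concatenate([x_est.real, x_est.imag])
    for W, b in params[:-1]:
        v = leaky_relu(W @ v + b)      # hidden layers: Leaky ReLU
    W, b = params[-1]
    v = W @ v + b                      # linear output layer
    n = len(v) // 2
    return v[:n] + 1j * v[n:]

def init_params(M, n_hidden, n_layers=2, seed=3):
    """Random initial weights; training by MSE backprop is omitted here."""
    rng = np.random.default_rng(seed)
    dims = [2 * M] + [n_hidden] * n_layers + [2 * M]
    return [(0.1 * rng.standard_normal((o, i)), np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

M = 4
params = init_params(M, n_hidden=2 * (2 * M))
x_hat = estequ_forward(params, np.ones(M) + 0j)
```

In practice the MSE training against label x would be done with an autodiff framework; only the architecture is sketched here.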
According to some preferred embodiments of the present invention, the obtaining of the demodulated data from the transmitted-frame-signal estimate comprises:
S51, eliminating the superimposed sequence from the transmitted-frame-signal estimate x̂ to obtain the estimated data sequence ĉ, as follows:
ĉ = (x̂ - αs)/(1-α);
S52, demodulating the estimated data sequence ĉ to obtain the detected data.
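Steps S51-S52 invert the superposition and make hard decisions. A round-trip sketch; BPSK hard decisions are illustrative, since the patent does not fix a modulation scheme.

```python
import numpy as np

def recover_data(x_hat, s, alpha):
    """S51: remove the superimposed training sequence, c_hat = (x_hat - alpha*s)/(1 - alpha)."""
    return (np.asarray(x_hat) - alpha * np.asarray(s)) / (1.0 - alpha)

def demod_bpsk(c_hat):
    """S52 sketch: hard-decision demodulation (BPSK assumed for the demo)."""
    return np.where(np.asarray(c_hat).real >= 0, 1, -1)

# Round trip: superimpose, then cancel the training sequence and demodulate.
rng = np.random.default_rng(4)
M, alpha = 8, 0.2
s = np.exp(2j * np.pi * rng.random(M))
bits = 2 * rng.integers(0, 2, M) - 1
c = bits.astype(complex)
x = alpha * s + (1 - alpha) * c
d = demod_bpsk(recover_data(x, s, alpha))
```

With a perfect estimate x̂ = x the cancellation is exact; in the real method x̂ comes from EstEqu-Net, so residual estimation error propagates into ĉ.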
By the superposition (SC) technique for training sequences, the invention uses spectrum resources efficiently; by machine learning (ML), for example an ELM network, it effectively solves the signal synchronization problem under nonlinear distortion.
The method combines the advantages of the SC and ML techniques: at the receiving end, the synchronization metric characteristics of the superimposed training sequence are learned by the ELM network model, so the frame synchronization offset position is estimated accurately; from the frame starting point, the transmitted frame signal is then obtained through an estimation model built on a deep neural network and the data sequence is detected. The method thus reduces the occupation of spectrum resources while improving the frame synchronization error probability performance of a system, particularly a nonlinear distortion system, and brings more implementable schemes to frame synchronization research.
Drawings
FIG. 1 is a flow chart of the operation of one embodiment of the present invention.
Fig. 2 is a training flowchart of the frame synchronization network FSN-Net according to an embodiment of the present invention.
Detailed Description
The present invention is described in detail below with reference to the following embodiments and the attached drawings, but it should be understood that the embodiments and the attached drawings are only used for the illustrative description of the present invention and do not limit the protection scope of the present invention in any way. All reasonable variations and combinations that fall within the spirit of the invention are intended to be within the scope of the invention.
According to the technical scheme of the invention, a specific implementation is shown in fig. 1 and comprises the following steps:
S1, the receiver receives the transmitted frame signal generated by the transmitter in a superimposed-training-sequence mode, forming the online received signal vector y ∈ ℂ^(M×1) of length M; the transmitted frame signal x ∈ ℂ^(M×1) is obtained by superimposing a training sequence, as follows:
x = αs + (1-α)c;
where α denotes the superposition factor, which can be set according to engineering experience; s ∈ ℂ^(M×1) denotes a training sequence of length M; c ∈ ℂ^(M×1) denotes a modulated data sequence of length M; ℂ^(M×1) denotes the M-dimensional complex field.
S2, the online received signal vector y is preprocessed to obtain the normalized cross-correlation metric vector γ̄ of y with the training sequence.
Specifically, the preprocessing comprises:
S21, splicing two frames of the same training sequence s into a double training sequence s̃ ∈ ℂ^(2M×1) of length 2M, as follows:
s̃ = [s^T, s^T]^T;
S22, truncating length-M sequences from the double training sequence s̃ in order, generating the truncated sequences s_t, t = 0, 1, …, M-1, as follows:
s_t = [s̃_t, s̃_(t+1), …, s̃_(t+M-1)]^T;
S23, obtaining by cross-correlation processing the cross-correlation metric Γ_t between the truncated sequence s_t and the received signal vector y, as follows:
Γ_t = |s_t^H y|;
S24, collecting the M cross-correlation metrics Γ_t to construct the cross-correlation metric vector γ, as follows:
γ = [Γ_0, Γ_1, …, Γ_(M-1)]^T, where γ ∈ ℝ^(M×1) and ℝ^(M×1) denotes the M-dimensional real field;
S25, normalizing the cross-correlation metric vector γ to obtain the normalized cross-correlation metric vector γ̄, as follows:
γ̄ = γ/‖γ‖;
where the superscript T denotes transposition, the superscript H denotes conjugate transposition, and ‖γ‖ denotes the Frobenius norm of the metric vector γ.
S3, the obtained normalized cross-correlation metric vector γ̄ is input into the trained frame synchronization network FSN-Net to obtain the frame synchronization estimate τ̂ and, from it, the frame synchronization estimation signal x̂_est.
The frame synchronization network FSN-Net can use the following network model:
1 input layer, 1 hidden layer, and 1 output layer; the numbers of input-layer and output-layer nodes both equal the sequence length M, and the number of hidden-layer nodes is Ñ = mM, where the value of m is set according to engineering experience; the activation function of the hidden layer is a sigmoid function.
The frame synchronization network FSN-Net can be trained through the process shown in fig. 2, which specifically comprises:
S31, collecting N_t received-signal sample sequences y_i ∈ ℂ^(M×1) of length M, i = 1, 2, …, N_t, and constructing the sample sequence set {y_i};
S32, processing each signal sequence y_i of the sample sequence set as in steps S21-S25, obtaining the normalized cross-correlation metric vectors γ̄_i, which form the normalized cross-correlation metric set {γ̄_i};
S33, obtaining from the synchronization offset values τ_i, i = 1, 2, …, N_t, by one-hot encoding, the label sequences T_i, i = 1, 2, …, N_t, corresponding to the sample sequences, forming the label set T; τ_i can be obtained from a statistical channel model or from the actual scene, in combination with existing methods or equipment; the one-hot label T_i is the length-M vector whose τ_i-th entry is 1 and whose other entries are 0;
S34, generating from a Gaussian random distribution the input weights W ∈ ℝ^(Ñ×M) and bias b ∈ ℝ^(Ñ×1) applied to each normalized cross-correlation metric vector, and forming, with γ̄_i as the input of the FSN-Net input layer, the corresponding hidden-layer output h_i, as follows:
h_i = σ(Wγ̄_i + b);
where σ(·) denotes the activation function;
S35, collecting the N_t hidden-layer outputs to form the output matrix H, as follows:
H = [h_1, h_2, …, h_(N_t)]^T;
S36, obtaining the output weights β from the hidden-layer output matrix H and the label set T by:
β = H†T;
where H† denotes the Moore-Penrose pseudoinverse of H;
S37, saving the model parameters W, b and β to obtain the trained frame synchronization network FSN-Net.
S38, a received-signal sample sequence ỹ ∈ ℂ^(2M×1) of length 2M is collected; starting from the starting point of ỹ, a sequence of length M is truncated to obtain the online received-signal sample sequence y_online ∈ ℂ^(M×1); y_online is preprocessed according to steps S21-S25 to obtain the online normalized cross-correlation metric vector γ̄_online; γ̄_online is passed through the learned frame synchronization network FSN-Net to obtain the output vector O, as follows:
O = β^T σ(Wγ̄_online + b);
S39, the index position of the maximum squared magnitude in the output vector O is found, i.e. the frame synchronization estimate τ̂, as follows:
τ̂ = argmax_k |O_k|², k = 0, 1, …, M-1;
S310, from the frame synchronization estimate τ̂ and the received-signal sample sequence ỹ, the frame synchronization estimation signal x̂_est is obtained, as follows:
x̂_est = [ỹ_τ̂, ỹ_(τ̂+1), …, ỹ_(τ̂+M-1)]^T.
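The online decision in S39-S310 is a plain argmax followed by a slice. A sketch with a mock network output (the helper names are illustrative):

```python
import numpy as np

def estimate_offset(o):
    """S39: index of the maximum squared magnitude of the network output."""
    return int(np.argmax(np.abs(np.asarray(o)) ** 2))

def extract_frame(y_tilde, tau_hat, M):
    """S310: slice the length-M frame starting at the estimated offset
    out of the length-2M collected sample buffer."""
    assert 0 <= tau_hat < M and len(y_tilde) >= tau_hat + M
    return y_tilde[tau_hat:tau_hat + M]

# With M = 4: a mock FSN-Net output peaking at index 2, and a 2M sample buffer.
o = np.array([0.1, -0.2, 0.9, 0.05])
tau_hat = estimate_offset(o)
y_tilde = np.arange(8, dtype=complex)
frame = extract_frame(y_tilde, tau_hat, 4)
```

Collecting 2M samples guarantees that a full frame is available for any offset 0 ≤ τ̂ ≤ M-1, which is why the buffer is twice the frame length.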
s4 synchronizing signal obtained
Figure BDA00028428210600000715
Inputting the estimation and equalization sub-network EstEqu-Net after training to obtain the estimation value of the transmission frame signal x
Figure BDA00028428210600000716
Wherein, estimating and balancing the subnetwork EstEqu-Net can use the following deep neural network:
1 input layer, 2 hidden layers and 1 output layer. The number of nodes of the input layer is M, the number of nodes of the 2 hidden layers is nM, and the number of nodes of the output layer is M;
the value of n can be set according to engineering experience, and the 2 hidden layers all adopt a ReLU function as an activation function; the loss function of the network is a mean square error loss function.
The training of the estimation and equalization sub-network EstEqu-Net comprises:
using the frame synchronization estimation signal x̂_est and the transmitted frame signal vector x as training input and training label to form the training set {(x̂_est, x)}; the network is trained, and after the error converges the network model and parameters are saved to obtain the trained network.
s5 uses the estimated value of the transmitted frame signal obtained by estimating and equalizing the sub-network EstEqu-Net
Figure BDA00028428210600000719
Eliminating superimposed training sequence and demodulating demodulation data in transmitting frame signal
Figure BDA00028428210600000720
Specifically, it may further comprise the steps of:
s51 transmitting frame signal estimated value
Figure BDA0002842821060000081
Eliminating the superposed sequence to obtain an estimated data sequence
Figure BDA0002842821060000082
To representComprises the following steps:
Figure BDA0002842821060000083
wherein, alpha represents a superposition factor,
Figure BDA0002842821060000084
represents a training sequence of length M;
s52 pairs of estimated data sequences
Figure BDA0002842821060000085
Demodulating to obtain demodulated data
Figure BDA0002842821060000086
Example 1:
Frame synchronization is performed by the process of the detailed embodiment, wherein:
S1 sets M = 64, α = 0.2, N_t = 10^5, m = 2 and n = 2. The receiver receives the transmitted frame signal generated by the transmitter in a superimposed-training-sequence mode, forming the online received signal vector y of length 64.
The transmitted frame signal vector x is the sequence:
x = [x_0, x_1, …, x_63]^T;
the training sequence s, as follows:
s = [s_0, s_1, …, s_63]^T;
the data sequence c, as follows:
c = [c_0, c_1, …, c_63]^T.
s2 is based on the setting of S1, and assumes that the received signal vector y is as follows:
y=[y0,y1,…,y63]T
then the cross correlation degree is normalizedVector of quantities
Figure BDA0002842821060000088
The following were used:
Figure BDA0002842821060000089
dual training sequences
Figure BDA00028428210600000810
The following were used:
s=[s0,s1,…,s63,s0,…,s63]T
assuming that t is 3, the generated truncated sequence
Figure BDA00028428210600000811
As follows
Figure BDA00028428210600000812
Hypothetical normalized cross-correlation metric vector
Figure BDA00028428210600000813
The following were used:
Figure BDA00028428210600000814
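The t = 3 truncation above can be checked mechanically: the length-M window starting at offset 3 of the doubled sequence is the cyclic shift [s_3, …, s_63, s_0, s_1, s_2]. The integer stand-in symbols below are placeholders, not actual training symbols.

```python
import numpy as np

# Verify the worked example's truncated sequence for t = 3 and M = 64.
M = 64
s = np.arange(M)                  # stand-in symbols; values are placeholders
s2 = np.concatenate([s, s])       # S21: double training sequence
s_3 = s2[3:3 + M]                 # S22: window starting at offset 3
expected = np.roll(s, -3)         # cyclic shift [s_3, ..., s_63, s_0, s_1, s_2]
```

This cyclic-shift structure is what lets a single training sequence cover all M candidate offsets.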
s3 according to the setting of S1, the network model of the frame synchronization network FSN-Net is an ELM model, as follows:
1 input layer, 1 hidden layer, 1 output layer; the number of nodes of the input layer and the output layer is 64, and the number of nodes of the hidden layer is 128; the activation function of the hidden layer is a sigmoid function.
The frame synchronization network FSN-Net training process specifically comprises:
S31, collecting 10^5 received-signal sample sequences y_i of length 64, i = 1, 2, …, 10^5, and constructing the sample sequence set {y_i};
S32, preprocessing each signal sequence y_i of the sample sequence set according to steps S21-S25 to obtain the normalized cross-correlation metric vectors γ̄_i, which form the normalized cross-correlation metric set {γ̄_i};
S33, obtaining from the synchronization offset values τ_i, i = 1, 2, …, 10^5, by one-hot encoding, the label sequences T_i corresponding to the sample sequences, forming the label set T; τ_i can be obtained from a statistical channel model or from the actual scene, in combination with existing methods or equipment; the one-hot label T_i is the length-64 vector whose τ_i-th entry is 1;
S34, generating from a Gaussian random distribution the input weights W and bias b applied to each normalized cross-correlation metric vector, and forming, with γ̄_i as the input of the FSN-Net input layer, the corresponding hidden-layer output, as follows:
h_i = σ(Wγ̄_i + b);
S35, collecting the 10^5 hidden-layer outputs to form the output matrix H, as follows:
H = [h_1, h_2, …, h_(10^5)]^T;
S36, obtaining the output weights β from the hidden-layer output matrix H and the label set T by:
β = H†T;
where H† denotes the Moore-Penrose pseudoinverse of H;
S37, saving the model parameters W, b and β to obtain the trained frame synchronization network FSN-Net.
S38, a received-signal sample sequence ỹ of length 128 is collected; starting from the starting point of ỹ, a sequence of length 64 is truncated to obtain the online received-signal sample sequence y_online; y_online is preprocessed according to steps S21-S25 to obtain the online normalized cross-correlation metric vector γ̄_online; γ̄_online is passed through the learned frame synchronization network FSN-Net to obtain the output vector O, as follows:
O = β^T σ(Wγ̄_online + b);
S39, the index position of the maximum squared magnitude in the output vector O is found, i.e. the frame synchronization estimate τ̂, as follows:
τ̂ = argmax_k |O_k|², k = 0, 1, …, 63;
S310, from the frame synchronization estimate τ̂ and the received-signal sample sequence ỹ, the frame synchronization estimation signal x̂_est is obtained, as follows:
x̂_est = [ỹ_τ̂, ỹ_(τ̂+1), …, ỹ_(τ̂+63)]^T.
S4, according to the setting of S1, the estimation and equalization sub-network EstEqu-Net can use the following deep neural network:
1 input layer, 2 hidden layers and 1 output layer; the number of input-layer nodes is 64, the number of nodes of each of the 2 hidden layers is 128, and the number of output-layer nodes is 64; the 2 hidden layers both adopt the ReLU function as activation function; the loss function of the network is the mean square error loss function.
The training of the estimation and equalization sub-network EstEqu-Net comprises:
using the frame synchronization estimation signal x̂_est and the transmitted frame signal vector x as training input and training label to form the training set {(x̂_est, x)}; the network is trained, and after the error converges the network model and parameters are saved to obtain the trained network.
S5, based on the setting of S1, assume the transmitted-frame-signal estimate x̂ is the sequence:
x̂ = [x̂_0, x̂_1, …, x̂_63]^T;
the estimated data sequence ĉ obtained from it is assumed to be:
ĉ = [ĉ_0, ĉ_1, …, ĉ_63]^T;
and the demodulated data sequence obtained by demodulation is assumed to be:
c̃ = [c̃_0, c̃_1, …, c̃_63]^T.
the above examples are merely preferred embodiments of the present invention, and the scope of the present invention is not limited to the above examples. All technical schemes belonging to the idea of the invention belong to the protection scope of the invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention, and such modifications and embellishments should also be considered as within the scope of the invention.

Claims (9)

1. A machine learning superimposed training sequence frame synchronization method, characterized by comprising the following steps:
preprocessing the transmitted frame signal, generated by the transmitter in a superimposed-training-sequence mode and received by the receiver, to obtain its normalized metric vector;
inputting the normalized metric vector into a trained frame synchronization network FSN-Net to obtain a frame synchronization estimate and realize frame synchronization;
inputting the frame synchronization estimation signal obtained from the frame synchronization estimate into a trained estimation and equalization sub-network EstEqu-Net to obtain an estimate of the transmitted frame signal;
eliminating the superimposed training sequence and demodulating the data in the transmitted frame signal, using the transmitted-frame-signal estimate and the superposition mode;
wherein the frame synchronization network FSN-Net is constructed based on an ELM network model, and the estimation and equalization sub-network EstEqu-Net is constructed based on a deep neural network.
2. The frame synchronization method of claim 1, wherein the transmitted frame signal is obtained by superimposing a training sequence, as follows:
x = αs + (1-α)c;
where α denotes the superposition factor and is set by engineering experience, s ∈ ℂ^(M×1) denotes a training sequence of length M, c ∈ ℂ^(M×1) denotes a modulated data sequence of length M, and ℂ^(M×1) denotes the M-dimensional complex field.
3. The frame synchronization method of claim 1, wherein the obtaining of the normalized metric vector comprises:

S21, splicing two frames of the same training sequence s into a double training sequence s̄ of length 2M, as follows:

s̄ = [s^T, s^T]^T;

S22, sequentially intercepting sequences of length M from the double training sequence s̄ to generate the truncated sequences s_t, as follows:

s_t = [s̄_t, s̄_{t+1}, …, s̄_{t+M−1}]^T, t = 0, 1, …, M − 1;

S23, obtaining, through cross-correlation processing, the cross-correlation metric Γ_t between the truncated sequence s_t and the received signal vector y, as follows:

Γ_t = |s_t^H y|;

S24, collecting the M cross-correlation metrics Γ_t to construct the cross-correlation metric vector γ, as follows:

γ = [Γ_0, Γ_1, …, Γ_{M−1}]^T, γ ∈ ℝ^(M×1);

wherein ℝ^(M×1) denotes the M-dimensional real field;

S25, normalizing the cross-correlation metric vector γ to obtain the normalized cross-correlation metric vector γ̃, as follows:

γ̃ = γ / ‖γ‖;

wherein the superscript T denotes the transposition operation, the superscript H denotes the conjugate transposition operation, and ‖γ‖ denotes the Frobenius norm of the metric vector γ.
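Steps S21–S25 can be sketched directly in NumPy. The magnitude |s_t^H y| in S23 is an assumption about how the complex correlation becomes the real-valued metric the claim places in ℝ^M; the doubled sequence makes each truncated s_t a cyclic shift of s:

```python
import numpy as np

def normalized_metric_vector(y, s):
    """Steps S21-S25: normalized cross-correlation metric vector."""
    M = len(s)
    s_bar = np.concatenate([s, s])              # S21: double sequence, length 2M
    gamma = np.empty(M)
    for t in range(M):                          # S22: truncated sequences s_t
        s_t = s_bar[t:t + M]
        gamma[t] = abs(np.vdot(s_t, y))         # S23: Gamma_t = |s_t^H y| (assumed)
    return gamma / np.linalg.norm(gamma)        # S24-S25: collect and normalize

# Noiseless toy example: the received vector is s cyclically advanced by tau
rng = np.random.default_rng(1)
M, tau = 32, 5
s = np.exp(1j * 2 * np.pi * rng.random(M))
y = np.roll(s, -tau)
g = normalized_metric_vector(y, s)
tau_hat = int(np.argmax(g))                     # metric peaks at the offset tau
```

Note that `np.vdot` conjugates its first argument, which implements the s_t^H y inner product directly.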
4. The frame synchronization method of claim 1, wherein the frame synchronization network FSN-Net comprises: 1 input layer, 1 hidden layer and 1 output layer; the numbers of nodes of the input layer and of the output layer are equal to the training sequence length M, and the number of hidden-layer nodes is Ñ, wherein Ñ is determined by a parameter m set according to engineering experience; the activation function of the hidden layer is the sigmoid function.
5. The frame synchronization method of claim 4, wherein the training of the frame synchronization network FSN-Net comprises:

S31, collecting N_t received-signal sample sequences y_i of length M, i = 1, 2, …, N_t, and constructing the sample sequence set {y_i}, i = 1, 2, …, N_t;

S32, preprocessing each signal sequence y_i of the sample sequence set according to steps S21–S25 to obtain the normalized cross-correlation metric vectors γ̃_i, which form the normalized cross-correlation metric set {γ̃_i}, i = 1, 2, …, N_t;

S33, obtaining, based on the synchronization offset values τ_i, i = 1, 2, …, N_t, the label sequences T_i corresponding to the sample sequences through one-hot coding, forming the label set T = [T_1, T_2, …, T_{N_t}]^T; wherein τ_i can be obtained from a statistical channel model or from the actual scene by combining existing methods or equipment; the label T_i is obtained by one-hot coding as follows:

T_i = [0, …, 0, 1, 0, …, 0]^T, with the single 1 located at position τ_i;

S34, generating, based on a Gaussian random distribution, the input weight matrix W and the bias vector b; taking each normalized cross-correlation metric vector γ̃_i as the input of the FSN-Net input layer and forming the corresponding hidden-layer output h_i, as follows:

h_i = σ(W γ̃_i + b);

wherein σ(·) denotes the activation function;

S35, collecting the N_t hidden-layer outputs to form the hidden-layer output matrix H, namely:

H = [h_1, h_2, …, h_{N_t}]^T;

S36, obtaining the output weight β from the hidden-layer output matrix H and the label set T by using the following equation:

β = H† T;

wherein H† denotes the Moore–Penrose pseudoinverse of H;

S37, saving the model parameters W, b and β to obtain the trained frame synchronization network FSN-Net.
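The ELM fit in steps S34–S36 is closed-form: random Gaussian input weights, a sigmoid hidden layer, and a single Moore–Penrose least-squares solve for β. The sketch below follows the standard ELM formulation with samples as matrix rows (the patent's exact matrix orientation is an assumption) and trains on a toy set of near-one-hot metric vectors:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_fsn_net(Gamma, T, n_hidden, seed=0):
    """S34-S36: Gamma (N_t x M) rows are normalized metric vectors,
    T (N_t x M) rows are one-hot labels at the offsets tau_i."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_hidden, Gamma.shape[1]))  # S34: Gaussian weights
    b = rng.standard_normal(n_hidden)                    # S34: Gaussian biases
    H = sigmoid(Gamma @ W.T + b)                         # S35: hidden output matrix
    beta = np.linalg.pinv(H) @ T                         # S36: beta = pinv(H) @ T
    return W, b, beta                                    # S37: saved parameters

# Toy training set: metric vectors peaking at the true offset, plus noise
M, N_t = 16, 320
rng = np.random.default_rng(4)
tau = np.arange(N_t) % M
Gamma = np.eye(M)[tau] + 0.05 * rng.standard_normal((N_t, M))
Gamma /= np.linalg.norm(Gamma, axis=1, keepdims=True)
T = np.eye(M)[tau]                                       # S33: one-hot labels

W, b, beta = train_fsn_net(Gamma, T, n_hidden=64)
train_acc = np.mean(np.argmax(sigmoid(Gamma @ W.T + b) @ beta, axis=1) == tau)
```

Because only β is learned, there is no gradient descent at all, which is what makes the FSN-Net training step cheap compared with a deep network.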
6. The frame synchronization method of claim 1, wherein the obtaining of the frame synchronization estimation signal comprises:

S31, collecting a received-signal sample sequence ỹ of length 2M; intercepting, starting from the starting point of ỹ, a sequence of length M to obtain the online received-signal sample sequence y_online; preprocessing y_online according to steps S21–S25 to obtain the online normalized cross-correlation metric vector γ̃_online; feeding γ̃_online into the trained frame synchronization network FSN-Net to learn the output vector O, as follows:

O = β^T σ(W γ̃_online + b);

S32, finding the index position of the maximum squared amplitude in the output vector O, i.e., the frame synchronization estimation value τ̂, as follows:

τ̂ = argmax_j |O_j|², j = 0, 1, …, M − 1;

S33, obtaining the frame synchronization estimation signal from the frame synchronization estimation value τ̂ and the received-signal sample sequence ỹ, as follows:

[ỹ_τ̂, ỹ_{τ̂+1}, …, ỹ_{τ̂+M−1}]^T.
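The online procedure of claim 6 can be sketched as below. The predictor is passed in as a callable so that the same function works with the trained FSN-Net (O = β^T σ(Wγ̃ + b)); the identity predictor used in the demo simply exposes the raw metric peak and is an assumption made for the example, not part of the claim:

```python
import numpy as np

def frame_sync(y_tilde, s, predict):
    """Claim 6: estimate tau_hat from a 2M-sample window y_tilde and
    return the re-aligned M-sample frame y_tilde[tau_hat:tau_hat+M]."""
    M = len(s)
    y_online = y_tilde[:M]                                    # first M samples
    s_bar = np.concatenate([s, s])                            # S21
    gamma = np.array([abs(np.vdot(s_bar[t:t + M], y_online))  # S22-S23 (assumed |.|)
                      for t in range(M)])
    gamma /= np.linalg.norm(gamma)                            # S25
    O = predict(gamma)                                        # FSN-Net output vector
    tau_hat = int(np.argmax(np.abs(O) ** 2))                  # S32: max |O_j|^2
    return tau_hat, y_tilde[tau_hat:tau_hat + M]              # S33

# Demo: periodic training stream observed with a phase advance of tau
rng = np.random.default_rng(5)
M, tau = 32, 7
s = np.exp(1j * 2 * np.pi * rng.random(M))
y_tilde = np.array([s[(i + tau) % M] for i in range(2 * M)])
tau_hat, y_sync = frame_sync(y_tilde, s, predict=lambda g: g)
```

In the full method, the learned mapping from metric vector to one-hot offset label is what allows FSN-Net to outperform the raw correlation peak under noise and distortion.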
7. The frame synchronization method of claim 1, wherein the estimation and equalization sub-network EstEqu-Net comprises: 1 input layer, r_H hidden layers and 1 output layer, wherein r_H ≥ 2; the numbers of nodes of the input layer and of the output layer are equal to the training sequence length M, and the number of nodes of each hidden layer is set in turn according to the network design; the hidden layers take the Leaky ReLU function as the activation function, and the loss function of the estimation and equalization sub-network EstEqu-Net is the mean-square-error loss function.
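The EstEqu-Net shape of claim 7 can be sketched as a plain NumPy forward pass. The linear output layer and the real-valued signals are assumptions (the claim specifies only the hidden-layer activation and the MSE loss; complex signals would in practice be split into real and imaginary parts):

```python
import numpy as np

def leaky_relu(z, slope=0.01):
    return np.where(z > 0, z, slope * z)

def estequ_net_forward(y_sync, weights, biases):
    """Forward pass of an EstEqu-Net-shaped MLP: r_H >= 2 Leaky-ReLU
    hidden layers between an M-node input and an M-node output."""
    a = y_sync
    for W, b in zip(weights[:-1], biases[:-1]):
        a = leaky_relu(W @ a + b)          # hidden layers: Leaky ReLU
    return weights[-1] @ a + biases[-1]    # output layer (assumed linear)

def mse_loss(x_hat, x):
    return float(np.mean((x_hat - x) ** 2))

# Shape check: M = 16 inputs/outputs, r_H = 2 hidden layers of 32 nodes
M, dims = 16, [16, 32, 32, 16]
rng = np.random.default_rng(2)
weights = [0.1 * rng.standard_normal((dims[i + 1], dims[i]))
           for i in range(len(dims) - 1)]
biases = [np.zeros(dims[i + 1]) for i in range(len(dims) - 1)]
x_hat = estequ_net_forward(rng.standard_normal(M), weights, biases)
```

Training (claim 8) would minimize `mse_loss` between this output and the transmission frame signal x by gradient descent in any deep-learning framework.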
8. The frame synchronization method of claim 7, wherein the training of the estimation and equalization sub-network EstEqu-Net comprises: taking the frame synchronization estimation signal as the training input and the transmission frame signal vector x as the training label, training the network, and saving the network model and parameters after the error converges, so as to obtain the trained network.
9. The frame synchronization method of claim 1, wherein eliminating the superimposed training sequence and demodulating the data in the transmission frame signal by using the transmission frame signal estimated value x̂ obtained by the estimation and equalization sub-network EstEqu-Net comprises:

S51, eliminating the superimposed sequence from the transmission frame signal estimated value x̂ to obtain the estimated data sequence ĉ, as follows:

ĉ = (x̂ − αs) / (1 − α);

wherein α denotes the superposition factor and s ∈ ℂ^(M×1) denotes the training sequence of length M;

S52, demodulating the estimated data sequence ĉ to obtain the demodulated data.
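Steps S51–S52 reduce to one line of algebra plus a symbol decision. In the sketch below, the QPSK mapping and the random training sequence are illustrative assumptions, and x̂ is taken as the exact transmitted signal (i.e., an ideal EstEqu-Net output); only the de-superposition formula comes from the claim:

```python
import numpy as np

M, alpha = 64, 0.3
rng = np.random.default_rng(3)
s = np.exp(1j * 2 * np.pi * rng.random(M))                 # training sequence
bits = rng.integers(0, 2, size=(M, 2))
c = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)

x_hat = alpha * s + (1 - alpha) * c                        # ideal estimate of x

# S51: remove the superimposed training sequence
c_hat = (x_hat - alpha * s) / (1 - alpha)

# S52: QPSK demodulation by sign decisions recovers the bits
bits_hat = np.stack([np.real(c_hat) < 0,
                     np.imag(c_hat) < 0], axis=1).astype(int)
```

With a noisy x̂ the same two steps apply; only the sign decisions in S52 then carry a nonzero error probability.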
CN202011498196.3A 2020-12-17 2020-12-17 Machine learning superimposed training sequence frame synchronization method Active CN112688772B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011498196.3A CN112688772B (en) 2020-12-17 2020-12-17 Machine learning superimposed training sequence frame synchronization method


Publications (2)

Publication Number Publication Date
CN112688772A (en) 2021-04-20
CN112688772B (en) 2022-08-26

Family

ID=75448856

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011498196.3A Active CN112688772B (en) 2020-12-17 2020-12-17 Machine learning superimposed training sequence frame synchronization method

Country Status (1)

Country Link
CN (1) CN112688772B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114096000A (en) * 2021-11-18 2022-02-25 西华大学 Joint frame synchronization and channel estimation method based on machine learning
CN114157544A (en) * 2021-12-07 2022-03-08 中南大学 Frame synchronization method, device and medium based on convolutional neural network
CN117295149A (en) * 2023-11-23 2023-12-26 西华大学 Frame synchronization method and system based on low-complexity ELM assistance

Citations (5)

Publication number Priority date Publication date Assignee Title
CN101292481A (en) * 2005-09-06 2008-10-22 皇家飞利浦电子股份有限公司 Method and apparatus for estimating channel based on implicit training sequence
CN102291360A (en) * 2011-09-07 2011-12-21 西南石油大学 Superimposed training sequence based optical OFDM (Orthogonal Frequency Division Multiplexing) system and frame synchronization method thereof
US20130188679A1 (en) * 2012-01-20 2013-07-25 Chih-Peng Li Communication system having data-dependent superimposed training mechanisim and communication method thereof
CN110830112A (en) * 2019-10-16 2020-02-21 青岛海信电器股份有限公司 Visible light communication method and device
CN111970078A (en) * 2020-08-14 2020-11-20 西华大学 Frame synchronization method for nonlinear distortion scene


Non-Patent Citations (5)

Title
CHAOJIN QING; BIN CAI; QINGYAO YANG; JIAFAN WANG; CHUAN HUANG: "ELM-Based Superimposed CSI Feedback for FDD Massive MIMO System", IEEE Access *
CHAOJIN QING; WANG YU; BIN CAI; JIAFAN WANG; CHUAN HUANG: "ELM-Based Frame Synchronization in Burst-Mode Communication Systems With Nonlinear Distortion", IEEE Wireless Communications Letters *
MAO XING: "Research on Synchronization and Access Technology of Satellite OFDM Systems", China Master's Theses Full-text Database (Information Science and Technology) *
DONG LEI; QING CHAOJIN; YU WANG; CAI BIN; DU YANHONG: "DDST-Based Channel Estimation and Signal Detection", Computer Knowledge and Technology *
ZHAO HUI; WANG QINBO; WANG RUYAN; ZHANG HONG: "Time Synchronization Algorithm for FSO-OFDM Systems Based on Superimposed Training Sequences", Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition) *

Cited By (5)

Publication number Priority date Publication date Assignee Title
CN114096000A (en) * 2021-11-18 2022-02-25 西华大学 Joint frame synchronization and channel estimation method based on machine learning
CN114096000B (en) * 2021-11-18 2023-06-23 西华大学 Combined frame synchronization and channel estimation method based on machine learning
CN114157544A (en) * 2021-12-07 2022-03-08 中南大学 Frame synchronization method, device and medium based on convolutional neural network
CN117295149A (en) * 2023-11-23 2023-12-26 西华大学 Frame synchronization method and system based on low-complexity ELM assistance
CN117295149B (en) * 2023-11-23 2024-01-30 西华大学 Frame synchronization method and system based on low-complexity ELM assistance

Also Published As

Publication number Publication date
CN112688772B (en) 2022-08-26

Similar Documents

Publication Publication Date Title
CN112688772B (en) Machine learning superimposed training sequence frame synchronization method
JP2001517399A (en) Self-synchronous equalization method and system
CN109246039A (en) Soft-information iterative receiving method based on bidirectional time-domain equalization
CN108540419A (en) OFDM detection method against inter-subcarrier interference based on deep learning
CN113395225B (en) Universal intelligent processing method and device for directly receiving communication signal waveform to bit
CN108768585A (en) Deep-learning-based multi-user detection method for grant-free uplink NOMA systems
US8831128B2 (en) MIMO communication system signal detection method
CN111970078B (en) Frame synchronization method for nonlinear distortion scene
Aref et al. Deep learning-aided successive interference cancellation for MIMO-NOMA
CN106656881B (en) A kind of adaptive blind equalization method based on deviation compensation
KR102510513B1 (en) Deep learning based beamforming method and apparatus for the same
CN106130697A (en) Interference phase alignment method based on Bayesian estimation combined with inter-data-stream power allocation
Yang et al. Folded chirp-rate shift keying modulation for LEO satellite IoT
Wang et al. Online LSTM-based channel estimation for HF MIMO SC-FDE system
Ponnaluru et al. RETRACTED ARTICLE: Deep learning for estimating the channel in orthogonal frequency division multiplexing systems
Rahman et al. Deep learning based pilot assisted channel estimation for Rician fading massive MIMO uplink communication system
CN105723783A (en) Synchronization signal transmitting device, receiving device, method, and system
CN104868962B (en) Frequency spectrum detecting method and device based on compressed sensing
CN101651643B (en) Blind equalization method for wavelet neural network based on space diversity
CN109286587A (en) Multi-active generalized spatial modulation detection method
CN110944002B (en) Physical layer authentication method based on exponential average data enhancement
CN112491754A (en) Channel estimation and signal detection method based on DDST and deep learning
Bilbao et al. Ai-based inter-tower communication networks: Challenges and benefits
CN103152300A (en) OFDM (orthogonal frequency division multiplexing) receiver based on pilot frequency and channel balance method thereof
CN114006797A (en) Multi-antenna equalization receiving method for high-speed video communication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210420

Assignee: Suining Feidian Cultural Communication Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000027

Denomination of invention: A Machine Learning Overlay Training Sequence Frame Synchronization Method

Granted publication date: 20220826

License type: Common License

Record date: 20231129

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210420

Assignee: Chengdu Suyouyun Information Technology Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000030

Denomination of invention: A Machine Learning Overlay Training Sequence Frame Synchronization Method

Granted publication date: 20220826

License type: Common License

Record date: 20231201

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20210420

Assignee: Chengdu Yingling Feifan Technology Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000032

Denomination of invention: A Machine Learning Overlay Training Sequence Frame Synchronization Method

Granted publication date: 20220826

License type: Common License

Record date: 20231212

Application publication date: 20210420

Assignee: Sichuan Shenglongxing Technology Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000031

Denomination of invention: A Machine Learning Overlay Training Sequence Frame Synchronization Method

Granted publication date: 20220826

License type: Common License

Record date: 20231211