CN111970078B - Frame synchronization method for nonlinear distortion scene - Google Patents

Frame synchronization method for nonlinear distortion scene

Info

Publication number
CN111970078B
CN111970078B (application CN202010821398.0A)
Authority
CN
China
Prior art keywords
frame synchronization
vector
sequence
output
online
Prior art date
Legal status: Active
Application number
CN202010821398.0A
Other languages
Chinese (zh)
Other versions
CN111970078A (en)
Inventor
卿朝进
余旺
董磊
杜艳红
唐书海
Current Assignee
Xihua University
Original Assignee
Xihua University
Priority date
Filing date
Publication date
Application filed by Xihua University
Priority to CN202010821398.0A
Publication of CN111970078A
Application granted
Publication of CN111970078B
Current legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04J: MULTIPLEX COMMUNICATION
    • H04J3/00: Time-division multiplex systems
    • H04J3/02: Details
    • H04J3/06: Synchronising arrangements
    • H04J3/0602: Systems characterised by the synchronising information used
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G06N3/048: Activation functions
    • G06N3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks


Abstract

The invention discloses a frame synchronization method for nonlinear distortion scenarios, comprising the following steps: collect N_t sample sequences of M frames, each of length N, y_i^{(1)}, y_i^{(2)}, …, y_i^{(M)}, i = 1, 2, …, N_t; weight and superpose them to obtain the superposed sample sequence y_i^{(S)}, i = 1, 2, …, N_t; preprocess the superposed sample sequence y_i^{(S)} to obtain the synchronization metric vector γ̄_i, i = 1, 2, …, N_t; construct an ELM-based network, build labels T_i, i = 1, 2, …, N_t, from the frame synchronization offsets of the received signals, and learn the network parameters; use the learned ELM network model to obtain the frame synchronization offset estimate τ̂. The invention improves frame synchronization performance in nonlinear distortion scenarios; compared with the traditional correlation method, its frame synchronization performance is greatly improved.

Description

Frame synchronization method for nonlinear distortion scene
Technical Field
The invention relates to the technical field of wireless communication frame synchronization, and in particular to a frame synchronization method for nonlinear distortion scenarios.
Background
As an important component of a communication system, the quality of the frame synchronization method directly affects the performance of the whole wireless communication system. However, wireless communication systems inevitably suffer nonlinear distortion, such as distortion from high-efficiency power amplifiers, analog-to-digital or digital-to-analog converters, and I/Q branch imbalance. Moreover, next-generation wireless communication systems (e.g., 6G systems) require low-cost, low-resolution devices (e.g., power amplifiers, AD samplers) to keep transceivers affordable, which makes nonlinear distortion especially prominent. Traditional frame synchronization methods (such as the correlation method) and most recent frame synchronization methods do not consider nonlinear distortion scenarios and are therefore difficult to apply under nonlinear distortion conditions. Machine learning has excellent ability to learn nonlinear distortion; however, machine-learning-based frame synchronization has been little studied and has not yet achieved good synchronization performance, so improvement is urgently needed.
Therefore, the invention uses a machine learning method and exploits inter-frame correlation prior information to form a frame synchronization method that improves the error-probability performance of frame synchronization. At the receiving end, the frames are first weighted and superposed as preprocessing, exploiting inter-frame correlation prior information and preliminarily capturing the frame synchronization metric features; then an ELM frame synchronization network is constructed and trained offline to estimate the frame synchronization offset; finally, the frame synchronization offset is estimated online from the preprocessed metrics using the learned ELM network parameters. For wireless communication scenarios with nonlinear distortion, such as 5G and 6G systems, the method improves the error-probability performance of frame synchronization, advances the intelligent processing level of frame synchronization, and provides implementable schemes for intelligent frame synchronization research, which is of great significance.
Disclosure of Invention
Compared with the traditional correlation-based synchronization method, the invention combines multi-frame weighted superposition with an ELM network and effectively improves frame synchronization performance in nonlinear distortion systems.
The specific scheme of the invention is as follows:
a frame synchronization method for a nonlinear distortion scene comprises the following steps:
(a) collect N_t sample sequences of M frames, each of length N: y_i^{(1)}, y_i^{(2)}, …, y_i^{(M)}, i = 1, 2, …, N_t;
(b) weight and superpose y_i^{(1)}, y_i^{(2)}, …, y_i^{(M)} to obtain the superposed sample sequence y_i^{(S)}, i = 1, 2, …, N_t;
(c) preprocess the superposed sample sequence y_i^{(S)} to obtain the standard metric vector γ̄_i;
(d) construct an ELM network, build labels T_i, i = 1, 2, …, N_t, from the frame synchronization offsets of the received signals, and learn the network parameters;
(e) use the learned ELM network model to obtain the frame synchronization offset estimate τ̂.
Further, the M-frame, N-long sample sequences obtained in step (a) can be represented as:
[equation image not reproduced in source]
where M and N are set according to engineering experience.
Further, the weighted superposition of step (b) is expressed as:
y_i^{(S)} = μ_1 y_i^{(1)} + μ_2 y_i^{(2)} + … + μ_M y_i^{(M)}
where μ_i, i = 1, 2, …, M, are weighting coefficients, set according to the received signal-to-noise ratio of each frame.
Further, the preprocessing of step (c) proceeds as follows:
(c1) cross-correlate the length-N_s observation sequence [y^{(S)}(t), …, y^{(S)}(t + N_s − 1)] taken from the superposed sample sequence y^{(S)} with the length-N_s training sequence x, obtaining the cross-correlation metric γ_t:
γ_t = | Σ_{n=0}^{N_s−1} y^{(S)}(t + n) x*(n) |
where the observation length N_s is set according to engineering experience;
t denotes the starting index position of the observed superposed sequence, t ∈ [1, K]; for example, t = 1 means observing the N_s-long sample sequence starting from the first element of y^{(S)};
K = N − N_s + 1 is the size of the search window;
(c2) from the K correlation metrics γ_1, γ_2, …, γ_K, construct the metric vector γ_i = [γ_{i,1}, γ_{i,2}, …, γ_{i,K}]; normalize the N_t metric vectors γ_i to obtain the standard metric vectors γ̄_i:
γ̄_i = γ_i / ‖γ_i‖
where N_t is set according to engineering experience, and ‖γ_i‖ denotes the Frobenius norm of the metric vector γ_i.
Further, the network model and parameters of step (d) are as follows:
the ELM network model comprises 1 input layer, 1 hidden layer, and 1 output layer; the number of input-layer nodes is K, the number of hidden-layer nodes is Ñ = mK, and the number of output-layer nodes is K; the hidden layer uses sigmoid as the activation function, and the preprocessed standard metric vectors γ̄_i serve as the input;
m is set according to engineering experience.
Further, the label construction of step (d) proceeds as follows:
from the synchronization offsets τ_i, i = 1, 2, …, N_t, form the label set T = [T_1, T_2, …, T_{N_t}];
each label T_i, i = 1, 2, …, N_t, is obtained by one-hot encoding the synchronization offset τ_i, i.e., T_i is a length-K vector whose τ_i-th entry is 1 and whose other entries are 0;
τ_i is determined from the received signal y_i, collected either according to a statistical channel model or from the actual scenario in combination with existing methods or equipment.
Further, the offline training of step (d) specifically comprises the following steps:
(d1) generate the weights W and bias b from a random distribution; feed the standard metric vectors γ̄_i, i = 1, 2, …, N_t, into the ELM network in turn; the hidden-layer output H_i is expressed as:
H_i = σ(W γ̄_i + b)
where σ(·) denotes the sigmoid activation function;
(d2) from the N_t hidden-layer outputs H_i obtained from the N_t metric vectors γ̄_i, construct the hidden-layer output matrix H = [H_1, H_2, …, H_{N_t}]; from the hidden-layer output matrix H and the label set T constructed in step (d), obtain the output weights β:
β = H† T
where H† denotes the Moore-Penrose pseudoinverse of H;
(d3) save the model parameters W, b, and β.
Further, the online operation of step (e) comprises the following steps:
(e1) receive M frames of N-long online sample sequences y_online^{(1)}, y_online^{(2)}, …, y_online^{(M)}; apply the superposition and preprocessing of steps (b) and (c) to obtain the online standard metric vector γ̄_online; feed γ̄_online into the ELM network model to learn the output vector O, expressed as:
O = σ(W γ̄_online + b) β
(e2) find the index position of the maximum squared magnitude in the output vector O, which is the frame synchronization estimate τ̂:
τ̂ = argmax_k |[O]_k|²
The beneficial effect of the invention is: frame synchronization performance in nonlinear distortion systems is improved.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a flowchart of ELM network offline training;
fig. 3 is a diagram of an on-line operation process of the ELM network.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following.
As shown in fig. 1, a method for frame synchronization of a non-linear distortion scene includes the following steps:
(a) collect N_t sample sequences of M frames, each of length N: y_i^{(1)}, y_i^{(2)}, …, y_i^{(M)}, i = 1, 2, …, N_t.
Specifically, the M-frame, N-long sample sequences of step (a) can be represented as:
[equation image not reproduced in source]
where M and N are set according to engineering experience.
(b) weight and superpose y_i^{(1)}, y_i^{(2)}, …, y_i^{(M)} to obtain the superposed sample sequence y_i^{(S)}, i = 1, 2, …, N_t.
Specifically, the weighted superposition of step (b) can be expressed as:
y_i^{(S)} = μ_1 y_i^{(1)} + μ_2 y_i^{(2)} + … + μ_M y_i^{(M)}
where μ_i, i = 1, 2, …, M, are weighting coefficients, set according to the received signal-to-noise ratio of each frame.
Example 1: the weighting coefficients are set as follows.
Suppose M = 3 and the signal-to-noise ratios of the 3 frames are α_1, α_2, α_3, respectively; each weighting coefficient μ_m is then set from these signal-to-noise ratios [formula image not reproduced in source].
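As a concrete sketch of step (b) and Example 1, the following Python code combines M received frames with SNR-based weights. The normalization μ_m = α_m / (α_1 + … + α_M) is an assumption standing in for the formula image lost from the source, and `snr_weights` / `weighted_superposition` are hypothetical helper names, not the patent's notation.

```python
import numpy as np

def snr_weights(snrs):
    """Weighting coefficients proportional to per-frame SNR.
    The sum-to-one normalization is an assumed convention; the
    patent only states that weights follow the per-frame SNR."""
    snrs = np.asarray(snrs, dtype=float)
    return snrs / snrs.sum()

def weighted_superposition(frames, weights):
    """y_i^(S) = mu_1*y^(1) + ... + mu_M*y^(M): combine M received
    frames (each of length N) into one superposed sample sequence."""
    frames = np.asarray(frames)    # shape (M, N)
    weights = np.asarray(weights)  # shape (M,)
    return weights @ frames        # shape (N,)
```

With equal-SNR frames this reduces to a plain average, which is the intuition behind the step: noise averages down while the frame-aligned training sequence adds coherently.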
(c) preprocess the superposed sample sequence y_i^{(S)} to obtain the synchronization metric vector γ̄_i, i = 1, 2, …, N_t.
Specifically, the preprocessing of step (c) proceeds as follows:
(c1) cross-correlate the length-N_s observation sequence [y^{(S)}(t), …, y^{(S)}(t + N_s − 1)] taken from the superposed sample sequence y^{(S)} with the length-N_s training sequence x, obtaining the cross-correlation metric γ_t:
γ_t = | Σ_{n=0}^{N_s−1} y^{(S)}(t + n) x*(n) |
where the observation length N_s is set according to engineering experience;
t denotes the starting index position of the observed superposed sequence, t ∈ [1, K]; for example, t = 1 means observing the N_s-long sample sequence starting from the first element of y^{(S)};
K = N − N_s + 1 is the size of the search window;
(c2) from the K correlation metrics γ_1, γ_2, …, γ_K, construct the metric vector γ_i = [γ_{i,1}, γ_{i,2}, …, γ_{i,K}]; normalize the N_t metric vectors γ_i to obtain the standard metric vectors γ̄_i:
γ̄_i = γ_i / ‖γ_i‖
where N_t is set according to engineering experience, and ‖γ_i‖ denotes the Frobenius norm of the metric vector γ_i.
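Steps (c1)-(c2) can be sketched as follows. The magnitude form of the sliding cross-correlation is an assumption (the source's formula image is not reproduced), and `metric_vector` is a hypothetical helper name.

```python
import numpy as np

def metric_vector(y_s, x_train):
    """Sliding cross-correlation metric gamma_t for t = 1..K with
    K = N - N_s + 1, followed by Frobenius-norm normalization.
    The |.| magnitude form is an assumed reconstruction of the
    patent's lost formula image."""
    y_s = np.asarray(y_s)
    x_train = np.asarray(x_train)
    n_s = len(x_train)
    k = len(y_s) - n_s + 1  # search-window size K
    # np.vdot conjugates its first argument, so complex training
    # sequences are handled correctly.
    gamma = np.array([abs(np.vdot(x_train, y_s[t:t + n_s]))
                      for t in range(k)])
    return gamma / np.linalg.norm(gamma)  # standard metric vector
```

The peak of the resulting vector sits at the window position where the training sequence aligns with the received frame, which is exactly the feature the ELM network is later trained to classify.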
(d) construct an ELM network, build labels T_i, i = 1, 2, …, N_t, from the frame synchronization offsets of the received signals, and learn the network parameters.
In an embodiment of the present application, the network model and parameters of step (d) are as follows:
the ELM network model comprises 1 input layer, 1 hidden layer, and 1 output layer; the number of input-layer nodes is K, the number of hidden-layer nodes is Ñ = mK, and the number of output-layer nodes is K; the hidden layer uses sigmoid as the activation function, and the preprocessed standard metric vectors γ̄_i serve as the input;
m is set according to engineering experience.
Specifically, the label construction of step (d) proceeds as follows:
from the synchronization offsets τ_i, i = 1, 2, …, N_t, form the label set T = [T_1, T_2, …, T_{N_t}];
each label T_i, i = 1, 2, …, N_t, is obtained by one-hot encoding the synchronization offset τ_i, i.e., T_i is a length-K vector whose τ_i-th entry is 1 and whose other entries are 0;
τ_i is determined from the received signal y_i, collected either according to a statistical channel model or from the actual scenario in combination with existing methods or equipment.
Example 2: the label of step (d) is exemplified as follows.
Let N = 64, τ_i = 5, N_t = 10^5.
The training label T_i is then the one-hot vector with a 1 in the 5th position and 0 elsewhere.
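A minimal sketch of the one-hot label construction in Example 2. The 1-based offset position and the label length equalling the search-window size K are assumed conventions; the patent's label image is not reproduced in the source.

```python
import numpy as np

def one_hot_label(tau, k):
    """One-hot label T_i for synchronization offset tau.
    tau is taken as a 1-based position within the length-K
    search window (an assumed indexing convention)."""
    label = np.zeros(k)
    label[tau - 1] = 1.0
    return label
```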
as shown in fig. 2, in the embodiment of the present application, the offline training process of step (d) of the method specifically includes the following steps:
(d1) generating weights from random distributions
Figure BDA0002634511090000064
And bias
Figure BDA0002634511090000065
Sequentially combining the standard metric vectors
Figure BDA0002634511090000066
Input to ELM network, hidden layer output
Figure BDA0002634511090000067
Expressed as:
Figure BDA0002634511090000068
the σ (-) represents an activation function sigmoid;
(d2) from N t Individual metric vector
Figure BDA0002634511090000069
Obtained N t A hidden layer output H i Constructing hidden layer output matrices
Figure BDA00026345110900000610
According to the hidden layer output matrix H and the label set T constructed in the step (d), the output weight is obtained
Figure BDA00026345110900000611
Figure BDA00026345110900000612
The above-mentioned
Figure BDA00026345110900000613
Moore-Penrose pseudoinverse representing H;
(d3) model parameters W, b and β are saved.
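Steps (d1)-(d3) can be sketched as follows, assuming real-valued metric vectors stacked row-wise; `elm_train` is a hypothetical name, and `n_hidden` stands in for the patent's m-based hidden-layer-size rule. The closed-form β = H† T uses NumPy's Moore-Penrose pseudoinverse.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(gamma_bar, labels, n_hidden, seed=0):
    """Offline ELM training sketch: random input weights W and bias b
    stay fixed; hidden outputs H_i = sigmoid(W @ gamma_i + b); output
    weights beta = pinv(H) @ T solve the least-squares fit in one step
    (no iterative backpropagation)."""
    rng = np.random.default_rng(seed)
    k = gamma_bar.shape[1]                  # input nodes = K
    w = rng.standard_normal((n_hidden, k))  # fixed random weights
    b = rng.standard_normal(n_hidden)       # fixed random bias
    h = sigmoid(gamma_bar @ w.T + b)        # (N_t, n_hidden)
    beta = np.linalg.pinv(h) @ labels       # (n_hidden, K)
    return w, b, beta
```

Because only β is solved for, training is a single pseudoinverse rather than gradient descent, which is the main attraction of the ELM approach here.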
(e) use the ELM network model obtained by learning to obtain the frame synchronization offset estimate τ̂.
As shown in fig. 3, in the embodiment of the present application, the online operation of step (e) specifically comprises the following steps:
(e1) receive M frames of N-long online sample sequences y_online^{(1)}, y_online^{(2)}, …, y_online^{(M)}; apply the superposition and preprocessing of steps (b) and (c) to obtain the online standard metric vector γ̄_online; feed γ̄_online into the ELM network model to learn the output vector O, expressed as:
O = σ(W γ̄_online + b) β
(e2) find the index position of the maximum squared magnitude in the output vector O, which is the frame synchronization estimate τ̂:
τ̂ = argmax_k |[O]_k|²
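The online steps (e1)-(e2) reduce to a single forward pass through the saved parameters. The sketch below returns a 0-based index, whereas the patent counts window positions from 1 (an indexing assumption); `elm_estimate` is a hypothetical name.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_estimate(gamma_bar_online, w, b, beta):
    """Online step: O = sigmoid(W @ gamma + b) @ beta, then the frame
    synchronization estimate is the index of the largest |O_k|^2
    (0-based here; the patent counts positions from 1)."""
    o = sigmoid(w @ gamma_bar_online + b) @ beta
    return int(np.argmax(np.abs(o) ** 2)), o
```

Note that the expensive pseudoinverse happens only once, offline; the online cost is two matrix-vector products per received burst.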
It is to be understood that the embodiments described herein are for the purpose of assisting the reader in understanding the manner of practicing the invention and are not to be construed as limiting the scope of the invention to such particular statements and embodiments. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (2)

1. A frame synchronization method for a nonlinear distortion scenario, characterized by comprising the following steps:
(a) collect N_t sample sequences of M frames, each of length N: y_i^{(1)}, y_i^{(2)}, …, y_i^{(M)}, i = 1, 2, …, N_t;
the M-frame, N-long sample sequences are represented as:
[equation image not reproduced in source]
where M and N are set according to engineering experience;
(b) weight and superpose y_i^{(1)}, y_i^{(2)}, …, y_i^{(M)} to obtain the superposed sample sequence y_i^{(S)}, i = 1, 2, …, N_t;
the weighted superposition is expressed as:
y_i^{(S)} = μ_1 y_i^{(1)} + μ_2 y_i^{(2)} + … + μ_M y_i^{(M)}
where μ_i, i = 1, 2, …, M, are weighting coefficients set according to the received signal-to-noise ratio of each frame;
(c) preprocess the superposed sample sequence y_i^{(S)} to obtain the standard metric vector γ̄_i;
the preprocessing steps are:
(c1) cross-correlate the length-N_s observation sequence [y^{(S)}(t), …, y^{(S)}(t + N_s − 1)] taken from the superposed sample sequence y^{(S)} with the length-N_s training sequence x, obtaining the cross-correlation metric γ_t:
γ_t = | Σ_{n=0}^{N_s−1} y^{(S)}(t + n) x*(n) |
where the observation length N_s is set according to engineering experience;
t denotes the starting index position of the observed superposed sequence, t ∈ [1, K]; for example, t = 1 means observing the N_s-long sample sequence starting from the first element of y^{(S)};
K = N − N_s + 1 is the size of the search window;
(c2) from the K correlation metrics γ_1, γ_2, …, γ_K, construct the metric vector γ_i = [γ_{i,1}, γ_{i,2}, …, γ_{i,K}]; normalize the N_t metric vectors γ_i to obtain the standard metric vectors γ̄_i:
γ̄_i = γ_i / ‖γ_i‖
where N_t is set according to engineering experience, and ‖γ_i‖ denotes the Frobenius norm of the metric vector γ_i;
(d) construct an ELM network, build labels T_i, i = 1, 2, …, N_t, from the frame synchronization offsets of the received signals, and obtain the network model and parameters through an offline training process;
the label construction steps are:
from the synchronization offsets τ_i, i = 1, 2, …, N_t, form the label set T = [T_1, T_2, …, T_{N_t}];
each label T_i, i = 1, 2, …, N_t, is obtained by one-hot encoding the synchronization offset τ_i, i.e., T_i is a length-K vector whose τ_i-th entry is 1 and whose other entries are 0;
τ_i is determined from the received signal y_i, collected either according to a statistical channel model or from the actual scenario in combination with existing methods or equipment;
the offline training process specifically comprises the following steps:
(d1) generate the weights W and bias b from a random distribution; feed the standard metric vectors γ̄_i into the ELM network in turn; the hidden-layer output H_i is expressed as:
H_i = σ(W γ̄_i + b)
where σ(·) denotes the sigmoid activation function;
(d2) from the N_t hidden-layer outputs H_i obtained from the N_t metric vectors γ̄_i, construct the hidden-layer output matrix H = [H_1, H_2, …, H_{N_t}]; from the hidden-layer output matrix H and the label set T, obtain the output weights β:
β = H† T
where H† denotes the Moore-Penrose pseudoinverse of H;
(d3) save the model parameters W, b, and β;
(e) obtain the frame synchronization offset estimate through an online operation process using the learned ELM network model;
the online operation process comprises the following steps:
(e1) receive M frames of N-long online sample sequences y_online^{(1)}, y_online^{(2)}, …, y_online^{(M)}; apply the superposition and preprocessing of steps (b) and (c) to obtain the online standard metric vector γ̄_online; feed γ̄_online into the ELM network model to learn the output vector O, expressed as:
O = σ(W γ̄_online + b) β
(e2) find the index position of the maximum squared magnitude in the output vector O:
τ̂ = argmax_k |[O]_k|²
where τ̂ is the frame synchronization estimate.
2. The method of claim 1, characterized in that the network model and parameters in step (d) are:
the ELM network model comprises 1 input layer, 1 hidden layer, and 1 output layer; the number of input-layer nodes is K, the number of hidden-layer nodes is Ñ = mK, and the number of output-layer nodes is K; the hidden layer uses sigmoid as the activation function, and the preprocessed standard metric vectors γ̄_i serve as the input;
m is set according to engineering experience.
CN202010821398.0A 2020-08-14 2020-08-14 Frame synchronization method for nonlinear distortion scene Active CN111970078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010821398.0A CN111970078B (en) 2020-08-14 2020-08-14 Frame synchronization method for nonlinear distortion scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010821398.0A CN111970078B (en) 2020-08-14 2020-08-14 Frame synchronization method for nonlinear distortion scene

Publications (2)

Publication Number Publication Date
CN111970078A CN111970078A (en) 2020-11-20
CN111970078B (en) 2022-08-16

Family

ID=73387814

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010821398.0A Active CN111970078B (en) 2020-08-14 2020-08-14 Frame synchronization method for nonlinear distortion scene

Country Status (1)

Country Link
CN (1) CN111970078B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112688772B (en) * 2020-12-17 2022-08-26 西华大学 Machine learning superimposed training sequence frame synchronization method
CN113112028B (en) * 2021-04-06 2022-07-01 西华大学 Machine learning time synchronization method based on label design
CN114096000B (en) * 2021-11-18 2023-06-23 西华大学 Combined frame synchronization and channel estimation method based on machine learning
CN117295149B (en) * 2023-11-23 2024-01-30 西华大学 Frame synchronization method and system based on low-complexity ELM assistance

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103222243A (en) * 2012-12-05 2013-07-24 华为技术有限公司 Data processing method and apparatus
CN108512795A (en) * 2018-03-19 2018-09-07 东南大学 A kind of OFDM receiver baseband processing method and system based on low Precision A/D C

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101252560A (en) * 2007-11-01 2008-08-27 复旦大学 High-performance OFDM frame synchronization algorithm
CN102291360A (en) * 2011-09-07 2011-12-21 西南石油大学 Superimposed training sequence based optical OFDM (Orthogonal Frequency Division Multiplexing) system and frame synchronization method thereof
ES2593093B1 (en) * 2015-06-05 2017-09-19 Fundacio Centre Tecnologic De Telecomunicacions De Catalunya Method and device for frame synchronization in communication systems
US9692588B2 (en) * 2015-07-07 2017-06-27 Samsung Electronics Co., Ltd. System and method for performing synchronization and interference rejection in super regenerative receiver (SRR)
CN106130945B (en) * 2016-06-02 2019-06-28 泰凌微电子(上海)有限公司 Frame synchronization and carrier wave frequency deviation associated detecting method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103222243A (en) * 2012-12-05 2013-07-24 华为技术有限公司 Data processing method and apparatus
CN108512795A (en) * 2018-03-19 2018-09-07 东南大学 A kind of OFDM receiver baseband processing method and system based on low Precision A/D C

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ELM-Based Frame Synchronization in Burst-Mode Communication Systems With Nonlinear Distortion; Chaojin Qing et al.; IEEE Wireless Communications Letters; 2020-02-21; pp. 2-3 *
Improved ELM-based frame synchronization method for nonlinear distortion scenarios; Qing Chaojin, Yu Wang, Dong Lei, Du Yanhong, Tang Shuhai; Science and Technology Innovation and Application; 2021-02-08; full text *

Also Published As

Publication number Publication date
CN111970078A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN111970078B (en) Frame synchronization method for nonlinear distortion scene
CN110971457B (en) Time synchronization method based on ELM
CN108566257B (en) Signal recovery method based on back propagation neural network
CN107529222B (en) WiFi indoor positioning system based on deep learning
CN110336594B (en) Deep learning signal detection method based on conjugate gradient descent method
CN112688772B (en) Machine learning superimposed training sequence frame synchronization method
CN113114599B (en) Modulation identification method based on lightweight neural network
CN114268388B (en) Channel estimation method based on improved GAN network in large-scale MIMO
CN115577305B (en) Unmanned aerial vehicle signal intelligent recognition method and device
CN111050315A (en) Wireless transmitter identification method based on multi-core two-way network
CN108806723A (en) Baby's audio recognition method and device
CN113807214A (en) Small target face recognition method based on deit attached network knowledge distillation
CN113762529A (en) Machine learning timing synchronization method based on statistical prior
CN114759997B (en) MIMO system signal detection method based on data model double driving
CN115631771A (en) Sound event detection and positioning method based on combined convolutional neural network
CN114614920B (en) Signal detection method based on data and model combined driving of learning factor graph
CN111652021A (en) Face recognition method and system based on BP neural network
CN113343796B (en) Knowledge distillation-based radar signal modulation mode identification method
CN114070415A (en) Optical fiber nonlinear equalization method and system
CN113792751B (en) Cross-domain behavior recognition method, device, equipment and readable storage medium
CN113596757A (en) Rapid high-precision indoor fingerprint positioning method based on integrated width learning
CN113852434A (en) LSTM and ResNet assisted deep learning end-to-end intelligent communication method and system
CN114096000B (en) Combined frame synchronization and channel estimation method based on machine learning
CN118074791B (en) Satellite communication method and system based on non-orthogonal multiple access and orthogonal time-frequency space modulation
CN115952407B (en) Multipath signal identification method considering satellite time sequence and airspace interactivity

Legal Events

Date Code Title Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
EE01: Entry into force of recordation of patent licensing contract

Application publication date: 20201120

Assignee: Chengdu Tiantongrui Computer Technology Co.,Ltd.

Assignor: XIHUA University

Contract record no.: X2023510000028

Denomination of invention: A frame synchronization method for nonlinear distortion scenarios

Granted publication date: 20220816

License type: Common License

Record date: 20231124