CN115225437A - Joint intelligent equalization and decoding method for underwater acoustic cooperative communication - Google Patents

Joint intelligent equalization and decoding method for underwater acoustic cooperative communication

Info

Publication number
CN115225437A
CN115225437A (application CN202210636299.4A)
Authority
CN
China
Prior art keywords
mdfe
feedback
output
iteration
equalizer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210636299.4A
Other languages
Chinese (zh)
Other versions
CN115225437B (en)
Inventor
刘志勇
蒋凌
石若松
陈炼翰
张钦宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Weihai
Original Assignee
Harbin Institute of Technology Weihai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Weihai filed Critical Harbin Institute of Technology Weihai
Priority to CN202210636299.4A priority Critical patent/CN115225437B/en
Publication of CN115225437A publication Critical patent/CN115225437A/en
Application granted granted Critical
Publication of CN115225437B publication Critical patent/CN115225437B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00Baseband systems
    • H04L25/02Details ; arrangements for supplying electrical power along data transmission lines
    • H04L25/03Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
    • H04L25/03878Line equalisers; line build-out devices
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B13/00Transmission systems characterised by the medium used for transmission, not provided for in groups H04B3/00 - H04B11/00
    • H04B13/02Transmission systems in which the medium consists of the earth or a large mass of water thereon, e.g. earth telegraphy
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L25/00Baseband systems
    • H04L25/02Details ; arrangements for supplying electrical power along data transmission lines
    • H04L25/03Shaping networks in transmitter or receiver, e.g. adaptive shaping networks
    • H04L25/03006Arrangements for removing intersymbol interference
    • H04L25/03178Arrangements involving sequence estimation techniques
    • H04L25/03248Arrangements for operating in conjunction with other apparatus
    • H04L25/03254Operation with other circuitry for removing intersymbol interference
    • H04L25/03267Operation with other circuitry for removing intersymbol interference with decision feedback equalisers
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Power Engineering (AREA)
  • Filters That Use Time-Delay Elements (AREA)
  • Cable Transmission Systems, Equalization Of Radio And Reduction Of Echo (AREA)

Abstract

The invention relates to the technical field of underwater acoustic communication, and in particular to a joint intelligent equalization and decoding method for underwater acoustic cooperative communication. The multipath signals arriving at the destination node are processed by deep learning so that multi-branch combining and equalization are carried out jointly. Deep learning serves as the equalization part of the receiver; the soft information generated by iteration between the equalizer and the decoder is used as the feedback signal so that the iterative gain is fully exploited; and a cost function is constructed whose total error is used to update the network parameters of both the feedforward part and the feedback part of the equalizer. On this basis, a deep-learning-based multi-branch decision feedback equalizer DL-MDFE is proposed.

Description

Joint intelligent equalization and decoding method for underwater acoustic cooperative communication
Technical field:
the invention relates to the technical field of underwater acoustic communication, and in particular to a joint intelligent equalization and decoding method for underwater acoustic cooperative communication that can significantly improve the bit error rate performance of an underwater acoustic communication system.
Background art:
because the underwater environment is extremely complex, the underwater acoustic channel exhibits large transmission attenuation, severe multipath propagation and other impairments. These factors severely degrade the reliability of an underwater acoustic communication link. Current research generally employs equalization and cooperative communication to counteract intersymbol interference and transmission attenuation, respectively.
In underwater acoustic cooperation, most studies combine the signals of the individual nodes by maximum ratio combining or equal gain combining. Maximum ratio combining outperforms equal gain combining, but it requires the channel state information of every branch, which is difficult to obtain in a complex, time-varying underwater environment; equal gain combining is simple to implement and needs no channel state information. On the equalization side, Turbo equalization can be considered: an iterative loop is formed between the equalizer and the decoder to exchange soft information, and the iterations further improve system performance. The best-performing Turbo equalizer is based on the maximum a posteriori probability (MAP) criterion, but the computational complexity of the MAP method grows exponentially with the channel length and the modulation order. Lower-complexity alternatives keep the iterative structure but use conventional filter-based equalizers, whose nonlinear processing capability is limited. Deep learning methods proposed in recent years offer strong learning ability and good nonlinear fitting, and deep-learning-based channel equalization has been reported. However, existing deep-learning equalizers target end-to-end point-to-point links; no intelligent multi-branch equalization method for underwater acoustic cooperative communication systems has been studied, let alone one in which equalization and decoding are realized jointly.
Summary of the invention:
to address the defects and shortcomings of the prior art, the invention provides a joint intelligent equalization and decoding method for underwater acoustic cooperative communication that can significantly improve the bit error rate performance of an underwater acoustic communication system.
The invention is achieved by the following measures:
a joint intelligent equalization and decoding method for underwater acoustic cooperative communication is characterized in that a plurality of paths of signals of a target node are processed by deep learning, and multi-branch combination and equalization joint processing are realized; deep learning is used as an equalization part of the signal, and iteration is generated between an equalizer and a decoderThe generated soft information is used as a feedback signal, the iterative gain is fully utilized, the network parameters of a feedforward part and a feedback part of the equalizer are updated by constructing a cost function and utilizing a total error, wherein the deep learning-based multi-branch decision feedback equalizer DL-MDFE is provided, the multi-branch decision feedback equalizer DL-MDFE comprises the feedforward part and the feedback part, the feedforward part is composed of two deep neural networks, and the working mechanism of the method is to input two paths of signals r received by a target node (SD)(n) and r(RD) (n) the output is an estimate of the transmitted signal x (n); the feedback part is constructed by a deep neural network;
the deep learning-based multi-branch decision feedback equalizer DL-MDFE working mode is specifically divided into a training stage and a tracking stage:
the training phase specifically comprises:
in the first iteration the feedback part of the DL-MDFE has no input, so the neural networks of the feedforward part are trained first, with the two received training-sequence signals as input. The feedforward filter of the DL-MDFE consists of two deep neural networks whose inputs, denoted r̃^(SD)(n) and r̃^(RD)(n), are windows of the two received signals r^(SD)(n) and r^(RD)(n):

r̃^(SD)(n) = [r^(SD)(n), r^(SD)(n-1), ..., r^(SD)(n-M_SD+1)]^T    (5)
r̃^(RD)(n) = [r^(RD)(n), r^(RD)(n-1), ..., r^(RD)(n-M_RD+1)]^T    (6)
where M_RD and M_SD denote the numbers of neurons in the two network input layers. Assume each branch DNN has L layers, and let the l-th layer (l = 1, ..., L-2) of DNN_SD have N_l neurons with layer output vector a^(l)(n). The output of layer l+1 can then be expressed as

a^(l+1)(n) = f(θ^(l) a^(l)(n) + v^(l))    (7)

where θ^(l) is the weight matrix of the l-th layer of DNN_SD, v^(l) is the bias vector of the l-th layer of DNN_SD, and f(·) is the activation function; the tanh function is used here, whose expression is

f(x) = (e^x - e^(-x)) / (e^x + e^(-x))    (8)
similarly, let the l-th layer (l = 1, ..., L-2) of DNN_RD have N_l neurons and denote its layer-l output vector (of length N_l) by d^(l)(n). The output of layer l+1 can then be expressed as

d^(l+1)(n) = f(λ^(l) d^(l)(n) + k^(l))    (9)

where λ^(l) is the weight matrix of the l-th layer of DNN_RD and k^(l) is the bias vector of the l-th layer of DNN_RD. Thus the outputs of the (L-1)-th layers (the layers preceding the output layer) of DNN_SD and DNN_RD can be expressed as

a^(L-1)(n) = f(θ^(L-2) a^(L-2)(n) + v^(L-2))    (10)
d^(L-1)(n) = f(λ^(L-2) d^(L-2)(n) + k^(L-2))    (11)
the total output of the feedforward part of the DL-MDFE is then obtained by combining the two branches in the output layer:

u^(1)(n) = [θ^(L-1) a^(L-1)(n) + v^(L-1)] + [λ^(L-1) d^(L-1)(n) + k^(L-1)]    (12)

For convenience of presentation the feedforward output of the DL-MDFE is written as

u^(1)(n) = F(r̃^(SD)(n), r̃^(RD)(n); w_f, b_f)    (13)

where F(·) denotes the operator of the feedforward part of the DL-MDFE, and w_f and b_f respectively denote the weights and biases of its neural networks; for convenience the parameters of the two networks are treated as a whole, i.e. (θ, v, λ, k) ∈ {w_f, b_f};
since the feedback part of the DL-MDFE has no output at this time, u^(1)(n) is passed through the decision device to obtain the total output x̂^(1)(n) of the DL-MDFE after the first iteration. After this first iteration, x̂^(1)(n) is fed into the feedback part of the DL-MDFE. Assume the current iteration is the k-th one (k > 1); the feedforward output of the DL-MDFE at the k-th iteration is

u^(k)(n) = F(r̃^(SD)(n), r̃^(RD)(n); w_f, b_f)    (14)

The input of the feedback part of the DL-MDFE is the vector of decisions on the output of the previous DL-MDFE iteration, which can be represented by equation (15):

x̂_fb^(k-1)(n) = [x̂^(k-1)(n-1), x̂^(k-1)(n-2), ..., x̂^(k-1)(n-M)]^T    (15)
where M denotes the number of symbols that interfere with the current symbol. The computation of the feedback-part neural network is similar to that of the feedforward part; after x̂_fb^(k-1)(n) of equation (15) is passed through the network, the output y^(k)(n) of the feedback part of the DL-MDFE at the k-th iteration is obtained as

y^(k)(n) = B(x̂_fb^(k-1)(n); w_b, b_b)    (16)

where B(·) denotes the operator of the feedback equalizer, and w_b and b_b respectively denote its weights and biases. Subtracting the output of the feedback equalizer from the output of the feedforward equalizer gives the total output x̃^(k)(n) of the DL-MDFE after the k-th iteration, which can be represented by equation (17):

x̃^(k)(n) = u^(k)(n) - y^(k)(n)    (17)
After the total output of the DL-MDFE is obtained, a cost function can be constructed as

J = (1/N_train) Σ_n ( x(n) - x̃^(k)(n) )²    (18)

where x(n) is the desired signal of the training phase. The gradient descent method is used to optimize the parameters of the neural networks of the feedforward and feedback parts of the DL-MDFE. Because this cost function depends on the outputs of both the feedforward part and the feedback part, a single cost function is used to update the networks of both parts of the DL-MDFE; their weights and biases are updated as

w_f ← w_f - μ ∂J/∂w_f,  b_f ← b_f - μ ∂J/∂b_f
w_b ← w_b - μ ∂J/∂w_b,  b_b ← b_b - μ ∂J/∂b_b    (19)

where μ is the learning rate.
After the feedforward and feedback equalizers have been trained, the final estimate x̃(n) of the DL-MDFE is obtained; this estimate is closer to the desired signal x(n) than before training. It is then passed through the decision device to obtain x̂(n), and x̂(n) is fed back to the feedback equalizer to start the next iteration. When the number of iterations is sufficient or the trained DL-MDFE meets the performance requirement, the training phase ends.
In the tracking stage, the cost function is constructed from the estimate produced by the DL-MDFE and the decision on that estimate, as shown in equation (20):

J_track = (1/N) Σ_n ( x̂(n) - x̃(n) )²    (20)

where x̂(n) denotes the decision on x̃(n). Through repeated iteration and training of the DL-MDFE method in the tracking stage, the final signal estimate x̃(n) is made as close as possible to the transmitted signal x(n).
The invention further comprises performing joint intelligent equalization and decoding with the soft information generated in the iterative process used as the feedback signal, specifically:

Step 1: input the received signals r^(SD)(n) and r^(RD)(n), the corresponding desired signal x(n), and the maximum number of iterations K_max.

Step 2: initialize the input of the feedback equalizer, x̄^(0)(n), to an all-zero vector.

Step 3: for the loop variable k from k = 1 to K_max, compute the feedforward output and the total output of the equalizer:

u^(k)(n) = F(r̃^(SD)(n), r̃^(RD)(n); w_f, b_f)
x̃^(k)(n) = u^(k)(n) - B(x̄^(k-1)(n); w_b, b_b)

Step 4: construct the cost function from the desired output x(n) and the network output x̃^(k)(n), and train the feedforward and feedback equalizers.

Step 5: re-estimate the signal with the trained feedforward and feedback networks to obtain the best estimated output x̃^(k)(n) of JIEDA.

Step 6: perform a Turbo iteration on the best estimate x̃^(k)(n) to obtain the soft information x̄^(k)(n), which is used as the feedback input of the next iteration.
Compared with the prior art, the method can obviously improve the error rate performance of the underwater acoustic communication system.
Description of the drawings:
fig. 1 is a schematic structural diagram of a transmitting end in the present invention.
Fig. 2 is a schematic diagram of the structure of an encoder in the present invention.
FIG. 3 is a schematic diagram of a three-node underwater acoustic cooperative communication system model in the invention.
FIG. 4 is a diagram illustrating the structure of DL-MDFE in the present invention.
FIG. 5 is a schematic diagram showing the structure of the feed forward portion of the DL-MDFE in the present invention.
Fig. 6 is a schematic diagram of the feedback part structure of the DL-MDFE of the present invention.
Fig. 7 is a block diagram of the JIEDA in the present invention.
Fig. 8 is a schematic diagram of a data frame transmission process under a time-varying channel in the present invention.
FIG. 9 is a graph of the error rate performance of DL-MDFE in different iterations in the tracking mode of the present invention.
FIG. 10 is a graph comparing the bit error rate performance of the DL-MDFE of the present invention with that of other conventional methods.
Fig. 11 is a schematic diagram of the error rate performance of JIEDA with different iterations in the tracking mode of the present invention.
Fig. 12 is a graph comparing the error rate performance of JIEDA in accordance with the present invention with other methods.
Detailed description of the embodiments:
the invention is further described below with reference to the drawings and examples.
In order to further improve the performance of the underwater acoustic cooperative transmission, the present example will provide a joint intelligent equalization and decoding method. The method comprises two modules, namely a multi-branch intelligent equalization module (realized based on deep learning) and a decoding module, wherein an iteration loop is formed between the two modules through soft information, and the system performance can be further improved through iteration. In the implementation of the method, the combination of the multi-branch signals, the equalization of each branch and the decoding are not independent from each other, but are jointly implemented.
In the joint intelligent equalization and decoding method, the source node needs to encode, interleave and modulate the signal; the structure of the transmitting end is shown in fig. 1.
As shown in fig. 1, the sending end of the joint equalization and decoding method can process signals in three steps, i.e., encoding, interleaving, and modulating. These three steps will be described in detail below.
(1) Encoding: let the original transmitted information b(n) = [b(1), b(2), ..., b(K)] denote independent, identically distributed binary bits, where K is the number of information bits. These bits need to be encoded to ensure reliable transmission of the signal over the channel. The encoder used in this example is a rate-1/2 recursive systematic convolutional (RSC) encoder with generator polynomials [G1, G2] = [5, 7]; the structure of the encoder is shown in fig. 2, where M3, M2, M1 denote shift registers. The original binary information b(n) is encoded by the 5/7 RSC encoder of fig. 2 to obtain the coded symbol sequence c(n) = [c(1), c(2), ..., c(N_data)] of length N_data, where N_data = 2*K.
(2) Interleaving: after encoding, an S-random interleaver is usually used to randomly permute the symbol order, in order to prevent burst errors from occurring during transmission and to improve decoding performance. The specific implementation is to assign a randomly generated mapping address to each coded bit, realizing symbol interleaving and yielding the signal t(n) = [t(1), t(2), ..., t(N_data)].

(3) Modulation: the interleaved signal t(n) is BPSK modulated with the mapping rule 0 → +1, 1 → -1, yielding the signal v(n) = [v(1), v(2), ..., v(N_data)].
Because the designed method must first be trained so that it can be used after convergence, a training sequence needs to be added; the data frame after adding the training sequence can be represented as

x(n) = [x_train(1), x_train(2), ..., x_train(N_train), v(1), v(2), ..., v(N_data)]    (1)

where N_train is the length of the training sequence and x_train(n) = [x_train(1), x_train(2), ..., x_train(N_train)] is also produced by the encoding, interleaving and modulation steps described above.
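For illustration, the transmit-side chain just described can be sketched in Python as follows. This is only a minimal sketch of the three steps and of the frame format of equation (1), not the patented implementation: it assumes that, of the octal pair [5, 7], 7 is the feedback polynomial of the RSC encoder; a fixed-seed pseudo-random permutation stands in for the S-random interleaver; and the frame-length values are illustrative.

```python
import numpy as np

def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional code, generators [5, 7] octal;
    here 7 (1 + D + D^2) is assumed to be the feedback polynomial and 5 (1 + D^2)
    the feedforward one."""
    s1 = s2 = 0
    out = []
    for b in bits:
        a = b ^ s1 ^ s2          # feedback bit
        p = a ^ s2               # parity bit from 1 + D^2
        out += [b, p]            # systematic bit followed by parity bit
        s2, s1 = s1, a
    return np.array(out, dtype=int)

def interleave(symbols, seed=0):
    """Stand-in for the S-random interleaver: a fixed pseudo-random permutation."""
    perm = np.random.RandomState(seed).permutation(len(symbols))
    return symbols[perm], perm

def bpsk(bits):
    """BPSK mapping used in the text: 0 -> +1, 1 -> -1."""
    return 1.0 - 2.0 * np.asarray(bits, dtype=float)

# Build one data frame as in equation (1): training sequence followed by data.
K = 750                                        # information bits (illustrative value)
b = np.random.randint(0, 2, K)                 # i.i.d. binary source bits b(n)
c = rsc_encode(b)                              # N_data = 2*K coded bits c(n)
t, perm = interleave(c)                        # interleaved bits t(n)
v = bpsk(t)                                    # modulated symbols v(n)
x_train = bpsk(np.random.randint(0, 2, 500))   # known training symbols x_train(n)
x = np.concatenate([x_train, v])               # transmitted frame x(n)
```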
The invention adopts a classic three-node cooperative communication system model, and the system model is shown as figure 3.
As can be seen from fig. 3, the three-node underwater acoustic cooperative communication system mainly comprises three parts: a source node S, a relay node R and a destination node D. The cooperative communication process is completed in two stages. The first stage is the broadcast stage, in which the source node broadcasts its signal to the destination node and the relay node. The second stage is the relay stage, in which the relay node forwards the received signal to the destination node in decode-and-forward mode. The signal transmission process of the underwater acoustic cooperative communication system is described below.
Stage 1: in the broadcast stage, the source node S broadcasts information to the destination node D and the relay node R. The signals received by the destination node D and the relay node R from the source node, r^(SD)(n) and r^(SR)(n), can be written as

r^(SD)(n) = x(n) * h^(SD)(n) + η^(SD)(n)    (2)
r^(SR)(n) = x(n) * h^(SR)(n) + η^(SR)(n)    (3)

where * denotes convolution, h^(SD)(n) and h^(SR)(n) denote the impulse responses of the underwater acoustic channels between the source node S and the destination node D and between the source node S and the relay node R, and η^(SD)(n) and η^(SR)(n) denote additive white Gaussian noise with zero mean and variances σ²_SD and σ²_SR, respectively.

Stage 2: in the relay stage, decode-and-forward is adopted. After receiving the signal from the source node S, the relay node R first decodes the received signal r^(SR)(n), then re-encodes, interleaves and BPSK-modulates the decoded signal in the same way as the source node S, and finally transmits it to the destination node D. In this example the relay node R is assumed to recover the information sent by the source node S perfectly, so the signal received by the destination node D from the relay node R can be written as

r^(RD)(n) = x(n) * h^(RD)(n) + η^(RD)(n)    (4)

where h^(RD)(n) is the impulse response of the underwater acoustic channel between the relay node and the destination node and η^(RD)(n) denotes additive white Gaussian noise with zero mean and variance σ²_RD. The destination node therefore finally receives the signals r^(j)(n), j ∈ {SD, RD}, from the source node and the relay node.
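The two-stage signal model of equations (2)-(4) can likewise be sketched as follows; the impulse responses, noise variances and frame used here are illustrative placeholders, and the relay is assumed to decode and forward perfectly as stated above.

```python
import numpy as np

def awgn(length, variance, rng):
    """Zero-mean additive white Gaussian noise with the given variance."""
    return np.sqrt(variance) * rng.standard_normal(length)

rng = np.random.default_rng(1)
x = 1.0 - 2.0 * rng.integers(0, 2, 2000).astype(float)   # BPSK frame (placeholder)

# Illustrative multipath impulse responses (not measured channels).
h_SD = np.array([1.0, 0.45, 0.20])        # source -> destination
h_SR = np.array([1.0, 0.30])              # source -> relay
h_RD = np.array([1.0, 0.35, 0.10])        # relay -> destination

# Broadcast stage, equations (2) and (3).
r_SD = np.convolve(x, h_SD)[:len(x)] + awgn(len(x), 0.05, rng)
r_SR = np.convolve(x, h_SR)[:len(x)] + awgn(len(x), 0.05, rng)

# Relay stage, equation (4): with perfect decode-and-forward the relay re-transmits x(n).
r_RD = np.convolve(x, h_RD)[:len(x)] + awgn(len(x), 0.05, rng)
```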
Compared with the prior art, the innovation points of the embodiment are as follows: (1) The method utilizes deep learning to process multi-path signals of a target node, and realizes multi-branch combination and equalization combined processing; (2) The method takes deep learning as an equalization part of a signal, takes soft information generated by iteration between an equalizer and a decoder as a feedback signal, and fully utilizes iteration gain; (3) The method utilizes the total error to update the network parameters of the feedforward part and the feedback part of the equalizer by constructing a cost function.
In order to enable the equalization and decoding to be processed jointly, an iterative loop needs to be formed between the Equalizer and the decoder, so this example first designs a Deep Learning-based multi-branch Decision Feedback Equalizer (DL-MDFE), and the structure of the DL-MDFE is shown in fig. 4.
As can be seen from fig. 4, the decision feedback equalizer comprises two parts, namely a feedforward part and a feedback part. The feed forward part structure of the DL-MDFE is shown in FIG. 5.
As can be seen from the figure, the feedforward part of the DL-MDFE is formed by two deep neural networks; its working mechanism is to take as input the two signals r^(SD)(n) and r^(RD)(n) received by the destination node and to output an estimate of the transmitted signal x(n).
The feedback part of the DL-MDFE is built from a deep neural network; the structure of this deep-neural-network-based feedback filter, shown in the dashed box of fig. 4, is detailed in fig. 6.
the basic principle of the DL-MDFE method is to feed the equalized result back to the feedback filter after judging the equalized result, thereby forming iteration, and the transmitted signal x (n) is divided into training sequences x train (n) and information sequence x data (n) two parts, receiving signal r (SD) (n)、r (RD) (n) can also be divided into training sequences
Figure BDA0003682220550000073
Figure BDA0003682220550000081
And information sequence
Figure BDA0003682220550000082
Two parts. Therefore, the operation mode of the DL-MDFE can be divided into two phases: (1) TrainingA stage; and (2) a tracking stage.
(1) Training phase
In the first iteration the feedback part of the DL-MDFE has no input, so the neural networks of the feedforward part are trained first, with the two received training-sequence signals r_train^(SD)(n) and r_train^(RD)(n) as input to the feedforward part. Because the feedforward filter of the DL-MDFE consists of two deep neural networks, the inputs of the two networks, denoted r̃^(SD)(n) and r̃^(RD)(n), are formed from the two received signals r^(SD)(n) and r^(RD)(n) as

r̃^(SD)(n) = [r^(SD)(n), r^(SD)(n-1), ..., r^(SD)(n-M_SD+1)]^T    (5)
r̃^(RD)(n) = [r^(RD)(n), r^(RD)(n-1), ..., r^(RD)(n-M_RD+1)]^T    (6)

where M_RD and M_SD denote the numbers of neurons in the two network input layers.
Assume each branch DNN has L layers, and let the l-th layer (l = 1, ..., L-2) of DNN_SD have N_l neurons with layer output vector a^(l)(n). The output of layer l+1 can then be expressed as

a^(l+1)(n) = f(θ^(l) a^(l)(n) + v^(l))    (7)

where θ^(l) is the weight matrix of the l-th layer of DNN_SD, v^(l) is the bias vector of the l-th layer of DNN_SD, and f(·) is the activation function; the tanh function is used here, whose expression is

f(x) = (e^x - e^(-x)) / (e^x + e^(-x))    (8)
Similarly, let the l-th layer (l = 1, ..., L-2) of DNN_RD have N_l neurons and denote its layer-l output vector (of length N_l) by d^(l)(n). The output of layer l+1 can then be expressed as

d^(l+1)(n) = f(λ^(l) d^(l)(n) + k^(l))    (9)

where λ^(l) is the weight matrix of the l-th layer of DNN_RD and k^(l) is the bias vector of the l-th layer of DNN_RD. Thus the outputs of the (L-1)-th layers (the layers preceding the output layer) of DNN_SD and DNN_RD can be expressed as

a^(L-1)(n) = f(θ^(L-2) a^(L-2)(n) + v^(L-2))    (10)
d^(L-1)(n) = f(λ^(L-2) d^(L-2)(n) + k^(L-2))    (11)
The total output of the feedforward part of the DL-MDFE can thus be obtained by combining the two branches in the output layer:

u^(1)(n) = [θ^(L-1) a^(L-1)(n) + v^(L-1)] + [λ^(L-1) d^(L-1)(n) + k^(L-1)]    (12)

For convenience of illustration, the feedforward output of the DL-MDFE is written here as

u^(1)(n) = F(r̃^(SD)(n), r̃^(RD)(n); w_f, b_f)    (13)

where F(·) denotes the operator of the feedforward part of the DL-MDFE, and w_f and b_f respectively denote the weights and biases of its neural networks (for convenience, the parameters of the two neural networks are regarded as a whole, i.e. (θ, v, λ, k) ∈ {w_f, b_f}).
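A minimal sketch of the two-branch feedforward part follows, using the per-layer neuron counts given later in the simulation section (16, 16, 24, 36, 1) and tanh activations as in equation (8). How the two branch outputs are combined into u(n) (summation here) is an assumption, since fig. 5 is not reproduced in this text.

```python
import torch
import torch.nn as nn

class BranchDNN(nn.Module):
    """One feedforward branch; layer widths 16-16-24-36-1 follow the simulation
    section, with tanh activations as in equation (8)."""
    def __init__(self, sizes=(16, 16, 24, 36, 1)):
        super().__init__()
        layers = []
        for i in range(len(sizes) - 1):
            layers.append(nn.Linear(sizes[i], sizes[i + 1]))
            if i < len(sizes) - 2:      # hidden layers use tanh; output layer is linear
                layers.append(nn.Tanh())
        self.net = nn.Sequential(*layers)

    def forward(self, r_window):
        return self.net(r_window)

class FeedforwardEqualizer(nn.Module):
    """Feedforward part of the DL-MDFE: one DNN per received branch; their outputs
    are summed to form u(n) (the exact combining rule is an assumption here)."""
    def __init__(self, m_sd=16, m_rd=16):
        super().__init__()
        self.dnn_sd = BranchDNN((m_sd, 16, 24, 36, 1))
        self.dnn_rd = BranchDNN((m_rd, 16, 24, 36, 1))

    def forward(self, r_sd_window, r_rd_window):
        return self.dnn_sd(r_sd_window) + self.dnn_rd(r_rd_window)

# Example: a batch of sliding windows of the two received signals (equations (5)-(6)).
u = FeedforwardEqualizer()(torch.randn(8, 16), torch.randn(8, 16))
```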
Since the feedback part of the DL-MDFE has no output at this time, u^(1)(n) is passed through the decision device to obtain the total output x̂^(1)(n) of the DL-MDFE after the first iteration. After this first iteration, x̂^(1)(n) can be fed into the feedback part of the DL-MDFE. Assume the current iteration is the k-th one (k > 1); the feedforward output of the DL-MDFE at the k-th iteration can be computed as

u^(k)(n) = F(r̃^(SD)(n), r̃^(RD)(n); w_f, b_f)    (14)

The input of the feedback part of the DL-MDFE is the vector of decisions on the output of the previous DL-MDFE iteration, which can be represented by equation (15):

x̂_fb^(k-1)(n) = [x̂^(k-1)(n-1), x̂^(k-1)(n-2), ..., x̂^(k-1)(n-M)]^T    (15)

where M denotes the number of symbols that cause interference to the current symbol. The input of the feedback equalizer takes this form because intersymbol interference is caused by the superposition of the signal amplitudes at a number of instants before the current one; for example, x(n-1), ..., x(n-M) interfere with x(n).
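Equation (15) simply gathers the M most recent decisions of the previous iteration into a window; a small helper illustrating this (the decision sequence and M are placeholders):

```python
import numpy as np

def feedback_window(decisions, n, M):
    """Return [x_hat(n-1), ..., x_hat(n-M)] as in equation (15); indices before the
    start of the frame are taken as zero, matching the all-zero initialization."""
    w = np.zeros(M)
    for i in range(1, M + 1):
        if n - i >= 0:
            w[i - 1] = decisions[n - i]
    return w

x_hat_prev = np.sign(np.random.randn(2000))      # decisions from the previous iteration
fb = feedback_window(x_hat_prev, n=100, M=16)    # feedback input for symbol index 100
```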
The calculation process of the neural network of the feedback part is similar to that of the feedforward part, and the calculation process of equation (15)
Figure BDA0003682220550000103
After calculation through a neural network, obtaining the output y of the feedback part of the DL-MDFE in the k iteration (k) (n) as follows:
Figure BDA0003682220550000104
wherein B (-) denotes an operator of the feedback equalizer, w b and bb Respectively, the weights and offsets of the feedback equalizer.
Subtracting the output of the feedback equalizer from the output of the feedforward equalizer gives the total output x̃^(k)(n) of the DL-MDFE after the k-th iteration, which can be represented by equation (17):

x̃^(k)(n) = u^(k)(n) - y^(k)(n)    (17)

After the total output of the DL-MDFE is obtained, a cost function can be constructed as

J = (1/N_train) Σ_n ( x(n) - x̃^(k)(n) )²    (18)

where x(n) is the desired signal of the training phase. The gradient descent method is used here to optimize the parameters of the neural networks of the feedforward and feedback parts of the DL-MDFE. Since the cost function is related to the outputs of both the feedforward part and the feedback part of the DL-MDFE, one cost function can be used to update the networks of both parts; the weights and biases of the two parts are updated as

w_f ← w_f - μ ∂J/∂w_f,  b_f ← b_f - μ ∂J/∂b_f
w_b ← w_b - μ ∂J/∂w_b,  b_b ← b_b - μ ∂J/∂b_b    (19)

where μ is the learning rate.
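Because the cost of equation (18) depends on both the feedforward output u(n) and the feedback output y(n), one backward pass can update both networks together, which is the point of equations (18)-(19). The following sketch shows such a joint training loop with plain stochastic gradient descent; the network widths, the learning rate of 0.05 and the 300 training passes are taken from the simulation section, while the training batches are random placeholders.

```python
import torch
import torch.nn as nn

def mlp(sizes):
    """Fully connected network with tanh hidden activations and a linear output."""
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(nn.Tanh())
    return nn.Sequential(*layers)

# Feedforward branches (equations (5)-(13)) and feedback network (equation (16)).
dnn_sd, dnn_rd = mlp((16, 16, 24, 36, 1)), mlp((16, 16, 24, 36, 1))
dnn_fb = mlp((16, 24, 16, 1))               # feedback widths as printed in the text
params = list(dnn_sd.parameters()) + list(dnn_rd.parameters()) + list(dnn_fb.parameters())
opt = torch.optim.SGD(params, lr=0.05)      # learning rate from the simulation section

# Placeholder training batches: windows of r_SD, r_RD, previous-iteration decisions,
# and the desired training symbols x(n).
r_sd_w, r_rd_w = torch.randn(500, 16), torch.randn(500, 16)
fb_w, x_desired = torch.sign(torch.randn(500, 16)), torch.sign(torch.randn(500, 1))

for epoch in range(300):                    # 300 training passes as in the text
    u = dnn_sd(r_sd_w) + dnn_rd(r_rd_w)     # feedforward output u(n)
    y = dnn_fb(fb_w)                        # feedback output y(n)
    x_tilde = u - y                         # total output, equation (17)
    cost = torch.mean((x_desired - x_tilde) ** 2)   # cost function, equation (18)
    opt.zero_grad()
    cost.backward()                         # one total error drives both parts, eq. (19)
    opt.step()
```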
After the feedforward and feedback equalizers have been trained, the final estimate x̃(n) of the DL-MDFE is obtained; this estimate is closer to the desired signal x(n) than before training. It is then passed through the decision device to obtain x̂(n), which is input to the feedback equalizer as the feedback signal to start the next iteration. When the number of iterations is sufficient or the trained DL-MDFE meets the performance requirement, the training phase can end.
(2) Tracking phase
In the tracking phase, the operation of the DL-MDFE method is the same as in the training phase: intersymbol interference is removed through repeated iteration. However, for the DL-MDFE method to track the time variation of the underwater acoustic channel, the proposed DL-MDFE must keep adjusting its parameters even when no training sequence is available. Therefore, in the tracking phase the cost function is constructed from the estimate produced by the DL-MDFE and the decision on that estimate, as shown in equation (20):

J_track = (1/N) Σ_n ( x̂(n) - x̃(n) )²    (20)

where x̂(n) denotes the decision on x̃(n). Through repeated iteration and training of the DL-MDFE method in the tracking phase, the final signal estimate x̃(n) is made as close as possible to the transmitted signal x(n).
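A sketch of the decision-directed adaptation of equation (20), with a single stand-in network in place of the full DL-MDFE:

```python
import torch
import torch.nn as nn

# Decision-directed adaptation in the tracking stage (equation (20)): the desired signal
# is replaced by the BPSK decision on the equalizer's own output. In the DL-MDFE this
# gradient would update w_f, b_f, w_b and b_b.
net = nn.Sequential(nn.Linear(16, 24), nn.Tanh(), nn.Linear(24, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.05)

r_window = torch.randn(1500, 16)            # received-signal windows in the data part
x_tilde = net(r_window)                     # equalizer output estimate
x_hat = torch.sign(x_tilde).detach()        # decision device output (treated as constant)
cost_track = torch.mean((x_hat - x_tilde) ** 2)   # cost function of equation (20)
opt.zero_grad()
cost_track.backward()
opt.step()                                  # parameters keep adapting without a training sequence
```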
Based on the DL-MDFE and on the idea of iterating between equalization and decoding in Turbo equalization, this example provides a joint intelligent equalization and decoding method (JIEDA) in which the soft information generated during the iterations is used as the feedback signal to further improve system performance; the structure of JIEDA is shown in fig. 7.
As can be seen from the above figure, the equalization structure in JIEDA is similar to the DL-MDFE structure, but the feedback input of JIEDA is soft information generated after decoding iterations. This has the advantage that soft information can be exploited to improve the accuracy of the equalization. Therefore, the training process of the equalizer in JIEDA is similar to the DL-MDFE method, but differs in the feedback input, and the generation process of the soft information of the feedback of the JIEDA method is mainly described below.
After JIEDA has been trained repeatedly, the best estimate x̃(n) of the transmitted signal x(n) is finally obtained; the extrinsic information L_E(x(n)) of the equalizer then needs to be computed from x̃(n).
In the soft-information mapping rule, the equalizer output x̃(n) can be regarded as the sum of the original transmitted signal and an interference term contributed by noise and residual ISI, so that x̃(n) ≈ μ_x x(n) + η(n); the interference can then be modeled as a Gaussian random variable, and the mean μ_x and the variance σ²_x can be found by the time-averaging method of [15] as follows:

μ_x = (1/N) Σ_n x̃(n) x̂(n)    (21)
σ²_x = (1/N) Σ_n ( x̃(n) - μ_x x̂(n) )²    (22)
Therefore the conditional probability density function of the signal can be obtained, as shown in equation (23):

p( x̃(n) | x(n) = a_i ) = (1 / sqrt(2π σ²_x)) exp( -( x̃(n) - μ_x a_i )² / (2 σ²_x) )    (23)

According to the properties of the conditional probability density function, the extrinsic information L_E(x(n)) of the equalizer is finally obtained as shown in equation (24); for BPSK,

L_E(x(n)) = ln [ p( x̃(n) | x(n) = +1 ) / p( x̃(n) | x(n) = -1 ) ] = 2 μ_x x̃(n) / σ²_x    (24)
After the extrinsic information L_E(x(n)) of the equalizer is found, it is demapped and deinterleaved to generate the a-priori information L(c(n)) of the decoder. The extrinsic information output by the decoder equals its a-posteriori information minus the a-priori information:

L_D(c(n)) = L_APP(c(n)) - L(c(n))    (25)

After the extrinsic information L_D(c(n)) of the decoder is found, it is interleaved to generate the a-priori information L_D(t(n)). L_D(t(n)) then has to be soft-mapped to a soft estimate x̄(n) of the symbol x(n), which can be expressed as

x̄(n) = E[x(n)] = Σ_i a_i P( x(n) = a_i )    (26)
where E(·) denotes the expectation (averaging) operation, and P(x(n) = a_i) can be obtained from equation (27):

P( x(n) = a_i ) = Π_j (1/2) [ 1 + z_{i,j} tanh( L_D( t_j(n) ) / 2 ) ]    (27)

When BPSK modulation is used, the value of z_{i,j} is given by equation (28):

z_{i,j} = +1 if the j-th bit mapped to a_i is 0, and z_{i,j} = -1 if it is 1    (28)

Substituting equations (27) and (28) into equation (26), the soft mapping of L_D to the soft estimate x̄(n) is obtained as shown in equation (29); for BPSK,

x̄(n) = tanh( L_D( t(n) ) / 2 )    (29)
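For BPSK, the demapping chain of equations (21)-(29) reduces to a Gaussian-model LLR at the equalizer output and a tanh soft-symbol estimate at the decoder output. The sketch below follows those equations under that reading; the decoder itself is omitted and its output LLRs are placeholders.

```python
import numpy as np

def equalizer_extrinsic_llr(x_tilde, x_hat):
    """Equations (21)-(24): model x_tilde(n) = mu*x(n) + Gaussian noise, estimate mu
    and the noise variance by time averaging, then form the extrinsic LLR for BPSK."""
    mu = np.mean(x_tilde * x_hat)                       # equation (21)
    var = np.mean((x_tilde - mu * x_hat) ** 2)          # equation (22)
    return 2.0 * mu * x_tilde / var                     # equation (24)

def soft_symbol_estimate(L_D):
    """Equations (26)-(29): for BPSK (0 -> +1, 1 -> -1) the soft symbol estimate
    collapses to tanh(L_D / 2)."""
    return np.tanh(L_D / 2.0)

x_tilde = np.random.randn(1500)            # equalizer output (placeholder)
x_hat = np.sign(x_tilde)                   # hard decisions used for the time averages
L_E = equalizer_extrinsic_llr(x_tilde, x_hat)
L_D = np.random.randn(1500)                # decoder output LLRs (placeholder)
x_bar = soft_symbol_estimate(L_D)          # fed back to the feedback equalizer
```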
x̄(n) is then used as the input of the feedback equalizer to cancel intersymbol interference and to form the iterative loop. In summary, the iterative procedure of JIEDA can be summarized as shown in table 1.
Table 1. Summary of the JIEDA method (reproduced as an image in the original publication; it summarizes the iteration described above: initialize the feedback input to an all-zero vector, then for k = 1, ..., K_max retrain the feedforward and feedback networks, re-estimate the signal, and perform a Turbo decoding iteration to produce the soft feedback for the next pass).
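The outer JIEDA iteration of Table 1 can be outlined as follows. This is only a control-flow skeleton: the equalizer, its training and the SISO decoder are replaced by trivial stand-in functions so that the loop runs, and all signals are placeholders.

```python
import numpy as np

# Skeleton of the JIEDA iteration of Table 1; the equalizer, decoder and LLR steps are
# replaced by stand-in callables so the control flow itself is runnable.
def dl_mdfe(r_sd, r_rd, x_bar):            # trained equalizer: returns soft output estimate
    return 0.5 * (r_sd + r_rd) - 0.1 * x_bar

def train(r_sd, r_rd, x_bar, x_train):     # step 4: retrain feedforward/feedback networks
    pass

def decode_llr(L_E):                       # SISO decoder: extrinsic LLRs -> decoder LLRs
    return 1.5 * L_E

K_max = 3
r_sd, r_rd = np.random.randn(2000), np.random.randn(2000)
x_train = np.sign(np.random.randn(2000))
x_bar = np.zeros(2000)                     # step 2: all-zero feedback input

for k in range(1, K_max + 1):              # step 3
    train(r_sd, r_rd, x_bar, x_train)      # step 4: cost from x(n) and the network output
    x_tilde = dl_mdfe(r_sd, r_rd, x_bar)   # step 5: best estimate after retraining
    mu = np.mean(x_tilde * np.sign(x_tilde))
    var = np.mean((x_tilde - mu * np.sign(x_tilde)) ** 2)
    L_E = 2.0 * mu * x_tilde / var         # equalizer extrinsic information, eq. (24)
    L_D = decode_llr(L_E)                  # step 6: Turbo iteration through the decoder
    x_bar = np.tanh(L_D / 2.0)             # soft feedback for the next iteration, eq. (29)
```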
In order to verify the performance of the joint intelligent equalization and decoding method, the proposed method is simulated and verified as follows.
The underwater acoustic channel parameters are set as follows: sea depth 100 m, transmitter depth 20 m, receiver depth 50 m, relay node depth 40 m, transmitter-receiver distance 500 m, relay-receiver distance 250 m, sound speed 1500 m/s, and seabed absorption coefficient 0.8. The time-varying channel model uses the first-order AR model proposed in reference [16], with the relative Doppler spread set to 0.1. To verify that the method proposed in this example can track the time variation of the underwater acoustic channel, it is assumed that 500 data frames are transmitted in total, that a single data frame is 2000 symbols long with the first 500 symbols being the training sequence and the remaining 1500 symbols the information sequence, and that the channel impulse response changes once every 500 transmitted symbols, i.e., the channel takes four different impulse responses during the transmission of one data frame, as shown in fig. 8.
As fig. 8 shows, during the transmission of a data frame the method is trained with the training sequence; after training ends, the channel impulse response changes from h(1) to h(2). At that point no training sequence is available, and the channel can only be tracked by means of the decision-feedback tracking function of the method.
The two networks of the feedforward equalizer of the DL-MDFE adopt a five-layer structure, with 16, 16, 24, 36 and 1 neurons per layer, respectively. The network of the feedback equalizer of the DL-MDFE also adopts a five-layer structure, with the per-layer neuron numbers given as 16, 24, 16, 1. The number of training passes is 300 and the learning rate is 0.05.
Because the DL-MDFE obtains its iteration gain by processing the output of the previous iteration, this section first simulates the bit error rate performance of the DL-MDFE for different numbers of iterations; the results are shown in fig. 9. As fig. 9 shows, the bit error rate performance of the DL-MDFE method proposed in this example gradually improves as the number of iterations increases. The improvement is obvious over the first three iterations; after three iterations it is no longer noticeable, which indicates that by then the DL-MDFE has already obtained most of the iteration gain and its performance has reached its upper limit.
Secondly, to verify the tracking performance of the DL-MDFE under a time-varying channel, the bit error rate performance of several methods is compared: (1) a conventional LMS-based decision feedback equalizer (LMS-DFE); (2) a deep-learning-based multi-branch joint equalization method (DL-MJE), which is the feedforward equalization part of the DL-MDFE, i.e., the DL-MDFE with its feedback part removed; and (3) the DL-MDFE in non-tracking mode, i.e., with its parameters not updated outside the training phase. The bit error rate comparison of these methods is shown in fig. 10.
As fig. 10 shows, compared with the conventional LMS-based decision feedback equalizer, DL-MJE and DL-MDFE approximate the channel nonlinearity better and therefore achieve better final bit error rate performance. Secondly, because DL-MJE has no tracking function, its bit error rate under the time-varying channel is higher than it would be in a quasi-stationary channel. Finally, the bit error rate performance of the DL-MDFE in channel-tracking mode is better than that of DL-MJE and of the DL-MDFE in non-tracking mode, which shows that the proposed DL-MDFE can obtain iteration gain from the decision-feedback result and thereby further suppress intersymbol interference, while also tracking the time variation of the underwater acoustic channel.
The simulation parameters are set as follows: the original binary bits are RSC encoded with generator polynomials [G1, G2] = [5, 7]; the modulation is BPSK; one frame contains 2000 BPSK symbols, of which 500 are the training sequence and 1500 are the information sequence; and the neural network parameters are the same as for the DL-MDFE. Since JIEDA obtains its decoding gain by exchanging soft information between the equalizer and the decoder, this section first simulates the bit error rate performance of the JIEDA method for different numbers of iterations. The bit error rate performance of JIEDA in tracking mode for different iterations is shown in fig. 11.
As fig. 11 shows, the bit error rate performance of the proposed JIEDA gradually improves as the number of iterations increases, and after three iterations JIEDA has achieved most of the attainable gain. In the subsequent iterations the difference between the bit error rate curves of the fourth and fifth iterations is very small, indicating that the bit error rate stabilizes after three iterations and the performance no longer improves. These results show that the JIEDA method obtains iteration gain by iteratively exchanging soft information, and that the soft information it outputs moves closer to the correct values as the iterations proceed.
In order to verify the superiority of JIEDA over the existing joint equalization decoding method, by comparing JIEDA with other methods, the comparison method includes: (1) DL-MDFE, which also uses a decision feedback structure based on deep learning, but the feedback signal is a hard decision on the equalization output instead of the soft information used by JIEDA; (2) DL-MDFE (equalization and decoding separation); (3) Direct Adaptive Turbo Equalization (DA-TEQ); (4) Channel Estimation-based Turbo Equalization (CE-TEQ). Fig. 12 shows a comparison of Bit Error Rate (BER) performance for different signal-to-noise ratios (SNRs) after three iterations of the JIEDA proposed in this example and the above method.
As can be seen from fig. 12, the error rate performance of CE-TEQ is the worst, because in a complex underwater acoustic channel, there is an error in the estimation of the channel, thereby affecting the effect of the subsequent iteration. The DA-TEQ adopts a self-adaptive method, and can track underwater sound channel change without estimating a channel. Secondly, compared with the DL-MDFE (balanced decoding and separation), the DL-MDFE has better error rate performance, which shows that the introduced channel coding can improve the anti-interference capability of signals. In addition, the performance of DL-MDFE (equalization decoding separation) is equivalent to that of DA-TEQ, which shows that although DL-MDFE does not obtain iterative gain by iterating equalization and decoding, the nonlinear fitting capability using deep learning makes up for the deficiency. Finally, as can be seen from the error rate curve, the error rate of JIEDA provided in this example is lower than that of other methods when the number of iterations is three. It is shown that JIEDA combines the equalization and decoding methods to achieve more iterative gains.
In summary, the present invention provides a combined intelligent equalization and decoding method based on Turbo equalization, and introduces the derivation flow of the whole method, and JIEDA obtains iterative gain by using soft information after iterative decoding, thereby improving equalization and decoding effects. Finally, the method is simulated, and compared with the existing method, the proposed method has obvious improvement on the error rate performance.

Claims (3)

1. A joint intelligent equalization and decoding method for underwater acoustic cooperative communication, characterized in that the multipath signals of the destination node are processed by deep learning so that multi-branch combining and equalization are carried out jointly; deep learning serves as the equalization part of the signal, the soft information generated by iteration between the equalizer and the decoder is used as the feedback signal so that the iterative gain is fully exploited, and a cost function is constructed whose total error is used to update the network parameters of the feedforward part and the feedback part of the equalizer, wherein a deep-learning-based multi-branch decision feedback equalizer DL-MDFE is provided comprising a feedforward part and a feedback part; the feedforward part consists of two deep neural networks whose inputs are the two signals r^(SD)(n) and r^(RD)(n) received by the destination node and whose output is an estimate of the transmitted signal x(n); the feedback part is constructed by a deep neural network;
the deep learning-based multi-branch decision feedback equalizer DL-MDFE working mode is specifically divided into a training stage and a tracking stage:
the training stage specifically comprises:
in the first iteration, because the feedback part of the DL-MDFE has no input, the neural networks of the feedforward part are trained first, with the two received training-sequence signals as input to the feedforward part; because the feedforward filter of the DL-MDFE consists of two deep neural networks, the inputs of the two networks, denoted r̃^(SD)(n) and r̃^(RD)(n), are formed from the two received signals r^(SD)(n) and r^(RD)(n) as

r̃^(SD)(n) = [r^(SD)(n), r^(SD)(n-1), ..., r^(SD)(n-M_SD+1)]^T    (5)
r̃^(RD)(n) = [r^(RD)(n), r^(RD)(n-1), ..., r^(RD)(n-M_RD+1)]^T    (6)
wherein M_RD and M_SD respectively denote the numbers of neurons of the two network input layers;
assuming that each branch DNN has L layers, let the l-th layer (l = 1, ..., L-2) of DNN_SD have N_l neurons with layer output vector a^(l)(n); the output of layer l+1 can then be expressed as

a^(l+1)(n) = f(θ^(l) a^(l)(n) + v^(l))    (7)

wherein θ^(l) is the weight matrix of the l-th layer of DNN_SD, with entries θ_{i,j}^(l), i = 1, ..., N_{l+1}, j = 1, ..., N_l, v^(l) is the bias vector of the l-th layer of DNN_SD, with entries v_i^(l), i = 1, ..., N_l, and f(·) is the activation function; the tanh function is used here, whose expression is

f(x) = (e^x - e^(-x)) / (e^x + e^(-x))    (8)
similarly, let the l-th layer (l = 1, ..., L-2) of DNN_RD have N_l neurons and denote its layer-l output vector (of length N_l) by d^(l)(n); the output of layer l+1 can then be expressed as

d^(l+1)(n) = f(λ^(l) d^(l)(n) + k^(l))    (9)

wherein λ^(l) is the weight matrix of the l-th layer of DNN_RD, with entries λ_{i,j}^(l), i = 1, ..., N_{l+1}, j = 1, ..., N_l, and k^(l) is the bias vector of the l-th layer of DNN_RD, with entries k_i^(l), i = 1, ..., N_l;
thus the outputs of the (L-1)-th layers (the layers preceding the output layer) of DNN_SD and DNN_RD can be expressed as

a^(L-1)(n) = f(θ^(L-2) a^(L-2)(n) + v^(L-2))    (10)
d^(L-1)(n) = f(λ^(L-2) d^(L-2)(n) + k^(L-2))    (11)

and the total output of the feedforward part of the DL-MDFE is obtained by combining the two branches in the output layer:

u^(1)(n) = [θ^(L-1) a^(L-1)(n) + v^(L-1)] + [λ^(L-1) d^(L-1)(n) + k^(L-1)]    (12)

for convenience of presentation the feedforward output of the DL-MDFE is written as

u^(1)(n) = F(r̃^(SD)(n), r̃^(RD)(n); w_f, b_f)    (13)

wherein F(·) denotes the operator of the feedforward part of the DL-MDFE, and w_f and b_f respectively denote the weights and biases of its neural networks; for convenience, the parameters of the two neural networks are regarded as a whole, i.e. (θ, v, λ, k) ∈ {w_f, b_f};
since the feedback part of the DL-MDFE has no output at this time, u^(1)(n) is passed through the decision device to obtain the total output x̂^(1)(n) of the DL-MDFE after the first iteration; after this first iteration, x̂^(1)(n) is fed into the feedback part of the DL-MDFE; assume the current iteration is the k-th one (k > 1); the feedforward output of the DL-MDFE at the k-th iteration is

u^(k)(n) = F(r̃^(SD)(n), r̃^(RD)(n); w_f, b_f)    (14)

the input of the feedback part of the DL-MDFE is the vector of decisions on the output of the previous DL-MDFE iteration, which can be represented by equation (15):

x̂_fb^(k-1)(n) = [x̂^(k-1)(n-1), x̂^(k-1)(n-2), ..., x̂^(k-1)(n-M)]^T    (15)
wherein M denotes the number of symbols interfering with the current symbol; the computation of the feedback-part neural network is similar to that of the feedforward part, and after x̂_fb^(k-1)(n) of equation (15) is passed through the neural network, the output y^(k)(n) of the feedback part of the DL-MDFE at the k-th iteration is obtained as

y^(k)(n) = B(x̂_fb^(k-1)(n); w_b, b_b)    (16)

wherein B(·) denotes the operator of the feedback equalizer, and w_b and b_b respectively denote the weights and biases of the feedback equalizer; subtracting the output of the feedback equalizer from the output of the feedforward equalizer finally gives the total output x̃^(k)(n) of the DL-MDFE after the k-th iteration, which can be represented by equation (17):

x̃^(k)(n) = u^(k)(n) - y^(k)(n)    (17)
after the total output of the DL-MDFE is obtained, a cost function can be constructed as

J = (1/N_train) Σ_n ( x(n) - x̃^(k)(n) )²    (18)

wherein x(n) is the desired signal of the training phase; the gradient descent method is adopted to optimize the parameters of the neural networks of the feedforward part and the feedback part of the DL-MDFE, and because the cost function is related to the outputs of both the feedforward part and the feedback part of the DL-MDFE, one cost function is used to update the networks of the feedforward and feedback parts of the DL-MDFE, the weights and biases of the two parts being updated as

w_f ← w_f - μ ∂J/∂w_f,  b_f ← b_f - μ ∂J/∂b_f
w_b ← w_b - μ ∂J/∂w_b,  b_b ← b_b - μ ∂J/∂b_b    (19)

wherein μ is the learning rate;
after the feedforward equalizer and the feedback equalizer have been trained, the final estimate x̃(n) of the DL-MDFE is obtained, which is closer to the desired signal x(n) than before training; it is then passed through the decision device to obtain x̂(n), and x̂(n) is input to the feedback equalizer as the feedback signal to start the next iteration; the training phase ends when the number of iterations is sufficient or the trained DL-MDFE meets the performance requirement.
2. The joint intelligent equalization and decoding method for underwater acoustic cooperative communication as claimed in claim 1, wherein in the tracking stage, the cost function is constructed by the estimated value of DL-MDFE and the estimated decision value, as shown in equation (20):
J_track = (1/N) Σ_n ( x̂(n) - x̃(n) )²    (20)

wherein x̂(n) denotes the decision on x̃(n); through repeated iteration and training of the DL-MDFE method in the tracking stage, the final signal estimate x̃(n) is made closest to the transmitted signal x(n).
3. The joint intelligent equalization and decoding method for underwater acoustic cooperative communication according to claim 1, further comprising performing joint intelligent equalization and decoding with the soft information generated in the iterative process used as the feedback signal, specifically:

step 1: input the received signals r^(SD)(n) and r^(RD)(n), the corresponding desired signal x(n), and the maximum number of iterations K_max;

step 2: initialize the input of the feedback equalizer, x̄^(0)(n), to an all-zero vector;

step 3: for the loop variable k from k = 1 to K_max, compute the feedforward output and the total output of the equalizer:

u^(k)(n) = F(r̃^(SD)(n), r̃^(RD)(n); w_f, b_f)
x̃^(k)(n) = u^(k)(n) - B(x̄^(k-1)(n); w_b, b_b)

step 4: construct the cost function from the desired output x(n) and the network output x̃^(k)(n), and train the feedforward equalizer and the feedback equalizer;

step 5: re-estimate the signal with the trained feedforward and feedback networks to obtain the best estimated output x̃^(k)(n) of JIEDA;

step 6: perform a Turbo iteration on the best estimate x̃^(k)(n) to obtain the soft information x̄^(k)(n), which is used as the feedback input of the next iteration.
CN202210636299.4A 2022-06-07 2022-06-07 Combined intelligent equalization and decoding method for underwater acoustic cooperative communication Active CN115225437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210636299.4A CN115225437B (en) 2022-06-07 2022-06-07 Combined intelligent equalization and decoding method for underwater acoustic cooperative communication


Publications (2)

Publication Number Publication Date
CN115225437A true CN115225437A (en) 2022-10-21
CN115225437B CN115225437B (en) 2023-05-12

Family

ID=83607915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210636299.4A Active CN115225437B (en) 2022-06-07 2022-06-07 Combined intelligent equalization and decoding method for underwater acoustic cooperative communication

Country Status (1)

Country Link
CN (1) CN115225437B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6813219B1 (en) * 2003-09-15 2004-11-02 The United States Of America As Represented By The Secretary Of The Navy Decision feedback equalization pre-processor with turbo equalizer
CN113242189A (en) * 2021-04-13 2021-08-10 华南理工大学 Adaptive equalization soft information iteration receiving method combined with channel estimation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
景连友; 何成兵; 张玲玲; 孟庆微; ***; 张群飞: "Block iterative decision feedback equalizer based on soft decision for underwater acoustic communication", Journal of Electronics & Information Technology (电子与信息学报)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118089741A (en) * 2024-04-23 2024-05-28 中国人民解放军海军潜艇学院 Navigation data processing method based on delay Doppler domain Turbo equalization

Also Published As

Publication number Publication date
CN115225437B (en) 2023-05-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant