CN107733445B - Turbo code word generating method and decoding method - Google Patents

Turbo code word generating method and decoding method

Info

Publication number
CN107733445B
CN107733445B (application CN201710804826.7A)
Authority
CN
China
Prior art keywords
decoding
sub
state
code
metric
Prior art date
Legal status
Active
Application number
CN201710804826.7A
Other languages
Chinese (zh)
Other versions
CN107733445A (en)
Inventor
管武
梁利平
吴凯
任雁鹏
Current Assignee
Institute of Microelectronics of CAS
Original Assignee
Institute of Microelectronics of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Microelectronics of CAS filed Critical Institute of Microelectronics of CAS
Priority to CN201710804826.7A
Publication of CN107733445A
Application granted
Publication of CN107733445B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/29Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes combining two or more codes or code structures, e.g. product codes, generalised product codes, concatenated codes, inner and outer codes
    • H03M13/2957Turbo codes and decoding
    • H03M13/296Particular turbo code structure

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Error Detection And Correction (AREA)

Abstract

The invention discloses a Turbo code word generating method and a decoding method. The method for generating the Turbo code words comprises the following steps: at least two sub-encoders allowing simultaneous encoding and decoding respectively receive input sequences of codes, the input sequences of codes comprising at least a symbol consisting of at least two information bits; and the at least two sub-encoders encode the received input sequences of codes by adopting feedback system convolutional code (RSC) encoding, and generate and output an encoding result. The method for decoding Turbo code words comprises the following steps: receiving an input sequence to be decoded; and decoding the sequence to be decoded by a decoder to obtain and output a decoding result, wherein the decoding process comprises a first decoding stage of performing feedback system convolutional code (RSC) decoding by a first decoding device RSCDEC and a second decoding stage of performing parallel cyclic redundancy check code (PCRC) decoding on the decoding result of the first decoding stage by a second decoding device PCRCDEC. The technical problem in the related art that the encoding and decoding complexity of low-code-rate Turbo codes is high is thereby solved.

Description

Turbo code word generating method and decoding method
Technical Field
The invention relates to the field of information processing, in particular to a Turbo code word generating method and a decoding method.
Background
At present, the iterative decoding idea of Turbo codes, known as the "Turbo principle", is applied to various related fields to different degrees, such as multi-user detection, equalization with joint estimation of channel parameters, high-density storage, and even artificial intelligence. Although Turbo codes have existed for some time, their complexity and decoding delay initially limited their practical application. After more than a decade of research, however, Turbo codes have become quite mature in both coding schemes and decoding algorithms; they have now formally entered the mainstream and become a favorite of the era, and various communication specifications adopt Turbo codes as one of their standards. In the field of deep space communication, 16-state Turbo codes have been listed as a new standard by the Consultative Committee for Space Data Systems (CCSDS). In the field of mobile communications, 3GPP has formally adopted Turbo codes as one of the channel coding standards for IMT-2000 high-speed data communication. Representative 3G standards (WCDMA, CDMA-2000 and TD-SCDMA) all use Turbo codes in channel coding for high-rate, high-quality communication services.
In 1996, Berrou proposed the duobinary Turbo code, which has the following advantages over the conventional binary Turbo code: (1) CRSC codes are adopted as the subcodes, which improves the coding efficiency; (2) the interleaving depth is half that of the classic Turbo code, which reduces the decoding delay; (3) the minimum free distance is increased through inter-symbol interleaving, eliminating the error floor; (4) for decoders of the same complexity, the error-correction performance of the duobinary Turbo code is superior to that of the conventional Turbo code; and (5) the performance impact of rate puncturing on a duobinary Turbo code is smaller than on a conventional Turbo code. Due to this excellent performance, duobinary Turbo codes are widely used in many wireless communication standards, such as WiMAX (IEEE 802.16) and the European satellite network standard DVB-RCS. At present, however, Turbo codes are mainly used in the high-code-rate field, and their design and application in the low-code-rate field remain largely unexplored. Designing a low-code-rate duobinary Turbo code and realizing low-complexity encoding and decoding of it is therefore a problem to be solved urgently.
For the technical problem in the related art that the encoding and decoding complexity of low-code-rate Turbo codes is high, no effective solution has been proposed so far.
Disclosure of Invention
The embodiments of the invention provide a method for generating a Turbo code codeword and a method for decoding the Turbo code codeword, which at least solve the technical problem in the related art that the encoding and decoding complexity of low-code-rate Turbo codes is high.
According to an aspect of the embodiments of the present invention, a method for generating a Turbo code codeword is provided, the method including: at least two sub-encoders allowing simultaneous encoding and decoding respectively receive an input sequence of codes, wherein the input sequence of codes comprises at least: a symbol consisting of at least two information bits; coding an input sequence of a received code by adopting a feedback system convolutional code (RSC) through at least two sub-encoders to generate a coding result; and outputting the coding result.
Further, each sub-encoder comprises two cyclic recursive systematic convolutional codes CRSC encoders, and before the at least two sub-encoders allowing simultaneous encoding and decoding respectively receive the input sequence of codes, the method further comprises: the correlation between the input sequences of the codes received by the at least two sub-encoders is removed by an interleaver.
Further, the CRSC encoder controls the initial state and the termination state of each sub-encoder to be the same based on a self-tail-biting mechanism, wherein the at least two sub-encoders encode the input sequence of the received code by using a feedback system convolutional code (RSC) to generate an encoding result, including: acquiring the initial state bits of the sub-encoder and the final state obtained by precoding; obtaining, according to the final state, the cyclic state of the cyclic recursive systematic convolutional code and generating a coding matrix; and setting the initial state of the sub-encoder to the cyclic state obtained by precoding, and encoding to obtain a final encoding result whose termination state is the initial state bits.
Further, in the process of inputting the input sequence of codes to the sub-encoder, timing control is performed by the first control means MesgInAddr, and at least one of the following is performed at different stages: inputting information bits, a CRC check, and check bit data for encoding.
Further, in the encoding process, the timing control is performed by the second control means InnerAddr and at least one of the following is performed at different stages: non-interleaved coding of information and interleaved coding of information.
Further, in the process of outputting the encoding result, timing control is performed by the third control means OutAddr and the fourth control means OutIndex, and at least one of the following is performed at different stages: an encoding result of a predetermined length is output.
Further, the first control device performs first counting on the input information, performs bit verification after the first counting reaches a predetermined number of bits, and saves the verification result; and controlling a second control device to start second counting when the coding is finished, and starting the coding.
Further, the input of the sub-encoder is controlled by the second control device, and the stored check result is read and encoded.
The invention also provides an embodiment of a Turbo code word decoding method, which comprises the following steps: receiving an input sequence to be decoded; decoding the sequence to be decoded by a decoder to obtain a decoding result, wherein the decoding process comprises a first decoding stage and a second decoding stage, the first decoding stage being to execute feedback system convolutional code (RSC) decoding by a first decoding device RSCDEC, and the second decoding stage being to execute parallel cyclic redundancy check code (PCRC) decoding on the decoding result of the first decoding stage by a second decoding device PCRCDEC; and outputting the decoding result.
Further, receiving an input sequence to be decoded comprises: the at least two sub-decoders respectively receive the coding results generated by the corresponding sub-encoders based on the time sequence control; decoding the code sequence to be decoded by a decoder to obtain a decoding result, wherein the decoding result comprises: and the at least two sub-decoders respectively carry out iterative decoding on the corresponding coding results to obtain decoding results.
Further, in the iterative decoding process, timing control is performed so that at least one of the following is performed at different stages: non-interleaved decoding of the information and interleaved decoding of the information.
Further, the input sequence to be decoded is divided into a plurality of sub-codes, each sub-code is output by a corresponding encoder, and the plurality of sub-codes correspond to a plurality of encoders; receiving the input sequence to be decoded includes: counting the output of the current encoder under the control of a first control device AddrInM; and, after the first control device AddrInM reaches a preset number of words, selecting, under the control of a second control device AddrInL, the next encoder in the preset order from the plurality of encoders to receive input from, and performing counting again by the first control device AddrInM.
Further, the input sequence to be decoded is divided into a plurality of subcodes, and the decoding is performed on the sequence to be decoded by a decoder to obtain a decoding result, which includes: and executing iterative timing control when the plurality of subcodes are decoded by a third control device InnerItern, wherein the third control device is used for counting and selecting the corresponding subcodes from the plurality of subcodes according to the current count.
Further, when the feedback system convolutional code (RSC) decoding is executed through the first decoding device RSCDEC, the calculation of the forward transition probability and the backward transition probability is respectively executed, wherein in the calculation process, the forward transition probability and the backward transition probability are stored through a first storage unit AxInfo, the extrinsic information is stored through a second storage unit ExInfo, and the decoding decision bits are simultaneously stored through a third storage unit DxMesg and a fourth storage unit DxBits.
Further, when the second decoding means PCRCDEC performs parallel CRC decoding on the decoding result of the first decoding stage, the fourth control means CRCAddr performs timing control on the CRC decoding according to a preset timing rule, where the preset timing rule is to read data of a preset number of bits from the third storage unit DxMesg.
Further, outputting the decoding result comprises: the timing control of outputting the decoding result, which is the data read from the fourth storage unit DxBits, is performed by the fifth control means OutAddt.
Further, the first decoding device RSCDEC includes a branch Metric Gamma module, an initial state Metric iniMetric module, a state Metric calculation Metric module, a forward state Metric storage AxInfo module, a total state Metric TotalMetric module, and a likelihood ratio calculation LLR module, where the branch Metric Gamma module is configured to perform the branch metric calculation, the initial state Metric iniMetric module is configured to perform the initialization of the metrics, the state Metric calculation Metric module is configured to perform calculation according to the branch metrics, the forward state Metric storage AxInfo module is configured to store the forward state metrics calculated by the state Metric calculation Metric module, the total state Metric TotalMetric module is configured to calculate the total metrics, and the likelihood ratio calculation LLR module is configured to calculate the soft information.
Furthermore, the branch metric Gamma module combines the channel information, the check information and the external information distributed to itself according to the state transition diagram of the convolutional code or the Turbo code, and outputs the channel metric, the check metric and the branch metric.
Furthermore, the input sequence to be decoded is divided into a plurality of sub-codes, the initial state Metric iniMetric module is used for storing the forward end state and the backward end state of each sub-code, and the corresponding end state is loaded as the initial state in the next iteration to initialize the Metric.
Further, the state Metric calculation Metric module performs recursive operation according to the branch Metric to obtain a forward state Metric and a backward state Metric.
Further, a forward state metric storage AxInfo module is used to provide the forward state metrics to a likelihood ratio computation LLR module.
Further, the likelihood ratio calculation LLR module obtains prior information according to the forward state metric, and calculates and judges the likelihood ratio.
In the embodiments of the present invention, an input sequence of codes is received respectively by at least two sub-encoders allowing simultaneous encoding and decoding, wherein the input sequence of codes at least comprises a symbol consisting of at least two information bits; the at least two sub-encoders encode the received input sequence of codes by adopting a feedback system convolutional code (RSC) to generate an encoding result, and the encoding result is output. A decoder decodes a sequence to be decoded to obtain a decoding result, wherein the decoding process comprises a first decoding stage in which a first decoding device RSCDEC performs feedback system convolutional code (RSC) decoding, and a second decoding stage in which a second decoding device PCRCDEC performs parallel cyclic redundancy check code (PCRC) decoding on the decoding result of the first decoding stage; and the decoding result is output. In this way, the technical problem in the related art that the encoding and decoding complexity of low-code-rate Turbo codes is high is solved, the encoding and decoding complexity of the Turbo code is reduced, and the technical effect of error correction is achieved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention, and do not constitute an undue limitation of the invention. In the drawings:
FIG. 1 is a flow chart of an alternative method for generating Turbo code codewords in accordance with embodiments of the invention;
FIG. 2 is a schematic diagram of an alternative Turbo code codeword generation according to an embodiment of the present invention;
FIG. 3 is a diagrammatic illustration of an alternative RSC state transition in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of an alternative D-Turbo code timing sequence according to an embodiment of the present invention;
FIG. 5 is a block diagram of an alternative D-Turbo encoder according to an embodiment of the present invention;
FIG. 6 is a flow chart of an alternative method for decoding Turbo code codewords in accordance with embodiments of the invention;
FIG. 7 is a schematic diagram of an alternative D-Turbo decoding timing sequence according to an embodiment of the present invention;
FIG. 8 is a block diagram of an alternative D-Turbo decoder according to an embodiment of the present invention;
fig. 9 is a schematic diagram of the hardware structure of an alternative RSC decoder, according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The application provides an embodiment of a method for generating Turbo code words.
Fig. 1 is a flowchart of a method for generating an optional Turbo code codeword according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
step S101, at least two sub-encoders allowing simultaneous encoding and decoding respectively receive an input sequence of codes, wherein the input sequence of codes at least comprises: a symbol consisting of at least two information bits;
step S102, encoding the input sequence of the received code by at least two sub-encoders by adopting a feedback system convolutional code (RSC) to generate an encoding result;
and step S103, outputting the coding result.
As an alternative implementation, each sub-encoder includes two cyclic recursive systematic convolutional codes CRSC encoders, and before at least two sub-encoders allowing simultaneous encoding and decoding respectively receive input sequences of codes, the method further includes: the correlation between the two CRSC encoders of one of the at least two sub-encoders is removed by an interleaver.
As an optional implementation manner, the CRSC encoder controls the initial state and the termination state of each sub-encoder to be the same based on a self-tail-biting mechanism, wherein at least two sub-encoders encode an input sequence of a received code by using a feedback system convolutional code (RSC) to generate an encoding result, including: acquiring initial state bits of the sub-encoders and final states obtained by precoding; precoding according to the final state to obtain the cyclic state of the convolutional code of the cyclic recursive system and generate a coding matrix; and setting the initial state of the sub-encoder as a cyclic state obtained by pre-encoding, and encoding to obtain a final encoding result of which the termination state is an initial state bit.
As an alternative implementation, in the process of inputting the input sequence of codes to the sub-encoder, the timing control is performed by the first control means MesgInAddr, and at least one of the following is performed at different stages: the information bits, the CRC check, and the check bit data for encoding are input.
As an optional implementation manner, during the encoding process, the timing control is performed by the second control device InnerAddr, and at least one of the following is performed at different stages: non-interleaved coding of information and interleaved coding of information.
As an alternative implementation, in the process of outputting the encoding result, the third control device OutAddr and the fourth control device OutIndex perform timing control, and at different stages at least one of the following is performed: an encoding result of a predetermined length is output.
As an alternative embodiment, the first control device performs a first count on the input information, performs bit check after the first count reaches a predetermined number of bits, and saves the check result; and controlling a second control device to start second counting when the coding is finished, and starting the coding.
As an alternative embodiment, the input of the sub-encoder is controlled by the second control device, and the stored verification result is read and encoded.
A specific application of the above embodiment is further explained below with reference to fig. 2:
The codeword generation of the D-Turbo code is shown in FIG. 2. It comprises two feedback system convolutional code (RSC) encoders; the correlation between the two component codes is removed by an interleaver between them; the parity bits are punctured in order to raise the code rate; each symbol of the input sequence comprises two information bits A and B; and the component codes adopt self-tail-biting feedback system convolutional codes.
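For illustration only, the following minimal Python sketch mirrors the parallel-concatenated structure of FIG. 2: the second component encoder is fed a symbol-interleaved copy of the (A, B) input, and the combined parity stream is punctured to raise the code rate. The interleaver permutation, the stand-in parity function toy_rsc and the puncturing pattern are hypothetical and only show the data flow, not the actual component code of this embodiment.

```python
# Hypothetical sketch of the D-Turbo structure in FIG. 2 (not the real component code).
import random

def d_turbo_encode(symbols, permutation, component_encode, keep_every=2):
    parity1 = component_encode(symbols)                            # RSC1 on the natural order
    parity2 = component_encode([symbols[i] for i in permutation])  # RSC2 on interleaved symbols
    punctured = [p for i, p in enumerate(parity1 + parity2) if i % keep_every == 0]
    systematic = [bit for a, b in symbols for bit in (a, b)]       # the (A, B) information bits
    return systematic + punctured

random.seed(0)
symbols = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(8)]
perm = random.sample(range(8), len(symbols))                       # hypothetical interleaver
toy_rsc = lambda syms: [a ^ b for a, b in syms]                    # stand-in parity, not a real RSC
print(d_turbo_encode(symbols, perm, toy_rsc))
```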
In order to increase the encoding and decoding rate, each sub-encoder may be divided into two encoders. As shown in FIG. 2, the first sub-encoder is divided into two parts RSC1a and RSC1b, and the second sub-encoder is divided into two parts RSC2a and RSC2b; each part independently constitutes a cyclic RSC code, so that the two parts can be encoded and decoded simultaneously.
When the encoding result is output, parallel output is realized: 1 bit is output from each of the 4 encoders in turn.
The encoder of the duobinary Turbo code component code adopts a cyclic recursive systematic convolutional code (CRSC). The cyclic recursive systematic convolutional code is based on a self-tail-biting mechanism, so that the initial state and the termination state of the encoder are the same, forming a cycle over the states; this common state is called the cyclic state S_c.
First, define the generator matrix as G, the state of the cyclic recursive systematic convolutional code encoder at time k as S_k, and the input at time k as the vector U_k. Then for time k the recurrence relation is
S_k = G × S_{k-1} + U_k
and the final state of the cyclic recursive systematic convolutional code encoder is
S_N = G^N × S_0 + Σ_{k=1..N} G^{N-k} × U_k
The cyclic recursive systematic convolutional code requires the cyclic state to satisfy S_N = S_0 = S_c, which, substituted into the above formula, gives
S_c = G^N × S_c + Σ_{k=1..N} G^{N-k} × U_k
If S_0 = 0 is taken, then
S_N^0 = Σ_{k=1..N} G^{N-k} × U_k
and the solving formula of the cyclic state is
S_c = (I + G^N)^{-1} × S_N^0
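For illustration only, the following minimal Python sketch computes the cyclic state from this formula over GF(2). The 3×3 matrix G, the zero-start final state S_N^0 and the block length below are hypothetical example values, not the parameters of the encoder of this embodiment.

```python
# Hypothetical sketch of the cyclic-state computation S_c = (I + G^N)^{-1} S_N^0 over GF(2).
import numpy as np

def gf2_matmul(a, b):
    return (a @ b) % 2

def gf2_matpow(g, n):
    result = np.eye(g.shape[0], dtype=int)
    base = g % 2
    while n:
        if n & 1:
            result = gf2_matmul(result, base)
        base = gf2_matmul(base, base)
        n >>= 1
    return result

def gf2_inv(m):
    """Gauss-Jordan inversion over GF(2); fails if no cyclic state exists for this N."""
    n = m.shape[0]
    aug = np.concatenate([m % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if aug[r, col]), None)
        if pivot is None:
            raise ValueError("I + G^N is singular: no cyclic state for this block length")
        aug[[col, pivot]] = aug[[pivot, col]]
        for r in range(n):
            if r != col and aug[r, col]:
                aug[r] = (aug[r] + aug[col]) % 2
    return aug[:, n:]

def cyclic_state(g, s_n0, block_len):
    """S_c = (I + G^N)^{-1} S_N^0 over GF(2)."""
    i_plus_gn = (np.eye(g.shape[0], dtype=int) + gf2_matpow(g, block_len)) % 2
    return gf2_matmul(gf2_inv(i_plus_gn), s_n0)

# Hypothetical 3-bit example: G is the state-update matrix of a small recursive
# encoder (not the one in the patent), S_N0 a zero-start final state, N = 1000.
G = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]])
S_N0 = np.array([1, 0, 1])
print(cyclic_state(G, S_N0, block_len=1000))
```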
as can be seen from the above derivation, the encoding process of the cyclic recursive systematic convolutional code can be divided into two steps:
1. Pre-encoding. The initial state of the encoder is set to 0, and the pre-encoding yields the final state S_N^0. Then, according to this final state, the cyclic state S_c of the cyclic recursive systematic convolutional code encoder is obtained; since the solving process is only related to the generator matrix G, the cyclic state can be obtained simply by table lookup.
2. Then the initial state of the encoder is set to the cyclic state S_c obtained by pre-encoding and the final encoding is carried out; the termination state obtained by the encoding is equal to the initial state S_c.
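The two-step procedure can be illustrated with the following minimal Python sketch. The component code is a hypothetical memory-3 recursive encoder with arbitrarily chosen tap masks, not the code of this embodiment, and the cyclic state is found here from the linearity of the state update instead of the table lookup used in the hardware.

```python
# Hypothetical memory-3 recursive encoder used only to illustrate the two-pass flow.
FEEDBACK_TAPS = 0b011   # state taps fed back into the input (arbitrary choice)
PARITY_TAPS   = 0b101   # state taps producing the parity output (arbitrary choice)

def rsc_step(state, info_bit):
    """One step of the toy recursive encoder: returns (next_state, parity_bit)."""
    feedback = info_bit ^ (bin(state & FEEDBACK_TAPS).count("1") % 2)
    parity = feedback ^ (bin(state & PARITY_TAPS).count("1") % 2)
    next_state = ((state >> 1) | (feedback << 2)) & 0b111
    return next_state, parity

def rsc_encode(bits, start_state):
    state, parity = start_state, []
    for b in bits:
        state, p = rsc_step(state, b)
        parity.append(p)
    return state, parity

def circular_encode(bits):
    # Step 1: pre-encode from the all-zero state to obtain the final state S_N^0.
    s_n0, _ = rsc_encode(bits, start_state=0)
    # The state update is linear over GF(2), so encoding from state s ends in
    # (G^N s) xor S_N^0; the cyclic state S_c is the fixed point of that map.
    zero_input = [0] * len(bits)
    s_c = next(s for s in range(8) if rsc_encode(zero_input, s)[0] ^ s_n0 == s)
    # Step 2: encode again starting from S_c; the termination state equals S_c.
    end_state, parity = rsc_encode(bits, s_c)
    assert end_state == s_c
    return s_c, parity

print(circular_encode([1, 0, 1, 1, 0, 0, 1, 0]))
```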
The state transition diagram is shown in FIG. 3. The cyclic state S_c is related to the information sequence, and a cyclic state S_c exists provided that the matrix I + G^N is invertible. The state transition table is shown in Table 1:
TABLE 1 RSC State Transition Table
(table content provided as an image in the original publication)
Under this state transition, the cyclic initial-state table (for N % 8 = 0) is given in Table 2:
TABLE 2 RSC Loop Initialization Table
(table content provided as an image in the original publication)
There are various possible codeword mappings. According to the state transition diagram, the different codewords are mapped to unified indices, and CodeMap is then addressed by these indices.
TABLE 3 RSC Code CodeIndex Table
(table content provided as an image in the original publication)
An example in which CodeMap maps to 10 bits is as follows.
TABLE 4 RSC Code CodeMap Table
(table content provided as an image in the original publication)
The encoding timing diagram of the encoding method described in the above embodiment is shown in FIG. 4. The encoding mainly comprises 3 stages: the CRCENC, RSCENC and OUT stages.
The 3 stages are pipelined in groups, and data flow handover between the stages is realized through 2 MesgBuffers and 2 CodeBuffers.
CRCENC performs input and CRC encoding. The control timing is governed by MesgInAddr. The information bits are input when MesgInAddr is 0-999, and the 24-bit CRC check is output when MesgInAddr is 1000-1023. The 1000 information bits and the 24 check bits are stored in MesgBuffer for encoding.
RSCENC performs the RSC encoding. The control timing is governed by InnerAddr. InnerAddr is divided into 2 small stages, which execute the encoding of RSC1 (information non-interleaved) and RSC2 (information interleaved) respectively; each small stage is divided into 2 groups, which encode the first 256 2-bit symbols and the last 256 2-bit symbols respectively. Each 256-symbol encoding consists of two passes: the first pass computes the zero-start final state S_N^0, and the second pass obtains the cyclic initial state S_c from S_N^0 by table lookup and then performs tail-biting encoding from S_c to obtain the CodeIndex shown in Table 3. The tail-biting encoding results are stored in CodeBuffer.
The OUT stage performs the CodeMap lookup and output. The control timing is governed by OutAddr and OutIndex. OutAddr runs from 0 to 255, representing the 256 coded words per encoder. OutIndex runs from 0 to 4/R-1, where R is the code rate 1/3, 1/6, 1/10 or 1/20. In every 4 cycles, OutIndex looks up the CodeIndex (Table 3) of RSC1a, RSC1b, RSC2a and RSC2b in turn, then looks up CodeMap (Table 4) according to the respective CodeIndex, and outputs bit OutIndex/4 of the mapped word.
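For illustration only, the following minimal Python sketch reproduces this OUT-stage scheduling. The CodeIndex and CodeMap contents are random stand-ins (Tables 3 and 4 are only available as images in the original publication), and the bit order within a CodeMap word is an assumption made for the sketch.

```python
# Hypothetical sketch of the OUT-stage scheduling (CodeIndex/CodeMap contents are stand-ins).
import random

def out_stage(code_index, code_map, inv_rate):
    """code_index[e][addr] is the CodeIndex of encoder e at position addr; inv_rate = 1/R."""
    out = []
    for out_addr in range(256):
        for out_index in range(4 * inv_rate):
            encoder = out_index % 4                      # RSC1a, RSC1b, RSC2a, RSC2b in turn
            word = code_map[code_index[encoder][out_addr]]
            out.append((word >> (out_index // 4)) & 1)   # emit bit OutIndex // 4 (order assumed)
    return out

random.seed(0)
code_map = [random.getrandbits(10) for _ in range(64)]   # 64 entries of 10 bits, as at R = 1/10
code_index = [[random.randrange(64) for _ in range(256)] for _ in range(4)]
bits = out_stage(code_index, code_map, inv_rate=10)
assert len(bits) == 1024 * 10                            # 1024 input bits at code rate 1/10
```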
As shown in fig. 5, the encoder includes 3 sections (each section is divided by a dotted line in fig. 5).
The first part performs the CRC operation: MesgInAddr counts the input information, CRCEnc outputs the 24-bit check after counting to 1000, and the result is stored in the MesgBuffer of ping-pong structure. The generator polynomial is G(x) = x^24 + x^7 + x^2 + x + 1.
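For illustration only, the following minimal Python sketch attaches a 24-bit CRC using the generator polynomial stated above; the bit ordering (MSB first) and the all-zero initial remainder are assumptions made for the sketch, not details taken from this embodiment.

```python
# Hypothetical sketch of the 24-bit CRC attachment (bit ordering and initial value assumed).
CRC24_POLY = (1 << 24) | (1 << 7) | (1 << 2) | (1 << 1) | 1   # G(x) = x^24 + x^7 + x^2 + x + 1

def crc24(bits):
    """Bit-serial CRC: returns the 24 check bits to append after the information bits."""
    remainder = 0
    for b in bits + [0] * 24:                  # append 24 zeros, then divide by G(x)
        remainder = (remainder << 1) | b
        if remainder & (1 << 24):
            remainder ^= CRC24_POLY
    return [(remainder >> i) & 1 for i in range(23, -1, -1)]

info = [1, 0, 1] * 333 + [1]                   # a hypothetical 1000-bit information block
codeword_bits = info + crc24(info)             # the 1024 bits written into MesgBuffer
assert len(codeword_bits) == 1024
```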
When the CRC encoding is finished, InnerAddr starts counting and RSCENC is started to perform the RSC encoding. The input of RSCENC is controlled by InnerAddr and IntlvAddr, and the inputs of RSC1a, RSC1b, RSC2a and RSC2b are read from MesgBuffer and encoded. The encoding result is the CodeIndex, which is stored in CodeBuffer.
When the RSC encoding is finished, the output starts. OutAddr reads the CodeIndex from CodeBuffer, then looks up CodeMap, and outputs the encoding result.
MesgInAddr is a counter from 0 to 1024; 1024 is the stop state.
InnerAddr is a counter from 0 to 2048; 2048 is the stop state.
OutAddr is a counter from 0 to 256; 256 is the stop state.
OutIndex is a counter from 0 to 4/R-1; OutIndex stops when OutAddr stops.
RSCENC is a state machine, and its state transitions are shown in Table 3.
The application also provides an embodiment of a decoding method of the Turbo code word.
Fig. 6 is a flowchart of an alternative Turbo code codeword decoding method according to an embodiment of the present invention, and as shown in fig. 6, the method includes the following steps:
step S201, receiving an input sequence to be decoded;
step S202, a decoder performs decoding on the sequence to be decoded to obtain a decoding result, wherein the decoding process comprises a first decoding stage and a second decoding stage, the first decoding stage performing feedback system convolutional code (RSC) decoding through a first decoding device RSCDEC, and the second decoding stage performing parallel cyclic redundancy check code (PCRC) decoding on the decoding result of the first decoding stage through a second decoding device PCRCDEC;
step S203, outputting the decoding result.
As an alternative embodiment, receiving an input sequence to be decoded includes: at least two sub-decoders respectively receive the coding results generated by the corresponding sub-encoders based on the time sequence control; decoding the sequence to be decoded by a decoder to obtain a decoding result, wherein the decoding result comprises: and the at least two sub-decoders respectively carry out iterative decoding on the corresponding coding results to obtain decoding results.
As an optional implementation manner, in the iterative decoding process, at least one of the following is performed at different stages by performing timing control: information is non-interleaved and information is interleaved.
As an alternative embodiment, the input sequence to be decoded is divided into a plurality of sub-codes, each sub-code is output by a corresponding encoder, the plurality of sub-codes correspond to the plurality of encoders, and receiving the input sequence to be decoded includes: counting the output of the current encoder under the control of a first control device AddrInM; after the first control device AddrInM reaches the preset number of words, the second control device AddrInL controls to select the next encoder to receive input from the plurality of encoders in the preset order, and counting is performed again by the first control device AddrInM.
As an optional implementation manner, dividing an input sequence to be decoded into a plurality of sub-codes, and performing decoding on the sequence to be decoded by a decoder to obtain a decoding result, including: and executing iterative timing control when the plurality of subcodes are decoded by a third control device InnerItern, wherein the third control device is used for counting and selecting the corresponding subcodes from the plurality of subcodes according to the current count.
As an optional implementation manner, when the feedback system convolutional code (RSC) decoding is performed by the first decoding device RSCDEC, the calculation of the forward transition probability and the backward transition probability is performed separately, wherein in the calculation process, the forward transition probability and the backward transition probability are stored by the first storage unit AxInfo, the extrinsic information is stored by the second storage unit ExInfo, and the decoding decision bits are stored in the third storage unit DxMesg and the fourth storage unit DxBits at the same time.
As an alternative implementation, when the second decoding means PCRCDEC performs parallel CRC decoding on the decoding result of the first decoding stage, the fourth control means CRCAddr performs timing control on the CRC decoding according to a preset timing rule, where the preset timing rule is to read data with a preset number of bits from the third storage unit DxMesg.
As an optional implementation, outputting the decoding result includes: the timing control of outputting the decoding result, which is the data read from the fourth storage unit DxBits, is performed by the fifth control means OutAddt.
As an optional implementation manner, the first decoding apparatus RSCDEC includes a branch Metric Gamma module, an initial state Metric iniMetric module, a state Metric calculation Metric module, a forward state Metric storage AxInfo module, a total state Metric TotalMetric module, and a likelihood ratio calculation LLR module, where the branch Metric Gamma module is configured to perform the branch metric calculation, the initial state Metric iniMetric module is configured to perform the initialization of the metrics, the state Metric calculation Metric module is configured to perform calculation according to the branch metrics, the forward state Metric storage AxInfo module is configured to store the forward state metrics calculated by the state Metric calculation Metric module, the total state Metric TotalMetric module is configured to calculate the total metrics, and the likelihood ratio calculation LLR module is configured to calculate the soft information.
As an optional implementation manner, the branch metric Gamma module performs combination according to the channel information, the check information and the extrinsic information distributed to itself and according to the state transition diagram of the convolutional code or the Turbo code, and outputs the channel metric, the check metric and the branch metric.
As an optional implementation manner, the input sequence to be decoded is divided into a plurality of sub-codes, the initial state Metric iniMetric module is configured to store a forward end state and a backward end state of each sub-code, and load the corresponding end state as an initial state when next iteration occurs, so as to initialize the Metric.
As an optional implementation, the state Metric calculation Metric module performs a recursive operation according to the branch Metric to obtain a forward state Metric and a backward state Metric.
As an alternative embodiment, the forward state metric storage AxInfo module is configured to provide the forward state metrics to the likelihood ratio calculation LLR module.
As an alternative implementation, the likelihood ratio calculation LLR module obtains prior information according to the forward state metric, and calculates the decision likelihood ratio.
A specific application of the above embodiment is further explained below with reference to fig. 7:
As shown in FIG. 7, the decoding mainly comprises 4 stages: the Input, RSCDEC, PCRCDEC and Out stages.
These 4 stages are pipelined in groups, and data flow handover between the stages is realized through 2 CxInfo buffers, 2 DxMesg buffers and 2 DxBits buffers.
The Input stage performs input. The control timing is governed by AddrInM and AddrInL, which correspond to OutAddr and OutIndex of the encoding OUT stage, respectively. AddrInM runs from 0 to 255 for the 256 coded words per encoder. AddrInL runs from 0 to 4/R-1, where R is the code rate 1/3, 1/6, 1/10 or 1/20. In every 4 cycles, AddrInL queries RSC1a, RSC1b, RSC2a and RSC2b in turn and inputs the AddrInL/4-th piece of information of each. The input information is stored in CxInfo. CxInfo stores each of the 1/R pieces of information corresponding to a CodeMap word into a different RAM block, so that the subsequent decoding can read them out in parallel.
RSCDEC performs the RSC decoding. The control timing is governed by InnerItern and InnerIndex. InnerItern is divided into 4 small stages, which execute the decoding of RSC1a/1b (information non-interleaved) and RSC2a/2b (information interleaved) respectively; each small stage is divided into 2 groups, which execute the calculation of the forward transition probability A_k(s) and the backward transition probability B_k(s) respectively. The effective calculation of the two passes needs 256 beats; the front part has an 8-beat gap for loading the initial state, and the rear part has an 8-beat gap for storing the calculation results. The decoded forward transition probability A_k(s) and backward transition probability B_k(s) are stored in AxInfo, the extrinsic information L_e(u_k) is stored in ExInfo, and the decoded decision bits are stored in both DxMesg and DxBits. The iteration of the whole process is controlled by InnerItern, which completes one iteration flow every 4 counts.
PCRCDEC performs the parallel CRC decoding. The control timing is governed by CRCAddr. The module reads 32 bits from DxMesg at a time and reads 1024 bits in 32 beats for CRC decoding, so that the 1024-bit parallel CRC decoding is completed in 32 beats.
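For illustration only, the following minimal Python sketch shows such a 32-bit-per-beat CRC check. It reuses the polynomial and bit ordering assumed in the encoder sketch earlier; a hardware implementation would fold each 32-bit word combinationally in one beat instead of bit by bit.

```python
# Hypothetical sketch of a 32-bit-per-beat CRC check over the 1024 decided bits.
CRC24_POLY = (1 << 24) | (1 << 7) | (1 << 2) | (1 << 1) | 1   # same assumed polynomial as before

def crc24_update_word(remainder, word):
    """Fold one 32-bit word into the running remainder (one beat in hardware)."""
    for i in range(31, -1, -1):
        remainder = (remainder << 1) | ((word >> i) & 1)
        if remainder & (1 << 24):
            remainder ^= CRC24_POLY
    return remainder

def pcrc_check(words):
    remainder = 0
    for w in words:                 # 32 words of 32 bits = 1024 bits read from DxMesg
        remainder = crc24_update_word(remainder, w)
    return remainder == 0           # zero remainder means the decoded block passes the check

# Usage (with codeword_bits from the encoder sketch, packed MSB-first into 32-bit words):
#   words = [int("".join(map(str, codeword_bits[i:i + 32])), 2) for i in range(0, 1024, 32)]
#   assert pcrc_check(words)
```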
The Out module performs output. The control timing is governed by OutAddt. The module reads the decoding result bit by bit from DxBits.
As shown in fig. 8, the decoder includes four parts:
the first part is the execution input. AddrInM is the codeword read counter and AddrInL is the read counter for 1/R information in each codeword. Inputting and storing the input into a ping-pong memory CxInfo; CxInfo stores 1/R information in each codeword in parallel.
When the input is finished, the decoding is started. When decoding, two decoders are started simultaneously, and decoding of RSC1a and RSC1b (or RSC2a and RSC2b) is performed respectively. During decoding, when InnerItern modulo 4 is 0, calculating forward transition probabilities of RSC1a and RSC1 b; when InnerItern modulo 4 is 1, calculating the backward transition probability of RSC1a and RSC1 b; when InnerItern modulo 4 is 2, calculating the forward transition probability of RSC2a and RSC2 b; calculations of the backward transition probabilities of RSC2a and RSC2b are performed when innerlitern modulo 4 is 3. RSC calculates and outputs extrinsic information and a decoding result, wherein the extrinsic information is stored in ExInfo, and the decoding result is simultaneously stored in DxMesg and DxBits.
When the decoding is finished, CRCdec operation is executed, CRCAdd counts input bits, and pcrcdec outputs bits for checking correctness or not when counting 32.
When CRC decoding is finished, output operation is executed. The output is 1000 bits by bit from DxBits.
AddrInL is a cyclic counter from 0 to 4/R that stops when AddrInM is 256.
AddrInM is a counter from 0 to 256; 256 is the stop state, and AddrInM is incremented by 1 when AddrInL is 4/R-1.
InnerIndex is an up-then-down cyclic counter: it first counts up from -8 to 256+8 and then counts down from 256+8 to -8. InnerItern is a counter from 0 to ITERNUM; ITERNUM is the stop state.
CRCAddr is a counter from 0 to 32; 32 is the stop state.
OutAddr is a counter from 0 to 1000; 1000 is the stop state.
RSCDec is an RSC decoder.
As shown in fig. 9, the hardware structure of the decoder includes a branch Metric Gamma module, an initial state Metric iniMetric module, a state Metric calculating Metric module, a forward state Metric storage AxInfo module, a total state Metric TotalMetric module, and a likelihood ratio calculating LLR module.
(1) The Gamma module performs the branch metric calculation:
The branch metrics are formed by combining the channel information, the check information and the extrinsic information distributed to this module according to the state transition diagram of the convolutional code or the Turbo code, and the module outputs the channel metric, the check metric and the branch metric E_k, where E_k combines the channel metric, the check metric and the extrinsic information and T is a normalization factor. (The detailed expressions are provided as images in the original publication.) E_k corresponds to InGamma in the figure, and its 64 values correspond to the 64 state transition branches of Table 3.
(2) The Metric module completes the state metric calculation according to the branch metrics:
Recursive operations are performed on the branch metrics to obtain the forward state metric A_k(s) and the backward state metric B_k(s):
A_k(s) = min_{s'} [ A_{k-1}(s') + E_k(s', s) ]
B_k(s) = min_{s'} [ B_{k+1}(s') + E_{k+1}(s, s') ]
The forward state metric A_k(s) is computed when InnerItern modulo 2 is 0, and the backward state metric B_k(s) is computed when InnerItern modulo 2 is 1. In this calculation, the forward transition states are shown in Table 5, and the backward transition states are shown in Table 1.
TABLE 5 RSC Forward State Transition Table
(table content provided as an image in the original publication)
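For illustration only, the following minimal Python sketch performs these forward/backward recursions in min-sum (Max-Log-MAP) form. The trellis is a dense hypothetical one in which every state can reach every state; in the real code only 4 branches enter each of the 16 states, and the boundary rows would be loaded from iniMetric.

```python
# Hypothetical sketch of the Metric-module recursions on a dense 16-state trellis.
import numpy as np

def forward_backward(branch, init_a, init_b):
    """branch[k, s_prev, s_next] is the branch metric E_k; returns the metrics A and B."""
    n, n_states, _ = branch.shape
    A = np.full((n + 1, n_states), np.inf)
    B = np.full((n + 1, n_states), np.inf)
    A[0], B[n] = init_a, init_b                  # boundary states (iniMetric in the hardware)
    for k in range(1, n + 1):                    # A_k(s) = min_{s'} [A_{k-1}(s') + E_k(s', s)]
        A[k] = np.min(A[k - 1][:, None] + branch[k - 1], axis=0)
        A[k] -= A[k].min()                       # per-step normalization to keep metrics bounded
    for k in range(n - 1, -1, -1):               # B_k(s) = min_{s'} [B_{k+1}(s') + E_{k+1}(s, s')]
        B[k] = np.min(branch[k] + B[k + 1][None, :], axis=1)
        B[k] -= B[k].min()
    return A, B

# Hypothetical data: 8 trellis steps over 16 states with random branch metrics.
rng = np.random.default_rng(0)
E = rng.random((8, 16, 16))
A, B = forward_backward(E, init_a=np.zeros(16), init_b=np.zeros(16))
```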
(3) The iniMetric module completes the initialization of the metrics.
Because the RSC code adopts a cyclic structure, the initial state of each sub-decoder is the final state of its previous iteration. iniMetric stores the forward and backward end states of RSCA and the forward and backward end states of RSCB, 4 groups of state values in total; in the next iteration the corresponding end state is loaded as the initial state to initialize the metrics.
(4) AxInfo stores the forward metric computation results for LLR computation.
(5) The TotalMetric module calculates the total metric:
For each value of u_k in {0, 1, 2, 3} there are 16 state transitions, and for each transition (s', s) the total metric A_{k-1}(s') + E_k(s', s) + B_k(s) is formed.
When u_k = 0, s' corresponds to column 2 of Table 1; when u_k = 1, s' corresponds to column 3 of Table 1; when u_k = 2, s' corresponds to column 4 of Table 1; and when u_k = 3, s' corresponds to column 5 of Table 1.
(6) The calculation of the soft LLR information mainly comprises 3 steps:
1) Minimum state transition: for each symbol value a 16-to-1 minimum comparison is completed, giving the 4 minima
M_k(u_k) = min over the 16 transitions (s', s) labelled with u_k of [ A_{k-1}(s') + E_k(s', s) + B_k(s) ]
2) Likelihood ratio calculation: the metric for u_k = 0 is subtracted from the metrics for u_k = 1, 2, 3 to obtain the a priori information:
L(u_k) = M_k(u_k) - M_k(0), u_k ∈ {1, 2, 3}
3) Finally, the decision likelihood ratio is calculated from the likelihood ratios of step 2). (The detailed expression is provided as an image in the original publication.)
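For illustration only, the following minimal Python sketch carries out the per-symbol 16-to-1 minimum and likelihood-ratio steps for a duobinary symbol u_k in {0, 1, 2, 3}. The assignment of transitions to symbol values is randomly generated (the real assignment comes from Table 1), and the sign convention of the likelihood ratios is an assumption.

```python
# Hypothetical sketch of the TotalMetric / LLR step for one duobinary symbol.
import numpy as np

def symbol_llrs(A_prev, E_k, B_next, branches_for_symbol):
    """branches_for_symbol[u] lists the (s_prev, s_next) transitions labelled with symbol u."""
    totals = np.array([
        min(A_prev[sp] + E_k[sp, sn] + B_next[sn] for sp, sn in pairs)   # 16-to-1 minimum per u
        for pairs in branches_for_symbol
    ])
    llr = totals - totals[0]            # ratio of each u relative to u = 0 (sign convention assumed)
    decision = int(np.argmin(totals))   # decided symbol: the smallest total metric
    return llr, decision

# Hypothetical 16-state trellis slice with 16 randomly assigned transitions per symbol value.
rng = np.random.default_rng(1)
pairs = [(sp, sn) for sp in range(16) for sn in range(16)]
order = rng.permutation(len(pairs))
branches = [[pairs[i] for i in order[16 * u:16 * (u + 1)]] for u in range(4)]
llr, decision = symbol_llrs(rng.random(16), rng.random((16, 16)), rng.random(16), branches)
```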
The application provides a design method of a low-code-rate duobinary Turbo code and a corresponding encoding and decoding device. The method realizes encoding without tail bits through cyclic recursive systematic convolutional codes, generates low-code-rate codewords by mapping the check bits onto long sequences, and realizes high-speed encoding and decoding through the parallelism of multiple recursive systematic convolutional codes. The device can realize low-complexity, low-code-rate, tail-bit-free duobinary Turbo encoding and decoding at high speed, and solves the problem that conventional low-code-rate error-correcting codes have high complexity.
Compared with the prior art, the method has the following advantages:
the low-code-rate binary Turbo case code is designed, the low-complexity and low-code-rate binary Turbo case code is coded and decoded, and the problem of design and application of the Turbo code in the low-code-rate field is solved.
The present application receives an input sequence of codes respectively through at least two sub-encoders allowing simultaneous encoding and decoding, wherein the input sequence of codes at least comprises a symbol consisting of at least two information bits; encodes the received input sequence of codes by the at least two sub-encoders using a feedback system convolutional code (RSC) to generate an encoding result; outputs the encoding result; decodes a sequence to be decoded by a decoder to obtain a decoding result, wherein the decoding process comprises a first decoding stage in which a first decoding device (RSCDEC) performs feedback system convolutional code (RSC) decoding, and a second decoding stage in which a second decoding device (PCRCDEC) performs parallel cyclic redundancy check (PCRC) decoding on the decoding result of the first decoding stage; and outputs the decoding result. This solves the technical problem in the related art that the encoding and decoding complexity of low-code-rate Turbo codes is high, reduces the encoding and decoding complexity of the Turbo code, and achieves the technical effect of error correction.
Of course, it is not necessary for any product to achieve all of the above-described advantages at the same time for the practice of the present application.
It should be noted that although the flow chart of the drawings shows a logical order, in some cases the steps shown or described may be performed in an order different from that shown.
The above-mentioned apparatus may comprise a processor and a memory, and the above-mentioned units may be stored in the memory as program units, and the processor executes the above-mentioned program units stored in the memory to implement the corresponding functions.
The memory may include volatile memory in a computer readable medium, Random Access Memory (RAM) and/or nonvolatile memory such as Read Only Memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip.
The order of the embodiments of the present application described above does not represent the merits of the embodiments.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. In the embodiments provided in the present application, it should be understood that the disclosed technology can be implemented in other ways.
The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units may be a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, or a magnetic or optical disk.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (17)

1. A method for generating Turbo code words is characterized by comprising the following steps:
at least two sub-encoders allowing simultaneous encoding and decoding respectively receive an input sequence of codes, wherein the input sequence of codes comprises at least: a symbol consisting of at least two information bits;
coding the input sequence of the received code by the at least two sub-encoders by adopting a feedback system convolutional code (RSC) to generate a coding result;
outputting the coding result;
each sub-encoder comprises two cyclic recursive systematic convolutional codes, CRSC, encoders, before at least two sub-encoders allowing simultaneous encoding and decoding receive the input sequence of codes, respectively, the method further comprises:
removing correlation between the input sequences of the codes received by the at least two sub-encoders through an interleaver;
the CRSC encoder controls the initial state and the termination state of each sub-encoder to be the same based on a self-tail-biting mechanism, wherein the at least two sub-encoders encoding the received input sequence of codes by adopting a feedback system convolutional code (RSC) to generate an encoding result comprises the following steps:
acquiring initial state bits of the sub-encoders and final states obtained by pre-encoding;
precoding according to the last state to obtain a cyclic state of a convolutional code of a cyclic recursive system and generate a coding matrix;
setting the initial state of the sub-encoder as a cyclic state obtained by pre-encoding, and encoding to obtain a final encoding result of which the termination state is the initial state bit;
in the process of inputting the input sequence of codes to the sub-encoder, timing control is performed by the first control means MesgInAddr and at least one of the following is performed at different stages: inputting information bits, a CRC check and check bit data for coding;
during the encoding, the timing control is performed by the second control means InnerAddr and at least one of the following is performed at different stages: non-interleaved coding of information and interleaved coding of information.
2. The method according to claim 1, characterized in that, in the course of outputting the encoding result, timing control is performed by means of a third control means OutAddr and a fourth control means OutIndex, and at least one of the following is performed at different stages: an encoding result of a predetermined length is output.
3. The method according to claim 2, wherein the first control means performs a first count on the input information, performs a bit check after the first count reaches a predetermined number of bits, and saves the check result; and controlling the second control device to start second counting when the coding is finished, and starting the coding.
4. A method according to claim 3, characterized in that the input of the sub-encoder is controlled by the second control means and reads the check result already saved and encodes it.
5. A decoding method of Turbo code words is characterized by comprising the following steps:
receiving an input sequence to be decoded;
decoding the sequence to be decoded by a decoder to obtain a decoding result, wherein the decoding process comprises a first decoding stage and a second decoding stage, the first decoding stage is to execute feedback system convolutional code (RSC) decoding by a first decoding device RSCDEC, and the second decoding stage is to execute parallel cyclic redundancy check code (PCRC) decoding on the decoding result of the first decoding stage by a second decoding device PCRCDEC;
outputting the decoding result;
the input sequence to be decoded is divided into a plurality of subcodes, each subcode is output by a corresponding encoder, the plurality of subcodes correspond to the plurality of encoders, and receiving the input sequence to be decoded comprises:
counting the output of the current encoder under the control of a first control device AddrInM;
after the first control device AddrInM reaches the preset number of words, the second control device AddrInL controls to select the next encoder to receive input from the plurality of encoders according to the preset order, and the first control device AddrInM performs counting again.
6. The method of claim 5,
receiving an input sequence to be decoded comprises: the at least two sub-decoders respectively receive the coding results generated by the corresponding sub-encoders based on the time sequence control;
decoding the sequence to be decoded by a decoder to obtain a decoding result, wherein the decoding result comprises: and the at least two sub-decoders respectively carry out iterative decoding on the corresponding coding results to obtain decoding results.
7. The method of claim 6, wherein at least one of the following is performed at different stages during the iterative decoding by performing timing control: information is non-interleaved and information is interleaved.
8. The method of claim 5, wherein the input sequence to be decoded is divided into a plurality of subcodes, and the decoding is performed on the sequence to be decoded by a decoder to obtain a decoding result, comprising:
and executing iteration time sequence control when the plurality of subcodes are decoded through a third control device InnerItern, wherein the third control device is used for counting and selecting the corresponding subcodes from the plurality of subcodes according to the current count.
9. The method as claimed in claim 5, wherein the calculation of the forward transition probability and the backward transition probability is performed separately when the feedback system convolutional code (RSC) decoding is performed by the first decoding means RSCDEC, wherein in the process of performing the calculation, the forward transition probability and the backward transition probability are stored by a first storage unit AxInfo, the extrinsic information is stored by a second storage unit ExInfo, and the decoding decision bits are simultaneously stored by a third storage unit DxMesg and a fourth storage unit DxBits.
10. The method as claimed in claim 9, wherein, when the parallel cyclic redundancy check code PCRC decoding is performed on the decoding result of said first decoding stage by the second decoding means PCRCDEC, the timing control of the CRC decoding is performed by the fourth control means CRCAddr according to a preset timing rule, said preset timing rule being that a preset number of bits of data are read from said third storage unit DxMesg.
11. The method of claim 9, wherein outputting the decoding result comprises:
performing, by a fifth control device OutAddt, timing control of outputting the decoding result, where the decoding result is data read from the fourth storage unit DxBits.
12. The method as claimed in claim 9, wherein the first decoding apparatus RSCDEC comprises a branch Metric Gamma module, an initial state Metric iniMetric module, a state Metric calculation Metric module, a forward state Metric storage AxInfo module, a total state Metric TotalMetric module and a likelihood ratio calculation LLR module, wherein the branch Metric Gamma module is configured to perform the branch metric calculation, the initial state Metric iniMetric module is configured to perform the initialization of the metrics, the state Metric calculation Metric module is configured to perform calculation according to the branch metrics, the forward state Metric storage AxInfo module is configured to store the forward state metrics calculated by the state Metric calculation Metric module, the total state Metric TotalMetric module is configured to calculate the total metrics, and the likelihood ratio calculation LLR module is configured to calculate the soft information.
13. The method of claim 12, wherein the branch Metric Gamma module performs combination according to the channel information, the check information and the extrinsic information distributed to itself and according to the state transition diagram of the convolutional code or the Turbo code, and outputs the channel metric, the check metric and the branch metric.
14. The method of claim 12 wherein the input sequence to be decoded is divided into a plurality of sub-codes, the initial state Metric iniMetric module is configured to store a forward end state and a backward end state of each sub-code, and to initialize the Metric by loading the corresponding end state as an initial state at a next iteration.
15. The method of claim 12, wherein the state Metric calculation Metric module performs a recursive operation based on the branch metrics to obtain forward state metrics and backward state metrics.
16. The method as recited in claim 12 wherein said forward state metric storage AxInfo module is configured to provide said forward state metrics to said likelihood ratio computation LLR module.
17. The method of claim 16, wherein the likelihood ratio calculation LLR module calculates the decision likelihood ratio based on a priori information obtained from the forward state metrics.
CN201710804826.7A 2017-09-07 2017-09-07 Turbo code word generating method and decoding method Active CN107733445B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710804826.7A CN107733445B (en) 2017-09-07 2017-09-07 Turbo code word generating method and decoding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710804826.7A CN107733445B (en) 2017-09-07 2017-09-07 Turbo code word generating method and decoding method

Publications (2)

Publication Number Publication Date
CN107733445A CN107733445A (en) 2018-02-23
CN107733445B true CN107733445B (en) 2021-07-09

Family

ID=61205045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710804826.7A Active CN107733445B (en) 2017-09-07 2017-09-07 Turbo code word generating method and decoding method

Country Status (1)

Country Link
CN (1) CN107733445B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6304995B1 (en) * 1999-01-26 2001-10-16 Trw Inc. Pipelined architecture to decode parallel and serial concatenated codes
CN1455565A (en) * 2003-03-17 2003-11-12 西南交通大学 Parallel Turbo coding-decoding method based on block processing for error control of digital communication
CN101083512A (en) * 2006-06-02 2007-12-05 中兴通讯股份有限公司 Dual-binary system tailbaiting Turbo code coding method and apparatus
CN101257315A (en) * 2008-04-03 2008-09-03 浙江大学 Method for duobinary Turbo code to stop iterative decoding
CN101867379A (en) * 2010-06-24 2010-10-20 东南大学 Cyclic redundancy check-assisted convolutional code decoding method

Also Published As

Publication number Publication date
CN107733445A (en) 2018-02-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant