CN106788899A - Backward boundary initialization method for a highly reliable Turbo decoder - Google Patents

Backward boundary initialization method for a highly reliable Turbo decoder

Info

Publication number
CN106788899A
Authority
CN
China
Prior art keywords
window
decoding
iteration
training sequence
initial value
Prior art date
Legal status
Pending
Application number
CN201611254047.6A
Other languages
Chinese (zh)
Inventor
刘振
杨乐
申山山
吴斌
Current Assignee
Institute of Microelectronics of CAS
Original Assignee
Institute of Microelectronics of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Microelectronics of CAS filed Critical Institute of Microelectronics of CAS
Priority to CN201611254047.6A
Publication of CN106788899A
Current legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L1/00 Arrangements for detecting or preventing errors in the information received
    • H04L1/004 Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L1/0056 Systems characterized by the type of code used
    • H04L1/0059 Convolutional codes

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Error Detection And Correction (AREA)

Abstract

A backward boundary initialization method for a highly reliable Turbo decoder, comprising: combining a training sequence with boundary passing between iterations, the β boundary value at the starting position of the i-th decoding window in the (k-1)-th iteration is stored; in the k-th iteration, the β boundary value at the starting position of the i-th window from the previous iteration is passed to the training window corresponding to the (i-2)-th decoding window as its β boundary initial value; and the β boundary initial value of the (i-2)-th decoding window is produced by the backward recursion over that training window, where i ≥ 3. The present invention is applicable to various communication standards. By combining the training sequence with boundary passing between iterations, the same performance as existing schemes can be achieved at high code rates with a smaller sliding window, and under a parallel decoding architecture the memory overhead for storing state metrics can be greatly reduced.

Description

Backward boundary initialization method for a highly reliable Turbo decoder
Technical field
The invention belongs to the field of channel decoding within wireless communications, and more specifically relates to a decoding method for Turbo decoders, in particular to a backward boundary initialization method for a highly reliable Turbo decoder whose component decoders use the log-MAP or max-log-MAP algorithm.
Background art
To improve the transmission reliability of wireless communication, forward error correction coding is commonly used in communication systems. In 1993, C. Berrou, A. Glavieux and P. Thitimajshima first proposed the concept of Turbo codes. A Turbo code is a parallel concatenated convolutional code: the Turbo encoder uses two parallel convolutional component encoders, where the input message sequence of the second component encoder is first processed by a random interleaver before convolutional encoding, and the encoded bits are then multiplexed and punctured to raise the code rate. The introduction of Turbo codes was a landmark breakthrough in channel coding; their ability to approach the Shannon limit attracted wide attention and research, and the underlying iterative idea quickly found broad application in wireless communications, for example in iterative receivers and iteration-based channel estimation.
Because Turbo codes can approach the Shannon limit, they have been adopted as the forward error correction scheme by many wireless communication standards, such as the High Speed Downlink Packet Access (HSDPA) protocol and the Long Term Evolution (LTE) protocol of the 3rd Generation Partnership Project (3GPP). Turbo codes are also gradually being applied in systems such as satellite communication.
In Turbo code applications, the decoder at the receiver generally uses a sliding-window algorithm to avoid storing the state metrics of the whole code block and thereby reduce memory use. To improve transmission efficiency, Turbo coding schemes generally use puncturing to delete parity bits and raise the code rate; in 3GPP LTE systems the code rate can reach 0.95. The log-likelihood ratios (LLRs) corresponding to the deleted parity bits are filled with 0, and because a large amount of useful LLR information is deleted, an obvious decoding performance loss results. The conventional remedy is to increase the training sequence length and the decoding window length to improve performance at high code rates, but this increases memory overhead and decoding latency.
Summary of the invention
In view of this, in order to solve the problem that the prior art suffers an obvious performance degradation at high code rates when using a sliding-window decoding algorithm, the present invention provides a backward boundary initialization method for a highly reliable Turbo decoder.
Specifically, the present invention proposes a backward boundary initialization method for a Turbo decoder, characterized by comprising the following steps:
Combining the training sequence with boundary passing between iterations, the β boundary value at the starting position of the i-th decoding window in the (k-1)-th iteration is stored; in the k-th iteration, the β boundary value at the starting position of the i-th window from the previous iteration is passed to the training window corresponding to the (i-2)-th decoding window as its β boundary initial value, and the β boundary initial value of the (i-2)-th decoding window is produced by the backward recursion over that training window, where i is a natural number and i ≥ 3.
In the first iteration, except for the last decoding window, the boundary initial value of each decoding window is produced only by its training sequence, and the β initial values of the training sequences are set to equiprobable values.
In each iteration, from the 3rd decoding window to the last decoding window, the β boundary values at the head starting positions of these decoding windows must be stored in the SMP memory.
In the second and subsequent iterations, for the training sequence of the first decoding window, its β boundary initial value is set to the β boundary value at the head starting position of the 3rd decoding window saved in the SMP memory during the previous iteration.
The last two decoding windows do not need the boundary value passed from the previous iteration as the boundary initial value of their training sequences.
The Turbo decoder uses component decoders based on the log-MAP or max-log-MAP decoding algorithm.
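As an illustration of the initialization rule described above, the following minimal sketch selects the β value placed at the tail of a training window in a given iteration. It is written under the assumption of the 8-state LTE constituent code; the names (training_beta_init, smp_prev, NUM_STATES) are illustrative and are not taken from the patent.

```python
import numpy as np

NUM_STATES = 8                              # assumption: 8-state LTE constituent code
EQUIPROBABLE = np.zeros(NUM_STATES)         # all-equal metrics (uninformative start)
TRELLIS_END = np.full(NUM_STATES, -np.inf)  # encoder terminates in state 0, so
TRELLIS_END[0] = 0.0                        # beta_N(0) = 0 and beta_N(s != 0) = -inf


def training_beta_init(i, k, num_windows, smp_prev):
    """Beta value placed at the tail of the training window that serves decoding
    window i (1-based) in iteration k; smp_prev maps a window index to the beta
    vector stored at that window's head during iteration k-1."""
    if i == num_windows:
        return None                  # last window: it has no training window at all
    if i == num_windows - 1:
        return TRELLIS_END           # penultimate window: its training window ends
                                     # at the true trellis termination
    if k == 1:
        return EQUIPROBABLE          # first iteration: nothing has been passed yet
    return smp_prev[i + 2]           # reuse the head of window i+2 from iteration k-1
```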
The present invention also provides a backward boundary initialization method for a Turbo decoder, characterized by comprising the following steps:
Step S1: partition the data into windows. Assume the code block length is N and the window length is W; there are then ⌈N/W⌉ decoding windows, where ⌈·⌉ denotes rounding up and N and W are positive integers. The length of the training sequence equals the decoding window length W.
Step S2: in the first iteration, first carry out the forward recursion and the backward recursion over the training sequence simultaneously; the forward recursion state metric α is computed by the forward recursion and stored in a last-in first-out (LIFO) memory, and the β boundary initial value of the decoding window is obtained from the backward recursion over the training sequence.
Then the backward recursion inside the decoding window starts; during this backward recursion, the β boundary initial value obtained above and the α values read out of the LIFO memory are fed to the log-likelihood computation unit to compute the log-likelihood ratio of the corresponding bit. After the LLRs of all bits in the first decoding window have been computed, the next decoding window is processed and the above operations are repeated.
Starting from the 3rd decoding window, the β value at the head starting position of each decoding window is stored in the SMP memory, to be used in the next iteration as the β boundary initial value of the associated training sequence.
Step S3: in the second iteration, the β boundary value at the head starting position of the i-th window stored in the SMP memory during the previous iteration is passed to the training sequence corresponding to the (i-2)-th decoding window as its β boundary initial value, where i is a natural number and i ≥ 3.
Step S4: repeat step S3 until the fixed number of iterations is reached, and the decoding ends.
In the first iteration, the boundary initial values of the backward recursion β of the training sequences in step S2 are all set to equiprobable values.
For the (⌈N/W⌉-1)-th window, i.e. the penultimate decoding window, the boundary initial value of its training sequence is exactly the β boundary initial value of the whole trellis, so the boundary value from the previous iteration is not needed; for the ⌈N/W⌉-th window, i.e. the last decoding window, there is no training sequence and the boundary value from the previous iteration is likewise not needed. However, the β values at the head starting positions of these two windows still need to be passed to the next iteration; the window partition and this storage/reuse pattern are summarized in the sketch below.
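The following sketch, using hypothetical names such as window_schedule, lists for each window its bit range, whether its head β is stored for the next iteration (step S2/S3), and which stored head, if any, seeds its training window from the second iteration onward. It is an illustration of the bookkeeping only, not the patent's implementation.

```python
import math


def window_schedule(N, W):
    """Step S1 partition plus the storage/reuse pattern of steps S2-S3."""
    num_windows = math.ceil(N / W)
    schedule = []
    for i in range(1, num_windows + 1):
        bit_range = ((i - 1) * W, min(i * W, N))
        stores_head = i >= 3                                    # heads of windows 3..ceil(N/W)
        reuses_head_of = i + 2 if i <= num_windows - 2 else None  # last two windows: none
        schedule.append({"window": i, "bits": bit_range,
                         "stores_head": stores_head,
                         "reuses_head_of": reuses_head_of})
    return schedule


# Example with the LTE sizes used later in the text: N = 6144 and W = 32 give 192
# windows; window 1 reuses the stored head of window 3, while windows 191 and 192
# (the last two) reuse nothing.
```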
As can be seen from the above technical solution, compared with existing boundary initialization schemes, the solution of the present invention combines the training sequence with boundary passing between iterations. The training sequence length of a decoding window in the first iteration is L, but in the second iteration the equivalent training length can be regarded as 3L, and in the third iteration as 6L. As the number of iterations increases, the equivalent training length grows rapidly, so the backward β boundary produced by this scheme is more reliable; a smaller decoding window can therefore be used for the same bit error rate, which reduces the memory overhead. The present invention is applicable to Turbo decoders whose component decoders use either the log-MAP or the max-log-MAP decoding algorithm, and is therefore suitable for various communication standards.
Compared with the prior art, under the same bit error rate this scheme greatly reduces memory overhead; it is applicable to Turbo decoders whose component decoders use either the log-MAP or the max-log-MAP decoding algorithm and can be used with various communication standards; the same performance can be achieved at high code rates with a smaller sliding window; and under a parallel decoding architecture, the memory overhead for storing state metrics can be greatly reduced.
Brief description of the drawings
Fig. 1 is an explanatory diagram of the window division used during decoding;
Fig. 2 is an operational schematic of the boundary initialization method of the present invention;
Fig. 3 is a block diagram of a component decoder of a Turbo decoder using the method;
Fig. 4 is a bit error rate simulation of the method applied to a 3GPP LTE system with code block length 6144 and code rate 0.95;
Fig. 5 is a bit error rate simulation of the scheme applied to the Turbo decoder of a 3GPP LTE system.
Detailed description of the embodiments
A Turbo decoder borrows the feedback concept of electronic circuits and adopts an iterative decoding architecture: two soft-input/soft-output (SISO) component decoders exchange extrinsic information over several iterations, and after decoding ends a hard decision is made to output the decoded result.
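For orientation, the extrinsic-information exchange between the two SISO component decoders can be organized as in the sketch below. The siso1, siso2, interleave and deinterleave callables are placeholders supplied by a caller, and the loop is a generic Turbo-decoding arrangement rather than the specific architecture of the patent; a positive LLR is decided as bit 1.

```python
import numpy as np


def turbo_decode(siso1, siso2, interleave, deinterleave, lc_sys, lc_sys_pi, iters):
    """Structural sketch of the extrinsic exchange between two SISO component
    decoders (iters >= 1 is assumed). Each siso callable returns extrinsic LLRs
    only; the parity soft values are assumed to be captured inside it."""
    la1 = np.zeros_like(lc_sys)             # a priori input of decoder 1
    for _ in range(iters):
        le1 = siso1(lc_sys, la1)            # extrinsic output of decoder 1
        la2 = interleave(le1)               # becomes a priori of decoder 2
        le2 = siso2(lc_sys_pi, la2)         # extrinsic output of decoder 2
        la1 = deinterleave(le2)             # fed back for the next iteration
    app = lc_sys_pi + la2 + le2             # a posteriori LLR, interleaved order
    return (deinterleave(app) > 0).astype(int)  # hard decision after the last iteration
```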
For a trade-off between performance and complexity, the decoding algorithm generally adopts the max-log-MAP algorithm, in which the log-likelihood ratio of the information bit u_k is approximated as
L(u_k) ≈ max_{(s',s): u_k=1} [α_{k-1}(s') + γ_k(s', s) + β_k(s)] - max_{(s',s): u_k=0} [α_{k-1}(s') + γ_k(s', s) + β_k(s)]   (1)
where α_k and β_k are the forward recursion state metric and the backward recursion state metric, respectively, and γ_k(s', s) is the branch metric of the transition from state s' to state s.
The forward recursion state metric α_k and the backward recursion state metric β_k can be computed by the following recursions:
α_k(s) ≈ max(α_{k-1}(s'_0) + γ_k(s'_0, s), α_{k-1}(s'_1) + γ_k(s'_1, s))   (2)
β_k(s) ≈ max(β_{k+1}(s_0) + γ_{k+1}(s, s_0), β_{k+1}(s_1) + γ_{k+1}(s, s_1))   (3)
where s'_0, s'_1 are the two predecessor states of s and s_0, s_1 are its two successor states.
Because the encoder starts in state 0, the boundary initial value of the state metric α is α_0(0) = 0 and α_0(s) = -∞ for s ≠ 0; the encoder also terminates in state 0, so the boundary initial value of β is β_N(0) = 0 and β_N(s) = -∞ for s ≠ 0.
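The recursions (2) and (3) and the boundary initialization can be written for one trellis step roughly as follows; the predecessor/successor tables and the branch-metric array are assumed inputs, and the code is only a sketch of the standard max-log-MAP updates.

```python
import numpy as np


def alpha_step(alpha_prev, gamma, prev_states):
    """Forward update (2): alpha_k(s) = max over the two predecessors s'_0, s'_1
    of alpha_{k-1}(s') + gamma_k(s', s). gamma[s_from, s_to] holds this step's
    branch metrics; prev_states[s] lists the predecessors of state s."""
    alpha = np.full(len(prev_states), -np.inf)
    for s in range(len(prev_states)):
        for sp in prev_states[s]:
            alpha[s] = max(alpha[s], alpha_prev[sp] + gamma[sp, s])
    return alpha


def beta_step(beta_next, gamma_next, next_states):
    """Backward update (3): beta_k(s) = max over the two successors s_0, s_1
    of beta_{k+1}(s'') + gamma_{k+1}(s, s'')."""
    beta = np.full(len(next_states), -np.inf)
    for s in range(len(next_states)):
        for sn in next_states[s]:
            beta[s] = max(beta[s], beta_next[sn] + gamma_next[s, sn])
    return beta


# Boundary initialization at the code-block edges (the encoder starts and ends in
# state 0): alpha_0 = beta_N = [0, -inf, ..., -inf].
```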
In the LLR computation, the branches are divided into two groups according to the input information bit, and the branch metric is computed as
γ_k(s', s) = ½ [u_k·L_a(u_k) + x_k^s·y_k^s + x_k^p·y_k^p]   (4)
where u_k, x_k^s and x_k^p denote, respectively, the information bit that causes the state transition, the systematic bit output by the encoder, and the parity (check) bit; L_a(u_k) is the a priori information of the information bit; y_k^s is the channel soft information of the systematic bit; and y_k^p is the channel soft information of the parity bit.
During iterative decoding, extrinsic information needs to be passed between the two component decoders as a priori information. The extrinsic information is computed as
L_e(u_k) = L(u_k) - L_a(u_k) - y_k^s   (5)
The max-log-MAP algorithm is an approximation made on the basis of the log-MAP algorithm and brings a performance loss of about 0.4 dB; multiplying the extrinsic information by a scaling factor s can reduce the performance loss to about 0.1 dB.
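A sketch of the scaled extrinsic computation following L_e(u_k) = L(u_k) - L_a(u_k) - y_k^s above; the default scaling factor of 0.75 is a commonly used value for max-log-MAP and is an assumption, not a figure taken from the patent.

```python
import numpy as np


def scaled_extrinsic(app_llr, la, y_sys, scale=0.75):
    """Extrinsic information of one component decoder, L_e = L - L_a - y^s,
    multiplied by the max-log-MAP scaling factor."""
    return scale * (np.asarray(app_llr) - np.asarray(la) - np.asarray(y_sys))
```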
Meanwhile, at the receiving end the decoding algorithm generally uses a sliding-window algorithm to avoid storing the state metrics of the whole code block and to reduce memory use. To improve transmission efficiency, Turbo coding schemes generally use puncturing to delete parity bits and raise the code rate; in 3GPP LTE systems the code rate can reach 0.95. The log-likelihood ratios (LLRs) corresponding to the deleted parity bits are filled with 0, and because a large amount of useful LLR information is deleted, an obvious decoding performance loss results. The conventional remedy is to increase the training sequence length and the decoding window length to improve the performance at high code rates, but this increases memory overhead and decoding latency.
To improve performance at high code rates, a first prior scheme increases the decoding window and training window lengths to 2 to 3 times those used at low code rates. A second scheme sets both the decoding window and training window lengths to 128, with the window set to 96 in the passing mode, but it still has shortcomings. A third scheme passes the boundary initial value that the training window supplied to the decoding window during the previous iteration on to the next iteration as the boundary initial value of the adjacent training window: in the (k-1)-th iteration, window 2L~3L is the training window and provides the boundary initial value to decoding window L~2L; this boundary initial value is stored and passed to the k-th iteration as the boundary initial value of training window L~2L. To address the performance degradation of the sliding-window algorithm at high code rates caused by unreliable boundaries, the present invention proposes a solution that likewise combines a training sequence with boundary passing, but that, for the same window size, achieves a clearly better bit error rate than the above three schemes.
More specifically, for Turbo decoders whose component decoders use the log-MAP or max-log-MAP algorithm, the present invention discloses a β boundary initialization scheme for the decoding windows of a Turbo decoder that uses a sliding-window algorithm. The scheme combines the training mode with the passing mode: the β value at the starting position of a decoding window in the previous iteration is passed to the next iteration as the β boundary initial value of a preceding training window. This ensures that, as the number of iterations increases, the equivalent training length behind the β boundary grows rapidly, which improves the reliability of the β boundary initial values. Compared with the prior art, the scheme greatly reduces memory overhead for the same bit error rate; it is applicable to Turbo decoders whose component decoders use either the log-MAP or the max-log-MAP decoding algorithm, and hence to various communication standards; the same performance can be achieved at high code rates with a smaller sliding window; and under a parallel decoding architecture, the memory overhead for storing state metrics can be greatly reduced.
As a preferred embodiment, the present invention discloses a backward boundary initialization method for a highly reliable Turbo decoder suitable for high code rates, comprising the following steps:
Combining the training sequence with boundary passing between iterations, the β boundary value at the starting position of decoding window i (i ≥ 3) in the (k-1)-th iteration is stored; in the k-th iteration, the β boundary value at the starting position of window i (i ≥ 3) from the previous iteration is passed to the training window corresponding to decoding window i-2 as its β boundary initial value. The β boundary initial value of decoding window i-2 is then produced by the backward recursion over that training window.
In the first iteration, except for the last decoding window, the boundary initial value of each decoding window is produced only by its training sequence, and the β initial values of the training sequences are set to equiprobable values.
In each iteration, from the 3rd decoding window to the last decoding window, the β boundary values at the head starting positions of these decoding windows must be stored in the SMP memory.
In the second and subsequent iterations, for the training sequence of the first decoding window, its β initial value is set to the β boundary value at the head starting position of the 3rd decoding window saved in the SMP memory during the previous iteration.
The last two decoding windows do not need the boundary value passed from the previous iteration as the boundary initial value of their training sequences.
As a preferred embodiment, the present invention discloses a boundary initialization method suitable for high code rates, which mainly comprises the following steps:
First, the data is partitioned into windows. Assume the code block length is N and the window length is W; there are then ⌈N/W⌉ decoding windows, where ⌈·⌉ denotes rounding up. The length of the training sequence equals the decoding window length W.
In the first iteration, i.e. in a half-iteration of a component decoder, the forward recursion of α and the backward recursion over the training sequence are first carried out simultaneously; in the first iteration the boundary values of the backward recursion β of the training sequence are all set to equiprobable values. During the forward recursion, the α values inside the decoding window are stored in a last-in first-out (LIFO) memory. When both recursions have reached the tail position of the decoding window, the forward recursion and the training-sequence backward recursion end, and the backward recursion has produced the β boundary initial value of the decoding window. The backward recursion inside the decoding window then starts; during this recursion, the computed β values and the α values read out of the LIFO are fed to the LLR computation unit to compute the LLR of the corresponding bit. Because the LLRs are computed in reverse order within the window, they must be reordered into the original bit order. After the LLRs of all bits in the first decoding window have been computed, the next decoding window can be processed, and the above operations are repeated. Starting from the 3rd decoding window, the β value at the head starting position of each decoding window is stored in the SMP memory, to be used in the next iteration as the β boundary initial value of the associated training sequence.
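The order of operations inside one decoding window, including the LIFO reversal of the α values and of the computed LLRs, might be organized as in the following structural sketch; gammas, step_alpha, step_beta and llr_fn are caller-supplied stand-ins, not the patent's hardware units.

```python
def process_window(gammas, beta_tail_init, alpha_in, step_alpha, step_beta, llr_fn):
    """Order of operations inside one decoding window in the first iteration.
    Each step callable maps (state-metric vector, one step's branch metrics) to a
    new state-metric vector; llr_fn(alpha, beta, branch_metrics) returns one LLR."""
    alpha_lifo = []                      # LIFO memory holding this window's alpha values
    alpha = alpha_in                     # alpha carried in from the previous window
    for g in gammas:                     # forward recursion over the decoding window
        alpha_lifo.append(alpha)
        alpha = step_alpha(alpha, g)
    llrs = []
    beta = beta_tail_init                # produced by the training-window backward pass
    for g in reversed(gammas):           # backward recursion, starting at the window tail
        llrs.append(llr_fn(alpha_lifo.pop(), beta, g))
        beta = step_beta(beta, g)
    llrs.reverse()                       # LLRs were produced tail-first: restore the order
    return llrs, beta, alpha             # beta is the window-head value (stored in the SMP
                                         # memory for windows >= 3); alpha seeds the next window
```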
In the second iteration (the k-th iteration), the β value at the head starting position of window i (i ≥ 3) stored in the SMP memory during the previous iteration is passed to the training sequence corresponding to decoding window i-2 as its β boundary initial value. In this way, the decoding window i-2 of the current iteration effectively has a training sequence equivalent to three times the window length W. For the (⌈N/W⌉-1)-th window, i.e. the penultimate decoding window, its training sequence is exactly the last window, so the boundary initial value of its training sequence is the β boundary initial value of the whole trellis and no boundary value from the previous iteration is needed; for the ⌈N/W⌉-th window, i.e. the last decoding window, there is no training sequence, and no boundary value from the previous iteration is needed either. However, the β values at the head starting positions of these two windows still need to be passed to the next iteration.
The above steps are repeated until the fixed number of iterations is reached, and the decoding ends.
To make the objects, concrete solutions and advantages of the present invention clearer, the present invention is described in more detail below with reference to the accompanying drawings and a specific embodiment of Turbo decoding in a 3GPP LTE system.
Fig. 1 is an explanatory diagram of the window division used during decoding, in which the whole code block is divided into 5 windows, each of length L. The decoding window is the window containing the bits currently being decoded, and the training sequence (training window) is the window immediately following the current decoding window. Taking Fig. 1 as an example, if 0~L is the current decoding window, then L~2L is the corresponding training window. Position 0 of the decoding window is its starting position and position L its tail position. The β value at the tail L of the decoding window is called the β boundary initial value of the decoding window, and the β boundary value at the tail 2L of the training window is called the β boundary initial value of the training window. During decoding, the forward recursion in the decoding window and the backward recursion over the training sequence are carried out simultaneously, each proceeding toward the junction of the two windows; after the recursions reach the tail of the decoding window, the backward recursion inside the decoding window starts to compute β while the LLRs are computed at the same time.
Fig. 2 is a schematic of one iteration of the boundary initialization method of the present invention, mainly illustrating how the boundary is passed using the β values of the previous iteration. Two cases are distinguished: the first iteration and the subsequent iterations. In the first iteration, the backward β boundary initial value of a decoding window is produced only by its training sequence, and there is no boundary passing. In subsequent iterations, the β boundary value at the head starting position of decoding window i+2, saved during iteration k-1, is passed to the training sequence corresponding to decoding window i of iteration k as the β boundary initial value of that training sequence; the β boundary initial value of the decoding window is then produced by the backward recursion over the training sequence. Taking Fig. 2 as an example, in the k-th iteration, assume the current decoding window is 0~L; its corresponding training sequence is then L~2L, and the β boundary value at the head starting position of decoding window 2L~3L stored in the previous iteration is passed to the training sequence as its β initial value. Thus, for decoding window 0~L of the k-th iteration, its β boundary initial value is effectively produced by the recursion over 2L~4L in the previous iteration together with the recursion over L~2L in the current iteration, which is equivalent to producing the boundary initial value of the decoding window with a training sequence of length 3L.
Fig. 3 is a block diagram of a component decoder of a Turbo decoder according to the present invention. The channel soft output produced by the soft demodulation module in front of the decoder is first stored in a memory. After decoding starts, the channel soft-output memory is read and the branch metric γ is computed first. The computed γ values are sent on one path to the SMP memory used to store the boundary initial values of the decoding windows in each iteration, and on another path to the forward state metric α computation unit, whose results are stored in a last-in first-out (LIFO) memory for the LLR computation. A further path feeds the dummy-β computation unit, which performs the backward recursion training over the training sequence and passes the resulting β boundary value to the β computation unit as the boundary initial value of the decoding window. The backward state metric β computation unit then performs the backward recursion inside the decoding window; during this computation, each β value is passed to the LLR computation unit to compute the LLR of the corresponding bit. When the backward recursion reaches the head starting position of the decoding window, the β value at that position is stored in the SMP memory and passed to the next iteration. Since the storing starts from the 3rd decoding window, only ⌈N/W⌉-2 β boundary values at decoding window head starting positions need to be preserved.
Fig. 4 shows the performance simulation of the scheme applied to the Turbo decoder of a 3GPP LTE system, where the horizontal axis is the signal-to-noise ratio (SNR) and the vertical axis the bit error rate (BER). The code block length is 6144 and the code rate after puncturing is 0.95; for a fair comparison, all schemes use the max-log-MAP decoding algorithm. It can be seen that, to reach the same performance, the scheme of the present invention needs a window length of only 32, whereas the traditional scheme that uses only a training sequence needs a window length of 128 and the scheme that uses only boundary passing needs a window length of 96. The window length required by the present invention is thus 1/4 and 1/3 of that of the training-sequence scheme and the boundary-passing scheme, respectively.
Fig. 5 shows the performance simulation of the scheme applied to the Turbo decoder of a 3GPP LTE system, where the horizontal axis is the SNR and the vertical axis the BER. The code block length is 6144 and the code rate after puncturing is 0.95; for a fair comparison, all schemes use the max-log-MAP decoding algorithm. Compared with the existing scheme that combines a training sequence with boundary passing, at a window length of 32 the scheme of the present invention performs about 0.1 dB better at a bit error rate of 10^-5 than the existing hybrid scheme, with the same hardware overhead. The existing scheme needs a window length of 40 to reach the same performance as the present invention.
With the above scheme, compared with the scheme that simply uses a training sequence, the present invention only adds the memory for storing ⌈N/W⌉-2 window boundary values, and the decoder can reach excellent performance at high code rates with a window length of 32, whereas the traditional training-sequence scheme needs a window length of 128. Compared with the traditional scheme that combines a training sequence with boundary passing, the number of window boundaries stored by this scheme is the same as in the traditional scheme, i.e. the additional memory for storing window boundaries in each iteration is the same, but the sliding window can be smaller than in the traditional hybrid scheme while reaching the same performance.
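To make this memory trade-off concrete, the rough estimate below compares the α LIFO, which scales with the window length, against the SMP memory, which holds one β vector per stored window head. The 8 trellis states match the LTE constituent code, while the fixed-point metric width is a guessed assumption, not a value given by the patent.

```python
import math


def state_metric_memory(N, W, num_states=8, metric_bits=10):
    """Rough size estimate (in bits) of the two state-metric memories discussed
    above: the alpha LIFO (one window deep) and the SMP memory (one beta vector
    per stored window head, i.e. windows 3..ceil(N/W))."""
    num_windows = math.ceil(N / W)
    alpha_lifo_bits = W * num_states * metric_bits
    smp_bits = (num_windows - 2) * num_states * metric_bits
    return alpha_lifo_bits, smp_bits


# e.g. state_metric_memory(6144, 32) vs state_metric_memory(6144, 128): shrinking the
# window from 128 to 32 cuts the alpha LIFO by 4x, at the cost of a larger SMP memory.
```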
The specific embodiments described above further explain the objects, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (9)

1. A backward boundary initialization method for a Turbo decoder, characterized by comprising the following steps:
combining the training sequence with boundary passing between iterations, storing the β boundary value at the starting position of the i-th decoding window in the (k-1)-th iteration; in the k-th iteration, passing the β boundary value at the starting position of the i-th window from the previous iteration to the training window corresponding to the (i-2)-th decoding window as its β boundary initial value; and producing the β boundary initial value of the (i-2)-th decoding window by the backward recursion over the training window, wherein i is a natural number and i ≥ 3.
2. The method according to claim 1, characterized in that, in the first iteration, except for the last decoding window, the boundary initial value of each decoding window is produced only by its training sequence, and the β initial values of the training sequences are set to equiprobable values.
3. The method according to claim 1, characterized in that, in each iteration, from the 3rd decoding window to the last decoding window, the β boundary values at the head starting positions of these decoding windows are stored in an SMP memory.
4. The method according to claim 1, characterized in that, in the second and subsequent iterations, for the training sequence of the first decoding window, its β boundary initial value is set to the β boundary value at the head starting position of the 3rd decoding window saved in the SMP memory during the previous iteration.
5. The method according to claim 1, characterized in that the last two decoding windows do not need the boundary value passed from the previous iteration as the boundary initial value of their training sequences.
6. The method according to claim 1, characterized in that the Turbo decoder uses component decoders based on the log-MAP or max-log-MAP decoding algorithm.
7. A backward boundary initialization method for a Turbo decoder, characterized by comprising the following steps:
step S1: partitioning the data into windows, wherein, assuming the code block length is N and the window length is W, there are ⌈N/W⌉ decoding windows, ⌈·⌉ denotes rounding up, and N and W are positive integers; and the length of the training sequence equals the decoding window length W;
step S2: in the first iteration, first carrying out the forward recursion and the backward recursion over the training sequence simultaneously, whereby the forward recursion state metric α is computed by the forward recursion and stored in a last-in first-out (LIFO) memory, and the β boundary initial value of the decoding window is obtained from the backward recursion over the training sequence;
then starting the backward recursion inside the decoding window, and during the backward recursion feeding the obtained β boundary initial value and the α values read from the LIFO memory to a log-likelihood computation unit to compute the log-likelihood ratio of the corresponding bit; after the LLRs of all bits in the first decoding window have been computed, processing the next decoding window and repeating the above operations;
starting from the 3rd decoding window, storing the β value at the head starting position of each decoding window in an SMP memory, to be used in the next iteration as the β boundary initial value of the associated training sequence;
step S3: in the second iteration, passing the β boundary value at the head starting position of the i-th window stored in the SMP memory during the previous iteration to the training sequence corresponding to the (i-2)-th decoding window as its β boundary initial value, wherein i is a natural number and i ≥ 3;
step S4: repeating step S3 until the fixed number of iterations is reached, whereupon the decoding ends.
8. The method according to claim 7, characterized in that, in the first iteration, the boundary initial values of the backward recursion β of the training sequences in step S2 are all set to equiprobable values.
9. The method according to claim 7, characterized in that, for the (⌈N/W⌉-1)-th window, i.e. the penultimate decoding window, the boundary initial value of its training sequence is exactly the β boundary initial value of the whole trellis, and the boundary value from the previous iteration is not needed; for the ⌈N/W⌉-th window, i.e. the last decoding window, there is no training sequence, and the boundary value from the previous iteration is likewise not needed; however, the β values at the head starting positions of these two windows still need to be passed to the next iteration.
CN201611254047.6A 2016-12-29 2016-12-29 Backward boundary initialization method for a highly reliable Turbo decoder Pending CN106788899A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611254047.6A 2016-12-29 2016-12-29 Backward boundary initialization method for a highly reliable Turbo decoder

Publications (1)

Publication Number Publication Date
CN106788899A 2017-05-31

Family

ID=58953248

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611254047.6A Pending Backward boundary initialization method for a highly reliable Turbo decoder

Country Status (1)

Country Link
CN (1) CN106788899A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080104488A1 (en) * 2006-10-27 2008-05-01 Jung-Fu Cheng Sliding Window Method and Apparatus for Soft Input/Soft Output Processing
CN101807971A (en) * 2010-03-08 2010-08-18 上海华为技术有限公司 Turbo code decoding method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHEN Min et al., "Research on a Turbo decoding algorithm based on an improved sliding window" (基于改进滑动窗的Turbo译码算法研究), Communication Technology (通信技术) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113765622A (en) * 2021-08-26 2021-12-07 希诺麦田技术(深圳)有限公司 Branch measurement initialization method, device, equipment and storage medium
CN113765622B (en) * 2021-08-26 2024-01-23 希诺麦田技术(深圳)有限公司 Branch metric initializing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170531