CN100589357C - LDPC code vector decoder and decoding method based on the identity matrix and its cyclic shift matrices - Google Patents

LDPC code vector decoder and decoding method based on the identity matrix and its cyclic shift matrices

Info

Publication number
CN100589357C
CN100589357C CN200510114589A
Authority
CN
China
Prior art keywords
vector
check
node
array
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200510114589A
Other languages
Chinese (zh)
Other versions
CN1956368A
Inventor
徐俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN200510114589A priority Critical patent/CN100589357C/en
Publication of CN1956368A publication Critical patent/CN1956368A/en
Application granted granted Critical
Publication of CN100589357C publication Critical patent/CN100589357C/en

Landscapes

  • Error Detection And Correction (AREA)

Abstract

This invention relates to a vector decoding method for LDPC codes based on the identity matrix and its cyclic shift matrices, and to a corresponding device. The received data are first divided into an array R containing n_b vectors, and the initial values of the reliability vector array and of the transfer-information vector matrix are computed. Using the transfer-information vector matrix and reliability vector array obtained in the (k-1)-th iteration together with the non-(-1) element values h_ij of the base matrix, the transfer-information vector matrix and reliability vector array of the k-th iteration are computed; a hard decision then tests whether decoding has succeeded, and if not the iteration continues. All vectors are 1 x z soft-bit vectors. The device includes a base-matrix processing module, an initial-value computation module, an iteration computation module, a hard-decision check module, and a control module; every storage block stores z soft bits, the minimum operation unit is a vector operation on z soft bits, and the computation unit of each module reads and writes data directly from the associated storage blocks.

Description

LDPC code vector decoder and decoding method based on the identity matrix and its cyclic shift matrices
Technical field
The present invention relates to a decoder and decoding method used for error correction of data transmission in digital communication systems, and in particular to a decoder and decoding method for structured low-density parity-check (LDPC) codes in digital-communication error-correction technology.
Background technology
Data transmission in all digital systems, such as communication, radar, remote control and telemetry, digital computers, and the data transfer between storage systems and the internal arithmetic units of computers, can be summarized by the model shown in Fig. 1. The source encoder improves the efficiency of transmission; the channel encoder resists the noise and interference encountered during transmission by deliberately adding redundant information, so that the system can automatically correct errors and the reliability of digital transmission is guaranteed. Low-density parity-check (LDPC) codes are a class of linear block codes defined by a very sparse parity-check matrix or bipartite graph; they were first discovered by Gallager and are therefore also called Gallager codes. After decades of silence, with the development of computer hardware and related theory, MacKay and Neal rediscovered them and proved that they have performance approaching the Shannon limit. Current research shows that LDPC codes have the following features: low decoding complexity, linear-time encoding, performance approaching the Shannon limit, support for parallel decoding, and performance better than Turbo codes for long code lengths.
The LDPC code is based on a sparse parity-check matrix, and it is precisely by exploiting the sparsity of the check matrix that encoding and decoding of low complexity can be realized, which makes LDPC codes practical. The Gallager code mentioned above is a regular LDPC code, while Luby, Mitzenmacher and others generalized the Gallager code and proposed irregular LDPC codes. LDPC codes have many decoding algorithms; among them, the message-passing algorithm (Message Passing algorithm) or belief-propagation algorithm (Belief Propagation algorithm, BP algorithm) is the mainstream and fundamental algorithm for LDPC codes, and many algorithms are improvements based on it.
The message-passing decoding algorithm and the probability-domain BP algorithm:
The message-passing algorithm is a decoding algorithm that operates on a graph: during its operation, reliability information is passed back and forth between the variable nodes and the check nodes of the bipartite graph, which is why it is called the Message Passing algorithm. In the Message Passing algorithm the symbol set of the messages used during decoding is the same as the channel output symbol set, namely the set of real numbers R; when continuous message passing is adopted and the message mapping function is chosen appropriately, the algorithm is equivalent to the famous BP algorithm, i.e. the sum-product algorithm (Sum Product algorithm). Three commonly used specific forms of the BP algorithm are described first:
The probability-domain BP algorithm
Let the parity-check matrix of the code be H. The set of variable nodes participating in check m is denoted N(m) = {n : H_mn = 1}; similarly, the set of check nodes in which variable node n participates is denoted M(n) = {m : H_mn = 1}. The algorithm alternates between two parts, and the quantities q_mn and r_mn associated with the non-zero entries of the check matrix are updated one by one during the iterations. The quantity q_mn^x is the probability that the n-th bit of the transmitted codeword takes the value x ("0" or "1"), given the messages from all check nodes other than check node m. The quantity r_mn^x is the probability that check node m is satisfied when the n-th bit of the transmitted codeword is x and the other variable nodes obey the distributions {q_mn' : n' ∈ N(m)\n}. If the bipartite graph corresponding to the matrix H contains no cycles, then after a certain number of iterations the algorithm gives the exact posterior probability of each variable-node value.
The probability-domain BP algorithm comprises the following steps:
a) Initialize the variables q_mn^0 and q_mn^1: for every element (m, n) of H satisfying H_mn = 1, the q_mn^0 and q_mn^1 of the variable node are initialized to f_n^0 and f_n^1 respectively, where y_n is the channel output at time n and σ^2 is the noise variance.
for n = 0, ..., N-1
  for m ∈ M(n)
    q_{mn}^0 = f_n^0 = P(x_n = 0 \mid y_n) = \frac{1}{1 + e^{-2y_n/\sigma^2}}, \quad q_{mn}^1 = f_n^1 = P(x_n = 1 \mid y_n) = 1 - f_n^0 = \frac{1}{1 + e^{2y_n/\sigma^2}}
The pseudo-code above is a double loop: the outer loop variable is n and the inner loop variable is m.
b) Check-node update (parity node update): for every check node m and every corresponding variable node n ∈ N(m), this step computes two probability measures: first, the probability r_mn^0 that check node m is satisfied when x_n = 0 and the other variable nodes {x_n' : n' ≠ n} obey the independent distributions {q_mn'^0, q_mn'^1}; second, correspondingly, the probability r_mn^1 that check node m is satisfied when x_n = 1.
For any m and n with H(m, n) = 1, let \delta q_{mn} = q_{mn}^0 - q_{mn}^1.
for m = 0, ..., M-1
  for n ∈ N(m)
    \delta r_{mn} = \prod_{n' \in N(m) \setminus n} \delta q_{mn'}, \quad r_{mn}^0 = (1 + \delta r_{mn})/2, \quad r_{mn}^1 = 1 - r_{mn}^0 = (1 - \delta r_{mn})/2
In these formulas the backslash "\" denotes the exclusion of an element: N(m)\n is the set N(m) with the column index n removed, i.e. a set difference.
c) Variable-node update (variable node information update): this step uses the values r_mn^0 and r_mn^1 just computed to update the probabilities q_mn^0 and q_mn^1.
for n = 0, ..., N-1
  for m ∈ M(n)
    q_{mn}^0 = \alpha_{mn} f_n^0 \prod_{m' \in M(n) \setminus m} r_{m'n}^0, \quad q_{mn}^1 = \beta_{mn} f_n^1 \prod_{m' \in M(n) \setminus m} r_{m'n}^1
where α_mn and β_mn are normalization coefficients chosen so that q_mn^0 + q_mn^1 = 1; the product \prod_{m' \in M(n) \setminus m} r_{m'n}^0 is the probability that all the other check nodes of variable node n are satisfied when the variable-node value is 0, and \prod_{m' \in M(n) \setminus m} r_{m'n}^1 is the corresponding probability when the value is 1.
After any iteration, the pseudo-posterior probabilities q_n^0 and q_n^1 that variable node n takes the value 0 or 1 can be computed from the following formula:
for n = 0, ..., N-1
  q_n^0 = \alpha_n f_n^0 \prod_{m \in M(n)} r_{mn}^0, \quad q_n^1 = \beta_n f_n^1 \prod_{m \in M(n)} r_{mn}^1
d) Stopping test of the decoding iteration:
A hard decision on the pseudo-posterior probabilities q_n^0 and q_n^1 produces a tentative decoding result \hat{c} = [\hat{c}_0, \hat{c}_1, ..., \hat{c}_{N-1}], and the condition H\hat{c}^T = 0 is used to judge whether decoding has succeeded. If it has, decoding ends and the codeword is output; otherwise it is judged whether the number of iterations is still below a preset maximum: if so, steps b) and c) are repeated; if the maximum number of iterations is reached without success, a decoding failure is declared.
The log-domain BP algorithm
If the BP algorithm is transformed to the log domain, the number of multiplications is greatly reduced, which makes it suitable for practical use. A decoding message is then regarded as an estimate of an information bit of the codeword and consists of two parts, a sign and a reliability:
1. the sign of the message indicates whether the transmitted information bit is estimated to be (-) or (+);
2. the absolute value of the message, i.e. the reliability, indicates how reliable that estimate of the information bit is;
3. the value 0 in the message set represents an erasure (erasure), i.e. the probabilities of the information bit being (+1) or (-1) are equal, where (+1) and (-1) correspond to "0" and "1" respectively.
In the log-domain BP algorithm the following definitions are made:
L_{mn} = LLR(r_{mn}) = \log\frac{r_{mn}^0}{r_{mn}^1}
Z_{mn} = LLR(q_{mn}) = \log\frac{q_{mn}^0}{q_{mn}^1}
LLR_n = LLR(q_n) = \log\frac{q_n^0}{q_n^1}
where L_mn denotes the check-node-to-variable-node information (extrinsic information) sent from check node m to variable node n, Z_mn denotes the variable-node-to-check-node information sent from variable node n to check node m, and LLR_n is the log-likelihood ratio of the n-th codeword bit.
The log-domain BP algorithm comprises the following steps:
a) Initialization:
for n = 0, ..., N-1
  for m ∈ M(n)
    Z_{mn}^{(0)} = LLR_n^{(0)} = 2y_n/\sigma^2
b) Check-node update:
for m = 0, ..., M-1
  for n ∈ N(m)
    L_{mn}^{(k)} = 2\tanh^{-1}\prod_{n' \in N(m) \setminus n}\tanh\left(\frac{Z_{mn'}^{(k-1)}}{2}\right)
c) Variable-node update:
for n = 0, ..., N-1
  for m ∈ M(n)
    Z_{mn}^{(k)} = LLR_n^{(0)} + \sum_{m' \in M(n) \setminus m} L_{m'n}^{(k)}
The log-likelihood ratio of each codeword bit is:
for n = 0, ..., N-1
  LLR_n^{(k)} = LLR_n^{(0)} + \sum_{m' \in M(n)} L_{m'n}^{(k)}
d) A hard decision on the codeword log-likelihood ratios LLR(q_n) then produces a tentative decoding result \hat{c}, and H\hat{c}^T = 0 is used to judge whether decoding has succeeded. If it has, decoding ends and the codeword is output; otherwise it is judged whether the number of iterations is still below the preset maximum: if so, steps b) and c) are repeated; if the maximum is reached without success, a decoding failure is declared.
In the above formulas the superscript (k) is the decoding iteration number; for a BIAWGN channel (Binary Input Additive White Gaussian Noise), y_n is the channel output and σ^2 is the noise variance.
The simplified-form log-domain BP algorithm:
Merging steps b) and c) and eliminating Z_mn yields the following equivalent simplified-form log-domain BP algorithm.
a) Initialization:
for n = 0, ..., N-1
  LLR_n^{(0)} = 2y_n/\sigma^2
for n = 0, ..., N-1
  for m ∈ M(n)
    L_{mn}^{(0)} = 0
b) Node update:
for m = 0, ..., M-1
  for n ∈ N(m)
    L_{mn}^{(k)} = 2\tanh^{-1}\prod_{n' \in N(m) \setminus n}\tanh\left(\frac{LLR_{n'}^{(k-1)} - L_{mn'}^{(k-1)}}{2}\right)
c) Codeword log-likelihood ratios:
for n = 0, ..., N-1
  LLR_n^{(k)} = LLR_n^{(0)} + \sum_{m' \in M(n)} L_{m'n}^{(k)}
d) Decision and stopping test, with the same content as above.
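To make the simplified-form flow above concrete, the following is a minimal sketch of steps a) to d) in Python/NumPy for a generic binary parity-check matrix; the function name, the dense representation of H, and the clipping constant are illustrative assumptions rather than anything prescribed by the patent.

```python
import numpy as np

def bp_decode_log_simplified(H, y, sigma2, max_iter=50):
    """Simplified log-domain BP for a binary (M, N) parity-check matrix H with 0/1 entries."""
    M, N = H.shape
    llr0 = 2.0 * y / sigma2                       # a) initialization: LLR_n^(0) = 2*y_n/sigma^2
    L = np.zeros((M, N))                          #    L_mn^(0) = 0, used only where H[m, n] = 1
    llr = llr0.copy()
    c_hat = (llr < 0).astype(int)
    for _ in range(max_iter):
        for m in range(M):                        # b) node update for every check m
            nm = np.flatnonzero(H[m])             #    N(m): variables taking part in check m
            t = np.tanh((llr[nm] - L[m, nm]) / 2.0)
            for idx, n in enumerate(nm):
                prod = np.prod(np.delete(t, idx))           # product over N(m)\n
                L[m, n] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        llr = llr0 + L.sum(axis=0)                # c) LLR_n^(k) = LLR_n^(0) + sum over M(n)
        c_hat = (llr < 0).astype(int)             # d) hard decision (LLR = log P(0)/P(1))
        if not np.any(H.dot(c_hat) % 2):          #    parity test: H * c^T = 0
            return c_hat, True
    return c_hat, False
```

The extrinsic product over N(m)\n is written naively for clarity; a practical decoder would use the magnitude/sign split or one of the min-sum approximations mentioned later in the text.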
When the message-passing algorithm (Message Passing algorithm) or the BP algorithm is used, the main difficulty in decoder design lies in the storage and access of the sparse check matrix. For every non-zero element of a sparse matrix, an index or a pointer to it must be stored, so the required memory capacity is very large, which in turn hinders the application of LDPC codes. The problem becomes even more pronounced when different code lengths use different sparse check matrices. In addition, with the traditional decoding algorithm the decoder must expand the base matrix into a very large parity-check matrix, and storing such a large matrix is itself a problem. If the hardware structure is designed from this parity-check matrix, the number of connections is very large, the topology very complex, and a different hardware topology has to be designed for each code length. These drawbacks have seriously hindered structured LDPC codes from reaching practical application and have become a development bottleneck for this class of low-density parity-check codes.
In the IEEE 802.16e standard, the LDPC codes are all based on the identity matrix and its cyclic shift matrices. The LDPC code of each code rate has one base matrix, hereinafter called the original base matrix; the base matrices of the LDPC codes of different code lengths differ only in that the element values of the base matrix are corrected, and the result is hereinafter called the modified base matrix. For an LDPC code of a given code rate, therefore, only one very small base matrix needs to be stored. LDPC codes with this kind of structured parity-check matrix will become the mainstream design. However, for LDPC codes of this structure there is at present no effective decoding algorithm and decoder that can fully exploit their characteristics.
Summary of the invention
The technical problem to be solved by the present invention is to propose an LDPC code vector decoding method based on the identity matrix and its cyclic shift matrices that neither needs to store the parity-check matrix of the LDPC code nor needs to expand the base matrix in order to decode. The present invention also provides a device implementing this method.
To solve the above technical problem, the decoding method of the present invention follows the same flow and principles as traditional low-density parity-check decoding, but changes the concrete implementation and data structures: in all decoding operations the minimum operation unit is a vector of length z. The m x n matrix operations are thereby reduced to m_b x n_b matrix operations, and decoding can be completed with only the base matrix rather than the full parity-check matrix. The hardware topology of the decoder likewise shrinks from an m x n matrix to an m_b x n_b matrix, which greatly reduces the number of hardware connections. More importantly, when the code length varies at a given code rate, the different LDPC codes can all use a decoder with the same topology.
Based on the above idea, the invention provides an LDPC code vector decoding method based on the identity matrix and its cyclic shift matrices. The check matrix H = (P_ij)_{m_b x n_b} used corresponds uniquely to a base matrix H_b = (h_ij^b)_{m_b x n_b}, with i = 0, 1, ..., m_b - 1 and j = 0, 1, ..., n_b - 1; the iteration number is k and the expansion factor is z; Iset(j) is the row-index set of the non-(-1) elements in column j of H_b, and Jset(i) is the column-index set of the non-(-1) elements in row i of H_b. The method comprises the following steps:
(a) dividing the received data Y = [y_0, y_1, ..., y_{N-1}] input to the decoder into n_b groups, giving a received-sequence vector array R = [R_0, R_1, ..., R_{n_b-1}] whose elements are R_j = [y_{jz}, y_{jz+1}, ..., y_{(j+1)z-1}];
(b) setting k = 0, obtaining the initial value of the reliability vector array (for example the codeword log-likelihood-ratio vector array or the posterior-probability vector array) from the received-sequence vector array R, and obtaining the initial value of the transfer-information vector matrix (meaning the variable-node-to-check-node information or the check-node-to-variable-node information), each vector being a 1 x z soft-bit vector;
(c) using the transfer-information vector matrix and reliability vector array obtained in the (k-1)-th iteration and the non-(-1) element values h_ij^b of the base matrix to perform the update computation and obtain the transfer-information vector matrix and reliability vector array of the k-th iteration, the minimum operation unit of all computations being a 1 x z soft-bit vector;
(d) performing a hard decision on the reliability vector array to obtain the hard-decision vector array S = [S_0, S_1, ..., S_{n_b-1}], where each S_j is a 1 x z row vector, and then computing the parity-check vector array T = HS^T = [T_0, T_1, ..., T_{m_b-1}]^T;
(e) judging whether the vector array T is all zero; if so, decoding has succeeded, the hard decision is output, and the procedure ends; otherwise k = k + 1, and it is judged whether k is still below the maximum number of iterations: if so, return to step (c); otherwise decoding fails and the procedure ends.
Further, the above vector decoding method may also have the following feature: the method is the simplified-form log-domain vector decoding method, wherein:
in step (b), the received-data vector array R is used to compute the initial values of all non-zero vectors in the check-node-to-variable-node information vector matrix U = (u_ij)_{m_b x n_b} and in the codeword log-likelihood-ratio vector array Q = [Q_0, Q_1, ..., Q_{n_b-1}]; this step is done by the following loop: outer loop j = 0, ..., n_b - 1, inner loop i ∈ Iset(j), with the formulas
u_{ij}^{(0)} = 0, \quad Q_j^{(0)} = 2R_j/\sigma^2
where σ^2 is the noise variance;
step (c) is further divided into the following steps:
(c1) according to the check-node-to-variable-node information vector matrix U^{(k-1)} and the codeword log-likelihood-ratio vector array Q^{(k-1)} of the previous iteration, updating all non-zero vectors of the check-node-to-variable-node information vector matrix U^{(k)} of this iteration, i.e. performing the node update; this is done by the following loop: outer loop i = 0, ..., m_b - 1, inner loop j ∈ Jset(i), with the formula
u_{ij}^{(k)} = P_{ij}\, 2\tanh^{-1}\prod_{j' \in Jset(i) \setminus j}\tanh\left(\frac{P_{ij'}^{-1} Q_{j'}^{(k-1)} - P_{ij'}^{-1} u_{ij'}^{(k-1)}}{2}\right)
where Jset(i)\j denotes the set Jset(i) with the column index j removed;
(c2) according to the initial log-likelihood-ratio vector array Q^{(0)} and the check-node-to-variable-node information vector matrix U^{(k)} of this iteration, computing all non-zero vectors of the codeword log-likelihood-ratio vector array Q^{(k)} of this iteration, i.e. for every j = 0, ..., n_b - 1 computing
Q_j^{(k)} = Q_j^{(0)} + \sum_{i' \in Iset(j)} u_{i'j}^{(k)}
and in step (d), the hard decision is performed on the codeword log-likelihood-ratio vector array Q^{(k)}.
Further, the above vector decoding method may also have the following feature: the method is the general-form log-domain vector decoding method, wherein:
in step (b), the received-data vector array R is used to compute the initial values of all non-zero vectors in the variable-node-to-check-node information vector matrix V = (v_ij)_{m_b x n_b} and in the codeword log-likelihood-ratio vector array Q; this step is done by the following loop: outer loop j = 0, ..., n_b - 1, inner loop i ∈ Iset(j), with the formula
v_{ij}^{(0)} = Q_j^{(0)} = 2R_j/\sigma^2
where σ^2 is the noise variance;
step (c) is further divided into the following steps:
(c1) according to V^{(k-1)} of the previous iteration, updating all non-zero vectors of the check-node-to-variable-node information vector matrix U^{(k)} of this iteration, i.e. performing the check-node update; this is done by the following loop: outer loop i = 0, ..., m_b - 1, inner loop j ∈ Jset(i), with the formula
u_{ij}^{(k)} = P_{ij}\, 2\tanh^{-1}\prod_{j' \in Jset(i) \setminus j}\tanh\left(\frac{P_{ij'}^{-1} v_{ij'}^{(k-1)}}{2}\right)
(c2) according to the initial log-likelihood-ratio vector array Q^{(0)} and the check-node-to-variable-node information vector matrix U^{(k)} of this iteration, computing all non-zero vectors of the variable-node-to-check-node information vector matrix V^{(k)} of this iteration, i.e. performing the variable-node update; this is done by the following loop: outer loop j = 0, ..., n_b - 1, inner loop i ∈ Iset(j), with the formula
v_{ij}^{(k)} = Q_j^{(0)} + \sum_{i' \in Iset(j) \setminus i} u_{i'j}^{(k)}
while at the same time computing all non-zero vectors of the codeword log-likelihood-ratio array Q^{(k)} of this iteration, i.e. for every j = 0, ..., n_b - 1 computing
Q_j^{(k)} = Q_j^{(0)} + \sum_{i' \in Iset(j)} u_{i'j}^{(k)}
and in step (d), the hard decision is performed on the codeword log-likelihood-ratio vector array Q^{(k)}.
Further, the above vector decoding method may also have the following feature: the method is the probability-domain vector decoding method, wherein:
in step (b), the received-data array R is used to compute the initial values of all non-zero vectors in the variable-node-to-check-node information vector matrices Q^0 = (Q_ij^0) and Q^1 = (Q_ij^1), in the vector matrix ΔQ = (ΔQ_ij), and in the codeword probability vector arrays F^0 and F^1; this is done by the following loop: outer loop j = 0, ..., n_b - 1, inner loop i ∈ Iset(j), with the formulas
Q_{ij}^0 = F_j^0 = \frac{1}{1 + e^{-2R_j/\sigma^2}}, \quad Q_{ij}^1 = F_j^1 = 1 - Q_{ij}^0, \quad \Delta Q_{ij} = Q_{ij}^0 - Q_{ij}^1
step (c) is further divided into the following steps:
(c1) according to ΔQ^{(k-1)} of the previous iteration, updating all non-zero vectors of the check-node-to-variable-node information vector matrices R^{0(k)}, R^{1(k)} of this iteration, i.e. performing the check-node update; this is done by the following loop: outer loop i = 0, ..., m_b - 1, inner loop j ∈ Jset(i), with the formulas
\Delta R_{ij}^{(k)} = P_{ij}\prod_{j' \in Jset(i) \setminus j} P_{ij'}^{-1}\Delta Q_{ij'}^{(k-1)}, \quad R_{ij}^{0(k)} = (1 + \Delta R_{ij}^{(k)})/2, \quad R_{ij}^{1(k)} = 1 - R_{ij}^{0(k)} = (1 - \Delta R_{ij}^{(k)})/2
(c2) according to the initial codeword probability vector arrays F^0, F^1 and the check-node-to-variable-node information vector matrices R^{0(k)}, R^{1(k)} of this iteration, computing all non-zero vectors of the variable-node-to-check-node information vector matrices Q^{0(k)}, Q^{1(k)} of this iteration, i.e. performing the variable-node update; this is done by the following loop: outer loop j = 0, ..., n_b - 1, inner loop i ∈ Iset(j), with the formulas
Q_{ij}^{0(k)} = \alpha_{ij} F_j^0 \prod_{i' \in Iset(j) \setminus i} R_{i'j}^{0(k)}, \quad Q_{ij}^{1(k)} = \beta_{ij} F_j^1 \prod_{i' \in Iset(j) \setminus i} R_{i'j}^{1(k)}
and at the same time, according to the initial codeword probability vector arrays F^0, F^1 and the check-node-to-variable-node information vector matrices R^{0(k)}, R^{1(k)} of this iteration, computing all non-zero vectors of the pseudo-posterior probability vector arrays F^{0(k)}, F^{1(k)} that the variable-node values are 0 and 1, i.e. for every j = 0, ..., n_b - 1 computing
F_j^{0(k)} = \alpha_j F_j^0 \prod_{i' \in Iset(j)} R_{i'j}^{0(k)}, \quad F_j^{1(k)} = \beta_j F_j^1 \prod_{i' \in Iset(j)} R_{i'j}^{1(k)}
where α_ij and β_ij are normalization coefficients such that Q_ij^{0(k)} + Q_ij^{1(k)} = 1;
and in step (d), the hard decision yielding the vector array S is made by comparing the magnitudes of F^{0(k)} and F^{1(k)}.
Further, the above vector decoding method may also have the following feature: the operations on vectors comprise vector arithmetic, vector cyclic shift, and vector function evaluation. Vector arithmetic is performed element-wise on the corresponding elements of the vectors; multiplying a vector by P_ij is performed by cyclically right-shifting the vector elements by h_ij^b positions; multiplying a vector by P_ij^{-1} is performed by cyclically left-shifting the vector elements by h_ij^b positions; and a function of a vector is evaluated by applying the function to each element of the vector.
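As a concrete illustration of these vector operations, the short NumPy sketch below (the function names are illustrative) realizes multiplication by P_ij as a cyclic right shift by h_ij^b positions and multiplication by P_ij^{-1} as a cyclic left shift, so the two are mutual inverses.

```python
import numpy as np

def shift_right(v, h):
    """P_ij applied to a 1 x z vector v: cyclic right shift by h = h_ij^b positions."""
    return np.roll(v, h)

def shift_left(v, h):
    """P_ij^{-1} applied to a 1 x z vector v: cyclic left shift by h = h_ij^b positions."""
    return np.roll(v, -h)

# Example with z = 6 and h_ij^b = 2; a vector function is simply applied element-wise.
v = np.array([0.5, -1.2, 3.0, 0.1, -0.7, 2.2])
assert np.allclose(shift_left(shift_right(v, 2), 2), v)
f_of_v = np.tanh(v / 2.0)
```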
Further, the above vector decoding method may also have the following feature: the check-node-to-variable-node information vectors and the variable-node-to-check-node information vectors use a fixed-point representation; each vector comprises z soft bits, and each soft bit is quantized to 6 binary bits.
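For the 6-bit fixed-point soft bits mentioned above, a minimal quantization sketch follows; the patent fixes only the 6-bit word length, so the number of fractional bits and the saturation behaviour shown here are illustrative assumptions.

```python
import numpy as np

def quantize_soft_bits(v, word_bits=6, frac_bits=2):
    """Quantize a 1 x z vector of soft bits to signed fixed point of `word_bits` bits."""
    scale = 1 << frac_bits                          # assumed 2 fractional bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    codes = np.clip(np.round(v * scale), lo, hi)    # saturate to the signed 6-bit range
    return codes.astype(np.int8), codes / scale     # integer codes and the values they represent

codes, approx = quantize_soft_bits(np.array([0.3, -1.7, 5.25, -12.0]))
```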
Further, the above vector decoding method may also have the following feature: the check-node update of the iterative decoding is implemented with the normalized belief-propagation algorithm described above or with one of the following approximations of it: the BP-based algorithm, the APP-based algorithm, the uniformly most powerful belief-propagation algorithm, the min-sum algorithm, or the min-sum with look-up-table algorithm.
The LDPC code vector decoder based on the identity matrix and its cyclic shift matrices provided by the invention comprises a base-matrix processing module, an initial-value computation module, an iteration computation module, a hard-decision detection module, and a control module, wherein:
the base-matrix processing module comprises a base-matrix storage unit; this unit has L storage blocks, each storage block being used to store one non-(-1) element value h_ij^b of the base matrix H_b = (h_ij^b)_{m_b x n_b}, where L is the number of non-(-1) elements in the base matrix, i = 0, 1, ..., m_b - 1 and j = 0, 1, ..., n_b - 1;
the initial-value computation module is used to receive the input data Y = [y_0, y_1, ..., y_{N-1}] and buffer them in n_b storage blocks, then compute the initial value of the reliability vector array, store it in n_b storage blocks, and obtain the initial value of the transfer-information vector matrix;
the iteration computation module is used to perform the update computation with the transfer-information vector matrix and reliability vector array obtained in the previous iteration and the non-(-1) element values h_ij^b of the base matrix, obtaining the transfer-information vector matrix and reliability vector array of the current iteration;
the hard-decision detection module is used to perform a hard decision on the reliability vector array obtained by the iteration, yielding the hard-decision vector array S = [S_0, S_1, ..., S_{n_b-1}], to store it in n_b storage blocks, then to compute the parity-check vector array T = HS^T and to judge whether it is all zero;
the control module is used to control the other modules to complete the initial-value computation, the iteration computation, and the hard-decision detection; when the array T is all zero, the hard decision is output, decoding is successful, and the procedure ends; when T is not all zero, it is judged whether the number of iterations is still below the maximum: if so, the next iteration continues; if the maximum is reached, decoding fails and the procedure ends;
and all storage blocks are blocks that store z soft bits, the operations between the elements of the arrays and matrices are vector operations of size z soft bits, the computation unit of each module reads and writes data directly from the corresponding storage blocks, and every piece of transferred data is always an integer multiple of z soft bits, where z is the expansion factor.
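The storage organization described above can be summarized by the following sketch, which sizes every block from the base matrix alone; the class and field names are illustrative assumptions rather than names used by the patent.

```python
import numpy as np

class DecoderStorage:
    """Storage blocks of the vector decoder; every block holds z soft bits (or z hard bits)."""
    def __init__(self, Hb, z):
        mb, nb = Hb.shape
        nz = [(i, j) for i in range(mb) for j in range(nb) if Hb[i, j] != -1]
        self.shifts = {pos: int(Hb[pos]) for pos in nz}   # base-matrix unit: L values h_ij^b
        self.R = np.zeros((nb, z))                        # n_b received-vector blocks
        self.Q = np.zeros((nb, z))                        # n_b reliability (LLR) vector blocks
        self.U = {pos: np.zeros(z) for pos in nz}         # L check-to-variable vector blocks
        self.S = np.zeros((nb, z), dtype=np.int8)         # n_b hard-decision vector blocks
```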
Further, the above vector decoder may also have the following feature: the size of each storage block is z_max soft bits, where z_max is the expansion factor corresponding to the maximum code length of the low-density parity-check code at the given code rate.
Further, the above vector decoder may also have the following feature: fixed hardware connections are established between each computation unit and the corresponding storage blocks to implement the addressing of the data.
Further, the above vector decoder may also have the following feature: the base-matrix storage unit in the base-matrix processing module stores the element values of the original base matrix; alternatively, the base-matrix storage unit in the base-matrix processing module is a modified-base-matrix storage unit, the processing module further comprises an original-base-matrix storage unit and a base-matrix modification unit, and the computation units of the iteration computation module are also connected to the corresponding storage blocks of this modified-base-matrix storage unit in order to read data.
Further, the above vector decoder may also have the following feature: the initial-value computation module comprises:
a received-codeword vector storage unit, used to buffer the received codeword sequence Y = [y_0, y_1, ..., y_{N-1}] in the form of the received-sequence vector array R = [R_0, R_1, ..., R_{n_b-1}] in n_b storage blocks, each storage block storing one vector R_j = [y_{jz}, y_{jz+1}, ..., y_{(j+1)z-1}];
a vector initial-value computation unit, used to read the received-sequence vectors R_j and compute the initial log-likelihood-ratio vector array Q^{(0)} = [Q_0^{(0)}, ..., Q_{n_b-1}^{(0)}] with Q_j^{(0)} = 2R_j/σ^2, where σ^2 is the noise variance;
an initial log-likelihood-ratio vector storage unit comprising n_b storage blocks, which respectively store the n_b vectors Q_j of the initial log-likelihood-ratio vector array.
Further, the above vector decoder may also have the following feature: the iteration computation module comprises a check-node-to-variable-node information vector storage unit, a node-update processing array, a bidirectional buffer network consisting of a read network and a write network, and a codeword log-likelihood-ratio computation unit, wherein:
the check-node-to-variable-node information vector storage unit comprises L storage blocks, each storage block being used to store one of the L check-node-to-variable-node information vectors output by the node-update processing array, each such vector corresponding to one non-(-1) element of the base matrix;
the node-update processing array consists of m_b computation units corresponding respectively to the m_b rows of the base matrix; each computation unit in turn comprises several computation sub-units corresponding respectively to all the non-(-1) elements of that row of the base matrix, L sub-units in total; each computation sub-unit reads data through the read network from the corresponding storage blocks of the check-node-to-variable-node information vector storage unit and of the codeword log-likelihood-ratio vector storage unit, performs one node-update operation, and then writes the updated check-node-to-variable-node information vector through the write network to the corresponding storage block of the check-node-to-variable-node information vector storage unit;
in the bidirectional buffer network, for a computation sub-unit of the node-update processing array corresponding to a given non-(-1) element of the base matrix, the read network connects it to the storage blocks of the check-node-to-variable-node information vector storage unit corresponding to all the other non-(-1) elements of the same base-matrix row, and to the storage blocks of the codeword log-likelihood-ratio vector storage unit corresponding to the columns of all the other non-(-1) elements of that row;
the codeword log-likelihood-ratio computation unit consists of n_b computation sub-units; each sub-unit obtains the initial log-likelihood-ratio vector and the check-node-to-variable-node information vectors of this iteration from the corresponding storage blocks of the initial log-likelihood-ratio vector storage unit and of the check-node-to-variable-node information vector storage unit, and computes one codeword log-likelihood-ratio vector of this iteration.
Further, the above vector decoder may also have the following feature: the iteration computation module comprises a check-node-to-variable-node information vector storage unit, a variable-node-to-check-node information vector storage unit, a variable-node processing array, a check-node processing array, a bidirectional buffer network comprising read network A, write network A, read network B, and write network B, and a codeword log-likelihood-ratio computation unit, wherein:
the check-node-to-variable-node information vector storage unit comprises L storage blocks, each storing one check-node-to-variable-node information vector, each such vector corresponding to one non-(-1) element of the base matrix;
the variable-node-to-check-node information vector storage unit comprises L storage blocks, each storing one variable-node-to-check-node information vector, each such vector corresponding to one non-(-1) element of the base matrix;
the variable-node processing array consists of n_b variable-node computation units; each computation unit comprises several computation sub-units corresponding to all the non-(-1) elements of the base-matrix column corresponding to that variable node; each sub-unit reads data through read network B from the corresponding storage blocks of the check-node-to-variable-node information vector storage unit and of the initial log-likelihood-ratio vector storage unit, performs the variable-node update, and then writes the updated variable-node-to-check-node information vector through write network B to the corresponding storage block of the variable-node-to-check-node information vector storage unit;
the check-node processing array consists of m_b check-node computation units; each computation unit comprises several computation sub-units corresponding to all the non-(-1) elements of the base-matrix row corresponding to that check node; each sub-unit reads data through read network A from the corresponding storage blocks of the variable-node-to-check-node information vector storage unit, combines them with the value of the base-matrix element corresponding to that sub-unit, performs the check-node update, and then writes the updated check-node-to-variable-node information vector through write network A to the corresponding storage block of the check-node-to-variable-node information vector storage unit;
for a computation sub-unit of the check-node processing array corresponding to a given non-(-1) element of the base matrix, read network A connects it to the storage blocks of the variable-node-to-check-node information vector storage unit corresponding to all the other non-(-1) elements of the same base-matrix row, and write network A connects it to the storage block of the check-node-to-variable-node information vector storage unit corresponding to that element;
for a computation sub-unit of the variable-node processing array corresponding to a given non-(-1) element of the base matrix, read network B connects it to the storage blocks of the check-node-to-variable-node information vector storage unit corresponding to all the other non-(-1) elements of the same base-matrix column, and also to the storage block of the initial log-likelihood-ratio vector storage unit corresponding to the column containing that element, while write network B connects it to the storage block of the variable-node-to-check-node information vector storage unit corresponding to that element;
the codeword log-likelihood-ratio computation unit consists of n_b computation sub-units; each sub-unit obtains the initial log-likelihood-ratio vector and the check-node-to-variable-node information vectors of this iteration from the corresponding storage blocks of the initial log-likelihood-ratio vector storage unit and of the check-node-to-variable-node information vector storage unit, and computes one codeword log-likelihood-ratio vector of this iteration.
Further, the above vector decoder may also have the following feature: the hard-decision detection module comprises:
a codeword log-likelihood-ratio vector storage unit comprising n_b storage blocks, used to store the n_b codeword log-likelihood-ratio vectors Q_j^{(k)} obtained in each iteration;
a hard-decision detection unit, used to perform a hard decision on the codeword log-likelihood-ratio vector array Q produced by the decoding, obtaining n_b hard-decision vectors, and to judge whether the parity-check vector array T is all zero;
a hard-decision vector storage unit comprising n_b storage blocks, used to store the n_b hard-decision vectors obtained by the hard decision.
As can be seen from the above, compared with the traditional BP decoding method and decoder, the vector BP decoding method and device that the present invention proposes for the particular code structure of variable-code-length LDPC codes have the following features:
1) The parity-check matrix H of the LDPC code need not be stored or accessed, and storing the node address information of the parity-check matrix is avoided, so the memory capacity required by the decoder of the present invention is greatly reduced.
2) Bit-based M x N matrix operations are converted into m_b x n_b matrix operations based on z-bit vectors; the number of node connections of the decoding array is reduced by a factor of z, giving a simple topology.
3) For an LDPC code of a given code rate, the different code lengths share a unified topology and decoding flow, which is better suited to parallel implementation.
4) The decoding topology depends only on the base matrix H_b; it is independent of the expanded matrix H and requires no expansion.
Therefore, the vector BP decoding method provided by the invention is of great significance for LDPC codes based on the identity matrix and its cyclic shift matrices and will become the mainstream decoding method for this class of LDPC codes; it makes LDPC codes based on the identity matrix and its cyclic shift matrices practical and effectively promotes this class of LDPC codes to become the dominant form of LDPC codes today.
Description of drawings
Fig. 1 is the structure chart of a digital communication system.
Fig. 2 is the flow chart of the first embodiment of the vector BP method of the present invention.
Fig. 3 is the hardware structure diagram of the first embodiment of the decoder of the present invention.
Fig. 4 shows the connections between the check-node-to-variable-node information vector storage blocks and the two node-processing arrays in an application example of the present invention.
Fig. 5 shows the connections between the variable-node-to-check-node information vector storage blocks and the two node-processing arrays in an application example of the present invention.
Fig. 6A and Fig. 6B show the structure of the check-node processing array corresponding to the first row of the base matrix in the application example.
Fig. 7A and Fig. 7B show the structure of the variable-node processing array corresponding to the first column of the base matrix in the application example.
Fig. 8 shows the sparse-matrix storage structure with which an existing decoder stores and accesses the parity-check matrix.
Fig. 9 is the hardware structure diagram of the second embodiment of the decoder of the present invention.
Embodiment
The object of study of the present invention is low-density parity-check codes based on the identity matrix and its cyclic shift matrices, so these basic concepts are introduced first.
LDPC codes based on the identity matrix and its cyclic shift matrices, and the definition of the base matrix
Every LDPC code with a given code rate and code length has an m x n parity-check matrix H, and the encoder and decoder of the LDPC code are determined by this m x n parity-check matrix; here n is the codeword length, m is the number of check bits, and k = n - m is the number of systematic bits.
The check matrix H of an LDPC code based on the identity matrix and its cyclic shift matrices is composed of z x z block square matrices P_{i,j} of identical size. H is defined as follows:
H = \begin{bmatrix} P_{0,0} & P_{0,1} & P_{0,2} & \cdots & P_{0,n_b-2} & P_{0,n_b-1} \\ P_{1,0} & P_{1,1} & P_{1,2} & \cdots & P_{1,n_b-2} & P_{1,n_b-1} \\ P_{2,0} & P_{2,1} & P_{2,2} & \cdots & P_{2,n_b-2} & P_{2,n_b-1} \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ P_{m_b-1,0} & P_{m_b-1,1} & P_{m_b-1,2} & \cdots & P_{m_b-1,n_b-2} & P_{m_b-1,n_b-1} \end{bmatrix} = P^{H_b}    (1)
Each block P_{i,j} is either the identity matrix, a cyclic shift matrix of the identity matrix, or the zero matrix. H is obtained by expanding a base matrix H_b (base matrix) of size m_b x n_b, where n = z n_b and m = z m_b; z is called the expansion factor and is computed as the code length divided by the number of columns n_b of the base matrix, a positive integer greater than 1. H_b can be divided into two parts, H_b1 corresponding to the information bits and H_b2 corresponding to the check bits, so that H_b = [H_b1 | H_b2].
In H, the basic permutation matrix is defined as the identity matrix cyclically right-shifted by one position; every non-zero block matrix is a different power of the z x z basic permutation matrix, so all of them are cyclic shift matrices of the identity matrix (right shifts by default in this text). Each block matrix can therefore be uniquely identified by its power j: the power of the identity matrix is written 0, the power of the matrix obtained by cyclically right-shifting the identity by one position is written 1, and so on. The zero matrix is usually written as "-1". Replacing every block of H by its power yields an m_b x n_b power matrix H_b, and this H_b is defined as the base matrix of H.
For example, the matrix
H = \begin{bmatrix} 1&0&0&0&1&0&1&0&0&0&0&0 \\ 0&1&0&0&0&1&0&1&0&0&0&0 \\ 0&0&1&1&0&0&0&0&1&0&0&0 \\ 0&0&1&0&1&0&0&0&1&0&1&0 \\ 1&0&0&0&0&1&1&0&0&0&0&1 \\ 0&1&0&1&0&0&0&1&0&1&0&0 \end{bmatrix}
corresponds uniquely to the following parameter z and 2 x 4 base matrix H_b:
z = 3 and H_b = \begin{bmatrix} 0 & 1 & 0 & -1 \\ 2 & 1 & 2 & 1 \end{bmatrix}
By substituting the z x z identity matrix, its cyclic shift matrices, or the zero matrix for the elements of the base matrix, the base matrix H_b can be expanded into the parity-check matrix H.
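The expansion just described can be written down directly; the sketch below (an illustrative helper that the decoder of the invention deliberately avoids needing) builds H from H_b and z and reproduces the 6 x 12 example above.

```python
import numpy as np

def expand_base_matrix(Hb, z):
    """Expand a base matrix (entries -1 or shift values) into the binary parity-check matrix H."""
    mb, nb = Hb.shape
    H = np.zeros((mb * z, nb * z), dtype=int)
    for i in range(mb):
        for j in range(nb):
            h = Hb[i, j]
            if h >= 0:                                    # -1 marks the z x z zero matrix
                # identity cyclically right-shifted h times: row l has its 1 in column (l+h) mod z
                H[i*z:(i+1)*z, j*z:(j+1)*z] = np.roll(np.eye(z, dtype=int), h, axis=1)
    return H

Hb = np.array([[0, 1, 0, -1],
               [2, 1, 2, 1]])
H = expand_base_matrix(Hb, z=3)        # the 6 x 12 matrix of the example
```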
First embodiment of the decoding algorithm of the present invention:
The simplified-form log-domain vector BP method is now constructed and derived from the existing simplified-form log-domain BP decoding method; the derivation is as follows.
Suppose a structured LDPC code has an M x N parity-check matrix H, where M is the number of check bits, N is the number of codeword bits, and K = N - M is the number of information bits. H has the structure of formula (1), i.e. H is made up of m_b x n_b blocks, each of which is a z x z zero matrix, the z x z identity matrix, or a cyclic shift matrix of the identity. H corresponds uniquely to the base matrix H_b = (h_ij^b)_{m_b x n_b}, where h_ij^b is an element of the base matrix, i = 0, 1, ..., m_b - 1 and j = 0, 1, ..., n_b - 1.
Definition of the base-matrix row-index sets Iset(j) and column-index sets Jset(i):
the row-index set of the non-(-1) elements in column j of the base matrix is Iset(j) = {i : h_ij^b ≠ -1, 0 ≤ i ≤ m_b - 1};
the column-index set of the non-(-1) elements in row i of the base matrix is Jset(i) = {j : h_ij^b ≠ -1, 0 ≤ j ≤ n_b - 1}.
Definition of the received-sequence vector array R:
the 1 x N soft received sequence Y = [y_0, y_1, ..., y_{N-1}] input to the decoder is divided into n_b groups of 1 x z row vectors of z soft bits each, represented by the vector array R = [R_0, R_1, ..., R_{n_b-1}], where every element R_j = [y_{jz}, y_{jz+1}, ..., y_{(j+1)z-1}] for j ∈ [0, 1, ..., n_b - 1].
Definition of the codeword log-likelihood-ratio vector array Q:
the 1 x N codeword log-likelihood-ratio sequence LLR = [LLR_0, LLR_1, ..., LLR_{N-1}] of the decoder is divided into n_b groups of 1 x z row vectors, represented by the vector array Q = [Q_0, Q_1, ..., Q_{n_b-1}], where every element Q_j = [LLR_{jz}, LLR_{jz+1}, ..., LLR_{(j+1)z-1}] for j ∈ [0, 1, ..., n_b - 1], so that Q_j(l) = LLR_{jz+l} for all l ∈ [0, 1, ..., z - 1].
Definition of the function g(x, a):
define the function g(x, a) = (x + a) mod z, where a and x are arbitrary integers and z is the expansion factor.
Definition of the column-index set N(m) of the non-zero elements of a row of H:
in the parity-check matrix H, fixing i and l, it can be derived that the set formed by all non-zero elements of row iz + l is {H_{iz+l, jz+g(l, h_ij^b)} : j ∈ Jset(i)}.
In H, the set of the column indices of all non-zero elements of row iz + l is therefore
N(iz + l) = { jz + g(l, h_ij^b) : j ∈ Jset(i) }.
Correspondingly, in H, the set of the row indices of all non-zero elements of column jz + l is
M(jz + l) = { iz + g(l, -h_ij^b) : i ∈ Iset(j) }.
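The two index-set formulas can be checked numerically against the expanded example matrix; the self-contained sketch below is purely illustrative and uses g(x, a) = (x + a) mod z as defined above.

```python
import numpy as np

z = 3
Hb = np.array([[0, 1, 0, -1],
               [2, 1, 2, 1]])
H = np.zeros((2 * z, 4 * z), dtype=int)
for i in range(2):
    for j in range(4):
        if Hb[i, j] != -1:
            H[i*z:(i+1)*z, j*z:(j+1)*z] = np.roll(np.eye(z, dtype=int), Hb[i, j], axis=1)

def g(x, a):
    return (x + a) % z

i, l = 1, 2                                     # row m = i*z + l = 5 of H
N_m = sorted(j * z + g(l, Hb[i, j]) for j in range(4) if Hb[i, j] != -1)
assert N_m == sorted(np.flatnonzero(H[i * z + l]))     # N(iz+l) = {jz + g(l, h_ij^b)}

j, l = 1, 0                                     # column n = j*z + l = 3 of H
M_n = sorted(i * z + g(l, -Hb[i, j]) for i in range(2) if Hb[i, j] != -1)
assert M_n == sorted(np.flatnonzero(H[:, j * z + l]))  # M(jz+l) = {iz + g(l, -h_ij^b)}
```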
Definition of the check-node-to-variable-node information vector matrix U:
define the check-node-to-variable-node information vector matrix U = (u_ij)_{m_b x n_b}, with i = 0, ..., m_b - 1 and j = 0, ..., n_b - 1, where every element u_ij of U is a 1 x z row vector. u_ij records the z check-node-to-variable-node messages L_mn of the block P_ij of H, and the record is in column order; using the expression for M(jz + l) derived above, for every element u_ij(l) we always have u_ij(l) = L_mn with n = jz + l and m = iz + g(l, -h_ij^b). In fact, the check-node-to-variable-node information matrix is identical to the base matrix in size, shape, and the positions of its non-zero elements.
Definition of the check-node-to-variable-node information vector matrix W:
define the matrix W = (w_ij)_{m_b x n_b}, where every element w_ij of W is a 1 x z row vector defined by w_ij = P_ij^{-1} u_ij. The permutation matrix P_ij in H corresponds to the non-(-1) element h_ij^b of H_b; P_ij u_ij denotes cyclically right-shifting the 1 x z vector u_ij by h_ij^b positions, and, defining the z x z matrix P_ij^{-1} by P_ij P_ij^{-1} = I_{z x z} with I_{z x z} the identity matrix of size z x z, P_ij^{-1} u_ij denotes cyclically left-shifting the 1 x z vector u_ij by h_ij^b positions. From the definitions above and the derived expression for N(iz + l), for every element w_ij(l) we always have w_ij(l) = L_mn with m = iz + l and n = jz + g(l, h_ij^b). In terms of its physical meaning, w_ij likewise records the z check-node-to-variable-node messages L_mn of the block P_ij of H, but the record is in row order.
Definition of the vector array Λ:
let Λ = [Λ_0, Λ_1, ..., Λ_{n_b-1}], where every element of Λ is a 1 x z row vector and, for the base-matrix row i currently being processed, Λ_j = P_ij^{-1} Q_j, i.e. the vector Λ_j is the result of cyclically left-shifting the vector Q_j by h_ij^b positions. What it records is still the codeword log-likelihood-ratio sequence, but recorded in the row order of the block. Every element of Λ_j satisfies Λ_j(l) = Q_j(g(l, h_ij^b)) = LLR_{jz+g(l, h_ij^b)} for l ∈ [0, 1, ..., z - 1].
Based on the above definitions, the check-node update formula of the log-domain BP decoding algorithm is now converted to vector form.
Starting from
L_{mn}^{(k)} = 2\tanh^{-1}\prod_{n' \in N(m) \setminus n}\tanh\left(\frac{LLR_{n'}^{(k-1)} - L_{mn'}^{(k-1)}}{2}\right)
let m = iz + l, n = jz + g(l, h_ij^b) and n' = j'z + g(l, h_{ij'}^b); the check-node update formula of the log-domain BP algorithm then becomes
w_{ij}^{(k)}(l) = 2\tanh^{-1}\prod_{j' \in Jset(i) \setminus j}\tanh\left(\frac{\Lambda_{j'}^{(k-1)}(l) - w_{ij'}^{(k-1)}(l)}{2}\right)
where j' runs over Jset(i)\j.
By the definitions of w_ij and Λ_j we have w_{ij'}^{(k-1)}(l) = L_{mn'}^{(k-1)} and Λ_{j'}^{(k-1)}(l) = LLR_{n'}^{(k-1)}, so the formula above holds for every l ∈ [0, 1, ..., z - 1]; substituting w_ij = P_ij^{-1} u_ij and Λ_j = P_ij^{-1} Q_j therefore gives
u_{ij}^{(k)} = P_{ij}\, 2\tanh^{-1}\prod_{j' \in Jset(i) \setminus j}\tanh\left(\frac{P_{ij'}^{-1} Q_{j'}^{(k-1)} - P_{ij'}^{-1} u_{ij'}^{(k-1)}}{2}\right)    (2)
From the codeword log-likelihood-ratio formula
LLR_n^{(k)} = LLR_n^{(0)} + \sum_{m' \in M(n)} L_{m'n}^{(k)}
we obtain in the same way
Q_j^{(k)} = Q_j^{(0)} + \sum_{i' \in Iset(j)} u_{i'j}^{(k)}    (3)
where u_ij is the 1 x z row-vector record of the L_mn, Q_j is the 1 x z row-vector record of the LLR_n, and P_ij is a z x z block matrix of H. Analysis of formulas (2) and (3) shows that the algorithm takes the 1 x z row vector as its minimum basic operation element.
Definition of the hard-decision vector array S:
the 1 x N sequence \hat{c} = [\hat{c}_0, \hat{c}_1, ..., \hat{c}_{N-1}] obtained by a hard decision on the log-likelihood ratios LLR(q_n) is divided into n_b groups of 1 x z row vectors of z bits each, represented by the vector array S = [S_0, S_1, ..., S_{n_b-1}], where every element S_j of S is a 1 x z row vector with S_j = [\hat{c}_{jz}, \hat{c}_{jz+1}, ..., \hat{c}_{(j+1)z-1}] for all j ∈ [0, 1, ..., n_b - 1].
Definition of the parity-check vector array T:
define the vector array T = [T_0, T_1, ..., T_{m_b-1}]^T, where each T_i is a z x 1 column vector, and let T = HS^T; then T_i = \sum_{j \in Jset(i)} P_{ij} S_j^T. If T is all zero, then HS^T = 0.
The flow of the simplified-form log-domain vector BP algorithm of this embodiment is described below; the above definitions are first restated:
the check matrix used by the decoding method is H = (P_ij)_{m_b x n_b}, with corresponding base matrix H_b = (h_ij^b)_{m_b x n_b}; the iteration number is k;
the received-sequence vector array is R = [R_0, R_1, ..., R_{n_b-1}], whose elements R_j are 1 x z row vectors;
the check-node-to-variable-node information vector matrix is U = (u_ij)_{m_b x n_b}, whose elements u_ij are all 1 x z row vectors;
the codeword log-likelihood-ratio vector array is Q = [Q_0, Q_1, ..., Q_{n_b-1}], whose elements Q_j are all 1 x z row vectors;
the hard-decision vector array is S = [S_0, S_1, ..., S_{n_b-1}], whose elements S_j are all 1 x z row vectors;
the parity-check vector array is T = [T_0, T_1, ..., T_{m_b-1}]^T, whose elements T_i are z x 1 column vectors.
As shown in Fig. 2, the flow comprises the following steps:
Step 110: divide the received data Y = [y_0, y_1, ..., y_{N-1}] input to the decoder into n_b groups, so that the elements of the received-sequence vector array R are R_j = [y_{jz}, y_{jz+1}, ..., y_{(j+1)z-1}] for all j ∈ [0, 1, ..., n_b - 1].
Step 120: set k = 0 and use the received-sequence vector array R to compute the initial values of all non-zero vectors in the check-node-to-variable-node information vector matrix U and in the codeword log-likelihood-ratio vector array Q:
for j = 0, ..., n_b - 1
  for i ∈ Iset(j)
    u_{ij}^{(0)} = 0, \quad Q_j^{(0)} = 2R_j/\sigma^2
where Iset(j) is the row-index set of the non-(-1) elements in column j of H_b, and σ^2 is the noise variance.
Step 130: according to the check-node-to-variable-node information vector matrix U^{(k-1)} and the codeword log-likelihood-ratio vector array Q^{(k-1)} of the previous iteration, update all non-zero vectors of the check-node-to-variable-node information vector matrix U^{(k)} of this iteration, i.e. perform the node update:
for i = 0, ..., m_b - 1
  for j ∈ Jset(i)
    u_{ij}^{(k)} = P_{ij}\, 2\tanh^{-1}\prod_{j' \in Jset(i) \setminus j}\tanh\left(\frac{P_{ij'}^{-1} Q_{j'}^{(k-1)} - P_{ij'}^{-1} u_{ij'}^{(k-1)}}{2}\right)    (4)
where Jset(i) is the column-index set of the non-(-1) elements in row i of H_b. The update of formula (4) can be split into a magnitude step and a sign step:
|u_{ij}^{(k)}| = P_{ij}\, \varphi\left(\sum_{j' \in Jset(i) \setminus j} \varphi\left(\left|P_{ij'}^{-1} Q_{j'}^{(k-1)} - P_{ij'}^{-1} u_{ij'}^{(k-1)}\right|\right)\right)
\mathrm{sign}(u_{ij}^{(k)}) = P_{ij}\prod_{j' \in Jset(i) \setminus j} \mathrm{sign}\left(P_{ij'}^{-1} Q_{j'}^{(k-1)} - P_{ij'}^{-1} u_{ij'}^{(k-1)}\right)
where φ(x) = -log(tanh(x/2)) = log(coth(x/2)) and x is a real number greater than zero.
Step 140: according to the initial log-likelihood-ratio vector array Q^{(0)} and the check-node-to-variable-node information vector matrix U^{(k)} of this iteration, compute all non-zero vectors of the codeword log-likelihood-ratio vector array Q^{(k)} of this iteration:
for j = 0, ..., n_b - 1
  Q_j^{(k)} = Q_j^{(0)} + \sum_{i' \in Iset(j)} u_{i'j}^{(k)}    (5)
Step 150: perform a hard decision on Q^{(k)} to obtain the hard-decision vector array S, and compute the parity-check vector array T = HS^T according to T_i = \sum_{j \in Jset(i)} P_{ij} S_j^T.
Step 160: judge whether T is all zero; if so, go to step 190; otherwise go to the next step.
Step 170: set k = k + 1 and judge whether the iteration number k is still below a preset maximum K_max; if so, return to step 130; otherwise go to the next step.
Step 180: declare a decoding failure and end.
Step 190: declare decoding successful, output the hard-decision sequence, and end.
As can be seen, the minimum operation unit of the algorithm of this embodiment is the 1 x z row vector: the additions, subtractions, multiplications, and divisions of the algorithm are vector additions, subtractions, multiplications, and divisions; multiplying a vector by P_ij denotes cyclically right-shifting it by h_ij^b positions; multiplying a vector by P_ij^{-1} denotes cyclically left-shifting it by h_ij^b positions; and applying a function f to a vector yields the vector formed by applying f to each of its elements. Because all operations are vector operations, the algorithm of the present invention is called the vector BP algorithm.
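As an illustration of how steps 110 to 190 act purely on 1 x z vectors and the base matrix, a compact NumPy sketch of the reduced-form vector BP decoder is given below; the function name, the dictionary layout of U, and the clipping constant are illustrative assumptions, and the row-vector shifts are realized with np.roll as in the earlier sketches.

```python
import numpy as np

def vector_bp_decode(R, Hb, z, sigma2, max_iter=50):
    """Reduced-form vector BP (steps 110-190): every operation acts on 1 x z vectors."""
    mb, nb = Hb.shape
    Q0 = 2.0 * R / sigma2                                  # step 120: Q_j^(0) = 2 R_j / sigma^2
    Q = Q0.copy()
    U = {(i, j): np.zeros(z) for i in range(mb) for j in range(nb) if Hb[i, j] != -1}
    S = (Q < 0).astype(int)
    for _ in range(max_iter):
        for i in range(mb):                                # step 130: node update, formula (4)
            Jset = [j for j in range(nb) if Hb[i, j] != -1]
            # P_ij'^{-1} Q_j' - P_ij'^{-1} u_ij' for every j' in Jset(i), i.e. in row order
            diffs = {j: np.roll(Q[j], -Hb[i, j]) - np.roll(U[i, j], -Hb[i, j]) for j in Jset}
            tanhs = {j: np.tanh(diffs[j] / 2.0) for j in Jset}
            for j in Jset:
                prod = np.prod([tanhs[jp] for jp in Jset if jp != j], axis=0)
                w = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
                U[i, j] = np.roll(w, Hb[i, j])             # multiply by P_ij: cyclic right shift
        for j in range(nb):                                # step 140: formula (5)
            Q[j] = Q0[j] + sum(U[i, j] for i in range(mb) if Hb[i, j] != -1)
        S = (Q < 0).astype(int)                            # step 150: hard decision
        ok = True                                          # steps 150/160: is T = H*S^T all zero?
        for i in range(mb):
            t_i = np.zeros(z, dtype=int)
            for j in range(nb):
                if Hb[i, j] != -1:
                    t_i = (t_i + np.roll(S[j], -Hb[i, j])) % 2
            ok = ok and not t_i.any()
        if ok:
            return S, True                                 # step 190: success
    return S, False                                        # step 180: failure
```

Under the assumptions of the earlier sketches (R reshaped into n_b rows of z soft values, BPSK over an AWGN channel), the call S, ok = vector_bp_decode(R, Hb, z, sigma2) returns the hard-decision vector array and a success flag.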
In summary, because the whole decoding method operates on the m_b x n_b base matrix rather than performing M x N matrix operations, and the base matrix is very small, the very cumbersome sparse-matrix data structure is avoided; and because a low-density parity-check code of a given code rate based on the identity matrix usually has only one base matrix, and the positions of the non-zero elements of the base matrices obtained by modification for different code lengths are the same, the same topology can be used, which greatly reduces the complexity of the hardware implementation and greatly reduces the number of internal connections of the decoder.
Second embodiment of the decoding algorithm of the present invention
This embodiment provides the corresponding vector decoding algorithm for the general-form log-domain decoding method.
In this embodiment the check-node update and the variable-node update are carried out in two separate steps; on the basis of the first embodiment, only one additional matrix needs to be defined, a variable-node-to-check-node information vector matrix V that stores the Z_mn.
Define the variable-node-to-check-node information vector matrix V = (v_ij)_{m_b x n_b}, with i = 0, ..., m_b - 1 and j = 0, ..., n_b - 1, where every element v_ij of V is a 1 x z row vector. v_ij records the z values Z_mn of the block P_ij of H, and the record is in column order. For every element v_ij(l) we always have v_ij(l) = Z_mn with n = jz + l and m = iz + g(l, -h_ij^b), where l ∈ [0, 1, ..., z - 1].
The definitions already made are repeated: the check matrix used by the decoding method is H = (P_ij)_{m_b x n_b}, with base matrix H_b = (h_ij^b)_{m_b x n_b}; the iteration number is k; the definitions of the received-sequence vector array R, of the check-node-to-variable-node information vector matrix U = (u_ij)_{m_b x n_b}, of the codeword log-likelihood-ratio vector array Q = [Q_0, ..., Q_{n_b-1}], of the hard-decision vector array S = [S_0, ..., S_{n_b-1}], and of the parity-check vector array T = [T_0, ..., T_{m_b-1}]^T are the same as in the first embodiment and are not repeated here.
The flow of the log-domain vector decoding method of the general form comprises the following steps:
Step A: divide the received data Y = [y_0, y_1, …, y_{N−1}] input to the decoder into n_b groups, so that the elements of the receiving sequence vector array R are $R_j=[y_{jz},y_{jz+1},\dots,y_{(j+1)z-1}]$, $\forall j\in[0,1,\dots,n_b-1]$;
Step B: let k = 0, and use the received data vector array R to compute the initial values of all non-vanishing vectors in the variable-node-to-check-node information vector matrix V and the code word log-likelihood ratio vector array Q:
for j = 0, ..., n_b−1
  for i ∈ Iset(j)
$$v_{ij}^{(0)} = Q_j^{(0)} = 2R_j/\sigma^2$$
where Iset(j) is the row index set of the non-(−1) elements in column j of H_b, and σ² is the noise variance.
Step C: according to V^(k−1) of the previous iteration, update all non-vanishing vectors in the check-node-to-variable-node information vector matrix U^(k) of this iteration, realizing the check-node update:
for i = 0, ..., m_b−1
  for j ∈ Jset(i)
$$u_{ij}^{(k)} = P_{ij}\,2\tanh^{-1}\!\prod_{j'\in Jset(i)\setminus j}\tanh\!\left(\frac{P_{ij'}^{-1}v_{ij'}^{(k-1)}}{2}\right) \qquad (6)$$
where Jset(i) is the column index set of the non-(−1) elements in row i of H_b. Similarly, this step can also be completed in two sub-steps, one computing the absolute value and one computing the sign.
Step D: according to the initial log-likelihood ratio vector array Q^(0) and the check-node-to-variable-node information vector matrix U^(k) of this iteration, compute all non-vanishing vectors in the variable-node-to-check-node information vector matrix V^(k) of this iteration, realizing the variable-node update:
for j = 0, ..., n_b−1
  for i = 0, ..., m_b−1
$$v_{ij}^{(k)} = Q_j^{(0)} + \sum_{i'\in Iset(j)\setminus i} u_{i'j}^{(k)} \qquad (7)$$
At the same time, compute all non-vanishing vectors in the code word log-likelihood ratio vector array Q^(k) of this iteration:
for j = 0, ..., n_b−1
$$Q_j^{(k)} = Q_j^{(0)} + \sum_{i'\in Iset(j)} u_{i'j}^{(k)} \qquad (8)$$
Steps E to I — the subsequent hard decision, the judgment of the decoding result, and the related processing — are identical to steps 150 to 190 of the first embodiment and are not repeated here.
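To make the flow of steps A to E concrete, the following sketch runs one iteration of the general-form vector BP on a toy basis matrix. It is a hedged illustration assuming NumPy and a hypothetical 2 × 4 basis matrix with spreading factor z = 4; it is not the hardware decoder described later.

```python
import numpy as np

z = 4                                                    # spreading factor (assumed)
Hb = np.array([[0, 1, 2, -1],                            # toy basis matrix, -1 = all-zero block
               [2, 0, 1, 3]])
mb, nb = Hb.shape
sigma2 = 1.0

rng = np.random.default_rng(0)
R = rng.normal(1.0, np.sqrt(sigma2), size=(nb, z))       # received soft values, one 1 x z vector per column

Iset = [np.flatnonzero(Hb[:, j] != -1) for j in range(nb)]
Jset = [np.flatnonzero(Hb[i, :] != -1) for i in range(mb)]

# step B: initial values, v_ij^(0) = Q_j^(0) = 2 R_j / sigma^2
Q0 = 2.0 * R / sigma2
V = {(i, j): Q0[j].copy() for j in range(nb) for i in Iset[j]}
U = {}

# step C: check-node update, eq. (6)
for i in range(mb):
    for j in Jset[i]:
        prod = np.ones(z)
        for jp in Jset[i]:
            if jp != j:
                prod *= np.tanh(np.roll(V[(i, jp)], -Hb[i, jp]) / 2.0)   # P_ij'^{-1} v_ij'
        U[(i, j)] = np.roll(2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999)), Hb[i, j])  # P_ij (...)

# step D: variable-node update, eqs. (7) and (8)
Q = np.empty_like(Q0)
for j in range(nb):
    total = sum(U[(i, j)] for i in Iset[j])
    Q[j] = Q0[j] + total                                  # eq. (8)
    for i in Iset[j]:
        V[(i, j)] = Q0[j] + total - U[(i, j)]             # eq. (7): exclude the node's own u_ij

print(np.sign(Q))                                         # hard-decision metric used in step E
```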
The log-domain vector decoding algorithms of the above two embodiments use the log function and the hyperbolic tangent (tanh) function in the iterative decoding process. In practice, the approximate versions of the BP algorithm proposed by M. Fossorier et al. — the BP-based algorithm and the APP-based algorithm — can be adopted. In the paper "Reduced complexity iterative decoding of low density parity check codes based on belief propagation" (IEEE Trans. Commun., vol. 47, pp. 673-680, May 1999), Chen, Fossorier et al. proposed the UMP-BP (Uniformly Most Powerful Belief Propagation) algorithm and the Normalized-BP (normalized belief propagation) algorithm, which approximate the log and tanh operations; this reduces the complexity of iterative decoding with only a small loss of performance. In addition, the approximate algorithms disclosed in the Samsung application entitled "Apparatus and method for decoding low density parity check codes in a communication system", such as the improved normalized-BP algorithm, the min-sum algorithm and the min-sum with look-up table algorithm, can also be adopted. The core difference between the decoding method of the present invention and traditional decoding methods is that z soft bits are packed into a vector and the vector is the basic operation unit.
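As one example of such an approximation, the check-node update of equation (6) can be replaced by a min-sum style update that avoids the tanh and log functions entirely. The sketch below is an illustrative approximation under the same toy NumPy setup as above, with hypothetical names and an assumed normalization factor; it is not the specific algorithm of any of the cited references.

```python
import numpy as np

def check_update_minsum(v_msgs, shifts, alpha=0.8):
    """Min-sum style check-node update for one check node.

    v_msgs : list of 1 x z variable-to-check vectors v_ij'
    shifts : list of h_ij'^b cyclic-shift values from the basis matrix
    alpha  : normalization factor (assumed value)
    Returns the list of outgoing u_ij vectors, already rotated back by P_ij.
    """
    aligned = [np.roll(v, -h) for v, h in zip(v_msgs, shifts)]   # P_ij'^{-1} v_ij'
    out = []
    for k in range(len(aligned)):
        others = [aligned[t] for t in range(len(aligned)) if t != k]
        sign = np.prod(np.sign(others), axis=0)                  # product of signs
        mag = np.min(np.abs(others), axis=0)                     # minimum of magnitudes
        out.append(np.roll(alpha * sign * mag, shifts[k]))       # rotate back: P_ij (...)
    return out

# toy usage with z = 4
msgs = [np.array([0.7, -1.1, 2.0, 0.3]),
        np.array([1.5, 0.2, -0.4, 0.9]),
        np.array([-0.6, 0.8, 1.2, -2.0])]
print(check_update_minsum(msgs, shifts=[1, 0, 2]))
```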
Third embodiment of decoding algorithm of the present invention
The present embodiment parallels the conventional probability-domain BP algorithm: it takes its quantities q_n^0, q_n^1, q_mn^0, q_mn^1, r_mn^0 and r_mn^1 and constructs and derives the corresponding probability-domain vector BP algorithm. The concrete derivation is described below.
The following are defined identically with the first embodiment:
- the check matrix $H=\{(P_{ij})_{z\times z}\}_{m_b\times n_b}$;
- the basis matrix $H_b=\{h_{ij}^{b}\}_{m_b\times n_b}$, $\forall i\in[0,1,\dots,m_b-1]$, $\forall j\in[0,1,\dots,n_b-1]$;
- the elements of the receiving sequence vector array R, which are all 1 × z row vectors, $R_j=[y_{jz},y_{jz+1},\dots,y_{(j+1)z-1}]$;
- Iset(j), the row index set of the non-(−1) elements in column j of the basis matrix;
- Jset(i), the column index set of the non-(−1) elements in row i of the basis matrix;
- the hard-decision sequence and the hard-decision vector array $S=\{S_j\}_{1\times n_b}$, whose elements S_j are all 1 × z row vectors;
- the elements of the parity vector array $T=\{T_i\}_{m_b\times 1}$.
In addition, the following vector arrays and vector matrices need to be defined:
Define the code word probability vector arrays F^0 and F^1. The 1 × N sequence of probabilities that the code word bits of the decoder equal 0, $\{q_n^0\}_{1\times N}$, is divided into n_b groups of 1 × z row vectors and denoted $F^0=\{F_j^0\}_{1\times n_b}$. Each element of F^0 is a 1 × z row vector:
$$F_j^0=[q_{jz}^0,\,q_{jz+1}^0,\,\dots,\,q_{(j+1)z-1}^0],\quad \forall j\in[0,1,\dots,n_b-1]$$
where $F_j^0(l)=q_{jz+l}^0$, $\forall l\in[0,1,\dots,z-1]$.
Likewise, $\{q_n^1\}_{1\times N}$ is divided into n_b groups of 1 × z row vectors and denoted $F^1=\{F_j^1\}_{1\times n_b}$, so that
$$F_j^1=[q_{jz}^1,\,q_{jz+1}^1,\,\dots,\,q_{(j+1)z-1}^1],\quad \forall j\in[0,1,\dots,n_b-1]$$
Define the check-node-to-variable-node information vector matrices R^0 and R^1.
Define a matrix $R^0=\{R_{ij}^0\}_{m_b\times n_b}$, $\forall i\in[0,1,\dots,m_b-1]$, $\forall j\in[0,1,\dots,n_b-1]$. Each element $R_{ij}^0$ of R^0 is a 1 × z row vector; $R_{ij}^0$ records the z values $r_{mn}^0$ corresponding to the position of the submatrix P_ij in the hypermatrix H, and this record is in row order. The element-wise correspondence between $R_{ij}^0(l)$, $\forall l\in[0,1,\dots,z-1]$, and $r_{mn}^0$ is given in the original drawings.
Define a matrix $R^1=\{R_{ij}^1\}_{m_b\times n_b}$ in the same way: each element $R_{ij}^1$ of R^1 is a 1 × z row vector that records, in row order, the z values $r_{mn}^1$ corresponding to the position of P_ij in H, with the same element-wise correspondence for $R_{ij}^1(l)$, $\forall l\in[0,1,\dots,z-1]$.
Define the vector matrices Q^0, Q^1, ΔQ and ΔR:
Define the variable-node-to-check-node information vector matrix $Q^0=\{Q_{ij}^0\}_{m_b\times n_b}$, $\forall i\in[0,1,\dots,m_b-1]$, $\forall j\in[0,1,\dots,n_b-1]$. Each element $Q_{ij}^0$ of Q^0 is a 1 × z row vector; $Q_{ij}^0$ records, in row order, the z values $q_{mn}^0$ corresponding to the position of the submatrix P_ij in the hypermatrix H, with the element-wise correspondence for $Q_{ij}^0(l)$, $\forall l\in[0,1,\dots,z-1]$, as given in the original drawings.
Define the variable-node-to-check-node information vector matrix $Q^1=\{Q_{ij}^1\}_{m_b\times n_b}$ in the same way: each element $Q_{ij}^1$ of Q^1 is a 1 × z row vector that records, in row order, the z values $q_{mn}^1$ corresponding to the position of P_ij in H.
Define a matrix $\Delta Q=\{\Delta Q_{ij}\}_{m_b\times n_b}$. Each element $\Delta Q_{ij}$ of ΔQ is a 1 × z row vector, and for any element $\Delta Q_{ij}(l)$ we have $\Delta Q_{ij}(l)=Q_{ij}^0(l)-Q_{ij}^1(l)$, $\forall l\in[0,1,\dots,z-1]$.
Define a matrix $\Delta R=\{\Delta R_{ij}\}_{m_b\times n_b}$, $\forall i\in[0,1,\dots,m_b-1]$, $\forall j\in[0,1,\dots,n_b-1]$. Each element $\Delta R_{ij}$ of ΔR is a 1 × z row vector, and for any element $\Delta R_{ij}(l)$ we have $\Delta R_{ij}(l)=R_{ij}^0(l)-R_{ij}^1(l)$, $\forall l\in[0,1,\dots,z-1]$.
By a derivation similar to that of the log-domain algorithm, the probability-domain vector BP algorithm flow is obtained as the following steps:
Step 1: divide the received data Y = [y_0, y_1, …, y_{N−1}] input to the decoder into n_b groups of 1 × z row vectors of z soft bits, giving $R=\{R_j\}_{1\times n_b}$ with arbitrary element $R_j=[y_{jz},y_{jz+1},\dots,y_{(j+1)z-1}]$;
Step 2: using the received data array R, compute the initial values of all non-vanishing vectors in the variable-node-to-check-node information vector matrices $Q^0=\{Q_{ij}^0\}_{m_b\times n_b}$, $Q^1=\{Q_{ij}^1\}_{m_b\times n_b}$ and $\Delta Q=\{\Delta Q_{ij}\}_{m_b\times n_b}$, and in the code word probability vector arrays $F^0=\{F_j^0\}_{1\times n_b}$ and $F^1=\{F_j^1\}_{1\times n_b}$:
for j = 0, ..., n_b−1
  for i ∈ Iset(j)
$$Q_{ij}^0=F_j^0=\frac{1}{1+e^{-2R_j/\sigma^2}},\qquad Q_{ij}^1=F_j^1=1-Q_{ij}^0,\qquad \Delta Q_{ij}=Q_{ij}^0-Q_{ij}^1$$
Step 3: according to ΔQ^(k−1) of the previous iteration, update all non-vanishing vectors in the check-node-to-variable-node information vector matrices $R^{0(k)}$, $R^{1(k)}$ of this iteration, realizing the check-node update:
for i = 0, ..., m_b−1
  for j ∈ Jset(i)
$$\Delta R_{ij}^{(k)}=P_{ij}\prod_{j'\in Jset(i)\setminus j}P_{ij'}^{-1}\Delta Q_{ij'}^{(k-1)},\qquad R_{ij}^{0(k)}=\frac{1+\Delta R_{ij}^{(k)}}{2},\qquad R_{ij}^{1(k)}=1-R_{ij}^{0(k)}=\frac{1-\Delta R_{ij}^{(k)}}{2}$$
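A small sketch of this probability-domain check-node update, under the same assumptions as the earlier NumPy illustrations (made-up toy values, hypothetical names):

```python
import numpy as np

def prob_check_update(dQ_msgs, shifts):
    """Probability-domain check-node update for one check node.

    dQ_msgs : list of 1 x z vectors Delta-Q_ij' = Q^0_ij' - Q^1_ij'
    shifts  : corresponding h_ij'^b cyclic-shift values from the basis matrix
    Returns (dR, R0, R1) lists of outgoing vectors, rotated back by P_ij.
    """
    aligned = [np.roll(d, -h) for d, h in zip(dQ_msgs, shifts)]      # P_ij'^{-1} Delta-Q_ij'
    dR, R0, R1 = [], [], []
    for k in range(len(aligned)):
        prod = np.ones_like(aligned[0])
        for t, a in enumerate(aligned):
            if t != k:
                prod = prod * a
        d = np.roll(prod, shifts[k])                                  # P_ij (...)
        dR.append(d)
        R0.append((1.0 + d) / 2.0)
        R1.append((1.0 - d) / 2.0)
    return dR, R0, R1

dQ = [np.array([0.2, -0.5, 0.9, 0.1]), np.array([0.7, 0.3, -0.2, 0.4])]
print(prob_check_update(dQ, shifts=[1, 3]))
```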
Step 4: according to the initial code word probability vector arrays F^0, F^1 and the check-node-to-variable-node information vector matrices $R^{0(k)}$, $R^{1(k)}$ of this iteration, compute all non-vanishing vectors in the variable-node-to-check-node information vector matrices $Q^{0(k)}$, $Q^{1(k)}$ of this iteration, realizing the variable-node update:
for j = 0, ..., n_b−1
  for i ∈ Iset(j)
$$Q_{ij}^{0(k)}=\alpha_{ij}F_j^0\prod_{i'\in Iset(j)\setminus i}R_{i'j}^{0(k)},\qquad Q_{ij}^{1(k)}=\beta_{ij}F_j^1\prod_{i'\in Iset(j)\setminus i}R_{i'j}^{1(k)}$$
Step 5: according to the initial code word probability vector arrays F^0, F^1 and the check-node-to-variable-node information vector matrices $R^{0(k)}$, $R^{1(k)}$ of this iteration, compute all non-vanishing vectors in the pseudo posterior probability vector arrays $F^{0(k)}$, $F^{1(k)}$ for the variable node values 0 and 1:
for j = 0, ..., n_b−1
$$F_j^{0(k)}=\alpha_j F_j^0\prod_{i'\in Iset(j)}R_{i'j}^{0(k)},\qquad F_j^{1(k)}=\beta_j F_j^1\prod_{i'\in Iset(j)}R_{i'j}^{1(k)}$$
Step 6: make a hard decision according to the magnitudes of $F^{0(k)}$ and $F^{1(k)}$ to obtain the vector array S, and judge, according to T = HS^T, whether T is all zero. If so, decoding is successful, the hard decision is output and the flow ends; otherwise, continue by judging whether the iteration count has reached a predefined maximum: if it is still smaller than the maximum, return to step 3, otherwise declare a decoding failure and finish.
The minimum arithmetic unit of the decoding method of the present embodiment is also a 1 × z vector, and the vector arithmetic rules are the same as in the previous two embodiments. Because all operations are based on vector operations, the algorithm adopted by the decoding method of the present embodiment is called the probability-domain vector BP algorithm.
In summary, the calculation principle and processing flow of the vector decoding method of the present invention are identical with those of the traditional algorithms; the difference is that in the implementation z soft-bit data are always packed into a vector, and decoding is always realized on the basis of the m_b × n_b basis matrix H_b — a matrix operation of size m_b × n_b — without needing the parity check matrix H. The new matrix operations always take a vector of z soft bits (such as the codeword vectors, the variable-node-to-check-node information vectors and the check-node-to-variable-node information vectors) as the basic processing unit, and the possible operations comprise vector addition, subtraction, multiplication and division, vector cyclic shifts, and functions of vectors. The topology of a decoder designed by the vector decoding method depends only on the basis matrix and has no direct relation to the parity check matrix.
First embodiment of the decoding device of the present invention
An LDPC decoder designed according to the decoding method proposed by the present invention has a topological structure that is related only to the basis matrix and is independent of the parity check matrix, so it is particularly suitable for structured LDPC codes of variable code length.
The present embodiment is a parallel vector decoder for realizing the vector BP algorithm, designed for the log-domain vector BP algorithm of the general form of the second embodiment; its hardware structure is shown in Figure 3. The parallel vector decoder mainly consists of a control part (control unit), an arithmetic processing part, a storage part and a bidirectional buffer network part. Its most important characteristic is that the minimum unit of transmission, storage and calculation of all data is a vector of size z: all memory cells are composed of memory blocks that can store z soft bits (each soft bit usually requires a 6-bit fixed-point representation), the minimum arithmetic element is also a vector of z soft bits, and the data transmitted each time through the read/write networks are always an integer multiple of z soft bits.
The memory modules comprise an original basis matrix memory cell (Hb_MEM), a revised basis matrix memory cell (Hbz_MEM), a received codeword vector memory cell (IN_MEM), an initial log-likelihood ratio vector memory cell, a hard-decision vector memory cell (OUT_MEM), a code word log-likelihood ratio vector memory cell, a variable-node-to-check-node information vector memory cell (VNOD_MEM) and a check-node-to-variable-node information vector memory cell (CNOD_MEM). Wherein:
The original basis matrix memory cell comprises a plurality of memory blocks respectively used for storing the non-(−1) elements of the original basis matrix. Below, a memory block corresponding to a non-(−1) element of the original basis matrix (or of the revised basis matrix) will be called a node; each memory block occupies 8 bits.
The revised basis matrix memory cell also comprises a plurality of memory blocks, respectively used for storing the non-(−1) elements of the basis matrix revised by the basis matrix correction unit; these element values participate in the check-node update operations. In the formulas, multiplying a vector of length z by P_ij or P_ij^{-1} means cyclically shifting that vector to the right or to the left by h_ij^b positions, so the cyclic shift operations in the check-node processing array depend on the element values of the basis matrix. The correction algorithm may adopt modulo (mod), rounding down (scale + floor) or rounding off (scale + round), etc.
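For illustration, a correction step of this kind might look like the following sketch; the exact correction formula used by a given standard or implementation may differ, and the scaling by z/z_max shown here is an assumption.

```python
import math

def correct_shift(h_ij, z, z_max, method="mod"):
    """Correct one basis-matrix shift value h_ij^b for spreading factor z.

    h_ij  : original shift value (>= 0); -1 (all-zero block) is passed through
    z     : spreading factor of the target code length
    z_max : spreading factor of the maximum code length
    """
    if h_ij < 0:
        return h_ij                       # -1 marks an all-zero z x z block
    if method == "mod":
        return h_ij % z                   # modulo correction
    if method == "scale+floor":
        return math.floor(h_ij * z / z_max)
    if method == "scale+round":
        return round(h_ij * z / z_max)
    raise ValueError("unknown correction method")

print([correct_shift(h, z=24, z_max=96, method=m)
       for h in (0, 37, 95, -1)
       for m in ("mod", "scale+floor", "scale+round")])
```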
The received codeword vector memory cell is used to buffer the received codeword sequence and output it to the vector initial value calculation unit; it has n_b memory blocks, each storing one row vector of size z.
The initial log-likelihood ratio vector memory cell is used to store the n_b initial log-likelihood ratio vectors calculated by the vector initial value calculation unit, for use by the variable-node processing units and the code word log-likelihood calculation unit.
The code word log-likelihood ratio vector memory cell is used to store the n_b code word log-likelihood ratio vectors output by the code word log-likelihood calculation unit after each iteration.
The hard-decision vector memory cell is used to store the n_b hard-decision vectors obtained by the hard-decision detection unit after each iteration.
The check-node-to-variable-node information vector memory cell comprises L memory blocks, where L is the number of non-(−1) elements in the basis matrix. Each memory block is used to store one of the L check-node-to-variable-node information vectors output by the check-node processing array and delivered from a check node to a variable node; each check-node-to-variable-node information vector corresponds to one non-(−1) element of the basis matrix. Check-node-to-variable-node information vectors generally use a fixed-point representation: each vector comprises z soft bits, and each soft bit is fixed-pointed to 6 binary bits, with 1 bit representing the sign and 5 bits representing the absolute value.
The variable-node-to-check-node information vector memory cell comprises L memory blocks; each memory block is used to store one of the L variable-node-to-check-node information vectors output by the variable-node processing array and delivered from a variable node to a check node. Each variable-node-to-check-node information vector corresponds to one non-(−1) element of the basis matrix and also uses a fixed-point representation.
The arithmetic processing modules comprise a variable-node processing array (VNUs), a check-node processing array (CNUs), a vector initial value calculation unit, a code word log-likelihood calculation unit, a basis matrix correction unit (Hb_Fix) and a hard-decision detection unit (HDC). Wherein:
The variable-node processing array consists of n_b variable-node calculation units VNU_j. Each calculation unit consists of several calculation subunits corresponding to all the non-(−1) elements in the basis matrix column of this variable node. Each subunit reads data through read network B from the corresponding memory blocks of the check-node-to-variable-node information vector memory cell and of the initial log-likelihood ratio vector memory cell, completes the variable-node update operation (see formula (7)), and then writes the updated variable-node-to-check-node information vector through write network B into the corresponding memory block of the variable-node-to-check-node information vector memory cell.
The check-node processing array consists of m_b check-node calculation units CNU_i. Each calculation unit consists of several calculation subunits corresponding to all the non-(−1) elements in the basis matrix row of this check node. Each subunit reads the variable-node-to-check-node information vectors through read network A from the corresponding memory blocks of the variable-node-to-check-node information vector memory cell and, combined with the value of the basis matrix element corresponding to this calculation subunit, completes the check-node update operation (see formula (6)); it then writes the updated check-node-to-variable-node information vector through write network A into the corresponding memory block of the check-node-to-variable-node information vector memory cell.
The vector initial value calculation unit is used to calculate, from the received codeword vectors and the noise variance, the n_b initial log-likelihood ratio vectors and write them into the initial log-likelihood ratio vector memory cell; it also calculates the initial values of the L variable-node-to-check-node information vectors and writes them into the variable-node-to-check-node information vector memory cell.
The basis matrix correction unit (Hb_Fix) is used to correct the basis matrix according to the different code lengths and to store the corrected basis matrix into the revised basis matrix memory cell.
The code word log-likelihood calculation unit consists of n_b calculation subunits; each subunit obtains the initial log-likelihood ratio vector and the check-node-to-variable-node information vectors of this iteration from the corresponding memory blocks of the initial log-likelihood ratio vector memory cell and of the check-node-to-variable-node information vector memory cell, and calculates one code word log-likelihood ratio vector of this iteration.
The hard-decision detection unit (HDC) is used to make a hard decision on the code word log-likelihood ratio vectors produced by the decoding, to store the resulting hard-decision vectors into the hard-decision vector memory cell, and to judge whether the parity vector array T is all zero; if it is all zero, decoding is successful.
The operations carried out in each calculation unit can be divided into inter-vector operations and intra-vector operations. The basic processing unit of an inter-vector operation is a vector; an intra-vector operation refers to the internal processing of each arithmetic operation of each calculation unit or subunit, whose basic processing unit is a bit and which generally processes z soft bits. The vector operations comprise vector arithmetic, vector cyclic shifts and functions of vectors. Vector arithmetic can be completed by element-wise arithmetic on two 1 × z_max registers, a vector cyclic shift by cyclically shifting a 1 × z_max register, and a function of a vector by applying the function to each element of a 1 × z_max register, where z_max is the spreading factor corresponding to the maximum code length of the low-density parity-check code of the given code rate. Designing with z_max as the vector size meets the decoding needs of any code length without changing the topological structure of the decoder. Vector operations are easy to implement, and the concrete arithmetic logic should be determined according to the chosen implementation method; for example, the log function and the hyperbolic tangent function can be realized in hardware with various kinds of logic.
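As a software model of this design choice, every working register can be allocated at the maximum vector length z_max while only its first z entries are operated on; the sketch below is an assumed illustration of that idea in Python, not the hardware arithmetic logic.

```python
import numpy as np

Z_MAX = 96          # spreading factor of the maximum code length (assumed value)

def make_register(values, z):
    """Place a 1 x z vector into a fixed 1 x z_max register; only the first z entries are valid."""
    reg = np.zeros(Z_MAX)
    reg[:z] = values
    return reg

def reg_add(a, b, z):
    out = np.zeros(Z_MAX)
    out[:z] = a[:z] + b[:z]              # element-wise arithmetic on the valid part only
    return out

def reg_cyclic_shift(a, h, z):
    out = np.zeros(Z_MAX)
    out[:z] = np.roll(a[:z], h)          # cyclic shift acts only on the effective length z
    return out

z = 8                                    # spreading factor of the current code length
a = make_register(np.arange(z), z)
b = make_register(np.ones(z), z)
print(reg_cyclic_shift(reg_add(a, b, z), 3, z)[:z])
```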
The bidirectional buffer network part comprises buffer networks A and B. Network A is further divided into a read network A and a write network A, and network B into a read network B and a write network B. Read network A provides the read addresses used when the check-node processing array reads variable-node-to-check-node information vectors from the variable-node-to-check-node information vector memory cell; write network A provides the write addresses used when the check-node processing array writes check-node-to-variable-node information vectors into the check-node-to-variable-node information vector memory cell; read network B provides the read addresses used when the variable-node processing array reads check-node-to-variable-node information vectors from the check-node-to-variable-node information vector memory cell; and write network B provides the write addresses used when the variable-node processing array writes variable-node-to-check-node information vectors into the variable-node-to-check-node information vector memory cell.
From the above it can be seen that the L memory blocks of the variable-node-to-check-node information vector memory cell, the L memory blocks of the check-node-to-variable-node information vector memory cell, the L calculation subunits in the n_b variable-node calculation units and the L calculation subunits in the m_b check-node calculation units each correspond to one non-(−1) element of the basis matrix. From the variable-node and check-node update formulas of the second embodiment of the decoding method according to the present invention, the following conclusions can be drawn:
For the calculation subunit in the check-node processing array corresponding to a certain non-(−1) element of the basis matrix, read network A couples it to the memory blocks, in the variable-node-to-check-node information vector memory cell, of all the other non-(−1) elements in the same basis matrix row as this element, excluding this element itself; write network A couples it to the memory block of this element in the check-node-to-variable-node information vector memory cell. In addition, read network A also connects the memory block of this element in the revised basis matrix memory cell to this calculation subunit.
For the calculation subunit in the variable-node processing array corresponding to a certain non-(−1) element of the basis matrix, read network B couples it to the memory blocks, in the check-node-to-variable-node information vector memory cell, of all the other non-(−1) elements in the same basis matrix column as this element, excluding this element itself, and also couples to it the memory block of the initial log-likelihood ratio vector corresponding to the basis matrix column in which this element lies; write network B couples it to the memory block of this element in the variable-node-to-check-node information vector memory cell.
In addition, the following addressing relation exists between the L memory blocks of the check-node-to-variable-node information vector memory cell and the n_b calculation subunits of the code word log-likelihood calculation unit: the code word log-likelihood calculation subunit of a certain column of the basis matrix is connected to the memory blocks, in the check-node-to-variable-node information vector memory cell, corresponding to all the non-(−1) elements of that column.
As can be seen, the above correspondences are only related to the positions of the non-(−1) elements in the basis matrix and are very simple. The buffer networks establish the connection relations between the arithmetic units and the memory cells; these can be built as fixed hardware connections or as variable addressing. For the addressing relations in the figure, the present embodiment uses a programmable array such as an FPGA to generate the read/write networks by directly connecting each calculation subunit with its corresponding memory blocks according to the above correspondences. The above addressing relations can of course also be realized by programming in a DSP, in which case the addressing relations between the above memory blocks and calculation subunits are established at run time according to the positions of the non-(−1) elements of the basis matrix; since the memory blocks and corresponding vectors involved are then very few, they can be accessed directly, so there is no need to store indices indicating them or pointers pointing to them. The above addressing relations are explained more intuitively in the application example below.
The traditional decoding algorithm is based on the parity check matrix and needs a sparse-matrix storage structure as shown in Figure 8, which is a two-dimensional doubly linked list. Besides the soft-bit information needed for decoding, it also needs to store the address information for accessing the nodes, that is, the pointers to the neighbouring nodes above, below, to the left and to the right, and such address information is generally 32 bits. Therefore, for each node, not only the 2 soft bits of decoding data but also 4 address pointers must be stored. The present invention avoids storing these 4 address pointers, so the storage space is reduced by at least 2/3.
The control module (control unit) is mainly used to control and coordinate each unit to complete the following decoding flow:
First step: initialization.
When the input data are ready, the decoder reads in the soft bits of the codeword (i.e. the received codeword sequence) from the I/O port in each clock cycle and saves them into the received codeword vector memory cell. After the whole block of data has been stored, the vector initial value calculation unit calculates the initial values of the log-likelihood ratio vectors from the received codeword vectors it reads in and writes them into the initial log-likelihood ratio vector memory cell; it also calculates the initial values of the variable-node-to-check-node information vectors and writes them into the variable-node-to-check-node information vector memory cell.
Second step: iterative decoding, realized by the following two sub-steps.
In the first sub-step, the check-node processing array of the decoder carries out the check-node update calculations and completes the horizontal-direction iterative decoding. In each clock cycle, a variable-node-to-check-node information vector is read from each memory block of the variable-node-to-check-node information vector memory cell and delivered to the corresponding check-node calculation subunit to complete the check-node update operation; the resulting check-node-to-variable-node information vector is then written into the corresponding memory block of the check-node-to-variable-node information vector memory cell.
In the second sub-step, the variable-node processing array of the decoder carries out the variable-node update calculations and completes the vertical-direction iterative decoding. In each clock cycle, an initial log-likelihood ratio vector or a check-node-to-variable-node information vector is read from the initial log-likelihood ratio vector memory cell and from each memory block of the check-node-to-variable-node information vector memory cell and delivered to the corresponding variable-node calculation subunit to complete the variable-node update operation; the resulting variable-node-to-check-node information vector is then written into the corresponding memory block of the variable-node-to-check-node information vector memory cell.
At the same time, in each clock cycle, an initial log-likelihood ratio vector and a check-node-to-variable-node information vector are read from the initial log-likelihood ratio vector memory cell and from each memory block of the check-node-to-variable-node information vector memory cell and delivered to the corresponding code word log-likelihood calculation subunit, which calculates the code word log-likelihood ratio vector of this iteration and then writes it into the corresponding memory block of the code word log-likelihood ratio vector memory cell.
Third step: decoding detection and output.
The hard-decision detection unit (HDC) makes a hard decision on the stored code word log-likelihood ratio vectors, stores the resulting hard-decision sequence, and checks the hard-decision result. If it is correct, decoding ends and the hard-decision sequence is output; if it is wrong, the unit further judges whether the maximum number of iterations has been reached: if so, decoding fails and the flow ends, otherwise it returns to the second step to continue iterative decoding.
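The control flow above can be summarized by the following pseudo-Python sketch; the routine names (init_llr, check_update_all, variable_update_all, hard_decision_check) are hypothetical placeholders for the corresponding hardware units, not functions defined by the patent.

```python
def decode(received, sigma2, k_max, init_llr, check_update_all,
           variable_update_all, hard_decision_check):
    """Skeleton of the three-step control flow: initialize, iterate, detect/output.

    The four callables stand in for the vector initial value unit, the
    check-node processing array, the variable-node processing array plus
    codeword-LLR unit, and the hard-decision detection unit.
    """
    # first step: initialization
    Q0, V = init_llr(received, sigma2)

    # second step: iterative decoding
    for k in range(1, k_max + 1):
        U = check_update_all(V)                     # horizontal (check-node) sub-step
        V, Q = variable_update_all(Q0, U)           # vertical (variable-node) sub-step + codeword LLRs

        # third step: decoding detection and output
        ok, hard_bits = hard_decision_check(Q)
        if ok:
            return hard_bits                        # successful decoding
    return None                                     # decoding failure after k_max iterations
```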
Below, the decoder of the present embodiment that adopts the log-domain vector BP algorithm is described with a fairly simple application example.
Suppose the basis matrix is the example enumerated earlier in this text (the 2 × 4 basis matrix shown in the original drawing, whose non-(−1) elements lie in the first three columns of the first row and in all four columns of the second row).
Then the overall structure of its decoder is as shown in Figure 3: the number of variable nodes is n_b = 4, the number of check nodes is m_b = 2, and there are 7 non-(−1) elements in the basis matrix. Suppose the spreading factor z = 2, so the basic unit of all information storage and operations in this application example is a vector of 1 × 2 soft bits.
Correspondingly, the number of variable-node calculation units is also 4: the variable-node calculation units corresponding to columns 1, 2 and 3 each comprise 2 calculation subunits, and the variable-node calculation unit corresponding to column 4 comprises 1 calculation subunit. The number of check-node calculation units is 2, and the check-node calculation units corresponding to rows 1 and 2 comprise 3 and 4 calculation subunits respectively.
In the application example, the check-node-to-variable-node information vector memory cell and the variable-node-to-check-node information vector memory cell each have 7 memory blocks, CNOD_MEM_ij and VNOD_MEM_ij, respectively used to store the check-node-to-variable-node information vectors and variable-node-to-check-node information vectors corresponding to the 7 basis matrix nodes. The receiving sequence vector memory cell, the hard-decision vector memory cell, the initial log-likelihood ratio vector memory cell and the code word log-likelihood ratio vector memory cell each have 4 memory blocks, respectively storing the vector information corresponding to the n_b columns of the basis matrix.
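To make the correspondence between basis-matrix nodes and memory blocks concrete, the sketch below expands a hypothetical 2 × 4 basis matrix with the same non-(−1) pattern into its z = 2 parity check matrix and lists the 7 node positions. The shift values used here are assumptions; the actual values of the patent's example are given in the original drawing.

```python
import numpy as np

z = 2
Hb = np.array([[0, 1, 0, -1],        # assumed shift values; -1 means a 2 x 2 all-zero block
               [1, 0, 1, 0]])
mb, nb = Hb.shape

H = np.zeros((mb * z, nb * z), dtype=int)
for i in range(mb):
    for j in range(nb):
        if Hb[i, j] >= 0:
            P = np.roll(np.eye(z, dtype=int), Hb[i, j], axis=1)   # identity cyclically shifted right
            H[i*z:(i+1)*z, j*z:(j+1)*z] = P

nodes = [(i, j) for i in range(mb) for j in range(nb) if Hb[i, j] >= 0]
print("7 nodes -> memory blocks CNOD_MEM_ij / VNOD_MEM_ij:", nodes)
print(H)
```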
The realization of its decoding method is introduced below.
After the soft-decision bits are received, the initial values of the log-likelihood ratio vector array Q and of the variable-node-to-check-node information vector matrix V are calculated first, giving Q_0^(0), Q_1^(0), Q_2^(0), Q_3^(0) and the initial values v_00^(0), v_01^(0), v_02^(0), v_10^(0), v_11^(0), v_12^(0), v_13^(0) of the 7 elements of the matrix V, whose size, non-zero element positions and information correspond to the basis matrix (there is no v_03^(0));
Then the check-node processing array updates the check-node-to-variable-node information matrix U^(k) of this iteration by the formula:
for i = 0, ..., m_b−1
  for j ∈ Jset(i)
$$u_{ij}^{(k)} = P_{ij}\,2\tanh^{-1}\!\prod_{j'\in Jset(i)\setminus j}\tanh\!\left(\frac{P_{ij'}^{-1}v_{ij'}^{(k-1)}}{2}\right)$$
For example, the first check node (corresponding to the first row) involves the operations of 3 nodes; for the first iteration:
$$u_{00}^{(1)} = P_{00}\,2\tanh^{-1}\!\left\{\tanh\frac{P_{01}^{-1}v_{01}^{(0)}}{2}\times\tanh\frac{P_{02}^{-1}v_{02}^{(0)}}{2}\right\}$$
The check-node update formula is divided into two processes, one computing the absolute value and one computing the sign, as follows:
$$\left|u_{ij}^{(k)}\right| = P_{ij}\,\phi\!\left(\sum_{j'\in Jset(i)\setminus j}\phi\!\left(P_{ij'}^{-1}v_{ij'}^{(k-1)}\right)\right)$$
$$\mathrm{sign}\!\left(u_{ij}^{(k)}\right) = -P_{ij}\prod_{j'\in Jset(i)\setminus j}\mathrm{sign}\!\left(P_{ij'}^{-1}v_{ij'}^{(k-1)}\right)$$
For the absolute-value operation, we have:
$$\left|u_{00}^{(k)}\right| = P_{00}\,\phi\!\left(\phi\!\left(P_{01}^{-1}v_{01}^{(k-1)}\right)+\phi\!\left(P_{02}^{-1}v_{02}^{(k-1)}\right)\right)$$
$$\left|u_{01}^{(k)}\right| = P_{01}\,\phi\!\left(\phi\!\left(P_{00}^{-1}v_{00}^{(k-1)}\right)+\phi\!\left(P_{02}^{-1}v_{02}^{(k-1)}\right)\right)$$
$$\left|u_{02}^{(k)}\right| = P_{02}\,\phi\!\left(\phi\!\left(P_{00}^{-1}v_{00}^{(k-1)}\right)+\phi\!\left(P_{01}^{-1}v_{01}^{(k-1)}\right)\right)$$
where φ(x) = −log(tanh(x/2)) = log(coth(x/2)) and x is a real number greater than 0.
The sign operation can be realized with AND gates. Here every quantity is a vector consisting of z soft bits; after the absolute value and the sign are separated, the sign vector is represented with z × 1 binary bits and the absolute-value vector with z × 5 binary bits. After each soft bit is fixed-pointed, the sign is represented with 1 binary bit and the absolute value with 5 binary bits.
Expressed as functions, these operations take the forms shown in the original drawings.
Therefore, the first check-node calculation unit CNU_1 comprises 3 calculation subunits operating in parallel, respectively used to calculate the check-node-to-variable-node information vectors u_00, u_01 and u_02; correspondingly, the second check-node calculation unit CNU_2 comprises 4 calculation subunits, respectively used to calculate the check-node-to-variable-node information vectors u_10, u_11, u_12 and u_13. The whole check-node processing array has 7 calculation subunits CNU_ij, corresponding to the 7 nodes of the basis matrix.
Please refer to Fig. 4 and Fig. 5 together, which show the connection relations — that is, the addressing relations — between the check-node processing array and the check-node-to-variable-node and variable-node-to-check-node information vector memory cells. As can be seen from Fig. 4, each check-node calculation unit reads data from the memory blocks, in the variable-node-to-check-node information vector memory cell, corresponding to the non-(−1) elements of the corresponding basis matrix row. Although the calculation subunits inside the check-node calculation units are not shown in the figure, it can be seen from the check-node update formula that each calculation subunit reads data from the memory blocks corresponding to the other non-(−1) elements of that row, excluding the element of this subunit; for example, calculation subunit CNU_00 fetches data from memory blocks VNOD_MEM_01 and VNOD_MEM_02.
As can be seen from Fig. 5, each check-node calculation unit outputs to the memory blocks, in the check-node-to-variable-node information vector memory cell, corresponding to the non-(−1) elements of the corresponding basis matrix row. Although the calculation subunits are not shown in the figure, it can be seen from the check-node update formula that subunits and memory blocks are associated and connected one-to-one through the non-(−1) element of the basis matrix corresponding to each; that is, calculation subunit CNU_ij outputs to memory block CNOD_MEM_ij.
Fig. 6A and Fig. 6B show, for this application example, the structure of the check-node processing unit corresponding to the first row of the basis matrix. In Fig. 6A, CLS is the abbreviation of circular left shift; CLS h_b(ij) denotes cyclically left-shifting a vector of z soft bits by h_b(ij) positions, where h_b(ij) is the element in row i and column j of the basis matrix H_b and can be read from the corresponding memory block of the basis matrix. Likewise, CRS h_b(ij) denotes cyclically right-shifting a vector of z soft bits by h_b(ij) positions. The module LUT is a look-up table, mainly used to realize the function φ(x); a 3-bit piecewise-linear approximation (8-level quantization) can be adopted, and non-uniform quantization can be used at the same time to lower the quantization error — this is decided by the specific implementation algorithm adopted. All the above operations take a vector of z soft bits as the basic unit, and the LUT realizes the table look-up operation on each element of the vector. Similarly, the check-node calculation unit structure corresponding to the second row of the basis matrix is analogous; these two check-node processing units together constitute the check-node processing array (CNUs) of this decoder. Fig. 6B shows that the check-node processing unit corresponding to the first row of the basis matrix realizes the sign operation with AND gates.
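As an illustration of the LUT idea, the following sketch builds a small quantized table for φ(x); the 8-entry, 3-bit index and the breakpoints chosen here are assumptions for demonstration, not the quantization actually used in the patent's hardware.

```python
import numpy as np

def phi(x):
    x = np.clip(x, 1e-6, 30.0)
    return -np.log(np.tanh(x / 2.0))

# 8-level (3-bit) non-uniform quantization of the input range (assumed breakpoints)
edges = np.array([0.0, 0.1, 0.25, 0.5, 1.0, 1.8, 3.0, 5.0, 30.0])
centers = 0.5 * (edges[:-1] + edges[1:])
lut = phi(centers)                            # one stored output value per quantization cell

def phi_lut(x):
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(lut) - 1)
    return lut[idx]                           # element-wise table look-up on a z-soft-bit vector

v = np.array([0.05, 0.3, 0.9, 2.5])           # a z = 4 soft-bit magnitude vector
print(phi(v))
print(phi_lut(v))                             # coarse LUT approximation
```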
Below we continue to discuss the variable-node update in the application example, expressed by the following formulas:
for j = 0, ..., n_b−1
  for i = 0, ..., m_b−1
$$v_{ij}^{(k)} = Q_j^{(0)} + \sum_{i'\in Iset(j)\setminus i} u_{i'j}^{(k-1)}$$
for j = 0, ..., n_b−1
$$Q_j^{(k)} = Q_j^{(0)} + \sum_{i'\in Iset(j)} u_{i'j}^{(k)}$$
The variable node corresponding to column 1 of the basis matrix comprises two calculation subunits, whose operations are as follows:
$$v_{00}^{(k)} = Q_0^{(0)} + u_{10}^{(k-1)}$$
$$v_{10}^{(k)} = Q_0^{(0)} + u_{00}^{(k-1)}$$
And the received code word log-likelihood ratio corresponding to the first column of the basis matrix in this iteration is:
$$Q_0^{(k)} = Q_0^{(0)} + u_{00}^{(k)} + u_{10}^{(k)}$$
Fig. 7 shows the structure of the variable-node processing unit corresponding to the first column of the basis matrix. This processor is composed of several adders, and all additions are additions of vectors of size z soft bits. The variable-node processing units and operations corresponding to columns 2 to 4 of the basis matrix are all similar, and together they constitute the variable-node processing array of the decoder of this application example.
Regarding the addressing relations, it can be seen from Fig. 4 that each variable-node calculation unit reads data from the memory blocks, in the check-node-to-variable-node information vector memory cell, corresponding to the non-(−1) elements of the corresponding basis matrix column. Although the calculation subunits inside the variable-node calculation units are not shown in the figure, it can be seen from the variable-node update formula that each calculation subunit reads data from the memory blocks corresponding to the other non-(−1) elements of the respective basis matrix column, excluding the element of this subunit; for example, calculation subunit VNU_00 fetches data from memory block CNOD_MEM_10.
As can be seen from Fig. 5, each variable-node calculation unit outputs to the memory blocks, in the variable-node-to-check-node information vector memory cell, corresponding to the non-(−1) elements of the corresponding basis matrix column. Although the calculation subunits are not shown in the figure, it can be seen from the variable-node update formula that subunits and memory blocks are associated and connected one-to-one through the non-(−1) element of the basis matrix corresponding to each; that is, calculation subunit VNU_ij outputs to memory block VNOD_MEM_ij.
Fig. 7A and Fig. 7B illustrate structures that realize the above variable-node update formulas with adders and subtracters, one used to complete the absolute-value operation and the other the sign operation.
The code word log-likelihood calculation unit is also divided into n_b calculation subunits, corresponding respectively to the columns of the basis matrix. Besides receiving the initial log-likelihood ratio vector from the memory block of its column in the initial log-likelihood ratio vector memory cell, each calculation subunit also obtains the check-node-to-variable-node information vectors from the memory blocks, in the check-node-to-variable-node information vector memory cell, corresponding to all the nodes of that column.
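For completeness, the adder structure just described corresponds to the following small computation for column 0 of the example; the numeric message values below are made up for illustration.

```python
import numpy as np

z = 2
Q0_col0 = np.array([1.4, -0.8])          # initial LLR vector of basis-matrix column 0 (made-up values)
u_00 = np.array([0.3, 0.5])              # check-to-variable vectors for the two nodes of column 0
u_10 = np.array([-0.2, 0.7])

v_00 = Q0_col0 + u_10                    # excludes the node's own incoming message u_00
v_10 = Q0_col0 + u_00                    # excludes u_10
Q0_new = Q0_col0 + u_00 + u_10           # codeword LLR of column 0 for this iteration

print(v_00, v_10, Q0_new)
```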
Second embodiment of the decoding device of the present invention
The hardware structure of the decoding device of the present embodiment corresponds to the log-domain vector decoding method of the simplified form and is shown in Figure 9. The present embodiment realizes the same function as the first embodiment but, because it corresponds to a different algorithm, differs somewhat in structure.
To explain the similarities and differences between the two, the construction units in Fig. 3 can alternatively be divided into: an initial value calculation module composed of the received codeword vector memory cell, the vector initial value calculation unit and the initial log-likelihood ratio vector memory cell; an iterative operation module composed of the bidirectional networks A and B, the variable-node-to-check-node information vector memory cell, the check-node-to-variable-node information vector memory cell, the check-node processing array, the variable-node processing array and the code word log-likelihood calculation unit; a basis matrix processing module composed of the original basis matrix memory cell, the basis matrix correction unit and the revised basis matrix memory cell; a hard-decision detection module composed of the code word log-likelihood ratio vector memory cell, the hard-decision detection unit and the hard-decision vector memory cell; and a control module.
Comparing Fig. 3 and Fig. 9, it can be seen that the units included in the initial value calculation module, the basis matrix processing module and the hard-decision detection module of the present embodiment, and the connection relations between those units, are identical with the first embodiment; the only difference is that the vector initial value calculation unit need not calculate the initial values of the variable-node-to-check-node information vectors. The units of these 3 modules are not described again here.
As shown in Fig. 9, the iterative operation module of the decoding device of the present embodiment comprises a node update processing array (MPUs), a bidirectional network composed of a read network and a write network, a check-node-to-variable-node information vector memory cell and a code word log-likelihood calculation unit. Wherein:
The check-node-to-variable-node information vector memory cell comprises L memory blocks, where L is the number of non-(−1) elements in the basis matrix. Each memory block is used to store one of the L check-node-to-variable-node information vectors to be transmitted that are output by the node update processing array; each check-node-to-variable-node information vector corresponds to one non-(−1) element of the basis matrix.
The node update processing array consists of m_b calculation units corresponding respectively to the m_b rows of the basis matrix. Each calculation unit again comprises several calculation subunits corresponding respectively to all the non-(−1) elements in this row of the basis matrix, L calculation subunits in total. Each subunit reads the check-node-to-variable-node information vectors through the read network from the corresponding memory blocks of the check-node-to-variable-node information vector memory cell, reads the code word log-likelihood ratio vectors from the corresponding memory blocks of the code word log-likelihood ratio vector memory cell, and reads the corresponding element value from the revised basis matrix memory cell; it completes one node update operation (see formula (4)) and then writes the updated check-node-to-variable-node information vector through the write network into the corresponding memory block of the check-node-to-variable-node information vector memory cell.
In the bidirectional buffer network, the read network provides the read addresses used by the node update processing array to read the corresponding vectors from the check-node-to-variable-node information vector memory cell, the code word log-likelihood ratio vector memory cell and the revised basis matrix memory cell, and the write network provides the write addresses used by the node update array to write the check-node-to-variable-node information vectors into the check-node-to-variable-node information vector memory cell. More specifically, for the calculation subunit in the node update processing array corresponding to a certain non-(−1) element of the basis matrix, the read network couples it to the memory blocks, in the check-node-to-variable-node information vector memory cell, of all the other non-(−1) elements in the same basis matrix row as this element (excluding the element itself), to the memory blocks, in the code word log-likelihood ratio vector memory cell, corresponding to all the other non-(−1) elements in that row, and also to the memory block of this element in the revised basis matrix memory cell.
The function of the code word log-likelihood calculation unit is identical with the first embodiment and is not repeated.
The decoding flow that the control module of the present embodiment controls and coordinates the units to complete also comprises an initialization step, an iterative decoding step and a decoding detection and output step. The initialization step and the decoding detection and output step are basically identical with the first embodiment — the only difference being that the initial values of the variable-node-to-check-node information vectors need not be calculated — and these two steps are not repeated.
The iterative decoding step is realized by the following operations: in each clock cycle, the node update processing array of the decoder reads the check-node-to-variable-node information vectors from the check-node-to-variable-node information vector memory cell, reads the code word log-likelihood ratio vectors from the code word log-likelihood ratio vector memory cell and reads the basis matrix element values from the revised basis matrix memory cell, delivers them to the corresponding calculation subunits to complete the node update operation, and then writes the resulting check-node-to-variable-node information vectors into the corresponding memory blocks of the check-node-to-variable-node information vector memory cell.
At the same time, in each clock cycle, an initial log-likelihood ratio vector and a check-node-to-variable-node information vector are read from the initial log-likelihood ratio vector memory cell and from each memory block of the check-node-to-variable-node information vector memory cell and delivered to the corresponding code word log-likelihood calculation subunit, which calculates the code word log-likelihood ratio vector of this iteration and then writes it into the corresponding memory block of the code word log-likelihood ratio vector memory cell.
In summary, the present invention can adopt the same decoder structure described above for different code lengths. The differences are only that the correction makes the contents of the registers holding h_ij^b different, and that the different spreading factors make the effective vector length in the circular registers of each node different. In terms of operations, the inter-vector operations and the decoding flow are identical; only the vector length of the intra-vector operations and the number of positions of the cyclic shifts differ. Therefore, the decoder provided by the present invention, being based on the vector decoding algorithm, has the same hardware topology for LDPC codes of the same code rate and different code lengths; compared with the ordinary BP algorithm, the required storage space reaches a minimum, the hardware implementation complexity is low, and it is suitable for parallel implementation. The decoder of the present invention is suitable for use in large-scale integrated circuits or FPGAs (hardware implementation) and can also be used in a DSP (software implementation).
Therefore, the algorithm and decoder provided by the present invention do not need to store and access a very large parity check matrix but only the basis matrix, so the implementation complexity is greatly reduced; since the parity check matrix need not be stored, the access indices of the sparse-matrix elements need not be stored either, which significantly reduces the required memory capacity; since the operations can be completed with the basis matrix alone, the matrix expansion step is omitted; and since the topology of the decoder depends only on the basis matrix, LDPC codes of a given code rate and different code lengths can adopt a unified decoder. Because the algorithm is based on vector operations, it is very suitable for parallel operation. In a word, this algorithm and decoder are the preferred scheme for LDPC codes based on the unit matrix and its cyclic shift matrices, and they are especially significant for the variable-code-length case. The encoder can also be completed with similar vector operations, and at that point this LDPC code based on the unit matrix and cyclic shift matrices becomes a vector LDPC code.
On the basis of the above embodiments, the present invention may also have various variations. For example, in another embodiment, when the decoder corresponds to only one code length, the basis matrix correction unit is not needed: the element values can be read directly from the basis matrix memory cell, or the corresponding data can be directly configured into the corresponding node calculation arrays.

Claims (15)

1. An LDPC code vector decoding method based on the unit matrix and its cyclic shift matrices, adopting a check matrix $H=\{(P_{ij})_{z\times z}\}_{m_b\times n_b}$ uniquely corresponding to a basis matrix $H_b=\{h_{ij}^{b}\}_{m_b\times n_b}$, $\forall i\in[0,1,\dots,m_b-1]$, $\forall j\in[0,1,\dots,n_b-1]$, where the iteration count is k, the spreading factor is z, Iset(j) is the row index set of the non-(−1) elements in column j of H_b, and Jset(i) is the column index set of the non-(−1) elements in row i of H_b, the method comprising the following steps:
(a) dividing the received data Y = [y_0, y_1, …, y_{N−1}] input to the decoder into n_b groups so that the elements of the receiving sequence vector array $R=\{R_j\}_{1\times n_b}$ are $R_j=[y_{jz},y_{jz+1},\dots,y_{(j+1)z-1}]$;
(b) letting k = 0, obtaining the initial value of the reliability vector array from the receiving sequence vector array R, and obtaining the initial value of the transfer information vector matrix, the said vectors being vectors of 1 × z soft bits;
(c) using the transfer information vector matrix and reliability vector array obtained in the (k−1)-th iteration and the non-(−1) element values $h_{ij}^{b}$ of the basis matrix to carry out an update operation and obtain the transfer information vector matrix and reliability vector array of the k-th iteration, the minimum arithmetic unit in all operations being a vector of 1 × z soft bits;
(d) carrying out a hard decision on the said reliability vector array to obtain the hard-decision vector array $S=\{S_j\}_{1\times n_b}$, S_j being a 1 × z row vector, and then calculating the parity vector array $T=\{T_i\}_{m_b\times 1}$ according to $T_i=\sum_{j=1}^{n_b}P_{ij}^{-1}S_j^{T}$;
(e) judging whether the vector array T is all zero; if so, decoding is successful, the hard decision is output and the flow ends; otherwise, letting k = k+1 and judging again whether k is smaller than the maximum number of iterations: if so, returning to step (c), otherwise decoding fails and the flow ends.
2. The vector decoding method of claim 1, wherein the method is the log-domain vector decoding method of the simplified form, and wherein:
in said step (b), the received data vector array R is used to complete the computation of the initial values of all non-vanishing vectors in the check-node-to-variable-node information vector matrix $U=\{u_{ij}\}_{m_b\times n_b}$ and the code word log-likelihood ratio vector array $Q=\{Q_j\}_{1\times n_b}$; this step is completed by the following loop operation: outer loop j = 0, …, n_b−1, inner loop i ∈ Iset(j), with the formula $u_{ij}^{(0)}=0$, $Q_j^{(0)}=2R_j/\sigma^2$, where σ² is the noise variance;
said step (c) is further divided into the following steps:
(c1) according to the check-node-to-variable-node information vector matrix U^(k−1) and the code word log-likelihood ratio vector array Q^(k−1) of the previous iteration, updating all non-vanishing vectors in the check-node-to-variable-node information vector matrix U^(k) of this iteration to realize the node update; this step is completed by the following loop operation: outer loop i = 0, …, m_b−1, inner loop j ∈ Jset(i), with the formula:
$$u_{ij}^{(k)}=P_{ij}\,2\tanh^{-1}\!\prod_{j'\in Jset(i)\setminus j}\tanh\!\left(\frac{P_{ij'}^{-1}Q_{j'}^{(k-1)}-P_{ij'}^{-1}u_{ij'}^{(k-1)}}{2}\right)$$
where Jset(i)\j denotes the set Jset(i) after the column index j has been excluded;
(c2) according to the initial log-likelihood ratio vector array Q^(0) and the check-node-to-variable-node information vector matrix U^(k) of this iteration, calculating all non-vanishing vectors in the code word log-likelihood ratio vector array Q^(k) of this iteration, namely, for any j = 0, …, n_b−1, calculating $Q_j^{(k)}=Q_j^{(0)}+\sum_{i'\in Iset(j)}u_{i'j}^{(k)}$;
and in said step (d), the hard decision is carried out on the code word log-likelihood ratio vector array Q^(k).
3. The vector decoding method of claim 1, wherein the method is the log-domain vector decoding method of the general form, and wherein:
in said step (b), the received data vector array R is used to complete the calculation of the initial values of all non-vanishing vectors in the variable-node-to-check-node information vector matrix $V=\{v_{ij}\}_{m_b\times n_b}$ and the code word log-likelihood ratio vector array $Q=\{Q_j\}_{1\times n_b}$; this step is completed by the following loop operation: outer loop j = 0, …, n_b−1, inner loop i ∈ Iset(j), with the formula $v_{ij}^{(0)}=Q_j^{(0)}=2R_j/\sigma^2$, where σ² is the noise variance;
said step (c) is further divided into the following steps:
(c1) according to V^(k−1) of the previous iteration, updating all non-vanishing vectors in the check-node-to-variable-node information vector matrix U^(k) of this iteration to realize the check-node update; this step is completed by the following loop operation: outer loop i = 0, …, m_b−1, inner loop j ∈ Jset(i), with the formula:
$$u_{ij}^{(k)}=P_{ij}\,2\tanh^{-1}\!\prod_{j'\in Jset(i)\setminus j}\tanh\!\left(\frac{P_{ij'}^{-1}v_{ij'}^{(k-1)}}{2}\right)$$
(c2) according to the initial log-likelihood ratio vector array Q^(0) and the check-node-to-variable-node information vector matrix U^(k) of this iteration, calculating all non-vanishing vectors in the variable-node-to-check-node information vector matrix V^(k) of this iteration to realize the variable-node update, completed by the following loop operation: outer loop j = 0, …, n_b−1, inner loop i = 0, …, m_b−1, with the formula:
$$v_{ij}^{(k)}=Q_j^{(0)}+\sum_{i'\in Iset(j)\setminus i}u_{i'j}^{(k)}$$
and at the same time calculating all non-vanishing vectors in the code word log-likelihood ratio array Q^(k) of this iteration, namely, for any j = 0, …, n_b−1, calculating $Q_j^{(k)}=Q_j^{(0)}+\sum_{i'\in Iset(j)}u_{i'j}^{(k)}$;
and in said step (d), the hard decision is carried out on the code word log-likelihood ratio vector array Q^(k).
4, vectorial interpretation method as claimed in claim 1 is characterized in that, this method is a probability territory vector interpretation method, wherein:
In the described step (b), be to utilize to receive array of data R, calculate variable node to check-node information vector matrix Q 0 = { Q ij 0 } m b × n b , Q 1 = { Q ij 1 } m b × n b And vector matrix ΔQ = { Δ Q ij } m b × n b , And code word probability vector array F 0 = { F j 0 } 1 × n b With F 1 = { F j 1 } 1 × n b In the initial value of all non-vanishing vectors, finish by following loop computation: outer circulation j=0 ..., n b-1, interior circulation i ∈ Iset (j), formula is:
{ Q ij 0 = F j 0 = 1 1 + e - 2 R j / σ 2 , Q ij 1 = F j 1 = 1 - Q ij 0 , Δ Q ij = Q ij 0 + Q ij 1 }
Be further divided into following steps in the described step (c):
(c1) according to the Δ Q of last iteration (k-1), the check-node that upgrades this iteration is to variable node information vector matrix R 0 (k), R 1 (k)In all non-vanishing vectors, realize that check-node upgrades, and finishes by following loop computation: outer circulation i=0 ..., m b-1, interior circulation j ∈ Jset (i), formula is:
Δ R ij ( k ) = P ij Π j ′ ∈ Jset ( i ) \ j P ij ′ - 1 Δ Q ij ′ ( k - 1 ) , R ij 0 ( k ) = ( 1 + Δ R ij ( k ) ) / 2 , R ij 1 ( k ) = 1 - R ij 0 ( k ) = ( 1 - Δ R ij ( k ) ) / 2
(c2) according to the initial codeword probability vector arrays $F^0$, $F^1$ and the check-node-to-variable-node information vector matrices $R^{0(k)}$, $R^{1(k)}$ of the current iteration, computing all non-zero vectors in the variable-node-to-check-node information vector matrices $Q^{0(k)}$, $Q^{1(k)}$ of the current iteration, thereby realizing the variable-node update; this is completed by the following loop: outer loop $j = 0, \ldots, n_b - 1$, inner loop $i \in \mathrm{Iset}(j)$, with the formulas
$$Q_{ij}^{0(k)} = \alpha_{ij} F_j^0 \prod_{i' \in \mathrm{Iset}(j)\setminus i} R_{i'j}^{0(k)}, \quad Q_{ij}^{1(k)} = \beta_{ij} F_j^1 \prod_{i' \in \mathrm{Iset}(j)\setminus i} R_{i'j}^{1(k)};$$
at the same time, according to the initial codeword probability vector arrays $F^0$, $F^1$ and the check-node-to-variable-node information vector matrices $R^{0(k)}$, $R^{1(k)}$ of the current iteration, computing all non-zero vectors in the pseudo-posterior probability vector arrays $F^{0(k)}$, $F^{1(k)}$ that the variable nodes take the values 0 and 1, i.e., for every $j = 0, \ldots, n_b - 1$, computing
$$F_j^{0(k)} = \alpha_j F_j^0 \prod_{i' \in \mathrm{Iset}(j)} R_{i'j}^{0(k)}, \quad F_j^{1(k)} = \beta_j F_j^1 \prod_{i' \in \mathrm{Iset}(j)} R_{i'j}^{1(k)},$$
where $\alpha_{ij}$ and $\beta_{ij}$ are normalization coefficients such that $Q_{ij}^{0(k)} + Q_{ij}^{1(k)} = 1$;
and in said step (d), the hard decision vector array $S$ is obtained by comparing the magnitudes of $F^{0(k)}$ and $F^{1(k)}$.
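A minimal Python/NumPy sketch of the probability-domain updates of claim 4, operating element-wise on length-z vectors; the container names (dQ_prev, R0, R1, F0, F1) are illustrative assumptions, and the recomputation of $\Delta Q$ from the updated $Q^0$, $Q^1$ at the end of the variable-node step is an assumption implied by step (c1) of the next iteration rather than something stated in this claim.

```python
import numpy as np

def prob_check_update(dQ_prev, h_b, jset, i, z):
    """Step (c1) of claim 4: probability-domain check-node update for base-matrix row i."""
    R0, R1 = {}, {}
    for j in jset[i]:
        dR = np.ones(z)
        for jp in jset[i]:
            if jp == j:
                continue
            dR *= np.roll(dQ_prev[(i, jp)], -h_b[i][jp])   # P_ij'^-1 * dQ_ij'
        dR = np.roll(dR, h_b[i][j])                        # multiply by P_ij
        R0[(i, j)] = (1.0 + dR) / 2.0
        R1[(i, j)] = (1.0 - dR) / 2.0
    return R0, R1

def prob_variable_update(F0, F1, R0, R1, iset, j):
    """Step (c2) of claim 4: variable-node update and pseudo-posterior for column j."""
    Q0, Q1, dQ = {}, {}, {}
    for i in iset[j]:
        q0 = F0[j] * np.prod([R0[(ip, j)] for ip in iset[j] if ip != i], axis=0)
        q1 = F1[j] * np.prod([R1[(ip, j)] for ip in iset[j] if ip != i], axis=0)
        norm = q0 + q1                                      # alpha_ij / beta_ij normalization
        Q0[(i, j)], Q1[(i, j)] = q0 / norm, q1 / norm
        dQ[(i, j)] = Q0[(i, j)] - Q1[(i, j)]                # assumed recomputation for next iteration
    Fj0 = F0[j] * np.prod([R0[(ip, j)] for ip in iset[j]], axis=0)
    Fj1 = F1[j] * np.prod([R1[(ip, j)] for ip in iset[j]], axis=0)
    s_j = (Fj1 > Fj0).astype(int)                           # hard decision per step (d)
    return Q0, Q1, dQ, s_j
```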
5. The vector decoding method of claim 1, characterized in that said operations on vectors comprise vector arithmetic operations, vector cyclic shifts and vector function operations, wherein an arithmetic operation between two vectors is performed on their corresponding elements; multiplication of a vector by $P_{ij}$ is performed by cyclically right-shifting the elements of the vector by $h_{ij}^b$ positions; multiplication of a vector by $P_{ij}^{-1}$ is performed by cyclically left-shifting the elements of the vector by $h_{ij}^b$ positions; and a function operation on a vector is performed by applying the function to each element of the vector.
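A minimal sketch of the three kinds of vector operation listed in claim 5, in Python/NumPy; the expansion factor z and the shift value h are example values, and np.roll is used purely as a software stand-in for the hardware cyclic-shift operation.

```python
import numpy as np

z, h = 8, 3                               # expansion factor and shift value h_ij^b (example values)
a = np.arange(z, dtype=float)             # a z-element soft-bit vector
b = np.ones(z)

elementwise_sum = a + b                   # vector arithmetic: element by element
right_shifted   = np.roll(a, h)           # multiplication by P_ij   : cyclic right shift by h positions
left_shifted    = np.roll(a, -h)          # multiplication by P_ij^-1: cyclic left shift by h positions
tanh_of_vector  = np.tanh(a / 2.0)        # vector function operation: applied to each element
```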
6. The vector decoding method of claim 2 or 3, characterized in that said check-node-to-variable-node information vectors and variable-node-to-check-node information vectors are represented in fixed point, each vector comprising z soft bits, and each soft bit being quantized to 6 binary bits.
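A small sketch of quantizing a z-element soft-bit vector to 6-bit fixed point, as in claim 6; the split into integer and fractional bits and the saturation range (a signed 6-bit word holding −32 to 31) are assumptions of this sketch, since the claim fixes only the total word length.

```python
import numpy as np

def quantize_6bit(vec, frac_bits=2):
    """Quantize a float soft-bit vector to signed 6-bit fixed point (assumed 2 fractional bits)."""
    scaled = np.round(vec * (1 << frac_bits))
    return np.clip(scaled, -32, 31).astype(np.int8)   # 6-bit two's-complement range

def dequantize_6bit(qvec, frac_bits=2):
    """Convert the fixed-point representation back to floating point."""
    return qvec.astype(float) / (1 << frac_bits)
```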
7. The vector decoding method of claim 2, 3 or 4, characterized in that the check-node update processing of said iterative decoding is implemented using the above-mentioned standard belief propagation algorithm or one of the following approximations of that algorithm: the BP-based algorithm, the APP-based algorithm, the uniformly most powerful belief propagation algorithm, the min-sum algorithm, and the min-sum algorithm with lookup table.
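As an illustration of one approximation listed in claim 7, a minimal sketch of a min-sum check-node update on length-z vectors; the data layout mirrors the earlier sketches, rows of degree at least two are assumed, and the normalization factor 0.75 is an illustrative assumption rather than a value given in the patent.

```python
import numpy as np

def min_sum_check_update(V_prev, h_b, jset, i, scale=0.75):
    """Min-sum approximation of the check-node update for base-matrix row i (degree >= 2 assumed)."""
    U_row = {}
    shifted = {jp: np.roll(V_prev[(i, jp)], -h_b[i][jp]) for jp in jset[i]}  # P_ij'^-1 * v_ij'
    for j in jset[i]:
        others = [shifted[jp] for jp in jset[i] if jp != j]
        sign = np.prod(np.sign(others), axis=0)
        mag = np.min(np.abs(others), axis=0)
        U_row[(i, j)] = np.roll(scale * sign * mag, h_b[i][j])               # multiply by P_ij
    return U_row
```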
8. An LDPC code vector decoder based on a unit matrix and cyclic shift matrices thereof, characterized by comprising a base matrix processing module, an initial value computation module, an iterative computation module, a hard-decision check module and a control module, wherein:
said base matrix processing module comprises a base matrix storage unit having L storage blocks, each storage block being used to store one non −1 element value $h_{ij}^b \neq -1$ of the base matrix $H_b = \{h_{ij}^b\}_{m_b \times n_b}$, where L is the number of non −1 elements in the base matrix, $\forall i \in [0, 1, \ldots, m_b - 1]$, $\forall j \in [0, 1, \ldots, n_b - 1]$;
said initial value computation module is used to receive the input data $Y = [y_0, y_1, \ldots, y_{N-1}]$ and buffer it in $n_b$ storage blocks, then compute the initial values of the reliability vector array and store them in $n_b$ storage blocks, and obtain the initial values of the transfer information vector matrix;
said iterative computation module is used to perform the update computation using the transfer information vector matrix and reliability vector array obtained in the previous iteration together with the non −1 element values $h_{ij}^b$ of the base matrix, obtaining the transfer information vector matrix and reliability vector array after the current iteration;
said hard-decision check module is used to perform a hard decision on the reliability vector array obtained in the iteration to obtain the hard-decision vector array $S = \{S_j\}_{1 \times n_b}$, store it in $n_b$ storage blocks, then compute
$$T_i = \sum_{j=1}^{n_b} P_{ij}^{-1} S_j^T,$$
and decide whether the resulting parity-check vector array $T = \{T_i\}_{m_b \times 1}$ is all zero;
said control module is used to control the other modules to complete the initial value computation, the iterative computation and the hard-decision check; when the array T is all zero, the hard decision is output, the decoding succeeds and the process ends; when T is not all zero, it is further judged whether the number of iterations is less than the maximum number of iterations: if so, the next iteration is continued, and if the maximum number of iterations has been reached, the decoding fails and the process ends;
and all the storage blocks are blocks storing z soft bits, the operations between the elements of the arrays and matrices are vector operations of size z soft bits, the computation units of the modules read and write data directly from the corresponding storage blocks, and the data transferred is always an integer multiple of z soft bits, where z is the expansion factor.
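To show how the modules of claim 8 interact, here is a compact, self-contained Python/NumPy sketch of the whole control flow, using a min-sum check-node update (one of the approximations allowed by claim 7); the flooding schedule, the absence of a normalization factor, and the BPSK sign convention (negative LLR decides bit 1) are assumptions of this sketch, not requirements of the patent. Hb is assumed to be an integer NumPy array of shift values with −1 marking zero blocks.

```python
import numpy as np

def decode(y, Hb, z, sigma2=1.0, max_iters=50):
    """Flooding-schedule vector decoder sketch: initialize, iterate, hard-decide, check parity."""
    mb, nb = Hb.shape
    edges = [(i, j) for i in range(mb) for j in range(nb) if Hb[i, j] != -1]
    R = [np.asarray(y[j*z:(j+1)*z], dtype=float) for j in range(nb)]   # received data vector array
    Q0 = [2.0 * r / sigma2 for r in R]                                 # initial LLR vector array
    V = {(i, j): Q0[j].copy() for (i, j) in edges}                     # variable-to-check messages
    for _ in range(max_iters):
        U = {}
        for i in range(mb):                                            # check-node update (min-sum)
            row = [j for j in range(nb) if Hb[i, j] != -1]
            sh = {j: np.roll(V[(i, j)], -Hb[i, j]) for j in row}       # P_ij^-1 * v_ij
            for j in row:
                others = [sh[jp] for jp in row if jp != j]
                U[(i, j)] = np.roll(np.prod(np.sign(others), axis=0)
                                    * np.min(np.abs(others), axis=0), Hb[i, j])
        Q = []
        for j in range(nb):                                            # variable-node / posterior update
            col = [i for i in range(mb) if Hb[i, j] != -1]
            Q.append(Q0[j] + sum(U[(i, j)] for i in col))
            for i in col:
                V[(i, j)] = Q[j] - U[(i, j)]
        S = [(q < 0).astype(int) for q in Q]                           # hard-decision vector array
        T = [sum(np.roll(S[j], -Hb[i, j]) for j in range(nb) if Hb[i, j] != -1) % 2
             for i in range(mb)]                                       # parity-check vector array
        if all(not t.any() for t in T):
            return np.concatenate(S), True                             # success: T is all zero
    return np.concatenate(S), False                                    # failure after max iterations
```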
9. The vector decoder of claim 8, characterized in that the size of each said storage block is $z_{max}$ soft bits, where $z_{max}$ is the expansion factor corresponding to the low-density parity-check code of the maximum code length at a given code rate.
10. The vector decoder of claim 8, characterized in that fixed hardware connections are established between each said computation unit and the corresponding storage blocks, so as to realize the addressing of the data.
11. The vector decoder of claim 8, characterized in that the base matrix storage unit in said base matrix processing module stores the element values of the original base matrix; or, the base matrix storage unit in said base matrix processing module is a modified base matrix storage unit, in which case the processing module further comprises an original base matrix storage unit and a base matrix modification unit, and the computation units of said iterative computation module are also connected to the corresponding storage blocks of this modified base matrix storage unit in order to read data.
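Claim 11 allows the decoder to work from a modified base matrix but does not state how the modification is computed; the sketch below therefore only illustrates two conventions commonly used for quasi-cyclic LDPC codes, in which shift values defined for the maximum expansion factor $z_{max}$ are rescaled for a smaller z. Both the mod-based rule and the scale-and-floor rule are assumptions of this sketch.

```python
import numpy as np

def modify_base_matrix(Hb, z, z_max, rule="mod"):
    """Rescale the non -1 shift values of a base matrix for expansion factor z (assumed rules)."""
    Hb_mod = Hb.copy()
    mask = Hb != -1
    if rule == "mod":
        Hb_mod[mask] = Hb[mask] % z                                       # modulo rule
    else:
        Hb_mod[mask] = np.floor(Hb[mask] * z / z_max).astype(Hb.dtype)    # scale-and-floor rule
    return Hb_mod
```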
12. The vector decoder of claim 8, characterized in that said initial value computation module comprises:
a received codeword vector storage unit, used to buffer the received codeword sequence $Y = [y_0, y_1, \ldots, y_{N-1}]$ and store it in $n_b$ storage blocks in the form of the received sequence vector array $R = \{R_j\}_{1 \times n_b}$, each storage block storing one vector $R_j = [y_{jz}, y_{jz+1}, \ldots, y_{(j+1)z-1}]$;
a vector initial value computation unit, used to read the received sequence vectors $R_j$ and compute the initial log-likelihood ratio vector array $Q = \{Q_j\}_{1 \times n_b}$, where $Q_j^{(0)} = 2R_j/\sigma^2$ and $\sigma^2$ is the noise variance;
an initial log-likelihood ratio vector storage unit, comprising $n_b$ storage blocks that respectively store the $n_b$ vectors $Q_j$ of said initial log-likelihood ratio vector array.
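A minimal sketch of the initial value computation of claim 12: splitting the received sequence into $n_b$ length-z vectors and forming the initial LLR vectors; the $2R_j/\sigma^2$ formula is taken from the claim, while the Python list containers are an illustrative software stand-in for the storage blocks.

```python
import numpy as np

def initial_values(y, n_b, z, sigma2):
    """Buffer the received sequence as n_b length-z vectors and compute Q_j^(0) = 2 R_j / sigma^2."""
    assert len(y) == n_b * z
    R = [np.asarray(y[j*z:(j+1)*z], dtype=float) for j in range(n_b)]   # received vector array
    Q0 = [2.0 * Rj / sigma2 for Rj in R]                                # initial LLR vector array
    return R, Q0
```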
13. The vector decoder of claim 12, characterized in that said iterative computation module comprises a check-node-to-variable-node information vector storage unit, a node update processing array, a bidirectional buffer network composed of a read network and a write network, and a codeword log-likelihood computation unit, wherein:
said check-node-to-variable-node information vector storage unit comprises L storage blocks, each storage block being used to store one of the L check-node-to-variable-node information vectors to be transferred that are output by the node update processing array, each check-node-to-variable-node information vector corresponding to one non −1 element of the base matrix;
said node update processing array is composed of $m_b$ computation units corresponding respectively to the $m_b$ rows of the base matrix, each computation unit in turn comprising a plurality of computation subunits corresponding respectively to all the non −1 elements of that row of the base matrix, L subunits in total; each computation subunit reads data through the read network from the corresponding storage blocks of the check-node-to-variable-node information vector storage unit and of the codeword log-likelihood ratio vector storage unit, completes one node update operation, and then writes the updated check-node-to-variable-node information vector through the write network to the corresponding storage block of the check-node-to-variable-node information vector storage unit;
in said bidirectional buffer network, for a computation subunit of said node update processing array corresponding to a certain non −1 element of the base matrix, the read network couples it to the storage blocks, in the check-node-to-variable-node information vector storage unit, of all the other non −1 elements of the base matrix row containing that element, and to the storage blocks, in the codeword log-likelihood ratio vector storage unit, corresponding to all the other non −1 elements of that row;
said codeword log-likelihood computation unit is composed of $n_b$ computation subunits; each computation subunit obtains, from the corresponding storage blocks of the initial log-likelihood ratio vector storage unit and of the check-node-to-variable-node information vector storage unit, the initial log-likelihood ratio vector and the check-node-to-variable-node information vectors after the current iteration, and computes one codeword log-likelihood ratio vector of the current iteration.
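The read and write networks of claims 13 and 14 are wired according to where the non −1 elements sit in the base matrix; the short sketch below builds that edge-to-storage-block mapping (one block index per non −1 element, plus the per-row and per-column index sets $\mathrm{Jset}(i)$ and $\mathrm{Iset}(j)$). The dictionary representation is an illustrative software stand-in for the fixed hardware connections of claim 10.

```python
def build_connection_maps(Hb):
    """Map each non -1 base-matrix element to a storage-block index and build Jset/Iset."""
    mb, nb = Hb.shape
    block_of_edge = {}                     # (i, j) -> storage block index (0 .. L-1)
    Jset = {i: [] for i in range(mb)}      # columns j of row i with h_ij^b != -1
    Iset = {j: [] for j in range(nb)}      # rows i of column j with h_ij^b != -1
    for i in range(mb):
        for j in range(nb):
            if Hb[i, j] != -1:
                block_of_edge[(i, j)] = len(block_of_edge)
                Jset[i].append(j)
                Iset[j].append(i)
    L = len(block_of_edge)
    return block_of_edge, Jset, Iset, L
```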
14. The vector decoder of claim 12, characterized in that said iterative computation module comprises a check-node-to-variable-node information vector storage unit, a variable-node-to-check-node information vector storage unit, a variable node processing array, a check node processing array, a bidirectional buffer network comprising a read network A, a write network A, a read network B and a write network B, and a codeword log-likelihood computation unit, wherein:
said check-node-to-variable-node information vector storage unit comprises L storage blocks, each storage block being used to store one check-node-to-variable-node information vector, each check-node-to-variable-node information vector corresponding to one non −1 element of the base matrix;
said variable-node-to-check-node information vector storage unit comprises L storage blocks, each storage block being used to store one variable-node-to-check-node information vector, each variable-node-to-check-node information vector corresponding to one non −1 element of the base matrix;
said variable node processing array is composed of $n_b$ variable node computation units, each computation unit comprising a plurality of computation subunits corresponding to all the non −1 elements of the base matrix column corresponding to that variable node; each computation subunit reads data through the read network B from the corresponding storage blocks of the check-node-to-variable-node information vector storage unit and of the initial log-likelihood ratio vector storage unit, completes the variable node update operation, and then writes the updated variable-node-to-check-node information vector through the write network B to the corresponding storage block of the variable-node-to-check-node information vector storage unit;
said check node processing array is composed of $m_b$ check node computation units, each computation unit comprising a plurality of computation subunits corresponding to all the non −1 elements of the base matrix row corresponding to that check node; each computation subunit reads data through the read network A from the corresponding storage blocks of the variable-node-to-check-node information vector storage unit and, using the value of the base matrix element corresponding to that computation subunit, completes the check node update operation, and then writes the updated check-node-to-variable-node information vector through the write network A to the corresponding storage block of the check-node-to-variable-node information vector storage unit;
for a computation subunit of the check node processing array corresponding to a certain non −1 element of the base matrix, said read network A couples it to the storage blocks, in the variable-node-to-check-node information vector storage unit, of all the other non −1 elements of the base matrix row containing that element, and said write network A couples it to the storage block of that element in the check-node-to-variable-node information vector storage unit;
for a computation subunit of the variable node processing array corresponding to a certain non −1 element of the base matrix, said read network B couples it to the storage blocks, in the check-node-to-variable-node information vector storage unit, of all the other non −1 elements of the base matrix column containing that element, and also to the storage block, in the initial log-likelihood ratio vector storage unit, corresponding to the column of the base matrix containing that element; and the write network B couples it to the storage block of that element in the variable-node-to-check-node information vector storage unit;
said codeword log-likelihood computation unit is composed of $n_b$ computation subunits; each computation subunit obtains, from the corresponding storage blocks of the initial log-likelihood ratio vector storage unit and of the check-node-to-variable-node information vector storage unit, the initial log-likelihood ratio vector and the check-node-to-variable-node information vectors after the current iteration, and computes one codeword log-likelihood ratio vector of the current iteration.
15. The vector decoder of claim 13 or 14, characterized in that said hard-decision check module comprises:
a codeword log-likelihood ratio vector storage unit, comprising $n_b$ storage blocks used to store the $n_b$ codeword log-likelihood ratio vectors $Q_j^{(k)}$ obtained in each iteration;
a hard-decision check unit, used to perform a hard decision on the codeword log-likelihood ratio vector array $Q$ produced by the decoding to obtain $n_b$ hard-decision vectors, and to judge whether the parity-check vector array $T$ is all zero;
a hard-decision vector storage unit, comprising $n_b$ storage blocks used to store the $n_b$ hard-decision vectors obtained by the hard decision.
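A minimal sketch of the hard-decision check module of claims 8 and 15: hard decisions on the posterior LLR vectors followed by the block-wise parity check $T_i = \sum_j P_{ij}^{-1} S_j^T$; the use of np.roll for the $P_{ij}^{-1}$ cyclic left shift follows claim 5, and the sign convention (negative LLR maps to bit 1) is an assumption of this sketch.

```python
import numpy as np

def hard_decision_check(Q, Hb):
    """Hard-decide the posterior LLR vectors and verify the block parity checks."""
    mb, nb = Hb.shape
    S = [(q < 0).astype(int) for q in Q]           # hard-decision vector array
    T = []
    for i in range(mb):
        t_i = np.zeros_like(S[0])
        for j in range(nb):
            if Hb[i, j] != -1:
                t_i += np.roll(S[j], -Hb[i, j])    # P_ij^-1 * S_j^T : cyclic left shift
        T.append(t_i % 2)                          # sum over GF(2)
    success = all(not t.any() for t in T)          # all-zero parity vector array -> success
    return S, T, success
```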
CN200510114589A 2005-10-26 2005-10-26 LDPC code vector decode translator and method based on unit array and its circulation shift array Expired - Fee Related CN100589357C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200510114589A CN100589357C (en) 2005-10-26 2005-10-26 LDPC code vector decode translator and method based on unit array and its circulation shift array

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200510114589A CN100589357C (en) 2005-10-26 2005-10-26 LDPC code vector decode translator and method based on unit array and its circulation shift array

Publications (2)

Publication Number Publication Date
CN1956368A CN1956368A (en) 2007-05-02
CN100589357C true CN100589357C (en) 2010-02-10

Family

ID=38063490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200510114589A Expired - Fee Related CN100589357C (en) 2005-10-26 2005-10-26 LDPC code vector decode translator and method based on unit array and its circulation shift array

Country Status (1)

Country Link
CN (1) CN100589357C (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101345601B (en) * 2007-07-13 2011-04-27 华为技术有限公司 Interpretation method and decoder
CN101350695B (en) * 2007-07-20 2012-11-21 电子科技大学 Method and system for decoding low density parity check code
CN101911503A (en) * 2007-12-29 2010-12-08 上海贝尔股份有限公司 Encoding method and encoding device of LDPC codes
CN102904581B (en) * 2011-07-26 2017-03-01 无锡物联网产业研究院 The building method of LDPC check matrix and device
CN105227191B (en) * 2015-10-08 2018-08-31 西安电子科技大学 Based on the quasi-cyclic LDPC code coding method for correcting minimum-sum algorithm
CN106201781B (en) * 2016-07-11 2019-02-26 华侨大学 A kind of cloud date storage method based on the right canonical correcting and eleting codes
CN107733440B (en) * 2016-08-12 2022-12-02 中兴通讯股份有限公司 Polygonal structured LDPC processing method and device
CN108270510B (en) * 2016-12-30 2020-12-15 华为技术有限公司 Communication method and communication equipment based on LDPC code
WO2019114992A1 (en) * 2017-12-15 2019-06-20 Huawei Technologies Co., Ltd. Design of base parity-check matrices for ldpc codes that have subsets of orthogonal rows
CN110661593B (en) * 2018-06-29 2022-04-22 中兴通讯股份有限公司 Decoder, method and computer storage medium
CN111106837B (en) * 2018-10-26 2023-09-08 大唐移动通信设备有限公司 LDPC decoding method, decoding device and storage medium
CN109766214A (en) * 2019-04-01 2019-05-17 苏州中晟宏芯信息科技有限公司 A kind of optimal H-matrix generation method and device
CN111431543B (en) * 2020-05-13 2023-08-01 东南大学 Variable code length and variable code rate QC-LDPC decoding method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LDPC码译码算法的研究 [Research on decoding algorithms for LDPC codes]. 杨兴丽 [Yang Xingli]. 中国优秀硕士学位论文全文数据库 [China Master's Theses Full-text Database]. 2004 *

Also Published As

Publication number Publication date
CN1956368A (en) 2007-05-02

Similar Documents

Publication Publication Date Title
CN100589357C (en) LDPC code vector decode translator and method based on unit array and its circulation shift array
Dutta et al. A unified coded deep neural network training strategy based on generalized polydot codes
CN104868925B (en) Coding method, interpretation method, code device and the code translator of structured LDPC code
US7730377B2 (en) Layered decoding of low density parity check (LDPC) codes
Voicila et al. Low-complexity decoding for non-binary LDPC codes in high order fields
JP5199463B2 (en) Turbo LDPC decoding
JP5483875B2 (en) Method and apparatus for LDPC code block and rate independent decoding
CN101924565B (en) LDPC encoders, decoders, systems and methods
KR101405962B1 (en) Method of performing decoding using LDPC code
US10536169B2 (en) Encoder and decoder for LDPC code
KR101438072B1 (en) Multiple programming of flash memory without erase
CN111615793A (en) Vertical layered finite alphabet iterative decoding
US8984365B1 (en) System and method for reduced memory storage in LDPC decoding
CN101295988B (en) Decoding apparatus
US7493548B2 (en) Method and apparatus for encoding and decoding data
JP2008509635A (en) Data encoding and decoding method and apparatus
US9853661B2 (en) On-the-fly evaluation of the number of errors corrected in iterative ECC decoding
CN101064591B (en) Decoding method for low density parity check code and its check node refreshing circuit
US10848182B2 (en) Iterative decoding with early termination criterion that permits errors in redundancy part
US20220255560A1 (en) Method and apparatus for vertical layered decoding of quasi-cyclic low-density parity check codes built from clusters of circulant permutation matrices
US8271851B2 (en) Encoding and decoding a data signal as a function of a correcting code
CN101154948A (en) Methods and apparatus for low-density parity check decoding using hardware-sharing and serial sum-product architecture
CN109586732A (en) Middle short code LDPC coding/decoding system and method
CN100544212C (en) The loe-density parity-check code decoder of minimizing storage demand at a high speed
CN1973440A (en) LDPC encoders, decoders, systems and methods

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100210

Termination date: 20151026

EXPY Termination of patent right or utility model