GB2252702A - Channel coding for speech - Google Patents

Channel coding for speech

Info

Publication number
GB2252702A
GB2252702A (application GB9200659A)
Authority
GB
United Kingdom
Prior art keywords
bits
sequence
speech
frame
coder
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9200659A
Other versions
GB9200659D0 (en)
Inventor
Ivan Boyd
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
British Telecommunications PLC
Original Assignee
British Telecommunications PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB9100823D0
Priority claimed from GB9106156D0
Application filed by British Telecommunications PLC
Publication of GB9200659D0
Publication of GB2252702A

Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/35 Unequal or adaptive error protection, e.g. by providing a different level of protection according to significance of source information or by adapting the coding according to the change of transmission channel characteristics
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M13/00 Coding, decoding or code conversion, for error detection or error correction; Coding theory basic assumptions; Coding bounds; Error probability evaluation methods; Channel models; Simulation or testing of codes
    • H03M13/03 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words
    • H03M13/23 Error detection or forward error correction by redundancy in data representation, i.e. code words containing more digits than the source words using convolutional codes, e.g. unit memory codes

Landscapes

  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Error Detection And Correction (AREA)

Abstract

Speech is analysed (202) on a frame-by-frame basis to produce LPC coefficients L and excitation information E1, E2. This information is converted into a serial sequence (203) prior to convolutional coding (204). The serial sequence is modified by the addition of trailing zeros (tailing bits) and also by the insertion of one or more further groups of zeros at one or more intermediate positions in the sequence. A further modification to the apparatus involves the use of a systematic code in the convolutional coder (204), wherein the inserted groups are not transmitted in the coded sequence, but are re-inserted at the decoder.

Description

CHANNEL CODING FOR SPEECH

The present invention is concerned with the coding of digital signals using a convolutional code, and decoding such coded signals.
According to the present invention there is provided a method of coding speech comprising: (i) analysing speech, to obtain, for each of successive time frames of speech, a sequence of information bits; (ii) concatenating the sequence with a terminating group containing one or more consecutive bits having a predetermined value or values to form a modified sequence; and (iii) coding the modified sequence using a convolutional code; characterised in that the modified sequence includes at least one further group containing one or more consecutive bits having a predetermined value or values, the or each further group being located at an intermediate position in the first-mentioned sequence.
Some embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:

Figures 1 and 2 are block diagrams of two known forms of convolutional coder;
Figure 3 is a state diagram for the coder of Figure 1;
Figure 4 is a trellis diagram illustrating the operation of a known Viterbi decoder;
Figure 5 is a block diagram of a known speech coder and decoder;
Figure 6 shows graphically the error profile of the system of Figure 5;
Figure 7 is a block diagram of one embodiment of a speech coder and decoder in accordance with the present invention; and
Figure 8 shows graphically the error profile of the system of Figure 7.
First, the basic concepts of convolutional coding and Viterbi decoding will be explained.
A convolutional code is a type of error-correcting code; that is, the signals are coded in such a way that when the coded signals are received, with some of the bits in error, they can be decoded to produce decoded signals which are error-free, or at least have a lower incidence of errors than would be the case without the use of an error-correcting code. Necessarily this process involves the introduction of redundancy. In convolutional coding, each bit (or each group of more than one bit) of the signal to be coded is coded by a convolutional coder to produce a larger number of bits; the coder has a memory of past events, so the output depends not only on the current input bit(s) but also on the history of the input signal.
The rate of the code is the ratio of input bits to output bits; for example, a coder which produces n coded bits for each k input bits has a rate R = k/n. The coder output may consist of the k input bits plus n-k bits each generated as a function of two or more input bits, or may consist entirely of such generated bits (referred to as parity bits). The former arrangement is referred to as a systematic code and the latter as a non-systematic code.
In this specification the term systematic code will be used to include the situation where the output includes some but not all of the signal bits. The parity bits are commonly formed by modulo-2 addition of bits selected from the current data bit and earlier data bits, though non-linear elements and feedback paths may also be included. A typical 1/2-rate systematic coder with k=1, n=2 is shown in Figure 1. It receives serial data bits Dj into a two-stage delay line with delay elements 1, 2 and generates in a modulo-2 adder 3 a bit Pj which is Dj ⊕ Dj-2.
In general this function is defined by a subgenerator g: a binary sequence indicating which of the delay line taps do (1) or do not (0) participate in the addition. In Figure 1 the subgenerator for D is 100 and that for P is 101; these may be written in octal notation viz. (4,5).
The commutator 4 indicates that bits Dj, Pj, Dj+1, Pj+1, etc. are output serially in that order.
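By way of illustration (this sketch is ours, not part of the patent), the coder of Figure 1 can be written in a few lines of Python; the two state variables stand for delay elements 1 and 2, and the alternating output order models the commutator 4:

    def encode_fig1(data_bits):
        """Rate-1/2 systematic coder of Figure 1: Pj = Dj XOR Dj-2."""
        d1 = d2 = 0                  # delay elements 1 and 2, initially clear
        out = []
        for d in data_bits:
            p = d ^ d2               # modulo-2 adder 3: subgenerator 101 (octal 5)
            out += [d, p]            # commutator 4: Dj, Pj, Dj+1, Pj+1, ...
            d1, d2 = d, d1           # shift the delay line
        return out

    # The sequence used in the decoding example later in this description
    # (the final two pairs arise from two flushing zeros):
    print(encode_fig1([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0]))
    # -> pairs 11 00 10 11 01 10 00 01 11 00 01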
Figure 2 shows a convolutional coder for a (23,35) non-systematic code - i.e. with subgenerators 10011 and 11101.
Because of the coder memory, a given data bit affects the values of a number of successive generated bits. As the delay line in Figure 1 has two stages, the data bit Dj contributes to two generated parity bits (Pj and Pj+2); the code is said to have a constraint length of K=3. The coder of Figure 2 has a constraint length of K=5. We also define (for the purposes of this description) a coded constraint length Kc, i.e. the number of output bits which are a function of a given input bit. This length Kc = nK. If (unusually) the coder has two subgenerators with different constraint lengths then we take the larger value for computing Kc.
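The same structure generalises to arbitrary subgenerators. The following sketch (again ours, using the tap convention described above) builds a rate-1/n coder from octal subgenerators and reports K and Kc:

    def make_encoder(subgens_octal):
        """Rate-1/n convolutional coder from octal subgenerators.

        The leftmost subgenerator bit taps the current input bit and the
        rightmost taps the oldest bit in the delay line.
        """
        gens = [int(g, 8) for g in subgens_octal]
        K = max(g.bit_length() for g in gens)            # constraint length
        taps = [[int(b) for b in format(g, '0%db' % K)] for g in gens]

        def encode(data_bits):
            reg = [0] * K                                # reg[0] is the newest bit
            out = []
            for d in data_bits:
                reg = [d] + reg[:-1]
                for t in taps:
                    out.append(sum(a & b for a, b in zip(t, reg)) % 2)
            return out

        return encode, K, len(gens) * K                  # Kc = nK

    encode_23_35, K, Kc = make_encoder(('23', '35'))     # Figure 2 coder: K = 5, Kc = 10

make_encoder(('4', '5')) reproduces the tap pattern of the Figure 1 coder.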
The process of decoding a signal coded by means of a convolutional code now needs to be considered. Because the code is a redundant code, not all output sequences are possible. One method of decoding a systematic code is to use an algebraic decoder: the received data bits are fed into a local coder and the locally generated parity bits are compared with the received parity bits. If these are the same, the received data bits are deemed to be valid. If they are different, then this shows that one or more of the received bits is in error; provided the level of errors does not exceed the correction capability of the code, the erroneous bits can be located and hence corrected.

Looking at the problem more generally, consider for the moment a complete message which has been received after coding and transmission over a channel which is subject to errors. If the received sequence is an allowable sequence then either it has been received without error, or the errors are such as to transform the original sequence into another allowable sequence; either way, errors are assumed not to be present and the message can readily be decoded. If, however, the received message is not an allowable sequence, the decoding problem becomes one of identifying the allowable sequence which has the maximum correlation with the received sequence, and decoding that. It can be shown that dealing with the whole message in this way produces the maximum likelihood of decoding the message correctly.

It is not at once instinctively obvious why, when determining the value of a given data bit, one would wish to look at coded bits more than the coded constraint length away. Consider, for example, the following sequence generated by the coder of Figure 2:

Dj-1     Dj      Dj+1     Dj+2     Dj+3     Dj+4     Dj+5     ...
Pj-1(1)  Pj(1)   Pj+1(1)  Pj+2(1)  Pj+3(1)  Pj+4(1)  Pj+5(1)  ...
Pj-1(2)  Pj(2)   Pj+1(2)  Pj+2(2)  Pj+3(2)  Pj+4(2)  Pj+5(2)  ...

Pj+5(1) and Pj+5(2) clearly are independent of Dj and ostensibly are of no use in ascertaining its correct value in the presence of transmission errors. Suppose, however, that parity bit Pj+3(1) (for example) has been incorrectly received, along with other unspecified errors which together mean that Dj cannot be correctly decoded. It may be that the information carried by Pj+5(1) (which is less than the coded constraint length distant from Pj+3(1) and, like it, is a function of Dj+1, Dj+2 and Dj+3) is of value in correcting the error in Pj+3(1), and thus permits a resolution of the value of Dj.
In many cases it is not practical, in terms of system delay or decoding complexity, to look at the whole of a message; rather, one looks at bits within a time window of limited duration. The algebraic decoder discussed above and the maximum likelihood case correspond to window durations of the coded constraint length and infinity respectively. The error performance of a decoder approaches that of the maximum likelihood case asymptotically as the window length increases.
As the size of the window increases, the complexity of performing the required correlation increases, and it is therefore common to use a Viterbi decoder, which in effect deals with each received n-bit group in succession and updates partial correlations, but reduces the complexity by continually discarding correlation values (and hence candidate sequences) judged to be poor - although in fact the Viterbi decoder is generally described in terms of accumulated signal distance rather than correlation.
In order to describe its operation, it is necessary to note that the contents of the coder delay line are referred to as the state of the coder; each time the coder receives a data bit (or, more generally, k data bits) it undergoes a "state transition" from that state to another (or to the same) state. If the decoder assumes that the coder was in a particular state when a particular bit group was transmitted, it can compare the received bit group with the bit groups which the coder is capable of producing in a transition from that state, the signal distance between them being a measure of the probability of that transition having actually occurred.
Figure 3 shows a state diagram for the coder of Figure 1, where the states 00, 01, 10, 11 (the contents of delay elements 1 and 2 being shown as the most and least significant bits respectively) are represented by circles with the corresponding numbers within them. Arrows show all the possible transitions between any two states; the corresponding output bits D,P are shown adjacent to each arrow. If, at some point in time, the decoder has ascribed to each state a signal distance value, then, having carried out the above comparison to calculate a distance for each of the eight possible transition paths, one adds this to the distance value for the state from which the transition proceeds, thereby obtaining two updated distance values for each destination state.

This process may be illustrated by way of example. Suppose that the sequence 101101001 has been coded using the coder of Figure 1: its generator (the coder's response to a single 1) is 11 00 01 00 00 ..., so its output is

11 00 10 11 01 10 00 01 11 00 01

(the final two pairs arising as the delay line is flushed with two zeros). Suppose, further, that the received sequence is

11 00 11* 11 01 10 00 ...

with a single error in the asterisked bit.
The decoding process shown in Figure 4 assumes that the transmission starts with the coder in state (00) in column 1 (the first node). The first pair of bits, 11, has Hamming distances of 2 and 0 respectively from the paths to states (00) and (10) at node 2: these distances are written adjacent to the arrowheads. The second pair of received bits, 00, has distances 0, 0, 2, 2 from the next four possible paths to node 3: adjacent to each state are written the transmitted data associated with the path to that point, and the accumulated Hamming distance. From the third pair of received bits one identifies eight possible paths to the four states at node 4; if this process were to continue unchecked, we would have sixteen possible paths to the next node, then 32, and so on. However, each pair of paths converging on a given state at node 4 has a certain difference in accumulated Hamming distance - e.g. a difference of 1 at state (10) - and extension of these paths over a common extension path to later nodes will not change this difference. Therefore we can at once discard the path having the larger distance, leaving one survivor path at each state. The discarded paths are struck through in Figure 4 (where two paths have the same distance, the one having the lower position in the diagram is arbitrarily chosen for deletion). Note that at node 4, all survivor paths imply that the first data bit is "1" despite the error in the sixth (parity) bit, although the point at which such lack of ambiguity is apparent (if at all) will, in general, depend on the particular error pattern.
Continuing the same process to node 5, we see that the correct data 1011 is identifiable as having the lowest accumulated distance (though of course a decision on all four bits would not be taken at this point, because the decoder does not know that the error is in the sixth rather than the seventh bit).
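The trellis search just described can be captured in a short hard-decision Viterbi decoder for the Figure 1 code (again an illustrative sketch of ours: the state is (Dj-1, Dj-2) as in Figure 3, and one survivor path is kept per state):

    def viterbi_fig1(pairs, final_state=None):
        """Viterbi decoder for the Figure 1 code; pairs are received (D, P) bits.

        final_state, if given, is the coder state known to apply at the
        last node, and overrides the minimum-distance decision there.
        """
        INF = float('inf')
        states = [(0, 0), (0, 1), (1, 0), (1, 1)]
        dist = {s: (0 if s == (0, 0) else INF) for s in states}  # start in state 00
        path = {s: [] for s in states}
        for rd, rp in pairs:
            ndist = {s: INF for s in states}
            npath = {s: [] for s in states}
            for (s1, s2), acc in dist.items():
                if acc == INF:
                    continue
                for d in (0, 1):
                    branch = (d != rd) + ((d ^ s2) != rp)  # Hamming distance to (D, P)
                    nxt = (d, s1)
                    if acc + branch < ndist[nxt]:          # discard the poorer path
                        ndist[nxt] = acc + branch
                        npath[nxt] = path[(s1, s2)] + [d]
            dist, path = ndist, npath
        best = final_state if final_state is not None else min(dist, key=dist.get)
        return path[best]

    received = [(1, 1), (0, 0), (1, 1), (1, 1)]  # 11 00 11* 11, error in the sixth bit
    print(viterbi_fig1(received))                # [1, 0, 0, 1] - ties with [1, 0, 1, 1]

At node 5 this decoder cannot yet separate 1011 (error in the sixth bit) from 1001 (error in the seventh), which is exactly the ambiguity noted above.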
Assuming a finite decoding window is employed, the usual procedure is to decide, on the basis of the results over the window, upon the earliest data bit within that window, either by observing an unambiguous result or by choosing the result having the lowest accumulated Hamming distance (e.g. 10 ... at state 11 in node 5). Note that such a decision may be inconsistent with earlier decisions - i.e. the final result may imply the assumption of a non-allowable path sequence. For many applications this does not matter, but if it does then a correction may be made.
The above example assumed that the coder starting state was known; although not essential, this provides improved results if it can be done. At the end of a transmission, it is usual to append "tailing bits" (e.g. zeros) to ensure the generation of more than one parity bit from the last few data bits. If the number of tailing bits is equal to (or greater than) the length of the coder memory, and the identity of the tailing bits is "known" to the decoder, then this has the added advantage that, since the final coder state is known, no decision needs to be taken in the decoder between conflicting results at the last node.
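Continuing the sketches above, appending two zeros (the memory length of the Figure 1 coder) drives the coder back to state 00, and telling the decoder so resolves the last node unambiguously:

    coded = encode_fig1([1, 0, 1, 1, 0, 1, 0, 0, 1] + [0, 0])   # two tailing zeros
    pairs = [(coded[i], coded[i + 1]) for i in range(0, len(coded), 2)]
    pairs[2] = (1, 1)                                           # the sixth-bit error again
    decoded = viterbi_fig1(pairs, final_state=(0, 0))[:-2]      # strip the tailing bits
    print(decoded)                                              # -> [1, 0, 1, 1, 0, 1, 0, 0, 1]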
In the above description, it has been assumed that the data to be decoded are in binary (i.e. 1/0) form; if the received signal is derived from a modulated signal it is inherent in this assumption that a hard decision as to the value of the received bit has already been made. However, a soft demodulator may be used, i.e. one which gives an analogue output (or, more commonly, a digital word) representing a value in the range 0 to 1 - for example an FSK (frequency shift keying) demodulator may indicate the relative position of the received frequency between the nominal transmitted frequencies rather than indicating which it is closest to. In this case the decoding described above can proceed as a soft decision decoder, and actual signal distances are used.
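In a soft decision decoder the branch metric simply becomes a real-valued distance; one possible choice (illustrative only, the text does not prescribe a metric) is the squared Euclidean distance:

    def soft_branch_metric(expected, soft):
        """Distance between expected bits (0/1) and soft values in [0, 1]."""
        return sum((e - s) ** 2 for e, s in zip(expected, soft))

    print(soft_branch_metric((1, 0), (0.9, 0.4)))   # approximately 0.17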
In coding speech using LPC coding or another parametric representation of the speech, it is usual to code the speech on a frame-by-frame basis; i.e. the speech is divided into successive time frames and parameters (one or more of which hold for the whole frame) are generated for each frame.
It has been proposed, where such speech data are convolutionally coded, to insert tailing bits at the end of each frame, thereby improving error performance. This effectively decouples the frames from one another - i.e.
decoding of a frame is not affected by errors in adjacent frames.
Figure 5 shows a coder and decoder. Speech received at an input 101 is divided into 20ms frames by an LPC speech coder 102 which produces, for each frame (a) a set of LPC coefficients which define the response of a synthesis filter to be used at a decoder and (b) excitation information defining a signal to be generated at the decoder to provide the input to the synthesis filter. In this example, the excitation information consists of four codewords each of which identifies one of a "codebook" of pulse sequences stored in the speech coder 102. The decoder has a replica of the codebook and upon receipt of each codeword retrieves the corresponding stored sequence; the retrieved sequences are then concatenated to form the required excitation.
Some of the information output by the speech coder is to be transmitted uncoded; the remainder is concatenated with (assuming a four-delay convolutional coder) four tailing bits in a parallel-in/serial-out register 103. The contents of the register are clocked out serially into a convolutional coder 104. The latter has two outputs and these are intercalated by a commutator 105 to form a single serial bit stream. A unit 106 serves to add the uncoded bits from the speech coder 102 and signalling information (inter alia for frame synchronisation). The output signal is transmitted over a suitable channel 107 to the decoder.
At the decoder a unit 108 establishes frame synchronisation and produces a frame synchronisation signal F. The coded bits output from the unit 108 pass to a Viterbi decoder 109. This also receives the frame synchronisation signal F which signals a frame boundary; this indicates to the Viterbi decoder that the convolutional coder is in a known state (i.e. 0000), and the decoder uses this information to constrain its path selection; thus if the Viterbi decoder were to receive the framing signal when it had reached (for example) node 6 in Figure 4, it would at once select the data sequence corresponding to the best path terminating at state 0, irrespective of the accumulated Hamming distances at that node.
The effect of this is to improve performance in the presence of errors, and to render the output for each frame independent of errors occurring in preceding frames.
Moreover, it is noted that a particular improvement is found at the beginning and end of each frame; the residual bit error rate having somewhat the form shown in Figure 6.
It is possible to take advantage of this by ensuring that coded speech parameters which, when received incorrectly, produce particularly unpleasant results (e.g. the most significant bits of the LPC coefficients) are located in the low-error "corners" - i.e. at the beginning and end of the frame. The Viterbi decoder 109 is followed by a speech decoder 110, which also receives the uncoded bits from the unit 108 and supplies decoded speech to an output 111.
The present invention extends this approach by inserting one or more sequences of predetermined bits at an intermediate position or positions within the frame.
Thus in Figure 7 a PISO register 203 has groups of four "0" bits inserted not only at the end of the frame, but also in the middle. The effect of this is to produce an error rate profile as shown in Figure 8. Returning to Figure 7, a speech coder 202 produces, from signals supplied to a speech input 201, LPC coefficient data L, and excitation information E1 and E2 for the first and second halves of the frame respectively. As indicated by the connections, coefficient data are entered into the register 203 at positions corresponding to the ends and middle of the frame, such that the most significant coefficient bits fall in the shaded regions of Figure 8. The less significant bits and the excitation codewords fall in the regions between. A (23,35) convolutional coder 204 follows, with a commutator 205, framing unit 206 and transmission link 207 as in Figure 5.
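The assembly performed by the register 203 might be sketched as follows (the field widths and orderings here are invented for illustration; the text specifies only that the most significant coefficient bits fall in the low-error regions):

    def build_frame(parts, mem=4):
        """Assemble the modified sequence of Figure 7.

        parts holds one (msb_bits, body_bits) pair per part-frame; the
        sensitive bits are split between the "corners" around each zero group.
        """
        seq = []
        for msbs, body in parts:
            half = len(msbs) // 2
            seq += msbs[:half] + body + msbs[half:]   # sensitive bits at the corners
            seq += [0] * mem                          # intermediate/terminating zeros
        return seq

    # Dummy field contents: L = coefficient bits, E1/E2 = excitation codewords
    L_msb1, L_lsb1, E1 = [1, 0, 1], [0, 0], [1, 1, 0, 1]
    L_msb2, L_lsb2, E2 = [0, 1, 1], [1, 0], [0, 1, 1, 0]
    frame = build_frame([(L_msb1, E1 + L_lsb1), (L_msb2, E2 + L_lsb2)])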
At the decoder of Figure 7, framing signals F are extracted by a framing unit 208, which forwards the received data bits to a Viterbi decoder 209. In this case, however, the signal F is used to synchronise a divide-by-N/2 counter 212 (where N is the total number of transmitted bits per frame) which produces signals F' at the mid-point and at the end of the frame; these are supplied to the Viterbi decoder 209, which responds (as explained above) by fixing its trellis path to correspond to decoder state 0 at those points. A speech decoder 210 receives the decoded bits from the Viterbi decoder 209 and uncoded bits from the framing unit 208, and outputs decoded speech at an output 211.
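In terms of the earlier toy sketches (the Figure 1 code, whose two-bit memory calls for two-zero groups rather than four), the decoder's use of the mid-frame known state amounts to decoding each half-frame independently:

    def decode_halves(pairs):
        """Decode a frame whose two halves each end in a known zero group."""
        mid = len(pairs) // 2
        out = []
        for half in (pairs[:mid], pairs[mid:]):
            out += viterbi_fig1(half, final_state=(0, 0))[:-2]   # strip the zero group
        return out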
Naturally the use of extra tailing bits in this way involves (compared with the arrangement of Figure 5) the transmission of eight extra bits; this example assumes that this is accommodated by reducing the number of protected bits by eight.
A comparison of the error performance of the transmission arrangements of Figures 5 and 7 gives the following residual bit error rates, where the 'Triple' column is for a variant using three sets of four zero bits:

C to I Ratio   Unprot.       Fig. 5        Fig. 7        Triple
10 dB          4.5x10^-2     0.496x10^-2   0.368x10^-2   0.252x10^-2
7 dB           7.8x10^-2     2.2x10^-2     1.65x10^-2    1.21x10^-2
4 dB           12.5x10^-2    7.12x10^-2    5.71x10^-2    4.48x10^-2

It can be seen that a significant improvement in bit error rate has been obtained. Additionally, more locations are available for sensitive bits. A further benefit may be obtained if a system which identifies periods having an unacceptably high error rate (such as that described in our co-pending UK patent application No. 9105101.1 for identifying burst errors) is employed to initiate a process of discarding received parameters and substituting parameters from an earlier frame. In the case of the embodiment of Figure 7, if such a high-error indication is obtained for only one half of a frame, then, because errors in one half do not affect the other half, substitution need only occur for the half-frame affected, and parameters received in the other half-frame may be used.
A further modification that may be applied to the apparatus of Figure 7 can be implemented (as described in our co-pending UK patent application No. 9106180.4) if a systematic code is used in the convolutional coder 204. With such a code the coder output contains copies of the bits output from the register 203. In the modified version, the copies of the inserted "0" bits - their values being known - are not transmitted, but are re-inserted at the receiver prior to Viterbi decoding.
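That omit-and-re-insert step might look as follows (a sketch; the positions of the known zeros in the systematic output are available to both ends from frame synchronisation, and the function names are ours):

    def strip_known_zeros(coded, zero_positions):
        """Transmitter side: drop the systematic copies of the inserted zeros."""
        drop = set(zero_positions)
        return [b for i, b in enumerate(coded) if i not in drop]

    def reinsert_known_zeros(received, zero_positions):
        """Receiver side: restore the known zeros before Viterbi decoding."""
        out = list(received)
        for i in sorted(zero_positions):    # indices refer to the full sequence
            out.insert(i, 0)
        return out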

Claims (9)

1. A method of coding speech comprising: (i) analysing speech, to obtain, for each of successive time frames of speech, a sequence of information bits; (ii) concatenating the sequence with a terminating group containing one or more consecutive bits having a predetermined value or values to form a modified sequence; and (iii) coding the modified sequence using a convolutional code; characterised in that the modified sequence includes at least one further group containing one or more consecutive bits having a predetermined value or values, the or each further group being located at an intermediate position in the first-mentioned sequence.
2. A method according to claim 1 in which the said sequence of information bits includes a plurality of parameters represented in digital form which includes at least one parameter which is characteristic of the whole frame and at least n+1 (where n is the number of further groups) sub-frame parameters characteristic of respective parts of the frame, and in which each of the sub-frame parameters is located in a respective one of the n+1 portions of the modified sequence which are separated by the n further groups.
3. A method according to claim 1 or 2 in which a systematic convolutional code is employed, and the said bits having a predetermined value or values are omitted from the output sequence.
4. An apparatus for coding speech comprising: (i) a speech coder to produce for each of successive time frames of speech, a sequence of information bits comprising a plurality of parameters represented in digital form including at least one parameter which is characteristic of the whole frame; (ii) insertion means for concatenating the sequence with a terminating group containing one or more consecutive bits having a predetermined value or values to form a modified sequence; (iii) a convolutional coder for coding the modified sequence, the coder having a memory length not exceeding the number of said consecutive bits; and (iv) transmission means including means for inserting frame synchronisation information; characterised in that the insertion means is also arranged to insert into each modified sequence at least one further group containing one or more consecutive bits having a predetermined value or values, the or each further group being located at an intermediate position in the first-mentioned sequence.
5. An apparatus according to claim 4 in which the convolutional coder employs a systematic code, the transmission means is arranged in operation to produce an output sequence containing the bits of the first-mentioned sequence, and all the bits generated by the convolutional coder but not containing the said bits having a predetermined value or values.
6. An apparatus for receiving signals coded by the apparatus of claim 4, comprising: (i) frame synchronisation means including means to produce timing signals at predetermined positions during a frame which positions correspond to the locations of the said terminating group and the said further group or groups; (ii) a Viterbi decoder for decoding received data, the Viterbi decoder being connected to receive the timing signals and to constrain its search upon receipt thereof to the coder state corresponding to the relevant said one or more consecutive bits; and (iii) a speech decoder for decoding the output of the Viterbi decoder.
7. An apparatus for receiving signals coded by the apparatus of claim 5, comprising: (a) frame synchronisation means including means to produce timing signals at predetermined positions during a frame which positions correspond to the locations of the said terminating group and the said further group or groups; (b) means for inserting into a received bit sequence one or more bits having a predetermined value or values at locations determined by the frame synchronisation means; (c) a Viterbi decoder for decoding received data, the Viterbi decoder being connected to receive the timing signals and to constrain its search upon receipt thereof to the coder state corresponding to the relevant said one or more consecutive bits; and (d) a speech decoder for decoding the output of the Viterbi decoder.
8. Speech coding apparatus substantially as herein described with reference to Figure 7 of the accompanying drawings.
9. Speech decoding apparatus substantially as herein described with reference to Figure 7 of the accompanying drawings.
GB9200659A 1991-01-15 1992-01-14 Channel coding for speech Withdrawn GB2252702A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB919100823A GB9100823D0 (en) 1991-01-15 1991-01-15 Digital speech signals
GB919106156A GB9106156D0 (en) 1991-03-22 1991-03-22 Channel coding for speech

Publications (2)

Publication Number Publication Date
GB9200659D0 GB9200659D0 (en) 1992-03-11
GB2252702A (en) 1992-08-12

Family

ID=26298261

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9200659A Withdrawn GB2252702A (en) 1991-01-15 1992-01-14 Channel coding for speech

Country Status (1)

Country Link
GB (1) GB2252702A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0689311A3 (en) * 1994-06-25 1999-08-18 Nec Corporation Method and system for forward error correction using convolutional codes and a maximum likelihood decoding rule
EP0820052A2 (en) * 1996-03-29 1998-01-21 Mitsubishi Denki Kabushiki Kaisha Voice-coding-and-transmission system
EP0820052A3 (en) * 1996-03-29 2000-04-19 Mitsubishi Denki Kabushiki Kaisha Voice-coding-and-transmission system
WO1997042716A1 (en) * 1996-05-03 1997-11-13 Ericsson Inc. Data communications systems and methods using interspersed error detection bits
US5910182A (en) * 1996-05-03 1999-06-08 Ericsson Inc. Data communications systems and methods using interspersed error detection bits
AU721475B2 (en) * 1996-05-03 2000-07-06 Ericsson Inc. Data communications systems and methods using interspersed error detection bits
US6944234B2 (en) 2000-03-03 2005-09-13 Nec Corporation Coding method and apparatus for reducing number of state transition paths in a communication system

Also Published As

Publication number Publication date
GB9200659D0 (en) 1992-03-11

Similar Documents

Publication Publication Date Title
US5577053A (en) Method and apparatus for decoder optimization
EP0127984B1 (en) Improvements to apparatus for decoding error-correcting codes
JP3046988B2 (en) Method and apparatus for detecting frame synchronization of data stream
WO1996008895A9 (en) Method and apparatus for decoder optimization
JP3249405B2 (en) Error correction circuit and error correction method
EP0101218A2 (en) Methods of correcting errors in binary data
EP0897620B1 (en) VERFAHREN ZUR DEKODIERUNG VON DATENSIGNALEN MITTELS EINES ENTSCHEIDUNGSFENSTERS fester Länge
US4476458A (en) Dual threshold decoder for convolutional self-orthogonal codes
US8046670B1 (en) Method and apparatus for detecting viterbi decoder errors due to quasi-catastrophic sequences
JPH0445017B2 (en)
GB2252702A (en) Channel coding for speech
US5944849A (en) Method and system capable of correcting an error without an increase of hardware
KR20030036148A (en) Decoder and decoding method
GB2253974A (en) Convolutional coding
JPH06338807A (en) Method and equipment for sign correction
US6560745B1 (en) Method of identifying boundary of markerless codeword
WO1995001008A1 (en) Bit error counting method and counter
JP3272173B2 (en) Error correction code / decoding device
JP2591332B2 (en) Error correction decoding device
JP3235333B2 (en) Viterbi decoding method and Viterbi decoding device
KR0152055B1 (en) Eraser correction decoder
JP2570369B2 (en) Error correction decoding device
JPS63161732A (en) Reset signal generator for sequential error correction decoding device
KR100488136B1 (en) Method for decoding data signals using fixed-length decision window
JP3530451B2 (en) Viterbi decoding device

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)