USRE35781E - Coding method of image information - Google Patents

Coding method of image information

Info

Publication number
USRE35781E
USRE35781E
Authority
US
United States
Prior art keywords
number line
range
probable symbols
lpss
symbols
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US08/553,235
Inventor
Fumitaka Ono
Shigenori Kino
Masayuki Yoshida
Tomohiro Kimura
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitsubishi Electric Corp
Original Assignee
Mitsubishi Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP1021672A external-priority patent/JPH0834432B2/en
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Priority to US08/553,235 priority Critical patent/USRE35781E/en
Application granted granted Critical
Publication of USRE35781E publication Critical patent/USRE35781E/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/41Bandwidth or redundancy reduction
    • H04N1/411Bandwidth or redundancy reduction for the transmission or storage or reproduction of two-tone pictures, e.g. black and white pictures
    • H04N1/413Systems or arrangements allowing the picture to be reproduced without loss or modification of picture-information
    • H04N1/417Systems or arrangements allowing the picture to be reproduced without loss or modification of picture-information using predictive or differential encoding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/40Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
    • H03M7/4006Conversion to or from arithmetic code


Abstract

A coding method of a binary Markov information source comprises the steps of providing a range on a number line from 0 to 1 which corresponds to an output symbol sequence from the information source, and performing data compression by expressing, in binary, the position information on the number line corresponding to the output symbol sequence. The present method further includes the steps of: providing a normalization number line to keep a desired calculation accuracy by expanding a range of the number line which includes a mapping range, by means of a multiple of a power of 2, when the mapping range becomes less than 0.5 of the range of the number line; allocating a predetermined mapping range on the normalization number line for less probable symbols (LPS) in proportion to their occurrence probability; allocating the remaining mapping range on the normalization number line for more probable symbols (MPS); and, when the allocated remaining range becomes less than 0.5, reassigning from the predetermined mapping range to the remaining mapping range half of the portion by which the allocated remaining range falls short of 0.5.

Description

This application is a continuation of application Ser. No. 08/139,561, filed Oct. 20, 1993, now abandoned.
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to a coding method of image information or the like.
2. Description of Related Art
For coding a Markov information source, the number line representation coding system is known, in which a sequence of symbols is mapped on the number line from 0.0 to 1.0 and its coordinates are coded as code words which are, for example, represented in a binary expression. FIG. 1 is a conceptual diagram thereof. For simplicity, a bi-level memoryless information source is shown; the occurrence probability for "1" is set at r, and the occurrence probability for "0" at 1-r. When an output sequence length is set at 3, the coordinates of each of the rightmost C(000) to C(111) are represented in a binary expression and truncated at the digit at which they can be distinguished from each other, and are defined as the respective code words; decoding is possible at the receiving side by performing the same procedure as at the transmission side.
In such a sequence, the mapping interval Ai and the lower-end coordinates Ci of the symbol sequence at time i are given as follows:
When the output symbol ai is 0 (More Probable Symbol: hereinafter called MPS),
Ai = (1-r)Ai-1
Ci = Ci-1 + rAi-1
When the output symbol ai is 1 (Less Probable Symbol: hereinafter called LPS),
Ai = rAi-1
Ci = Ci-1
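As an illustrative sketch (not part of the patent text), the recurrence can be written in Python, with LPS mapped to the lower part of the interval as in the corrected equations; the function name is invented:

```python
def update_interval(a_prev, c_prev, symbol, r):
    """One step of number-line coding.

    The LPS sub-interval of width r*A sits at the bottom of the
    current interval [C, C+A); symbol 1 denotes LPS, 0 denotes MPS.
    """
    if symbol == 0:  # MPS takes the upper (1-r) portion
        return (1 - r) * a_prev, c_prev + r * a_prev
    else:            # LPS takes the lower r portion
        return r * a_prev, c_prev
```

A decoder can mirror this by comparing the received coordinate against c_prev + r*a_prev at each step, as the text's "same procedure at the receiving side" suggests.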
As described in "An overview of the basic principles of the Q-Coder adaptive binary arithmetic coder" (IBM Journal of Research and Development, Vol. 32, No. 6, November 1988), in order to reduce the number of calculations such as multiplications, a set of fixed values can be prepared and a value selected from among them, rather than actually calculating rAi-1.
That is, if rAi-1 of the above-mentioned expression is set at S,
when ai=0,
Ai = Ai-1 - S
Ci = Ci-1 + S
when ai=1,
Ai = S
Ci = Ci-1
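The fixed-value approximation can likewise be sketched (illustrative names, assuming S has already been selected from the prepared table):

```python
def update_interval_fixed(a_prev, c_prev, symbol, s):
    """Like the exact recurrence, but the LPS width r*A_{i-1} is
    replaced by a precomputed constant s, so no multiplication is
    needed per symbol (symbol 1 = LPS, 0 = MPS)."""
    if symbol == 0:  # MPS keeps the interval above the LPS band
        return a_prev - s, c_prev + s
    else:            # LPS keeps the bottom band of width s
        return s, c_prev
```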
However, as Ai-1 becomes successively smaller, S also needs to be made smaller in this instance. To keep the calculation accuracy, it is necessary to multiply Ai-1 by a power of 2 (hereinafter called normalization). In an actual code word, the above-mentioned fixed value is assumed to be the same at all times and is multiplied by powers of 1/2 at the time of calculation (namely, shifted in binary).
If a constant value is used for S as described above, a problem arises when, in particular, S is large and a normalized Ai-1 is relatively small.
An example thereof is given in the following. If Ai-1 is slightly above 0.5, Ai is very small when ai is an MPS, and may even be smaller than the area given when ai is an LPS. That is, in spite of the fact that the occurrence probability of MPS is essentially high, the area allocated to MPS is smaller than that allocated to LPS, leading to a decrease in coding efficiency. If it is required that the area allocated to MPS always be larger than that allocated to LPS, then since Ai-1 > 0.5, S must be 0.25 or smaller. Therefore, when Ai-1 is 1.0, r=0.25, and when Ai-1 is close to 0.5, r=0.5, with the result that the occurrence probability of LPS is considered to vary between 1/4 and 1/2 in coding. If this variation can be made small, an area proportional to an occurrence probability can be allocated and an improvement in coding efficiency can be expected.
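Normalization as described can be sketched as repeated doubling, i.e. binary left shifts; this is an illustrative fragment, not the patent's flowchart:

```python
def normalize(a):
    """Double a until it is at least 0.5; return the renormalized
    value together with the number of binary shift digits applied."""
    shifts = 0
    while a < 0.5:
        a *= 2       # one binary left shift
        shifts += 1
    return a, shifts
```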
SUMMARY OF THE INVENTION
The present invention has been devised to solve the above-mentioned problems, and in particular, it is directed at an increase in efficiency when the occurrence probability of LPS is close to 1/2.
Accordingly, it is an object of the present invention to provide a coding system in which, when the range provided to the more probable symbol falls below 0.5 on a normalized number line, half of the portion by which the allocated area of the more probable symbol falls below 0.5 is moved from the range of LPS to the range of the more probable symbol, so that coding based on the occurrence probability of LPS can be performed.
According to the present invention, by changing S according to the value of Ai-1, r is stabilized and coding in response to the occurrence probability of LPS can be performed. According to the present invention, in particular when r is 1/2, coding in which r is assumed to be 1/2 at all times, rather than dependent on Ai-1, can be performed, and high efficiency can be expected.
Also, according to the present invention, in the number line coding, an area allocated to LPS can be selected depending on the occurrence probability of LPS, therefore it has an advantage in that efficient coding can be realized.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a view of the prior art illustrating the concept of a number line coding;
FIG. 2 is a view illustrating a coding device in accordance with one embodiment of the present invention;
FIG. 3 is a flow chart for coding of one embodiment of the present invention;
FIG. 4 is a flow chart of decoding in one embodiment of the present invention; and
FIG. 5 is an example of an operation in one embodiment of the present invention.
EMBODIMENT
FIG. 2 shows one embodiment of the present invention. An adder 1 adds the value of S, which is input thereto, and the output of an offset calculating unit 3 to calculate the upper-limit address of LPS. A comparator 2 compares the calculated value with 0.5. When the value is 0.5 or smaller and the occurring symbol is MPS, the processing of the offset calculating unit 3 stops at the addition of the above-mentioned S. Similarly, if the comparator 2 judges that the value is 0.5 or smaller and the occurring symbol is LPS, the base calculating unit 4 performs a base calculation and outputs the base coordinates as codes. A number-of-shift-digits calculating unit 5 determines the multiple (2^n times) required for normalization (which brings the effective range to between 0.5 and 1.0) from the value of S and outputs it as the number of shift digits.
Next, when the comparator 2 judges the value to be above 0.5 (decimal), the upper-limit address of LPS is corrected by an LPS upper-limit address correcting unit 6. A base calculation is performed by the base calculating unit 4 to output the base coordinates therefrom. A shift-digit calculation is performed by the number-of-shift-digits calculating unit 5 to output the number of shift digits therefrom. Then, the output base coordinates are processed in an addition register (not shown) to form a code word. The number of shift digits output from the unit 5 indicates by how many digits the code word to be output next is shifted. The code word is then added in the register. To explain the above-described process more precisely, flowcharts for coding and decoding are shown in FIGS. 3 and 4, respectively. In each of these flowcharts, the case where S is defined as a power of 1/2 is illustrated.
Next, a concrete example of coding will be explained. Suppose that, in FIG. 5, the coordinates are expressed in binary and that S is set at 1/8 or 1/4. First, if S=1/8 is known from the Markov state in a Markov information source, then 1 (LPS) is assigned to the range from 0.000 to 0.001 and 0 (MPS) is assigned to the range from 0.001 to 1.000. Now, if a 0 symbol occurs, the range is limited to between 0.001 and 1.000. At this time, the offset value is 0.001. For the next symbol, since it is known from the occurrence probability of 1 that S=1/4 is used in both reception and transmission, 1 is assigned to the range from 0.001 to 0.011. At this point, if 0 occurs, the range of the number line varies from 0.011 to 1.000. Next, if S=1/4, the upper limit of the allocated range of LPS is 0.011+0.01=0.101, which exceeds 0.1 (0.5 in decimal). So a correction in which the portion exceeding 0.1 is halved is made, and the upper limit becomes 0.1001. At this point, if LPS has occurred, the size of the area of LPS is 0.1001-0.011=0.0011. If it is multiplied by 2^2, it exceeds 0.1 (0.5 in decimal). Therefore, the number of shift digits is 2. The base value is 0.1001-0.01=0.0101 and this value is output as a code word. A new offset value becomes 0.01, since 0.011-0.0101=0.0001 is shifted by two digits. Next, S is set at 1/8 and 0.01+0.001=0.011 becomes the border between 0 and 1. If 0 occurs at this point, the offset value is increased to 0.011. If S is set at 1/4 at this point, this results in 0.011+0.01=0.101, which exceeds 0.1. As the portion exceeding 0.1 is halved, the value becomes 0.1001. Since the area of 0 is less than 0.1, if the symbol is 0, a base value 0.1000 must be output, and then it must be normalized 2^1 times. In other words, 0.1000 is a base value, so a new offset value is 0.001, which is 2^1 times (0.1001-0.1). Suppose that the next state is S=1/8 and MPS has occurred; then the border value is 0.001+0.001=0.010. Further, suppose that the next state is S=1/4 and 1 (LPS) has occurred; an offset value 0.0100 is output as a code word.
A final code word becomes one which is calculated on the basis of the number of shift digits and the code words which are output as explained above (refer to the lower portion of FIG. 5).
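The correction step used above (halving the portion of the LPS upper limit that exceeds binary 0.1, i.e. decimal 0.5) can be sketched as follows; this is an illustrative fragment with an invented function name, not the patent's flowchart:

```python
def corrected_lps_upper(offset, s):
    """Upper limit of the LPS interval. When offset + s would exceed
    0.5 (so the MPS area would drop below 0.5 on the normalized line),
    half of the excess is reassigned back to MPS."""
    upper = offset + s
    if upper > 0.5:
        upper = 0.5 + (upper - 0.5) / 2  # halve the portion above 0.5
    return upper
```

With offset 0.011 binary (0.375) and S=1/4, the uncorrected limit 0.101 (0.625) becomes 0.1001 (0.5625), matching the worked example.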
If the value of S is selected from a set of values which are powers of 1/2, such as 1/2, 1/4, or 1/8, the multiples of powers of 2 used for normalization remain constant even if the value of S is varied by the correction when the allocated area of MPS is below 0.5 on the normalization number line. This is advantageous.
When an area is provided to 0 (MPS) and 1 (LPS) in the above-described manner, the relationship between the value of S and the assumed occurrence probability of LPS when S is determined is given as follows:
S≦r<S/(1/2+S)
Therefore, when S=1/2, r=1/2, which indicates it is stable.
If S=1/4, 1/4≦r<1/3.
On the other hand, if S is fixed in a conventional manner, the assumed occurrence probability r becomes as follows:
S≦r<S/(1/2)=2S
If S=1/2, 1/2≦r<1,
If S=1/4, 1/4≦r<1/2.
That is, since the variation range of r is larger for a conventional system, the system of the present invention is more efficient.
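As a numeric sanity check (not part of the patent), the narrower range of r can be probed under the model that the normalized working interval is [1-A, 1] with LPS at the bottom and the halving correction applied, an assumption consistent with the worked example:

```python
def effective_r(s, a):
    """Effective LPS probability s_eff / a for a normalized interval
    width a in (0.5, 1.0], with the halving correction applied when
    the LPS upper limit would exceed 0.5."""
    offset = 1.0 - a
    upper = offset + s
    if upper > 0.5:                      # halving correction
        upper = 0.5 + (upper - 0.5) / 2
    return (upper - offset) / a

# Sample a over (0.5, 1.0]; with S = 1/4 the effective r stays within
# [1/4, 1/3], narrower than the conventional range [1/4, 1/2).
samples = [0.5 + (k + 1) * 0.0005 for k in range(1000)]
rates = [effective_r(0.25, a) for a in samples]
```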
The multi-level information source can be converted into a binary information source by tree development. Therefore, it goes without saying that the present invention can be applied to a multi-level information source.

Claims (12)

What is claimed is:
1. A method for coding information from a binary Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPS) and more probable symbols (MPS), each having an occurrence probability, on a normalization number line, said method comprising the steps of:
a) storing in a memory storage device a normalization number line having a range from 0 to 1 which corresponds to said output symbol sequence,
b) keeping a desired calculation accuracy by expanding a range of the normalization number line which includes a mapping range by means of a multiple of a power of 2 when the mapping range becomes less than 0.5,
c) allocating a portion of said normalization number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs,
d) allocating the remaining portion of said number line as a mapping interval for said MPSs,
e) reassigning half of the LPS mapping interval above 0.5 to said MPS mapping interval when the LPS mapping range exceeds 0.5, and
f) repeating steps b, c, d and e.
2. A coding method as set forth in claim 1 wherein said LPS mapping interval is a power of 1/2 of the range of said number line.
3. A coding method as set forth in claim 1 further including the steps of assigning as an offset value the difference between 1 and the mapping interval after a current step (b), and coding a base value as a codeword by calculating the offset value as a codeword by using the difference between the upper limit of said mapping range just after a previous step (b) and a lower limit of mapping range just before the current step (b).
4. An apparatus for coding information from a binary Markov information source by binary coding an output symbol sequence comprising less probable symbols (LPSs) and more probable symbols (MPS) from said information source on a normalization number line, said LPSs and MPSs each having an occurrence probability, said apparatus comprising:
memory storage means for storing a normalization number line having a range from 0 to 1 which corresponds to said output symbol sequence,
means for keeping desired calculation accuracy by expanding a range on said normalization number line, which includes a mapping range, by a multiple power of 2 when the mapping range becomes less than 0.5,
means for allocating a portion of said normalization number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs,
means for allocating the remaining portion of said normalization number line as a mapping interval for said MPSs,
means for reassigning half of the LPS mapping interval above 0.5 to said MPS mapping interval when said LPS mapping interval exceeds 0.5.
5. An apparatus as set forth in claim 4 wherein said LPS mapping interval is a power of 1/2 of the range of said number line.
6. An apparatus as set forth in claim 4 further comprising means for assigning an offset value, said offset value being the difference between 1 and the mapping interval after the range of the normalization number line is expanded, and means for coding a base value as a codeword by using the difference between the upper limit of the mapping range just after a previous expansion of the normalization number line and a lower limit of said mapping range before expansion.
7. A method for coding information from a Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPSs) and more probable symbols (MPSs), each sequence having an occurrence probability on a number line, said method comprising,
(a) storing in memory storage device a number line having a range which corresponds to said output symbol sequence;
(b) allocating a portion of said number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPS;
(c) allocating the remaining portion of said number line as a mapping interval for said MPSs; and
(d) controlling the allocating portion of said number line as a mapping interval for said LPSs by assigning a predetermined portion of the mapping interval for said LPSs above a prescribed value of said number line to said mapping interval for said MPSs, so as to maintain said portion proportional to the occurrence probability of said LPSs.
8. An apparatus for coding information from a Markov information source by binary coding an output symbol sequence comprising less probable symbols (LPSs) and more probable symbols (MPSs) from said information source on a number line, said LPSs and MPSs each having an occurrence probability, said apparatus comprising:
memory storage means for storing a number line having a range which corresponds to said output symbol sequence;
means for allocating a portion of said number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs;
means for allocating the remaining portion of said number line as a mapping interval for said MPSs; and
control means for controlling the allocating portion of said number line as a mapping interval for said LPSs by assigning a predetermined portion of the mapping interval for said LPSs above a prescribed value of said number line to said mapping interval for said MPSs, so as to maintain said portion proportional to the occurrence probability of said LPSs.
9. A method for coding information from a Markov information source by binary coding an output symbol sequence from said information source comprising less probable symbols (LPSs) and more probable symbols (MPSs) each having an occurrence probability on a number line, said method comprising,
(a) storing in a memory storage device a number line having a range which corresponds to said output symbol sequence;
(b) allocating a portion of said number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs;
(c) allocating the remaining portion of said number line as a mapping interval for said MPSs; and
(d) reassigning half of the LPSs mapping interval above a prescribed value to said MPSs mapping interval when the LPSs mapping range exceeds the prescribed value, and
(e) repeating steps b, c, and d.
10. An apparatus for coding information from a Markov information source by binary coding an output symbol sequence comprising less probable symbols (LPSs) and more probable symbols (MPSs) from said information source on a number line, said LPSs and MPSs each having an occurrence probability, said apparatus comprising:
memory storage means for storing a number line having a range which corresponds to said output symbol sequence;
means for allocating a portion of said number line as a predetermined mapping interval for said LPSs, said portion being proportional to the occurrence probability of said LPSs;
means for allocating the remaining portion of said number line as a mapping interval for said MPSs; and
means for reassigning half of the LPSs mapping interval above a prescribed value to said MPSs mapping interval when said LPSs mapping range exceeds the prescribed value.
11. A decoding method for a Markov information source coded by binary coding comprising the steps of:
associating more probable symbols (symbols of a higher occurrence probability) and less probable symbols (symbols of a lower occurrence probability) to predetermined ranges on a number line on the basis of a range on a number line for a preceding symbol;
outputting a decoding signal according to a result of correspondence between the ranges and an inputted codeword;
comparing the range on the number line of more probable symbols with the range on the number line of less probable symbols; and
adjusting the range on the number line of less probable symbols and the range on the number line of more probable symbols by assigning a predetermined portion of the range for said less probable symbols above a prescribed value of said number line to said range for said more probable symbols so that the range on the number line of less probable symbols does not exceed that of the more probable symbols.
12. A decoding method for a Markov information source coded by binary coding comprising the steps of:
associating more probable symbols (symbols of a higher occurrence probability) and less probable symbols (symbols of a lower occurrence probability) to predetermined ranges on a number line on the basis of a range on a number line for a preceding symbol;
outputting a decoding signal according to a result of correspondence between the ranges and an inputted codeword;
comparing a range on the number line of more probable symbols with a fixed value; and
adjusting the range on the number line of more probable symbols and the range on the number line of less probable symbols so that when a range of more probable symbols is below the fixed value on a number line, half of a value below the fixed value of a range of more probable symbols is moved from the range of less probable symbols to that of more probable symbols.
13. A coding method for a Markov information source by binary coding comprising the steps of:
associating more probable symbols (symbols of a higher occurrence probability) and less probable symbols (symbols of a lower occurrence probability) to predetermined ranges on a number line on the basis of a range on a number line for a preceding symbol;
coding a signal according to a result of correspondence between the ranges to generate a codeword;
comparing the range on the number line of more probable symbols with the range on the number line of less probable symbols; and
adjusting the range on the number line of less probable symbols and the range on the number line of more probable symbols by assigning a predetermined portion of the range for said less probable symbols above a prescribed value of said number line to said range for said more probable symbols so that the range on the number line of less probable symbols does not exceed that of the more probable symbols.
14. A coding method for a Markov information source by binary coding comprising the steps of:
associating more probable symbols (symbols of a higher occurrence probability) and less probable symbols (symbols of a lower occurrence probability) to predetermined ranges on a number line on the basis of a range on a number line for a preceding symbol;
coding a signal according to a result of correspondence between the ranges to generate a codeword;
comparing a range on the number line of more probable symbols with a fixed value; and
adjusting the range on the number line of more probable symbols and the range on the number line of less probable symbols so that when a range of more probable symbols is below the fixed value on a number line, half of a value below the fixed value of a range of more probable symbols is moved from the range of less probable symbols to that of more probable symbols.
US08/553,235 1989-01-31 1995-11-07 Coding method of image information Expired - Lifetime USRE35781E (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/553,235 USRE35781E (en) 1989-01-31 1995-11-07 Coding method of image information

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
JP1-21672 1989-01-31
JP1021672A JPH0834432B2 (en) 1989-01-31 1989-01-31 Encoding device and encoding method
US07/470,099 US5059976A (en) 1989-01-31 1990-01-25 Coding method of image information
US13956193A 1993-10-20 1993-10-20
US08/553,235 USRE35781E (en) 1989-01-31 1995-11-07 Coding method of image information

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US07/470,099 Reissue US5059976A (en) 1989-01-31 1990-01-25 Coding method of image information
US13956193A Continuation 1989-01-31 1993-10-20

Publications (1)

Publication Number Publication Date
USRE35781E true USRE35781E (en) 1998-05-05

Family

ID=27283513

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/553,235 Expired - Lifetime USRE35781E (en) 1989-01-31 1995-11-07 Coding method of image information

Country Status (1)

Country Link
US (1) USRE35781E (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4028731A (en) * 1975-09-29 1977-06-07 International Business Machines Corporation Apparatus for compression coding using cross-array correlation between two-dimensional matrices derived from two-valued digital images
US4070694A (en) * 1974-12-24 1978-01-24 Olympus Optical Company Limited Picture image information band compression and transmission system
US4099257A (en) * 1976-09-02 1978-07-04 International Business Machines Corporation Markov processor for context encoding from given characters and for character decoding from given contexts
US4177456A (en) * 1977-02-10 1979-12-04 Hitachi, Ltd. Decoder for variable-length codes
US4191974A (en) * 1977-02-08 1980-03-04 Mitsubishi Denki Kabushiki Kaisha Facsimile encoding communication system
US4286256A (en) * 1979-11-28 1981-08-25 International Business Machines Corporation Method and means for arithmetic coding utilizing a reduced number of operations
US4355306A (en) * 1981-01-30 1982-10-19 International Business Machines Corporation Dynamic stack data compression and decompression system
US4905297A (en) * 1986-09-15 1990-02-27 International Business Machines Corporation Arithmetic coding encoder and decoder system
US4933883A (en) * 1985-12-04 1990-06-12 International Business Machines Corporation Probability adaptation for arithmetic coders

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
K. S. Fu et al., Robotics: Control, Sensing, Vision, and Intelligence, McGraw-Hill Book Company, New York, copyright 1987, pp. 342-351. *
Pennebaker et al., An Overview of the Basic Principles of the Q-Coder Adaptive Binary Arithmetic Coder, IBM Journal of Research and Development, vol. 32, No. 6, Nov. 1988, pp. 717-726. *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5936559A (en) * 1997-06-09 1999-08-10 At&T Corporation Method for optimizing data compression and throughput
US6188334B1 (en) * 1997-07-31 2001-02-13 At&T Corp. Z-coder: fast adaptive binary arithmetic coder
US6281817B2 (en) * 1997-07-31 2001-08-28 At&T Corp. Z-coder: a fast adaptive binary arithmetic coder
US6476740B1 (en) 1997-07-31 2002-11-05 At&T Corp. Z-coder: a fast adaptive binary arithmetic coder
US6225925B1 (en) * 1998-03-13 2001-05-01 At&T Corp. Z-coder: a fast adaptive binary arithmetic coder
US6373408B1 (en) 1999-04-12 2002-04-16 Mitsubishi Denki Kabushiki Kaisha Encoding apparatus, decoding apparatus, encoding/decoding apparatus, encoding method and decoding method
US6756921B2 (en) 2000-12-27 2004-06-29 Mitsubishi Denki Kabushiki Kaisha Multiple quality data creation encoder, multiple quality data creation decoder, multiple quantity data encoding decoding system, multiple quality data creation encoding method, multiple quality data creation decoding method, and multiple quality data creation encoding/decoding method
US20030113030A1 (en) * 2001-12-18 2003-06-19 Tomohiro Kimura Encoding apparatus, decoding apparatus, encoding/decoding apparatus, encoding method, decoding method, encoding/decoding method, and programs
US7209593B2 (en) 2001-12-18 2007-04-24 Mitsubishi Denki Kabushiki Kaisha Apparatus, method, and programs for arithmetic encoding and decoding
US20040013311A1 (en) * 2002-07-15 2004-01-22 Koichiro Hirao Image encoding apparatus, image encoding method and program
US7305138B2 (en) * 2002-07-15 2007-12-04 Nec Corporation Image encoding apparatus, image encoding method and program
US20040240742A1 (en) * 2002-09-27 2004-12-02 Toshiyuki Takahashi Image coding device image coding method and image processing device
US7333661B2 (en) 2002-09-27 2008-02-19 Mitsubishi Denki Kabushiki Kaisha Image coding device image coding method and image processing device

Similar Documents

Publication Publication Date Title
US5059976A (en) Coding method of image information
US5710562A (en) Method and apparatus for compressing arbitrary data
US4935882A (en) Probability adaptation for arithmetic coders
EP0772364B1 (en) Image processing apparatus and method
US5404140A (en) Coding system
US4122440A (en) Method and means for arithmetic string coding
JP3484310B2 (en) Variable length encoder
US4989000A (en) Data string compression using arithmetic encoding with simplified probability subinterval estimation
JP3410629B2 (en) Variable length coding circuit and variable length coding method
USRE35781E (en) Coding method of image information
US4799242A (en) Multi-mode dynamic code assignment for data compression
JP2968112B2 (en) Code conversion method
EP0260460B1 (en) Arithmetic coding with probability estimation based on decision history
US6271775B1 (en) Method for reducing data expansion during data compression
US5285520A (en) Predictive coding apparatus
US6049633A (en) Adaptive arithmetic codec method and apparatus
US5694126A (en) Adaptive predictive data compression method and apparatus
US5715258A (en) Error detection code processing device
US5638067A (en) Variable length coder
EP0047382A2 (en) Adaptive compression encoding of a binary-source symbol string
EP0499225B1 (en) Variable-length code decoding device
EP0820150B1 (en) System, coding section, arrangement, coding apparatus, and method
JP2783221B2 (en) Decryption method
KR0182055B1 (en) Adpcm decoder
JP2998532B2 (en) Address generation circuit for two-dimensional encoding table

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12