Title of Invention

METHOD FOR ENCODING AND RECORDING ON AN INFORMATION CARRIER A DIGITAL INFORMATION SIGNAL AND AN APPARATUS FOR CARRYING OUT THE METHOD

Abstract The invention relates to measures to improve an arithmetic encoder and a corresponding arithmetic decoder. More specifically, proposals are given to truncate the A parameter, prior to carrying out the multiplication A.p. Further, a proposal is given for the carry-over control in the re-normalization step in the encoder.
Full Text Arithmetic encoding and decoding of an information signal.
The invention relates to a method of arithmetic encoding an information signal, to an apparatus for arithmetic encoding the information signal and to an apparatus for decoding the arithmetically encoded information signal. Arithmetic coding is a well-known technique for lossless coding and an introduction can be found in any current source coding book. For a thorough understanding of the implementations of arithmetic coding that are most relevant for the current work, the reader is referred to [Lang84]. The history of arithmetic coding is nicely described in the appendix of that document. Further, [Howard94] gives an extensive explanation of arithmetic coding.
The implementation of arithmetic coding that is the subject of the present invention uses two finite-size registers, which are usually called C and A. The flow diagram of the encoder operation is shown in figure 1. The C register points to the bottom of an interval on the number line, the size of which is stored in A, see, e.g., [Lang81] and [Penn88]. The interval is split into sub-intervals, each sub-interval corresponding to a symbol to be encoded and the size of each sub-interval corresponding to the probability of the associated symbol. For actually encoding a symbol, the C register is adjusted to point to the bottom of the sub-interval corresponding to the symbol and the A register is set to the size of the selected sub-interval. The A register (as well as C) is then normalized (left-shifted), before the next symbol is encoded. In general, after re-normalization, the value of A lies between the values k and 2k, i.e. k ≤ A < 2k. For example, in the binary case, there are two sub-intervals and thus two possible updates of the C and A registers, depending on whether the bit to be encoded is the most probable symbol (MPS) or the least probable symbol (LPS). It is assumed that the MPS is assigned to the lower interval. The "Update A and C" block of figure 1 is shown for the binary case in figure 2. The probability of the input bit being the LPS is denoted by p (notice that p ≤ 1/2). The input bit to be encoded is denoted by b. The values of b and p are provided by the "Read......." block. Now, if an MPS is
to be encoded, C does not change, since the lower interval is selected and C already points to this interval. However, A does change and its update is A=A-A.p (using the fact that the probability of the MPS equals 1-p). If an LPS is to be encoded, both C and A are changed: C is updated as C=C+A-A.p and the new interval size is A=A.p. It should further be noted that, by a pre- and post-processing, it can be assured that the MPS is always e.g. the "0" bit and the LPS is always the "1" bit. Finally, figure 2 shows an "approximate multiplication" block, because it turns out that the multiplication A.p can be performed with low accuracy, at only a small loss of performance, thus reducing the hardware complexity. Techniques to do the approximate multiplication are discussed later on below.
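As an illustration of the binary update described above, the following sketch (in Python, with exact arithmetic and without the renormalization step) shows how A and C change for an MPS and an LPS; the function name and the floating-point form are illustrative only and do not form part of the described implementation.

    def update_binary(b, p, A, C):
        # b: bit to encode, with the MPS mapped to '0' and the LPS to '1'
        # p: probability of the LPS (p <= 1/2)
        lps_size = A * p              # the multiplication A.p (here exact)
        if b == 0:                    # MPS: lower sub-interval, C unchanged
            A = A - lps_size          # A = A - A.p
        else:                         # LPS: upper sub-interval
            C = C + (A - lps_size)    # C = C + A - A.p
            A = lps_size              # A = A.p
        return A, C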
For the non-binary case, the "Update A and C" block of figure 1 is shown in
figure 3. The "Read......" block now provides the symbol to be encoded, s, as well as two
probability values: the probability ps of symbol s and the cumulative probability p of all symbols ranked below symbol s. As can be observed from figure 3, symbol M is treated differently from the others, in order to exactly "fill" A. It is shown in [Riss89] that it is advantageous to assign the MPS to symbol M.
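The non-binary update of figure 3 can be sketched in the same illustrative style; here p_s stands for the probability ps of the symbol and p_cum for the cumulative probability of all symbols ranked below it, and the special treatment of the last symbol M follows the remark above that it exactly "fills" A. The names and the exact-arithmetic form are assumptions for the sketch only.

    def update_nonbinary(s, p_s, p_cum, A, C, M):
        # s: symbol to encode, symbols numbered 0,...,M
        C = C + A * p_cum            # move C to the bottom of the selected sub-interval
        if s == M:                   # symbol M takes the remainder of the interval,
            A = A - A * p_cum        # so that A is exactly "filled"
        else:
            A = A * p_s              # otherwise the new interval size is A.p_s
        return A, C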
In order to be able to decode, the decoder must know the value of C, since this determines the symbol that was encoded. So, what is sent to the decoder is the value of the C register. Actually, each time the A register is left-shifted in the renormalization process, the MSB of C (also referred to as "carry bit") is processed for transmission to the decoder. The problem with using a finite-size register for C is that a bit that was already shifted out of C could later have to be adjusted by a carry caused by incrementing C. To take care of this, carry-over control is needed. The state-of-the-art techniques completely solve the problem at the encoder, so the decoder is not affected by this. These solutions, which minimize decoder complexity, will also be discussed later on.
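The carry-over problem can be illustrated with a small, purely hypothetical example (a 4-bit C register, Python notation): a single addition to C can generate a carry that belongs inside bits that were already shifted out.

    emitted = "0111"           # MSBs of C that were already shifted out and sent
    C = 0b1110                 # current 4-bit register contents
    C = C + 0b0010             # an update such as C = C + A - A.p
    print(format(C, 'b'))      # -> '10000': the carry out of the register would
                               # have to turn the already emitted "0111" into "1000"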
The decoder flow diagram is shown in figure 4. For the binary case, the "Output symbol ...." block is shown in figure 5. In the non-binary case, the decoder is more complex, since it has to find the inverse of "C=C+D", without knowing the value of s.
The invention aims at providing improvements for the above described arithmetic coders. In accordance with the invention, the encoding method encodes an information signal comprising a serial sequence of n-bit symbols, n being an integer for which holds n>1, using finite sized first and second registers for storing an A parameter and a C parameter, respectively, the C parameter having a relationship with a boundary of a value interval and the A parameter having a relationship with the size of the said interval, the method comprising the steps of
(a) inputting a symbol of the information signal and at least one corresponding probability value of the associated symbol for encoding,
(b) retrieving the values for the A and C parameters from the first and second registers, respectively,
(c) splitting the value interval corresponding to the value retrieved from the first register into sub intervals corresponding to the said at least one probability value, and selecting one of the subintervals in response to the said symbol,
(d) updating at least the A parameter so as to bring its value in accordance with the size of the selected subinterval, in order to become the new size of the interval for encoding the next symbol in the information signal,
(e) storing the updated value for the A parameter in the first register,
(f) continue the method in step (a) for encoding the next symbol,
characterized in that the step (b) further comprises the substep of truncating the value of the A parameter 0.b0b1...bi-1bi... to the bit bi-1 and adding '1' at the position of the bit bi-1 to the truncated value of A, if bi equals '1'.
In another elaboration, the encoding method encodes an information signal comprising a serial sequence of n-bit symbols, n being an integer for which holds n>1, using finite sized first and second registers for storing an A parameter and a C parameter, respectively, the C parameter having a relationship with a boundary of a value interval and the A parameter having a relationship with the size of the said interval, the method comprising the steps of
(a) inputting a symbol of the information signal and at least one corresponding probability value of the associated symbol for encoding,
(b) retrieving the values for the A and C parameters from the first and second registers, respectively,
(c) splitting the value interval corresponding to the value retrieved from the first register into sub intervals corresponding to the said at least one probability value, and selecting one of the subintervals in response to the said symbol,
(d) updating at least the A parameter so as to bring its value in accordance with the size of the selected subinterval, in order to become the new size of the interval for encoding the next symbol in the information signal,
(e) storing the updated value for the A parameter in the first register,
(f) continue the method in step (a) for encoding the next symbol,
characterized in that the step (b) further comprises the substep of truncating the value of the A parameter 0.b0b1...bi-1bi... to the bit bi-1 and, if bi-1 = '0' and bi = '1', raising bi-1 to '1'.
In again another elaboration, the encoding method encodes an information signal comprising a serial sequence of n-bit symbols, n being an integer for which holds n>1, using finite sized first and second registers for storing an A parameter and a C parameter, respectively, the C parameter having a relationship with a boundary of a value interval and the A parameter having a relationship with the size of the said interval, the method comprising the steps of
(a) inputting a symbol of the information signal and at least one corresponding probability value of the associated symbol for encoding,
(b) retrieving the values for the A and C parameters from the first and second registers, respectively,
(c) splitting the value interval corresponding to the value retrieved from the first register into sub intervals corresponding to the said at least one probability value, and selecting one of the subintervals in response to the said symbol,
(d) updating at least the A parameter so as to bring its value in accordance with the size of the selected subinterval, in order to become the new size of the interval for encoding the next symbol in the information signal,
(e) storing the updated value for the A parameter in the first register,
(f) continue the method in step (a) for encoding the next symbol,
characterized in that the step (b) further comprises the substep of truncating the value of the A parameter to the bit bi-1 and making the bit bi-1 equal to '1'.
The improvements presented in this invention relate to the approximate multiplication blocks (which are used in both the encoder and the decoder) and to the carry-over control, which takes place in the "Renormalize......" block, in the encoder only.
These and other aspects of the invention will be described in more detail hereafter,
figure 1 shows a flow diagram of the arithmetic encoder,
figure 2 shows the flow diagram of the encoder block "Update A and C" in
figure 1, for the binary case. The LPS probability is p and the value of the bit that is to be
encoded is held by b,
figure 3 shows the flow diagram of the encoder block "Update A and C" in figure 1, for the non-binary case. The value of the symbol that is to be encoded is held by s and its probability is held in ps. The M+1 symbols are numbered 0,...,M. p is the cumulative probability of all symbols ranked below symbol s,
figure 4 shows a flow diagram of the decoder,
figure 5 shows the flow diagram for decoder block "Output symbol...." in figure 4, for the binary case. The LPS probability is p and the value of the bit that is decoded is put in b.
figure 6 shows the flow diagram of the encoder block denoted "Renormalize ...." in figure 1,
figure 7 shows the flow diagram of the encoder block denoted "Initialize" in figure 1,
figure 8 shows a flow diagram of the encoder block denoted "Terminate" in figure 1,
figure 9 shows a flow diagram of the decoder block denoted "Initialize" in figure 4,
figure 10 shows a flow diagram of the decoder block denoted "Renormalize....."
in figure 4,
figure 11 shows an embodiment of the encoder apparatus, and
figure 12 shows an embodiment of the decoder apparatus.
As regards the improvements to the multiplication, the following can be said. The problem of "avoiding" the multiplication A.p was solved in [Lang81] by approximating p by 2^(-Q), where Q is an integer. Multiplication by p then simply corresponds to a right shift by Q positions. Q is called the skew number. Later, such as in [Riss89], the A register was normalized such that 0.75 ≤ A < 1.5, so that the multiplication A.p can be approximated by p itself. Still better performance was obtained in [Chev91a] and [Chev91b]. They claim approximating A by eliminating all binary digits less significant than a predetermined binary 1 digit, and their preferred embodiment is to use the second most significant binary 1 digit. Thus, in the preferred embodiment, the value of A is approximated by a binary number containing two non-zero bits, which implies that the multiplication can be performed using a single shift and add operation. Finally, [Feyg93] describes an improved approximation of A that can also be implemented using a single shift and add operation.
The methods that are actually used in the present invention to approximate the multiplication are as follows. Let the probabilities (p) be described using NP bits. For example, if NP=8, then p = 1/4 = 2^(-2) = 0.01 (binary) would be represented by the binary number 01000000, i.e. the "0." is not included, since it is the same for all probabilities. The size of the A register is chosen as NA=NP+NX bits, where NX represents the number of bits that are used to approximate the value of A that is used for the multiplication. For example, let NX=3 and A=3/4=0.11, then A would be an 8+3=11-bit register containing 11000000000 (notice that again the "0." is dropped, since we normalize A such that it is always less than one). For the multiplication, we approximate A by a 3-bit number; in this case, it is clear that the best approximation is 110. The result of the approximate multiplication A.p would then be 00110000000, i.e. again an 11-bit number. This way of implementing the approximate multiplication was suggested, amongst others, in [Feyg93].
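The numerical example above can be reproduced with the following sketch (integer arithmetic in Python; the variable names are illustrative). A is held in an NA = NP + NX bit register, only the NX most significant bits of A are used for the multiplication, and the product is again an NA-bit number.

    NP = 8                          # number of bits used for the probabilities
    NX = 3                          # number of bits used to approximate A
    NA = NP + NX                    # size of the A register (11 bits)

    p = 0b01000000                  # p = 1/4, the leading "0." is dropped
    A = 0b11000000000               # A = 3/4 in the 11-bit register

    A_approx = A >> NP              # keep the NX most significant bits: 0b110
    product = A_approx * p          # 6 * 64 = 384
    print(format(product, '011b'))  # -> 00110000000, an 11-bit number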
Below, a discussion will be given as to how the NA-bit number A should be approximated by NX bits.
The first way of approximating A (the method P1) comprises the measure to round A to NX bits instead of truncating it. Rounding means that A is truncated to NX bits if the (NX+1)th bit of A is a '0' and that 1 is added to this truncated representation if the (NX+1)th bit is a '1'. For example, if A=1101......, the 3-bit approximation would be 111. The rounding that is applied increases the complexity, since, in about half of the cases, 1 has to be added to the truncated representation, which means either an add operation or a table lookup must be done.
As an alternative (method P2), it is proposed to adopt what is called "partial rounding". By partial rounding, a 1 is only added to the truncated representation of A in case the (NX+1)th bit is a '1' and the NXth bit is a '0'. In the implementation this means that the NXth bit of the approximation of A equals the logical OR of the NXth and (NX+1)th bits of the original A. For example, A=1011...... would be approximated by 101 and A=1001...... would be approximated by 101 as well, whereas A=1000........ would be approximated by 100. Notice that the partial rounding results in the same approximation as the "full rounding" in about 75 % of the cases.
In another alternative (method P3), it is proposed to approximate A by truncating it to NX bits and to always set the NXth bit of the approximation to '1', with the idea that this further reduces the complexity of a hardware implementation, since it eliminates half of the possible approximate values for A.
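The three approximation methods P1, P2 and P3 can be summarized in the following sketch, operating on A as an NA-bit integer with NA = NP + NX (Python; the function name is illustrative, and the rare case in which full rounding overflows to NX+1 bits is not treated here).

    def approx_A(A, NA, NX, method):
        t = A >> (NA - NX)                   # truncate A to its NX most significant bits
        next_bit = (A >> (NA - NX - 1)) & 1  # the (NX+1)th bit of A
        if method == 'P1':                   # full rounding: add 1 if the next bit is 1
            return t + next_bit
        if method == 'P2':                   # partial rounding: NXth bit becomes the OR
            return t | next_bit              # of the NXth and (NX+1)th bits of A
        if method == 'P3':                   # truncate and force the NXth bit to 1
            return t | 1
        return t                             # plain truncation

    # examples from the text, with NA=11 and NX=3:
    print(format(approx_A(0b11010000000, 11, 3, 'P1'), '03b'))  # -> 111
    print(format(approx_A(0b10110000000, 11, 3, 'P2'), '03b'))  # -> 101
    print(format(approx_A(0b10010000000, 11, 3, 'P2'), '03b'))  # -> 101
    print(format(approx_A(0b10000000000, 11, 3, 'P2'), '03b'))  # -> 100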
The performance of some known methods has been compared to that of the three new methods described above. The performances of the various methods are listed in Table I. The methods that are shown in the table, in addition to the three approximation methods (P1, P2 and P3) described above, are a reference method, denoted 'reference', which computes the full NAxNP-bit multiplication and only then truncates the result to NA bits, the method of [Moff95], the method of [Chev91a], the method described in section 3 of [Feyg93], denoted Feyg93(1), and the method described in section 4 of [Feyg93], denoted Feyg93(2).
The numbers that are listed are relative sizes of the compressed files, with 100 corresponding to "no loss" due to a non-perfect approximation of the multiplication. For example, a number of 100.57 means that the size of the compressed file is increased by 0.57 % due to the approximate multiplication.
As expected, the performance of method P2 is better than that of the method of [Moff95], but not as good as that of method P1.
The method P2 is a good compromise. More specifically, P2 for NX=3 and NX=4 provides a good trade-off between performance and complexity, since its performance is practically the same as that of method P1 (see the above table), at a lower complexity. For NX=2, method P1 is the preferred method, whilst the method P3 can be used for NX=5 and larger.
In the non-binary case, methods that can approximate the value of A by rounding up have the potential problem that there could be no "room" left for the MPS, when the alphabet size increases [Feyg93]. For method Feyg93(2), the worst-case limit on the alphabet size is 11 [Feyg93]. The presently proposed new approximation methods have the advantage that the amount by which A can be increased by rounding up decreases as NX is
increased. Therefore, if there is an application in which the probability distribution is such that the alphabet size is limited (and the performance of the methods that can only truncate, or round down, is insufficient), a larger alphabet can be handled by increasing NX.
The carry-over control problem in the renormalization step in the encoder was originally solved by a technique called "bit stuffing" [Lang81]. This technique "catches" the carry by inserting a '0' stuff bit into the coded stream in case a series of '1' bits has been encountered. The disadvantages of this technique are that it reduces the compression efficiency, because of the extra stuff bits, and that it requires special processing in the decoder.
A method to prevent the carry-over without affecting the compression performance was described in [Witt87]. This method has the disadvantage that the decoder complexity is somewhat increased. The idea of [Witt87] was adapted in [Cham90], such that it could be used without increasing the decoder complexity.
Here, a different solution is presented that also does not increase the decoder complexity. The flow diagram of the encoder renormalization procedure in accordance with the invention is shown in figure 6. The main improvement is the block in which the value of C is compared; in this same block, the prior art uses C+A, whereas the present proposal uses C.
To complete the description of the encoder, the initialization and termination blocks are shown in figure 7 and figure 8, respectively. The counter variable is the same as the one that is used in the encoder renormalization block shown in figure 6. Since the size of C is (NA+1) bits (in the encoder), it has NA fractional bits ("bits after the point"), which are output on termination, as shown in figure 8.
The decoder initialization is shown in figure 9. The C register is filled by reading (NA+1) bits from the stream. The first bit read is a "dummy", since it is always a "0" bit. The size of the C register in the decoder is only NA bits, so one less than in the encoder. There is no special termination in the decoder (the "Terminate" block in figure 4 is empty).
The renormalization in the decoder (the "Renormalize......" block in figure 4) is shown in
figure 10.
Figure 11 shows an embodiment of the encoder apparatus in accordance with the invention. The apparatus comprises input terminals 100 and 102, for receiving the information signal and a probability signal, respectively. The information signal comprises a serial sequence of n-bit symbols, n being an integer for which holds n>1. The probability signal applied to the input terminal 102 comprises one or more probabilities for each symbol in the information signal. For binary symbols, the probability signal comprises one probability
for each symbol. Finite sized first and second registers 104 and 106, respectively, are present, for storing the A parameter and the C parameter.
A processing unit 108 is available for carrying out the arithmetic coding on the information signal. It should be understood that, without going into very much detail as regards the processing unit 108, this unit comprises circuitry for retrieving the values for the A and C parameters from the first and second registers, as well as circuitry for storing the updated and renormalized values for A and C in the first and second registers 104 and 106, respectively, after having encoded a symbol. Further, the unit 108 comprises circuitry for splitting the value interval corresponding to the value retrieved from the first register 104 into sub intervals corresponding to the said at least one probability value applied to the input terminal 102, and circuitry for selecting one of the sub intervals in response to the said symbol applied to the input terminal 100.
Circuitry for updating the A and C parameters are also present, where this circuitry is required so as to bring the A value in accordance with the size of the selected subinterval, and so as to bring the C value in accordance with a boundary of the said subinterval.
An output terminal 110 is available for outputting encoded bits in response to the encoded symbols.
The retrieval means for retrieving the A and C parameters from their corresponding registers further comprise means for truncating the value of the A parameter, prior to carrying out the calculation A.p. More specifically, this truncation can be as follows: suppose the value for A is expressed as 0.b0b1...bi-1bi.... This value is truncated to the bit bi-1 and a '1' is added at the position of the bit bi-1 to the truncated value of A, if bi equals '1'.
In another elaboration, the value of the A parameter is truncated to the bit bi-1 and, if bi-1 = '0' and bi = '1', the bit bi-1 is raised to '1'. In again another elaboration, the A parameter is truncated to the bit bi-1 and the bit bi-1 is made equal to '1'.
It will be appreciated that the processing unit 108 is capable of carrying out the method, as disclosed in the figures 1, 2, 3, 6, 7 and 8.
Preferably, the encoder apparatus is further provided with a channel encoding unit 112, well known in the art, for channel encoding (and, if needed, error correction encoding) the encoded information signal into a channel encoded information signal, and a write unit 114 for writing the channel encoded signal onto a record carrier, such as a magnetic record carrier 116, or an optical record carrier 118.
Figure 12 shows an embodiment of the decoder apparatus in accordance with the invention. The decoder apparatus comprises an input terminal 120 for receiving the encoded information signal. Finite sized first and second registers 122 and 124 are present, for storing the A parameter and the C parameter, respectively.
A processing unit 126 is available for carrying out the arithmetic decoding on the encoded information signal that is received via its input 120, in response to a probability signal supplied to the processing unit 126 via an input 134. The probability signal can be obtained in a well known way. An example of deriving the probabilities for a 1-bit audio signal is shown in [Bruek97]. In this example, the probabilities are derived from the decoded output signal that is supplied to the output 128, namely by carrying out a prediction filtering on the decoded output signal in prediction filter 136 and generating the probability in response to the output signal of the prediction filter 136 in the probability determining unit 138. It should be understood that, without going into very much detail as regards the processing unit 126, this unit comprises circuitry for retrieving the values for the A and C parameters from the first and second registers, as well as circuitry for storing the updated and renormalized values for A and C in the first and second registers 122 and 124, respectively, after having decoded a symbol. Further, the unit 126 comprises circuitry for carrying out the steps shown in the figures 4, 5, 9 and 10.
The circuitry for retrieving the value of the A parameter from the register 122 further comprises means for truncating the value of the A parameter prior to carrying out the calculation A.p. This truncation is done in the same way as described above for the encoder, so that a further explanation thereof is dispensed with.
Preferably, the decoder apparatus is further provided with a channel decoding unit 132, well known in the art, for channel decoding (and, if needed, error correcting) the channel encoded information signal into the arithmetically encoded information signal for the arithmetic decoder 126, and a read unit 130 for reading the channel encoded signal from a record carrier, such as the magnetic record carrier 116, or the optical record carrier 118.
Arithmetic coding is applied in most modern lossless and lossy coding schemes for video and audio. It can also be applied in the compression of computer data (such as, e.g., text files). The application envisaged here is in lossless coding of 1-bit audio signals. Reference is made in this respect to US ser. no. 08/966,375, corresponding to EP patent application no. 97201680.2 (PHN 16405), and US ser. no. 08/937,435, corresponding to international patent application no. IB 97/01156 (PHN 16452).
Whilst the invention has been described with reference to preferred embodiments thereof, it is to be understood that these are not limitative examples. Thus, various modifications may become apparent to those skilled in the art, without departing from the scope of the invention, as defined by the claims.
Further, the invention lies in each and every novel feature or combination of features.
REFERENCES:
[Lang81] G.G. Langdon et al., "Compression of black-white images with arithmetic coding", IEEE Trans. on Comm., Vol. COM-29, pp. 858-67, June 1981.
[Witt87] I.H. Witten et al., "Arithmetic coding for data compression", Communications of the ACM, Vol. 30, pp. 520-540, June 1987.
[Lang84] G.G. Langdon, "An introduction to arithmetic coding", IBM J. Res. Develop., Vol. 28, pp. 135-149, March 1984.
[Penn88] W.B. Pennebaker et al., "An overview of the basic principles of the Q-coder adaptive binary arithmetic coder", IBM J. Res. Develop., Vol. 32, pp. 717-26, Nov. 1988.
[Riss89] J. Rissanen et al., "A multiplication-free multialphabet arithmetic code", IEEE Trans. on Comm., Vol. 37, pp. 93-8, Feb. 1989.
[Cham90] USP 4,973,961.
[Chev91a] D. Chevion et al., "High efficiency, multiplication free approximation of arithmetic coding", in Data Compression Conference (DCC '91), pp. 43-52, 1991.
[Chev91b] USP 4,989,000.
[Feyg93] G. Feygin et al., "Minimizing error and VLSI complexity in the multiplication free approximation of arithmetic coding", in Data Compression Conference (DCC '93), pp. 118-127, Mar. 30-Apr. 1, 1993.
[Howard94] P.G. Howard et al., "Arithmetic coding for data compression", Proc. IEEE, Vol. 82, no. 6, pp. 857-65, June 1994.
[Moff95] A. Moffat et al., "Arithmetic coding revisited", in Data Compression Conference (DCC '95), pp. 202-11, 1995.
[Bruek97] F. Bruekers et al., "Improved lossless coding of 1-bit audio signals", presented at the 103rd Convention of the AES, Sept. 26-29, 1997, preprint 4563(1-6).
We Claim:
1. Method of encoding and recording on an information carrier a digital information signal, the digital information signal comprising a serial sequence of n-bit symbols, n being an integer for which holds n>1, the method comprising the steps of:
• arithmetically encoding the digital information signal, using finite sized first and second registers for storing an A parameter and a C parameter, respectively, the C parameter having a relationship with a boundary of a value interval and the A parameter having a relationship with the size of the said interval, by:
(a) inputting a symbol of the information signal and at least one corresponding probability value of the associated symbol for encoding,
(b) retrieving the values for the A and C parameters from the first and second registers, respectively,
(c) splitting the value interval corresponding to the value retrieved from the first register into sub intervals corresponding to the said at least one probability value, and selecting one of the subintervals in response to the said symbol,
(d) updating at least the A parameter so as to bring its value in accordance with the size of the selected subinterval, in order to become the new size of the interval for encoding the next symbol in the information signal,
(e) storing the updated value for the A parameter in the first register, and
(f) continue the method in step (a) for encoding the next symbol, so obtaining an encoded information signal,
• channel encoding the encoded information signal into a channel encoded signal, and
• recording the channel encoded signal on a record carrier, characterized in that
the step (b) further comprises the substep of truncating the value of the A parameter 0.b0b1...bi-1bi.... to the bit bi-1 and the substep of manipulating the bit bi-1.
2. The method of claim 1, characterized in that the substep of manipulating the bit bi-1 comprises the step of adding '1' at the position of the bit bi-1 to the truncated value of A, if bi equals '1'.
3. The method of claim 1, characterized in that the substep of manipulating the bit bi-1 comprises the step of raising bi-1 of the truncated value of A to '1' if bi-1 = '0' and bi = '1'.
4. The method of claim 1, characterized in that the substep of manipulating the bit bi-1 comprises the step of making the bit bi-1 of the truncated value of A equal to '1'.
5. The method of claim 1, 2, 3 or 4, the step of updating also comprising updating the C value so as to bring the value of the C parameter into a corresponding relationship with a boundary of the selected sub interval, in order to become the new C parameter for encoding the next symbol in the information signal, the step of storing further comprising storing the updated value of the C parameter in the second register.
6. The method of claim 1, 2, 3, or 4, the step of updating further comprising the substep of renormalizing the values for the A and C parameters, prior to storing the renormalized values for the A and C parameters in the first and second registers, respectively, characterized in that the renormalization substep comprises
(g1) comparing the value for A with a first binary value; if the value for A is not smaller than said first binary value, leave the renormalization step, and if the value for A is smaller than said first binary value, then
(g2) multiply the value for A with a first integer value,
(g3) return to (g1).
7. The method of claim 6, characterized in that, if A is smaller than said first binary value in (g1),
(g4) compare the value for C with a second and a third binary value, the second binary value being larger than said third binary value, and that, if the value for C is smaller than said second binary value and larger than or equal to said third binary value, then
(g4) subtract a fourth binary value from the value for C so as to obtain an intermediate value for C,
(g5) multiply the intermediate value for C with a second integer value.
8. The method of claim 6, characterized in that the first binary value equals 0.100....0.
9. The method of claim 7, characterized in that the second binary value equals 1.000...0.
10. The method of claim 6, characterized in that the first integer value equals 2.
11. The method of claim 7, characterized in that the third binary value equals 0.100...0.
12. The method of claim 7, characterized in that the fourth binary value equals 0.1000...0.
13. The method of claim 7, characterized in that the second integer value equals 2.
14. Apparatus for carrying out the method as claimed in any one of the preceding claims, for arithmetically encoding a digital information signal comprising a serial sequence of n-bit symbols, n being an integer for which holds n>1, the apparatus comprising
- finite sized first and second registers for storing an A parameter and a C parameter, respectively, the C parameter having a relationship with a boundary of a value interval and the A parameter having a relationship with the size of the said interval,
- input means for receiving a symbol of the information signal and at least one corresponding probability value for the associated symbol for encoding,
- retrieval means for retrieving the values for the A and C parameters from the first and second registers, respectively,
- means for splitting the value interval corresponding to the value retrieved from the first register into sub intervals corresponding to the said at least one probability value, and selecting one of the sub intervals in response to the said symbol,
- means for updating at least the A parameter so as to bring its value in accordance with the size of the selected subinterval in order to become the new size of the interval for encoding the next symbol in the information signal,
- means for storing the updated value for the A parameter in the first register, characterized in that the retrieval means are further adapted for truncating the value of the A parameter 0.b0b1...bi-1bi.... to the bit bi-1 and for manipulating the bit bi-1.
15. Apparatus as claimed in claim 14, characterized in that the retrieval means are further adapted for adding '1' at the position of the bit bi-1 to the truncated value of A, if bi equals '1'.
16. Apparatus as claimed in claim 14, characterized in that the retrieval means are further adapted for raising bi-1 of the truncated value of A to '1' if bi-1 = '0' and bi = '1'.
17. Apparatus as claimed in claim 14, characterized in that the retrieval means are further adapted for making the bit bi-1 of the truncated value of A equal to '1'.
18. The apparatus of claim 15, 16 or 17, further comprising means for renormalizing the values for the A and C parameters, prior to storing the renormalized values for the A and C parameters in the first and second registers, respectively, characterized in that the means for renormalizing comprises means for
(g1) comparing the value for A with a first binary value; if the value for A is not smaller than said first binary value, leave the renormalization step, and if the value for A is smaller than said first binary value, then
(g2) multiply the value for A with a first integer value.
19. The apparatus of claim 18, characterized in that the renormalizing means
further comprises means for
(g4) comparing the value for C with a second and a third binary value, the second binary value being larger than said third binary value,
(g4) subtracting a fourth binary value from the value for C so as to obtain an
intermediate value for C,
(g5) multiplying the intermediate value for C with a second integer value.
20. The apparatus of claim 14, 15, 16 or 17, characterized in that it further comprises means for channel encoding the encoded information signal into a channel encoded signal.
21. The apparatus of claim 20, characterized in that it further comprises recording means for recording the channel encoded signal on a record carrier.
22. Apparatus for arithmetically decoding an arithmetically encoded information signal into an information signal comprising a serial sequence of n-bit symbols, n being an integer for which holds n>1, the apparatus comprising
- input means for receiving the arithmetically encoded information signal,
- finite sized first and second registers, the first register for storing an A parameter, the A parameter having a relationship with the size of a value interval, the second register for storing a C parameter, the contents of the second register before a decoding step being obtained from the contents of the second register obtained in a previous decoding step, by shifting m bits of the arithmetically encoded information signal into the second register, where m is a variable integer for which holds: m > 0,
- generator means for generating at least one probability value for an associated symbol to be decoded,
- retrieval means for retrieving the values for the A and C parameters from the first and second registers, respectively,
- deriving means for deriving a symbol in response to the said at least one probability value, and in response to a value for A and a value for C,
- means for updating at least the A parameter in order to become the new size of the interval for decoding the next symbol of the information signal,
- means for outputting the derived symbol,
- means for storing the updated value for the A parameter in the first register, characterized in that the retrieval means are further adapted for truncating the value of the A parameter 0.b0b1...bi-1bi.... to the bit bi-1 and manipulating the bit bi-1.
23. The apparatus as claimed in claim 22, characterized in that the retrieval means further are arranged for adding '1' at the position of the bit bi-1 to the truncated value of A, if bi equals '1'.
24. The apparatus as claimed in claim 22, characterized in that the retrieval means further are arranged for raising bi-1 of the truncated value of A to '1' if bi-1 = '0' and bi = '1'.
25. The apparatus as claimed in claim 22, characterized in that the retrieval means further are arranged for making the bit bi-1 of the truncated value of A equal to '1'.
26. The decoding apparatus as claimed in claim 23, 24 or 25, characterized in that it further comprises channel decoding means for channel decoding the arithmetically encoded information signal, prior to arithmetic decoding.
27. The apparatus as claimed in claim 26, characterized in that it further comprises read means for reading the channel encoded arithmetically encoded information signal from a record carrier.



Patent Number 224202
Indian Patent Application Number IN/PCT/1999/0079/KOL
PG Journal Number 41/2008
Publication Date 10-Oct-2008
Grant Date 03-Oct-2008
Date of Filing 04-Nov-1999
Name of Patentee KONINKLIJKE PHILIPS ELECTRONICS N.V.
Applicant Address GROENEWOUDSEWEG 1, 5621 BA EINDHOVEN
Inventors:
# Inventor's Name Inventor's Address
1 VAN DER VLEUTEN, RENATUS, J PROF. HOLSTLAAN 6, NL-5656 AA EINDHOVEN
PCT International Classification Number H03M 7/40
PCT International Application Number PCT/IB99/00310
PCT International Filing date 1999-02-22
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 98200914.4 1998-03-23 EUROPEAN UNION