Title of Invention

A METHOD AND APPARATUS FOR DETECTING AN ATTACK IN AN INPUT AUDIO SIGNAL

Abstract
An improved audio coder converts the input audio signal to a digital audio signal. The audio coder then divides the digital audio signal into large frames having a long-block frame length and partitions each of the frames into multiple short-blocks. The audio coder then computes short-block audio signal characteristics for each of the partitioned short-blocks based on changes in the input audio signal. The audio coder further compares the computed short-block characteristics to a set of threshold values to detect the presence of an attack in each of the short-blocks, and changes the long-block frame length of one or more frames upon detecting the attack in the respective one or more short-blocks.
Full Text

A METHOD AND APPARATUS FOR DETECTING AN ATTACK IN AN INPUT AUDIO SIGNAL
Field of the Invention
This invention relates to a method and apparatus for detecting an attack in an input audio signal, and more particularly, pertains to techniques for encoding audio signals inclusive of voice and music using a perceptual audio coder.
Background
A perceptual audio coder is an apparatus that takes a series of audio samples as input and compresses them to save disk space or bandwidth. The perceptual audio coder uses properties of the human ear to achieve the compression of the audio signals.
The technique of compressing audio signals involves recording an audio signal through a microphone and then converting the recorded analog audio signal to a digital audio signal using an A/D converter. The analog signal is sampled at a specific rate (called the sampling frequency), and the resulting digital audio signal is simply a series of numbers. The audio coder groups the digital audio signal into large frames of fixed length. Generally, the fixed length of each large frame is around 1024 samples. The audio coder can only process one frame at a time; this means that the audio coder can process only 1024 samples at a time. The audio coder then transforms each received fixed-length frame (1024 samples) into a corresponding frequency domain. The transformation to a frequency domain is accomplished by using an algorithm, and the output of this algorithm is another set of 1024 samples representing a spectrum of the input. In the spectrum of samples, each sample corresponds to a frequency. The audio coder then computes masking thresholds from the spectrum of samples. Masking thresholds are simply another set of numbers, which are useful in compressing the audio signal. The following illustrates the computing of masking thresholds.
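The framing and transform steps just described can be sketched as follows. This is hypothetical illustration code: the document does not name the transform, so NumPy's real FFT stands in for it here (an actual perceptual coder typically uses a lapped transform such as the MDCT).

```python
import numpy as np

FRAME_LEN = 1024  # long-block frame length described above

def split_frames(signal, frame_len=FRAME_LEN):
    # Group the digital audio signal (a series of numbers) into
    # fixed-length frames; any trailing partial frame is dropped.
    n = len(signal) // frame_len
    return signal[: n * frame_len].reshape(n, frame_len)

def to_spectrum(frame):
    # Transform one frame to the frequency domain. A real FFT is
    # used purely as an illustration of the "algorithm" the text
    # refers to; it is not the coder's actual transform.
    return np.fft.rfft(frame)

signal = np.arange(3000, dtype=float)
frames = split_frames(signal)   # two complete 1024-sample frames
spec = to_spectrum(frames[0])   # one spectrum per frame
```

The coder processes exactly one such frame at a time, as the text notes.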
The audio coder computes an energy spectrum by squaring the spectrum of the 1024 samples. The samples are then further divided into a series of bands. For example, the first 10 samples can be one band, the next 10 samples can be another subsequent band, and so on. Note that the number of samples (width) in each band varies. The width of the bands is designed to best suit the properties of the human ear for listening to frequencies of sound. The computed energy spectrum is then summed within each of the bands separately to produce a grouped energy spectrum.
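The band-grouping step can be sketched directly. The band edges below are illustrative only; as the text notes, a real coder uses varying, ear-tuned band widths.

```python
import numpy as np

def grouped_energy(spectrum, band_edges):
    # Square the spectrum to obtain an energy spectrum, then sum the
    # energy falling inside each band. Band widths may vary.
    energy = np.abs(np.asarray(spectrum, dtype=float)) ** 2
    return [float(energy[lo:hi].sum())
            for lo, hi in zip(band_edges[:-1], band_edges[1:])]

spec = np.ones(40)
edges = [0, 10, 20, 40]             # three bands of varying width
print(grouped_energy(spec, edges))  # [10.0, 10.0, 20.0]
```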
The audio coder applies a spreading function to the grouped energy spectrum to obtain an excitation pattern. This operation involves simulating and applying the effects of sounds in one critical band on a subsequent (neighboring) critical band. Generally this step involves convolution with a spreading function, which results in another set of fixed numbers.
Then, based on the tonal or noise-like nature of the spectrum in each critical band, a certain amount of frequency-dependent attenuation is applied to obtain initial masking threshold values. Then, by using an absolute threshold of hearing, the final masked thresholds are obtained. The absolute threshold of hearing is a set of amplitude values below which the human ear will not be able to hear.
Then the audio coder combines the initial masking threshold values with
the absolute threshold values to obtain the final masked threshold values.
Masked threshold value means a sound value below which a sound is not audible
to the human ear (i.e., an estimate of maximum allowable noise that can be
introduced during quantization).
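One plausible way to combine the two threshold sets is shown below. The document does not give the exact combination rule; an element-wise maximum is assumed here purely for illustration, on the reasoning that noise below either curve is inaudible.

```python
import numpy as np

def final_masked_thresholds(initial_mask, absolute_threshold):
    # Assumed combination rule: keep, per band, the larger of the
    # initial masking threshold and the absolute threshold of hearing.
    return np.maximum(initial_mask, absolute_threshold)

init = np.array([0.5, 2.0, 0.1])     # initial masking thresholds
abs_thr = np.array([1.0, 1.0, 1.0])  # absolute threshold of hearing
final = final_masked_thresholds(init, abs_thr)  # -> [1.0, 2.0, 1.0]
```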
Using the masked threshold values, the audio coder computes the perceptual entropy (PE) of a current frame. The perceptual entropy is a measure of the minimum number of bits required to code a current frame of audio samples. In other words, the PE indicates how much the current frame of audio samples can be compressed. Various types of algorithms are currently used to compute the PE.
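Since the text notes that several PE algorithms are in use, the sketch below is one illustrative formulation only (per-band energy-to-threshold ratio, weighted by the number of spectral lines), not the specific algorithm of any standard.

```python
import math

def perceptual_entropy(band_energy, band_threshold, lines_per_band):
    # Illustrative PE measure: bands whose energy is far above the
    # masked threshold need more bits; bands at or below it need none.
    pe = 0.0
    for e, t, n in zip(band_energy, band_threshold, lines_per_band):
        pe += n * max(0.0, 0.5 * math.log2(e / t))
    return pe

# A band well above its threshold dominates the bit estimate.
pe = perceptual_entropy([16.0, 1.0], [1.0, 1.0], [10, 10])  # -> 20.0
```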
The audio coder receives the grouped energy spectrum, the computed masking threshold values, and the PE, and quantizes (compresses) the audio signals. The audio coder has only a restricted number of bits allocated for each frame, depending on the bit rate. It distributes these bits across the spectrum based on the masking threshold values. If the masking threshold value is high, then the audio signal is not important and is hence represented using a smaller number of bits. Similarly, if the masking threshold is low, the audio signal is important and is hence represented using a higher number of bits. Also, the audio coder checks to ensure that the allocated number of bits for the audio signals is not exceeded. The audio coder generally applies a two-loop strategy to allocate and monitor the number of bits given to the spectrum. The loops are generally nested and are called the Rate Control and Distortion Control loops. The Rate Control loop controls the distribution of the bits so as not to exceed the allocated number of bits, and the Distortion Control loop performs the distribution of the bits to the received spectrum. Quantization is a major part of the perceptual audio coder. The performance of the audio coder can be significantly improved by reducing the number of calculations performed in the control loops. The current quantization algorithms are very computation intensive and hence result in slower operation.
Earlier we have seen that the audio coder receives one frame of samples (1024 samples in length) as input, converts the frame of samples into a spectrum, and then quantizes it using masking thresholds. Sometimes the input audio signal may vary quickly (when the properties of the signal change abruptly). For example, if there is a sudden heavy beat in the audio signal, and the audio coder receives a frame of 1024 samples in length (including the heavy beat), then due to inadequate temporal masking in a signal including abrupt changes, a problem called pre-echo can occur. This is because the sound signal contains error after quantization, and this error can result in an audible noise before the onset of the heavy beat, hence the name pre-echo. Heavy beats are also called 'attacks.' A signal is said to have an attack if it exhibits a significant amount of non-stationarity within the duration of a frame under analysis. For example, a sudden increase in the amplitudes of a time signal within a typical duration of analysis is an attack. To avoid this problem the audio signal is coded with frames having smaller frame lengths instead of the long 1024 samples. To keep continuity in the number of samples given as input, usually 8 smaller blocks of 128 samples are coded (8 x 128 samples = 1024 samples). This restricts the heavy beat to one set of 128 samples among the 8 smaller blocks, and hence the noise introduced will not spread to the neighboring smaller blocks as pre-echo. But the disadvantage of coding in 8 smaller blocks of 128 samples is that they require more bits to code than required by the larger blocks of 1024 samples in length.
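The long-to-short split described above is straightforward to sketch:

```python
LONG, SHORT = 1024, 128  # frame lengths from the text

def to_short_blocks(frame):
    # Recode one 1024-sample long block as 8 short blocks of 128
    # samples (8 x 128 = 1024), confining an attack to a single
    # short block so pre-echo cannot spread across the whole frame.
    assert len(frame) == LONG
    return [frame[i:i + SHORT] for i in range(0, LONG, SHORT)]

blocks = to_short_blocks(list(range(LONG)))
# 8 blocks; a beat landing in, say, samples 600-650 affects block 4 only
```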

So the compression efficiency of the audio coder is significantly reduced. To improve the compression efficiency, the heavy beats have to be detected accurately so that the smaller blocks can be applied only around the heavy beats. It is important that the heavy beats be accurately detected, or else pre-echo can occur. Also, a false detection of heavy beats can result in significantly reduced compression efficiency. Current methods to detect the heavy beats use the PE. Calculating the PE is computationally very intensive and also not very accurate.
Also, we have seen earlier that the blocks that have attacks should be coded as smaller blocks having 128 samples, and the others as larger blocks having 1024 samples. The smaller frame lengths of 128 samples are called 'short-blocks,' and the 1024-sample frame lengths are called 'long-blocks.' We have also seen that the short-blocks require more bits to code than the long-blocks. Also, for each large frame there is a fixed number of bits allocated. If we can intelligently save some bits while coding a long-block and use the saved bits in a short-block, the compression efficiency of the audio coder can be significantly increased. For storing the bits, a 'Bit Reservoir mechanism' is needed. Since long-blocks do not need a large number of bits, the unused bits from the long-blocks can be saved in the bit reservoir and used later for a short-block. Currently there are no efficient techniques to save and allocate bits between long and short-blocks to improve the compression efficiency of the audio coder.
The audio signal can be of two types: (i) a single-channel or mono signal, and (ii) a multi-channel or stereo signal used to produce spatial effects. The stereo signal is a multi-channel signal comprised of two channels, namely the left and right channels. Generally the audio signals in the two channels have a large correlation between them. By using this correlation the stereo channels can be coded more efficiently. Instead of directly coding the stereo channels, if their sum and difference signals are coded and transmitted where the correlation is high, a better quality of sound is achieved at the same bit rate. When the audio signal is a stereo signal, the audio coder can operate in two modes: (a) normal mode and (b) M-S mode. The M-S mode means encoding the sum and difference of the left and right channels of the stereo signal. Currently the decision to switch between the normal and M-S modes is based on the PE. As explained before, computing the PE is very computation intensive and inconsistent.
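The M-S idea can be illustrated with a tiny sum/difference transform. This is a sketch only; the exact scaling a given coder uses may differ.

```python
def ms_encode(left, right):
    # Mid = sum, Side = difference (scaled by 1/2 so decoding inverts
    # the transform). Highly correlated channels give a near-zero
    # side signal that is cheap to code.
    mid = [(l + r) / 2.0 for l, r in zip(left, right)]
    side = [(l - r) / 2.0 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    return ([m + s for m, s in zip(mid, side)],
            [m - s for m, s in zip(mid, side)])

left, right = [1.0, 2.0, 3.0], [1.0, 2.1, 2.9]
mid, side = ms_encode(left, right)
l2, r2 = ms_decode(mid, side)  # recovers the channels (up to rounding)
```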

BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
Figure 1 is a block diagram of a prior-art perceptual audio coder.
Figure 2 is a block diagram of a perceptual audio coder according to the teaching of the present invention.
Figure 3 is a block diagram of one example embodiment of computing inter-block differences.
Figure 4 is a block diagram of one embodiment of major components of the Quantizer shown in Figure 2 and their interconnections.
Figure 5 is a flowchart illustrating the overall operation of the embodiment shown in Figure 2.
Figure 6 is a flowchart illustrating the operation of the Bit Allocator shown in Figure 4.
Figure 7 is a flowchart illustrating the operation of the Quantizer shown in Figures 1 and 2 according to the teachings of the present invention.
Figure 8 is a flowchart illustrating the overall operation of the embodiment shown in Figure 2 when compression encoding a stereo audio signal according to the teachings of the present invention.
Figure 9 shows an example of a suitable computing system environment for implementing embodiments of the present invention, such as those shown in Figures 1-8.
Detailed Description
The present invention provides an improved audio coder by increasing the efficiency of the audio coder during compression of an input audio signal. This is accomplished by providing a computationally efficient and more accurate attack detection and quantization technique. Also, compression efficiency is improved by providing a technique to allocate bits between long and short-blocks. In addition, the present invention provides an audio coder that is computationally efficient and more accurate in switching between the normal and M-S modes when the audio signal is a stereo signal. The words 'encode' and 'code' are used interchangeably throughout this document to represent the same audio compression scheme. Also the words 'encoder' and 'coder' are used

interchangeably throughout this document to represent the same audio compression system.
Figure 1 shows a prior-art perceptual audio coder 100, including major components and their interconnections. Shown in Figure 1 are Time frequency generator 110, Psychoacoustic model 120, Quantizer 130, and BitStream Formatter 140. The technique of compressing audio signals involves recording an audio signal through a microphone and then converting the recorded analog audio signal to a digital audio signal using an A/D converter. The digital audio signal is simply a series of numbers.
The Time frequency generator 110 receives the series of numbers in large frames (blocks) of fixed length 105. Generally, the fixed length of each frame is around 1024 samples (series of numbers). The Time frequency generator 110 can only process one frame at a time. This means that the audio coder 100 can process only 1024 samples at a time. The Time frequency generator 110 then transforms each received fixed-length frame (1024 samples) into a corresponding frequency domain. The transformation to the frequency domain is accomplished by using an algorithm, and the output of this algorithm is another set of 1024 samples called a spectrum of the input. In the spectrum, each sample corresponds to a frequency. Then the Time frequency generator 110 computes masking thresholds from the spectrum. Masking thresholds are simply another set of numbers that are useful in compressing the audio signal. The following illustrates one example embodiment of computing masking thresholds.
The Time frequency generator 110 computes an energy spectrum by squaring the spectrum of 1024 samples. The samples are then further divided into a series of bands. For example, the first 10 samples can be one band, the next 10 samples can be another subsequent band, and so on. Note that the number of samples (width) in each band varies. The width of the bands is designed to best suit the properties of the human ear for listening to frequencies of sound. The computed energy spectrum is then summed within each of the bands separately to produce a grouped energy spectrum.
The Time frequency generator 110 then applies a spreading function to the grouped energy spectrum to obtain an excitation pattern. This operation involves simulating and applying the effects of sounds in one critical band on a subsequent (neighboring) critical band. Generally this step involves a convolution between the spreading function and the energy spectrum.
Based on the tonal or noise-like nature of the spectrum in each critical band, a certain amount of frequency-dependent attenuation is applied to obtain initial masking threshold values. Using an absolute threshold of hearing, the final masked thresholds are obtained. The absolute threshold of hearing is a set of amplitude values below which the human ear will not be able to hear.
The Psychoacoustic model 120 combines the initial masking threshold values with the absolute threshold values to obtain the final masked threshold values. Masked threshold value means a sound value below which quantization noise is not audible to the human ear (it is an estimate of the maximum allowable noise that can be introduced during quantization).
Using the masked threshold values, the Psychoacoustic model 120 computes perceptual entropy (PE). The perceptual entropy is a measure of the minimum number of bits required to code a current frame of audio samples. In other words, the PE indicates how much the current frame of audio samples can be compressed. Various types of algorithms are currently used to compute the PE.
The Quantizer 130 then receives the spectrum, the computed masking threshold values, and the PE, and compresses the audio signals. The Quantizer 130 has only a specific number of bits allocated for each frame. It distributes these bits across the spectrum based on the masking threshold values. If the masking threshold value is high, then the audio signal is not important and hence can be represented using a smaller number of bits; similarly, if the masking threshold is low, the audio signal is important and hence can only be represented using a higher number of bits. Also, the Quantizer 130 checks to make sure that the allocated number of bits for the audio signals is not exceeded. The Quantizer 130 generally applies a two-loop strategy to allocate and monitor the number of bits given to the received spectrum. The loops are generally nested and are called the Rate Control and Distortion Control loops. The Rate Control loop controls the global gain so that the number of bits used to code the spectrum does not exceed the allocated number of bits, and the Distortion Control loop performs the distribution of the bits to the received spectrum. Quantization is a major part of the perceptual audio coder 100. The performance of the Quantizer 130 can be significantly improved by reducing the number of calculations performed in the control loops. The current quantization algorithms used in the Quantizer 130 are very computation intensive and hence result in slower operation.
The BitStream Formatter 140 receives the compressed audio signal (coded bits) from the Quantizer 130 and converts it into a desired format/syntax (a specified coding standard) such as ISO MPEG-2 AAC.
Figure 2 is a block diagram of one embodiment of a perceptual audio coder 200 according to the teachings of the present invention. In addition to what is shown in Figure 1, in this embodiment the perceptual audio coder 200 includes a transient detection module 210. The transient detection module is coupled to receive the input audio signal. Also, the transient detection module 210 is coupled to provide an input to the time frequency generator 110 and psychoacoustic model 120.
In operation, the transient detection module 210 receives the input audio signal 105 as a series of numbers in frames of fixed length and partitions each of the frames into multiple short-blocks. In some embodiments, the fixed length is a long-block frame length of 1024 samples of digital audio signal. The digital audio signal comprises a series of numbers. The long-block is used when there is no attack in the input audio signal. In some embodiments, the short-blocks have a frame length in the range of about 100 to 300 samples of digital audio signal. The transient detection module 210 computes short-block audio signal characteristics for each of the partitioned short-blocks. In some embodiments, computing the short-block audio signal characteristics includes computing inter-block differences (xdiff(m) for an m'th short-block) and inter-block ratios, and further determining the maximum inter-block difference and ratio, respectively. In some embodiments, computing the inter-block differences includes summing a square of the differences between samples in adjacent short-blocks. Further, in some embodiments, the inter-block ratios are computed to better isolate (detect) the attacks. In this embodiment, the inter-block ratios are computed by dividing adjacent computed inter-block differences as follows:
r[0] = xdiff[0]/pxdiff
r[1] = xdiff[1]/xdiff[0]
r[2] = xdiff[2]/xdiff[1]
r[3] = xdiff[3]/xdiff[2]
r[4] = xdiff[4]/xdiff[3]
where 'pxdiff' is xdiffp[4] (which is xdiff[4] of the previous frame). The transient detection module 210 compares the computed short-block characteristics with a set of threshold values to detect the presence of an attack in each of the short-blocks. Then the transient detection module 210 changes the long-block frame length of the frame including the attack based on the outcome of the comparison, and inputs the changed frame length to the time frequency generator 110 to reduce the effect of the pre-echo caused by the attack. In some embodiments, the time frequency generator uses short-blocks to restrict the attack to a smaller frame so that the attack does not spread to adjacent smaller frame lengths, reducing the pre-echo artifact caused by the attack. In this embodiment, the smaller frames have a frame length in the range of about 100 to 200 samples of digital audio signal.
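The ratio computation above can be sketched directly (a hypothetical helper; an attack shows up as one ratio much larger than its neighbors):

```python
def inter_block_ratios(xdiff, pxdiff):
    # r[0] divides by pxdiff (xdiff[4] of the previous frame);
    # r[m] = xdiff[m] / xdiff[m-1] for the remaining short-blocks.
    prev = [pxdiff] + list(xdiff[:-1])
    return [x / p for x, p in zip(xdiff, prev)]

r = inter_block_ratios([2.0, 4.0, 40.0, 4.0, 2.0], 2.0)
# r == [1.0, 2.0, 10.0, 0.1, 0.5]; the jump at index 2 flags an attack
```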
Figure 3 illustrates an overview of one embodiment of computing inter-block differences to detect the presence of an attack in an input audio signal according to the teachings of the present invention. As explained earlier with reference to Figures 1 and 2, the input audio signal 305 is divided into large frames by a signal splitter 330 and processed by the perceptual audio coder 200 in frames. Each of the frames has a long-block frame length of 1024 samples of digital audio signal. The transient detection module 210 detects the presence of an attack by using two adjacent incoming frames at a time. In the example embodiment shown in Figure 3, the transient detection module 210 receives two adjacent current and previous frames 310 and 320, respectively. Also shown are the partitioned short-blocks 315 and 325 corresponding to the frames 310 and 320, respectively. In the embodiment shown in Figure 3, each of the short-blocks 315 and 325 corresponding to the frames 310 and 320, respectively, has a frame length of 256 samples. The last five short-blocks (the four short-blocks 315 from the frame 310 and one adjacent short-block 325 from the frame 320) are used in detecting the presence of an attack in the adjacent frame 320 before transformation to the frequency domain by the Time frequency generator 110.
The following computational sequence is used in detecting the presence of an attack in the adjacent frame 320:
The inter-block differences xdiff(m) 340 in the time domain are computed using the following algorithm:

xdiff(m) = Σ_j [s(j, m) − s(j, m−1)]^2

where s(j,m) is the j'th time domain sample of the m'th short-block, and s(j,m−1) corresponds to the time domain samples of the preceding short-block (for the first short-block of the current frame, this is the last short-block of the adjacent frame 320). The Diff blocks 350 shown in Figure 3 compute the differences, the Σ blocks 370 compute the sum, and finally xdiff(m) is computed as indicated in the above algorithm.
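A sketch of the xdiff computation (hypothetical code; the first block in the list is taken from the previous frame, following Figure 3, and the toy blocks are shortened for readability):

```python
def inter_block_differences(short_blocks):
    # xdiff(m): sum of squared sample-by-sample differences between
    # short-block m and short-block m-1.
    out = []
    for m in range(1, len(short_blocks)):
        cur, prev = short_blocks[m], short_blocks[m - 1]
        out.append(sum((c - p) ** 2 for c, p in zip(cur, prev)))
    return out

quiet = [0.0] * 4   # toy 4-sample short-blocks
attack = [5.0] * 4
xdiff = inter_block_differences([quiet, quiet, attack, quiet])
# xdiff == [0.0, 100.0, 100.0]: entering and leaving the attack block
```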
In some embodiments, the short-block frame lengths are tuned to the application in use. In these embodiments, distance between the large frames is computed to determine an optimum size for the short-block frame lengths. The following algorithm is used to compute the distance between the large frames:

short-blocks, and d(·) is a function that returns a distance measure between the two vectors.
Figure 4 illustrates one embodiment of the major components of the Quantizer 130 shown in Figure 2 and their interconnections, used in a bit allocation strategy according to the teachings of the present invention. Shown in Figure 4 are Bit Allocator 410, Bit Reservoir 420, and Memory 425. The technique of bit allocation strategy according to the teachings of the present invention includes efficient distribution of bits to different portions of the audio signal. The bits required to code the current frame can be estimated from the perceptual entropy of that frame. Extensive experimentation suggests that the number of bits required to encode is considerably less for a larger frame length than for a smaller frame length. Also, it has been found that the larger frames generally require less than the average number of bits allocated to encode large frames. The amount of reduction below the average number of bits is a function of the bit rate. Using this technique also results in large savings of bits during stationary portions of the audio signal. The technique of bit allocation strategy according to the teachings of the present invention is explained in detail in the following section.
The Quantizer 130 receives the large and small frames, including the samples of digital audio signal, from the time frequency generator 110. Further, the Quantizer 130 receives the computed perceptual entropy from the psychoacoustic model 120 shown in Figure 2. The Bit Allocator 410 computes an average number of bits that can be allocated to each of the received large frames. In some embodiments, the Bit Allocator 410 determines the average number of bits by using the long-block frame length and the sampling frequency of the input audio signal. Further, the Bit Allocator 410 computes a bit rate, and a reduction factor based on the computed bit rate and the received perceptual entropy. In addition, the Bit Allocator 410 computes a reduced average number of bits that can be allocated for each of the large frames using the computed reduction factor. Further, the Bit Allocator 410 computes the remaining bits by subtracting the computed reduced average number of bits from the computed average number of bits. The Bit Allocator 410 includes a Bit Reservoir 420 to receive the remaining bits. The Bit Allocator 410 allocates the reduced average number of bits to the current frame and stores the remaining bits in the Bit Reservoir 420 when the current frame is a large frame. Further, the Bit Allocator allocates the reduced number of bits along with the stored bits from the Bit Reservoir 420 when the current frame is a small frame, to improve the bit allocation between the large and small frames and to enhance the sound quality of the compressed audio signal. The Bit Allocator 410 repeats the above process of bit allocation for the next adjacent frame. In some embodiments, the allocation of bits to a small frame is based on the number of bits available in the Bit Reservoir 420, the bit rate, and a scaling applied to the denominator, which distributes the bits across a continuous sequence of frames that use finer time resolution. At the same time, the Bit Allocator 410 makes sure that the Bit Reservoir 420 is not depleted too much.
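The allocation policy described above might be sketched as follows. The exact reservoir draw rule for small frames is not specified in the text, so a fixed fraction (half) is assumed here purely for illustration; it also keeps the reservoir from being depleted too much.

```python
def allocate_bits(is_short_frame, avg_bits, reduction_factor, reservoir):
    # Long frames: grant the reduced average and bank the remainder
    # in the bit reservoir. Short frames: grant the reduced average
    # plus an assumed draw of half the reservoir.
    reduced = int(avg_bits * reduction_factor)
    if is_short_frame:
        draw = reservoir // 2
        return reduced + draw, reservoir - draw
    return reduced, reservoir + (avg_bits - reduced)

grant, reservoir = allocate_bits(False, 1000, 0.8, 0)         # long frame
grant, reservoir = allocate_bits(True, 1000, 0.8, reservoir)  # short frame
# the short frame receives 800 + 100 banked bits = 900 bits
```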
Figure 4 also illustrates one embodiment of the major components and their interconnections in the Quantizer 130 shown in Figure 2, used in reducing computational complexity in the Quantizer 130 according to the teachings of the present invention. Also shown in Figure 4 are the Rate Control Loop 430 (also generally referred to as the "Inner Iteration Loop"), Comparator 427, and Distortion Control Loop 440 (also generally referred to as the "Outer Iteration Loop").
The Rate Control Loop 430 computes a global gain, commonly referred to as the "common scalefac," for a given set of spectral values with a predetermined value for the maximum number of bits available for encoding the frame (referred to as the "available bits"). The Rate Control Loop arrives at a unique solution for the common scalefac value for a given set of spectral data at a fixed value of available bits, so any other variation of the Rate Control Loop must necessarily arrive at the same solution. The efficiency of the Rate Control Loop is increased by reducing the number of iterations required to compute the common scalefac value. The technique of reducing the number of iterations required to compute the common scalefac value according to the teachings of the present invention is discussed in detail in the following section.
The Quantizer 130 stores the start common scalefac value of a previous adjacent frame to use in quantization of a current frame. The Rate Control Loop 430 computes the common scalefac value for the current frame using the stored start common scalefac value as the starting value during the iterations of the Rate Control Loop 430, reducing the number of iterations required to compute the common scalefac value of the current frame. Further, the Rate Control Loop 430 computes counted bits using the common scalefac value of the current frame. The Comparator 427 coupled to the Rate Control Loop compares the computed counted bits with the available bits. The Rate Control Loop changes the computed common scalefac value based on the outcome of the comparison. In some embodiments, the counted bits comprise the bits required to encode a given set of spectral values for the current frame.
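A minimal sketch of the warm-started rate control loop. The bit counter below is a toy stand-in for the real quantize-and-count step, and the one-step increment is a simplification of the actual search.

```python
def rate_control(count_bits, available_bits, start_common_scalefac):
    # Start from the previous frame's common scalefac rather than
    # zero; each iteration coarsens the global gain (fewer bits)
    # until the coded frame fits within the available bits.
    scalefac = start_common_scalefac
    while count_bits(scalefac) > available_bits:
        scalefac += 1
    return scalefac

toy_count = lambda sf: 2000 - 50 * sf  # toy: each step saves 50 bits
sf = rate_control(toy_count, 1500, 4)  # warm start at 4 -> returns 10
```

Because the loop's solution is unique for a given spectrum and bit budget, starting near the previous frame's answer saves iterations without changing the result.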
The Distortion Control Loop 440 is coupled to the Rate Control Loop 430 to distribute the bits among the samples in the spectrum based on the masking thresholds received from the psychoacoustic model. Also, the Distortion Control Loop 440 tries to allocate bits in such a way that the quantization noise is below the masking thresholds. The Distortion Control Loop 440 also sets the starting value of the start common scalefac to be used in the Rate Control Loop 430.
Figure 5 illustrates one example embodiment of a process 500 of detecting an attack in an input audio signal to reduce a pre-echo artifact caused by the attack during a compression encoding of the input audio signal. The process 500 begins with step 510 by receiving an input audio signal and converting the received input audio signal into a digital audio signal. In some embodiments, the attack comprises a sudden increase in signal amplitude.
Step 520 includes dividing the converted digital audio signal into large frames having a long-block frame length. In some embodiments, the long-block frame length comprises 1024 samples of digital audio signal. In this embodiment, the samples of digital audio signal comprise a series of numbers. In this embodiment, the long-block frame length comprises a frame length used when there is no attack in the input audio signal.
Step 530 includes partitioning each of the large frames into multiple short-blocks. In some embodiments, partitioning large frames into short-blocks includes partitioning into short-blocks having short-block frame lengths in the range of about 100 to 300 samples.
Step 540 includes computing short-block characteristics for each of the partitioned short-blocks based on changes in the input audio signal. In some embodiments, the computing of the short-block characteristics includes computing inter-block differences and determining a maximum inter-block difference from the computed inter-block differences. In some embodiments, the computing of short-block characteristics further includes computing inter-block ratios and determining a maximum inter-block ratio from the computed inter-block ratios. In this embodiment, the computing of inter-block differences includes summing a square of the differences between samples in adjacent short-blocks. Also in this embodiment, the computing of the inter-block ratios includes dividing the adjacent computed inter-block differences. The process of computing the short-block characteristics is discussed in more detail with reference to Figure 3.
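The computations of step 540 can be sketched as follows. The sketch follows the description above (sum of squared sample differences between adjacent short-blocks, and ratios of adjacent differences); the function names, the division-by-zero guard, and the way the two thresholds are combined in `attack_detected` are assumptions for this example, not part of the claimed method.

```python
def short_block_characteristics(blocks):
    """Compute inter-block differences (sum of squared sample
    differences between adjacent short-blocks) and inter-block
    ratios (quotients of adjacent differences), then return the
    maxima of each, per step 540."""
    diffs = [sum((a - b) ** 2 for a, b in zip(prev, cur))
             for prev, cur in zip(blocks, blocks[1:])]
    # Ratio of each difference to the preceding one; the small
    # floor guards against division by zero in silent passages.
    ratios = [d / max(p, 1e-12) for p, d in zip(diffs, diffs[1:])]
    return max(diffs, default=0.0), max(ratios, default=0.0)

def attack_detected(blocks, diff_threshold, ratio_threshold):
    """Step 550: compare the maxima against a set of threshold
    values to detect a presence of the attack."""
    max_diff, max_ratio = short_block_characteristics(blocks)
    return max_diff > diff_threshold and max_ratio > ratio_threshold
```

A sudden jump in amplitude between two adjacent short-blocks produces both a large inter-block difference and a large inter-block ratio, which is what the threshold comparison of step 550 looks for.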
Step 550 includes comparing the computed short-block characteristics to a set of threshold values to detect a presence of the attack in each of the short-blocks. Step 560 includes changing the long-block frame length of one or more large frames based on the outcome of the comparison to reduce the pre-echo artifact caused by the attack. In some embodiments, the changing of the long-block frame length includes changing the large frame into multiple smaller frames to restrict the attack to one or more of the smaller frames so that the pre-echo artifact caused by the attack does not spread to the adjacent larger frames. In some embodiments, the smaller frame lengths include about 100 to 200 samples of digital audio signal.
Figure 6 illustrates one example embodiment of an operation 600 of an efficient strategy for bit allocation to the large and small frames by the Bit Allocator shown in Fig. 4 according to the present invention. The operation 600 begins with step 610 by computing an average number of bits that can be allocated for each of the large frames. In some embodiments, the average number of bits is computed by determining the long-block frame length, the sampling frequency of the input audio signal, and the bit rate used for coding the input audio signal.
Step 620 includes computing a perceptual entropy for the current frame of audio samples using the masking thresholds computed as described in detail with reference to Figure 1. Step 630 includes computing a bit rate using a sampling frequency and the current frame length. Step 640 includes computing a reduction factor based on the computed bit rate and the perceptual entropy. Step 650 includes computing a reduced average number of bits that can be allocated to each of the large frames using the computed reduction factor. Step 660 includes computing remaining bits by subtracting the computed reduced average number of bits from the computed average number of bits. Step 670 includes allocating bits based on whether the frame is large or small. In some embodiments, if the current frame to be coded is large, then a reduced number of bits are allocated to the current frame and the remaining bits are stored in a Bit Reservoir, and if the current frame to be coded is small, then the reduced number of bits are allocated along with the stored bits from the Bit Reservoir. In some embodiments, the above-described operation 600 repeats itself for a next frame adjacent to the current frame.
The following example further illustrates the operation of the above-described operation 600 of the bit allocation strategy:
For example, if a given mono (single-channel) audio signal at a bit rate of 64 kbps is sampled at a sampling frequency of 44100 Hz (meaning there are 44100 samples per second which need to be encoded at a bit rate of 64000 bits per second) and the long-block frame length is 1024 samples, the average number of bits is computed as follows:

average number of bits per frame = bit rate x (frame length / sampling frequency) = 64000 x (1024 / 44100) = 1486 bits (approximately)

Therefore each frame is coded using 1486 bits. Each of the frames does not require the same number of bits, nor does each frame require all of the bits. Assuming the first frame to be coded requires 1400 bits, the remaining unused 86 bits are stored in the Bit Reservoir and can be used in succeeding frames. For the next adjacent frame a total of 1572 bits (1486 bits + 86 bits in the Bit Reservoir) is available for coding. For example, if the next adjacent frame is a short frame, more bits can be allocated for coding.
In some embodiments, less than the average number of bits are used for encoding the large frames (using a reduction factor) and the remaining bits are stored in the Bit Reservoir. For example, in the above case only 1300 bits are allocated for each of the large frames. Then the remaining 186 bits (reduction factor) are stored in the Bit Reservoir.
Generally the Bit Reservoir cannot be used to store a large number of remaining bits. Therefore, a maximum limit is set for the number of bits that can be stored in the Bit Reservoir, and anytime the number of bits exceeds the maximum limit, the excess bits are allocated to the next frame. In the above example, if the Bit Reservoir has exceeded the maximum limit, then the next frame will receive 1300 bits along with the number of bits by which the Bit Reservoir has exceeded the limit.
In the above-described operation 600, when the next frame is a small frame (small frames generally occur rarely), more bits are allocated to the small frame from the Bit Reservoir. The number of extra bits that can be allocated to the small frame is dependent on two factors: the number of bits present in the Bit Reservoir, and the number of consecutive small blocks present in the input audio signal. Basically, the strategy described in the above operation 600 is to remove bits from the long frames and to allocate the removed bits to the small frames as needed.
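The bit-allocation strategy described above can be sketched as follows, using the 64 kbps / 44100 Hz example. The fixed reduction of 186 bits and the reservoir limit are illustrative constants standing in for the perceptual-entropy-derived reduction factor of steps 620-650; the function name and frame markers are likewise assumptions for this example.

```python
def allocate_bits(frames, bit_rate=64000, fs=44100, frame_len=1024,
                  reduction=186, reservoir_limit=4096):
    """Sketch of operation 600: allocate a reduced bit budget to each
    large frame, bank the removed bits in a Bit Reservoir (capped at a
    maximum limit, with the excess passed to the next frame), and spend
    the banked bits on small (attack) frames.
    `frames` is a list of 'large' / 'small' markers."""
    avg_bits = bit_rate * frame_len // fs  # 64000 * 1024 // 44100 = 1486
    reduced = avg_bits - reduction         # e.g. 1300 bits per large frame
    reservoir = carry = 0
    allocations = []
    for kind in frames:
        if kind == 'large':
            allocations.append(reduced + carry)  # carry = last overflow
            carry = 0
            reservoir += reduction               # bank the removed bits
            if reservoir > reservoir_limit:      # cap the reservoir; the
                carry = reservoir - reservoir_limit  # excess goes to the
                reservoir = reservoir_limit          # next frame
        else:  # small frame: spend the banked bits
            allocations.append(reduced + reservoir + carry)
            reservoir = carry = 0
    return allocations
```

With a reservoir limit of 300 bits, two large frames each receive 1300 bits while 186 + 186 = 372 bits accumulate; the reservoir caps at 300, and a following small frame receives 1300 + 300 + 72 = 1672 bits.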
Figure 7 illustrates one example embodiment of operation 700 of reducing computational iterations during compression by a perceptual encoder to improve the operational efficiency of the perceptual audio coder. The operation 700 begins with step 710 by initializing common scalefac for the current frame. In some embodiments, the common scalefac is initialized using a common scalefac value of a previous frame adjacent to the current frame. In some embodiments, this is the common scalefac value obtained during the first call of the Rate Control Loop in the previous frame of the corresponding channel and is denoted as predicted common scalefac. In some embodiments, the initial value of the common scalefac is set to start common scalefac + 1 when the predicted common scalefac value is not greater than the start common scalefac value. In some embodiments, the common scalefac includes a global gain for a given set of spectral values within the frame. The minimum value of common scalefac, or the global gain, is referred to as the start common scalefac value. The value of quantizer change, which is the step-size for changing the value of common scalefac in the iterative algorithm, is set to 1.
At 720 counted bits associated with the current frame are computed. In some embodiments, computing counted bits includes quantizing the spectrum of the current frame and then computing the number of bits required to encode the quantized spectrum of the current frame.

At 730 a difference between the computed counted bits and available bits is computed. In some embodiments, the available bits are the number of bits made available to encode the spectrum of the current frame. In some embodiments, the difference between the computed counted bits and the available bits is computed by comparing the computed counted bits with the available bits.
At 740 the computed difference is compared with a pre-determined MAXDIFF value. Generally, the value of pre-determined MAXDIFF is set to be in the range of about 300-500.
At 750 the common scalefac value and quantizer change value are reset based on the outcome of the comparison. In some embodiments, the common scalefac value is reset when the computed difference is greater than the pre-determined MAXDIFF, and the common scalefac value is changed based on the outcome of the comparison when the computed difference is less than or equal to the pre-determined MAXDIFF value.
In some embodiments, the changing of the common scalefac value based on the outcome of the comparison further includes storing the computed counted bits along with the associated common scalefac value, then comparing the counted bits with the available bits, and finally changing the common scalefac value based on the outcome of the comparison.
In some embodiments, changing the common scalefac value based on the outcome of the comparison further includes assigning a value to a quantizer change, and changing the common scalefac value using the assigned value to the quantizer change and repeating the above steps when the counted bits is greater than the available bits. Some embodiments include restoring the counted bits and outputting the common scalefac value when the counted bits is less than or equal to the available bits.
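The iterative adjustment described above can be sketched as follows. This is a simplified sketch of the inner rate-control iteration only; `count_bits(scalefac)` stands in for quantizing the spectrum at a given common scalefac and counting the resulting bits (larger scalefac means coarser quantization and fewer bits), and the MAXDIFF reset path of steps 730-750 and the scalefac ceiling are assumptions omitted or simplified for brevity.

```python
def rate_control_loop(count_bits, available_bits,
                      predicted_scalefac, start_scalefac,
                      max_scalefac=255):
    """Sketch of the iteration of operation 700: start from the
    previous frame's predicted common scalefac (falling back to
    start_scalefac + 1 when the prediction is not greater than it),
    then raise the scalefac in quantizer_change steps until the
    counted bits fit within the available bits."""
    scalefac = max(predicted_scalefac, start_scalefac + 1)
    quantizer_change = 1
    counted = count_bits(scalefac)
    while counted > available_bits and scalefac < max_scalefac:
        scalefac += quantizer_change  # coarsen quantization, fewer bits
        counted = count_bits(scalefac)
    return scalefac, counted
```

Starting from the predicted value rather than the minimum is what saves iterations: when successive frames need similar scalefacs, the loop converges after only a few steps instead of walking up from start common scalefac every time.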
In some embodiments, resetting the common scalefac value further includes computing a predicted common scalefac value based on the stored common scalefac value of the previous frame adjacent to the current frame, and resetting the common scalefac value. In case counted bits is greater than available bits, common scalefac is set to the start common scalefac value + 64 when the start common scalefac value + 64 is not greater than the predicted common scalefac value; otherwise common scalefac is set to predicted common scalefac, and quantizer change is set to 64. Some embodiments include setting common scalefac to start common scalefac + 32 and further setting quantizer change to 32 when the counted bits is less than or equal to available bits and the common scalefac is not greater than start common scalefac + 32, and, if predicted common scalefac is greater than the present common scalefac, recomputing counted bits. Further, some embodiments include setting common scalefac to start common scalefac + 64 when the counted bits is less than or equal to available bits and the common scalefac value is greater than the start common scalefac + 32, and, if predicted common scalefac is greater than the present common scalefac, recomputing counted bits.

Figure 8 illustrates one example embodiment of operation 800 of stereo coding to improve sound quality according to the present invention. The operation 800 begins with step 810 by converting left and right audio signals into left and right digital audio signals, respectively. Step 820 divides each of the converted left and right digital audio signals into frames having a long-block frame length. In some embodiments, the long-block frame length includes 1024 samples of digital audio signal.
Step 830 includes partitioning each of the frames into corresponding multiple left and right short-blocks having short-block frame length. In some embodiments, the short-block frame length includes samples in the range of about 100 to 300 samples of digital audio signal.
Step 840 includes computing left and right short-block characteristics for each of the partitioned left and right short-blocks. In some embodiments, the computing of the short-block characteristics includes computing sum and difference short-block characteristics by summing and subtracting respective samples of the digital audio signals in the left and right short-blocks. In some embodiments, computing the sum and difference short-block characteristics further includes computing sum and difference energies in each of the short-blocks in the left and right short-blocks by squaring each of the samples and adding the squared samples in each of the left and right short-blocks. In addition, a short-block energy ratio is computed for each of the short-blocks using the computed sum and difference energies, and a number of short-blocks whose computed short-block energy ratio exceeds a pre-determined energy ratio value is determined.
Step 850 includes encoding the stereo audio signal based on the computed short-block characteristics. In some embodiments, the encoding of the stereo signal includes using a sum and difference compression encoding technique to encode the left and right audio signals based on the determined number of short-blocks exceeding the pre-determined energy ratio value. In some embodiments, the pre-determined energy ratio value is greater than 0.75 or less than 0.25.
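The stereo-mode decision of steps 840-850 can be sketched as follows. The 0.75 ratio threshold follows the text above (an energy ratio near 1 or near 0 means most of the energy sits in the sum or the difference channel, where sum/difference coding helps); the function name, the normalization of the ratio, and the `min_blocks` vote count are assumptions for this example, not part of the claimed method.

```python
def ms_decision(left_blocks, right_blocks, ratio_threshold=0.75,
                min_blocks=3):
    """Compute sum and difference energies per short-block (step 840),
    count the short-blocks whose energy ratio is extreme, and decide
    whether to use sum/difference (M/S) encoding (step 850)."""
    votes = 0
    for l, r in zip(left_blocks, right_blocks):
        sum_energy = sum((a + b) ** 2 for a, b in zip(l, r))
        diff_energy = sum((a - b) ** 2 for a, b in zip(l, r))
        total = sum_energy + diff_energy
        if total == 0:
            continue  # silent block: no evidence either way
        ratio = sum_energy / total
        # An extreme ratio (near 1 or near 0) favors M/S coding.
        if ratio > ratio_threshold or ratio < 1 - ratio_threshold:
            votes += 1
    return votes >= min_blocks  # True: use sum/difference encoding
```

Identical left and right channels put all the energy in the sum channel (ratio near 1), so the decision favors M/S; uncorrelated channels keep the ratio near 0.5 and the coder stays in normal left/right mode.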
Figure 9 shows an example of a suitable computing system environment 900 for implementing embodiments of the present invention, such as those shown in Figures 1-8. Various aspects of the present invention are implemented in software, which may be run in the environment shown in Figure 9 or any other suitable computing environment. The present invention is operable in a number of other general purpose or special purpose computing environments. Some computing environments are personal computers, server computers, hand-held devices, laptop devices, multiprocessors, microprocessors, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments, and the like. The present invention may be implemented in part or in whole as computer-executable instructions, such as program modules that are executed by a computer. Generally, program modules include routines, programs, objects, components, data structures and the like to perform particular tasks or implement particular abstract data types. In a distributed computing environment, program modules may be located in local or remote storage devices.
Figure 9 shows a general computing device in the form of a computer 910, which may include a processing unit 902, memory 904, removable storage 912, and non-volatile memory 908. Computer 910 may include - or have access to - a computing environment that includes a variety of computer-readable media, such as volatile 906 and non-volatile memory 908, and removable and non-removable storages 912 and 914, respectively. Computer storage includes RAM, ROM, EPROM and EEPROM, flash memory or other memory technologies, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium capable of storing computer-readable instructions. Computer 910 may include - or have access to - a computing environment that includes input 916, output 918, and a communication connection 920. The computer 910 may operate in a networked environment using a communication connection 920 to connect to one or more remote computers. The remote computer may include a personal computer, server, router, network PC, a peer device or other common network node, or the like. The communication connection 920 may include a local area network (LAN), a wide area network (WAN) or other networks.
Conclusion
The above-described invention increases compression efficiency by providing a technique to allocate bits between long and short-blocks. Also, the present invention significantly enhances the sound quality of the encoded audio signal by more accurately detecting an attack and reducing pre-echo artifacts caused by attacks. In addition, the present invention provides an audio coder that is computationally efficient and more accurate in switching between the normal and the M-S modes when the audio signal is a stereo signal.
The above description is intended to be illustrative, and not restrictive. Many other embodiments will be apparent to those skilled in the art. The scope of the invention should therefore be determined by the appended claims, along with the full scope of equivalents to which such claims are entitled.




WE CLAIM :
1. A method for detecting an attack in an input audio signal to reduce a pre-echo
artifact caused by an attack during compression encoding of the input audio signal,
comprising:
converting the input audio signal into a digital audio signal;
dividing the digital audio signal into large frames having a long-block frame length;
partitioning each of the large frames into multiple short-blocks;
computing short-block audio signal characteristics for each of the short-blocks based on changes in the input audio signal;
comparing the computed short-block audio signal characteristics to a set of threshold values to detect a presence of the attack in each of the short-blocks; and
changing the long-block frame length of one or more large frames based on the outcome of the comparison to reduce the pre-echo artifact caused by the attack.
2. The method as claimed in claim 1, wherein detecting the attack comprises:
detecting a sudden increase in amplitude within the long-block frame length.
3. The method as claimed in claim 2, wherein the long-block frame length comprises 1024 samples of digital audio signal.
4. The method as claimed in claim 3, wherein the samples of digital audio signal comprise series of numbers.
5. The method as claimed in claim 3, wherein the long-block frame length comprises a frame length used when there is no attack in the input audio signal.

6. The method as claimed in claim 5, wherein the large frames comprise: current and previous adjacent frames.
7. The method as claimed in claim 5, wherein the short-blocks comprise:
short-blocks having short-block frame lengths in the range of about 100 to 300 samples.
8. The method as claimed in claim 5, wherein computing the short-block audio
signal characteristics comprises:
computing inter-block differences; and
determining a maximum inter-block difference from the computed inter-block differences.
9. The method as claimed in claim 8, wherein computing the short-block audio
signal characteristics comprises:
computing inter-block ratios; and
determining a maximum inter-block ratio from the computed inter-block ratios.
10. The method as claimed in claim 9, wherein computing the inter-block
differences comprises:
summing a square of differences between samples in adjacent short-blocks.
11. The method as claimed in claim 10, wherein computing the inter-block ratios
comprises:
dividing the adjacent computed inter-block differences.
12. The method as claimed in claim 10, wherein comparing the computed short-
block values to the set of threshold values comprises:
comparing the determined maximum inter-block difference and the maximum inter-block ratio to the set of threshold values.
13. The method as claimed in claim 10, wherein changing the long-block frame
length comprises:
changing the long-block frame length of the one or more large frames including the attack to multiple smaller frames having smaller frame lengths to restrict the attack to one or more smaller frames so that the pre-echo artifact caused by the attack does not spread to the adjacent larger frames.
14. The method as claimed in claim 13, wherein each of the smaller-frame lengths comprises about 100 to 300 samples of digital audio signal.
15. The method as claimed in claim 13, comprising:
computing an average number of bits that can be allocated for each of the large frames;
computing a perceptual entropy for the current frame of audio samples;
computing a bit rate using a sampling frequency and the current frame length;
computing a reduction factor based on the computed bit rate and the perceptual entropy;
computing a reduced average number of bits that can be allocated for each of the large frames using the computed reduction factor;
computing remaining bits by subtracting the computed average number of bits with the computed reduced average number of bits;
if the current frame to be coded is a large frame, then allocating the reduced average number of bits to the current frame and storing the remaining bits in a Bit Reservoir; and
if the current frame to be coded is a small frame, then allocating the reduced average number of bits along with the stored bits from the Bit Reservoir to the current frame.
16. The method as claimed in claim 15 comprising:
repeating the above steps for a next adjacent frame.
17. The method as claimed in claim 16, wherein computing the average number of
bits comprises:
determining a bit rate of the input audio signal;
determining the long-block frame length of the large frame;
determining a sampling frequency of the input audio signal; and
computing the average number of bits that can be allocated for each of the large frames based on the determined bit rate, long-block frame length, and sampling frequency.
18. A method of reducing computation during quantization iterations for
compression of an input audio signal to improve the efficiency of operation of a
perceptual encoder, comprising:
initializing a common scalefac value of a current frame;
initializing a quantizer change value of the current frame;
computing counted bits associated with the current frame;
computing a difference between the computed counted bits and available bits;
comparing the computed difference with a pre-determined MAXDIFF value;
if the computed difference is greater than the pre-determined MAXDIFF value, then resetting the common scalefac value and quantizer change value; and
if the computed difference is less than or equal to the pre-determined MAXDIFF value, then changing common scalefac value based on the outcome of the comparison.
19. The method as claimed in claim 18, wherein the common scalefac value
comprises:
a global gain for a given set of spectral values within a frame.
20. The method as claimed in claim 18, wherein a start common scalefac comprises a theoretical minimum value of the common scalefac.
21. The method as claimed in claim 18, wherein the quantizer change comprises a step size to arrive at a final value of common scalefac.
22. The method as claimed in claim 18, wherein initializing the common scalefac
value comprises:
initializing the common scalefac value of the current frame with a predicted
common scalefac.

23. The method as claimed in claim 18, wherein initializing the common scalefac comprises setting the value of common scalefac to start common scalefac + 1 when the predicted common scalefac is less than the start common scalefac.
24. The method as claimed in claim 18 wherein initializing the quantizer change comprises setting the value of quantizer change to 1.
25. The method as claimed in claim 18, wherein computing the counted bits associated with the current frame comprises:
quantizing spectrum of the current frame; and
computing number of bits required to encode the quantized spectrum of the current frame.
26. The method as claimed in claim 18, wherein available bits comprises:
number of bits made available to encode the spectrum of the current frame.
27. The method as claimed in claim 18, wherein the pre-determined MAXDIFF value is in the range of about 300 - 500.
28. The method as claimed in claim 18, wherein changing common scalefac value based on the outcome of the comparison comprises:
storing the computed counted bits along with the associated common scalefac value;
comparing the counted bits with the available bits; and
changing the common scalefac value based on the outcome of the comparison.
29. The method as claimed in claim 18, wherein changing the common scalefac
value based on the outcome of the comparison comprises:
assigning a value to a quantizer change;
if the counted bits is greater than the available bits, then changing the common scalefac value using the assigned value to the quantizer change and repeating the above steps starting with the computing of the counted bits; and
if the counted bits is less than or equal to available bits, then restoring the counted bits and outputting the common scalefac value.
30. The method as claimed in claim 18, wherein resetting the common scalefac
value and the quantizer change value comprises:
computing predicted common scalefac value based on stored common scalefac value of the previous frame adjacent to the current frame;
if counted bits is greater than available bits and if the start common scalefac value + 64 is not greater than predicted common scalefac value, then resetting the common scalefac value to the start common scalefac value + 64;
if the counted bits is less than or equal to available bits and the common scalefac is not greater than start common scalefac + 32, then the common scalefac is set to start common scalefac + 32, and the quantizer change is set to 32 and counted bits is recomputed if predicted common scalefac is greater than common scalefac; and
if the counted bits is less than or equal to available bits, and the common scalefac value is greater than the start common scalefac + 32, then the common scalefac is set to start common scalefac + 64 and counted bits is recomputed if predicted common scalefac is greater than common scalefac.
31. A method of compression encoding a stereo audio signal, including left and
right audio signals, comprising:
converting the left and right audio signals into left and right digital audio signals, respectively;
dividing each of the left and right digital audio signals into frames having a long-block frame length;
partitioning each of the frames into corresponding multiple left and right short-blocks having short-block frame length;
computing left and right short-block characteristics for each of the partitioned left and right short-blocks; and
compression encoding the stereo audio signal based on the computed short-block characteristics.

32. The method as claimed in claim 31, wherein the long-block frame length comprises 1024 samples of digital audio signal.
33. The method as claimed in claim 32, wherein the samples of digital audio signal comprise series of numbers.
34. The method as claimed in claim 32, wherein the short-block frame length comprises: samples in the range of about 100 to 300 samples of digital audio signal.
35. The method as claimed in claim 34, wherein computing left and right short-block
characteristics comprises:
computing sum and difference short-block characteristics by summing and subtracting respective samples of digital audio signals in the left and right short-blocks.
36. The method as claimed in claim 35, wherein computing the sum and difference
short-block characteristics comprises:
computing sum and difference energies in each of the short-blocks in the left and right short-blocks by squaring each of the samples and adding the squared samples in each of the left and right short-blocks;
computing a short-block energy ratio using the respective short-block computed sum and difference energies;
determining a number of short-blocks whose computed short-block energy ratio exceeds a pre-determined energy ratio value; and
using a sum and difference compression encoding technique based on the determined number of short-blocks exceeding the pre-determined energy ratio value.
37. The method as claimed in claim 36, wherein the pre-determined energy ratio value is greater than 0.75 or less than 0.25.
38. A method for processing an audio signal, comprising:

converting the audio signal into a digital audio signal;
dividing the digital audio signal into large frames having a long-block frame length;
partitioning each of the large frames into multiple short-blocks;
computing short-block audio signal characteristics for each of the short-blocks based on changes in the input audio signal;
comparing the computed short-block audio signal characteristics to a set of threshold values to detect a presence of the attack in each of the short-blocks; and
changing the long-block frame length of one or more large frames based on the outcome of the comparison to reduce the pre-echo artifact caused by the attack.
39. The method as claimed in claim 38, wherein detecting the attack comprises:
detecting a sudden increase in amplitude within the long-block frame length.
40. The method as claimed in claim 38, wherein the long-block frame length comprises 1024 samples of digital audio signal.
41. The method as claimed in claim 40, wherein the samples of digital audio signal comprise series of numbers.
42. The method as claimed in claim 41, wherein the long-block frame length comprises a frame length used when there is no attack in the input audio signal.
43. The method as claimed in claim 41, wherein the short-blocks comprise:
short-blocks having short-block frame lengths in the range of about 100 to 300
samples.
44. The method as claimed in claim 41, wherein computing the short-block audio
signal characteristics comprises:
computing inter-block differences; and
determining a maximum inter-block difference from the computed inter-block differences.

45. An apparatus to detect an attack in an input digital audio signal to reduce a pre-
echo artifact caused by the attack during compression encoding of the input digital
audio signal, comprising:
a time frequency generator to receive the digital audio signal and divide the digital audio signal into large frames having a long-block frame length, and to further partition each of the large frames into multiple short-blocks; and
a transient detection module coupled to the time frequency generator to receive the multiple short-blocks and compute short-block audio signal characteristics for each of the received multiple short-blocks based on changes in the input digital audio signal, wherein the transient detection module compares the computed short-block audio signal characteristics to a set of threshold values to detect a presence of the attack in each of the multiple short-blocks, and the transient detection module further changes the long-block frame length of one or more large frames including the attack based on the outcome of the comparison, wherein the time frequency generator receives the changed one or more large frames and compresses the changed one or more large frames to reduce the pre-echo artifact caused by the attack.
46. The apparatus as claimed in claim 45, wherein the attack comprises:
a sudden increase in amplitude within the long-block frame length of the large frame of digital audio signal.
47. The apparatus as claimed in claim 46, wherein the long-block frame length comprises 1024 samples of digital audio signal.
48. The apparatus as claimed in claim 47, wherein the samples of digital audio signal comprise samples selected from the group consisting of series of numbers and bits.
49. The apparatus as claimed in claim 47, wherein long-block frame length comprises a frame length used when there is no attack in the input digital audio signal.

50. The apparatus as claimed in claim 47, wherein the large frames comprise: a current and a previous adjacent frame.
51. The apparatus as claimed in claim 50, wherein the short-blocks comprise a frame length in the range of about 100 to 300 samples.
52. The apparatus as claimed in claim 50, wherein the transient detection module further computes inter-block differences and determines a maximum inter-block difference from the computed inter-block differences.
53. The apparatus as claimed in claim 52, wherein the transient detection module further computes the inter-block differences by summing the samples in each of the short-blocks to obtain a short-block signal for each of the short-blocks, and further computes the inter-block differences by using the summed short-block signals of adjacent short-blocks.
54. The apparatus as claimed in claim 52, wherein the transient detection module further computes inter-block ratios and determines a maximum inter-block ratio from the computed inter-block ratios.
55. The apparatus as claimed in claim 54, wherein the transient detection module further computes inter-block ratios by dividing the adjacent computed inter-block differences.
56. The apparatus as claimed in claim 54, wherein the transient detection module compares the determined maximum inter-block difference and the maximum inter¬block ratio to a set of threshold values to detect the presence of the attack.
57. The apparatus as claimed in claim 54, wherein the transient detection module changes the long-block frame length of the one or more large frames including the attack to multiple smaller frames having smaller frame lengths to restrict the attack to one or more smaller frames so that the attack does not spread to the adjacent large frames to reduce the pre-echo artifact caused by the attack.

58. The apparatus as claimed in claim 57, wherein each of the smaller frame lengths comprises samples in the range of about 100 to 200 samples of digital audio signal.
59. The apparatus as claimed in claim 54, comprising:
a psychoacoustic model coupled to the transient detection module to compute a perceptual entropy for the current frame including samples of digital audio signal;
a quantizer coupled to the time frequency generator and the psychoacoustic model to receive the large and smaller frames including the samples of digital audio signal from the time frequency generator and the computed perceptual entropy from the psychoacoustic model, wherein the quantizer comprises:
a Bit Allocator to compute an average number of bits that can be allocated to each of the received large frames, and to compute a bit rate and a reduction factor based on the computed bit rate, and the received perceptual entropy, the Bit Allocator further computing a reduced average number of bits that can be allocated for each of the large frames using the computed reduction factor, and further computing remaining bits by subtracting the computed reduced average number of bits from the computed average number of bits; and
a Bit Reservoir to receive the remaining bits, wherein the Bit Allocator allocates a reduced average number of bits to the current frame and stores the remaining bits in the Bit Reservoir when the current frame is a large frame, and wherein the Bit Allocator further allocates the reduced number of bits along with the stored bits from the Bit Reservoir when the current frame is a small frame to improve the bit allocation between the large and small frames to enhance sound quality of the compressed audio signal.
60. The apparatus as claimed in claim 59, wherein the Bit Allocator repeats the bit allocation to a next adjacent frame.
61. The apparatus as claimed in claim 59, wherein the Bit Allocator determines the average number of bits by using the bit rate, the long-block frame length, and the sampling frequency.
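The allocation scheme of claims 59–61 can be sketched as follows. The average number of bits follows from the bit rate, frame length, and sampling frequency (claim 61); a reduced average is obtained via a reduction factor, and the difference is banked in the Bit Reservoir on large frames and paid out on small frames. The claims do not specify how the reduction factor is derived from the bit rate and perceptual entropy, so it is modelled here as a fixed fraction; that constant is an assumption for illustration.

```python
class BitAllocator:
    """Illustrative sketch of the Bit Allocator and Bit Reservoir of claims 59-61.

    reduction_factor stands in for the value the claims derive from the
    bit rate and perceptual entropy; 0.9 is an assumed placeholder.
    """
    def __init__(self, bit_rate, sampling_frequency, reduction_factor=0.9):
        self.bit_rate = bit_rate
        self.fs = sampling_frequency
        self.reduction_factor = reduction_factor
        self.reservoir = 0.0  # the Bit Reservoir (claim 59)

    def allocate(self, frame_length, is_large_frame):
        # Average bits per frame from bit rate, frame length, and
        # sampling frequency (claim 61).
        average = self.bit_rate * frame_length / self.fs
        reduced = average * self.reduction_factor
        remaining = average - reduced
        if is_large_frame:
            # Large frame: grant the reduced average, store the rest (claim 59).
            self.reservoir += remaining
            return reduced
        # Small frame: reduced allocation plus the stored reservoir bits.
        grant = reduced + self.reservoir
        self.reservoir = 0.0
        return grant
```

Banking bits on long, easy-to-code frames and releasing them on the short frames that cover an attack is what improves the bit distribution between the two frame types.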

62. The apparatus as claimed in claim 59, wherein the quantizer comprises:
a memory to store a start common scalefac of the previous adjacent frame to use in computation of the current frame;
a Rate Control Loop to compute common scalefac of the current frame using the stored start common scalefac as a starting value during computation of iterations by the Rate Control Loop to reduce the number of iterations required to compute the common scalefac of the current frame, and the Rate Control Loop further to compute counted bits using the common scalefac of the current frame; and
a comparator coupled to the Rate Control Loop to compare the computed count bits with available bits, wherein the Rate Control Loop changes computed common scalefac based on the outcome of the comparison.
63. The apparatus as claimed in claim 62, wherein the start common scalefac comprises: a global gain for a given set of spectral values within the previous adjacent frame.
64. The apparatus as claimed in claim 63, wherein the count bits comprises:
bits required to encode a given set of spectral values for the current frame.
65. The apparatus as claimed in claim 62, wherein the Rate Control Loop initializes the common scalefac value with a predicted common scalefac obtained during a first call of the Rate Control Loop in the previous adjacent frame of a corresponding channel.
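The seeded search of claims 62–65 can be sketched as below. Here `count_bits` is a stand-in for counting the bits required to encode the current frame's spectral values at a given common scalefac (global gain, per claims 63–64); the assumption, consistent with the comparator of claim 62, is that raising the scalefac coarsens quantization and lowers the bit count. The function shape and step size are illustrative, not from the patent.

```python
def rate_control_loop(count_bits, available_bits, start_common_scalefac):
    """Illustrative sketch of the Rate Control Loop of claims 62-65.

    count_bits(scalefac) is an assumed callable returning the bits needed
    to encode the current frame's spectral values at that common scalefac.
    Seeding with the stored previous-frame start common scalefac (claim 62)
    reduces the number of iterations needed to converge.
    """
    scalefac = start_common_scalefac  # seed from the value stored in memory
    iterations = 0
    # Comparator: while the counted bits exceed the available bits,
    # change the common scalefac (coarser quantization -> fewer bits).
    while count_bits(scalefac) > available_bits:
        scalefac += 1
        iterations += 1
    return scalefac, iterations
```

With a cold start the loop steps through every intermediate scalefac; starting from the previous frame's value, which is usually close for stationary audio, the same final scalefac is reached in far fewer iterations.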

Documents:

1221-chenp-2003-abstract.pdf

1221-chenp-2003-claims filed.pdf

1221-chenp-2003-claims granted.pdf

1221-chenp-2003-correspondnece-others.pdf

1221-chenp-2003-correspondnece-po.pdf

1221-chenp-2003-description(complete)filed.pdf

1221-chenp-2003-description(complete)granted.pdf

1221-chenp-2003-drawings.pdf

1221-chenp-2003-form 1.pdf

1221-chenp-2003-form 18.pdf

1221-chenp-2003-form 26.pdf

1221-chenp-2003-form 3.pdf

1221-chenp-2003-form 5.pdf

1221-chenp-2003-other documents.pdf

1221-chenp-2003-pct.pdf


Patent Number 213066
Indian Patent Application Number 1221/CHENP/2003
PG Journal Number 13/2008
Publication Date 28-Mar-2008
Grant Date 19-Dec-2007
Date of Filing 07-Aug-2003
Name of Patentee SASKEN COMMUNICATION TECHNOLOGIES LIMITED
Applicant Address HAL 2nd STAGE, 5008 12th B Main, Indiranagar Bangalore-560008
Inventors:
# Inventor's Name Inventor's Address
1 CHAKRAVARTHY K.P.P.KALYAN HAL 3rd STAGE, 361, 6th CROSS, 4th MAIN, Bangalore-560 075,
2 RUTHRAMOORTHY NAVANEETHA K APARTMENT 7122, 1400 WORCESTER ROAD, FRAMINGHAM, MA 01702,
3 PATWARDHAN PUSHKAR P SHREERANG SOCIETY, CD-65 C11, THANEWEST, 400 601,
4 MONDAL BISHWARUP 79, KALITALA ROAD, PURBACHAL, KALIKAPUR, KOLKATA 700 078,
PCT International Classification Number G10L 19/00
PCT International Application Number PCT/IB2001/001371
PCT International Filing date 2001-07-31
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 60/261,358 2001-01-11 U.S.A.