Title of Invention  A METHOD OF ENCODING AND DECODING AND AN AUDIO ENCODER AND PLAYER 

Abstract  The present invention relates to a method of encoding and decoding and an audio encoder and player. Encoding (2) an audio signal (A) is provided, wherein basic waveforms in the audio signal (A) are determined (200), a noise component (S) is obtained (21) from the audio signal (A) by subtracting (21) the basic waveforms from the audio signal (A), a spectrum of the noise component (S) is modeled (22) by determining autoregressive and moving-average parameters (p<sub>i</sub>,q<sub>i</sub>), and the autoregressive and the moving-average parameters (p<sub>i</sub>,q<sub>i</sub>) are included (23) in an encoded audio signal (A') together with waveform parameters (C<sub>i</sub>) representing the basic waveforms. 
Full Text  The invention relates to audio coding. WO 97/28527 discloses the enhancement of speech parameters by determining a background noise PSD estimate, determining noisy speech parameters, determining a noisy speech PSD estimate from the speech parameters, subtracting the background noise PSD estimate from the noisy speech PSD estimate, and estimating enhanced speech parameters from the enhanced speech PSD estimate. The enhanced parameters may be used for filtering noisy speech in order to suppress the noise, or be used directly as speech parameters in speech encoding. The parameters and the PSD estimates are obtained by autoregressive modeling. It is noted in this document that such an estimate is not a statistically consistent one, but that in speech signal processing this is not a serious problem. An object of the invention is to provide advantageous audio coding. To this end, the invention provides a method of encoding an audio signal, a method of decoding an encoded audio signal, an audio encoder, an audio player, an audio system, an encoded audio signal and a storage medium as defined in the independent claims. Advantageous embodiments are defined in the dependent claims. According to a first aspect of the invention, parametric ARMA modeling is used for modeling a noise component in an audio signal, which noise component is obtained by subtracting basic waveforms from the audio signal. The audio signal may comprise audio in general, such as music, but also speech. ARMA modeling of the noise component according to the invention has the further advantage that, for an accurate modeling of a noise component, fewer parameters are necessary than would be the case in full AR or MA modeling of comparable accuracy. Fewer parameters mean, inter alia, better compression. The invention uses an ARMA model estimation that is suitable for a real-time implementation. 
The invention recognizes that AR or MA models are not always sufficiently accurate or parsimonious in conveying the information of the power spectral estimate. On a logarithmic scale, with Linear Predictive Coding (LPC) methods (all-pole modeling), peaks of the function are usually well modeled but valleys are underestimated. The reverse occurs in an all-zero model. In audio and speech coding, a logarithmic scale is more appropriate than a linear scale. Therefore, a good fit to the power spectrum on a logarithmic scale is preferred. The model according to the invention gives a better trade-off between complexity and accuracy. The error in this model can be evaluated on a logarithmic scale. In a first embodiment of the invention, the spectrum to be modeled is split into a first part and a second part, wherein the first part is modeled by a first model to obtain autoregressive parameters and the second part is modeled by a second model to obtain moving-average parameters. The combination of the constituent processes provides an accurate ARMA model. The splitting is preferably performed in an iterative procedure. In a method according to the invention, a non-linear optimization problem may be omitted. In a preferred embodiment of the invention, the second modeling operation comprises the step of using the first modeling operation on a reciprocal of the second part of the target spectrum. In this embodiment, only one modeling operation needs to be defined, wherein the autoregressive parameters are obtained by modeling the first part of the spectrum and the moving-average parameters are obtained by modeling a reciprocal of the second part of the spectrum by the same, i.e. first, modeling operation. Although less preferred, it is also possible to use a second modeling operation that yields moving-average parameters on the second part and, to obtain autoregressive parameters, use the same second modeling operation on a reciprocal of the first part of the spectrum. P. Stoica and R.L. 
Moses, Introduction to Spectral Analysis, Prentice Hall, New Jersey, 1997, pp. 101-108, disclose parametric methods for modeling rational spectra. In general, a moving-average (MA) signal is obtained by filtering white noise with an all-zero filter. Owing to this all-zero structure, it is not possible to use an MA equation to model a spectrum with sharp peaks unless the MA order is chosen "sufficiently large". This is to be contrasted with the ability of the autoregressive (AR), or all-pole, equation to model narrowband spectra by using fairly low model orders. The MA model provides a good approximation for those spectra which are characterized by broad peaks and sharp nulls. Such spectra are encountered less frequently in applications than narrowband spectra, so there is somewhat limited engineering interest in using the MA signal model for spectral estimation. Another reason for this limited interest is that the MA parameter estimation problem is basically a non-linear one, and is significantly more difficult to solve than the AR parameter estimation problem. In any case, the types of difficulties in MA and ARMA estimation problems are quite similar. Spectra with both sharp peaks and deep nulls cannot be modeled by either AR or MA equations of reasonably small orders. It is in these cases that the more general ARMA model, also called the pole-zero model, is valuable. However, the great initial promise of ARMA spectral estimation diminishes to some extent because there is as yet no well-established algorithm, from both theoretical and practical standpoints, for ARMA parameter estimation. The "theoretically optimal ARMA estimators" are based on iterative procedures whose global convergence is not guaranteed. The "practical ARMA estimators" are computationally simple and often reliable, but their statistical accuracy may be poor in some cases. The prior art discloses two-stage models, in which first an AR estimation is performed and thereafter an MA estimation. 
Both methods give inaccurate estimates or require high computational effort in those cases where the poles and zeros of the ARMA model description are closely spaced together at positions near the unit circle. Such ARMA models, with nearly coinciding poles and zeros of modulus close to one, correspond to narrowband signals. In both methods, the estimation of the zeros translates to a non-linear optimization problem. In the prior art methods according to Stoica and Moses, the computational burden lies in matrix inversions. Further, it is unclear to which value the order of the AR model should be set, except that it needs to be high for zeros close to the unit circle. Therefore, the computational complexity is difficult to assess. In the method according to the invention, the computational burden lies in the iterative nature of the splitting process and the transformation to the frequency domain (Stoica and Moses calculate primarily in the time domain). The invention provides better results in the case of zeros close to the unit circle. Furthermore, the transformation to the frequency domain opens the possibility of manipulations. An example is to make the split frequency-dependent on the basis of a priori or measurement data. Another advantage is the applicability to warped frequency data, as is explained below. In order to guarantee real-time ARMA modeling, a fast transformation to the frequency domain should be applied, e.g. Welch's averaged periodogram method, which is well known in the art. Autoregressive and moving-average parameters can be represented in different ways, e.g. by polynomials, zeros of the polynomials (together with a gain factor), reflection coefficients or log(Area) ratios. In an audio coding application, the autoregressive and moving-average parameters are preferably represented as log(Area) ratios. 
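As an illustration of the log(Area) representation mentioned above, reflection coefficients can be recovered from an AR polynomial with the step-down (reverse Levinson-Durbin) recursion and then mapped to log(Area) ratios. This is a common convention, not taken from the text, and sign conventions for LARs differ between codecs:

```python
import numpy as np

def poly_to_lar(a):
    """Convert an AR polynomial [1, a1, ..., ap] to log(Area) ratios.

    Uses the step-down (reverse Levinson-Durbin) recursion to obtain
    reflection coefficients k_m, then LAR_m = ln((1 - k_m) / (1 + k_m)).
    """
    a = np.asarray(a, dtype=float) / a[0]
    ks = []
    while a.size > 1:
        k = a[-1]                      # highest-order coefficient is k_m
        ks.append(k)
        if a.size > 2:                 # step down one model order
            body = (a[1:-1] - k * a[-2:0:-1]) / (1.0 - k * k)
            a = np.concatenate([[1.0], body])
        else:
            a = a[:1]
    ks = np.array(ks[::-1])            # k_1 ... k_p
    return np.log((1.0 - ks) / (1.0 + ks))
```

The mapping is invertible (k = (1 - e^LAR)/(1 + e^LAR)), and the LARs are unbounded real numbers, which is what makes them attractive for quantization.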
The autoregressive and moving-average parameters that are determined in the ARMA modeling according to the invention are combined to obtain the filter parameters that are transmitted. US-A 5,943,429 discloses a spectral subtraction noise suppression method in a frame-based digital communication system. The method is performed by a spectral subtraction function which is based on an estimate of the power spectral density of background noise of non-speech frames and an estimate of the power spectral density of speech frames. Each speech frame is approximated by a parametric model that reduces the number of degrees of freedom. The estimate of the power spectral density of each speech frame is estimated from the approximate parametric model. Also in this case, the parametric model is an AR model. US-A 4,188,667 discloses an ARMA filter and a method for obtaining the parameters for such a filter. The first step of this method involves performing an inverse discrete Fourier transform of the arbitrarily selected frequency spectrum of amplitude to obtain a truncated sequence of coefficients of a stable pure moving-average filter model, i.e. the parameters of a non-recursive filter model. The truncated sequence of coefficients, which has N+1 terms, is then convolved with a random sequence to obtain an output associated with the random sequence. A time-domain, convergent parameter identification is then performed, in a manner that minimizes an integral error function norm, to obtain the near-minimum-order autoregressive and moving-average parameters of the model having the desired amplitude- and phase-frequency responses. The parameters are identified off-line. The object of this embodiment is to provide a minimum or near-minimum stable ARMA filter. The parameters are determined in a batch filter program. 
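The fast frequency-domain transform mentioned earlier, Welch's averaged periodogram, is readily available; a small sketch (segment length and sampling rate are illustrative choices, not values from the text):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
x = rng.standard_normal(1 << 14)          # unit-variance white test signal

# Averaging modified periodograms over 50%-overlapping segments trades
# frequency resolution for a much lower variance of the PSD estimate,
# which matters here because the pole-zero fit is sensitive to outliers.
f, S_hat = welch(x, fs=2.0, nperseg=256)  # one-sided PSD on [0, 1]
```

For unit-variance white noise and fs = 2.0, the one-sided density estimate hovers around 1 across the band.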
In general, estimating a power spectral density function differs from characterizing a linear system in that, inter alia, in such a characterization, the input and output signals are available and used, whereas in estimating a power spectral density function, only the power spectral density function is available (not an associated input signal). The aforementioned and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter. In the drawings: Fig. 1 shows an illustrative embodiment comprising an audio encoder according to the invention; Fig. 2 shows an illustrative embodiment comprising an audio player according to the invention; Fig. 3 shows an illustrative embodiment of an audio system according to the invention; and Fig. 4 shows an exemplary mapping function m. The drawings only show those elements that are necessary to understand the invention. The invention is preferably applied in audio and speech coding schemes in which synthetic noise generation is employed. Typically, the audio signal is coded on a frame-to-frame basis. The power spectral density function (or a possibly non-uniformly sampled version thereof) of the noise in a frame is estimated, and a best approximation of the function from a set of squared amplitude responses of a certain class of filters is found. In one embodiment of the invention, an iterative procedure is used to estimate an ARMA model based on existing low-complexity techniques for fitting AR and MA models to the power spectral density function. Fig. 1 shows an exemplary audio encoder 2 according to the invention. An audio signal A is obtained from an audio source 1, such as a microphone, a storage medium, a network, etc. The audio signal A is input to the audio encoder 2. The audio signal A is parametrically modeled in the audio encoder 2 on a frame-to-frame basis. A coding unit 20 comprises an analysis unit (AU) 200 and a synthesis unit (SU) 201. 
The AU 200 performs an analysis of the audio signal and determines basic waveforms in the audio signal A. Further, the AU 200 produces waveform parameters or coefficients Ci to represent the basic waveforms. The waveform parameters Ci are furnished to the SU 201 to obtain a reconstructed audio signal, which consists of synthesized basic waveforms. This reconstructed audio signal is furnished to a subtractor 21 to be subtracted from the original audio signal A. The resulting rest signal S is regarded as a noise component of the audio signal A. In a preferred embodiment, the coding unit 20 comprises two stages: one that performs transient modeling, and another that performs sinusoidal modeling on the audio signal after subtraction of the modeled transient components. According to an aspect of the invention, the power spectral density function of the noise component S in the audio signal A is ARMA modeled, resulting in autoregressive parameters pi and moving-average parameters qi. The spectrum of the noise component S is modeled according to the invention in a noise analyzer (NA) 22 to obtain filter parameters (pi,qi). The estimation of the parameters (pi,qi) is performed by determining filter parameters of a filter in the NA 22 which has a transfer function H^-1 that makes the signal S after filtering, i.e. H^-1(S), spectrally as flat as possible, i.e. "whitening the frequency spectrum". In a decoder, a reconstructed noise component can be generated which has approximately the same properties as the noise component S, by filtering white noise with a filter with a transfer function H that is the inverse of the filter used in the encoder. The filtering operation of this inverse filter is determined by the ARMA parameters pi and qi. The filter parameters (pi,qi) are included together with the waveform parameters Ci in an encoded audio signal A' in a multiplexer 23. 
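The whitening in the NA 22 and the inverse shaping used at the decoder side can be illustrated as follows. The ARMA polynomials A(z) and B(z) below are illustrative stand-ins for (pi,qi), chosen minimum-phase so that both H = B/A and H^-1 = A/B are stable; this is a sketch, not the patent's implementation:

```python
import numpy as np
from scipy.signal import lfilter, welch

# Illustrative ARMA parameters standing in for (pi, qi): A(z) is the
# autoregressive (denominator) polynomial and B(z) the moving-average
# (numerator) polynomial.
a = np.array([1.0, -0.9])
b = np.array([1.0, 0.4])

rng = np.random.default_rng(1)
e = rng.standard_normal(1 << 15)

s = lfilter(b, a, e)          # a noise component with spectrum |B/A|^2
residual = lfilter(a, b, s)   # NA-22-style whitening with H^-1 = A(z)/B(z)
# starting from rest, the cascade (B/A)(A/B) is the identity, so the
# residual equals the white excitation again

y = rng.standard_normal(1 << 15)   # decoder-side white noise
s_rec = lfilter(b, a, y)           # NS-41-style shaping with H = B(z)/A(z)
```

The shaped noise s_rec has (approximately) the same spectral envelope as s, even though it is generated from an independent white noise sequence, which is exactly the property the decoder relies on.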
The audio stream A' is furnished from the audio encoder to an audio player over a communication channel 3, which may be a wireless connection, a data bus, a storage medium, etc. An embodiment comprising an audio player 4 according to the invention is shown in Fig. 2. An audio signal A' is obtained from the communication channel 3 and de-multiplexed in a demultiplexer 40 to obtain the parameters (pi,qi) and the waveform parameters Ci that are included in the encoded audio signal A'. The parameters (pi,qi) are furnished to a noise synthesizer (NS) 41. The NS 41 is mainly a filter with a transfer function H. A white noise signal y is input to the NS 41. The filtering operation of the NS 41 is determined by the ARMA parameters (pi,qi). By filtering the white noise y with the NS 41, which is the inverse of the filter (NA) 22 used in the encoder 2, a noise component S' is generated which has approximately the same stochastic properties as the noise component S in the original audio signal A. The noise component S' is added in an adder 43 to other reconstructed components, which are e.g. obtained from a synthesis unit (SU) 42, to obtain a reconstructed audio signal A''. The SU 42 is similar to the SU 201. The reconstructed audio signal A'' is furnished to an output 5, which may be a loudspeaker, etc. Fig. 3 shows an audio system according to the invention comprising an audio encoder 2 as shown in Fig. 1 and an audio player 4 as shown in Fig. 2. Such a system offers playing and recording features. The communication channel 3 may be part of the audio system, but will often be outside the audio system. In case the communication channel 3 is a storage medium, the storage medium may be fixed in the system or be a removable disc, memory stick, tape, etc. Below, the modeling of the spectrum of S is further described. Suppose S is a power spectral density function of a discrete-time real-valued signal. Further, S is a real-valued function defined on the interval I = (-π, π). 
S is assumed to be symmetric, with min(S) > 0 and max(S) < ∞, and normalized such that ∫I ln S(ω) dω = 0 (1). The extension to cases with a mean on the log scale unequal to zero is straightforward and can be handled in various ways. Note that S can be derived from samples of an actually measured power spectral density function by suitable interpolation and normalization. An all-pole model of S can be estimated by Forward Linear Prediction (FLP), which is an example of an LPC method. Therefore, the polynomial A can be found by calculating (or at least approximating) the autocorrelation function associated with S and solving the Wiener-Hopf equations. The qualitative results of such a procedure are also well known. The above-sketched procedure will give good approximations to the peaks of S (when measured or visualized on a logarithmic scale) but usually provides only poor fits to the valleys of S. To conclude the above, a standard procedure is available for estimating an all-pole model from the power spectral density function, which provides an approximation to the optimal solution with (2) and which basically is good at modeling the peaks of S. It is noted that peaks and valleys of ln S have essentially the same characteristic except for a reversal of sign: a peak is a positive excursion, whereas a trough is a negative one. Consequently, by taking the reciprocal 1/S, an all-zero model can be estimated using the above-sketched procedure for an all-pole model. From the result of this procedure, a good fit to the valleys of S is expected, but only poor or at most fair fits to the peaks of S. An object of the invention is to provide a good representation of S for both the peaks and the valleys. In an embodiment of the invention, an ARMA model is provided in which all-pole modeling and all-zero modeling are combined in the following way: S is split into two parts, SA and SB, where SA is modeled by an all-pole filter and SB by an all-zero filter. According to a preferred aspect of the invention, the split of S is performed in an iterative process. 
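A minimal sketch of this all-pole estimation from a sampled power spectrum, together with the reciprocal trick for the all-zero model, could look as follows. It assumes S is sampled uniformly on [0, π] inclusive; the function names are illustrative:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_from_spectrum(S, order):
    """All-pole fit to a power spectrum S sampled uniformly on [0, pi].

    The autocorrelation is obtained by inverse FFT of the symmetric
    extension of S; the Wiener-Hopf (Yule-Walker) equations then give
    the prediction coefficients.  Returns the polynomial [1, a1, ..., ap].
    """
    full = np.concatenate([S, S[-2:0:-1]])      # extend to the full circle
    r = np.fft.ifft(full).real                  # autocorrelation sequence
    g = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return np.concatenate([[1.0], -g])

def ma_from_spectrum(S, order):
    # all-zero model: apply the same all-pole procedure to the
    # reciprocal spectrum 1/S, as described in the text
    return ar_from_spectrum(1.0 / S, order)
```

On an exactly rational spectrum 1/|A(e^jω)|² with well-damped poles, the fit recovers A almost exactly, since the aliasing of the autocorrelation is negligible.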
The iteration step is called i. At each step of the iteration, a new split SA,i and SB,i is generated and the corresponding estimates Ai and Bi are calculated. In this way, from S those parts that can be modeled accurately by the all-pole model are excluded from contributing to SB. Similarly, those parts of S that could be modeled by an all-zero filter are excluded from SA. From SA,i and SB,i the functions Ai and Bi are estimated. In this way, parts which in the previous iteration could not be modeled appropriately are swapped. The best fit to S of these four candidate filters is defined as the one with minimum error; the associated filter is the final result of step i. Preferably, Hi (and thus Ai and Bi) is selected as the candidate with minimum error. Any common stop criterion can be used, e.g. a maximum number of iterations, a sufficient accuracy of the current estimate, or insufficient progress in going from one step to another. A slightly different procedure performs the AR and MA modeling alternately: if the previous step returned a refined estimate of the numerator Bi-1, then the next step refines the denominator estimate, and vice versa. There are many alternatives to initialize the iterative scheme. Without limitation, the following possibilities are mentioned. First, a simple way of initializing is provided by taking SA,0 = S and SB,0 = 1, or SA,0 = 1 and 1/SB,0 = S. Next, A0 and B0 are calculated. From these two initial estimates, a best fit (according to some criterion) is chosen. In this way, the first guess is either an all-pole or an all-zero model. Third, since SA should contain the peaks and SB the valleys, a favorable split is to attribute everything above a mean logarithmic level (e.g. above zero) to SA,0 and anything below said level to SB,0. This division may be made at the global logarithmic mean, but also at some local logarithmic mean. 
Fourth, a further splitting process takes into account that in power spectral density functions on a logarithmic scale, poles and zeros close to the unit circle give rise to pronounced peaks and valleys, respectively. The data S is split on the notion that peaks and valleys in log S are more appropriately handled by the all-pole and the all-zero model, respectively. A symmetric mapping function m gives equal weight to pole and zero behavior on a log scale. However, non-symmetric functions can be used as well and have the effect of giving more weight to either the pole or the zero modeling. An exemplary mapping function m is shown in Fig. 4. Positive excursions (peaks) of P are mostly attributed to PA and, consequently, modeled by the all-pole filter. Negative excursions (valleys) of P are mostly attributed to PB and, consequently, modeled by the all-zero filter. From PA and PB, SA and SB are constructed and, next, A0 and B0 are calculated. There are two limiting cases of m (which are similar to the second and the third initializations mentioned above). The proposed spectrum modeling is very apt at modeling peaks and valleys since, basically, these constitute the patterns generated by the degrees of freedom offered by the poles and zeros. Consequently, the procedure is sensitive to outliers: rather than being smoothed, these will appear in the approximation. Therefore, the input data S has to be an accurate estimate (in the sense of a small ratio of standard deviation and mean per frequency sample), or S must be preprocessed (e.g. smoothed) in order to suppress undesired modeling of outliers. This observation holds especially if the number of degrees of freedom in the model is relatively large with respect to the number of data points on which the power spectral density function is based. Convergence cannot be established without knowledge of the actual optimization steps A and B and the selection criterion. It is not guaranteed that the error reduces at every step in the iteration process. In many cases, it is desired to have a good approximation of the power spectral density function on a logarithmically scaled frequency axis. 
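Two of the ingredients above can be sketched together: the split at the mean logarithmic level (the third initialization) and a simple outlier-suppressing smoothing of ln S. Both are minimal illustrative readings; the hard split at level zero and the window width are assumptions, not prescriptions from the text:

```python
import numpy as np

def initial_split(S):
    """Split a spectrum with zero logarithmic mean (cf. (1)): values
    above the mean log level go to the all-pole target SA,0, values
    below it to the all-zero target SB,0.  By construction SA * SB == S."""
    log_s = np.log(S)
    s_a = np.exp(np.maximum(log_s, 0.0))   # peaks; valleys flattened to 1
    s_b = np.exp(np.minimum(log_s, 0.0))   # valleys; peaks flattened to 1
    return s_a, s_b

def smooth_log_spectrum(S, width=5):
    """Moving-average smoothing of ln S to suppress outliers before
    pole-zero fitting (illustrative preprocessing; width is a guess)."""
    kernel = np.ones(width) / width
    padded = np.pad(np.log(S), width // 2, mode="edge")
    return np.exp(np.convolve(padded, kernel, mode="valid"))
```

Working in ln S keeps both operations consistent with the logarithmic error measure used throughout the text.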
For example, it is common practice to evaluate the result of a fit on a spectrum visually in the form of a Bode plot. Similarly, for audio and speech applications, the preferred scale would be a Bark or Equivalent Rectangular Bandwidth (ERB) scale, which is more or less a logarithmic scale. The method according to the invention is suitable for frequency-warped modeling. The spectral density measurements can be calculated on any frequency grid whatsoever. Under the condition that the frequency warping is close to that of a first-order all-pass section, the model can be unwarped while maintaining the order of the ARMA model. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of other elements or steps than those listed in a claim. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a device claim enumerating several means, several of these means can be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. 
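The first-order all-pass warping mentioned above maps the frequency axis through the phase response of the all-pass section. A sketch (the warping coefficient lam is an illustrative parameter, e.g. tuned so the warped axis approximates a Bark scale):

```python
import numpy as np

def warped_frequency(w, lam):
    """Frequency map of the first-order all-pass
    z^-1 -> (z^-1 - lam) / (1 - lam * z^-1), i.e. the (negated) phase of
    the section evaluated at e^{jw}.  lam in (-1, 1); lam = 0 is the
    identity map, positive lam stretches the low-frequency region."""
    return w + 2.0 * np.arctan(lam * np.sin(w) / (1.0 - lam * np.cos(w)))
```

The map is monotone and fixes 0 and π, which is why an ARMA model fitted on the warped axis can be mapped back without changing its order.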
In summary, encoding an audio signal is provided, wherein basic waveforms in the audio signal are determined, a noise component is obtained from the audio signal by subtracting the basic waveforms from the audio signal, a spectrum of the noise component is modeled by determining autoregressive and moving-average parameters, and the autoregressive and the moving-average parameters are included in an encoded audio signal together with waveform parameters representing the basic waveforms. WE CLAIM: 1. A method of encoding (2) an audio signal (A), comprising the steps of: determining (200) basic waveforms in the audio signal (A); obtaining (21) a noise component (S) from the audio signal (A) by subtracting (21) the basic waveforms from the audio signal (A); modeling (22) a spectrum of the noise component (S) by determining autoregressive and moving-average parameters (pi,qi); and including (23) the autoregressive and the moving-average parameters (pi,qi), and waveform parameters (Ci) representing the basic waveforms, in an encoded audio signal (A'). 2. A method of decoding (4) an encoded audio signal (A'), comprising the steps of: receiving (40) an encoded audio signal (A') comprising waveform parameters (Ci) representing basic waveforms and autoregressive and moving-average parameters (pi,qi) representing a spectrum of a remaining noise component; filtering (41) a white noise signal (y) to obtain a reconstructed noise component (S'), which filtering is determined by the autoregressive parameters (pi) and the moving-average parameters (qi); synthesizing (42) basic waveforms based on the waveform parameters (Ci); and adding (43) the reconstructed noise component (S') to the synthesized basic waveforms to obtain a decoded audio signal (A''). 3. 
An audio encoder (2) comprising: means (200) for determining basic waveforms in the audio signal (A); means (21) for obtaining a noise component (S) from the audio signal (A) by subtracting (21) the basic waveforms from the audio signal (A); means (22) for modeling a spectrum of the noise component (S) by determining autoregressive and moving-average parameters (pi,qi); and means (23) for including the autoregressive and the moving-average parameters (pi,qi), and waveform parameters (Ci) representing the basic waveforms, in an encoded audio signal (A'). 4. An audio player (4) comprising: means (40) for receiving an encoded audio signal (A') comprising waveform parameters (Ci) representing basic waveforms and autoregressive and moving-average parameters (pi,qi) representing a spectrum of a noise component; means (41) for filtering a white noise signal (y) to obtain a reconstructed noise component (S'), which filtering is determined by the autoregressive parameters (pi) and the moving-average parameters (qi); means (42) for synthesizing basic waveforms based on the waveform parameters (Ci); and means (43) for adding the reconstructed noise component (S') to the synthesized basic waveforms to obtain a decoded audio signal (A''). 5. An audio system comprising an audio encoder (2) as claimed in claim 3 and an audio player (4) as claimed in claim 4. 6. An encoded audio signal (A') comprising: waveform parameters (Ci) representing basic waveforms; and autoregressive parameters and moving-average parameters (pi,qi) representing a spectrum of a remaining noise component (S). 

Patent Number  216149  

Indian Patent Application Number  IN/PCT/2002/83/CHE  
PG Journal Number  13/2008  
Publication Date  31-Mar-2008  
Grant Date  10-Mar-2008  
Date of Filing  15-Jan-2002  
Name of Patentee  KONINKLIJKE PHILIPS ELECTRONICS N.V.  
Applicant Address  Groenewoudseweg 1, NL-5621 BA Eindhoven,  
Inventors:


PCT International Classification Number  G10L 21/02  
PCT International Application Number  PCT/EP00/04601  
PCT International Filing date  2000-05-17  
PCT Conventions:
