Title of Invention

"A SYSTEM AND METHOD FOR MINIMIZING COMPUTATIONS REQUIRED FOR COMPRESSION OF MOTION VIDEO FRAME SEQUENCES"

Abstract
The present invention provides a system and method for minimizing the computations required for compression of motion video frame sequences, wherein processing requirements are reduced and the reduction depends on the content being processed. The method performs motion estimation of a current video image using a search window of the previous video image. The first step of the method is the formation of the mean pyramids of the reference macroblock and the search area. This is followed by a full search at the lowest resolution. The number of CMVs propagated to the lower (higher resolution) levels depends on the QADE of the current macroblock and on the maximum distortion band obtained during training for that QADE value at that particular level. Training over a sequence is triggered at the beginning of every sequence and is required to determine the value of the maximum distortion band for all QADEs of the macroblocks occurring over the training frames.
Field of the invention:
The present invention relates to a system and method for minimizing computations required for compression of motion video frame sequences. The invention is suitable for use in low bit rate coding of compressed digital video images such as in coders using the H.263 source-coding model. Specifically, this invention describes a method for adapting the computational complexity involved in the motion estimation operation, which is required for generating the motion vectors used in video coding.
Background of the invention:
The presence of multimedia capabilities on mobile terminals opens up a spectrum of applications, such as video-conferencing, video telephony, security monitoring, information broadcast and other such services. Video compression techniques enable the efficient transmission of digital video signals. Video compression algorithms take advantage of spatial correlation among adjacent pixels in order to derive a more efficient representation of the important information in a video signal. The most powerful compression systems not only take advantage of spatial correlation, but also utilize temporal correlations among adjacent frames to further boost the compression ratio. In such systems, differential encoding is used to transmit only the difference between an actual frame and a prediction of the actual frame. The prediction is based on information derived from a previous frame of the same video sequence.
In motion compensation systems, motion vectors are derived by comparing a portion (i.e., a macroblock) of pixel data from a current frame to similar portions (i.e., the search area) of the previous frame. A motion estimator determines the closest match of the reference macroblock in the present image using the pixels in the previous image. The criterion used to evaluate similarity is usually the Mean Absolute Difference between the reference macroblock and the pixels in the search area corresponding to that search position. The use of motion vectors is very effective in reducing the amount of data to be transmitted.
The MPEG-4 simple profile, which is intended for wireless video applications, is representative of the current level of technology in low bit rate, error resilient video coding. From the viewpoint of system design, all the proposed techniques have to be implemented in a highly power constrained, battery operated environment. Hence, to prolong battery life, we need to make the proposed techniques data aware, such that system and algorithm parameters are modified depending on the data being processed.
The source coding model of the MPEG-4 simple profile (which is based on the H.263 standard) employs block based motion compensation for exploiting temporal redundancy and the discrete cosine transform for exploiting spatial redundancy. The motion estimation process is computationally intensive and accounts for a large percentage of the total encoding computations. Hence there is a need for developing methods that accurately compute the motion vectors in a computationally efficient manner.
The FSBM technique for determining the motion vectors is the most computationally intensive technique among all known techniques, but it gives the best results as it evaluates all the possible search positions in the given search region. Techniques based on the unimodal error surface assumption, such as the N-step search and the logarithmic search, achieve a large, fixed magnitude of computational reduction irrespective of the content being processed. However, the drop in PSNR due to local minima problems leads to perceptible differences in visual quality, especially for high activity sequences.
The multiresolution motion estimation technique of finding the motion vectors is computationally efficient compared to the FSBM algorithm. In this technique, coarse values of the motion vectors are obtained by performing the motion vector search on a low resolution representation of the reference macroblock and the search area. This estimate is progressively refined at higher resolutions by searching within a small area around these coarse motion vectors (also referred to as candidate motion vectors, CMVs) obtained from the higher level. The number of candidate motion vectors propagated to the higher resolution images is usually fixed by the algorithm designer to be a single number, irrespective of the sequence or the macroblock characteristics. Each CMV contributes a determinate number of computations. Hence, with a prefixed number of CMVs, either the PSNR obtained may be low if few CMVs are propagated, or the computational complexity becomes large if many CMVs are propagated. In a power constrained environment, propagating many CMVs would reduce battery life. Hence, fixed solutions for multiresolution motion estimation either have a high power requirement if the PSNR is to be maintained, or may result in poor image quality when a fixed, low computational complexity technique is used.
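As a rough illustration of this trade-off (assuming the 16x16 macroblock size and the +/-1 refinement window quoted later in this description, which are not stated in this paragraph), each CMV propagated to the full-resolution level entails evaluating the MAD at the nine positions of a +/-1 window, i.e.
$$ 9 \times 16 \times 16 = 2304 \ \text{absolute-difference operations per CMV,} $$
so the per-macroblock cost at a level grows roughly linearly with the number of CMVs propagated to it.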

The object and summary of the invention:
The object of the present invention is to provide an efficient low power motion estimation of a video frame sequence wherein processing requirements are minimised while maintaining picture quality at desired levels.
Second object of the present invention is to provide a system and method of scaling computations in the technique of multi-resolution mean pyramid depending on the video content being processed.
Another object of the present invention to provide a system and method for reducing the computations required in determining the motion vectors associated with reference macroblocks having low frequency content over the macroblock.
To achieve the said objective, this invention provides a system for minimizing computations required for compression of motion video frame sequences involving motion estimation using a multi-resolution mean pyramid technique while maintaining at least a pre-defined picture quality level by dynamically adjusting the number of Candidate Motion Vectors propagated to each higher resolution level, comprising:
means for establishing a relationship between quantized values of the frequency content of the reference macro-blocks in said video frames and the distortion levels resulting from the mean pyramid averaging process,
means for determining the frequency content of each said macro-block,
means for predicting the distortion resulting from mean pyramid generation over said frequency content using said relationship,
means for computing the limiting Mean Absolute Difference value for maintaining picture quality using said predicted distortion value, and
means for propagating those motion vectors whose Mean Absolute Difference value falls below said limiting Mean Absolute Difference value.

The said relationship is established using a training sequence of video frames, comprising:
means for generating mean pyramids on the reference blocks and on the corresponding search area at each level,
means for generating deviation pyramids for said reference block by computing the mean deviation of each pixel at a given level from the corresponding pixels at the lower level,
means for computing the Average Deviation Estimate at each resolution level by averaging said deviation pyramid values at that level,
means for quantizing said Average Deviation Estimate value so as to determine the Quantized Average Deviation Estimate for the corresponding reference block,
means for computing the corresponding Mean Absolute Difference for all search positions at the lowest resolution level,
means for propagating the maximum allowed number of candidate motion vectors, corresponding to the lowest Mean Absolute Difference values, to the next higher resolution level,
means for computing Mean Absolute Difference values at search positions around the Candidate Motion Vector positions obtained from the lower resolution level,
means for identifying those search positions in each level that correspond to the least Mean Absolute Difference obtained at the highest resolution level as the final motion vector position for that level, and the corresponding Mean Absolute Difference value as the corresponding Mean Absolute Difference for that level,
means for computing distortion as the difference between the corresponding Mean Absolute Difference and the minimum Mean Absolute Difference at each level, and
means for saving the maximum of the distortion values obtained at each level over all training frames, corresponding to each Quantized Average Deviation Estimate value, in a Look-up table.
The said frequency content is determined by means for computing the Quantized Average Deviation Estimate for each macro-block in said video frame.

The said distortion level is predicted by means for extracting the estimated distortion value corresponding to said frequency content using said relationship.
The said limiting Mean Absolute Difference value for each level is obtained by means for incrementing the minimum computed Mean Absolute Difference at that level by said predicted distortion value.
The said training sequence is re-triggered whenever the Frame Average Mean Absolute Difference Variation over said sequence exceeds a pre-defined threshold value over a few frames, said Frame Average Mean Absolute Difference Variation being determined by means for computing the difference between the Frame Average Mean Absolute Difference value for the current frame and the Delayed-N-Frame Average Mean Absolute Difference value for the previous 'N' frames, where the Frame Average Mean Absolute Difference is the average of the averaged Mean Absolute Difference values for all the reference macro-blocks in a frame and the Delayed-N-Frame Average Mean Absolute Difference is the average of the Frame Average Mean Absolute Difference values for the previous 'N' frames.
The said Quantized Average Deviation Estimate is a value obtained using means for quantizing the average of the mean deviation of the mean pyramid values from the original pixel values, over said reference macro-block.
The said estimated distortion value is obtained by means of a look-up table that matches Quantized Average Deviation Estimate values to predicted distortion values.
The present invention also provides a method for minimizing computations required for compression of motion video frame sequences involving motion estimation using a multi-resolution mean pyramid technique while maintaining at least a predefined picture quality level by dynamically adjusting the number of Candidate Motion Vectors propagated to each higher resolution level comprising:
establishing a relationship between quantized values of the frequency content of the reference macro-blocks in said video frames and the distortion levels resulting from the mean pyramid averaging process,
determining the frequency content of each said macro-block,
predicting the distortion resulting from mean pyramid generation over said frequency content using said relationship,
computing the limiting Mean Absolute Difference value for maintaining picture quality using said predicted distortion value, and
propagating those Candidate Motion Vectors whose Mean Absolute Difference value falls below said limiting Mean Absolute Difference value.
The said relationship is established using a training sequence of video frames, comprising the steps of:
generating mean pyramids on the reference blocks and on the corresponding search area at each level,
generating deviation pyramids for said reference block by computing the mean deviation of each pixel at a given level from the corresponding pixels at the lower level,
computing the Average Deviation Estimate at each resolution level by averaging said deviation pyramid values at that level,
quantizing said Average Deviation Estimate value so as to determine the Quantized Average Deviation Estimate for the corresponding reference block,
computing the corresponding Mean Absolute Difference for all search positions at the lowest resolution level,
propagating the maximum allowed number of candidate motion vectors, corresponding to the lowest Mean Absolute Difference values, to the next higher resolution level,
computing Mean Absolute Difference values at search positions around the Candidate Motion Vector positions obtained from the lower resolution level,
identifying those search positions in each level that correspond to the least Mean Absolute Difference obtained at the highest resolution level as the final motion vector position for that level, and the corresponding Mean Absolute Difference value as the corresponding Mean Absolute Difference for that level,
computing distortion as the difference between the corresponding Mean Absolute Difference and the minimum Mean Absolute Difference at each level, and
saving the maximum of the distortion values obtained at each level over all training frames, corresponding to each Quantized Average Deviation Estimate value, in a Look-up table.
The said frequency content is determined by computing the Quantized Average Deviation Estimate for each macro-block in said video frame.
The said distortion level is predicted by extracting the estimated distortion value corresponding to said frequency content using said relationship established during training.
The said limiting Mean Absolute Difference for each level is equal to the minimum computed Mean Absolute Difference at that level incremented by said predicted distortion value.
The said training sequence is re-triggered whenever the Frame Average Mean Absolute Difference Variation over said sequence exceeds a pre-defined threshold value over a few frames, said Frame Average Mean Absolute Difference Variation being the difference between the Frame Average Mean Absolute Difference value for the current frame and the Delayed-N-Frame Average Mean Absolute Difference value for the previous 'N' frames, where the Frame Average Mean Absolute Difference is the average of the averaged Mean Absolute Difference values for all the reference macro-blocks in a frame and the Delayed-N-Frame Average Mean Absolute Difference is the average of the Frame Average Mean Absolute Difference values for the previous 'N' frames.
The said Quantized Average Deviation Estimate is a value obtained after quantizing the average of the mean deviation of the mean pyramid values from the original pixel values, over said reference macro-block.
The said estimated distortion value is obtained from a look-up table that matches Quantized Average Deviation Estimate values to predicted distortion values.
Brief description of the drawings:
FIG. 1 illustrates the process of generation of mean pyramids for the reference macroblock and the process of performing multiresolution motion estimation.
FIG. 2 illustrates the schematic diagram of the proposed motion estimator.
FIG. 3 illustrates the variation of average CMVs per frame for different frames, across different levels and for different sequences.
Detailed description of the drawings:
A diagrammatic view of the process of generating the mean pyramid is shown in Fig. 1(a). In equation form:
where L denotes the pyramid level, k denotes the frame number, p and q are pixel positions, and N_H and N_V denote the horizontal and vertical frame sizes respectively.
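The equation itself is not reproduced in this text; a standard mean-pyramid construction consistent with the variables just defined (a sketch, not necessarily the exact original formula) is
$$ g_L^{k}(p,q) \;=\; \frac{1}{4}\sum_{u=0}^{1}\sum_{v=0}^{1} g_{L-1}^{k}(2p+u,\,2q+v), \qquad 0 \le p < \frac{N_H}{2^{L}},\; 0 \le q < \frac{N_V}{2^{L}}, $$
with $g_0^{k}$ denoting the original frame $k$.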
Mean Absolute Difference (MAD) of pixel values is used as a measure to determine the motion vectors (MV) at each level L and is given by
(Formula Removed)
where m, n denote the search coordinates for the macroblock at the position (i,j), s_L is the level dependent search range, and I, J denote the macroblock height and width respectively.
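A typical form of the level-L MAD consistent with these definitions (a sketch; the original formula is not reproduced and may differ in notation) is
$$ \mathrm{MAD}_L(i,j;m,n) \;=\; \frac{1}{IJ}\sum_{x=0}^{I-1}\sum_{y=0}^{J-1}\Bigl| g_L^{k}(i+x,\,j+y) \;-\; g_L^{k-1}(i+x+m,\,j+y+n) \Bigr|, \qquad -s_L \le m,n \le s_L, $$
and the motion vector at level L is the displacement $(m,n)$ that minimizes this quantity.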
FSBM is performed at the highest level of the mean pyramid in order to detect random motion and obtain a low-cost estimate of the motion associated with the macroblock. This estimate is progressively refined at lower levels by searching within a small area around the motion vector obtained from the higher level. This process is shown in Fig. 1(b) where 2 CMVs are propagated from every level to the next lower level. In equation form,
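Since the refinement equation is not reproduced in this text, the following minimal sketch illustrates the search just described. It assumes three pyramid levels, 16x16 macroblocks, a +/-4 full search at the coarsest level, +/-1 refinement windows, and a fixed number of propagated CMVs; all function and parameter names are illustrative, and it is not the adaptive scheme of the invention, which is described below.

```python
# Illustrative sketch only (not the patented adaptive scheme): a three-level
# mean-pyramid motion search with a fixed number of CMVs propagated per level.
# Block size, search ranges and the number of levels are assumptions.
import numpy as np

def mean_pyramid(frame, levels=3):
    """Build a mean pyramid; level 0 is the original frame."""
    pyr = [frame.astype(np.float64)]
    for _ in range(1, levels):
        f = pyr[-1]
        h, w = f.shape[0] // 2 * 2, f.shape[1] // 2 * 2   # crop to even size
        f = f[:h, :w]
        pyr.append((f[0::2, 0::2] + f[0::2, 1::2] +
                    f[1::2, 0::2] + f[1::2, 1::2]) / 4.0)
    return pyr

def mad(cur, ref, i, j, m, n, bs):
    """MAD between the (bs x bs) block at (i, j) in the current frame and the
    block displaced by (m, n) in the previous (reference) frame."""
    if i + m < 0 or j + n < 0:
        return np.inf
    blk = cur[i:i + bs, j:j + bs]
    cand = ref[i + m:i + m + bs, j + n:j + n + bs]
    if cand.shape != blk.shape:        # candidate falls outside the frame
        return np.inf
    return float(np.abs(blk - cand).mean())

def multires_search(cur_pyr, ref_pyr, i, j, bs=16, top_range=4, num_cmvs=2):
    """Full search at the coarsest level, then refine the propagated CMVs
    within a +/-1 window at every lower (higher-resolution) level."""
    top = len(cur_pyr) - 1
    scale = 2 ** top
    cands = [(mad(cur_pyr[top], ref_pyr[top], i // scale, j // scale,
                  m, n, bs // scale), m, n)
             for m in range(-top_range, top_range + 1)
             for n in range(-top_range, top_range + 1)]
    cands.sort()
    cmvs = [(m, n) for _, m, n in cands[:num_cmvs]]
    for lvl in range(top - 1, -1, -1):
        scale = 2 ** lvl
        refined = []
        for m0, n0 in cmvs:
            for dm in (-1, 0, 1):
                for dn in (-1, 0, 1):
                    m, n = 2 * m0 + dm, 2 * n0 + dn
                    refined.append((mad(cur_pyr[lvl], ref_pyr[lvl],
                                        i // scale, j // scale,
                                        m, n, bs // scale), m, n))
        refined.sort()
        cmvs = [(m, n) for _, m, n in refined[:num_cmvs]]
    return cmvs[0]          # best full-resolution motion vector
```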
Since we determine the number of CMVs to be propagated based on the frequency content of the reference macroblock, we need to estimate this quantity. The deviation pyramid, which is used to estimate the frequency characteristics of the macroblock being matched, is defined as:

(Formula Removed)
The deviation pyramid measures the deviation of the mean from the original pixel values. It is representative of the error introduced by the process of averaging. In order to obtain a single quantity representative of the frequency content of the macroblock, we sum up the deviation pyramid values generated for the reference macroblock at each level. A reference macroblock with low frequencies sums up to a small value, whereas the presence of high frequencies results in a large aggregate.
The Average Deviation Estimate (ADE) of the macroblock at position (i, j) is given by
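The defining formulas are not reproduced in this text; forms consistent with the description above (sketches, where $d$ denotes the deviation pyramid and the reference macroblock at level $L$ spans $(I/2^{L}) \times (J/2^{L})$ pixels) would be
$$ d_L^{k}(p,q) \;=\; \frac{1}{4}\sum_{u=0}^{1}\sum_{v=0}^{1}\Bigl| g_{L-1}^{k}(2p+u,\,2q+v) \;-\; g_L^{k}(p,q) \Bigr|, $$
$$ \mathrm{ADE}_L(i,j) \;=\; \frac{2^{2L}}{IJ}\sum_{(p,q)\,\in\,\mathrm{MB}_L(i,j)} d_L^{k}(p,q), $$
where $\mathrm{MB}_L(i,j)$ is the set of level-$L$ pixel positions covered by the reference macroblock at $(i,j)$.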
In order to estimate the content complexity characteristics, we define a term called the distortion band, which gives the difference between the minimum MAD found at a particular level and the MAD value corresponding to the correct motion vector position. This quantity is given by Eq. (7). This value, if known, can be used to determine the threshold MAD value, and all motion vectors whose MAD falls below this value can be passed to the lower level as CMVs. The distortion band value needs to be predicted, and in our method we predict the distortion band value using the ADE defined earlier. The relationship between the distortion band value and the ADE can be non-linear for a particular video sequence, and this relation is learnt during training.
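Eq. (7) is not reproduced in this text; a form consistent with the definition above (a sketch) is
$$ \mathrm{DB}_L(i,j) \;=\; \mathrm{MAD}_L\bigl(i,j;\,m^{c}_{L},\,n^{c}_{L}\bigr) \;-\; \min_{m,n}\,\mathrm{MAD}_L(i,j;m,n), $$
where $(m^{c}_{L}, n^{c}_{L})$ denotes the level-$L$ search position corresponding to the correct (final) motion vector.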
The relationship between the ADE values and the distortion band is determined during the training phase and the method is as given below:
During training, the maximum allowed number of CMVs is propagated between all adjacent levels, and based on the final value of the motion vector, the distortion band at each of the levels can be calculated as in Eq. (7). The ADE axis is uniformly quantized, and the maximum value of the distortion band at each of the quantized ADE (QADE) values is stored in memory.
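A minimal sketch of this training step is given below; the ADE quantization step, the number of QADE bins and the name lut are hypothetical, and the distortion-band values are assumed to come from running the multiresolution search with the maximum number of CMVs, as described above.

```python
# Illustrative training-phase sketch: build a per-level table that maps each
# QADE bin to the maximum distortion band observed over the training frames.
QADE_STEP = 2.0     # uniform quantization step for the ADE axis (assumed)
NUM_BINS = 64       # number of QADE bins (assumed)

def quantize_ade(ade, step=QADE_STEP, num_bins=NUM_BINS):
    """Uniformly quantize an ADE value to a bin index."""
    return min(int(ade / step), num_bins - 1)

# lut[level][qade] holds the maximum distortion band seen during training
lut = {1: [0.0] * NUM_BINS, 2: [0.0] * NUM_BINS}

def train_update(level, ade, distortion_band):
    """Record the distortion band of one training macroblock at one level."""
    q = quantize_ade(ade)
    lut[level][q] = max(lut[level][q], distortion_band)
```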
The schematic diagram of the proposed motion estimator is shown in Fig. 2. The functioning of each module is described below.
1] Distortion Band Estimator:
The distortion band estimator is operational during the training of the motion estimator. During the training phase, the number of CMVs passed between adjacent levels is kept at the maximum value for all macroblocks and at all levels.
The distortion band estimator matches the decimated value of the final motion vector with the CMVs passed between adjacent levels and hence determines the MAD of the correct solution at each level. Based on the MAD of the correct solution and the best solution at every level, the distortion band is calculated as in Eq.(7). During normal operation the distortion band estimator is turned off.
2] Distortion Band Predictor:
The distortion band predictor is look-up table based. The table stores the maximum value of the distortion band corresponding to each of the QADEs at levels 1 and 2 obtained during training. During normal operation, the distortion band for the current reference macroblock is
predicted based on the QADE value of the macroblock, using the maximum value obtained during training corresponding to that particular QADE value at that level.
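A minimal sketch of the prediction and CMV selection during normal operation, reusing the hypothetical lut and quantize_ade of the previous sketch, is:

```python
def predict_distortion_band(level, ade):
    """Look up the maximum distortion band recorded during training for the
    QADE bin of this macroblock at this level."""
    return lut[level][quantize_ade(ade)]

def select_cmvs(candidates, level, ade, max_cmvs=9):
    """candidates: list of (mad, motion_vector) pairs computed at this level.
    Keep those whose MAD lies within the predicted band of the best MAD."""
    candidates = sorted(candidates)
    threshold = candidates[0][0] + predict_distortion_band(level, ade)
    kept = [mv for mad_val, mv in candidates if mad_val <= threshold]
    return kept[:max_cmvs]      # never exceed the maximum allowed CMVs
```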
3] QADE:
For every reference macroblock, the QADE is determined for both levels 1 and 2. Determining the QADE at levels 1 and 2 involves Eqs. (5)-(6) followed by uniform quantization.
4] Content (Complexity) Change Detector:
On-line learning is performed at the beginning of every sequence. The content change detector is used to detect content complexity change within a sequence. It uses the Frame Average MAD (FAM), Delayed N Frame Average MAD (DNFAM) and Frame Average MAD Variation (FAMV) which are defined below:
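The defining equations are not reproduced in this text; definitions consistent with the surrounding description (a sketch, with the magnitude taken in FAMV as an assumption) are
$$ \mathrm{FAM}(k) = \frac{1}{M}\sum_{(i,j)} \mathrm{MAD}_0\bigl(i,j;\,m^{*}_{i,j},\,n^{*}_{i,j}\bigr), \qquad \mathrm{DNFAM}(k) = \frac{1}{N}\sum_{t=k-N}^{k-1} \mathrm{FAM}(t), $$
$$ \mathrm{FAMV}(k) = \bigl| \mathrm{FAM}(k) - \mathrm{DNFAM}(k) \bigr|. $$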

Here M represents the number of macroblocks per frame and (m*, n*) represent the motion vectors corresponding to the best MAD values for the macroblock at position (i,j). Re-training is initiated whenever the value of FAMV consistently crosses a preset threshold over a few frames. Delayed N-frame averages are used to accurately determine local variations in FAM values. Computationally, determining FAM results in a single addition per macroblock. DNFAM and FAMV are computed once every frame. Hence the computational overhead of this block is very low.
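A minimal sketch of such a detector is shown below; the threshold, the window length N and the persistence count are illustrative assumptions, not values from the source.

```python
from collections import deque

class ContentChangeDetector:
    """Flags retraining when FAMV stays above a threshold for several frames."""
    def __init__(self, n_frames=5, famv_threshold=2.0, persistence=3):
        self.fams = deque(maxlen=n_frames)   # FAM values of the last N frames
        self.famv_threshold = famv_threshold
        self.persistence = persistence
        self.count = 0

    def update(self, frame_avg_mad):
        """Feed the FAM of the current frame; return True to trigger retraining."""
        retrain = False
        if len(self.fams) == self.fams.maxlen:
            dnfam = sum(self.fams) / len(self.fams)
            famv = abs(frame_avg_mad - dnfam)
            self.count = self.count + 1 if famv > self.famv_threshold else 0
            if self.count >= self.persistence:
                retrain, self.count = True, 0
        self.fams.append(frame_avg_mad)
        return retrain
```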
In simulations, the maximum number of CMVs passed is fixed at 9. Search data is reused at all levels of the search pyramid. The refinement range at levels 0 and 1 is fixed at ±1 along both axes. As a result, the search areas due to 2 CMVs at levels 1 and 0 can overlap in a maximum of 3 out of the 9 positions. This event is detected when either the x or the y component of the two CMVs has the same value and the difference in the other component is unity. When such a condition occurs, the search area of one of the CMVs is reduced correspondingly in order to eliminate the redundant computations.
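A small sketch of this overlap test (function name and CMV representation are illustrative) is:

```python
def refinement_windows_overlap(cmv_a, cmv_b):
    """True when the +/-1 refinement windows of two CMVs share 3 of the 9
    positions: one component equal and the other differing by exactly one."""
    (ax, ay), (bx, by) = cmv_a, cmv_b
    return (ax == bx and abs(ay - by) == 1) or (ay == by and abs(ax - bx) == 1)
```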
In order to estimate the speedup factor, it is assumed that the addition operation involved in pyramid generation contributes half the cost of the basic operation involved in the MAD calculation, which consists of an absolute difference operation followed by an addition.
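Under this assumption, the operation counts can be combined as follows (an interpretation of the stated weighting, not a formula from the source):
$$ \text{cost} \;\approx\; N_{\mathrm{MAD\ operations}} \;+\; \tfrac{1}{2}\,N_{\mathrm{pyramid\ additions}}, \qquad \text{speedup} \;=\; \frac{\text{cost}_{\mathrm{FSBM}}}{\text{cost}_{\mathrm{proposed}}}. $$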
Simulation results are given in Tables 1-3 below. All PSNR values quoted are for the Y-component of the image.

(Table Removed)
TABLE 2: Computational complexity comparisons using average CMVs and speedup factors
Table 2 shows that the proposed algorithm scales computations depending on the complexity of the sequence. The reduction in average computations per macroblock for the FSBM and the N-step algorithm due to the macroblocks at the frame edges (which have smaller search regions) is taken into consideration while computing these speedup factors. The close PSNR match between FSBM and the proposed algorithm, and the range of computational scaling, validate the utility of content-specific training.
Fig. 3 shows the variation of average CMVs per frame for levels 1 and 0 for two sequences. The discontinuities in the curves denote the retraining frames, where 9 CMVs are passed for all macroblocks at both the levels 1 and 2. Depending on the content, there is a large variation in the average CMV value within the FOREMAN sequence. The content change detector is able to determine the frame positions where the content, and hence the content complexity, changes significantly, and triggers retraining at those points.
Method for eliminating computations for sequences with plain backgrounds:
Sequences with plain backgrounds generate a large number of search positions that give approximately the same MAD, and this leads to an increase in the CMVs. Solutions to this problem of plain backgrounds which have been proposed in the literature use a static/non-static classifier operating at the macroblock level in the FSBM framework. The drop in PSNR is highly dependent on the accuracy of the classifier, which makes a static/non-static decision based on the similarity between the higher order bits of the reference macroblock pixels and the exactly overlapping position of the search area. Two solutions for this phenomenon are as follows:
Solution A:
When the QADE at both levels 1 and 2 is zero and the best MAD at level 2 is less than a threshold, threshold_level2, the CMVs passed to level 1 are limited to two. When the QADE at level 1 is zero and the best MAD at level 1 is less than a threshold, threshold_level1, one CMV is passed to level 0.
Solution B:
This solution is similar to Solution A except that under the same conditions at level 1, the best motion vector is interpolated at level 1 to obtain the final MV position. This completely eliminates computations at level 0.
These solutions are based on the reasoning that if the current best matches are likely to give a high PSNR and the deviation values are low, then the final best solution is unlikely to be significantly different from the best solution tracked at the current level.
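A minimal sketch of these two rules is given below; the threshold names follow the text, while the default CMV count and the return convention (0 CMVs meaning that level 0 is skipped and the level-1 motion vector is interpolated) are assumptions.

```python
def cmvs_for_level1(qade_l2, qade_l1, best_mad_l2, threshold_level2, default=9):
    """Solution A, first condition: limit CMVs passed from level 2 to level 1."""
    if qade_l2 == 0 and qade_l1 == 0 and best_mad_l2 < threshold_level2:
        return 2
    return default

def cmvs_for_level0(qade_l1, best_mad_l1, threshold_level1, solution='A', default=9):
    """Solution A passes one CMV to level 0; Solution B skips level 0 entirely
    and interpolates the level-1 motion vector (returned here as 0 CMVs)."""
    if qade_l1 == 0 and best_mad_l1 < threshold_level1:
        return 0 if solution == 'B' else 1
    return default
```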

(Table Removed)
TABLE 3: Speedup factors and PSNR drop with background elimination
Table 3 shows that the speedup factor improves significantly for all sequences with plain backgrounds, with a small drop in PSNR compared to FSBM.
Advantages of the present invention:
1) Computations for some benchmark sequences are reduced by a factor of around 70 compared to the full-search block matching motion estimation algorithm.
2) The reduction in computations does not lead to a drastic drop in PSNR, as seen with the use of N-step search; the PSNR from the instant method is maintained close to that obtained from the FSBM algorithm.
LEGEND:
MAD: Mean Absolute Difference
FSBM: Full Search Block Matching
CMV: Candidate Motion Vector
QADE: Quantized Average Deviation Estimate
FAM: Frame Average MAD
FAMV: Frame Average MAD Variation
DNFAM: Delayed N Frame Average MAD






We claim:
1. A system for minimizing computations required for compression of motion video frame sequences involving motion estimation using a multi-resolution mean pyramid technique while maintaining at least a pre-defined picture quality level by dynamically adjusting the number of Candidate Motion Vectors propagated to each higher resolution level, comprising:
means for establishing a relationship between quantized values of the frequency content of the reference macro-blocks in said video frames and the distortion levels resulting from the mean pyramid averaging process,
means for determining the frequency content of each said macro-block,
means for predicting the distortion resulting from mean pyramid generation over said frequency content using said relationship,
means for computing the limiting Mean Absolute Difference value for maintaining picture quality using said predicted distortion value, and
means for propagating those motion vectors whose Mean Absolute Difference value falls below said limiting Mean Absolute Difference value.
2. The system as claimed in claim 1, wherein said relationship is established using a training sequence of video frames, comprising:
means for generating mean pyramids on the reference blocks and on the corresponding search area at each level,
means for generating deviation pyramids for said reference block by computing the mean deviation of each pixel at a given level from the corresponding pixels at the lower level,
means for computing the Average Deviation Estimate at each resolution level by averaging said deviation pyramid values at that level,
means for quantizing said Average Deviation Estimate value so as to determine the Quantized Average Deviation Estimate for the corresponding reference block,
means for computing the corresponding Mean Absolute Difference for all search positions at the lowest resolution level,
means for propagating the maximum allowed number of candidate motion vectors, corresponding to the lowest Mean Absolute Difference values, to the next higher resolution level,
means for computing Mean Absolute Difference values at search positions around the Candidate Motion Vector positions obtained from the lower resolution level,
means for identifying those search positions in each level that correspond to the least Mean Absolute Difference obtained at the highest resolution level as the final motion vector position for that level, and the corresponding Mean Absolute Difference value as the corresponding Mean Absolute Difference for that level,
means for computing distortion as the difference between the corresponding Mean Absolute Difference and the minimum Mean Absolute Difference at each level, and
means for saving the maximum of the distortion values obtained at each level over all training frames, corresponding to each Quantized Average Deviation Estimate value, in a Look-up table.
3. The system as claimed in claim 1, wherein said frequency content is determined by means for computing Quantized Average Deviation Estimate for each macro block in said video frame.
4. The system as claimed in claim 1, wherein said distortion level is predicted by means for extracting the estimated distortion value corresponding to said frequency content using said relationship.
5. The system as claimed in claim 1, wherein said limiting Mean Absolute Difference value for each level is obtained by means for incrementing the minimum computed Mean Absolute Difference at that level by said predicted distortion value.
6. The system as claimed in claim 2, wherein said training sequence is re-triggered whenever the Frame Average Mean Absolute Difference Variation over said sequence exceeds a pre-defined threshold value over a few frames, said Frame Average Mean Absolute Difference Variation being determined by means for computing the difference between the Frame Average Mean Absolute Difference value for the current frame and the Delayed-N-Frame Average Mean Absolute Difference value for the previous 'N' frames, where the Frame Average Mean Absolute Difference is the average of the averaged Mean Absolute Difference values for all the reference macro-blocks in a frame and the Delayed-N-Frame Average Mean Absolute Difference is the average of the Frame Average Mean Absolute Difference values for the previous 'N' frames.
7. The system as claimed in claim 3, wherein said Quantized Average Deviation Estimate is a value obtained using means for quantizing the average of the mean deviation of the mean pyramid values from the original pixel values, over said reference macro-block.
8. The system as claimed in claim 4, wherein said estimated distortion value is obtained by means of a look-up table that matches Quantized Average Deviation Estimate values to predicted distortion values.
9. A method for minimizing computations required for compression of motion video frame sequences involving motion estimation using a multi-resolution mean pyramid technique while maintaining at least a predefined picture quality level by dynamically adjusting the number of Candidate Motion Vectors propagated to each higher resolution level comprising:
establishing a relationship between quantized values of the frequency content of the reference macro-blocks in said video frames and the distortion levels resulting from the mean pyramid averaging process,
determining the frequency content of each said macro-block,
predicting the distortion resulting from mean pyramid generation over said frequency content using said relationship,
computing the limiting Mean Absolute Difference value for maintaining picture quality using said predicted distortion value, and
propagating those Candidate Motion Vectors whose Mean Absolute Difference value falls below said limiting Mean Absolute Difference value.
10. The method as claimed in claim 9, wherein said relationship is established using a training sequence of video frames, comprising the steps of:
generating mean pyramids on the reference blocks and on the corresponding search area at each level,
generating deviation pyramids for said reference block by computing the mean deviation of each pixel at a given level from the corresponding pixels at the lower level,
computing the Average Deviation Estimate at each resolution level by averaging said deviation pyramid values at that level,
quantizing said Average Deviation Estimate value so as to determine the Quantized Average Deviation Estimate for the corresponding reference block,
computing the corresponding Mean Absolute Difference for all search positions at the lowest resolution level,
propagating the maximum allowed number of candidate motion vectors, corresponding to the lowest Mean Absolute Difference values, to the next higher resolution level,
computing Mean Absolute Difference values at search positions around the Candidate Motion Vector positions obtained from the lower resolution level,
identifying those search positions in each level that correspond to the least Mean Absolute Difference obtained at the highest resolution level as the final motion vector position for that level, and the corresponding Mean Absolute Difference value as the corresponding Mean Absolute Difference for that level,
computing distortion as the difference between the corresponding Mean Absolute Difference and the minimum Mean Absolute Difference at each level, and
saving the maximum of the distortion values obtained at each level over all training frames, corresponding to each Quantized Average Deviation Estimate value, in a Look-up table.
11. The method as claimed in claim 9, wherein said frequency content is determined by computing the Quantized Average Deviation Estimate for each macro-block in said video frame.
12. The method as claimed in claim 9, wherein said distortion level is predicted by extracting the estimated distortion value corresponding to said frequency content using said relationship established during training.
13. The method as claimed in claim 9, wherein said limiting Mean Absolute Difference for each level is equal to the minimum computed Mean Absolute Difference at that level incremented by said predicted distortion value.
14. The method as claimed in claim 9, wherein said training sequence is re-triggered whenever the Frame Average Mean Absolute Difference Variation over said sequence exceeds a pre-defined threshold value over a few frames, said Frame Average Mean Absolute Difference Variation being the difference between the Frame Average Mean Absolute Difference value for the current frame and the Delayed-N-Frame Average Mean Absolute Difference value for the previous 'N' frames, where the Frame Average Mean Absolute Difference is the average of the averaged Mean Absolute Difference values for all the reference macro-blocks in a frame and the Delayed-N-Frame Average Mean Absolute Difference is the average of the Frame Average Mean Absolute Difference values for the previous 'N' frames.
15. The method as claimed in claim 11, wherein said Quantized Average Deviation Estimate is a value obtained after quantizing the average of the mean deviation of the mean pyramid values from the original pixel values, over said reference macro-block.
16. The method as claimed in claim 12, wherein said estimated distortion value is obtained from a look-up table that matches Quantized Average Deviation Estimate values to predicted distortion values.
17. A system for minimizing computations required for compression of motion video frame sequences substantially as herein described with reference to and as illustrated in the accompanying drawings.
18. A method for minimizing computations required for compression of motion video frame sequences substantially as herein described with reference to and as illustrated in the accompanying drawings.

Patent Number: 247266
Indian Patent Application Number: 538/DEL/2001
PG Journal Number: 13/2011
Publication Date: 01-Apr-2011
Grant Date: 30-Mar-2011
Date of Filing: 30-Apr-2001
Name of Patentee: STMicroelectronics Ltd., an Indian company
Applicant Address: PLOT NO. 2 & 3, SECTOR 16 A, INSTITUTIONAL AREA, NOIDA-201301, UTTAR PRADESH
Inventors:
# Inventor's Name Inventor's Address
1 PAUL SATHYA #115, 6th MAIN, 7th CROSS, MALLESWARAM, BANGALORE, INDIA
2 ARSHAD AHMED 317, 100 FT RD, BANASHANKARI 3rd STAGE, 6th BLOCK, 3rd PHASE, BANGALORE 560085
3 SOUMITRA KUMAR NANDY 67, 4th MAIN, AGO LAYOUT, RMV 2nd STAGE, BANGALORE 560054
PCT International Classification Number H04N 7/12
PCT International Application Number N/A
PCT International Filing date
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 NA