Title of Invention

METHOD AND APPARATUS FOR INTERPOLATED FRAME DEBLOCKING OF VIDEO DATA

Abstract A method and apparatus to enhance the quality of interpolated video, constructed from decompressed video data, comprising denoising the interpolated video data, are described. A low pass filter is used to filter the interpolated video data. In one embodiment, the level of filtering of the low pass filter is determined based on a boundary strength value determined for the interpolated video data and neighboring video data (interpolated and/or non-interpolated). In one aspect of this embodiment, the boundary strength is determined based on proximity of reference video data for the interpolated video data and the neighboring video data.
Full Text FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10, rule 13)
"INTERPOLATED FRAME DEBLOCKING OPERATION IN FRAME RATE UP CONVERSION APPLICATION"
QUALCOMM INCORPORATED
an American company of 5775 Morehouse Drive, San Diego, California 92121 (United States of America)
The following specification particularly describes the invention and the manner in which it is to be performed.

WO 2006/099321 PCT/US2006/008946
INTERPOLATED FRAME DEBLOCKING OPERATION IN FRAME RATE UP CONVERSION APPLICATION
CLAIM OF PRIORITY UNDER 35 U.S.C. §119
[0001] The present Application for Patent claims priority to Provisional Application No. 60/660,909, filed March 10, 2005, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.
BACKGROUND OF THE INVENTION
Field of the Invention
[0002] The invention relates to data compression in general, and to denoising processed video in particular.
Description of the Related Art
[0003] Block-based compression may introduce artifacts at block boundaries, particularly if the correlation across block boundaries is not taken into consideration.
[0004] Scalable video coding is acquiring widespread acceptance in low bit rate applications, particularly in heterogeneous networks with varying bandwidths (e.g., Internet and wireless streaming). Scalable video coding enables coded video to be transmitted as multiple layers. Typically, a base layer contains the most valuable information and occupies the least bandwidth (lowest bit rate for the video), and enhancement layers offer refinements over the base layer. Most scalable video compression technologies exploit the fact that the human visual system is more forgiving of noise (due to compression) in high frequency regions of the image than in the flatter, low frequency regions. Hence, the base layer predominantly contains low frequency information and high frequency information is carried in enhancement layers. When network bandwidth falls short, there is a higher probability of receiving just the base layer of the coded video (no enhancement layers).
[0005] If enhancement layer or base layer video information is lost due to channel conditions or dropped to conserve battery power, any of several types of interpolation techniques may be employed to replace the missing data. For example, if an enhancement layer frame is lost, then data representing another frame, such as a base

layer frame, could be used to interpolate data for replacing the missing enhancement layer data. Interpolation may comprise interpolating motion compensated prediction data. The replacement video data may typically suffer from artifacts due to imperfect interpolation.
[0006] As a result, there is a need for post-processing algorithms for denoising interpolated data so as to reduce and/or eliminate interpolation artifacts.
SUMMARY OF THE INVENTION
[0007] A method of processing video data is provided. The method includes interpolating video data and denoising the interpolated video data. In one aspect, the interpolated video data comprises first and second blocks, and the method includes determining a boundary strength value associated with the first and second blocks and denoising the first and second blocks by using the determined boundary strength value.
[0008] A processor for processing video data is provided. The processor is configured to interpolate video data, and denoise the interpolated video data. In one aspect, the interpolated video data includes first and second blocks, and the processor is configured to determine a boundary strength value associated with the first and second blocks, and denoise the first and second blocks by using the determined boundary strength value.
[0009] An apparatus for processing video data is provided. The apparatus includes an interpolator to interpolate video data, and a denoiser to denoise the interpolated video data. In one aspect, the interpolated video data comprises first and second blocks, and the apparatus includes a determiner to determine a boundary strength value associated with the first and second blocks, and the denoiser denoises the first and second blocks by using the determined boundary strength value.
[0010] An apparatus for processing video data is provided. The apparatus includes means for interpolating video data, and means for denoising the interpolated video data. In one aspect, the interpolated video data includes first and second blocks, and the apparatus includes means for determining a boundary strength value associated with the first and second blocks, and means for denoising the first and second blocks by using the determined boundary strength value.

[0011] A computer readable medium embodying a method of processing video data is provided. The method includes interpolating video data, and denoising the interpolated video data. In one aspect, the interpolated video data comprises first and second blocks, and the method includes determining a boundary strength value associated with the first and second blocks, and denoising the first and second blocks by using the determined boundary strength value.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] FIG. 1 is an illustration of an example of a video decoder system for decoding and displaying streaming video.
[0013] FIG. 2 is a flowchart illustrating an example of a process for performing denoising of interpolated video data to be displayed on a display device.
[0014] FIG. 3A shows an example of motion vector interpolation used in some embodiments of the process of Figure 2.
[0015] FIG. 3B shows an example of spatial interpolation used in some embodiments of the process of Figure 2.
[0016] FIG. 4 is an illustration of pixels adjacent to vertical and horizontal 4x4 block boundaries.
[0017] FIGS. 5A, 5B and 5C illustrate reference block locations used in determining boundary strength values in some embodiments of the process of Figure 2.
[0018] FIGS. 6A and 6B are flowcharts illustrating examples of processes for determining boundary strength values.
[0019] FIG. 7 illustrates an example method for processing video data.
[0020] FIG. 8 illustrates an example apparatus for processing video data.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0021] A method and apparatus to enhance the quality of interpolated video, constructed from decompressed video data, comprising denoising the interpolated video data, are described. A low pass filter is used to filter the interpolated video data. In one example, the level of filtering of the low pass filter is determined based on a boundary strength value determined for the interpolated video data and neighboring video data (interpolated and/or non-interpolated). In one aspect of this


example, the boundary strength is determined based on proximity of reference video data for the interpolated video data and the neighboring video data. In the following description, specific details are given to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, electrical components may be shown in block diagrams in order not to obscure the embodiments in unnecessary detail. In other instances, such components, other structures and techniques may be shown in detail to further explain the embodiments. It is also understood by skilled artisans that electrical components, which are shown as separate blocks, can be rearranged and/or combined into one component.
[0022] It is also noted that some embodiments may be described as a process, which is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently and the process can be repeated. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination corresponds to a return of the function to the calling function or the main function.
[0023] FIG. 1 is a block diagram of a video decoder system for decoding streaming data. The system 100 includes decoder device 110, network 150, external storage 185 and a display 190. Decoder device 110 includes a video interpolator 155, a video denoiser 160, a boundary strength determiner 165, an edge activity determiner 170, a memory component 175, and a processor 180. Processor 180 generally controls the overall operation of the example decoder device 110. One or more elements may be added, rearranged or combined in decoder device 110. For example, processor 180 may be external to decoder device 110.
[0024] Figure 2 is a flowchart illustrating an example of a process for performing denoising of interpolated video data to be displayed on a display device. With reference to Figures 1 and 2, process 300 begins at step 305 with the receiving of encoded video data. The processor 180 can receive the encoded video data (such as MPEG-4 or H.264 compressed video data) from the network 150 or an image source

such as the internal memory component 175 or the external storage 185. The encoded video data may be MPEG-4 or H.264 compressed video data. Here, the memory component 175 and/or the external storage 185 may be a digital video disc (DVD) or a hard-disk drive that contains the encoded video data.
[0025] Network 150 can be part of a wired system such as telephone, cable, and fiber optic, or a wireless system. In the case of wireless communication systems, network 150 can comprise, for example, part of a code division multiple access (CDMA or CDMA2000) communication system or, alternately, the system can be a frequency division multiple access (FDMA) system, an orthogonal frequency division multiple access (OFDMA) system, a time division multiple access (TDMA) system such as GSM/GPRS (General Packet Radio Service)/EDGE (Enhanced Data GSM Environment) or TETRA (Terrestrial Trunked Radio) mobile telephone technology for the service industry, a wideband code division multiple access (WCDMA) system, a high data rate (1xEV-DO or 1xEV-DO Gold Multicast) system, or in general any wireless communication system employing a combination of techniques.
[0026] Process 300 continues at step 310 with decoding of the received video data, wherein at least some of the received video data may be decoded and used as reference data for constructing interpolated video data, as will be discussed below. In one example, the decoded video data comprises texture information such as luminance and chrominance values of pixels. The received video data may be intra-coded data where the actual video data is transformed (using, e.g., a discrete cosine transform, a Hadamard transform, a discrete wavelet transform or an integer transform such as used in H.264), or it can be inter-coded data (e.g., using motion compensated prediction) where a motion vector and residual error are transformed. Details of the decoding acts of step 310 are known to those of skill in the art and will not be discussed further herein.
[0027] Process 300 continues at step 315 where the decoded reference data is interpolated. In one example, interpolation at step 315 comprises interpolation of motion vector data from reference video data. In order to illustrate interpolation of motion vector data, a simplified example will be used. Figure 3A shows an example of motion vector interpolation used in step 315. Frame 10 represents a frame at a first temporal point in a sequence of streaming video. Frame 20 represents a frame at

a second temporal point in the sequence of streaming video. Motion compensated prediction routines, known to those of skill in the art, may be used to locate a portion of video containing an object 25A in frame 10 that closely matches a portion of video containing an object 35 in frame 20. A motion vector 40 locates the object 25A in frame 10 relative to the object 35 in frame 20 (a dashed outline labeled 25C in frame 20 is used to illustrate the relative location of objects 25A and 35). If frame 10 and frame 20 are located a time "T" from each other in the sequence, then a frame 15, located in between frames 10 and 20, can be interpolated based on the decoded video data in frame 10 and/or frame 20. For example, if frame 15 is located at a point in time midway between frames 10 and 20 (a time T/2 from both), then the pixel data of object 35 (or object 25A) could be located at a point located by motion vector 45, which may be determined through interpolation to be half the size of, and in the same heading as, motion vector 40 (a dashed outline labeled 25B in frame 15 is used to illustrate the relative location of objects 25A and 30). Since object 35 was predicted based on object 25A (represented as a motion vector pointing to object 25A and a residual error added to the pixel values of object 25A), object 25A and/or object 35 could be used as reference portions for interpolating object 30 in frame 15. As would be clear to those of skill in the art, other methods of interpolating motion vector and/or residual error data of one or more reference portions (e.g., using two motion vectors per block as in bi-directional prediction) can be used in creating the interpolated data at step 315.
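The temporal scaling of motion vector 40 into motion vector 45 described above can be sketched as follows (a minimal illustration; the function name and tuple representation are ours, not from the specification):

```python
def scale_motion_vector(mv, t_interp, t_ref):
    """Scale a reference motion vector (dx, dy) for an interpolated frame
    that lies t_interp into a t_ref-long interval between two reference
    frames.  For a frame midway between the references (t_interp = t_ref/2),
    this halves the vector, as with motion vector 45 above."""
    scale = t_interp / t_ref
    return (mv[0] * scale, mv[1] * scale)

# A frame midway between the references gets half the displacement.
halved = scale_motion_vector((8, -4), t_interp=1.0, t_ref=2.0)  # (4.0, -2.0)
```

The same scaling would apply per block; bi-directional cases would scale a forward and a backward vector separately.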
[0028] In another example, interpolation at step 315 comprises combining pixel values located in a different spatial region of the video frame. Figure 3B shows an example of spatial interpolation used in step 315 of the process 300. A frame 50 contains a video image of a house 55. A region of the video data, labeled 60, is missing, e.g., due to data corruption. Features 65 and 70, which are located near the missing portion 60, may be used as reference portions to spatially interpolate region 60. Interpolation could be simple linear interpolation between the pixel values of regions 65 and 70. In another example, pixel values located in temporal frames different from the frame containing the missing data can be combined (e.g., by averaging) to form the interpolated pixel data. Interpolating

means such as the video interpolator 155 of Figure 1 may perform the interpolation acts of step 315.
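The simple linear interpolation between regions 65 and 70 mentioned above can be sketched as a hypothetical helper (not from the specification) that fills one row of missing pixels from its known left and right neighbors:

```python
def fill_row_linear(left, right, width):
    """Linearly interpolate `width` missing pixel values between a known
    pixel value on the left and one on the right, as when reconstructing
    a row of region 60 from neighboring features 65 and 70."""
    step = (right - left) / (width + 1)
    return [left + step * (i + 1) for i in range(width)]

# Three missing pixels between values 0 and 100:
row = fill_row_linear(0, 100, 3)  # [25.0, 50.0, 75.0]
```

A full two-dimensional fill would apply this along rows and/or columns, or blend it with temporally co-located pixels as the paragraph above notes.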
[0029] Besides motion vectors, other temporal prediction methods such as optical flow data and image morphing data may also be utilized for interpolating video data. Optical flow interpolation may transmit the velocity field of pixels in an image over time. The interpolation may be pixel-based, derived from the optical flow field for a given pixel. The interpolation data may comprise speed and directional information.
[0030] Image morphing is an image processing technique used to compute a transformation from one image to another. Image morphing creates a sequence of intermediate images which, when put together with the original images, represents the transition from one image to the other. The method identifies the mesh points of the source image, and warping functions of the points for a non-linear interpolation; see Wolberg, G., "Digital Image Warping", IEEE Computer Society Press, 1990.
[0031] Steps 320, 325 and 330 are optional steps used with some embodiments of denoising performed at step 335 and will be discussed in detail below. Continuing to step 335, the interpolated video data is denoised so as to remove artifacts that may have resulted from the interpolation acts of step 315. Denoising means such as the video denoiser 160 of Figure 1 may perform the acts of step 335. Denoising may comprise one or more methods known to those of skill in the art, including deblocking to reduce blocking artifacts, deringing to reduce ringing artifacts, and methods to reduce motion smear. After denoising, the denoised video data is displayed, e.g., on the display 190 as shown in Figure 1.
[0032] An example of denoising at step 335 comprises using a deblocking filter, for example, the deblocking filter of the H.264 video compression standard. The deblocking filter specified in H.264 requires decision trees that determine the activity along block boundaries. As originally designed in H.264, block edges with image activity beyond set thresholds are not filtered or are weakly filtered, while those along low activity blocks are strongly filtered. The filters applied can be, for example, 3-tap or 5-tap low pass Finite Impulse Response (FIR) filters.
[0033] Figure 4 is an illustration of pixels adjacent to vertical and horizontal 4x4 block boundaries (a current block "q" and a neighboring block "p"). Vertical

boundary 200 represents any boundary between two side-by-side 4x4 blocks. Pixels 202, 204, 206 and 208, labeled p0, p1, p2 and p3 respectively, lie to the left of vertical boundary 200 (in block "p") while pixels 212, 214, 216 and 218, labeled q0, q1, q2 and q3 respectively, lie to the right of vertical boundary 200 (in block "q"). Horizontal boundary 220 represents any boundary between two 4x4 blocks, one directly above the other. Pixels 222, 224, 226 and 228, labeled p0, p1, p2 and p3 respectively, lie above horizontal boundary 220 while pixels 232, 234, 236 and 238, labeled q0, q1, q2 and q3 respectively, lie below horizontal boundary 220. In an embodiment of deblocking in H.264, the filtering operations affect up to three pixels on either side of, above or below the boundary. Depending on the quantizer used for transformed coefficients, the coding modes of the blocks (intra or inter coded), and the gradient of image samples across the boundary, several outcomes are possible, ranging from no pixels filtered to filtering pixels p0, p1, p2, q0, q1 and q2.
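As a rough sketch of the kind of low pass filtering involved (a generic 3-tap [1, 2, 1]/4 smoother applied to the boundary pixels of Figure 4, not the exact H.264 filter equations):

```python
def filter_boundary_3tap(p, q):
    """Apply a simple 3-tap [1, 2, 1]/4 low pass filter across a block
    boundary.  p = [p0, p1, p2, p3] counts away from the boundary on one
    side, q = [q0, q1, q2, q3] on the other, as in Figure 4.  Only p0 and
    q0 are modified here, mimicking a weak deblocking filter; integer
    arithmetic with +2 for rounding."""
    p0 = (p[1] + 2 * p[0] + q[0] + 2) // 4
    q0 = (p[0] + 2 * q[0] + q[1] + 2) // 4
    return p0, q0

# A step of 10 -> 20 across the boundary is pulled toward the middle:
smoothed = filter_boundary_3tap([10, 10, 10, 10], [20, 20, 20, 20])  # (13, 18)
```

A stronger filter would also update p1, p2, q1 and q2, consistent with the "up to three pixels" behavior described above.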
[0034] Deblocking filter designs for block based video compression predominantly follow a common principle: the measuring of intensity changes along block edges, followed by a determination of the strength of the filter to be applied, and then by the actual low pass filtering operation across the block edges. The deblocking filter reduces blocking artifacts through smoothing (low pass filtering across) of block edges. A measurement, known as boundary strength, is determined at step 320. Boundary strength values may be determined based on content of the video data, or on the context of the video data. In one aspect, higher boundary strengths result in higher levels of filtering (e.g., more blurring). Parameters affecting the boundary strength include context and/or content dependent situations, such as whether the data is intra-coded or inter-coded, where intra-coded regions are generally filtered more heavily than inter-coded portions. Other parameters affecting the boundary strength measurement are the coded block pattern (CBP), which is a function of the number of non-zero coefficients in a 4 by 4 pixel block, and the quantization parameter.
[0035] In order to avoid blurring of edge features in the image, an optional edge activity measurement may be performed at step 325 and low pass filtering (at the denoising step 335) is normally applied in non-edge regions (the lower the edge activity measurement in the region, the stronger the filter used in the denoising at step

335). Details of boundary strength determination and edge activity determination are known to those of ordinary skill in the art and are not necessary to understand the disclosed method. At step 330, the boundary strength measurement and/or the edge activity measurement are used to determine the level of denoising to be performed at step 335. Through modifications to the deblocking parameters, such as the boundary strength and/or edge activity measurements, interpolated regions can be effectively denoised. Process 300 may conclude by displaying, at step 340, the denoised interpolated video data. One or more elements may be added, rearranged or combined in process 300.
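One common form of the edge activity test at step 325 (modeled on, but not identical to, H.264's sample-gradient conditions) compares gradients near the boundary against thresholds and skips filtering where a real image edge is likely:

```python
def should_filter(p1, p0, q0, q1, alpha, beta):
    """Decide whether to low pass filter across the boundary between
    pixels p0 and q0 (p1 and q1 are their outward neighbors, as in
    Figure 4).  A large step |p0 - q0| suggests a true image edge that
    filtering would blur; alpha and beta are illustrative activity
    thresholds, not values from the specification."""
    return (abs(p0 - q0) < alpha
            and abs(p1 - p0) < beta
            and abs(q1 - q0) < beta)

# Small blocking step: filter.  Large true edge: leave alone.
blocky = should_filter(101, 100, 104, 103, alpha=8, beta=4)   # True
edge = should_filter(50, 52, 200, 198, alpha=8, beta=4)       # False
```

Step 330 would then combine such a flag with the boundary strength to select no filter, a weak filter, or a strong filter.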
[0036] Figures 5A, 5B and 5C show illustrations of reference block locations used in determining boundary strength values at step 320 in some embodiments of the process of Figure 2 where the denoising act of step 335 comprises deblocking. The scenarios depicted in Figures 5A, 5B and 5C are representative of motion compensated prediction with one motion vector per reference block, as discussed above in relation to Figure 3A. In Figures 5A, 5B and 5C, a frame being interpolated 75 is interpolated based on a reference frame 80. An interpolated block 77 is interpolated based on a reference block 81, and an interpolated block 79, which is a neighboring block of block 77, is interpolated based on a reference block 83. In Figure 5A, the reference blocks 81 and 83 are also neighboring. This is indicative of video images that are stationary between the interpolated frame 75 and the reference frame 80. In this case, the boundary strength may be set low so that the level of denoising is low. In Figure 5B, the reference blocks 81 and 83 are overlapped so as to comprise common video data. Overlapped blocks may be indicative of some slight motion, and the boundary strength may be set higher than for the case in Figure 5A. In Figure 5C, the reference blocks 81 and 83 are apart from each other (non-neighboring blocks). This is an indication that the images are not closely associated with each other and blocking artifacts could be more severe. In the case of Figure 5C, the boundary strength would be set to a value resulting in more deblocking than the scenarios of Figures 5A or 5B. A scenario not shown in any of Figures 5A-5C comprises reference blocks 81 and 83 from different reference frames. This case may be treated in a similar manner to the case shown in Figure 5C, or the boundary strength value may be determined to be a value that results in more deblocking than the case shown in Figure 5C.


[0037] Figure 6A is a flowchart illustrating an example of a process for determining boundary strength values for the situations shown in Figures 5A, 5B and 5C with one motion vector per block. The process shown in Figure 6A may be performed in step 320 of the process 300 shown in Figure 2. With reference to Figures 5 and 6, a check is made at decision block 405 to determine if the reference blocks 81 and 83 are also neighboring blocks. If they are neighboring blocks, as shown in Figure 5A, then the boundary strength is set to zero at step 407. In those embodiments where the neighboring reference blocks 81 and 83 are already denoised (deblocked in this example), the denoising of the interpolated blocks 77 and 79 at step 335 may be omitted. If the reference blocks 81 and 83 are not neighboring reference blocks, then a check is made at decision block 410 to determine if the reference blocks 81 and 83 are overlapped. If the reference blocks 81 and 83 are overlapped, as shown in Figure 5B, then the boundary strength is set to one at step 412. If the reference blocks are not overlapped (e.g., the reference blocks 81 and 83 are apart in the same frame or in different frames), then the process continues at decision block 415. A check is made at decision block 415 to determine if one or both of the reference blocks 81 and 83 are intra-coded. If one of the reference blocks is intra-coded, then the boundary strength is set to two at step 417; otherwise the boundary strength is set to three at step 419. In this example, neighboring blocks that are interpolated from reference blocks located proximal to each other are denoised at lower levels than blocks interpolated from separated reference blocks.
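The decision tree of Figure 6A can be paraphrased as a short function (a sketch of steps 405-419 above; the argument encoding is ours, not from the specification):

```python
def boundary_strength_one_mv(ref_relation, any_intra):
    """Boundary strength per the Figure 6A decision tree, one motion
    vector per block.

    ref_relation -- 'neighboring', 'overlapped', or 'apart': how the two
                    blocks' reference blocks relate (references apart in
                    the same frame or in different frames both count as
                    'apart')
    any_intra    -- True if one or both reference blocks are intra-coded
                    (consulted only when the references are apart)
    """
    if ref_relation == 'neighboring':   # Figure 5A: stationary content
        return 0
    if ref_relation == 'overlapped':    # Figure 5B: slight motion
        return 1
    return 2 if any_intra else 3        # Figure 5C: separated references
```

Higher return values would then select stronger low pass filtering at step 335, as described above.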
[0038] Interpolated blocks may also be formed from more than one reference block. Figure 6B is a flowchart illustrating another embodiment of a process for determining boundary strength values (as performed in step 320 of Figure 2) for interpolated blocks comprising two motion vectors pointing to two reference blocks. The example shown in Figure 6B assumes that the motion vectors point to a forward frame and a backward frame, as in bi-directional predicted frames. Those of skill in the art would recognize that multiple reference frames may comprise multiple forward or multiple backward reference frames as well. The example looks at the forward and backward motion vectors of a current block being interpolated and a neighboring block in the same frame. If the forward located reference blocks, as indicated by the forward motion vectors of the current block and the neighboring

block, are determined to be neighboring blocks at decision block 420, then the process continues at decision block 425 to determine if the backward reference blocks, as indicated by the backward motion vectors of the current block and the neighboring block, are also neighboring. If both the forward and backward reference blocks are neighboring, then this is indicative of very little image motion and the boundary strength is set to zero at step 427, which results in a low level of deblocking. If only one of the forward or backward reference blocks is determined to be neighboring (at decision block 425 or decision block 430), then the boundary strength is set to one (at step 429 or step 432), resulting in more deblocking than the case where both reference blocks are neighboring. If, at decision block 430, it is determined that neither the forward nor the backward reference blocks are neighboring, then the boundary strength is set to two, resulting in even more deblocking.
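Decision blocks 420-432 of Figure 6B reduce to a short function (again a sketch; the boolean arguments are our encoding of the flowchart tests):

```python
def boundary_strength_two_mv(fwd_neighboring, bwd_neighboring):
    """Boundary strength per the Figure 6B decision tree, two motion
    vectors (forward and backward) per block.

    fwd_neighboring -- True if the forward reference blocks of the
                       current and neighboring blocks are neighboring
    bwd_neighboring -- likewise for the backward reference blocks
    """
    if fwd_neighboring and bwd_neighboring:
        return 0    # both reference pairs adjacent: very little motion
    if fwd_neighboring or bwd_neighboring:
        return 1    # one pair adjacent: moderate deblocking
    return 2        # neither pair adjacent: strongest deblocking
```

As the next paragraph notes, these trees are only examples; other mappings from reference-block geometry to filter strength could be substituted.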
[0039] The decision trees shown in Figures 6A and 6B are only examples of processes for determining boundary strength based on the relative location of one or more reference portions of interpolated video data, and on the number of motion vectors per block. Other methods may be used, as would be apparent to those of skill in the art. Determiner means such as the boundary strength determiner 165 in Figure 1 may perform the acts of step 320 shown in Figure 2 and illustrated in Figures 6A and 6B. One or more elements may be added, rearranged or combined in the decision trees shown in Figures 6A and 6B.
[0040] Figure 7 illustrates one example method 700 of processing video data in accordance with the description above. Generally, method 700 comprises interpolating 710 video data and denoising 720 the interpolated video data. The denoising of the interpolated video data may be based on a boundary strength value as described above. The boundary strength may be determined based on content and/or context of the video data. Also, the boundary strength may be determined based on whether the video data was interpolated using one motion vector or more than one motion vector. If one motion vector was used, the boundary strength may be determined based on whether the motion vectors are from neighboring blocks of a reference frame, from overlapped neighboring blocks of a reference frame, from non-neighboring blocks of a reference frame, or from different reference frames. If more than one motion vector was used, the boundary strength may be determined based on whether the

forward motion vectors point to neighboring reference blocks or whether the backward motion vectors point to neighboring reference blocks.
[0041] Figure 8 shows an example apparatus 800 that may be implemented to carry out the method 700. Apparatus 800 comprises an interpolator 810 and a denoiser 820. The interpolator 810 may interpolate video data and the denoiser 820 may denoise the interpolated video data, as described above.
[0042] The embodiment of deblocking discussed above is only an example of one type of denoising. Other types of denoising would be apparent to those of skill in the art. The deblocking algorithm of H.264 described above utilizes 4 by 4 pixel blocks. It would be understood by those of skill in the art that blocks of various sizes, e.g., any N by M block of pixels where N and M are integers, could be used as interpolated and/or reference portions of video data.
[0043] Those of ordinary skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
[0044] Those of ordinary skill would further appreciate that the various illustrative logical blocks, modules, and algorithm steps described in connection with the examples disclosed herein may be implemented as electronic hardware, firmware, computer software, middleware, microcode, or combinations thereof. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed methods.
[0045] The various illustrative logical blocks, components, modules, and circuits described in connection with the examples disclosed herein may be implemented or

performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0046] The steps of a method or algorithm described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An example storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an Application Specific Integrated Circuit (ASIC). The ASIC may reside in a wireless modem. In the alternative, the processor and the storage medium may reside as discrete components in the wireless modem.
[0047] The previous description of the disclosed examples is provided to enable any person of ordinary skill in the art to make or use the disclosed methods and apparatus. Various modifications to these examples would be readily apparent to those skilled in the art, and the principles defined herein may be applied to other examples; additional elements may also be added.
[0048] Thus, methods and apparatus for denoising interpolated video data, constructed from decompressed video data, by deblocking interpolated frames based on a determined boundary strength, have been described.

CLAIMS
1. A method of processing video data, comprising:
interpolating video data; and denoising the interpolated video data.
2. The method of claim 1, wherein the interpolated video data comprises first and second blocks, the method further comprising:
determining a boundary strength value associated with the first and second blocks; and
denoising the first and second blocks by using the determined boundary strength value.
3. The method of claim 2, wherein determining the boundary strength value
comprises:
determining the boundary strength value based on content of the video data.
4. The method of claim 2, wherein determining the boundary strength value
comprises:
determining the boundary strength value based on context of the video data.
5. The method of claim 2, wherein the interpolating comprises:
interpolating based on one motion vector; and wherein the determining the boundary strength value comprises:
determining whether the motion vectors of the first and second blocks are from neighboring blocks of a reference frame.
6. The method of claim 2, wherein the interpolating comprises:
interpolating based on one motion vector; and wherein the determining the boundary strength value comprises:

determining whether the motion vectors of the first and second blocks are from overlapped neighboring blocks of a reference frame.
7. The method of claim 2, wherein the interpolating comprises:
interpolating based on one motion vector; and wherein the determining the boundary strength value comprises:
determining whether the motion vectors of the first and second blocks are from non-neighboring blocks of a reference frame.
8. The method of claim 2, wherein the interpolating comprises:
interpolating based on one motion vector; and wherein the determining the boundary strength value comprises:
determining whether the motion vectors of the first and second blocks are from different reference frames.
9. The method of claim 2, wherein the interpolating comprises:
interpolating based on two motion vectors; and wherein the determining the boundary strength value comprises:
determining whether the forward motion vectors of the first and second blocks point to neighboring reference blocks.
10. The method of claim 2, wherein the interpolating comprises:
interpolating based on two motion vectors; and wherein the determining the boundary strength value comprises:
determining whether the backward motion vectors of the first and second blocks point to neighboring reference blocks.
11. A processor for processing video data, the processor configured to:
interpolate video data; and denoise the interpolated video data.
12. The processor of claim 11, wherein the interpolated video data comprises first and second blocks, the processor further configured to:

determine a boundary strength value associated with the first and second blocks; and
denoise the first and second blocks by using the determined boundary strength value.
13. The processor of claim 12 further configured to:
determine the boundary strength value based on content of the video data.
14. The processor of claim 12 further configured to:
determine the boundary strength value based on context of the video data.
15. The processor of claim 12, further configured to:
interpolate based on one motion vector; and
determine the boundary strength value based on whether the motion vectors of the first and second blocks are from neighboring blocks of a reference frame.
16. The processor of claim 12 further configured to:
interpolate based on one motion vector; and
determine the boundary strength value based on whether the motion vectors of the first and second blocks are from overlapped neighboring blocks of a reference frame.
17. The processor of claim 12 further configured to:
interpolate based on one motion vector; and
determine the boundary strength value based on whether the motion vectors of the first and second blocks are from non-neighboring blocks of a reference frame.
18. The processor of claim 12 further configured to:
interpolate based on one motion vector; and

determine the boundary strength value based on whether the motion vectors of the first and second blocks are from different reference frames.
19. The processor of claim 12 further configured to:
interpolate based on two motion vectors; and
determine the boundary strength value based on whether the forward motion vectors of the first and second blocks point to neighboring reference blocks.
20. The processor of claim 12 further configured to:
interpolate based on two motion vectors; and
determine the boundary strength value based on whether the backward motion vectors of the first and second blocks point to neighboring reference blocks.
21. An apparatus for processing video data, comprising:
an interpolator to interpolate video data; and a denoiser to denoise the interpolated video data.
22. The apparatus of claim 21, wherein the interpolated video data comprises first and second blocks, the apparatus further comprising:
a determiner to determine a boundary strength value associated with the first and second blocks; and
wherein the denoiser denoises the first and second blocks by using the determined boundary strength value.
23. The apparatus of claim 22, wherein the determiner determines the boundary strength value based on content of the video data.
24. The apparatus of claim 22, wherein the determiner determines the boundary strength value based on context of the video data.

25. The apparatus of claim 22, wherein the interpolator interpolates based on one motion vector; and wherein the determiner determines the boundary strength value based on whether the motion vectors of the first and second blocks are from neighboring blocks of a reference frame.
26. The apparatus of claim 22, wherein the interpolator interpolates based on one motion vector; and wherein the determiner determines the boundary strength value based on whether the motion vectors of the first and second blocks are from overlapped neighboring blocks of a reference frame.
27. The apparatus of claim 22, wherein the interpolator interpolates based on one motion vector; and wherein the determiner determines the boundary strength value based on whether the motion vectors of the first and second blocks are from non-neighboring blocks of a reference frame.
28. The apparatus of claim 22, wherein the interpolator interpolates based on one motion vector; and wherein the determiner determines the boundary strength value based on whether the motion vectors of the first and second blocks are from different reference frames.
29. The apparatus of claim 22, wherein the interpolator interpolates based on two motion vectors; and wherein the determiner determines the boundary strength value based on whether the forward motion vectors of the first and second blocks point to neighboring reference blocks.
30. The apparatus of claim 22, wherein the interpolator interpolates based on two motion vectors; and wherein the determiner determines the boundary strength value based on whether the backward motion vectors of the first and second blocks point to neighboring reference blocks.
31. An apparatus for processing video data, comprising:
means for interpolating video data; and means for denoising the interpolated video data.

32. The apparatus of claim 31, wherein the interpolated video data comprises first and second blocks, the apparatus further comprising:
means for determining a boundary strength value associated with the first and second blocks; and
means for denoising the first and second blocks by using the determined boundary strength value.
33. The apparatus of claim 32, wherein the means for determining the boundary
strength value further comprises:
means for determining the boundary strength value based on content of the video data.
34. The apparatus of claim 32, wherein the means for determining the boundary
strength value further comprises:
means for determining the boundary strength value based on context of the video data.
35. The apparatus of claim 32, wherein the interpolating means further comprises:
means for interpolating based on one motion vector; and wherein the means for determining the boundary strength value further comprises:
means for determining whether the motion vectors of the first and second blocks are from neighboring blocks of a reference frame.
36. The apparatus of claim 32, wherein the interpolating means further comprises:
means for interpolating based on one motion vector; and wherein the means for determining the boundary strength value further comprises:
means for determining whether the motion vectors of the first and second blocks are from overlapped neighboring blocks of a reference frame.

WO 2006/099321 PCT/US2006/008946
37. The apparatus of claim 32, wherein the interpolating means further comprises:
means for interpolating based on one motion vector; and wherein the means for determining the boundary strength value further comprises:
means for determining whether the motion vectors of the first and second blocks are from non-neighboring blocks of a reference frame.
38. The apparatus of claim 32, wherein the interpolating means further comprises:
means for interpolating based on one motion vector; and wherein the means for determining the boundary strength value further comprises:
means for determining whether the motion vectors of the first and second blocks are from different reference frames.
39. The apparatus of claim 32, wherein the means for interpolating further comprises:
means for interpolating based on two motion vectors; and wherein the means for determining the boundary strength value further comprises:
means for determining whether the forward motion vectors of the first and second blocks point to neighboring reference blocks.
40. The apparatus of claim 32, wherein the means for interpolating further comprises:
means for interpolating based on two motion vectors; and wherein the means for determining the boundary strength value comprises:
means for determining whether the backward motion vectors of the first and second blocks point to neighboring reference blocks.
41. A computer readable medium embodying a method of processing video data, the method comprising:
interpolating video data; and denoising the interpolated video data.

42. The computer readable medium of claim 41, wherein the interpolated video data comprises first and second blocks, wherein the method further comprises:
determining a boundary strength value associated with the first and second blocks; and
denoising the first and second blocks by using the determined boundary strength value.
43. The computer readable medium of claim 42, wherein determining the
boundary strength value comprises:
determining the boundary strength value based on content of the video data.
44. The computer readable medium of claim 42, wherein determining the
boundary strength value comprises:
determining the boundary strength value based on context of the video data.
45. The computer readable medium of claim 42, wherein the interpolating comprises:
interpolating based on one motion vector; and wherein the determining the boundary strength value comprises:
determining whether the motion vectors of the first and second blocks are from neighboring blocks of a reference frame.
46. The computer readable medium of claim 42, wherein the interpolating comprises:
interpolating based on one motion vector; and wherein the determining the boundary strength value comprises:
determining whether the motion vectors of the first and second blocks are from overlapped neighboring blocks of a reference frame.

47. The computer readable medium of claim 42, wherein the interpolating comprises:
interpolating based on one motion vector; and wherein the determining the boundary strength value comprises:
determining whether the motion vectors of the first and second blocks are from non-neighboring blocks of a reference frame.
48. The computer readable medium of claim 42, wherein the interpolating comprises:
interpolating based on one motion vector; and wherein the determining the boundary strength value comprises:
determining whether the motion vectors of the first and second blocks are from different reference frames.
49. The computer readable medium of claim 42, wherein the interpolating comprises:
interpolating based on two motion vectors; and wherein the determining the boundary strength value comprises:
determining whether the forward motion vectors of the first and second blocks point to neighboring reference blocks.
50. The computer readable medium of claim 42, wherein the interpolating comprises:
interpolating based on two motion vectors; and wherein the determining the boundary strength value comprises:
determining whether the backward motion vectors of the first and second blocks point to neighboring reference blocks.
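The boundary-strength conditions recited in the method claims (claims 5-10) can be illustrated with a short sketch. Everything below, including the block structure, the neighbor test, and the numeric strength levels, is an illustrative assumption and not part of the claimed subject matter:

```python
from dataclasses import dataclass

@dataclass
class InterpolatedBlock:
    """An interpolated block and the reference-frame block its motion
    vector was taken from (hypothetical structure, for illustration)."""
    ref_frame_id: int           # which reference frame the motion vector points into
    ref_block_xy: tuple         # (x, y) block coordinates in that reference frame

def are_neighbors(a, b):
    """True if two reference blocks are horizontally or vertically adjacent."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

def boundary_strength(first: InterpolatedBlock, second: InterpolatedBlock) -> int:
    """Assign a boundary strength (0 = weakest filtering, 2 = strongest)
    from the proximity of the two blocks' reference data, in the spirit of
    claims 5-10. The specific numeric levels are assumptions."""
    if first.ref_frame_id != second.ref_frame_id:
        return 2    # motion vectors from different reference frames: strongest filtering
    if first.ref_block_xy == second.ref_block_xy:
        return 0    # overlapped (same) reference block: boundary likely smooth
    if are_neighbors(first.ref_block_xy, second.ref_block_xy):
        return 0    # neighboring reference blocks: boundary likely smooth
    return 1        # non-neighboring blocks in the same frame: moderate filtering
```

Under this sketch, two interpolated blocks whose motion vectors point to the same or adjacent reference blocks receive little or no filtering, while blocks drawn from different reference frames receive the strongest smoothing, mirroring the proximity-based rationale in the abstract.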


ABSTRACT
Title: "INTERPOLATED FRAME DEBLOCKING OPERATION IN FRAME RATE UP CONVERSION APPLICATION"
A method and apparatus to enhance the quality of interpolated video, constructed from decompressed video data, comprising denoising the interpolated video data, is described. A low pass filter is used to filter the interpolated video data. In one embodiment, the level of filtering of the low pass filter is determined based on a boundary strength value determined for the interpolated video data and neighboring video data (interpolated and/or non-interpolated). In one aspect of this embodiment, the boundary strength is determined based on proximity of reference video data for the interpolated video data and the neighboring video data.
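As a companion sketch, the level-dependent low-pass filtering described in the abstract might be applied across a block edge as follows; the tap values and the number of strength levels are assumptions chosen for illustration, not values taken from the patent:

```python
import numpy as np

# Hypothetical low-pass filter taps per boundary strength level; a higher
# strength selects a wider, stronger smoothing kernel.
FILTER_TAPS = {
    0: None,                              # no filtering across the edge
    1: np.array([1, 2, 1]) / 4.0,         # mild low-pass
    2: np.array([1, 2, 2, 2, 1]) / 8.0,   # stronger low-pass
}

def deblock_edge(pixels: np.ndarray, strength: int) -> np.ndarray:
    """Low-pass filter a 1-D run of pixels straddling a block boundary,
    with the amount of smoothing chosen by the boundary strength."""
    taps = FILTER_TAPS[strength]
    if taps is None:
        return pixels.copy()
    # Same-length convolution with edge replication so borders are preserved.
    pad = len(taps) // 2
    padded = np.pad(pixels.astype(float), pad, mode="edge")
    return np.convolve(padded, taps, mode="valid")
```

For example, filtering the run [10, 10, 10, 50, 50, 50] at strength 1 softens the step between the two blocks, while strength 0 leaves the pixels untouched.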

Documents:

1528-MUMNP-2007-ABSTRACT(15-12-2010).pdf

1528-MUMNP-2007-ABSTRACT(25-9-2007).pdf

1528-MUMNP-2007-ABSTRACT(AMENDED)-(15-12-2010).pdf

1528-MUMNP-2007-ABSTRACT(GRANTED)-(26-9-2011).pdf

1528-mumnp-2007-abstract.doc

1528-mumnp-2007-abstract.pdf

1528-MUMNP-2007-CANCELLED PAGES(13-9-2011).pdf

1528-MUMNP-2007-CLAIMS(25-9-2007).pdf

1528-MUMNP-2007-CLAIMS(AMENDED)-(15-12-2010).pdf

1528-MUMNP-2007-CLAIMS(AMENDED)-(2-9-2011).pdf

1528-MUMNP-2007-CLAIMS(AMENDED)-(5-5-2011).pdf

1528-MUMNP-2007-CLAIMS(GRANTED)-(26-9-2011).pdf

1528-MUMNP-2007-CLAIMS(MARKED COPY)-(5-5-2011).pdf

1528-mumnp-2007-claims.doc

1528-mumnp-2007-claims.pdf

1528-MUMNP-2007-CORRESPONDENCE(10-10-2011).pdf

1528-MUMNP-2007-CORRESPONDENCE(13-9-2011).pdf

1528-mumnp-2007-correspondence(18-6-2008).pdf

1528-MUMNP-2007-CORRESPONDENCE(21-3-2011).pdf

1528-MUMNP-2007-CORRESPONDENCE(25-9-2007).pdf

1528-MUMNP-2007-CORRESPONDENCE(IPO)-(26-9-2011).pdf

1528-mumnp-2007-correspondence-received.pdf

1528-mumnp-2007-description (complete).pdf

1528-MUMNP-2007-DESCRIPTION(COMPLETE)-(25-9-2007).pdf

1528-MUMNP-2007-DESCRIPTION(GRANTED)-(26-9-2011).pdf

1528-MUMNP-2007-DRAWING(15-12-2010).pdf

1528-MUMNP-2007-DRAWING(25-9-2007).pdf

1528-MUMNP-2007-DRAWING(AMENDED)-(15-12-2010).pdf

1528-MUMNP-2007-DRAWING(GRANTED)-(26-9-2011).pdf

1528-mumnp-2007-drawings.pdf

1528-MUMNP-2007-FORM 1(15-12-2010).pdf

1528-MUMNP-2007-FORM 18(25-9-2007).pdf

1528-MUMNP-2007-FORM 2(COMPLETE)-(25-9-2007).pdf

1528-MUMNP-2007-FORM 2(GRANTED)-(26-9-2011).pdf

1528-MUMNP-2007-FORM 2(TITLE PAGE)-(15-12-2010).pdf

1528-MUMNP-2007-FORM 2(TITLE PAGE)-(25-9-2007).pdf

1528-MUMNP-2007-FORM 2(TITLE PAGE)-(GRANTED)-(26-9-2011).pdf

1528-MUMNP-2007-FORM 26(25-9-2007).pdf

1528-MUMNP-2007-FORM 3(15-12-2010).pdf

1528-MUMNP-2007-FORM 3(18-6-2008).pdf

1528-mumnp-2007-form 3(25-9-2007).pdf

1528-MUMNP-2007-FORM 5(25-9-2007).pdf

1528-mumnp-2007-form-1.pdf

1528-mumnp-2007-form-18.pdf

1528-mumnp-2007-form-2.doc

1528-mumnp-2007-form-2.pdf

1528-mumnp-2007-form-26.pdf

1528-mumnp-2007-form-3.pdf

1528-mumnp-2007-form-5.pdf

1528-mumnp-2007-form-pct-ib-304.pdf

1528-MUMNP-2007-MARKED COPY(2-9-2011).pdf

1528-MUMNP-2007-OTHER DOCUMENT(15-12-2010).pdf

1528-mumnp-2007-pct-search report.pdf

1528-MUMNP-2007-PETITION UNDER RULE 137(15-12-2010).pdf

1528-MUMNP-2007-REPLY TO EXAMINATION REPORT(15-12-2010).pdf

1528-MUMNP-2007-REPLY TO HEARING(2-9-2011).pdf

1528-MUMNP-2007-REPLY TO HEARING(5-5-2011).pdf

1528-MUMNP-2007-SPECIFICATION(AMENDED)-(13-9-2011).pdf

1528-MUMNP-2007-SPECIFICATION(AMENDED)-(15-12-2010).pdf

1528-MUMNP-2007-SPECIFICATION(AMENDED)-(2-9-2011).pdf

1528-mumnp-2007-wo international publication report(25-9-2007).pdf

abstract1.jpg


Patent Number 249032
Indian Patent Application Number 1528/MUMNP/2007
PG Journal Number 39/2011
Publication Date 30-Sep-2011
Grant Date 26-Sep-2011
Date of Filing 25-Sep-2007
Name of Patentee QUALCOMM INCORPORATED
Applicant Address 5775 MOREHOUSE DRIVE, SAN DIEGO, CALIFORNIA 92121,
Inventors:
# Inventor's Name Inventor's Address
1 SHI FANG 4460 CALLE MAR DE ARMONIA, SAN DIEGO, CALIFORNIA 92130
2 RAVEENDRAN VIJAYALAKSHMI R. 4272 CALLE MAR DE BALLENAS, SAN DIEGO, CALIFORNIA 92130
PCT International Classification Number H04N7/68, H04N7/26
PCT International Application Number PCT/US2006/008946
PCT International Filing date 2006-03-10
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 60/660,909 2005-03-10 U.S.A.