Title of Invention  METHOD AND APPARATUS FOR SCALING A THREE-DIMENSIONAL MODEL

Abstract  ABSTRACT: A method of scaling a three-dimensional input model (200-208) into a scaled three-dimensional output model (210-224) is disclosed. The method comprises determining for portions of the three-dimensional input model respective probabilities that the corresponding portions of the scaled three-dimensional output model are visible in a two-dimensional view of the scaled three-dimensional output model, and geometrically transforming portions of the three-dimensional input model into the respective portions of the scaled three-dimensional output model on basis of the respective probabilities. The determining of the probability of visibility is based on a projection of the three-dimensional input model in a viewing direction. By taking into account that some portions are not visible, no depth-range is wasted. Figs. 2A, 2B, 2C
Full Text  The invention relates to a method of scaling a three-dimensional input model into a scaled three-dimensional output model. The invention further relates to a scaling unit for scaling a three-dimensional input model into a scaled three-dimensional output model. The invention further relates to an image display apparatus comprising:
- receiving means for receiving a signal representing a three-dimensional input model;
- a scaling unit for scaling the three-dimensional input model into a scaled three-dimensional output model; and
- display means for visualizing a view of the scaled three-dimensional output model.
The probability that the size of a three-dimensional scene does not match the display capabilities of an image display apparatus is high. Hence, a scaling operation is required. Other reasons why scaling might be required are to adapt the geometry of the three-dimensional model representing the three-dimensional scene to a transmission channel or to adapt the three-dimensional model to the viewer's preferences. Linear scaling operations on a three-dimensional model representing a three-dimensional scene are well known. An embodiment of the image display apparatus of the kind described in the opening paragraph is known from the US patent 6,313,866. This image display apparatus comprises a circuit for acquiring a depth information maximum value from a first image signal. The image display apparatus further comprises a parallax control circuit to control the amount of parallax of a second image signal on the basis of depth information contained in the first and second image signals such that an image corresponding to the second image signal can be three-dimensionally displayed in front of an image corresponding to the first image signal.
A three-dimensional image synthesizer synthesizes the first and second image signals which have been controlled by the parallax control circuit, on the basis of the parallax amount of each image signal, such that images corresponding to the first and second image signals appear in the three-dimensional display space. Scaling of depth information is in principle performed by means of a linear adaptation of the depth information, except for depth information which exceeds the limits of the display capabilities. These latter values are clipped. A disadvantage of depth adaptation or scaling is that it might result in a reduction of depth impression. Especially the linear depth scaling might be disadvantageous for the depth impression of the scaled three-dimensional model. EP0817123 relates to a method that aims to move a 3D object into a position where the viewer is likely to fix his or her eyes. In this manner eyestrain can be reduced, resulting in reduced eye fatigue. In order to obtain this result a gaze range is determined, indicative of a position where the user is likely to fix his or her eyes. Once this position is established the 3D object is moved in a direction perpendicular to the display screen; in the process the model is enlarged or reduced in order for the viewer to notice the movement. EP0905988 relates to a method of overlaying two three-dimensional image signals without the respective image signals interfering. US2001/0012018 relates to a method of rendering a three-dimensional model on a non-stereoscopic display. It is an object of the invention to provide a method of the kind described in the opening paragraph which results in a scaled three-dimensional output model which resembles the three-dimensional input model perceptually and which has a pleasant three-dimensional impression.
This object of the invention is achieved in that the method comprises:
- providing the three-dimensional input model on an input connector (410);
- determining for portions of the three-dimensional input model respective probabilities that the corresponding portions of the scaled three-dimensional output model are visible in a two-dimensional view of the scaled three-dimensional output model, the determining being based on a projection of the three-dimensional input model in a viewing direction;
- geometrically transforming portions of the three-dimensional input model into the respective portions of the scaled three-dimensional output model on basis of the respective probabilities, wherein the portions of the scaled three-dimensional output model which will not be visible are disregarded; and
- providing the scaled three-dimensional output model on an output connector.
As described above, scaling is required to match the three-dimensional input model with e.g. the display capabilities of a display device. After the scaling of the three-dimensional input model into the scaled three-dimensional output model, multiple views will be created on basis of the scaled three-dimensional output model. The idea is that no depth-range, e.g. of the display device, should be wasted in the scaling for eventually invisible portions of the scaled three-dimensional output model. That means that those portions of the three-dimensional input model which correspond to portions of the scaled three-dimensional output model which will not be visible in one of the views should be disregarded for the scaling. By making a particular view of the three-dimensional input model, by means of projecting the three-dimensional input model in a viewing direction to be applied by the display device, it is possible to determine the visibility of the portions of the three-dimensional input model in that particular view.
Based on that, it is possible to determine the probability of visibility of portions of the scaled three-dimensional output model. Portions of the scaled three-dimensional output model which correspond to portions of the three-dimensional input model which are visible in the particular view will in general also be visible in a view based on the scaled three-dimensional output model. Other portions of the scaled three-dimensional output model which correspond to other portions of the three-dimensional input model which are not visible in the particular view will have a relatively low probability of being visible in a view based on the scaled three-dimensional output model. By making multiple projections of the three-dimensional input model, each in a direction which corresponds with a respective viewing direction, the probabilities of being visible can be adapted. However, even without actually making these projections the probabilities of visibility can be determined on basis of other parameters, e.g. parameters related to the known capabilities of a display device. Alternatively, the probabilities are determined on basis of parameters of a transmission channel. In an embodiment of the method according to the invention, determining the probability that a first one of the portions is visible is based on comparing a first value of a first coordinate of the first one of the portions with a second value of the first coordinate of a second one of the portions. Determining whether portions of the three-dimensional input model occlude each other in the direction of the view can easily be done by means of comparing the values of the coordinates of the portions of the three-dimensional input model. Preferably, the first coordinate corresponds to the viewing direction.
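The coordinate comparison described above can be sketched as follows. This is a minimal illustration only, assuming that the first coordinate is z, that z grows away from the viewer, and that portions are represented as small dictionaries (a hypothetical layout, not part of the patent):

```python
def visible_portion(portions):
    """Of the portions sharing the same x,y pair, return the one
    nearest the viewer, i.e. the one with the smallest value of the
    first coordinate (z, the viewing direction); the others are
    occluded in this view."""
    return min(portions, key=lambda p: p["z"])
```

For two portions at the same x,y pair, `visible_portion([{"id": "front", "z": 2.0}, {"id": "back", "z": 7.0}])` returns the portion with z = 2.0; the portion with z = 7.0 is occluded.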
In an embodiment of the method according to the invention, determining the probability that the first one of the portions is visible is based on capabilities of a display device on which the scaled three-dimensional output model will be displayed. The capabilities of the display device might correspond to a maximum viewing angle and the depth-range of the display device. These properties of the display device determine which views can be created, i.e. the maximum differences between the different views. On basis of these properties of the display device, in combination with an appropriate view, i.e. a projection of the three-dimensional input model, the probability of visibility of portions in any of the possible views can easily be determined. In an embodiment of the method according to the invention the geometrical transforming of the portions of the three-dimensional input model into the respective portions of the scaled three-dimensional output model comprises one of translation, rotation or deformation. The topology of the portions is not changed because of these geometrical transformations. It is a further object of the invention to provide a scaling unit of the kind described in the opening paragraph which provides a scaled three-dimensional output model which resembles the three-dimensional input model perceptually and which has a pleasant three-dimensional impression.
This object of the invention is achieved in that the scaling unit comprises:
- probability determining means for determining for portions of the three-dimensional input model respective probabilities that the corresponding portions of the scaled three-dimensional output model are visible in a two-dimensional view of the scaled three-dimensional output model, the determining being based on a projection of the three-dimensional input model in a viewing direction; and
- transforming means for geometrically transforming portions of the three-dimensional input model into the respective portions of the scaled three-dimensional output model on basis of the respective probabilities, wherein the portions of the scaled three-dimensional output model which will not be visible are disregarded.
It is a further object of the invention to provide an image display apparatus of the kind described in the opening paragraph which provides a scaled three-dimensional output model which resembles the three-dimensional input model perceptually and which has a pleasant three-dimensional impression. This object of the invention is achieved in that the scaling unit of the image display apparatus comprises:
- probability determining means for determining for portions of the three-dimensional input model respective probabilities that the corresponding portions of the scaled three-dimensional output model are visible in a two-dimensional view of the scaled three-dimensional output model, the determining being based on a projection of the three-dimensional input model in a viewing direction; and
- transforming means for geometrically transforming portions of the three-dimensional input model into the respective portions of the scaled three-dimensional output model on basis of the respective probabilities, wherein the portions of the scaled three-dimensional output model which will not be visible are disregarded.
Modifications of the scaling unit and of the image display apparatus and variations thereof may correspond to modifications and variations thereof of the method described. These and other aspects of the method, of the scaling unit and of the image display apparatus according to the invention will become apparent from and will be elucidated with respect to the implementations and embodiments described hereinafter and with reference to the accompanying drawings, wherein:
Fig. 1 schematically shows an autostereoscopic display device according to the prior art;
Fig. 2A schematically shows a top view of a three-dimensional input model;
Fig. 2B schematically shows a frontal view of the three-dimensional input model of Fig. 2A;
Fig. 2C schematically shows a top view of a scaled three-dimensional output model which is based on the three-dimensional input model of Fig. 2A;
Fig. 3A schematically shows the contents of a z-buffer stack after the computation of a view on basis of a three-dimensional input model;
Fig. 3B schematically shows the contents of the z-buffer stack of Fig. 3A after segmentation;
Fig. 3C schematically shows the contents of the z-buffer stack of Fig. 3B after updating the probabilities of visibility;
Fig. 4 schematically shows a scaling unit according to the invention;
Fig. 5 schematically shows the geometrical transformation unit of the scaling unit according to the invention;
Fig. 6 schematically shows the scaling of a three-dimensional input model into a scaled three-dimensional output model; and
Fig. 7 schematically shows an image display apparatus according to the invention.
Same reference numerals are used to denote similar parts throughout the figures. There are several types of models for the storage of three-dimensional information:
- Wireframes, e.g. as specified for VRML.
These models comprise a structure of lines and faces.
- Volumetric data-structures or voxel maps (voxel means volume element). These volumetric data-structures comprise a three-dimensional array of elements. Each element has three dimensions and represents a value of a property. E.g. CT (computer tomography) data is stored as a volumetric data-structure in which each element corresponds to a respective Hounsfield value.
- Two-dimensional image with depth map, e.g. a two-dimensional image with RGBZ values. This means that each pixel comprises three color component values and a depth value. The three color component values also represent a luminance value.
- Image based models, e.g. stereo image pairs or multiview images. These types of images are also called light fields.
Conversions of data represented by one type of three-dimensional model into another three-dimensional model are possible. E.g. data represented with a wireframe or a two-dimensional image with depth map can be converted by means of rendering into data represented with a volumetric data-structure or an image based model. The amount of depth which can be realized with a three-dimensional image display device depends on its type. With a volumetric display device the amount of depth is fully determined by the dimensions of the display device. Stereo displays with e.g. glasses have a soft limit for the amount of depth which depends on the observer. Observers might become fatigued if the amount of depth is too much, caused by a "conflict" between lens accommodation and mutual eye convergence. Autostereoscopic display devices, e.g. based on an LCD with a lenticular screen for multiple views, have a theoretical maximum depth-range d which is determined by the number of views. Fig. 1 schematically shows an autostereoscopic display device 100.
Outside the physical display device 100, but within a virtual box 102, it can show objects within a certain depth-range, to viewers within a certain viewing angle α. These two together define a constant k in pixels, which is a percentage of the number N of pixels horizontally on the display device 100. This k equals the maximum disparity that the display device can show. The maximum depth-range can be exceeded, resulting in loss of sharpness. Fig. 2A schematically shows a top view of a three-dimensional input model. The three-dimensional input model comprises a number of objects 200-208 which differ in size and shape. Fig. 2B schematically shows a frontal view of the three-dimensional input model of Fig. 2A. It can be clearly seen that some of the objects occlude other objects, completely or partly. That means that some portions of the three-dimensional input model are not visible in the frontal view. E.g. one of the objects, namely object 200, is completely invisible in the frontal view. Fig. 2C schematically shows a top view of a scaled three-dimensional output model which is based on the three-dimensional input model of Fig. 2A. A first one of the objects of the scaled three-dimensional output model which corresponds to a first one 200 of the objects of the three-dimensional input model is clipped to the border of the depth-range. A second one 224 of the objects of the scaled three-dimensional output model which corresponds to a second one 208 of the objects of the three-dimensional input model is located near the other border of the depth-range. A third one of the objects of the scaled three-dimensional output model which corresponds to a third one 202 of the objects of the three-dimensional input model comprises three portions 210-214, of which two are visible and a third one 212 is not visible in any of the possible views.
A fourth one of the objects of the scaled three-dimensional output model which corresponds to a fourth one 204 of the objects of the three-dimensional input model comprises two portions 216, 218, of which a first one 216 is visible and a second one 218 is not visible in any of the possible views. A fifth one of the objects of the scaled three-dimensional output model which corresponds to a fifth one 206 of the objects of the three-dimensional input model comprises two portions 220, 222, of which a first one 220 is visible and a second one 222 is not visible in any of the possible views. In connection with Figs. 3A-3C it will be described how the probability of visibility can be determined for portions of a three-dimensional input model comprising a number of objects 1-8. This is based on the method according to the invention comprising the following steps:
- computing the projection of the three-dimensional input model by means of a z-buffer stack;
- indicating which of the z-buffer stack elements are visible in the projection by means of comparing z-values of pairs of z-buffer stack elements having mutually equal x-values and mutually equal y-values;
- determining which groups of z-buffer stack elements form the respective portions of the three-dimensional input model, by means of segmentation of the z-buffer stack elements; and
- indicating the probability of visibility of each z-buffer stack element which is part of a group of z-buffer stack elements comprising a further z-buffer stack element which is visible, on basis of the capability of a display device.
In this case a z-buffer stack element corresponds with a portion of the three-dimensional input model. Fig. 3A schematically shows the contents of a z-buffer stack 300 after the computation of a view 302 on basis of the three-dimensional input model. The z-buffer stack 300 comprises a number of data cells 304-322 for storage of data representing the portions of the three-dimensional input model.
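The projection and visibility steps listed above can be sketched as follows. The representation of the z-buffer stack as a list of levels, each mapping an (x, y) pair to a z-value, is an assumption of this sketch only:

```python
def visible_elements(stack):
    """Return the (level, (x, y)) entries visible in the projection.
    An element is visible when it occupies the highest filled level
    for its (x, y) pair, so no element at a higher level covers it."""
    top = {}
    for level, cells in enumerate(stack):
        for xy in cells:
            top[xy] = level  # higher levels overwrite lower ones
    return {(level, xy)
            for level, cells in enumerate(stack)
            for xy in cells
            if top[xy] == level}
```

For example, with `stack = [{(0, 0): 5.0, (1, 0): 4.0}, {(0, 0): 2.0}]` only the level-1 element at (0, 0) and the level-0 element at (1, 0) are visible; the level-0 element at (0, 0) is covered.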
This z-buffer stack 300 comprises three levels, i = 1, i = 2 and i = 3. The indicated numerals 1-8 in the data cells 304-322 of the z-buffer stack 300 correspond to the different objects 1-8 of the three-dimensional input model. For example, in a first data cell 312 data related to a portion of the second object 2 is stored. In the z-buffer stack 300 the z-values, i.e. depth values, of the portions of the three-dimensional input model are stored. Besides that, the corresponding color and luminance values are stored. In Figs. 3A-3C only a number of data cells 304-322 is depicted, for a single value of the y-coordinate. Creating a projection on basis of a z-buffer stack 300 is well known in the prior art. Because of the nature of a z-buffer stack it is very easy to determine which of the z-buffer stack elements are visible in the view 302: those with the highest level, i.e. i = 3 in this case. Thus, in the view 302 only those portions of which the data is stored in the data cells, e.g. 304-310, of the highest level i = 3 are present. In Fig. 3A the data cells corresponding to the portions which are visible in this particular view are shaded. It can be seen that in this case only a portion of the second object 2 is visible and only a part of the eighth object 8 is visible. Most of the fourth object 4 is visible, except that part that is occluded by the fifth object 5. Fig. 3B schematically shows the contents of the z-buffer stack of Fig. 3A after segmentation. The segmentation is applied to determine which groups of the z-buffer stack elements form the respective objects 1-8 of the three-dimensional input model. For this purpose, the contents of the data cells 304-322 of the z-buffer stack 300 are analyzed to determine which groups of the data cells store the data belonging to the different objects of the three-dimensional input model. This segmentation, or object extraction, is based on the stored values, e.g. color, luminance and depth, in combination with the distance between the different data cells 304-322 of the z-buffer stack 300. In Fig. 3B the different groups of data cells are indicated by means of the curves with dots at the ends. Besides luminance, color and depth, also the probability of visibility is stored in memory. Per data cell a value of that quantity is stored. Typically the following types can be distinguished:
- I: definitely will be visible in one of the projections;
- II: most probably will be visible in one of the projections;
- III: most probably will not be visible in one of the projections; and
- IV: definitely will not be visible in one of the projections.
After the first projection, type I is assigned to a number of z-buffer stack elements, e.g. 304 and 306. Other z-buffer stack elements might get initialized with type IV or III. After the segmentation, the probability of visibility of a number of the z-buffer stack elements is updated, on basis of the capability of the display device. Typically, the probability of visibility of each z-buffer stack element which is part of a group of z-buffer stack elements comprising a further z-buffer stack element which is visible (type I), is adapted. For example, in a first data cell 312 data related to a portion of the second object 2 is stored. After the segmentation it became clear that the first data cell 312 belongs to a group of data cells to which also a second data cell 304 belongs, of which it is known that it stores data belonging to a portion of object 2 which is visible. On basis of that, and on basis of the known viewing angle and depth-range, it is decided to update the probability of visibility of the first data cell 312 to type II. In Fig. 3C with an arrow 324 it is indicated that this z-buffer stack element might be visible in another view. Also the probability of visibility of other data cells 314-322 is updated in a similar way. Fig. 3C schematically shows the contents of the z-buffer stack of Fig. 3B after updating the probabilities of visibility. The z-buffer stack elements being assigned a probability of visibility of type I or II are shaded. In the example described in connection with Figs. 3A-3C all objects are opaque, i.e. not transparent. It should be noted that the method according to the invention can also be applied for transparent objects. In that case, also a value representing the transparency of each of the z-buffer stack elements, i.e. portions of the three-dimensional input model, should be stored in the respective data cells 304-322. Fig. 4 schematically shows a scaling unit 400 according to the invention for scaling a three-dimensional input model into a scaled three-dimensional output model. The scaling unit 400 comprises:
- a probability determining unit 402 for determining for portions of the three-dimensional input model respective probabilities that the corresponding portions of the scaled three-dimensional output model are visible in a two-dimensional view of the scaled three-dimensional output model; and
- a geometrical transformation unit 408 for geometrically transforming portions of the three-dimensional input model into the respective portions of the scaled three-dimensional output model on basis of the respective probabilities.
Data representing the three-dimensional input model is provided at the input connector 410 of the scaling unit 400, and the scaling unit 400 provides data representing the scaled three-dimensional output model at the output connector 412. Via the control interface 414 control data related to a display device, e.g. the depth-range and maximum viewing angle, are provided. The working of the probability determining unit 402 is described in connection with Figs. 3A-3C. The geometrical transformation unit 408 comprises a minimum and maximum detection unit 404 and a gain control unit 406.
The minimum and maximum detection unit 404 is arranged to determine for each array of z-buffer stack elements having mutually equal x-values and mutually equal y-values a corresponding minimum z-value and maximum z-value. The gain control unit 406 is arranged to compute scaled z-values for the z-buffer stack elements on basis of the respective minimum z-values and maximum z-values and the depth-range of the display device. The working of the geometrical transformation unit 408 according to the invention will be described in more detail in connection with Fig. 5. The probability determining unit 402, the minimum and maximum detection unit 404 and the gain control unit 406 may be implemented using one processor. Normally, these functions are performed under control of a software program product. During execution, normally the software program product is loaded into a memory, like a RAM, and executed from there. The program may be loaded from a background memory, like a ROM, hard disk, or magnetic and/or optical storage, or may be loaded via a network like the Internet. Optionally an application specific integrated circuit provides the disclosed functionality. Fig. 5 schematically shows the geometrical transformation unit 408 of the scaling unit 400 according to the invention. This geometrical transformation unit 408 is designed to process the data in a z-buffer stack 300 as described in connection with Fig. 3C. The data being stored in the z-buffer stack 300 is provided for each x,y pair. In the example described in connection with Fig. 3C there are three levels per array, i = 1, i = 2 or i = 3. For each of the levels a z-value is provided and a probability of visibility. If a particular z-buffer element is of type IV, i.e. definitely not visible in any of the projections, then the corresponding data is provided to the clipping unit 518. Otherwise the data is provided to the maximum detector 502 and the minimum detector 504.
The maximum detector 502 is arranged to extract the maximum z-value per x,y coordinate and the minimum detector 504 is arranged to extract the minimum z-value per x,y coordinate. The maximum z-values for each x,y coordinate are provided to a first filter unit 506. The minimum z-values for each x,y coordinate are provided to a second filter unit 508. Preferably the first filter unit 506 and the second filter unit 508 are morphologic filters. Morphologic filters are common non-linear image processing units. See for instance the article "Low-level image processing by max-min filters" by P.W. Verbeek, H.A. Vrooman and L.J. van Vliet, in "Signal Processing", vol. 15, no. 3, pp. 249-258, 1988. Other types of filters, e.g. low-pass, might also be applied for the first filter unit 506 and the second filter unit 508. The output of the first filter unit 506 is a kind of relief of maximum z-values and the output of the second filter unit 508 is a kind of relief of minimum z-values. The outputs of the first filter unit 506 and the second filter unit 508 are combined by a first combining means 510 which adds the two signals and divides the sum by a factor two. The output of the first combining means 510 is a kind of mean value, i.e. a mean relief. This output is subtracted from the input data by means of the subtraction unit 514. This subtraction can be interpreted as a kind of offset correction. The outputs of the first filter unit 506 and the second filter unit 508 are also combined by a second combining means 512 which subtracts the two signals and divides the difference by a factor two. The output of the second combining means 512 is a kind of range value which is used to normalize the output data of the subtraction unit 514. This normalization is performed by means of the normalisation unit 516. The output of the normalisation unit 516 is provided to the multiplier unit 520 which maps the data to the available depth-range or optionally a preferred depth-range.
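The chain of detectors, filter units, combining means and normalisation described above can be sketched for a single scanline as follows. The sliding max/min filters stand in for the morphologic filter units 506 and 508, and the half-width parameter `window` is an assumption of this sketch, not a value given in the patent:

```python
def scale_depths(stack_z, d, window=1):
    """One-scanline sketch of the chain of Fig. 5.  stack_z[x] holds
    the z-values stored for column x (type-IV elements excluded);
    d is the display depth-range."""
    n = len(stack_z)
    zmax = [max(col) for col in stack_z]              # maximum detector 502
    zmin = [min(col) for col in stack_z]              # minimum detector 504
    # filter units 506/508: sliding max / min over a small neighbourhood
    fmax = [max(zmax[max(0, x - window):x + window + 1]) for x in range(n)]
    fmin = [min(zmin[max(0, x - window):x + window + 1]) for x in range(n)]
    mean = [(a + b) / 2.0 for a, b in zip(fmax, fmin)]  # combining means 510
    rng = [(a - b) / 2.0 for a, b in zip(fmax, fmin)]   # combining means 512
    out = []
    for x, col in enumerate(stack_z):
        # subtraction 514, normalisation 516, multiplier 520
        out.append([((z - mean[x]) / rng[x] if rng[x] else 0.0) * d / 2.0
                    for z in col])
    return out
```

With `stack_z = [[0.0, 10.0]] * 3` and `d = 2.0`, every column is mapped to `[-1.0, 1.0]`: the two surfaces go to the opposite borders of the depth-range.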
In this case, the multiplication factor t is a function of the available display depth-range and viewing angle. Fig. 6 schematically shows the scaling of a three-dimensional input model into a scaled three-dimensional output model. The scaling is performed by means of the stretching approach as described in connection with Fig. 5. The three-dimensional input model comprises three objects 602-606 which are visible in a view which corresponds to a projection which is applicable for the display device. The display device has a depth-range d. The stretching is such that the usage of the available depth-range is optimal. That means that if there are only two objects for a certain x,y pair then one of the objects, or a portion of it, is moved to the front border of the depth-range d and the other object, or a portion of it, is moved to the back border of the depth-range d. For example the first input object 602 partly overlaps with the second input object 604, i.e. the first input object 602 is partly occluded by the second input object 604. The result is that a first portion 612 corresponding to the first input object 602 is mapped to the back border of the depth-range d and that a first portion 614 corresponding to the second input object 604 is mapped to the front border of the depth-range d. If there is only one object for a certain x,y pair then this object, or a portion of it, is moved to the center of the depth-range d. For example a first portion 620 corresponding to the third input object 606 is mapped to the center of the depth-range d. Also a second portion 618 corresponding to the second input object 604 is mapped to the front border of the depth-range d and a second portion 608 corresponding to the first input object 602 is mapped to the center of the depth-range d. To make mappings from portions of one and the same input object smooth, there are transition portions. This smoothing is caused by the first filter unit 506 and the second filter unit 508.
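The stretching behaviour described in connection with Fig. 6 can be illustrated per x,y pair with a small sketch. The helper below is hypothetical and reproduces only the described mapping, not the patent's filtered implementation of Fig. 5:

```python
def stretch(z_values, d):
    """Map the z-values present at one x,y pair onto the depth-range
    [-d/2, +d/2]: with two or more objects the nearest and farthest
    go to the opposite borders; a single object goes to the center."""
    if len(z_values) == 1:
        return [0.0]  # only one object: center of the depth-range
    zmin, zmax = min(z_values), max(z_values)
    mid = (zmin + zmax) / 2.0
    half = (zmax - zmin) / 2.0
    return [(z - mid) / half * (d / 2.0) for z in z_values]
```

For example `stretch([2.0, 8.0], d=2.0)` yields `[-1.0, 1.0]` (two objects stretched to the borders), while `stretch([5.0], d=2.0)` yields `[0.0]` (a single object moved to the center).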
For example, a third portion 610 corresponding to the first input object 602 forms a transition from the center to the back border of the depth-range d, to connect the first portion 612 and the second portion 608, corresponding to the first input object 602. Also a third portion 616 corresponding to the second input object 604 forms a transition from the center to the front border of the depth-range d, to connect the first portion 614 and the second portion 618, corresponding to the second input object 604. Fig. 7 schematically shows an image display apparatus 700 according to the invention, comprising:
- a receiver 702 for receiving a signal representing a three-dimensional input model;
- a scaling unit 400 for scaling the three-dimensional input model into a scaled three-dimensional output model, as described in connection with Fig. 4; and
- a display device 100 for visualizing a view of the scaled three-dimensional output model.
The signal may be a broadcast signal received via an antenna or cable but may also be a signal from a storage device like a VCR (Video Cassette Recorder) or Digital Versatile Disk (DVD). The signal is provided at the input connector 710. The image display apparatus 700 might e.g. be a TV. Optionally the image display apparatus 700 comprises storage means, like a hard disk, or means for storage on removable media, e.g. optical disks. The image display apparatus 700 might also be a system being applied by a film studio or broadcaster. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word 'comprising' does not exclude the presence of elements or steps not listed in a claim.
The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means can be embodied by one and the same item of hardware.
WE CLAIM:
1. A method of scaling a three-dimensional input model representing a three-dimensional scene comprising a number of objects (200-208) into a scaled three-dimensional output model (210-224), the method comprising:
providing the three-dimensional input model (200-208) on an input connector (410);
determining for portions of the three-dimensional input model (200-208) respective probabilities that the corresponding portions of the scaled three-dimensional output model (210-224) are visible in a two-dimensional view of the scaled three-dimensional output model, the determining being based on a projection of the three-dimensional input model (200-208) in a viewing direction;
geometrically transforming portions of the three-dimensional input model into the respective portions of the scaled three-dimensional output model on basis of the respective probabilities, wherein the portions of the scaled three-dimensional output model which will not be visible are disregarded; and
providing the scaled three-dimensional output model (210-224) on an output connector (412).
2. A method of scaling a three-dimensional input model (200-208) as claimed in claim 1, whereby determining the probability that a first one of the portions is visible is based on comparing a first value of a first coordinate of the first one of the portions with a second value of the first coordinate of a second one of the portions.
3. A method of scaling a three-dimensional input model (200-208) as claimed in claim 2, whereby determining the probability that the first one of the portions is visible is based on capabilities of a display device (100) on which the scaled three-dimensional output model (210-224) will be displayed.

Patent Number  225040  

Indian Patent Application Number  136/CHENP/2006  
PG Journal Number  49/2008  
Publication Date  05-Dec-2008  
Grant Date  30-Oct-2008  
Date of Filing  10-Jan-2006  
Name of Patentee  KONINKLIJKE PHILIPS ELECTRONICS N.V  
Applicant Address  GROENEWOUDSEWEG 1, NL5621 BA EINDHOVEN,  
Inventors:


PCT International Classification Number  G06T15/00  
PCT International Application Number  PCT/IB2004/051124  
PCT International Filing date  2004-07-05  
PCT Conventions:
