Title of Invention

A METHOD FOR DEFINING SURFACE PARAMETER FOR A 3D OBJECT MODEL IN A COMPUTER SYSTEM

Abstract
The present invention relates to a method for a computer system that includes posing a 3D model (500) in a first configuration, determining a first 2D (510) view of the 3D model in the first configuration, posing the 3D model in a second configuration (550), determining a second 2D (570) view of the 3D model in the second configuration, associating a first 2D image with the first 2D view of the model, associating a second 2D (570) image with the second 2D view of the model, associating a first set of surface parameters with a surface of the 3D model that is visible in the first 2D view in response to the first 2D image and the first configuration for the 3D model, and associating a second set of surface parameters with a surface of the 3D model that is visible in the second 2D view in response to the second 2D image and the second configuration for the 3D model. FIGURE 5
Full Text

BACKGROUND OF THE INVENTION
[0001 ] The present invention relates to computer animation. More specifically, the present invention relates to enhanced methods and apparatus for specifying surface properties of animation objects.
[0002] Throughout the years, movie makers have often tried to tell stories involving make-believe creatures, far away places, and fantastic things. To do so, they have often relied on animation techniques to bring the make-believe to "life." Two of the major paths in animation have traditionally included drawing-based animation techniques and stop motion animation techniques.
[0003] Drawing-based animation techniques were refined in the twentieth century, by movie makers such as Walt Disney and used in movies such as "Snow White and the Seven Dwarves" and "Fantasia" (1940). This animation technique typically required artists to hand-draw (or paint) animated images onto a transparent media or cels. After painting, each cel would then be captured or recorded onto film as one or more frames in a movie.
[0004] Stop motion-based animation techniques typically required the construction of miniature sets, props, and characters. The filmmakers would construct the sets, add props, and position the miniature characters in a pose. After the animator was happy with how everything was arranged, one or more frames of film would be taken of that specific arrangement. Stop motion animation techniques were developed by movie makers such as Willis O'Brien for movies such as "King Kong" (1932). Subsequently, these techniques were refined by animators such as Ray Harryhausen for movies including "The Mighty Joe Young" (1948) and "Clash of the Titans" (1981).
[0005] With the wide-spread availability of computers in the later part of the twentieth century, animators began to rely upon computers to assist in the animation process. This included using computers to facilitate drawing-based animation, for example, by painting images, by generating in-between images ("tweening"), and the like. This also included using computers to augment stop motion animation techniques. For example, physical models could be represented by virtual models in computer memory, and manipulated.
[0006] One of the pioneering companies in the computer aided animation (CAA) industry was Pixar, dba Pixar Animation Studios. Over the years, Pixar developed and offered both

computing platforms specially designed for CAA, and Academy-Award® winning rendering software known as RenderMan®.
[0007] Over the years, Pixar has also developed software products and software environments for internal use allowing users (modelers) to easily define object rigs and allowing users (animators) to easily animate the object rigs. Based upon such real-world experience, the inventors of the present invention have determined that additional features could be provided to such products and environments to facilitate the object definition and animation process. One such feature includes methods and apparatus for facilitating the definition of surface properties of objects.
[0008] The inventors of the present invention have determined that improved methods for specifying surface parameters to an object are needed.
BRIEF SUMMARY OF THE INVENTION
[0009] The present invention relates to computer animation. More specifically, the present invention relates to methods and apparatus allowing a user to specify surface parameters of an object or portions of an object that are in different poses.
[0010] Embodiments of the present invention are used to help manage the process of creating three dimensional "paintings." Embodiments control the definition of multiple poses, manage the rendering of views, provide the mechanism for transmitting texture information to the surface materials, provide cataloging and source control for textures and other data files, and the like.
[0011] With embodiments of the present invention, a user can effectively paint "directly" onto a three dimensional object using any conventional two dimensional painting program. In one embodiment, the painting program relies on "layers." With embodiments, the user paints a number of two dimensional paintings (e.g. overlay images), on different views of the object. Typical views are cameras with orientation of "front," "back," and the like. With embodiments of the present invention, the user can re-pose the object model, in multiple configurations, if the model is too hard to paint fully in a single reference pose. Additionally, the user can paint a number of overlay images of views of the reposed object.
[0012] A typical workflow of embodiments of the present invention includes: loading an object model into the system and posing the object model in different configurations. For example, to paint a table the user may have one pose that defines the table and another pose

that "explodes" the model by translating the table legs away from the bottom of the table. Next, the workflow may include creating or defining one or more views to paint on the model in the different poses.
[0013] In various embodiments, a rendering pass is performed on the object in the different poses and in the defined views. The results of the rendering pass are typically bitmap images (views) of the rendered object and associated depth maps of the rendered surfaces. The workflow may also include the user loading the rendered bitmaps into a two dimensional paint program and painting one or more passes representing color, displacement, or the like.
[0014] Later, the system computes at render time the result of a planar projection (reverse map) of each object in each pose to each view and stores the resulting 2D coordinates of every visible surface point. The surface shader will use these stored 2D coordinates for evaluating surface parameters, such as 2D texture maps, for each pass. The values returned by the pass computation are then used to produce different effects in the shader, such as coloring or displacing the surfaces affected by paint. In other embodiments, non-planar projections such as perspective projections are used.
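By way of illustration only (and not as part of the disclosure), the planar projection above may be sketched as follows; the function name, coordinate conventions, and the sample data are illustrative assumptions:

```python
def planar_project(point, view_origin, u_axis, v_axis):
    """Orthographically project a 3D surface point onto a view plane,
    returning 2D (u, v) coordinates for later texture lookups.

    view_origin: a point on the projection plane
    u_axis, v_axis: orthonormal in-plane basis vectors
    """
    rel = [p - o for p, o in zip(point, view_origin)]
    u = sum(r * a for r, a in zip(rel, u_axis))
    v = sum(r * a for r, a in zip(rel, v_axis))
    return (u, v)

# Store the resulting 2D coordinate for every visible surface point,
# keyed by (pose, view), so the surface shader can evaluate surface
# parameters such as 2D texture maps for each pass:
coords = {}
coords[("default", "front")] = {
    "point_0": planar_project((1.0, 2.0, 5.0), (0, 0, 0), (1, 0, 0), (0, 1, 0)),
}
```

For a front view with an axis-aligned plane, the projection simply drops the depth component, as in the sample entry above.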
[0015] In embodiments, the depth map is evaluated during the planar projection phase to ensure that only the foremost surface relative to the projected view receives the paint. Additionally, the surface normals are taken into consideration during the projection process to avoid projecting paint onto surfaces that are perpendicular to or facing away from the projection view.
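The two tests described above (depth-map comparison and normal-facing check) may be sketched as a single predicate; the function name, tolerance, and vector conventions are illustrative assumptions, not taken from the disclosure:

```python
def receives_paint(surface_depth, depth_map_value, surface_normal,
                   view_direction, depth_eps=1e-3):
    """Decide whether a surface point should receive projected paint.

    - Depth test: only the foremost surface relative to the projecting
      view (the depth recorded in the depth map) is painted.
    - Normal test: surfaces perpendicular to, or facing away from,
      the projection view are rejected.
    """
    # Depth test against the rendered depth map.
    if abs(surface_depth - depth_map_value) > depth_eps:
        return False
    # Facing test: the normal must point back toward the camera,
    # i.e. against the view direction.
    facing = -sum(n * d for n, d in zip(surface_normal, view_direction))
    return facing > 0.0

# Front-facing, foremost surface: painted.
front = receives_paint(5.0, 5.0, (0, 0, 1), (0, 0, -1))
# Surface behind the depth-map value: not painted.
hidden = receives_paint(7.0, 5.0, (0, 0, 1), (0, 0, -1))
```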
[0016] In embodiments of the present invention, after all the projection passes are resolved for every view and every pose of the object model, the surface shader finishes its computation. The resulting rendered object model is typically posed in a different pose from the poses described above.
[0017] In various embodiments, the rendered object model is typically rendered in context of a scene, and the rendered scene is stored in memory. At a later time, the rendered scene is typically retrieved from memory and displayed to a user. In various embodiments, the memory may be a hard disk drive, RAM, DVD-ROM, CD-ROM, film media, print media, and the like.
[0018] According to the above, embodiments of the present invention allow users to pose articulated three-dimensional object models in multiple configurations for receiving projected paint from multiple views. These embodiments increase the efficiency and effectiveness of

applying surface parameters, such as multiple texture maps, colors, and the like, onto surfaces of complex deformable three dimensional object models.
[0019] Advantages of embodiments of the present invention include the capability to allow the user to paint any three dimensional object model from multiple viewpoints and multiple pose configurations of the object. The concept of multiple pose configurations allows the user to paint in areas that may not be directly accessible unless the model is deformed or decomposed into smaller pieces.
[0020] Embodiments of the present invention introduce unique techniques for organizing the multiple views/poses and for applying the resulting texture maps back onto the object. More specifically, the embodiments selectively control which surfaces receive paint using the surface orientation (normals) and depth maps rendered from the projecting views.
[0021] According to one aspect of the invention, a method for a computer system is described. One method includes posing at least a portion of a three-dimensional object model in a first configuration, determining a first two-dimensional view of at least the portion of the three-dimensional object model while in the first configuration, posing the portion of the three-dimensional object model in a second configuration, and determining a second two-dimensional view of the portion of the three-dimensional object model while in the second configuration. Various techniques also include associating a first two-dimensional image with the first two-dimensional view of at least the portion of the object model, and associating a second two-dimensional image with the second two-dimensional view of the portion of the object model. The process may also include associating a first set of surface parameters with a surface of at least the portion of the three-dimensional object model that is visible in the first two-dimensional view in response to the first two-dimensional image and in response to the first configuration for at least the portion of the three-dimensional object model, and associating a second set of surface parameters with a surface of the portion of the three-dimensional object model that is visible in the second two-dimensional view in response to the second two-dimensional image and in response to the second configuration for the portion of the three-dimensional object model.
[0022] According to another aspect of the invention, a computer program product for a computer system including a processor is described. The computer program product includes code that directs the processor to receive a first configuration for at least a portion of a three-dimensional object, code that directs the processor to determine a first two-dimensional

image, wherein the first two-dimensional image exposes a surface of at least the portion of the three-dimensional object in the first configuration, code that directs the processor to receive a second configuration for at least the portion of the three-dimensional object, and code that directs the processor to determine a second two-dimensional image, wherein the second two-dimensional image exposes a surface of at least the portion of the three-dimensional object in the second configuration. Additional computer code may include code that directs the processor to receive a first two-dimensional paint image, wherein the first two-dimensional paint image is associated with the first two-dimensional image, and code that directs the processor to receive a second two-dimensional paint image, wherein the second two-dimensional paint image is associated with the second two-dimensional image. The code may also include code that directs the processor to determine a first group of parameters in response to the first two-dimensional paint image, wherein the first group of parameters is associated with the surface of at least the portion of the three-dimensional object in the first configuration, and code that directs the processor to determine a second group of parameters in response to the second two-dimensional paint image, wherein the second group of parameters is associated with the surface of at least the portion of the three-dimensional object in the second configuration. The codes may include machine readable or human readable code on a tangible media. Typical media includes a magnetic disk, an optical disk, or the like.
[0023] According to yet another aspect of the present invention, a computer system is described. The computer system typically includes a display, a memory, and a processor. In one computer system, the memory is configured to store a model of a three-dimensional object, a first pose and a second pose for the three-dimensional object, a first two-dimensional image and a second two-dimensional image, and surface shading parameters associated with a surface of the three-dimensional object. In the computer system, the processor is typically configured to output a first view of the three-dimensional object in the first pose to the display, configured to output a second view of the three-dimensional object in the second pose to the display, and configured to receive the first two-dimensional image and to receive the second two-dimensional image. The processor may also be configured to determine a first set of surface parameters associated with surfaces of the three-dimensional object in response to the first view of the three-dimensional object and in response to the first two-dimensional image, and configured to determine a second set of surface parameters

associated with additional surfaces of the three-dimensional object in response to the second view of the three-dimensional object and in response to the second two-dimensional image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] In order to more fully understand the present invention, reference is made to the accompanying drawings. Understanding that these drawings are not to be considered limitations in the scope of the invention, the presently described embodiments and the presently understood best mode of the invention are described with additional detail through use of the accompanying drawings in which:
[0025] Fig. 1 illustrates a block diagram of a system according to one embodiment of the present invention;
[0026] Fig. 2 illustrates a block diagram of an embodiment of the present invention;
[0027] Figs. 3A-B illustrate a flow process according to an embodiment of the present invention;
[0028] Fig. 4 illustrates an example of an embodiment;
[0029] Figs. 5A-C illustrate one example of an embodiment of the present invention;
[0030] Figs. 6A-D illustrate another example of an embodiment of the present invention; and
[0031] Figs. 7A-C illustrate another example of an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
[0032] In the following patent disclosure, the following terms are used:
[0033] Gprim (geometric primitive): a single three dimensional surface defined by a parametric function (Bspline), by a collection of three dimensional points organized in an arbitrary number of faces (polygons), or the like.

[0034] Model (Object Model): a collection of Gprims organized in an arbitrary number of faces (polygon meshes and subdivision surfaces), implicit surfaces or the like. The system does not require a 2D surface parameterization to perform its operations.
[0035] View: an orthographic or perspective camera that can generate an image of the model from a specific viewpoint.
[0036] Pose: the state of the model in terms of specific rigid transformations in its hierarchy and specific configuration of its Gprims. A pose also describes the state of one or more views.
[0037] A pose typically includes the position and orientation of both a model and all the view cameras. In embodiments of the present invention, a pose specifies a particular configuration or orientation of more than one object within an object model. For example, a pose may specify that two objects are a particular distance from each other, or that two objects are at a particular angle with respect to each other, or the like. Examples of different poses of objects will be illustrated below.
[0038] Whenever the character is positioned, its position is typically saved as a named pose so it can be referenced later by the system and the user. A user saves a new pose after repositioning the model and establishing the new camera views. A painting (overlay image) that is created in a particular view is tied intimately to that camera's position and orientation.
[0039] Pass: a type of painting, for example "color" or "displacement", along with the number of color channels that are to be used in the pass. The name provides a handle with which to reference a set of paintings within the shader. Typically the names are arbitrary.
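As an illustrative sketch only (the names and structure are assumptions, not part of the disclosure), a pass definition couples an arbitrary name, used as the handle inside the shader, with a channel count:

```python
# A pass couples a name (the handle used within the shader) with the
# number of color channels the pass carries; the entries are illustrative.
passes = {
    "color":        {"channels": 3},  # RGB paint
    "displacement": {"channels": 1},  # scalar displacement
}

def channels_for(pass_name):
    """Look up the channel count for a named pass."""
    return passes[pass_name]["channels"]
```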
[0040] Fig. 1 is a block diagram of typical computer system 100 according to an embodiment of the present invention.
[0041 ] In the present embodiment, computer system 100 typically includes a monitor 110, computer 120, a keyboard 130, a user input device 140, a network interface 150, and the like.
[0042] In the present embodiment, user input device 140 is typically embodied as a computer mouse, a trackball, a track pad, wireless remote, drawing tablet, an integrated display and tablet (e.g. Cintiq by Wacom), voice command system, eye tracking system, and

the like. User input device 140 typically allows a user to select objects, icons, text and the like that appear on the monitor 110.
[0043] Embodiments of network interface 150 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), (asynchronous) digital subscriber line (DSL) unit, and the like. Network interface 150 is typically coupled to a computer network as shown. In other embodiments, network interface 150 may be physically integrated on the motherboard of computer 120, may be a software program, such as soft DSL, or the like.
[0044] Computer 120 typically includes familiar computer components such as a processor 160, and memory storage devices, such as a random access memory (RAM) 170, disk drives 180, and system bus 190 interconnecting the above components.
[0045] In one embodiment, computer 120 is a PC compatible computer having one or more microprocessors such as Pentium IV™ or Xeon™ microprocessors from Intel Corporation. Further, in the present embodiment, computer 120 typically includes a LINUX-based operating system.
[0046] RAM 170 and disk drive 180 are examples of tangible media for storage of data, audio / video files, computer programs, scene descriptor files, object data files, overlay images, depth maps, shader descriptors, a rendering engine, a shading engine, output image files, texture maps, displacement maps, painting environment, object creation environments, animation environments, surface shading environment, asset management systems, databases and database management systems, and the like. Other types of tangible media include floppy disks, removable hard disks, optical storage media such as CD-ROMS, DVDs and bar codes, semiconductor memories such as flash memories, read-only-memories (ROMS), battery-backed volatile memories, networked storage devices, and the like.
[0047] In the present embodiment, computer system 100 may also include software that enables communications over a network such as the HTTP, TCP/IP, RTP/RTSP protocols, and the like. In alternative embodiments of the present invention, other communications software and transfer protocols may also be used, for example IPX, UDP or the like.
[0048] Fig. 1 is representative of computer rendering systems capable of embodying the present invention. It will be readily apparent to one of ordinary skill in the art that many other hardware and software configurations are suitable for use with the present invention. For example, the computer may be a desktop, portable, rack-mounted or tablet configuration. Further, the use of other microprocessors is contemplated, such as Pentium™ or Itanium™

microprocessors; Opteron™ or AthlonXP™ microprocessors from Advanced Micro Devices, Inc.; PowerPC G4™, G5™ microprocessors from Motorola, Inc.; and the like. Further, other types of operating systems are contemplated, such as Windows® operating systems such as WindowsXP®, WindowsNT®, or the like from Microsoft Corporation, Solaris from Sun Microsystems, LINUX, UNIX, MAC OS from Apple Computer Corporation, and the like.
[0049] Fig. 2 illustrates a block diagram of an embodiment of the present invention. Specifically, Fig. 2 illustrates an animation environment 200, an object creation environment 210, and a storage system 220.
[0050] In the present embodiment, object creation environment 210 is an environment that allows users (modellers) to specify object articulation models, including armatures and rigs. Within this environment, users can create models (manually, procedurally, etc.) of objects and specify how the objects articulate with respect to animation variables (Avars). In one specific embodiment, object creation environment 210 is a Pixar proprietary object creation environment known as "Gepetto." In other embodiments, other types of object creation environments can be used.
[0051] In the present embodiment, object creation environment 210 may also be used by users (shaders) to specify surface parameters of the object models. As will be described below, an environment may be provided within object creation environment 210, or separately, that allows users to assign parameters to the surfaces of the object models via painting. In various embodiments, the surface parameters include color data, texture mapping data, displacement data, and the like. These surface parameters are typically used to render the object within a scene.
[0052] In embodiments of the present invention, the environment allows the user to define poses for the object model. Additionally, it allows the user to render views of the object model in the different poses. The environment also provides the mechanism to perform planar projections (with possible use of depth maps and surface normals) on "reference" poses, also known as Pref, while shading and rendering the standard object configuration, and maintains association among the different views, different poses, different paint data, different surface parameter data, and the like, as will be described below.
[0053] In the present embodiment, the object models that are created with object creation environment 210 may also be used in animation environment 200. Typically, object models are hierarchically built. The hierarchical nature for building-up object models is useful

because different users (modellers) are typically assigned the tasks of creating the different models. For example, one modeller is assigned the task of creating a hand model 290, a different modeller is assigned the task of creating a lower arm model 280, and the like.
[0054] In the present embodiment, animation environment 200 is an environment that allows users (animators) to manipulate object articulation models, via the animation variables (Avars). In one embodiment, animation environment 200 is a Pixar proprietary animation environment known as "MenV," although in other embodiments, other animation environments could also be adapted. In this embodiment, animation environment 200 allows an animator to manipulate the Avars provided in the object models (generic rigs) and to move the objects with respect to time, i.e. animate an object.
[0055] In other embodiments of the present invention, animation environment 200 and object creation environment 210 may be combined into a single integrated environment.
[0056] In Fig. 2, storage system 220 may include any organized and repeatable way to access object articulation models. For example, in one embodiment, storage system 220 includes a simple flat-directory structure on a local drive or network drive; in other embodiments, storage system 220 may be an asset management system or a database access system tied to a database, or the like. In one embodiment, storage system 220 receives references to object models from animation environment 200 and object creation environment 210. In return, storage system 220 provides the object model stored therein. Storage system 220 typically also stores the surface shading parameters, overlay images, depth maps, etc. discussed herein.
[0057] Previously, Pixar's object creation environment allowed a user to paint and project images (textures) from multiple views onto an object model in a specific configuration (pose). However, the inventors of the present invention recognized the object creation environment did not support views of objects in different poses and that it was difficult to apply textures on complicated three dimensional models in a single pose.
[0058] Figs. 3A-B illustrate a flow process according to an embodiment of the present invention. Initially, a three dimensional object model is provided, step 300. Typically, one or more users (object modelers) specify a geometric representation of one or more objects via an object creation environment. Together, these objects are combined to form a model of a larger object. In the present embodiment, the modeler may use an object creation environment such as Gepetto, or the like.

[0059] Next, in the present embodiment, additional users specify how the surface of the objects should appear. For example, such users (shaders) specify any number of surface effects of the object such as base color, scratches, dirt, displacement maps, roughness and shininess maps, transparency and control over material type. To do so, the user takes the three dimensional object model and specifies an initial pose, step 310. In other embodiments, this step need not be specifically performed, as objects have default poses. For example, an object model for a character such as an automobile may have a default pose with its doors closed. At the same time, the user typically specifies one or more view camera positions. In other embodiments, a number of default cameras may be used for each object. For example, in various embodiments, commonly specified projection views include a top view, a left side view, a right side view, a bottom view, and the like. Additionally, the camera may be an oblique view, or the like. Projection views may be planar, but may also be non-planar and projective. As examples, perspective projections are contemplated, where curved projectors map an image onto a curved surface.
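The pose and view-camera data described above could be recorded with data structures along the following lines; this is an illustrative sketch only, and the class names, fields, and default cameras are assumptions rather than part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class View:
    """A named projection camera (assumed orthographic here)."""
    name: str        # e.g. "front", "left", "top"
    position: tuple  # camera position in world space
    direction: tuple # unit view direction

@dataclass
class Pose:
    """A named configuration of the object model plus its view cameras."""
    name: str
    transforms: dict = field(default_factory=dict)  # gprim name -> rigid transform
    views: list = field(default_factory=list)       # View instances tied to this pose

# A default pose with commonly specified projection views:
default_pose = Pose(
    name="default",
    views=[
        View("front", (0, 0, 10), (0, 0, -1)),
        View("left", (-10, 0, 0), (1, 0, 0)),
        View("top", (0, 10, 0), (0, -1, 0)),
    ],
)
```

Each `View` is tied to the `Pose` it was created under, mirroring the linkage between paintings and camera positions described earlier.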
[0060] Next, in the present embodiment, the computer system renders a two dimensional view of the three-dimensional object in the first pose, step 320. More specifically, the system renders each view by using the object model, the pose, and view camera data. In various embodiments, the rendering pass may be a high quality rendering via a rendering program such as Pixar's Renderman product. In other embodiments, the rendering / shading process may be performed with a low quality rendering process, such as GL, and GPU hardware and software renderers.
[0061] In various embodiments, each rendered view may be stored as an individual view image file, or combined into a larger file. Along with each rendered view, a depth map is also generated, which the planar projection function, described below, utilizes.
[0062] In the present embodiment, the system displays the one or more rendered views of the object, step 330. In embodiments of the present invention, this step occurs in a user environment that allows the user to graphically assign pixel values on a two dimensional image. Commonly, such environments are termed to include "paint" functionality. In one embodiment, the one or more views can be simultaneously displayed to the user.
[0063] Next, in embodiments, a user assigns pixel values to the views of the object, step 340. In one embodiment, the user performs this action by graphically painting "on top" of the

views of the object. The painting is analogous to a child painting or coloring an image in a coloring book. For example, the user applies different brushes onto the views of the object, using an overlay layer or the like. The use of mechanisms analogous to "layers" is contemplated herein. In the present embodiments, the different brushes may have one or more gray scale values, one or more colors, and the like.
[0064] As an example, the user may use a fine black brush to draw a crack-type pattern in an overlay layer of the view. In another example, the user may use a spray paint-type brush to darken selected portions in an overlay layer of the view. In yet another example, the user may use a paint brush to color an overlay layer of the view. In still other embodiments, other ways to specify an overlay layer image are also contemplated, such as the application of one or more gradients to the image, the application of manipulations limited to specific portions of the overlay layer image (e.g. selections), the inclusion of one or more images into an overlay layer image (e.g. a decal layer), and the like.
[0065] In the present embodiment, the overlay layer image for each view is then stored, step 350. In various examples, the overlay layer images are stored in separate and identifiable files from the two dimensional view. In other embodiments, the overlay layer image is stored in a layer of the two dimensional view, or the like. In various embodiments, the file including the overlay layer image is also associated with the pose defined in step 310, and the depth map determined in step 320, step 360.
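The association maintained in step 360 could be recorded as a catalog entry along these lines; the file names, field names, and lookup helper are purely illustrative assumptions, not part of the disclosure:

```python
# Hypothetical catalog entry tying an overlay painting to the view,
# pose, and depth map it was created against (all names illustrative).
overlay_record = {
    "overlay_file": "table_color_front.tif",  # painted overlay layer image
    "pass_name": "color",                     # pass type, e.g. "color"
    "pose": "exploded",                       # pose defined in step 310
    "view": "front",                          # camera the view was rendered from
    "depth_map": "table_front_depth.tif",     # depth map from step 320
}

def overlays_for_pose(records, pose_name):
    """Look up every overlay painted against a given pose."""
    return [r for r in records if r["pose"] == pose_name]
```

Keeping this linkage explicit is what later allows paint from each view to be projected back onto the correct pose of the object.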
[0066] In the present embodiment, the user may decide to re-pose the three-dimensional object in a second pose, step 370. The process described above is then repeated. In various embodiments, the process of re-posing the three-dimensional object, creating one or more views and depth maps, painting on top of the views, etc. can be repeated for as many poses as the user deems necessary. As an example, for a character object, one pose may be the character with an opened mouth and arms up, and another pose may be the character with a closed mouth and arms down. As another example, for a folding table, one pose may be the folding table unfolded, and another pose may be the folding table with its legs "exploded" or separated from the table top.
[0067] In some embodiments of the present invention, a user may see views derived from different poses of the three-dimensional object on the screen at the same time. Accordingly, the process of viewing and painting described above need not be performed only based upon

one pose of the object at a time. Additionally, the user may paint on top of views of the object from different poses in the same session. For example, for a character posed with its mouth open, the user may paint white on a layer on a view showing the character's mouth, then the user may paint black on a layer of a view showing the character's hair, then the user
may repaint a different shade of white on the layer on the view showing the character's mouth, and the like.
[0068] In the present embodiment, the next step is to associate values painted on each view of the three-dimensional object in the first pose back to the object, step 380. More specifically, each view of the object is typically a projection of surfaces of the three-dimensional object in the first pose into two-dimensions. Accordingly, portions of the object that appear to be "painted upon" by the overlay image are projected back to the three-dimensional object using the associated depth map. This functionality is enabled because the system maintains a linkage among the overlay image, the view, and the pose of the three-dimensional object. In cases where there are multiple rendered views, the paint is projected back for each rendered view to the three-dimensional object in the first pose.
[0069] In embodiments of the present invention, surface normals may be used to "feather" the effect of the projection onto surfaces of the three-dimensional object. For example, for a surface parallel to the projection view, the paint effect may be calculated to be ~100%; whereas for a surface that is at a 30 degree angle to the projection view, the paint effect may be calculated to be ~50% (sin(30)); whereas for a surface that is at a 60 degree angle to the projection view, the paint effect may be calculated to be ~13% (sin(60)); and the like. The amount of feathering may be adjusted by the user. In other embodiments, feathering may be used to vary the paint effect at transition areas, such as the edges or borders of the object, and the like. In various embodiments, feathering of the projected paint reduces smearing of the projected paint.
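One reading of the example percentages above is an attenuation of roughly 1 − sin(θ), where θ is the angle between the surface and the projection view plane (0° → 100%, 30° → 50%, 60° → ~13%). A sketch under that assumption, with a hypothetical user-adjustable `feather_amount`:

```python
import math

def feather_weight(surface_angle_deg, feather_amount=1.0):
    """Attenuate projected paint based on surface orientation.

    surface_angle_deg: angle between the surface and the projection
                       view plane (0 = facing the view head-on).
    feather_amount:    user-adjustable feathering strength in [0, 1];
                       0 disables feathering entirely.
    Returns a weight in [0, 1] used to scale the paint effect.
    """
    theta = math.radians(surface_angle_deg)
    # 1 - sin(theta): 0 deg -> 1.0, 30 deg -> 0.5, 60 deg -> ~0.13,
    # matching the example percentages in the text.
    weight = max(0.0, 1.0 - math.sin(theta))
    # Blend toward no attenuation as the user dials feathering down.
    return 1.0 - feather_amount * (1.0 - weight)
```

Scaling each projected paint value by this weight fades the paint as surfaces turn away from the view, which is what suppresses smearing at grazing angles.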
[0070] Fig. 4 illustrates an example of an embodiment. In this example, a three-dimensional cylinder 500 appears as a rectangle 510 in a two-dimensional view 520 of cylinder 500. According to the above process, a user paints an overlay image 530 on top of view 520. In this example, the user paints the bottom half of cylinder 500 black.
[0071] Next, as illustrated in Fig. 4, the overlay image is projected back to the three-dimensional cylinder; accordingly, the model of the front bottom surface 540 of cylinder 500 is associated with the property or color of black and feathered as the surface normal points away from the viewing plane. The back bottom surface 550 of cylinder 500 is not associated with the color black, as it was not exposed in view 520.
[0072] In the present example, a back view 560 and a bottom view 570 of cylinder 500 could be specified to expose the remaining bottom-half surfaces of cylinder 500.
[0073] Returning to Fig. 3, the next step is to associate the values painted on each view of the three-dimensional object in the second pose back to the object, step 390. Similar to the above, each view of the object is typically a projection of surfaces of the three-dimensional object in the second pose into two dimensions. Accordingly, portions of the object that appear to be "painted upon" by the overlay image are projected back to the three-dimensional object using the associated depth map. Again, in cases where there are multiple rendered views, the paint is projected back to the three-dimensional object in the second pose for each rendered view.
[0074] In the present embodiment, the planar projections from step 380 and step 390 are combined and both projected back to the surface of the three-dimensional object, step 400. In other words, the user may paint upon rendered views of the three-dimensional object in different poses, and have the paint data be projected back to a single three-dimensional object in a neutral pose.
[0075] The inventors of the present invention believe that the above functionality is significant, as it allows the user to "paint" hard-to-reach portions of a three-dimensional object by reposing the three-dimensional object and painting upon the resulting rendered view. As an example, one pose may be a character with its mouth closed, and another with the mouth open. Further examples of the use of embodiments of the present invention are illustrated below.
[0076] In embodiments of the present invention, this step 400 can be performed before a formal rendering of the three-dimensional object. In other embodiments, step 400 occurs dynamically during a formal rendering process. For example, the data from steps 380 and 390 may be maintained in separate files. Then, when the object is to be rendered in high quality (e.g. with Pixar's RenderMan), the system dynamically combines the planar projection data from the three-dimensional object in the first pose with the planar projection data from the three-dimensional object in the second pose.
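The dynamic combination of per-pose projection data might be sketched as follows. This is only an illustration: the per-pose files are represented as dictionaries mapping a hypothetical surface identifier (e.g. a patch id plus (u, v) coordinate) to a paint value, and "later poses override earlier ones" is an assumed merge policy, not one stated in the text.

```python
def combine_pose_projections(pose_projections):
    """Merge per-pose planar projection data into one paint map.

    pose_projections: list of dicts, one per pose, each mapping a
                      surface identifier to the paint value projected
                      back from that pose.
    Later poses override earlier ones where they overlap; blending the
    overlapping values would be an equally valid alternative policy.
    """
    combined = {}
    for projection in pose_projections:
        combined.update(projection)
    return combined
```

At render time the combined map plays the role of a single set of surface parameters for the object, regardless of which pose each stroke was painted in.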

[0077] The combined planar projection data is then used to render the three-dimensional object in, typically, a third pose, step 410. As an example, the first pose may be a character with both arms down, the second pose may be the character with both arms up, and the third pose may be the character with only one arm up.
[0078] In embodiments of the present invention, the paint data may specify any number of properties for the surface of the object. Such properties are also termed shading pass data. For a typical object surface, there may be more than one hundred shading passes. For example, the paint data may specify surface colors, application of texture maps, application of displacement maps, and the like. In embodiments of the present invention, the planar projections from steps 380 and 390 may apply the same properties or different properties to the surface of the object. For example, step 380 may be a surface "crack" pass, and step 390 may be a surface color pass.
[0079] In various embodiments, the object is rendered at the same time as other objects in a scene. The rendered scene is typically another two-dimensional image that is then stored, step 420. In embodiments of the present invention, the rendered scene can be stored in optical form such as film, optical disk (e.g. CD-ROM, DVD), or the like; magnetic form such as a hard disk, network drive, or the like; or electronic form such as an electronic signal, a data packet, or the like. The representation of the resulting rendered scene may later be retrieved and displayed to one or more viewers, step 430.
[0080] Figs. 5A-C illustrate one example of an embodiment of the present invention. Specifically, Fig. 5A illustrates a three-dimensional model of a box 600 in a closed pose. In Fig. 5B, a number of two-dimensional views of box 600 are illustrated, including a front view 610, a top view 620, and a side view 630.
[0081] In Fig. 5C, a user creates overlay images 640-660 on top of views 610-630, respectively. As discussed above, the user typically paints on top of views 610-630 to create overlay images 640-660. Fig. 5D illustrates a three-dimensional model of box 670 in the closed pose after overlay images 640-660 are projected back to the three-dimensional model of box 600 in the closed pose.
[0082] Figs. 6A-D illustrate another example of an embodiment of the present invention. Specifically, Fig. 6A illustrates a three-dimensional model of a box 700 in an open pose. In Fig. 6B, two-dimensional views of box 700 are illustrated, including a top view 710, a first cross-section 720, and a second cross-section 730.
[0083] In Fig. 6C, a user creates overlay images 740-760 on top of views 710-730, respectively. Again, the user typically paints on top of the respective views to create the overlay images. Fig. 6D illustrates a three-dimensional model of box 770 in the open pose after overlay images 740-760 are projected back to the three-dimensional model of box 700 in the open pose.
[0084] In the present embodiment, the three-dimensional models of box 670 and of box 770 are then combined into a single three-dimensional model. Illustrated in Fig. 6E is a single three-dimensional model of a box 780 including the projected-back data from Figs. 5C and 6C. As shown in Fig. 6E, the three-dimensional model may be posed in a pose different from the pose in Fig. 5A or Fig. 6A.
[0085] Figs. 7A-C illustrate another example of an embodiment of the present invention. More specifically, Fig. 7A illustrates a three-dimensional model of a stool in a default pose 800. In Fig. 7B, a number of views 810 of stool 800 in the default pose are illustrated. In this example, the user can paint upon views 810, as described above. Fig. 7C then illustrates the three-dimensional model of the stool in a second pose 820. As can be seen, the legs 830 of the stool are "exploded" or separated from the sitting surface. A number of views 840 are illustrated. In this example, it can be seen that with views 840, the user can more easily paint the bottom of the sitting surface 850 and the legs 860 of the stool.
[0086] Many changes or modifications are readily envisioned. In light of the above disclosure, one of ordinary skill in the art would recognize that the concepts described above may be applied to any number of environments. For example, the painting functions may be provided integral to an object creation environment, a separate shading environment, a third-party paint program (e.g. Photoshop, Maya, Softimage), and the like. Some embodiments described above use planar projection techniques to form views of an object and to project the overlay layer back to the three-dimensional object. Other embodiments may also use non-planar projection techniques to form perspective views of an object and to project back to the three-dimensional object.
[0087] In other embodiments of the present invention, the process of painting in an overlay layer and performing a planar projection back to the three-dimensional object may be done in real time or near real time for multiple poses of the object. For example, the user may be presented with a first view of the object in the first pose, and a second view of the object in a second pose. Next, the user paints in an overlay layer of the first view. In this embodiment, as the user paints, a planar projection process occurs that projects the paint back to the three-dimensional object. Then, in real time or near real time, the system re-renders the first view of the object in the first pose and also the second view of the object in the second pose. In such embodiments, because the process occurs very quickly, the user can see the effect of the specification of surface parameters on the object in one pose reflected in all other poses (views of other poses).
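The paint-project-re-render loop just described can be outlined as a stroke handler. All four arguments here are hypothetical stand-ins for whatever the host application provides; the sketch only shows the control flow, not any particular implementation.

```python
def interactive_paint_session(model, views, project_back, render):
    """Skeleton of the paint -> project -> re-render loop.

    model:        the three-dimensional object being painted.
    views:        rendered views of the model in its various poses.
    project_back: callable projecting an overlay stroke onto the model.
    render:       callable re-rendering one view from the updated model.
    Returns a handler to invoke for each new paint stroke.
    """
    def on_stroke(view_index, stroke):
        # Project the new paint back to the shared 3D surface...
        project_back(model, views[view_index], stroke)
        # ...then refresh every view, so the user immediately sees the
        # stroke's effect on the object in all poses at once.
        for view in views:
            view.image = render(model, view)
    return on_stroke
```

Because every view is re-rendered from the same updated model, a stroke made on the open-mouth pose shows up at once in the closed-mouth pose's view as well.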
[0088] In embodiments of the present invention, various methods for painting upon the surface are contemplated, such as with brushes, textures, gradients, filters, and the like. Further, various methods for storing the painted images (e.g. layers) are also contemplated.
[0089] The above embodiments disclose a method for a computer system and a computer system capable of performing the disclosed methods. Additional embodiments include computer program products on tangible media including software code that allows the computer system to perform the disclosed methods, and the like.
[0090] Further embodiments can be envisioned to one of ordinary skill in the art after reading this disclosure. In other embodiments, combinations or sub-combinations of the above disclosed invention can be advantageously made. The block diagrams of the architecture and flow charts are grouped for ease of understanding. However it should be understood that combinations of blocks, additions of new blocks, re-arrangement of blocks, and the like are contemplated in alternative embodiments of the present invention.
[0091] The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that various modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims.


1. A method for defining surface parameters for a 3D object model in a computer system (100) comprises:
posing at least a portion of a three-dimensional object model in a first configuration (600);
determining a first two-dimensional view (610, 620, 630) of at least the portion of the three-dimensional object model while in the first configuration (600);
posing the portion of the three-dimensional object model in a second configuration (700);
determining a second two-dimensional view (710, 720, 730) of the portion of the three-dimensional object model while in the second configuration (700);
associating a first two-dimensional image (640, 650, 660) with the first two-dimensional view (610, 620, 630) of at least the portion of the object model;
associating a second two-dimensional image (740, 750, 760) with the second two-dimensional view (710, 720, 730) of the portion of the object model;
associating a first set of surface parameters (640, 650, 660) with a surface of at least the portion of the three-dimensional object model that is visible (610, 620, 630) in the first two-dimensional view (610, 620, 630) in response to the first two-dimensional image (640, 650, 660) and in response to the first configuration (600) for at least the portion of the three-dimensional object model (670, 780); and
associating a second set of surface parameters (740, 750, 760) with a surface of the portion of the three-dimensional object model that is visible (710, 720, 730) in the second two-dimensional view (710, 720, 730) in response to the second two-dimensional image (740, 750, 760) and in response to the second configuration (700) for the portion of the three-dimensional object model (770, 780).

2. The method as claimed in claim 1 wherein the first two-dimensional view (610, 620, 630) of the portion of the three-dimensional object model (500) is selected from the group: front view (640, 510), side view (660), top view (650), bottom view (570).
3. The method as claimed in claim 1 wherein the portion of the three-dimensional object model (800) comprises a first object (820) and a second object (830); wherein the first configuration for at least the portion of the three-dimensional object model comprises the first object (820) and the second object (830) having a first relationship (810);
wherein the second configuration for the portion of the three-dimensional object model comprises the first object (820) and the second object (830) having a second relationship (840); and wherein the first relationship (810) and the second relationship (840) are different.
4. The method as claimed in claim 3 wherein the first relationship (810) and the
second relationship (840) are selected from the group: linear relationship (840),
angular relationship (810).
5. The method as claimed in claim 1 comprising: displaying the first two-
dimensional view (610, 620, 630, 520) of at least the portion of the object model (500)
on a display (110); and
creating the first two-dimensional image (640, 650, 660) by painting on top of the first two-dimensional view (610, 620, 630, 520) of at least the portion of the object model (500) on the display (110).
6. The method as claimed in claim 5 wherein the first set of surface parameters
(540, 550) is selected from the group including: surface color, surface appearance,
displacement maps, texture maps.

7. The method as claimed in claim 6 comprising: rendering the portion of the three-
dimensional object model in response to the first set of surface parameters and the
second set of surface parameters to form a rendered object (780); and
storing a representation of the rendered object in a tangible media (170, 180).
8. A system configured to perform the method as claimed in any of the preceding
method claims.



Patent Number: 234668
Indian Patent Application Number: 687/CHENP/2006
PG Journal Number: 29/2009
Publication Date: 17-Jul-2009
Grant Date: 11-Jun-2009
Date of Filing: 24-Feb-2006
Name of Patentee: PIXAR
Applicant Address: 1200 Park Avenue, Emeryville, CA 94608
Inventors:
1. HAHN, Thomas, 1236 Ashmount Avenue, Piedmont, CA 94610
2. SAYRE, Rick, 6 Windsor Avenue, Kensington, CA 94708
PCT International Classification Number: G09G5/00
PCT International Application Number: PCT/US2004/008993
PCT International Filing Date: 2004-03-23
PCT Conventions:
1. Application Number 60/491,160, Date of Convention Priority 2003-07-29, Country U.S.A.