Title of Invention

STORAGE MEDIUM STORING METADATA FOR PROVIDING ENHANCED SEARCH FUNCTION

Abstract
A storage medium is provided that stores metadata for providing an enhanced search function using various search keywords for audio-visual (AV) data. The storage medium stores: AV data; and metadata for conducting an enhanced search of the AV data by scene using information regarding at least one search keyword. The metadata may include information regarding an entry point and/or duration, angles, etc. of each scene. Hence, the enhanced search can be conducted using various search keywords. Further, search results can be reproduced according to diverse scenarios, and the enhanced search function can be provided for movie titles that support multiple angles or multiple paths. Moreover, metadata can be created in multiple languages, thereby enabling the enhanced search function to support multiple languages.

Description
STORAGE MEDIUM STORING METADATA FOR PROVIDING
ENHANCED SEARCH FUNCTION
Technical Field
[1] The present invention relates to reproducing audio-visual (AV) data recorded on a
storage medium, and more particularly, to a storage medium storing metadata for providing an enhanced search function.
Background Art
[2] Storage media, such as DVDs and Blu-ray discs (BDs), store audio-visual (AV)
data composed of video, audio, and/or subtitles that are compression-encoded according to standards for digital video and audio compression, such as an MPEG (Moving Picture Experts Group) standard. Storage media also store additional information, such as encoding properties of the AV data or the order in which the AV data is to be reproduced. In general, moving pictures recorded on a storage medium are sequentially reproduced in a predetermined order. However, the moving pictures can also be reproduced in units of chapters while the AV data is being reproduced.
[3] FIG. 1 illustrates a structure of AV data recorded on a typical storage medium. As
shown in FIG. 1, a storage medium (such as the medium 250 shown, for example, in FIG. 2) is typically formed with multiple layers in order to manage the structure of the AV data recorded thereon. The data structure 100 includes one or more clips 110 that are recording units of a multimedia image (AV data); one or more playlists 120 that are reproducing units of a multimedia image (AV data); movie objects 130 including navigation commands that are used to reproduce a multimedia image (AV data); and an index table 140 that is used to specify a movie object to be first reproduced and titles of the movie objects 130.
[4] The clips 110 are each implemented as one object which includes a clip AV stream 112
containing an AV data stream for a high-picture-quality movie and clip information 114 containing attributes corresponding to the AV data stream. For example, the AV data stream may be compressed according to a standard such as the Moving Picture Experts Group (MPEG) standard. However, the AV data stream 112 need not be compressed in all aspects of the present invention. In addition, the clip information 114 may include audio/video properties of the clip AV stream 112, an entry point map in which information regarding a location of a randomly accessible entry point is recorded in units of a predetermined section, and the like.
[5] Each playlist 120 includes a playlist mark composed of marks which indicate the
positions of clips 110 corresponding to the playlist 120. Each playlist 120 also includes
a set of reproduction intervals of these clips 110, and each reproduction interval is referred to as a play item 122. Hence, the AV data can be reproduced in units of playlists 120 and in the order of the play items 122 listed in each playlist 120.
[6] The movie object 130 is formed with navigation command programs, and these
navigation commands start reproduction of a playlist 120, switch between movie objects 130, or manage reproduction of a playlist 120 according to a user's preference.
[7] The index table 140 is a table at the top layer of the storage medium that defines a
plurality of titles and menus, and includes start location information of all titles and menus such that a title or menu selected by a user operation, such as a title search or a menu call, can be reproduced. The index table 140 also includes start location information of a title or menu that is automatically reproduced first when a storage medium is placed on a reproducing apparatus.
Disclosure of Invention
Technical Problem
[8] However, with such a storage medium, there is no method for jumping to an arbitrary
scene according to a search condition (e.g., scene, character, location, sound, or item) desired by a user and reproducing the scene. In other words, a typical storage medium does not provide a function for moving to a portion of the AV data according to a search condition (e.g., scene, character, location, sound, or item) set by the user and reproducing that portion. Therefore, the storage medium cannot offer diverse search functions.
[9] Since AV data is compression-encoded according to the MPEG-2 standard, multiplexed, and recorded on a conventional storage
medium, it is difficult to manufacture a storage medium that contains the metadata needed to search for a moving picture. In addition, once a storage medium is manufactured, it is almost impossible to edit or reuse the AV data or metadata stored in the storage medium.
[10] Further, a currently defined playlist mark cannot distinguish multiple angles or
multiple paths. Therefore, even when AV data supports multiple angles or multiple paths, it is difficult to provide diverse enhanced search functions on the AV data.
Technical Solution
[11] Various aspects and example embodiments of the present invention provide a
storage medium storing metadata for providing an enhanced search function using various search keywords of audio-visual (AV) data. In addition, the present invention also provides a storage medium storing metadata for actively providing an enhanced search function in connection with AV data in various formats, and an apparatus and method for reproducing the storage medium.
Advantageous Effects

[12] As described above, the present invention provides a storage medium storing
metadata for providing an enhanced search function using various search keywords for
AV data, an apparatus and method for reproducing the storage medium. The present
invention can also provide the enhanced search function in connection with AV data in
various formats.
[13] In other words, the metadata for providing the enhanced search function is defined
scene by scene by an author, and each scene includes information regarding at least one
search keyword. In addition, each scene includes information regarding an entry point
and/or duration, angles, and so on. Hence, the enhanced search can be
conducted using various search keywords.
[14] Further, search results can be reproduced according to diverse scenarios, and the
enhanced search function can be provided for movie titles that support multiple angles
or multiple paths. Moreover, metadata can be created in multiple languages, thereby
enabling the provision of the enhanced search function that supports multiple
languages.
Description of Drawings
[15] FIG. 1 illustrates a structure of AV data recorded on a typical storage medium;
[16] FIG. 2 is a block diagram of an example reproducing apparatus which reproduces a
storage medium storing metadata for providing an enhanced search function according
to an embodiment of the present invention;
[17] FIG. 3 is a flowchart illustrating a method of reproducing a recording medium
storing the metadata for providing the enhanced search function according to an
embodiment of the present invention;
[18] FIG. 4 illustrates example screens displayed in an example of searching for a desired
scene using metadata for a title scene search;
[19] FIG. 5 illustrates the relationship between metadata for a title scene search and
audio-visual (AV) data according to an embodiment of the present invention;
[20] FIG. 6 illustrates a directory of metadata according to an embodiment of the
present invention;
[21] FIG. 7 illustrates a naming rule of an example metadata file according to an
embodiment of the present invention;
[22] FIG. 8 illustrates the structure of the metadata according to an embodiment of the
present invention;
[23] FIG. 9 illustrates a detailed structure of metadata shown in FIG. 8;
[24] FIG. 10 illustrates the application scope of a title which provides the enhanced
search function;
[25] FIG. 11 illustrates an application of the metadata according to an embodiment of
the present invention;

[26] FIG. 12 illustrates an application of the metadata according to another embodiment
of the present invention;
[27] FIG. 13 illustrates an example of a highlight playback using the metadata
according to an embodiment of the present invention;
[28] FIG. 14 illustrates a multi-angle title that provides the enhanced search function
using the metadata according to an embodiment of the present invention; and
[29] FIG. 15 illustrates a reproducing process of an example reproducing apparatus
according to an embodiment of the present invention.
Best Mode
[30] In accordance with an aspect of the present invention, there is provided a storage
medium storing: audio-visual (AV) data; and metadata for conducting an enhanced
search of the AV data by scene using information regarding at least one search
keyword.
[31] The AV data may be a movie title. The metadata may be defined for each playlist,
which is a reproduction unit of the AV data. The enhanced search may be applied to a
main playback path playlist which is automatically reproduced according to an index
table when the storage medium is loaded.
[32] The metadata may include information regarding an entry point of each scene.
Each scene may be represented as content between two neighboring entry points.
When a user searches for contents using a search keyword, search results may be
represented as a group of entry points corresponding to metadata whose search
keyword information matches the search keyword. The entry points may be
sequentially arranged temporally on a playlist.
[33] The metadata may include information regarding an entry point and duration of
each scene. When the entry points are sequentially arranged temporally, each scene
may be defined as a section between an entry point of the scene and a point at the end
of the duration of the scene.
[34] When a user searches for contents using the search keyword, a playlist may be
reproduced from an entry point of a scene selected from search results by the user to
the end of the playlist.
[35] When a user searches for contents using the search keyword, a scene selected by
the user from search results may be reproduced from the entry point of the scene for
the duration of the scene, and a next scene may be reproduced.
[36] When a user searches for contents using the search keyword, the search results may
be sequentially reproduced without waiting for a user input.
[37] When a user searches for contents using the search keyword, a scene selected by
the user from search results may be reproduced from the entry point of the scene for
the duration of the scene, and reproduction may be stopped.

[38] The metadata may further include information regarding angles supported by each
scene. When the AV data is represented by a single angle, each scene may be distinguished by the entry point thereof, and not by the information regarding the angles. No entry points found as a result of conducting the enhanced search using one search keyword may overlap each other.
[39] When the AV data is multi-angle data, each scene can be distinguished by the entry
point of the scene and the information regarding the angles. At least one of the entry points found as a result of conducting the enhanced search using one search keyword can overlap each other.
[40] The at least one search keyword may comprise at least one of a scene type, a
character, an actor, and a search keyword that can be arbitrarily defined by an author. The metadata may be recorded in a file separately from the AV data.
Mode for Invention
[41] Reference will now be made in detail to the present embodiments of the present
invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.
[42] FIG. 2 is a block diagram of an example reproducing apparatus which reproduces a
storage medium storing metadata for providing an enhanced search function according to an embodiment of the present invention. Referring to FIG. 2, the reproducing apparatus 200 includes a reading unit 210, a reproducing unit 220, a search unit 230, and a user interface 240.
[43] The reading unit 210 reads audio-visual (AV) data and metadata for providing the
enhanced search function from a storage medium 250, such as a Blu-ray disc (BD). The reproducing unit 220 decodes and reproduces the AV data. In particular, when a user inputs a search keyword, the reproducing unit 220 receives from the search unit 230 information regarding a scene matching the search keyword and reproduces the scene. When there are multiple scenes matching the search keyword, the reproducing unit 220 displays the scenes on the user interface 240 and reproduces one or more of the scenes selected by the user or sequentially reproduces all of the scenes. The reproducing unit 220 may also be called a playback control engine.
[44] The search unit 230 receives a search keyword from the user interface 240 and
searches for scenes matching the search keyword. Then, the search unit 230 transmits the search results to the user interface 240 to display the search results in the form of a list, or to the reproducing unit 220 to reproduce the same. As illustrated in FIG. 2, search results may be presented as a list of scenes matching a search keyword.
[45] The user interface 240 receives a search keyword input by a user or displays search
results. Also, when a user selects a scene from search results, i.e., a list of scenes

found, displayed on the user interface 240, the user interface 240 receives information regarding the selection.
[46] FIG. 3 is a flowchart illustrating a method of reproducing a recording medium
storing the metadata for providing the enhanced search function according to an embodiment of the present invention. Referring to the reproducing method 300 shown in FIG. 3, a user inputs a search keyword using the user interface 240, as shown in FIG. 2, at block 310. The search keyword may be a scene type, a character, an actor, an item, a location, a sound, or any word defined by an author. For example, when the movie 'The Matrix' is reproduced, all scenes in which the character 'Neo' appears can be searched for. Also, all scenes in which an item 'mobile phone' appears can be searched for.
[47] Next, all scenes matching the input search keyword are searched for with reference
to a metadata file at block 320. The metadata file defines a plurality of scenes, and includes information regarding search keywords associated with each scene and an entry point of each scene. The structure of the metadata file will be described in detail below. Portions of the AV data which correspond to the found scenes are located using the entry points of the found scenes and are reproduced at block 330. In this way, an enhanced search can be conducted on AV data using various search keywords. Hereinafter, the enhanced search function will also be referred to as a 'title scene search function.'
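For illustration only, the search at block 320 can be pictured as a filter over per-scene search keyword information. The following Python sketch assumes a hypothetical in-memory representation of the metadata; the names Scene and search_scenes are not part of the disclosed format.

    from dataclasses import dataclass, field

    @dataclass
    class Scene:
        entry_point: int                                 # start position of the scene
        keywords: dict = field(default_factory=dict)     # e.g. {"character": {"Neo"}}

    def search_scenes(scenes, category, keyword):
        """Return the scenes whose keyword information matches, ordered by entry point."""
        hits = [s for s in scenes if keyword in s.keywords.get(category, set())]
        return sorted(hits, key=lambda s: s.entry_point)

    # Example: find all scenes in which the character 'Neo' appears.
    scenes = [
        Scene(0,    {"character": {"Neo", "Trinity"}}),
        Scene(5400, {"character": {"Morpheus"}}),
        Scene(9000, {"character": {"Neo"}, "item": {"mobile phone"}}),
    ]
    for s in search_scenes(scenes, "character", "Neo"):
        print(s.entry_point)    # 0, 9000 -- entry points to reproduce at block 330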
[48] FIG. 4 illustrates example screens displayed in an example of searching for a
desired scene using the metadata for the title scene search. The metadata for the title scene search includes search information for each scene in the AV data recorded on a storage medium 250, such as a Blu-ray disc (BD). Referring to FIG. 4, while a movie title such as 'The Matrix' or 'The Lord of the Rings' is being reproduced at stage #1, a user selects the title scene search function using the user interface 240, as shown in FIG. 2, such as a remote controller, to search for scenes that are associated with a desired search keyword.
[49] The user selects one of a plurality of search keyword categories displayed on the
user interface 240 at stage #2, and selects a search keyword from the selected search keyword category at stage #3. For example, when the user selects 'item' as a search keyword category and selects 'tower' as a search keyword corresponding to 'item,' the movie title is searched for scenes in which 'tower' appears, and the search results are displayed together with respective thumbnails at stage #4. When the user selects one of the search results, i.e., the found scenes, the selected scene is reproduced at stage #5. Using a command such as 'skip to next search result' or 'skip to previous search result' on the user interface 240, a previous or next scene can be searched for and reproduced at stage #6.

[50] A 'highlight playback' function for sequentially reproducing all found scenes can
also be provided. In the highlight playback, all search results are sequentially
reproduced. As a result, there is no need to wait until a user selects one of the search
results. When a user selects a search keyword associated with contents, search results
for the selected search keyword are obtained. The search results form the highlights of
the contents associated with the selected search keyword.
[51] The structure of the metadata for the title scene search will now be described in
detail below.
[52] FIG. 5 illustrates the relationship between metadata 500 for the title scene search
and AV data on a storage medium according to an embodiment of the present invention. Referring to FIG. 5, the storage medium according to an embodiment of the present invention (such as the medium 250 shown in FIG. 2) stores the metadata 500 in addition to the AV data shown in FIG. 1. The metadata 500 may be stored in files separately from the movie playlists, which are reproducing units. A metadata file 510 is created for each playlist 520, and includes a plurality of scenes 512, which are author-defined sections of each playlist 520. Each scene 512 includes an entry point indicating a start position thereof. In example embodiments of the present invention, each scene 512 may further include the duration thereof.
[53] Using an entry point (EP) map included in clip information 114, each entry point is
converted into an address of a scene in a clip AV stream 112 included in each clip 110. Therefore, the start position of each scene included in a clip AV stream 112, which is real AV data, can be found using an entry point. Each scene 512 also includes information regarding search keywords associated therewith (hereinafter referred to as search keyword information). For example, the search keyword information may include the following:
[54] Scene 1 is a battle scene,
[55] Characters are A, B and C,
[56] Actors are a, b and c, and
[57] Location is x.
[58] Accordingly, a user can search for scenes matching a desired search keyword based
on the search keyword information of each scene 512. In addition, the start positions of found scenes in a clip AV stream 112 can be determined using the entry points of the found scenes, and then the found scenes can be reproduced.
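The conversion from an entry point to a stream address can be sketched as a lookup in the entry point map. A minimal Python sketch, assuming the EP map is available as (presentation time, stream address) pairs sorted by time; the real EP map structure in clip information 114 is more elaborate.

    import bisect

    # Hypothetical EP map: (presentation time, stream address) pairs sorted by time.
    ep_map = [(0, 0), (1800, 4200), (3600, 9100), (5400, 15800)]

    def address_for_entry_point(ep_map, pts):
        """Return the address of the last randomly accessible point at or before pts."""
        times = [t for t, _ in ep_map]
        i = bisect.bisect_right(times, pts) - 1
        if i < 0:
            raise ValueError("entry point precedes the first EP-map entry")
        return ep_map[i][1]

    print(address_for_entry_point(ep_map, 3600))    # 9100: where the scene starts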
[59] FIG. 6 illustrates a directory of metadata 500 according to an embodiment of the
present invention. Referring to FIG. 6, the metadata 500 related to the AV data shown in FIG. 5 is stored in files in respective directories. Specifically, an index table is stored in an index.bdmv file, a movie object is stored in a MovieObject.bdmv file, and playlists are stored in xxxxx.mpls files in a PLAYLIST directory. In addition, clip information is stored in xxxxx.clpi files in a CLIPINF directory, clip AV streams are stored in xxxxx.m2ts files in a STREAM directory, and other data is stored in files in an AUXDATA directory.
[60] The metadata 500 for the title scene search is stored in files in a META directory
separately from the AV data. A metadata file for a disc library is dlmt_xxx.xml, and a metadata file for the title scene search is esmt_xxx_yyyyy.xml. According to an embodiment of the present invention, the metadata 500 is recorded in an XML format, i.e., in a markup language, for easy editing and reuse. Hence, after the storage medium is manufactured, data recorded thereon can be edited and reused.
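For reference, the file layout described above can be pictured as the following tree; xxxxx and yyyyy stand for placeholder numbers, and xxx for a language code.

    root
    |-- index.bdmv
    |-- MovieObject.bdmv
    |-- PLAYLIST/xxxxx.mpls
    |-- CLIPINF/xxxxx.clpi
    |-- STREAM/xxxxx.m2ts
    |-- AUXDATA/...
    `-- META/
        |-- dlmt_xxx.xml
        `-- esmt_xxx_yyyyy.xml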
[61] FIG. 7 illustrates a naming rule of an example metadata file 510 according to an
embodiment of the present invention. Referring to FIG. 7, the name of the metadata file 510 starts with the prefix esmt_ indicating metadata 500. The next three characters indicate a language code according to the ISO 639-2 standard, and the next five characters indicate a corresponding playlist number. As described above, a metadata file 510 is created for each playlist 520, as shown in FIG. 5. In addition, a menu displayed during the title scene search can support multiple languages using the language code according to the ISO 639-2 standard.
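The naming rule can be expressed compactly in code. A minimal sketch, following the file name parts described above; the helper name metadata_filename is ours, not part of any standard.

    def metadata_filename(language_code: str, playlist_number: int) -> str:
        """Build a title scene search metadata file name, e.g. esmt_eng_00001.xml."""
        if len(language_code) != 3:
            raise ValueError("ISO 639-2 language codes are three letters")
        return f"esmt_{language_code}_{playlist_number:05d}.xml"

    print(metadata_filename("eng", 1))      # esmt_eng_00001.xml
    print(metadata_filename("kor", 123))    # esmt_kor_00123.xml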
[62] FIG. 8 illustrates the structure of an example metadata file 510 according to an
embodiment of the present invention. As described in connection with FIG. 5, each metadata file 510 includes a plurality of scenes 512. Referring to FIG. 8, each scene 512 corresponds to search keywords such as a scene type, a character, an actor, etc. A value of each search keyword may be expressed using a sub-element or an attribute of the search keyword according to an XML rule.
[63] FIG. 9 illustrates a detailed structure of an example metadata file 510 shown in
FIG. 8. Referring to FIG. 9, each scene 512 for the title scene search includes a scene type element, a character element, an actor element, or an 'authordef' element, which is an author-defined search keyword. In addition, each scene 512 includes 'entry_point' indicating the start position of each scene and 'duration' indicating a period of time during which each scene is reproduced. When multiple angles are supported, each scene 512 also includes 'angle_num' indicating a particular angle. Whether to include 'duration' and 'angle_num' in each scene 512 is optional.
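To make the element structure concrete, the sketch below encodes one scene as XML and reads it back with Python's standard library. The element and attribute names follow FIG. 9, but the sample values and the exact nesting are assumptions for illustration.

    import xml.etree.ElementTree as ET

    sample = """
    <scene entry_point="5400" duration="1800" angle_num="3">
        <type>battle</type>
        <character>Neo</character>
        <actor>Keanu Reeves</actor>
        <authordef>rooftop chase</authordef>
    </scene>
    """

    scene = ET.fromstring(sample)
    print(scene.get("entry_point"))     # '5400' -- start position of the scene
    print(scene.get("duration"))        # '1800' -- optional reproduction time
    print(scene.get("angle_num"))       # '3'    -- optional, multi-angle titles only
    print(scene.findtext("character"))  # 'Neo'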
[64] An example of conducting the title scene search using the metadata 500 will now be
described.
[65] Specifically, FIG. 10 illustrates the application scope of a title which provides the
enhanced search function according to an embodiment of the present invention. As previously shown in FIG. 5, a storage medium 250, such as a Blu-ray disc (BD), may store a movie title for reproducing a moving picture such as a movie, and an interactive title including programs for providing interactive functions to users. The metadata 500
for the title scene search provides the enhanced search function while a moving picture is being reproduced. Thus, the metadata 500 is used only for movie titles. The type of title can be identified by a 'Title_playback_type' field. If the 'Title_playback_type' field of a title is 0b, the title is a movie title. If the 'Title_playback_type' field of a title is 1b, the title is an interactive title. Therefore, the title scene search according to an embodiment of the present invention can be conducted only when the 'Title_playback_type' field is 0b.
[66] Referring to FIG. 10, when a storage medium 250, such as a Blu-ray disc (BD), is
loaded into an example reproducing apparatus 200, as shown in FIG. 2, title #1 is accessed using an index table. When a navigation command 'Play playlist#1' included in movie object #1 of title #1 is executed, playlist #1 is reproduced. As shown in FIG. 10, playlist #1 is composed of at least one play item. An author may arbitrarily define a chapter or a scene, regardless of a play item.
[67] A playlist which is automatically reproduced according to the index table when a
storage medium 250 is loaded into an example reproducing apparatus 200, shown in FIG. 2, is called a main playback path playlist, and a playlist which is reproduced by another movie object that a user calls using a button object while the main playback path playlist is being reproduced is called a side playback path playlist. The side playback path playlist is not within the scope of a chapter or a scene defined by an author. Therefore, according to an embodiment of the present invention, the title scene search function is enabled for the main playback path playlist and disabled for the side playback path playlist.
[68] In summary, the application scope of the title that provides the enhanced search
function has the following constraints.
[69]
1. The title scene search is applied to movie titles.
2. Metadata for the title scene search is defined in units of playlists. Since a movie title may include one or more playlists, one or more metadata files may be defined for one title.
3. The title scene search is applied to the main playback path playlist, but not to the side playback path playlist.
[70] FIG. 11 illustrates an application of metadata 500 according to an embodiment of
the present invention. Referring to FIG. 11, scenes used in the metadata 500 are defined. The scenes are basic units used in the metadata 500 for the title scene search and basic units of contents included in a playlist. An author may designate entry points in a playlist on a global time axis. Content between two neighboring entry points is a scene.
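Under this definition, scene boundaries follow mechanically from the entry points: each scene runs from one entry point to the next, and the last scene runs to the end of the playlist. A minimal sketch, with times in arbitrary units:

    def scenes_from_entry_points(entry_points, playlist_end):
        """Pair each entry point with the next to form (start, end) scenes."""
        eps = sorted(entry_points)
        return list(zip(eps, eps[1:] + [playlist_end]))

    print(scenes_from_entry_points([0, 1800, 5400], 9000))
    # [(0, 1800), (1800, 5400), (5400, 9000)]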
[71] When a user searches for contents using a search keyword, the search results are
represented as a group of entry points included in scenes having metadata whose search keyword information matches the search keyword. Such entry points are sequentially arranged temporally and transmitted to the playback control engine, i.e., the reproducing unit 220 shown in FIG. 2. The playback control engine can search for a plurality of scenes associated with identical search keywords and reproduce the scenes.
[72] Referring to FIG. 11, entry points for each search keyword are expressed as circles.
For example, when a user selects scenetype #1 as a search keyword, the search results include scene #1, scene #3, and scene #n. Then, the user may select some of scene #1, scene #3, and scene #n for reproduction. In addition, the user may navigate to and reproduce previous or next search results using a user operation (UO) such as 'Skip to next scene()' or 'Skip to previous scene()' through the user interface 240, shown in FIG. 2.
[73] FIG. 12 illustrates an application of metadata 500 according to another
embodiment of the present invention. Referring to FIG. 12, scenes are defined using duration in addition to entry points described above. An interval between an entry point and a point at the end of the duration is defined as a scene. When a user selects a scene, search results can be reproduced according to three scenarios.
[74] Scenario 1: simple playback
[75] Regardless of duration, a playlist is reproduced from an entry point of a scene
selected by a user from search results to the end of the playlist unless there is a user input. For example, when a user selects scenetype #1, playlist #1 is reproduced from an entry point of scene #1 to the end of playlist #1.
[76] Scenario 2: highlight playback
[77] A playlist is reproduced from an entry point of a scene selected by a user from the
search results until the end of the duration of the selected scene. Then, the reproducing unit 220 jumps to a next scene and reproduces the next scene. For example, when a user selects scenetype #2, only scene #1 and scene #3, which are the search results, are reproduced. In other words, only the highlights of playlist #1 which are associated with the search keyword scenetype #2 are reproduced. Another example of the highlight playback is illustrated in FIG. 13. Referring to FIG. 13, search results are sequentially reproduced. Therefore, there is no need to stop and wait for a user input after a found scene is reproduced. In other words, after one of a plurality of search results for actor 'a' is reproduced, the next search result is subsequently reproduced. In this way, only the highlights of actor 'a' are reproduced. For the highlight playback, each search result is expressed using an entry point and a duration. The search results can be linked and sequentially reproduced using the entry points and the duration information.
[78] Scenario 3: scene-based playback

[79] Search results are reproduced by scene. In other words, a scene selected by a user
from search results is reproduced from an entry point of the scene for the duration of the scene. After the duration, reproduction is stopped until a user input is received. Scenario 3 is similar to scenario 2 except that the reproduction is stopped at the end of the scene.
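The three scenarios differ only in how far playback continues past the selected entry point and whether it jumps on to the next found scene. The following sketch is one way to express that control flow; the play(start, end) primitive and the data shapes are assumptions, not a disclosed implementation.

    def reproduce(results, selected, scenario, playlist_end, play):
        """results: (entry_point, duration) pairs sorted by entry point.
        selected: index of the scene the user picked.
        play(start, end) is assumed to reproduce the interval [start, end)."""
        if scenario == 1:    # simple playback: continue to the end of the playlist
            start, _ = results[selected]
            play(start, playlist_end)
        elif scenario == 2:  # highlight playback: chain the found scenes
            for start, duration in results[selected:]:
                play(start, start + duration)
        elif scenario == 3:  # scene-based playback: one scene, then stop
            start, duration = results[selected]
            play(start, start + duration)

    # Two scenes matching scenetype #2, reproduced as highlights (scenario 2):
    results = [(0, 1800), (5400, 1800)]
    reproduce(results, 0, 2, 9000, lambda s, e: print(f"play {s}..{e}"))
    # play 0..1800
    # play 5400..7200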
[80] FIG. 14 illustrates an example multi-angle title that provides the enhanced search
function using the metadata 500 according to an embodiment of the present invention. Referring to FIG. 14, an example of a multi-path title composed of multiple angles is illustrated. The multi-path title is composed of five (5) play items. Of the five play items, a second (2nd) play item is composed of three (3) angles, and a fourth (4th) play item is composed of four (4) angles. In a playlist that supports multiple angles, scene #1 and scene #2 matching the search keyword scenetype #1, and scene #3 and scene #4 matching the search keyword scenetype #2, are found. Each scene is defined by an entry point and a duration.
[81] Found scenes can overlap each other because overlapping entry points can be
distinguished by 'angle_num' shown in FIG. 9. However, when entry points do not overlap each other, scenes found as a result of the enhanced search cannot overlap each other. When a user desires to reproduce search results according to scenario 2, the reproducing apparatus sequentially reproduces the scenes along the dotted arrow in FIG. 14.
[82] Referring to FIG. 14, scenes which cover a portion of a play item or a plurality of
play items are illustrated. For each scene, the metadata 500 of the corresponding AV data is defined.
[83] In the case of play items which support multiple angles (for example, the second
and fourth play items), the metadata 500 is applied to the AV data corresponding to one of the supported multiple angles. For example, in the case of scene #1, parts of the first and second play items are defined as a reproduction section, and the value of angle_num is three. The value of angle_num is applied only to play items that support multiple angles. Therefore, play items that do not support multiple angles are reproduced at a default angle. The default angle is designated in player status register (PSR) 3, which is a state register of the reproducing apparatus 200, as shown, for example, in FIG. 2. Accordingly, when scene #1 is reproduced, play item #1, which does not support multiple angles, is reproduced at the default angle, and play item #2, which supports multiple angles, is reproduced at angle 3 according to the value designated as the attribute of angle_num. In this case, the search keywords defined for scene #1 for the title scene search are applied to angle 3 of play item #2, which supports multiple angles. As described above, when metadata 500 including angle_num is used, a title which supports multiple angles can also provide various enhanced search functions according to a designated search keyword.
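The angle rule above reduces to a simple selection: angle_num applies only to play items that themselves support multiple angles, and every other play item falls back to the default angle held in PSR 3. A sketch under those assumptions:

    def angle_for_play_item(num_angles, scene_angle_num, default_angle):
        """num_angles: angles the play item supports (1 = single angle).
        scene_angle_num: the scene's angle_num attribute, or None if absent.
        default_angle: the value held in player status register (PSR) 3."""
        if num_angles > 1 and scene_angle_num is not None:
            return scene_angle_num    # multi-angle play item: honour the metadata
        return default_angle          # single-angle play item: default from PSR 3

    # Scene #1 of FIG. 14: play item #1 is single-angle, play item #2 has 3 angles.
    print(angle_for_play_item(1, 3, 1))    # 1 -- default angle for play item #1
    print(angle_for_play_item(3, 3, 1))    # 3 -- angle_num applied to play item #2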

[84] FIG. 15 illustrates a reproducing process of an example reproducing apparatus
according to an embodiment of the present invention. Referring to FIG. 15, the reproducing apparatus 200, shown in FIG. 2, provides the title scene search function while reproducing a movie title. When a storage medium 250, such as a Blu-ray disc (BD), is loaded into the reproducing apparatus 200 and the reproduction of a movie title starts (operation 1510), the title scene search function is activated to be in a valid state (operation 1520). As described above with reference to FIG. 14, when a movie title that supports multiple angles is reproduced, the title scene search can be conducted by changing an angle (operation 1530). In addition, if a multi-path playlist is supported (operation 1522), when a playlist is changed to a main playback path playlist, the title scene search function is activated to be in the valid state (operation 1534). However, when the playlist is changed to a side playback path playlist, the title scene search function becomes invalid (operation 1532). Further, when a title is changed to an interactive title, not a movie title, the title scene search function becomes invalid (operation 1538).
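The validity transitions of FIG. 15 reduce to two conditions: the current title must be a movie title, and the current playlist must be on the main playback path. A minimal sketch of that predicate (the parameter names are ours):

    def title_scene_search_valid(title_playback_type: int, on_main_path: bool) -> bool:
        """title_playback_type: 0b for a movie title, 1b for an interactive title."""
        MOVIE_TITLE = 0b0
        return title_playback_type == MOVIE_TITLE and on_main_path

    print(title_scene_search_valid(0b0, True))     # True: movie title, main path
    print(title_scene_search_valid(0b0, False))    # False: side playback path playlist
    print(title_scene_search_valid(0b1, True))     # False: interactive title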
[85] Example embodiments of the enhanced search method according to the present
invention can be written as a computer program and can also be implemented in a general digital computer that executes the computer program recorded on a computer-readable medium. Codes and code segments constructing the computer program can be easily construed by computer programmers skilled in the art. The computer-readable medium can be any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer-readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, optical data storage devices, and carrier waves (such as data transmission through the Internet). The computer-readable medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.
[86] While the present invention has been particularly shown and described with
reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention. For example, any computer-readable media or data storage devices may be utilized, as long as metadata is included in the playlist in the manner shown in FIG. 5 through FIG. 15. In addition, the metadata can also be configured differently from that shown in FIG. 5. Moreover, a reproducing apparatus as shown in FIG. 2 can be implemented as part of a recording apparatus, or alternatively as a single apparatus for performing recording and/or reproducing functions with respect to a storage medium. Similarly, the CPU can be implemented as a chipset having firmware, or alternatively, as a general- or special-purpose
computer programmed to perform the methods as described, for example, with reference to FIG. 3 and FIGS. 10-15. Accordingly, it is intended, therefore, that the present invention not be limited to the various example embodiments disclosed, but that the present invention includes all embodiments falling within the scope of the appended claims.















We claim:
1. A storage medium comprising:
audio-visual (AV) data; and
metadata for conducting an enhanced search of the AV data by scene using information regarding at least one search keyword.
2. The storage medium as claimed in claim 1, wherein the AV data is a movie title.
3. The storage medium as claimed in claim 1, wherein the metadata is defined for each playlist which is a reproduction unit of the AV data.
4. The storage medium as claimed in claim 1, wherein the enhanced search is applied to a main playback path playlist which is automatically reproduced according to an index table when the storage medium is loaded into a reproducing apparatus.
5. The storage medium as claimed in claim 1, wherein the metadata comprises information regarding an entry point of each scene.
6. The storage medium as claimed in claim 5, wherein each scene is represented as content between two neighboring entry points.
7. The storage medium as claimed in claim 6, wherein, when a user searches for contents using a search keyword, search results are represented as a group of entry points corresponding to metadata whose search keyword information matches the search keyword.
8. The storage medium as claimed in claim 7, wherein the entry points are sequentially arranged temporally on a playlist.
9. The storage medium as claimed in claim 1, wherein the metadata comprises information regarding an entry point and a duration of each scene.
10. The storage medium as claimed in claim 9, wherein, when the entry points are sequentially arranged temporally, each scene is defined as a section between an entry point of the scene and a point at the end of the duration of the scene.
11. The storage medium as claimed in claim 9, wherein, when a user searches for contents using the search keyword, a playlist is reproduced from an entry point of a scene selected from search results by the user to the end of the playlist.
12. The storage medium as claimed in claim 9, wherein, when a user searches for contents using the search keyword, a scene selected by the user from search results is reproduced from the entry point of the scene for the duration of the scene, and a next scene is reproduced.
13. The storage medium as claimed in claim 9, wherein, when a user searches for contents using the search keyword, search results are sequentially reproduced without waiting for a user input.
14. The storage medium as claimed in claim 9, wherein, when a user searches for contents using the search keyword, a scene selected by the user from search results is reproduced from the entry point of the scene for the duration of the scene, and reproduction is stopped.







15. The storage medium as claimed in claim 5, wherein the metadata further comprises information regarding angles supported by each scene.
16. The storage medium as claimed in claim 15, wherein, when the AV data is represented by a single angle, each scene is distinguished by the entry point of each scene, and not by the information regarding the angles.
17. The storage medium as claimed in claim 16, wherein no entry points found as a result of conducting the enhanced search using one search keyword overlap each other.
18. The storage medium as claimed in claim 15, wherein, when the AV data is multi-angle data, each scene can be distinguished by the entry point of the scene and the information regarding the angles.
19. The storage medium as claimed in claim 18, wherein at least one of the entry points found as a result of conducting the enhanced search using one search keyword can overlap each other.
20. The storage medium as claimed in claim 1, wherein the at least one search keyword comprises at least one of a scene type, a character, an actor, and a search keyword that can be arbitrarily defined by an author.
21. The storage medium as claimed in claim 1, wherein the metadata is recorded in a file separately from the AV data.
22. A storage medium formed with multiple layers to manage a data structure of audio- visual (AV) data recorded thereon, comprising:
one or more playlists that are reproducing units of AV data; and
metadata created for each playlist for providing an enhanced search function on the AV data;
wherein the metadata is defined scene by scene, and comprises information regarding at least one search keyword to be applied to a corresponding scene, such that when a search keyword is input by a user, AV data corresponding to one or more scenes matching the search keyword is reproduced.
23. The storage medium as claimed in claim 22, wherein the metadata further comprises information regarding a start location and a reproduction duration time of the corresponding scene.
24. The storage medium as claimed in claim 22, wherein the search keyword includes at least one of a search keyword regarding a scene type, a search keyword regarding one or more characters appearing in the corresponding scene, a search keyword regarding one or more actors/actresses playing the corresponding characters, and a search keyword regarding a search criterion which a producer defines.
25. A method for enabling a user to conduct enhanced search of a storage medium, said method comprising the steps of:

(a) storing audio-visual (AV) data on the storage medium; and
(b) storing metadata on the storage medium for conducting an enhanced search of the AV data by scene using information regarding at least one search keyword.
26. The method as claimed in claim 25, wherein the AV data is a movie title.






27. The method as claimed in claim 25, wherein the metadata is defined for each playlist which is a reproduction unit of the AV data.
28. The method as claimed in claim 25, wherein the enhanced search is applied to a main playback path playlist which is automatically reproduced according to an index table when the storage medium is loaded into a reproducing apparatus.
29. The method as claimed in claim 25, wherein the metadata comprises information regarding an entry point of each scene.
30. The method as claimed in claim 29, wherein each scene is represented as content between two neighboring entry points.
31. The method as claimed in claim 30, wherein, when a user searches for contents using a search keyword, search results are represented as a group of entry points corresponding to metadata whose search keyword information matches the search keyword.
32. The method as claimed in claim 31, wherein the entry points are sequentially arranged temporally on a playlist.
33. The method as claimed in claim 25, wherein the metadata comprises information regarding an entry point and a duration of each scene.
34. The method as claimed in claim 33, wherein, when the entry points are sequentially arranged temporally, each scene is defined as a section between an entry point of the scene and a point at the end of the duration of the scene.
35. The method as claimed in claim 33, wherein, when a user searches for contents using the search keyword, a playlist is reproduced from an entry point of a scene selected from search results by the user to the end of the playlist.
36. The method as claimed in claim 33, wherein, when a user searches for contents using the search keyword, a scene selected by the user from search results is reproduced from the entry point of the scene for the duration of the scene, and a next scene is reproduced.
37. The method as claimed in claim 33, wherein, when a user searches for contents using the search keyword, search results are sequentially reproduced without waiting for a user input.
38. The method as claimed in claim 33, wherein, when a user searches for contents using the search keyword, a scene selected by the user from search results is reproduced from the entry point of the scene for the duration of the scene, and reproduction is stopped.
39. The method as claimed in claim 29, wherein the metadata further comprises information regarding angles supported by each scene.
40. The method as claimed in claim 39, wherein, when the AV data is represented by a single angle, each scene is distinguished by the entry point of each scene, and not by the information regarding the angles.
41. The method as claimed in claim 40, wherein no entry points found as a result of conducting the enhanced search using one search keyword overlap each other.

42. The method as claimed in claim 39, wherein, when the AV data is multi-angle data, each
scene can be distinguished by the entry point of the scene and the information regarding
the angles.
43. The method as claimed in claim 42, wherein at least one of the entry points found as a result of conducting the enhanced search using one search keyword can overlap each other.
44. The method as claimed in claim 25, wherein the at least one search keyword comprises at least one of a scene type, a character, an actor, and a search keyword that can be arbitrarily defined by an author.
45. The method as claimed in claim 25, wherein the metadata is recorded in a file separately
from the AV data.
46. A method for enabling a user to conduct enhanced search of a storage medium formed
with multiple layers to manage a data structure of audio-visual (AV) data recorded
thereon, said method comprising the steps of:
(a) recording one or more playlists that are reproducing units of AV data; and
(b) recording metadata created for each playlist for providing an enhanced search function on the AV data;
wherein the metadata is defined scene by scene, and comprises information regarding at least one search keyword to be applied to a corresponding scene, such that when a search keyword is input by a user, AV data corresponding to one or more scenes matching the search keyword is reproduced.
47. The method as claimed in claim 46, wherein the metadata further comprises information regarding a start location and a reproduction duration time of the corresponding scene.
48. The method as claimed in claim 46, wherein the search keyword comprises at least one of a search keyword regarding a scene type, a search keyword regarding one or more characters appearing in the corresponding scene, a search keyword regarding one or more actors / actresses playing the corresponding characters, and a search keyword regarding a search criterion which a producer defines.
49. A method for enabling a user to conduct enhanced search of a storage medium, and a storage medium comprising audio-visual (AV) data and metadata for conducting an enhanced search of the AV data by scene using information regarding at least one search keyword, substantially as herein described with reference to the accompanying drawings and as illustrated in the foregoing examples.



Patent Number 252056
Indian Patent Application Number 3348/CHENP/2007
PG Journal Number 17/2012
Publication Date 27-Apr-2012
Grant Date 23-Apr-2012
Date of Filing 31-Jul-2007
Name of Patentee SAMSUNG ELECTRONICS CO., LTD
Applicant Address 416, MAETAN-DONG, YEONTONG-GU, SUWON-SI, GYEONGGI-DO 442-742
Inventors:
# Inventor's Name Inventor's Address
1 CHUN, HYE-JEONG 101-301 SANHO APT., MABUK-DONG, GIHEUNG-GU, YONGIN-SI, GYEONGGI-DO
2 PARK, SUNG-WOOK 4-1103 MAPO HYUNDAI APT.,GONGDEOK 2-DONG, MAPO-GU, SEOUL, REPUBLIC OF KOREA
PCT International Classification Number G11B 20/10
PCT International Application Number PCT/KR2006/000050
PCT International Filing date 2006-01-06
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 10-2005-0108532 2005-11-14 Republic of Korea
2 10-2005-0001749 2005-01-07 Republic of Korea