Title of Invention

INFORMATION STORAGE MEDIUM, REPRODUCING APPARATUS AND METHOD THEREOF

Abstract: An information storage medium containing subtitles and a subtitle processing apparatus, where the information storage medium includes: audio-visual (AV) data; and subtitle data in which at least one subtitle text data and output style information designating an output form of the subtitle text are stored in a text format. With this, subtitle texts included in the text subtitle data can be output with overlapping times, a subtitle file can be easily produced, and subtitles for an AV stream can be output in various forms.
Full Text

ABSTRACT
[Abstract of the Disclosure]
Provided are an information storage medium containing a subtitle and a subtitle processing apparatus. The information storage medium includes: AV data; and subtitle data in which at least one subtitle text data and output style information for designating an output form of the subtitle texts are stored in a text format. With this, a subtitle file can be easily produced, and subtitles for an AV stream can be output in various forms.
[Representative Drawing] FIG. 3

SPECIFICATION
[Title of the Invention]
INFORMATION STORAGE MEDIUM CONTAINING SUBTITLES AND PROCESSING APPARATUS THEREFOR
[Brief Description of the Drawings]
FIG. 1 illustrates a structure of a text subtitle file;
FIG. 2 is a block diagram of an apparatus for reproducing an information storage medium on which a text subtitle is recorded;
FIG. 3 is a detailed block diagram of a text subtitle processing unit of FIG. 2;
FIG. 4 illustrates a Subtitling_segment structure in which subtitle control information transmitted to a composition buffer is recorded;
FIG. 5 illustrates the correlation between structures recording PCS, RCS, ODS, and CLUT information;
FIGS. 6A through 6C are diagrams for illustrating a process of generating an image for a plurality of subtitles using one composition information data and one position information data;
FIGS. 7A through 7C are diagrams for illustrating a process of generating an image for a plurality of subtitles using one composition information data and a plurality of position information data; and
FIGS. 8A through 8C are diagrams for illustrating a process of generating an image so that one image object is included in one composition information data by allocating a plurality of composition information data for a plurality of subtitles.
[Detailed Description of the Invention]
[Object of the Invention]
[Technical Field of the Invention and Related Art prior to the Invention]
The present invention relates to an information storage medium, and more particularly, to an information storage medium containing a plurality of subtitles that can be separately displayed and a processing apparatus therefor.

A conventional subtitle is a bitmap image that is included in an AV stream. Therefore, it is inconvenient to produce such a subtitle, and there is no choice but to merely read the subtitle in its present form since a user cannot select various attributes of the subtitle defined by a subtitle producer. That is, since the attributes, such as font, character size, and character color, are predetermined and included in the AV stream as a bitmap image, the user cannot change the attributes at will.
Also, since the subtitle is compressed and encoded in the AV stream, an output start time and an output end time of the subtitle are clearly designated to correspond to the AV stream, and times when subtitles are output should not overlap. That is, only one subtitle has to be output at a certain time.
However, since an output start time and an output end time of a subtitle are designated by a subtitle producer and recorded on an information storage medium separately from an AV stream, the output start times and output end times of a plurality of subtitles may overlap one another. In other words, since two or more subtitles may be output in a certain time period, a method of solving this problem is necessary.
[Technical Goal of the Invention]
The present invention provides an information storage medium having recorded thereon a plurality of text subtitles that can be separately displayed although they overlap one another, and an apparatus for reproducing the information storage medium.
[Structure and Operation of the Invention]
According to an aspect of the present invention, there is provided an information storage medium comprising: AV data; and subtitle data in which at least one subtitle text data and output style information for designating an output form of the subtitle texts are stored in a text format.
The output style information may contain pieces of information so that the output style information may be differently applied to the subtitle texts.
When a plurality of subtitle data exist, the plurality of subtitle data may be separately rendered, and rendered images may compose a plurality of pages, respectively.
According to another aspect of the present invention, there is provided a text subtitle processing apparatus comprising: a text subtitle parser separately extracting
rendering information used for rendering a text from text subtitle data and control
information used for presenting the rendered text; and a text layout/font renderer
generating a bitmap image of a subtitle text by rendering the subtitle text according to
the extracted rendering information.
The text layout/font renderer may render at least one subtitle text data by
applying different styles to the subtitle text data and compose a plurality of pages with a plurality of rendered images.
Hereinafter, the present invention will be described more fully with reference to the accompanying drawings, in which an embodiment of the invention is shown.
FIG. 1 illustrates a structure of a text subtitle file 100.
Referring to FIG. 1, the text subtitle file 100 includes dialog information 110, presentation information 120, and meta data 130a and 130b.
The dialog information 110 includes subtitle texts, output start times of the subtitle texts, output end times of the subtitle texts, style groups or style information used for text rendering, text change effect information such as fade-in and fade-out, and a formatting code of the subtitle texts. The formatting code includes a code for displaying a text with bold characters, a code for displaying the text in italics, a code for indicating underlining, and a code for indicating a line change.
The presentation information 120 includes style information used for rendering the subtitle texts and is composed of a plurality of style groups. A style group is a bundle of styles on which the style information is recorded. A style includes information used for rendering and displaying a subtitle text. This information includes, for example, a style name, a font, a text color, a background color, a text size, a line-height, a text output region, a text output start position, an output direction, and an align method.
The meta data 130a and 130b, which are additional information of a moving picture, include information required for performing additional functions other than a subtitle output function.
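For illustration only, the parsed content of such a text subtitle file can be sketched as a small data structure. Everything below is a hypothetical representation chosen to mirror FIG. 1 and the style fields listed above; the specification does not define this layout.

# Hypothetical in-memory form of the text subtitle file 100 of FIG. 1.
# All field names and values are illustrative, not a normative format.
text_subtitle_file = {
    "presentation": {                       # style groups used for rendering
        "Script": {
            "font": "Arial.ttf",
            "text_color": "black",
            "background_color": "white",
            "text_size": "16pt",
            "line_height": "40px",
            "output_region": ("left", "top", "width", "height"),
            "start_position": ("x", "y"),
            "output_direction": "left-to-right-top-to-bottom",
            "align": "center",
        },
    },
    "dialog": [                             # texts with output times; formatting
        {"start": "00:10:00", "end": "00:15:00",   # codes such as <b>, <i>, <br>
         "style": "Script", "text": "Hello"},      # may be embedded in "text"
    ],
    "meta": {},                             # additional, non-subtitle functions
}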
FIG. 2 is a block diagram of an apparatus for reproducing an information storage medium on which a text subtitle is recorded.
Referring to FIG. 2, a text subtitle processing unit 220 renders a text in order to process a text subtitle. The text subtitle processing unit 220 includes a text subtitle parser 221, which extracts presentation information and dialog information from the
subtitle, and a text layout/font renderer 222, which generates an output image by rendering the text according to the extracted presentation information.
The text subtitle may be recorded on an information storage medium or a memory included in a reproducing apparatus. In FIG. 2, the information storage medium or the memory on which the text subtitle is recorded is called a subtitle information storage unit 200.
A text subtitle file produced to correspond to a moving picture being reproduced, and font data to be used for rendering the subtitle, are read from the subtitle information storage unit 200 and stored in a buffer 210. The subtitle file stored in the buffer 210 is
transmitted to a text subtitle parser 221, and information required for rendering the subtitle is parsed by the text subtitle parser 221. A subtitle text, font information, and rendering style information are transmitted to the text layout/font renderer 222, and control information of the text subtitle is transmitted to a composition buffer 233 of a presentation engine 230. The control information, i.e., information for displaying a
screen with the subtitle, includes an output region and an output start position.
The text layout/font renderer 222 generates a bitmap image by rendering the text subtitle using text rendering information transmitted from the text subtitle parser 221 and the font data transmitted from the buffer 210, composes one subtitle page by designating an output start time and an output end time of each subtitle text, and
transmits the bitmap image and the subtitle page to an object buffer 234 of the presentation engine 230.
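A minimal sketch of this split may help; the class and field names below are hypothetical (the specification defines no API), but the flow mirrors the description: the parser separates rendering information, which goes to the renderer, from control information, which goes to the presentation engine.

from dataclasses import dataclass

@dataclass
class RenderingInfo:
    # subtitle text plus the font and style data the renderer needs
    text: str
    style: dict
    font: str

@dataclass
class ControlInfo:
    # output region, start position, and output times for the engine
    region: tuple
    start_position: tuple
    start_time: str
    end_time: str

class TextSubtitleParser:
    """Illustrative parser: splits one dialog entry into the two streams
    described above (hypothetical names, not from the specification)."""
    def parse(self, entry, styles):
        style = styles[entry["style"]]
        rendering = RenderingInfo(entry["text"], style, style["font"])
        control = ControlInfo(style["output_region"], style["start_position"],
                              entry["start"], entry["end"])
        return rendering, control  # rendering -> renderer, control -> engine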
The subtitle of the bitmap image form read from the subtitle information storage unit 200 is input to a coded data buffer 231 and processed by a graphic processing unit 232. Accordingly, a bitmap image is generated by the graphic processing unit 232.
The generated bitmap image is transmitted to the object buffer 234, and control information of the bitmap image is transmitted to the composition buffer 233. The control information is used for designating a time and a position at which the bitmap image stored in the object buffer 234 is output to a graphic plane 240 and designating a color lookup table (CLUT) 250 in which color information to be applied to the bitmap
image output to the graphic plane 240 is recorded. The composition buffer 233 receives the PCS transmitted from the text subtitle parser 221 and the bitmap subtitle data processed by the graphic processing unit 232, and transmits control information for outputting the subtitle onto a screen to a graphic controller 235. The graphic controller
235 controls the object buffer 234 and the graphic plane 240 to combine the bitmap subtitle data processed by the graphic processing unit 232 and the rendered text subtitle object data received from the text layout/font renderer 222, generates the graphic plane from the combined data, and outputs the graphic plane to a display unit (not shown) with reference to the CLUT 250.
FIG. 3 is a detailed block diagram of the text subtitle processing unit 220 of FIG. 2.
Referring to FIG. 3, a subtitle, which is text subtitle file information, is input to the text subtitle parser 221. The text subtitle parser 221 transmits subtitle control
information to the presentation engine 230 and text rendering information to the text layout/font renderer 222 by parsing the input subtitle. The text layout/font renderer 222 receives the text rendering information from the text subtitle parser 221 and stores control information of a subtitle text in an element control data buffer 290, subtitle data in a text data buffer 291, and style information used for rendering in a style data buffer
292. Also, the text layout/font renderer 222 stores font data used for text rendering in a font data buffer 293.
The control information stored in the element control data buffer 290 may be a formatting code. The formatting code includes a code for displaying a text with bold characters, a code for displaying the text in italics, a code for indicating underlining, and
a code for indicating a line change. The text data stored in the text data buffer 291 is text data to be output as a subtitle. The style data stored in the style data buffer 292 may be data such as a font, a text color, a background color, a text size, a line-height, a text output region, a text output start position, an output direction, and an align method. A text renderer 294 generates a subtitle image with reference to information recorded in
each buffer and transmits the subtitle image to the presentation engine 230.
FIG. 4 illustrates a Subtitling_segment structure in which subtitle control information transmitted to the composition buffer 233 is recorded.
Information required to form subtitles is stored in the form of the Subtitling_segment illustrated in FIG. 4. The Subtitling_segment structure contains
information required to output bitmap subtitles. For example, the Subtitling_segment structure contains information such as a page composition segment (PCS) used to compose a page on a screen which includes subtitles, a region composition segment (RCS), which is information regarding a region in which subtitles exist, an object data
segment (ODS), which is information regarding object data formed as bitmap images for subtitles, and a CLUT_definition_segment structure in which color code information regarding object data and backgrounds is recorded.
Referring to FIG. 4, the Subtitling_segment structure includes sync_byte, segment_type, page_id, segment_length, and segment_data_field(). The content of the Subtitling_segment structure changes according to the value of segment_type. When the value of segment_type is 0x10, the Subtitling_segment structure is a page_composition_segment structure; therefore, the PCS information is recorded in segment_data_field(). When the value of segment_type is 0x11, the Subtitling_segment structure is the region_composition_segment structure; therefore, the RCS information is recorded in segment_data_field(). In addition, when the value of segment_type is 0x12, the Subtitling_segment structure is the CLUT_definition_segment; therefore, the CLUT information is stored in segment_data_field(). When the value of segment_type is 0x13, the Subtitling_segment structure is an object_data_segment; therefore, the ODS information is recorded in segment_data_field().
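The segment_type dispatch described above can be sketched as follows. The type values (0x10 through 0x13) are those stated in the text; the function itself and its return shape are illustrative placeholders, since the specification only defines the field layout.

# Illustrative dispatcher over the Subtitling_segment structure of FIG. 4.
SEGMENT_TYPES = {
    0x10: "page_composition_segment",    # PCS
    0x11: "region_composition_segment",  # RCS
    0x12: "CLUT_definition_segment",     # CLUT
    0x13: "object_data_segment",         # ODS
}

def parse_subtitling_segment(sync_byte, segment_type, page_id,
                             segment_length, segment_data_field):
    kind = SEGMENT_TYPES.get(segment_type)
    if kind is None:
        raise ValueError(f"unknown segment_type {segment_type:#04x}")
    # Real parsing of segment_data_field() is omitted here.
    return {"type": kind, "page_id": page_id,
            "length": segment_length, "data": segment_data_field}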
FIG. 5 illustrates the correlation between structures recording the PCS, RCS, ODS, and CLUT information.
A subtitle displayed on a screen is composed for each page. Each page may include data used for other purposes in addition to a subtitle. A PCS records information used to compose a page. Specifically, a PCS includes region_id indicating a region, which is region information retained by the PCS, position information of the region corresponding to each region_id, page_time_out information, which is information regarding the time taken for the PCS to disappear from the screen, and other information required to compose a page.
Referring to FIG. 5, a page can include at least one region whose image is displayed on the screen. Such regions are distinguished by region_id. The RCS is a structure which records information required to compose such regions. The RCS includes information regarding the width and height of each region, object_id indicating an object included in each region, position information of each object, and color information used for each object.
The ODS records object data. In other words, the ODS records data on an object to be output to at least one RCS included in one PCS. The ODS includes data
type information of an object and object data. When the value of the data type of an object is 0x00, image data of the object is recorded in a pixel_data_sub_block structure. When the value of the data type of the object is 0x01, text character string data is recorded in a character_code field.
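In the same illustrative style, the ODS data-type branch might look like this; 0x00 and 0x01 are the values stated above, while the payload handling is a placeholder.

def parse_object_data_segment(data_type, payload):
    """Sketch of ODS handling: bitmap pixel data vs. text character data."""
    if data_type == 0x00:
        return {"pixel_data_sub_block": payload}  # image data of the object
    if data_type == 0x01:
        return {"character_code": payload}        # text character string data
    raise ValueError(f"unsupported object data type {data_type:#04x}")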
The text subtitle processing unit generates the PCS, RCS, ODS, and CLUT
information regarding each rendered subtitle image which will be output on the screen to provide text-based subtitles. The PCS, RCS, ODS, and CLUT information thus generated are transmitted to the composition buffer of the presentation engine.
As described above, when an information storage medium containing subtitles generated in a text form is reproduced, various methods of outputting more than one subtitle at the same time exist.
In a first method, the text subtitle processing unit 220 generates a new image for a plurality of subtitles whose text output times overlap, and transmits to the presentation engine 230 one PCS containing the subtitle composed of the generated objects.
There is a second method of composing the subtitles so that the subtitles whose text output times overlap have different position information. That is, the text subtitle processing unit 220 generates an image of the plurality of subtitles whose text output times overlap using different position information data in one PCS and transmits the generated image to the presentation engine 230.
There is a third method of generating subtitles whose text output times overlap using different PCS. That is, the text subtitle processing unit 220 generates a different PCS for each of a plurality of subtitles whose text output times overlap, so that only one object is included in one PCS. Each of the methods will now be described in more detail with reference to the following drawings.
FIGS. 6A through 6C are diagrams for illustrating a process of generating an image for a plurality of subtitles using one PCS and one region.
In FIG. 6A, a style "Script" is defined as style information used for subtitle text rendering. Referring to FIG. 6A, the style "Script" uses a font "Arial.ttf", a text color "black", a background color "white", a character size "16pt", a text reference position of coordinates (x, y), an align method "center", an output direction "left-to-right-top-to-bottom", a text output region "left, top, width, height", and a line-height "40px".
In FIG. 6B, subtitle texts 610, 620, and 630 rendered using the style "Script" are defined. Referring to FIG. 6B, the subtitle text Hello 610 is output from "00:10:00" to "00:15:00", the subtitle text Subtitle 620 is output from "00:12:00" to "00:17:00", and the subtitle text World 630 is output from "00:14:00" to "00:19:00". Therefore, two or three subtitle texts are output between "00:12:00" and "00:17:00". Here, "<br>" indicates a line change. Use of the "<br>" tag can prevent a plurality of subtitles from being overlapped on one region even though one style is used.
FIG. 6C shows a result of outputting the subtitles defined in FIGS. 6A and 6B. Referring to FIG. 6C, data stored in each buffer of the text subtitle processing unit 220 in each time zone will be described in detail.
(1) Before "00:10:00": output a PCS including a void subtitle image
Element control data buffer: void
Text data buffer: void
Style data buffer: style information of "Script"
Font data buffer: font information of "Arial.ttf"
© From "00:10:00" to "00:12:00": output a PCS including an image in which the subtitle text Hello 610 is rendered
Element control data buffer: control information of the subtitle text Hello 610
Text data buffer: "Hello"
Style data buffer: style information of "Script"
Font data buffer: font information of "Arial.ttf"
(3) From "00:12:00" to "00:14:00": output a PCS including an image in which the subtitle text Hello 610 and the subtitle text Subtitle 620 are rendered
Element control data buffer: control information of the subtitle text Hello 610 and the subtitle text Subtitle 620
Text data buffer: "Hello" and "
Subtitle"
Style data buffer: style information of "Script"
Font data buffer: font information of "Arial.ttf"
© From "00:14:00" to "00:15:00": output a PCS including an image in which the 30 subtitle text Hello 610, the subtitle text Subtitle 620, and the subtitle text World 630 are rendered
Element control data buffer: control information of the subtitle text Hello 610, the subtitle text Subtitle 620, and the subtitle text World 630
Text data buffer: "Hello" and "
Subtitle" and "

World"
Style data buffer: style information of "Script"
Font data buffer: font information of "Arial.ttf"
© From "00:15:00" to "00:17:00": output a PCS including an image in which the 5 subtitle text Subtitle 620 and the subtitle text World 630 are rendered
Element control data buffer: control information of the subtitle text Subtitle 620 and the subtitle text World 630
Text data buffer: "
Subtitle" and "

World"
Style data buffer: style information of "Script"
Font data buffer: font information of "Arial.ttf"
© From "00:17:00" to "00:19:00": output a PCS including an image in which the subtitle text World 630 is rendered
Element control data buffer: control information of the subtitle text World 630
Text data buffer: "

World"
Style data buffer: style information of "Script"
Font data buffer: font information of "Arial.ttf"
(7) After "00:19:00": output a PCS including a void subtitle image
Element control data buffer: void
Text data buffer: void
Style data buffer: style information of "Script"
Font data buffer: font information of "Arial.ttf"
As shown in the above subtitle output process, in the first method, one subtitle image is generated by applying the same style to a plurality of subtitle texts having overlapped output times, one PCS including the subtitle image is generated, and the generated PCS is transmitted to the presentation engine 230. At this time, page_time_out, indicating the time when the transmitted PCS disappears from a screen, means the time when a finally output subtitle among the plurality of subtitles having overlapped output times disappears or the time when a new subtitle is added.
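The time-zone behaviour listed above can be reproduced with a short sketch. The cue data comes from FIG. 6B; the leading "<br>" tags mirror the text data buffer contents shown above, and the helper itself is illustrative.

cues = [  # (text, start, end, leading line-change tags), per FIG. 6B
    ("Hello",    "00:10:00", "00:15:00", ""),
    ("Subtitle", "00:12:00", "00:17:00", "<br>"),
    ("World",    "00:14:00", "00:19:00", "<br><br>"),
]

def text_data_buffer(time):
    """Texts held in the text data buffer at a given time (first method:
    one PCS, one region, one style shared by all overlapped texts)."""
    # "HH:MM:SS" strings compare correctly in lexicographic order.
    return [tags + text for text, start, end, tags in cues
            if start <= time < end]

print(text_data_buffer("00:12:30"))  # ['Hello', '<br>Subtitle']
print(text_data_buffer("00:14:30"))  # ['Hello', '<br>Subtitle', '<br><br>World']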
Text subtitle processing of the output subtitles must be quickly performed considering a time T_decoding taken for performing decoding of the subtitles in the text subtitle processing unit 220 and a time T_composition taken for outputting the rendered subtitles from the object buffer 234 to the graphic plane 240. When T_start indicates the time when a subtitle is output from the text subtitle processing unit 220 of the
reproducing apparatus, and when T_arrival indicates the time when the subtitle arrives at the text subtitle processing unit 220, the correlations between these times are given by Equation 1.
[Equation 1]

T_start ≥ T_arrival + T_decoding + T_composition
T_decoding = T_rendering + T_composition_information_generation
T_rendering = Σ_{i=0..Num. of char} T_char(i)
Referring to Equation 1, it can be seen how quickly the text subtitle must be processed. Here, T_decoding indicates the time taken to render a subtitle to be output, generate a PCS including the rendered object, and transmit the generated PCS to the object buffer 234. The subtitle requiring an output time of T_start must start to be processed earlier than T_start by at least the sum of T_decoding and T_composition. The time T_decoding is obtained by adding T_rendering, which is the time taken to render the subtitle text and transmit the rendered subtitle text to the object buffer 234, and T_PCS (the composition information generation time of Equation 1), which is the time taken to generate the PCS including the rendered object and transmit the PCS to the graphic plane 240. The time T_char is the time taken to render one character. Therefore, T_rendering is obtained by adding the times taken to render all characters.
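As a worked sketch of Equation 1 (all timing numbers are made up for illustration; the specification gives no concrete values), the processing-start deadline can be derived like this:

def rendering_time(text, t_char):
    # T_rendering = sum of the per-character rendering times (Equation 1)
    return sum(t_char(c) for c in text)

def latest_processing_start(t_start, text, t_char, t_pcs, t_composition):
    """Latest arrival time that still meets the output time T_start:
    T_arrival <= T_start - (T_decoding + T_composition)."""
    t_decoding = rendering_time(text, t_char) + t_pcs
    return t_start - (t_decoding + t_composition)

# Illustrative numbers in milliseconds: 2 ms per character, 50 ms per PCS,
# 100 ms composition; the subtitle must be on screen at t = 10,000 ms.
print(latest_processing_start(10_000, "Hello", lambda c: 2, 50, 100))  # 9840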
The size of the object buffer 234 must be equal to or larger than the size of the object. Here, the size of the object is obtained by adding the sizes of character data of
the object. Therefore, the number of characters composing one subtitle is limited to the number of characters which can be stored in the object buffer 234. Also, since the object buffer 234 can store a plurality of subtitles, the number of characters composing the plurality of subtitles is also limited to the number of characters which can be stored in the object buffer 234.
FIGS. 7A through 7C are diagrams for illustrating a process of generating an
image for a plurality of subtitles using one PCS and a plurality of position information data.
In FIG. 7A, styles "Script1", "Script2", and "Script3" are defined as style information used for subtitle text rendering. Referring to FIG. 7A, each of the three styles uses a font "Arial.ttf", a text color "black", a background color "white", a character size "16pt", an align method "center", an output direction "left-to-right-top-to-bottom", and a line-height "40px". As a subtitle text reference position, "Script1" has coordinates (x1, y1), "Script2" has coordinates (x2, y2), and "Script3" has coordinates (x3, y3). As a text output region, "Script1" has "left1, top1, width1, height1", "Script2" has "left2, top2, width2, height2", and "Script3" has "left3, top3, width3, height3".
In FIG. 7B, subtitle texts 710, 720, and 730 rendered using the styles "Script1", "Script2", and "Script3" are defined. Referring to FIG. 7B, the subtitle text Hello 710 uses the style "Script1" and is output from "00:10:00" to "00:15:00", the subtitle text Subtitle 720 uses the style "Script2" and is output from "00:12:00" to "00:17:00", and the subtitle text World 730 uses the style "Script3" and is output from "00:14:00" to "00:19:00". Therefore, two or three subtitle texts are output between "00:12:00" and "00:17:00". Since different scripts are used, the line change tag "<br>" is unnecessary.
FIG. 7C shows a result of outputting the subtitles defined in FIGS. 7A and 7B. Referring to FIG. 7C, data stored in each buffer of the text subtitle processing unit 220 in each time zone will be described in detail.
(1) Before "00:10:00": output a PCS including a void subtitle image
Element control data buffer: void
Text data buffer: void
Style data buffer: void
Font data buffer: font information of "Arial.ttf"
(2) From "00:10:00" to "00:12:00": output a PCS including an image in which the
subtitle text Hello 710 is rendered
Element control data buffer: control information of the subtitle text Hello 710
Text data buffer: "Hello"
Style data buffer: style information of "Script1"
Font data buffer: font information of "Arial.ttf"
(3) From "00:12:00" to "00:14:00": output a PCS including the subtitle text Hello 710 and the subtitle text Subtitle 720
Element control data buffer: control information of the subtitle text Hello 710 and
the subtitle text Subtitle 720
Text data buffer: "Hello" and "Subtitle"
Style data buffer: style information of "Script1" and "Script2"
Font data buffer: font information of "Arial.ttf"

© From "00:14:00" to "00:15:00": output a PCS including the subtitle text Hello 710, the subtitle text Subtitle 720, and the subtitle text World 730
Element control data buffer: control information of the subtitle text Hello 710, the
subtitle text Subtitle 720, and the subtitle text World 730
Text data buffer: "Hello", "Subtitle", and "World"
Style data buffer: style information of "Script1", "Script2", and "Script3"
Font data buffer: font information of "Arial.ttf"
(5) From "00:15:00" to "00:17:00": output a PCS including the subtitle text Subtitle
720 and the subtitle text World 730
Element control data buffer: control information of the subtitle text Subtitle 720
and the subtitle text World 730
Text data buffer: "Subtitle" and "World"
Style data buffer: style information of "Script2" and "Script3"
Font data buffer: font information of "Arial.ttf"
(6) From "00:17:00" to "00:19:00": output a PCS including the subtitle text World
730
Element control data buffer: control information of the subtitle text World 730
Text data buffer: "World"
Style data buffer: style information of "Script3"
Font data buffer: font information of "Arial.ttf"
(7) After "00:19:00": output a PCS including a void subtitle image
Element control data buffer: void
Text data buffer: void
Style data buffer: void
Font data buffer: font information of "Arial.ttf"
In the second method described above, subtitle images for subtitle texts are generated by applying different styles to a plurality of subtitle texts having overlapped output times, one PCS including the subtitle images is generated, and the generated PCS is transmitted to the presentation engine 230. A text subtitle processing time is the same as that of the first method. That is, text subtitle processing of the output subtitles must be quickly performed considering a time T_decoding taken for performing decoding of the subtitles in the text subtitle processing unit 220 and a time T_composition taken for outputting the rendered subtitles from the object buffer 234 to the graphic
plane 240. However, in this method, since a plurality of objects exist, a rendering time is obtained by adding the times taken to render the respective objects. That is, the rendering time is calculated by Equation 2.
[Equation 2]

T_start ≥ T_arrival + T_decoding + T_composition
T_decoding = T_rendering + T_PCS
T_rendering = Σ_{i=0..Num. of obj} T_OBJ(i)
T_OBJ = Σ_{i=0..Num. of char} T_char(i)
The limitation on the number of characters of the subtitle text which can be stored in the object buffer 234 is the same as that of the first method.
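Extending the previous sketch to Equation 2, the rendering time becomes a sum over objects, each of which is itself a sum over its characters (the per-character time is again an invented number):

def object_time(text, t_char):
    # T_OBJ = sum of per-character times for one subtitle object
    return sum(t_char(c) for c in text)

def rendering_time_multi(objects, t_char):
    # T_rendering = sum of T_OBJ over all objects (Equation 2)
    return sum(object_time(text, t_char) for text in objects)

print(rendering_time_multi(["Hello", "Subtitle"], lambda c: 2))  # 26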
FIGS. 8A through 8C are diagrams for illustrating a process of generating an image so that one image object is included in one PCS by allocating a plurality of PCS for a plurality of subtitles.
In FIG. 8A, styles "Script1", "Script2", and "Script3" are defined as style information used for subtitle text rendering. Referring to FIG. 8A, each of the three styles uses a font "Arial.ttf", a text color "black", a background color "white", a character size "16pt", an align method "center", an output direction "left-to-right-top-to-bottom", and a line-height "40px". As a subtitle text reference position, "Script1" has coordinates (x1, y1), "Script2" has coordinates (x2, y2), and "Script3" has coordinates (x3, y3). As a text output region, "Script1" has "left1, top1, width1, height1", "Script2" has "left2, top2, width2, height2", and "Script3" has "left3, top3, width3, height3".
In FIG. 8B, subtitle texts 810, 820, and 830 rendered using the styles "Script1", "Script2", and "Script3" are defined. Referring to FIG. 8B, the subtitle text Hello 810 uses the style "Script1" and is output from "00:10:00" to "00:15:00", the subtitle text Subtitle 820 uses the style "Script2" and is output from "00:12:00" to "00:17:00", and the subtitle text World 830 uses the style "Script3" and is output from "00:14:00" to "00:19:00". Therefore, two or three subtitle texts are overlapped between "00:12:00" and "00:17:00".
FIG. 8C shows a result of outputting the subtitles defined in FIGS. 8A and 8B. Referring to FIG. 8C, data stored in each buffer of the text subtitle processing unit 220 in each time zone will be described in detail.

(1) From "00:00:00": output a PCS including a void subtitle image
Element control data buffer: void
Text data buffer: void
Style data buffer: void
Font data buffer: font information of "Arial.ttf"
(2) From "00:10:00": output a PCS including an image in which the subtitle text Hello 810 is rendered
Element control data buffer: load control information of the subtitle text Hello 810
Text data buffer: "Hello"
Style data buffer: style information of "Script1"
Font data buffer: font information of "Arial.ttf"
(3) From "00:12:00": output a PCS including the subtitle text Hello 810 and a PCS including the subtitle text Subtitle 820
Element control data buffer: load control information of the subtitle text Subtitle 820
Text data buffer: "Subtitle"
Style data buffer: style information of "Script2"
Font data buffer: font information of "Arial.ttf"
© From "00:14:00": output a PCS including the subtitle text Hello 810, a PCS 20 including the subtitle text Subtitle 820, and a PCS including the subtitle text World 830
Element control data buffer: load control information of the subtitle text World 830
Text data buffer: "World"
Style data buffer: style information of "Script3"
Font data buffer: font information of "Arial.ttf"
(5) After "00:15:00": The text subtitle processing unit 220 does not perform any operation until it prepares an output for subsequent subtitle texts to be output after "00:19:00". Therefore, changes of the subtitles output between "00:15:00" and "00:19:00" are performed by the presentation engine 230 controlling the PCS of the subtitles "Hello", "Subtitle", and "World" received from the text subtitle processing unit 220.
That is, at "00:15:00", the presentation engine 230 deletes the PCS and bitmap
image object of the subtitle "Hello" from the composition buffer 233 and the object buffer 234 and outputs only the PCS of the subtitles "Subtitle" and "World" onto a screen. At "00:17:00", the presentation engine 230 deletes the PCS and bitmap image object of the
subtitle "Subtitle" from the composition buffer 233 and the object buffer 234 and outputs only the PCS of the subtitle "World" onto the screen. Also, at "00:19:00", the presentation engine 230 deletes the PCS and bitmap image object of the subtitle "World" from the composition buffer 233 and the object buffer 234 and does not output a 5 subtitle onto the screen any more.
In the third method described above, one subtitle image for each subtitle text is generated by applying different styles to a plurality of subtitle texts having overlapped output times, one PCS is generated for one subtitle image, and the generated plurality of PCS are transmitted to the presentation engine 230. A text subtitle processing time
is the same as that of the first method. While only the processing time of one PCS at a time is considered in the first and second methods, since one PCS for a plurality of subtitle texts having overlapped output times is composed and output, a plurality of PCS are generated and output in the third method, since each subtitle text composes one PCS. Therefore, for a subtitle text processing start time of the third method, the worst case, that is, a case where a plurality of PCS for a plurality of subtitles having the same output start time are simultaneously generated and output, must be considered. This is described by Equation 3.
[Equation 3]
T_start ≥ T_arrival + T_decoding + T_composition
T_decoding = T_rendering + T_PCS_generation
T_PCS_generation = Σ_{i=0..number of PCS} T_PCS(i)
T_rendering = Σ_{i=0..Num. of obj} T_OBJ(i)
T_OBJ = Σ_{i=0..Num. of char} T_char(i)
The time T_PCS_generation taken to generate a plurality of PCS is obtained by adding together each T_PCS(i), which is the PCS generation time of one subtitle. The time T_rendering taken to generate a plurality of objects by rendering a plurality of subtitles is obtained by adding together each T_OBJ, which is the rendering time of one subtitle. The time T_OBJ taken to render one subtitle is obtained by adding together each T_char, which is the rendering time of each character included in the corresponding subtitle. Referring to Equation 3, in order to simultaneously output a plurality of subtitles including a
plurality of characters, a sum of times taken to render all characters included in the
subtitles, compose the plurality of PCS, and output the plurality of PCS must be less
than a difference between a subtitle output time and a subtitle processing start time of
the text subtitle processing unit 220.
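A sketch of Equation 3's worst case, where every simultaneously output subtitle needs its own PCS (timing values are again invented for illustration):

def worst_case_decoding(subtitles, t_char, t_pcs_each):
    """T_decoding in the worst case of Equation 3: rendering and PCS
    generation are both summed over all simultaneous subtitles."""
    t_rendering = sum(t_char(c) for text in subtitles for c in text)
    t_pcs_generation = t_pcs_each * len(subtitles)
    return t_rendering + t_pcs_generation

# Three subtitles with the same output start time, 2 ms per character and
# 50 ms per PCS: 18 characters * 2 + 3 * 50 = 186 ms of decoding budget.
print(worst_case_decoding(["Hello", "Subtitle", "World"],
                          lambda c: 2, 50))  # 186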
The limitation on the number of characters of the subtitle text which can be stored in the object buffer 234 is the same as that of the first method or the second method.
As described in the third method, in an information storage medium and a reproducing apparatus constructed to support simultaneous output of a plurality of PCS, a text subtitle and another bitmap image can be simultaneously
output onto a screen.
Data compressed and encoded in an AV stream includes video data, audio data, bitmap-based subtitles, and other non-subtitle bitmap images. An image displayed at the top-right of a screen in order to indicate that a TV program is for viewers over 15 years old only is an example of the non-subtitle bitmap images. In a conventional method, since only one PCS is output onto a screen at one time, a region for outputting a bitmap subtitle and a region for outputting a non-subtitle bitmap image are separately defined in the PCS in order to simultaneously output the bitmap subtitle and the non-subtitle bitmap image.
Accordingly, when a user turns the output of subtitles off because the user does not want them, a decoder stops only the decoding of the subtitles.
Therefore, since subtitle data is not transmitted to an object buffer, the subtitles disappear from the screen, and only the non-subtitle bitmap image is continuously output onto the screen.
When the text subtitle processing unit 220 generates an image for a subtitle using one PCS and transmits the PCS to the presentation engine 230 in order to output
the subtitle, if the output of subtitles is turned off, a non-subtitle bitmap image recorded in an AV stream is not output, either. Therefore, in a case where a plurality of PCS can be simultaneously output onto a screen as described in the third method of the present invention, when text subtitles are selected instead of bitmap subtitles, the images other than the bitmap subtitles in the PCS included in an AV stream can be continuously output, and
the text subtitles can be output using the PCS generated by the text subtitle processing unit 220. That is, the text subtitles and the other non-subtitle bitmap images can be simultaneously output onto the screen.

The present invention may be embodied in a general-purpose computer by running a program from a computer-readable medium, including but not limited to storage media such as magnetic storage media (ROMs, RAMs, floppy disks, magnetic tapes, etc.), optically readable media (CD-ROMs, DVDs, etc.), and carrier waves (transmission over the Internet). The present invention may be embodied as a computer-readable medium having a computer-readable program code unit embodied therein for causing a number of computer systems connected via a network to effect distributed processing. The functional programs, codes, and code segments for embodying the present invention may be easily deduced by programmers in the art to which the present invention belongs.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following
claims.
[Effect of the Invention]
As described above, according to an embodiment of the present invention, a subtitle file can be easily produced, and subtitles for an AV stream can be output in various forms.

What is claimed is:
1. An information storage medium comprising:
AV data; and
subtitle data in which at least one subtitle text data and output style information for designating an output form of the subtitle texts are stored in a text format.
2. The information storage medium of claim 1, wherein the output style
information contains a plurality of pieces of information so that the output style
information is differently applied to the subtitle texts.
3. The information storage medium of claim 1, wherein the subtitle text data is rendered by applying the same output style, and one page composed of one image is generated.
4. The information storage medium of claim 1, wherein the subtitle text data is rendered by applying different output styles, and pages are generated, each page being composed of each rendered image.
5. The information storage medium of claim 1, wherein, when a plurality of
subtitle data exist, the plurality of subtitle data are rendered, and rendered images
compose a plurality of pages, respectively.
6. The information storage medium of claim 1, wherein the subtitle data
further comprises information of a time when the subtitle text is output onto a screen.
7. A text subtitle processing apparatus comprising:
a text subtitle parser separately extracting rendering information used for
rendering a text in text subtitle data and control information used for presenting the
rendered text; and
a text layout/font renderer generating a bitmap image of a subtitle text by
rendering the subtitle text according to the extracted rendering information.

8. The apparatus of claim 7, wherein the text subtitle parser constructs the
control information so that the control information is fitted to a predetermined
information structure format and transmits the control information to a presentation
engine.
9. The apparatus of claim 7, wherein the text layout/font renderer renders a
plurality of subtitle text data by applying the same output style to the plurality of subtitle
text data and generates one page composed of one image.
10. The apparatus of claim 7, wherein the text layout/font renderer renders a
plurality of subtitle text data by applying different output styles to the plurality of subtitle text data and generates one page composed of a plurality of rendered images.
11. The apparatus of claim 7, wherein the text layout/font renderer renders a plurality of subtitle text data by applying different output styles to the plurality of subtitle text data and generates a plurality of pages composed of a plurality of rendered images.

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION (See section 10, rule 13)
"INFORMATION STORAGE MEDIUM CONTAINING SUBTITLES AND PROCESSING APPARATUS THEREFOR"
SAMSUNG ELECTRONICS CO., LTD., a Korean company, of 416, Maetan-dong, Yeongtong-gu, Suwon-si, Gyeonggi-do 442-742, Korea,
The following specification particularly describes the invention and the manner in which it is to be performed.



Patent Number 237952
Indian Patent Application Number 522/MUMNP/2006
PG Journal Number 3/2010
Publication Date 22-Jan-2010
Grant Date 14-Jan-2010
Date of Filing 08-May-2006
Name of Patentee SAMSUNG ELECTRONICS CO., LTD.
Applicant Address 416, Maetan-dong, Yeongtong-gu, Suwon-si, Gyeonggi-do 442-742, Korea
Inventors:
# Inventor's Name Inventor's Address
1 KANG, Man-Seok 1237-3 Maetan 3-dong, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-848, Korea
2 MOON, Seong-Jin 403-506 Cheongmyung Maeul 4-danji Apt., 1046-1, Yeongtong-dong, Yeongtong-gu, Suwon-si, Gyeonggi-do 443-738, Korea
3 CHUNG, Hyun-Kwon 569-302 Shinsa-dong, Gangnam-gu, Seoul 135-891, Korea
PCT International Classification Number G11B20/10
PCT International Application Number PCT/KR2004/002904
PCT International Filing date 2004-11-10
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 10-2004-0083517 2004-10-19 Republic of Korea
2 10-2003-0079181 2003-11-10 Republic of Korea