Title of Invention

METHOD AND APPARATUS FOR 3-DIMENSIONAL ENCODING AND/OR DECODING OF VIDEO

Abstract

Provided is a method for 3-dimensional encoding of videos, which adapts to temporal and spatial characteristics of the videos. The method includes performing temporal estimation on videos taken by a camera located in the center with reference to videos taken by that camera at an immediately previous time, when a plurality of cameras is arranged in a row, and performing temporal-spatial estimation on videos taken by other cameras with reference to previous videos taken by cameras adjacent to the camera located in the center. As described above, according to the present invention, 3-dimensional videos acquired using a number of cameras can be efficiently encoded.

[Representative Drawing] FIG. 5B
Full Text

FORM 2
THE PATENTS ACT, 1970 (39 of 1970) & The Patents Rules, 2003
PROVISIONAL/COMPLETE SPECIFICATION (See section 10 and rule 13)
1. TITLE OF THE INVENTION : "METHOD, MEDIUM, AND APPARATUS FOR 3-DIMENSIONAL ENCODING AND/OR DECODING OF VIDEO"
2. APPLICANT(S)
(a) NAME: DAEYANG FOUNDATION
(b) NATIONALITY: KR
(c) ADDRESS: 98 Kunja-dong, Kwangjin-gu, Seoul, 143-747, Republic of Korea
(a) NAME: SAMSUNG ELECTRONICS CO., LTD.
(b) NATIONALITY: KR
(c) ADDRESS: 416, Maetan-dong, Yeongton-gu, Suwon-si, Gyeonggi-do 442-742, Republic of Korea
3. PREAMBLE TO THE DESCRIPTION
PROVISIONAL: The following specification describes the invention.
COMPLETE: The following specification particularly describes the invention and the manner in which it is to be performed.
4. DESCRIPTION (Description shall start from next page)
5. CLAIMS (not applicable for provisional specification. Claims should start with the preamble - "I/we claim" on separate page)
6. DATE AND SIGNATURE (to be given at the end of last page of specification)
7. ABSTRACT OF THE INVENTION (to be given along with complete specification on separate page)


Description
METHOD, MEDIUM, AND APPARATUS FOR
3-DIMENSIONAL ENCODING AND/OR DECODING OF
VIDEO
Technical Field
[1] Embodiments of the present invention relate to video encoding and decoding, and more particularly, to a method, medium, and apparatus for 3-dimensional encoding and/or decoding of video, which includes adapting to temporal and spatial characteristics of the video.
Background Art
[2] Video encoding in Moving Picture Experts Group (MPEG)-4 part 2 and H.264 (MPEG-4 advanced video coding (AVC)) involves 2-dimensional encoding of videos and focuses on improving encoding efficiency. However, in the field of real-life communication or virtual reality, 3-dimensional encoding and reproduction of videos are also required. Therefore, studies should be conducted on 3-dimensional encoding of audio video (AV) data instead of conventional 2-dimensional encoding.
[3] MPEG, which is an organization for standardizing video encoding, has made efforts
to establish standards for 3-dimensional encoding of AV data. As a part of such efforts, a 3-dimensional AV encoding ad-hoc group (AHG) has been organized and standardization is in progress.
Disclosure of Invention
Technical Solution
[4] Embodiments of the present invention include a method, medium, and apparatus for
3-dimensional encoding and/or decoding of video, by which video data received from a plurality of cameras is encoded/decoded 3-dimensionally.
Advantageous Effects
[5] According to embodiments of the present invention, 3-dimensional videos acquired
using a number of cameras can be efficiently encoded, resulting in superior video
display quality.
Description of Drawings
[6] FIG. 1 is a view illustrating encoding and reproduction of stereoscopic videos using
a left view video and a right view video, according to an embodiment of the present
invention;
[7] FIGS. 2A and 2B illustrate exemplary structures of a base layer video and an enhancement layer video;
[8] FIG. 3 is a view illustrating creation of a single video using decimation of the left
view video and right view video and reconstruction of the single video into a left view
video and a right view video using interpolation of the single video, according to an
embodiment of the present invention;
[9] FIG. 4 is a view illustrating motion estimation/compensation of decimated video
composed of a left view video and a right view video;
[10] FIG. 5A illustrates encoding of a plurality of video data received from cameras
arranged in a row, according to an embodiment of the present invention;
[11] FIG. 5B illustrates video taken by a plurality of cameras over time, with scene changes;
[12] FIGS. 6A and 6B are views illustrating 3-dimensional encoding of videos, according to embodiments of the present invention; and
[13] FIG. 7 illustrates camera positions and an order of encoding when the plurality of
cameras exists in a 2-dimensional space, according to an embodiment of the present invention.
Best Mode
[14] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth a method for 3-dimensional encoding of videos, the method including performing temporal estimation on video taken by a centerly located camera with reference to video taken by the centerly located camera at at least an immediately previous time, when a plurality of other cameras are arranged in a row, with the centerly located camera being at a central position of the row, and performing temporal-spatial estimation on videos taken by the other cameras with reference to previous-in-time videos taken by cameras adjacent to the centerly located camera and the video taken by the centerly located camera at the at least the immediately previous time.
[15] A result of the performed temporal estimation on video taken by the centerly
located camera may be a base layer video and a result of the performed temporal-
spatial estimation on videos taken by the other cameras may be at least one en
hancement layer video for the base layer video.
[16] In the performing of the temporal-spatial estimation on videos taken by the other
cameras, the temporal-spatial estimation may be performed on previous-in-time videos referred to by the videos taken by the other cameras with reference to a number of previous-in-time videos which is equal to a predetermined number of reference
pictures. In addition, the predetermined number of reference pictures may be 5.
[17] Further, in the temporal-spatial estimation on videos taken by the other cameras, temporal-spatial estimation may also be performed with reference to current videos taken by cameras adjacent to the centerly located camera. In the temporal-spatial estimation on videos taken by the other cameras, temporal-spatial estimation may also be performed with reference to videos taken by all of a plurality of cameras that fall within a range of an angle between previous-in-time videos taken by cameras adjacent to the centerly located camera and videos to be presently estimated.
[18] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth a method for 3-dimensional encoding of videos, the method including referring to a previous-in-time video taken by a camera adjacent to a center of a video to be presently encoded, and performing temporal-spatial estimation with reference to as many previous-in-time videos adjacent to the camera adjacent to the center of the video according to a predetermined number of reference pictures.
[19] A result of the referring may be a base layer video and a result of the performed
temporal-spatial estimation may be at least one enhancement layer video for the base layer video.
[20] In addition, an angle between the camera adjacent to the center of the video and the
video to be presently encoded may vary according to an interval between adjacent cameras.
[21] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth a method for 3-dimensional encoding of videos, by which a plurality of videos taken by cameras arranged 2-dimensionally are encoded, the method including encoding videos taken by a camera centerly located among other cameras arranged 2-dimensionally, and sequentially encoding videos taken by the other cameras in an order based on shortest distances from the centerly located camera.
[22] A result of the encoding of videos taken by the camera centerly located may be a
base layer video and a result of the sequential encoding may be at least one enhancement layer video for the base layer video.
[23] Further, in the sequentially encoding, if there are a plurality of cameras having a
same distance from the centerly located camera, encoding of the plurality of cameras
having the same distance may be sequentially performed in a spiral manner.
[24] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth a medium including computer readable code to implement a method for 3-dimensional encoding of videos, the method including performing
temporal estimation on video taken by a centerly located camera with reference to videos taken by the centerly located camera at at least an immediately previous time,
when a plurality of other cameras are arranged in a row, with the centerly located camera being at a central position of the row, and performing temporal-spatial estimation on videos taken by the other cameras with reference to previous-in-time videos taken by cameras adjacent to the centerly located camera and the video taken by the centerly located camera at the at least the immediately previous time.
[25] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth an encoder for 3-dimensional encoding, including a first encoder to perform temporal estimation on video taken by a centerly located camera with reference to video taken by the centerly located camera at at least an immediately previous time, when a plurality of other cameras are arranged in a row, with the centerly located camera being at a central position of the row, a second encoder to perform temporal-spatial estimation on videos taken by the other cameras with reference to previous-in-time videos taken by cameras adjacent to the centerly located camera and the video taken by the centerly located camera at the at least the immediately previous time, and a multiplexer to multiplex an output of the first encoder and an output of the second encoder.
[26] In the second encoder the temporal-spatial estimation may be performed on
previous-in-time videos referred to by the videos taken by the other cameras with reference to a number of previous-in-time videos which is equal to a predetermined number of reference pictures.
[27] In addition, an output of the first encoder may be a base layer video and an output
of the second encoder may be at least one enhancement layer video for the base layer video.
[28] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth an encoder for 3-dimensional encoding of videos, including a first encoder encoding present time video taken by a camera adjacent to a center of a video by referring to a previous-in-time video of the camera adjacent to the center of the video, a second encoder to perform temporal-spatial estimation with reference to as many previous-in-time videos adjacent to the camera adjacent to the center of the video according to a predetermined number of reference pictures, and a multiplexer to multiplex an output of the first encoder and an output of the second encoder.
[29] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth an encoder for 3-dimensional encoding of videos, by which
a plurality of videos taken by cameras arranged 2-dimensionally are encoded, including a first encoder to encode videos taken by a camera centerly located among
other cameras arranged 2-dimensionally, a second encoder to sequentially encode videos taken by the other cameras in an order based on shortest distances from the centerly located camera, and a multiplexer to multiplex an output of the first encoder and an output of the second encoder.
[30] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth an encoding system for 3-dimensional encoding, including a plurality of cameras, with at least one camera of the plurality of cameras being centerly located among the plurality of cameras, a first encoder to perform temporal estimation on video taken by the centerly located camera with reference to video taken by the centerly located camera at at least an immediately previous time, when a plurality of other cameras, of the plurality of cameras, are arranged in a row, with the centerly located camera being at a central position of the row, a second encoder to perform temporal-spatial estimation on videos taken by the other cameras with reference to previous-in-time videos taken by cameras adjacent to the centerly located camera and the video taken by the centerly located camera at the at least the immediately previous time, and a multiplexer to multiplex an output of the first encoder and an output of the second encoder.
[31] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth a method for 3-dimensional decoding of videos, the method
including demultiplexing a video bitstream into a base layer video and at least one en
hancement layer video, decoding the base layer video, to decode video encoded by
performed temporal estimation for video taken by a centerly located camera with
reference to video taken by the centerly located camera at at least an immediately
previous time, when a plurality of other cameras were arranged in a row, with the
centerly located camera being at a central position of the row, and decoding the at least
one enhancement layer video, based on network resources, to decode video encoded by performed temporal-spatial encoding on videos taken by the other cameras with
reference to previous-in-time videos taken by cameras adjacent to the centerly located
camera and the video taken by the centerly located camera at the at least the im-
mediately previous time.
[32] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth a method for 3-dimensional decoding of videos, the method including demultiplexing a video bitstream into a base layer video and at least one enhancement layer video, decoding the base layer video, to decode video encoded by referring to a previous-in-time video taken by a camera adjacent to a center of a video
to be then presently encoded, and decoding the at least one enhancement layer video,
based on network resources, to decode video encoded by performed temporal-spatial estimation with reference to as many previous-in-time videos adjacent to the camera adjacent to the center of the video according to a predetermined number of reference pictures.
[33] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth a method for 3-dimensional decoding of videos, by which a plurality of videos taken by cameras arranged 2-dimensionally were encoded, the method including demultiplexing a video bitstream into a base layer video and at least one enhancement layer video, decoding the base layer video, to decode video encoded by encoding videos taken by a camera centerly located among other cameras arranged 2-dimensionally, and decoding the at least one enhancement layer video, based on network resources, to decode video encoded by sequentially encoding videos taken by the other cameras in an order based on shortest distances from the centerly located camera.
[34] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth a computer readable medium including computer readable code to implement a method for 3-dimensional decoding of videos, the method including demultiplexing a video bitstream into a base layer video and at least one enhancement layer video, decoding the base layer video, to decode video encoded by
performed temporal estimation on videos taken by a centerly located camera with
reference to videos taken by the centerly located camera at at least an immediately
previous time, when a plurality of other cameras were arranged in a row, with the
centerly located camera being at a central position of the row, and decoding the at least
one enhancement layer video, based on network resources, to decode video encoded by
performed temporal-spatial estimation on videos taken by the other cameras with
reference to previous-in-time videos taken by cameras adjacent to the centerly located
camera and the video taken by the centerly located camera at the at least the im
mediately previous time.
[35] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth a decoder for 3-dimensional decoding of videos, including a demultiplexer to demultiplex a video bitstream into a base layer video and at least one enhancement layer video, a first decoder to decode the base layer video, by decoding
video encoded by performed temporal estimation for video taken by a centerly located camera with reference to video taken by the centerly located camera at at least an immediately previous time, when a plurality of other cameras were arranged in a row, with the centerly located camera being at a central position of the row, and a second decoder to decode the at least one enhancement layer video, based on network resources, by decoding video encoded by performed temporal-spatial encoding on videos taken by the other cameras with reference to previous-in-time videos taken by cameras adjacent to the centerly located camera and the video taken by the centerly located camera at the at least the immediately previous time.
[36] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth a decoder for 3-dimensional decoding of videos, including a demultiplexer to demultiplex a video bitstream into a base layer video and at least one enhancement layer video, a first decoder to decode the base layer video, by decoding video encoded by referring to a previous-in-time video taken by a camera adjacent to a center of a video to be then presently encoded, and a second decoder to decode the at least one enhancement layer video, based on network resources, by decoding video encoded by performed temporal-spatial estimation with reference to as many previous-in-time videos adjacent to the camera adjacent to the center of the video according to a predetermined number of reference pictures.
[37] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth a decoder for 3-dimensional decoding of videos, by which a plurality of videos taken by cameras arranged 2-dimensionally were encoded, including a demultiplexer to demultiplex a video bitstream into a base layer video and at least one enhancement layer video, a first decoder to decode the base layer video, by decoding video encoded by encoding videos taken by a camera centerly located among other cameras arranged 2-dimensionally, and a second decoder to decode the at least one enhancement layer video, based on network resources, by decoding video encoded by sequentially encoding videos taken by the other cameras in an order based on shortest distances from the centerly located camera.
[38] To achieve the above and/or other aspects and advantages, embodiments of the
present invention set forth a 3-dimensional encoded signal, including a base layer video encoded through performed temporal estimation on video taken by a centerly located camera with reference to videos taken by the centerly located camera at at least an immediately previous time, when a plurality of other cameras were arranged with the centerly located camera being at a central position of the arranged centerly located
-8-

WO 2005/069630



PCT7KR2005/000182

camera, and at least one enhancement layer video encoded through performed temporal-spatial estimation on videos taken by the other cameras with reference to previous-in-time videos taken by cameras adjacent to the centerly located camera and the video taken by the centerly located camera at the at least the immediately previous
time, wherein the base layer video and the at least one enhancement layer video are multiplexed to generate the 3-dimensional encoded signal.
Mode for Invention
[39] Reference will now be made in detail to the embodiments of the present invention,
examples of which are illustrated in the accompanying drawings, wherein like
reference numerals refer to the like elements throughout. The embodiments are
described below to explain the present invention by referring to the figures.
[40] FIG. 1 is a view illustrating encoding and reproduction of stereoscopic video using
left view video and right view video, according to an embodiment of the present
invention.
[41] As illustrated in FIG. 1, in an MPEG-2 multi-view profile (ISO/IEC 13818-2),
3-dimensional video can be coded and reproduced using a scalable codec in which a
correlation between the left view video and right view video is searched and a disparity
between the two videos is coded according to a condition of a corresponding network.
Encoding is carried out using the left view video as base layer video and the right view
video as enhancement layer video. The base layer video indicates video that can be
coded as it is, while the enhancement layer video indicates video that is additionally
coded and later used to improve the quality of the base layer video when the corresponding network transporting the two video layers is in good condition; when the network conditions are not favorable, only the base layer video may be reproduced.
As such, encoding using both the base layer video and the enhancement layer video is
referred to as scalable encoding.
[42] The left view video can be coded by a first motion compensated DCT encoder 110.
A disparity between the left view video and the right view video can be calculated by a disparity estimator 122, which estimates a disparity between the left view video and the right view video, and a disparity compensator 124 and can then be coded by a second motion compensated DCT encoder 126. Assuming that the first motion compensated DCT encoder 110 that encodes the left view video is a base layer video encoder, the disparity estimator 122, the disparity compensator 124, and the second motion compensated DCT encoder 126 that involve encoding the disparity between the
left view video and the right view video may be referred to as an enhancement layer
-9-

WO 2005/069630



PCT/KR2005/000182

video encoder 120. The encoded base layer video and enhancement layer video can then be multiplexed by a system multiplexer 130 and transmitted for subsequent decoding.
[43] In the decoding, multiplexed data can be decomposed into the left view video and
the right view video by a system demultiplexer 140. The left view video can be decoded by a first motion compensated DCT decoder 150. Disparity video is then restored to the right view video by a disparity compensator 162, which compensates for the disparity between the left view video and the right view video, and a second motion compensated DCT decoder 164. Assuming that the first motion compensated DCT decoder 150 that decodes the left view video is a base layer video decoder, the disparity compensator 162 and the second motion compensated DCT decoder 164 that involve searching for the disparity between the left view video and the right view video and decoding the right view video can be referred to as an enhancement layer video decoder 160.
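By way of illustration only, the scalable flow of FIG. 1 can be sketched as below. This is a minimal Python stand-in, not the actual motion compensated DCT codec: the encoders, the disparity coding, and the bitstream format are replaced by trivial array operations, and all function names are invented for the example.

import numpy as np

# Stand-in for the base layer encoder (motion compensated DCT encoder 110).
def encode_base(left_view: np.ndarray) -> np.ndarray:
    return left_view.copy()

# Stand-in for the enhancement layer encoder 120 (disparity estimator 122,
# disparity compensator 124, and second encoder 126): only the residual
# against the base layer reference is carried.
def encode_enhancement(right_view: np.ndarray, base_ref: np.ndarray) -> np.ndarray:
    return right_view - base_ref

# Stand-in for the system multiplexer 130.
def multiplex(base, enhancement):
    return {"base": base, "enh": enhancement}

# Stand-in for the system demultiplexer 140 plus decoders 150/160: when the
# network is not favorable, only the base layer is reproduced.
def demultiplex_and_decode(stream, network_good: bool):
    if network_good:
        return stream["base"], stream["base"] + stream["enh"]
    return stream["base"], None

left = np.random.rand(4, 4)
right = np.roll(left, 1, axis=1)          # toy "right view" of the same scene
stream = multiplex(encode_base(left), encode_enhancement(right, left))
_, right_reconstructed = demultiplex_and_decode(stream, network_good=True)
assert np.allclose(right_reconstructed, right)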
[44] FIGS. 2A and 2B illustrate exemplary structures of base layer video and en-
hancement layer video.
[45] As illustrated in FIG. 2A, similar to video encoding in MPEG-2 or MPEG-4, the
base layer video, which is of a left view video type, is encoded using an intra picture (called an I picture) 212, a predictive picture (called a P picture) 218, and bi-directional pictures (called B pictures) 214 and 216. On the other hand, the enhancement layer video, which is of a right view video type, may include a P picture 222 encoded with reference to the I picture 212 of a left view video type, a B picture 224 encoded with reference to the P picture 222 of a right view video type and the B picture 214 of a left view video type, a B picture 226 encoded with reference to the B picture 224 of a right view video type and the B picture 216 of a left view video type, and a B picture 228 encoded with reference to the B picture 226 of a right view video type and the P picture 218 of a left view video type. In other words, the disparity can be encoded with reference to the base layer video. In the illustration of FIG. 2A, the directions of the arrows indicate encoding of the respective video with reference to the video identified by the arrow point.
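For clarity, the reference structure of FIG. 2A can be written down as a simple dependency map, where each picture lists the pictures it is predicted from. The numerals are the reference numerals used in the text above; the references of the base-layer B pictures 214 and 216 to pictures 212 and 218 are assumed from conventional MPEG-2/MPEG-4 bi-directional prediction rather than stated explicitly.

# Picture id -> pictures it is predicted from (FIG. 2A).
references = {
    212: [],          # base layer I picture: intra coded, no reference
    218: [212],       # base layer P picture
    214: [212, 218],  # base layer B pictures (assumed conventional
    216: [212, 218],  #   bi-directional references)
    222: [212],       # enhancement P picture: refers to base layer I picture
    224: [222, 214],  # enhancement B: right-view P 222 and left-view B 214
    226: [224, 216],  # enhancement B: right-view B 224 and left-view B 216
    228: [226, 218],  # enhancement B: right-view B 226 and left-view P 218
}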
[46] FIG. 2B illustrates another exemplary structure of the enhancement layer video.
[47] Referring to FIG. 2B, the enhancement layer video of a right view video type can
include a B picture 242 encoded with reference to a B picture 232 of a left view video type, a B picture 244 encoded with reference to the B picture 242 of a right view video type and a B picture 234 of a left view video type, and a B picture 246 encoded with
reference to the B picture 244 of a right view video type and a P picture 236 of a left view video type.
[48] FIG. 3 is a view illustrating creation of a single video using decimation of the left
view video and right view video and reconstruction of the single video into left view video and right view video using interpolation of the single video.
[49] Referring to FIG. 3, stereo video encoding can be performed in an MPEG-2 main
profile (MP) that uses motion encoding and disparity encoding. Two videos can be combined into one video by horizontally decimating the left view video and the right view video to 1/2 in stereo video encoding, thereby reducing the bandwidth by 1/2. The combined video can then be transmitted to a decoder. The decoder receives the combined video and restores the original videos by decomposing the combined video into the left view video and the right view video and interpolating each of the two videos by a factor of two.
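A minimal numerical sketch of this packing follows, assuming simple column dropping for the 1/2 decimation and linear two-times upsampling for the reconstruction; actual deployments may use other decimation and interpolation filters.

import numpy as np

def decimate_half(view: np.ndarray) -> np.ndarray:
    return view[:, ::2]                       # keep every second column

def interpolate_x2(view: np.ndarray) -> np.ndarray:
    out = np.repeat(view, 2, axis=1).astype(float)
    out[:, 1:-1:2] = (view[:, :-1] + view[:, 1:]) / 2.0   # linear midpoints
    return out

left = np.random.rand(8, 8)
right = np.random.rand(8, 8)

# Encoder side: both views decimated to 1/2 and packed into one frame,
# so the combined frame occupies the bandwidth of a single view.
combined = np.hstack([decimate_half(left), decimate_half(right)])

# Decoder side: split the combined frame and interpolate each half by two.
half = combined.shape[1] // 2
left_restored = interpolate_x2(combined[:, :half])
right_restored = interpolate_x2(combined[:, half:])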
[50] FIG. 4 is a view illustrating motion estimation/compensation of a decimated video
including the left view video and the right view video.
[51] As illustrated in FIG. 4, the enhancement layer videos RI, RB, and RP can be
encoded with reference to enhancement layer videos adjacent to base layer videos LI, LB, and LP. Here, RI represents the I picture of a right view video type, RB represents the B picture of a right view video type, RP represents the P picture of a right view video type, LI represents the I picture of a left view video type, LB represents the B picture of a left view video type, and LP represents the P picture of a left view video type.
[52] However, such an encoding method has problems in that disparity information is not efficiently compressed and the difference in display quality between the left view video and the right view video consistently becomes as great as 0.5-1.5 dB. Also, if several cameras exist for one scene, it becomes difficult to receive the extra video data.
[53] FIG. 5A is a view illustrating encoding video data received from a plurality of
cameras arranged in a row.
[54] Referring to FIG. 5A, the plurality of cameras can be arranged in a row, e.g., in a
one-dimensional line. In embodiments of the present invention, it may be assumed that
the cameras exist in a 2-dimensional space composed of an i axis and a j axis. However, to explain an embodiment of the present invention, the plurality of cameras is illustrated as existing in only a one-dimensional space, i.e., i of (i, j) is equal to 0. If i is not equal to zero, a plurality of cameras will exist in a 2-dimensional space. Such an example will be described later with reference to FIG. 7.

[55] FIG. 5B illustrates video taken by a plurality of cameras over time, e.g., with scene changes.
[56] With videos taken by one of the cameras being identified by f (i, j, t) at a particular time t, (i, j) will identify the position of the camera, and when i is equal to 0 the corresponding camera exists in only one-dimensional space, as illustrated in FIGS. 5A and 5B. For example, f (0, 0, 0) identifies a video taken by a center camera at the initial time. If videos taken by other cameras are arranged along the time axis, there will also exist an angle θ with respect to videos taken by adjacent cameras at the adjacent time t. The angle information θ can also be used for encoding and decoding.
[57] FIGS. 6A and 6B are views illustrating 3-dimensional encoding of video, according
to an embodiment of the present invention.
[58] As illustrated in FIG. 6A, videos f (0, 0, 0), f (0, 0, 1), f (0, 0, 2), f (0, 0, 3), and f (0, 0, 4), respectively from cameras located at center positions (0, 0, t) from a first direction, are each encoded into base layer videos, i.e., they are each temporally estimated and encoded only with reference to immediately previous-in-time base layer videos. For example, f (0, 0, 2) is estimated with reference to f (0, 0, 1), and f (0, 0, 3) is estimated with reference to f (0, 0, 2). As an example, a maximum number of five reference videos can be used. Videos f (0, -1, t) taken by cameras located in positions (0, -1, t) are encoded into first enhancement layer videos. Specifically, videos f (0, -1, t) can be estimated using temporally previous-in-time decoded videos and reference videos of f (0, -1, t-1 ~ t-5). For example, video f (0, -1, 2) can be estimated with reference to videos f (0, 0, 1) and f (0, -1, 1), and video f (0, -1, 3) can be estimated with reference to videos f (0, 0, 2) and f (0, -1, 2). Again, in this example, a maximum of five reference videos are used in motion estimation, as with the base layer videos. In other words, motion is temporal-spatially estimated and then encoded.
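The reference selection just described can be sketched as follows. This is an illustrative reading of FIG. 6A under stated assumptions (unit time steps, a maximum of five reference videos, and each enhancement layer referring to the layer one camera position closer to the center); the function names are invented for the example.

MAX_REFS = 5   # maximum number of reference videos, as in the example above

# Base layer f(0, 0, t): temporal estimation only, against its own
# immediately previous-in-time videos f(0, 0, t-1) ... f(0, 0, t-5).
def base_layer_refs(t: int):
    return [(0, 0, t - k) for k in range(1, MAX_REFS + 1) if t - k >= 0]

# Enhancement layer f(0, j, t): temporal-spatial estimation, against its own
# earlier videos plus earlier videos of the adjacent layer one camera
# position closer to the center (assumed symmetric for negative j).
def enhancement_layer_refs(j: int, t: int):
    toward_center = j - 1 if j > 0 else j + 1
    own = [(0, j, t - k) for k in range(1, MAX_REFS + 1) if t - k >= 0]
    adjacent = [(0, toward_center, t - k) for k in range(1, MAX_REFS + 1) if t - k >= 0]
    return own + adjacent

# f(0, -1, 2) refers, among others, to f(0, -1, 1) and f(0, 0, 1), as in the text.
assert (0, -1, 1) in enhancement_layer_refs(-1, 2)
assert (0, 0, 1) in enhancement_layer_refs(-1, 2)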
[59] Videos of other layers can be encoded in the same way as the above. In other words, videos f (0, -2, t) taken from camera positions (0, -2, t) can be encoded into third enhancement layer videos, videos f (0, 1, t) taken from camera positions (0, 1, t) can be encoded into second enhancement layer videos, and videos f (0, 2, t) taken from camera positions (0, 2, t) can be encoded into fourth enhancement layer videos.
[60] As further illustrated in FIG. 6B, for encoding of enhancement layer videos,
adjacent layer videos can also be referred to, according to another embodiment of the
present invention. In this case, since a greater number of reference videos are used,
display quality of restored videos can be improved.
[61] FIG. 7 illustrates camera positions and an order of encoding when a plurality of
cameras exists in a 2-dimensional space.
[62] Referring to FIG. 7, camera positions are illustrated when cameras exist two di-
mensionally and t is equal to 0. According to one order of encoding videos taken by cameras, videos taken by a camera located in a centerly position can be encoded first, and videos taken by the 8 cameras that are located closest to the centerly positioned camera, e.g., those that have a distance of 1 from the centerly positioned camera (it is assumed here that the distance from one camera to another is 1), are sequentially encoded in a spiral manner. Then, videos taken from the 16 cameras that have a distance of 2 from the centerly positioned camera are sequentially encoded in a spiral manner. Such encoding can be arranged as follows, with a short generation sketch given after the list.
[63] (1) f (0, 0): distance = 0
[64] (2) f (1, 0), f (1, 1), f (0, 1), f (-1, 1), f (-1, 0), f (-1, -1), f (0, -1), f (1, -1): distance = 1
[65] (3) f (2, 0), f (2, 1), f (2, 2), ...: distance = 2
[66] (4) f (3, 0), f (3, 1), ...: distance = 3
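One way to generate this order is sketched below; the Chebyshev ring metric (so that the distance-1 ring holds 8 cameras and the distance-2 ring holds 16) and the counter-clockwise walk starting at f (1, 0) are assumptions chosen to reproduce the listing above.

import math

def spiral_encoding_order(max_ring: int):
    cameras = [(i, j) for i in range(-max_ring, max_ring + 1)
                      for j in range(-max_ring, max_ring + 1)]
    def key(camera):
        i, j = camera
        ring = max(abs(i), abs(j))                  # distance from center camera
        angle = math.atan2(j, i) % (2 * math.pi)    # spiral position on the ring
        return (ring, angle)
    return sorted(cameras, key=key)

# spiral_encoding_order(1) yields:
# [(0, 0), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
# matching the order of items (1) and (2) listed above.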
[67] If encoding is performed in the order described above but the bandwidth of a corresponding network is reduced, videos from all the cameras cannot be encoded and transmitted, and thus only a portion of the videos is transmitted. Accordingly, to overcome this potential bandwidth issue, videos from N cameras can be spatially-temporally predicted and restored using bilinear interpolation or sinc function type interpolation. Therefore, once 3-dimensional video information from cameras located in positions (i, j, t) is encoded and transmitted to the decoder, even though only partial data is transmitted when the bandwidth of a network is poor, the decoder can still restore the original videos by performing interpolation.
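As a sketch of this restoration, linear interpolation between the two transmitted neighbouring views (the one-dimensional case of the bilinear scheme; a sinc-kernel variant would weight more neighbours) can approximate a dropped view. The camera positions and the midway weighting below are assumptions for the example.

import numpy as np

def restore_missing_view(left_neighbour: np.ndarray,
                         right_neighbour: np.ndarray,
                         alpha: float = 0.5) -> np.ndarray:
    # alpha is the normalized position of the missing camera between its
    # transmitted neighbours (0.5 means exactly midway).
    return (1.0 - alpha) * left_neighbour + alpha * right_neighbour

f_0_1 = np.random.rand(4, 4)    # transmitted view from camera (0, 1)
f_0_3 = np.random.rand(4, 4)    # transmitted view from camera (0, 3)
f_0_2 = restore_missing_view(f_0_1, f_0_3)   # approximation of the dropped view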
[68] A method for encoding, according to an embodiment of the present invention, can
be further explained using a video f (0,6,6) as an example, as follows.
[69] (1) f (0, 6, 5), f (0, 6, 4), f (0, 6, 3), f (0, 6, 2), f (0, 6, 1): When j is equal to 6,
temporal prediction, i.e., motion estimation/compensation can be performed. At this time, the number of reference pictures is 5, noting that the number of reference pictures is subject to change according to various circumstances.
[70] (2) Temporal-spatial prediction can be performed from the video f (0, 6, 6) towards a center picture. At this time, temporal-spatial prediction is performed using a previously defined angle θ. In other words, temporal-spatial prediction can be performed on all the pictures that fall within a range of the angle θ. If θ is equal to 45°, prediction is performed in the following order (for example):
[71] a) f (0, 5, 5), f (0, 5, 4), f (0, 5, 3), f (0, 5, 2), f (0, 5, 1)
[72] b) f (0, 4, 4), f (0, 4, 3), f (0, 4, 2), f (0, 4, 1)
[73] c) f (0, 3, 3), f (0, 3, 2), f (0, 3, 1)
[74] d) f (0, 2, 2), f (0, 2, 1)
[75] e) f (0, 1, 1)
[76] In other words, motion estimation/compensation can be performed in units of macroblocks on the above 15 temporal-spatial reference pictures, with the reference pictures being determined using the previously defined angle θ.
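The enumeration of steps (1) and (2) can be expressed compactly as below. The exclusion of the initial pictures at t = 0 is an assumption inferred from the worked lists a) to e); with it, f (0, 6, 6) yields the 5 temporal references of step (1) plus the 15 temporal-spatial references of step (2).

def reference_pictures(j: int, t: int, num_refs: int = 5, include_t0: bool = False):
    floor = 0 if include_t0 else 1
    # step (1): purely temporal references f(0, j, t-1) ... f(0, j, t-num_refs)
    refs = [(0, j, t - k) for k in range(1, num_refs + 1) if t - k >= floor]
    # step (2): towards the center view; with theta = 45 degrees and unit
    # camera spacing, the view dj positions closer to the center may only
    # refer to times t - dj and earlier, up to num_refs of them.
    for dj in range(1, j + 1):
        for k in range(dj, dj + num_refs):
            if t - k >= floor:
                refs.append((0, j - dj, t - k))
    return refs

refs = reference_pictures(6, 6)
assert len(refs) == 5 + 15      # list (1) plus lists a) to e) above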
[77] (3) During temporal-spatial estimation encoding of (1) and (2), a macroblock that is
most similar to a currently encoded macroblock can be searched for from the reference pictures and motion estimation/compensation and residual transform coding can be performed on the found macroblock.
[78] According to further embodiments of the present invention, decoding methods can
be similarly performed inversely with respect to the aforementioned encoding methods, for example. As described with reference to FIGS. 6A and 6B, once the multiplexed base layer videos and enhancement layer videos are received, the multiplexed videos can be decomposed into individual layer videos and decoded.
[79] Methods for 3-dimensional encoding of videos can be implemented through
computer readable code, e.g., as computer programs. Codes and code segments making up the computer readable code can be easily construed by skilled computer
programmers. Also, the computer readable code can be stored/transferred on computer
readable media, with methods for 3-dimensional encoding/decoding of videos
being implemented by reading and executing the computer readable codes. The
computer readable media include magnetic recording media, optical recording media,
and carrier wave media, for example.
[80] While the present invention has been particularly shown and described with
reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein
without departing from the spirit and scope of the present invention as defined by the following claims.
Claims
[1] 1. A method for 3-dimensional encoding of videos, the method comprising:
performing temporal estimation on video taken by a centerly located camera with reference to video taken by the centerly located camera at at least an immediately previous time, when a plurality of other cameras are arranged in a row, with the centerly located camera being at a central position of the row; and performing temporal-spatial estimation on videos taken by the other cameras with reference to previous-in-time videos taken by cameras adjacent to the centerly located camera and the video taken by the centerly located camera at the at least the immediately previous time.
[2] 2. The method of claim 1, wherein a result of the performed temporal estimation
on video taken by the centerly located camera is a base layer video and a result of the performed temporal-spatial estimation on videos taken by the other cameras is at least one enhancement layer video for the base layer video.
[3] 3. The method of claim 1, wherein in the performing of the temporal-spatial
estimation on videos taken by the other cameras the temporal-spatial estimation is performed at least on previous-in-time videos referred to by the videos taken by the other cameras with reference at least to a number of previous-in-time videos which is equal to a predetermined number of reference pictures.
[4] 4. The method of claim 3, wherein the predetermined number of reference
pictures is 5.
[5] 5. The method of claim 3, wherein in the temporal-spatial estimation on videos
taken by the other cameras temporal-spatial estimation is also performed with reference further to current videos taken by cameras adjacent to the centerly located camera.
[6] 6. The method of claim 3, wherein in the temporal-spatial estimation on videos
taken by the other cameras temporal-spatial estimation is performed with reference to videos taken by all of a plurality of cameras that fall within a range of an angle between previous-in-time videos taken by cameras adjacent to the centerly located camera and videos to be presently estimated.
[7] 7. A method for 3-dimensional encoding of videos, the method comprising:
referring to a previous-in-time video taken by a camera adjacent to a center of a
video to be presently encoded; and
performing temporal-spatial estimation with further reference to as many
previous-in-time videos adjacent to the camera adjacent to the center of the video according to a predetermined number of reference pictures.
[8] 8. The method of claim 7, wherein a result of the referring is a base layer video
and a result of the performed temporal-spatial estimation is at least one enhancement layer video for the base layer video.
[9] 9. The method of claim 7, wherein an angle between the camera adjacent to the
center of the video and the video to be presently encoded varies according to an interval between adjacent cameras.
[10] 10. A method for 3-dimensional encoding of videos, by which a plurality of
videos taken by cameras arranged 2-dimensionally are encoded, the method comprising:
encoding videos taken by a camera centerly located among other cameras arranged 2-dimensionally; and
sequentially encoding videos taken by the other cameras in an order based on shortest distances from the centerly located camera.
[11] 11. The method of claim 10, wherein a result of the encoding of videos taken by
the camera centerly located is a base layer video and a result of the sequential encoding is at least one enhancement layer video for the base layer video.
[12] 12. The method of claim 10, wherein in the sequentially encoding, if there are a
plurality of cameras having a same distance from the centerly located camera, encoding of the plurality of cameras having the same distance is sequentially performed in a spiral manner.
[13] 13. A medium comprising computer readable code to implement a method for
3-dimensional encoding of videos, the method comprising: performing temporal estimation on video taken by a centerly located camera with reference to videos taken by the centerly located camera at at least an immediately previous time, when a plurality of other cameras are arranged in a row, with the centerly located camera being at a central position of the row; and performing temporal-spatial estimation on videos taken by the other cameras with reference to previous-in-time videos taken by cameras adjacent to the centerly located camera and the video taken by the centerly located camera at the at least the immediately previous time.
[14] 14. The medium of claim 13, wherein a result of the performed temporal
estimation on video taken by the centerly located camera is a base layer video and a result of the performed temporal-spatial estimation on videos taken by the
other cameras is at least one enhancement layer video for the base layer video.
[15] 15. An encoder for 3-dimensional encoding, comprising:
a first encoder to perform temporal estimation on video taken by a centerly
located camera with reference to video taken by the centerly located camera at at least an immediately previous time, when a plurality of other cameras are
arranged in a row, with the centerly located camera being at a central position of
the row;
a second encoder to perform temporal-spatial estimation on videos taken by the
other cameras with reference to previous-in-time videos taken by cameras
adjacent to the centerly located camera and the video taken by the centerly
located camera at the at least the immediately previous time; and
a multiplexer to multiplex an output of the first encoder and an output of the
second encoder.
[16] 16. The encoder of claim 15, wherein in the second encoder the temporal-spatial
estimation is performed at least on previous-in-time videos referred to by the videos taken by the other cameras with reference at least to a number of previous-in-time videos which is equal to a predetermined number of reference pictures.
[17] 17. The encoder of claim 16, wherein the predetermined number of reference
pictures is 5.
[18] 18. The encoder of claim 16, wherein in the second encoder temporal-spatial
estimation is also performed with reference to further current videos taken by cameras adjacent to the centerly located camera.
[19] 19. The encoder of claim 16, wherein in the second encoder temporal-spatial
estimation is performed with reference to videos taken by all of a plurality of cameras that fall within a range of an angle between previous-in-time videos taken by cameras adjacent to the centerly located camera and videos to be presently estimated.
[20] 20. The encoder of claim 16, wherein an output of the first encoder is a base
layer video and an output of the second encoder is at least one enhancement layer video for the base layer video.
[21] 21. An encoder for 3-dimensional encoding of videos, comprising:
a first encoder encoding present time video taken by a camera adjacent to a center of a video by referring to a previous-in-time video of the camera adjacent to the center of the video;
a second encoder to perform temporal-spatial estimation with further reference to
as many previous-in-time videos adjacent to the camera adjacent to the center of
the video according to a predetermined number of reference pictures; and
a multiplexer to multiplex an output of the first encoder and an output of the
second encoder.
[22] 22. The encoder of claim 21, wherein an angle between the camera adjacent to
the center of the video and the video to be presently encoded varies according to
an interval between adjacent cameras.
[23] 23. The encoder of claim 21, wherein an output of the first encoder is a base
layer video and an output of the second encoder is at least one enhancement
layer video for the base layer video.
[24] 24. An encoder for 3-dimensional encoding of videos, by which a plurality of
videos taken by cameras arranged 2-dimensionally are encoded, comprising:
a first encoder to encode videos taken by a camera centerly located among other cameras arranged 2-dimensionally;
a second encoder to sequentially encode videos taken by the other cameras in an order based on shortest distances from the centerly located camera; and a multiplexer to multiplex an output of the first encoder and an output of the second encoder.
[25] 25. The encoder of claim 24, wherein in the second encoder, if there are a
plurality of cameras having a same distance from the centerly located camera, encoding of the plurality of cameras having the same distance is sequentially performed in a spiral manner.
[26] 26. The encoder of claim 24, wherein an output of the first encoder is a base
layer video and an output of the second encoder is at least one enhancement layer video for the base layer video.
[27] 27. An encoding system for 3-dimensional encoding, comprising:
a plurality of cameras, with at least one camera of the plurality of cameras being
centerly located among the plurality of cameras;
a first encoder to perform temporal estimation on video taken by the centerly
located camera with reference to video taken by the centerly located camera at at
least an immediately previous time, when a plurality of other cameras, of the
plurality of cameras, are arranged in a row, with the centerly located camera
being at a central position of the row;
a second encoder to perform temporal-spatial estimation on videos taken by the
other cameras with reference to previous-in-time videos taken by cameras adjacent to the centerly located camera and the video taken by the centerly
located camera at the at least the immediately previous time; and
a multiplexer to multiplex an output of the first encoder and an output of the
second encoder.
[28] 28. The encoding system of claim 27, wherein in the second encoder the
temporal-spatial estimation is performed at least on previous-in-time videos
referred to by the videos taken by the other cameras with reference at least to a
number of previous-in-time videos which is equal to a predetermined number of
reference pictures.
[29] 29. The encoding system of claim 28, wherein in the second encoder temporal-
spatial estimation is performed with reference to videos taken by all of a plurality
of cameras that fall within a range of an angle between previous-in-time videos
taken by cameras adjacent to the centerly located camera and videos to be
presently estimated.
[30] 30. The encoding system of claim 27, wherein an output of the first encoder is a
base layer video and an output of the second encoder is at least one enhancement
layer video for the base layer video.
[31] 31. A method for 3-dimensional decoding of videos, the method comprising:
demultiplexing a video bitstream into a base layer video and at least one enhancement layer video;
decoding the base layer video, to decode video encoded by performed temporal
estimation for video taken by a centerly located camera with reference to video
taken by the centerly located camera at at least an immediately previous time,
when a plurality of other cameras were arranged in a row, with the centerly
located camera being at a central position of the row; and
decoding the at least one enhancement layer video, based on network resources,
to decode video encoded by performed temporal-spatial encoding on videos
taken by the other cameras with reference to previous-in-time videos taken by
cameras adjacent to the centerly located camera and the video taken by the
centerly located camera at the at least the immediately previous time.
[32] 32. The method of claim 31, wherein in the encoding of the at least one en-
hancement layer video, in the performed temporal-spatial estimation on videos taken by the other cameras, the temporal-spatial estimation was performed at
least on previous-in-time videos referred to by the videos taken by the other
cameras with reference at least to a number of previous-in-time videos which is equal to a predetermined number of reference pictures.
[33] 33. The method of claim 32, wherein the predetermined number of reference
pictures was 5.
[34] 34. The method of claim 32, wherein in the encoding of the at least one en-
hancement layer video, in the performed temporal-spatial estimation on videos taken by the other cameras, the temporal-spatial estimation was also performed with further reference to then current videos taken by cameras adjacent to the centerly located camera.
[35] 35. The method of claim 32, wherein in the encoding of the at least one en-
hancement layer video, in the performed temporal-spatial estimation on videos taken by the other cameras, the temporal-spatial estimation was performed with reference to videos taken by all of a plurality of cameras that fell within a range of an angle between previous-in-time videos taken by cameras adjacent to the centerly located camera and videos to then currently be estimated.
[36] 36. A method for 3-dimensional decoding of videos, the method comprising:
demultiplexing a video bitstream into a base layer video and at least one enhancement layer video;
decoding the base layer video, to decode video encoded by referring to a previous-in-time video taken by a camera adjacent to a center of a video to be then presently encoded; and
decoding the at least one enhancement layer video, based on network resources, to decode video encoded by performed temporal-spatial estimation with further reference to as many previous-in-time videos adjacent to the camera adjacent to the center of the video according to a predetermined number of reference pictures.
[37] 37. The method of claim 36, wherein an angle between the camera adjacent to the center of the video and the video to be then presently encoded varied according to an interval between adjacent cameras.
[38] 38. A method for 3-dimensional decoding of videos, by which a plurality of
videos taken by cameras arranged 2-dimensionally were encoded, the method comprising:
demultiplexing a video bitstream into a base layer video and at least one enhancement layer video;
decoding the base layer video, to decode video encoded by encoding videos
taken by a camera centerly located among other cameras arranged 2-dimensionally; and
decoding the at least one enhancement layer video, based on network resources, to decode video encoded by sequentially encoding videos taken by the other cameras in an order based on shortest distances from the centerly located camera.
[39] 39. The method of claim 38, wherein in the decoding of the sequentially encoded
at least one enhancement layer video, if there were a plurality of cameras having a same distance from the centerly located camera, the encoding of the plurality of cameras having the same distance was sequentially performed in a spiral manner.
[40] 40. A computer readable medium comprising computer readable code to
implement a method for 3-dimensional decoding of videos, the method comprising:
demultiplexing a video bitstream into a base layer video and at least one enhancement layer video;
decoding the base layer video, to decode video encoded by performed temporal estimation on videos taken by a centerly located camera with reference to videos taken by the centerly located camera at at least an immediately previous time, when a plurality of other cameras were arranged in a row, with the centerly located camera being at a central position of the row; and decoding the at least one enhancement layer video, based on network resources, to decode video encoded by performed temporal-spatial estimation on videos taken by the other cameras with reference to previous-in-time videos taken by cameras adjacent to the centerly located camera and the video taken by the centerly located camera at the at least the immediately previous time.
[41] 41. A decoder for 3-dimensional decoding of videos, comprising:
a demultiplexer to demultiplex a video bitstream into a base layer video and at
least one enhancement layer video;
a first decoder to decode the base layer video, by decoding video encoded by
performed temporal estimation for video taken by a centerly located camera with
reference to video taken by the centerly located camera at at least an immediately
previous time, when a plurality of other cameras were arranged in a row, with
the centerly located camera being at a central position of the row; and
a second decoder to decode the at least one enhancement layer video, based on
network resources, by decoding video encoded by performed temporal-spatial
encoding on videos taken by the other cameras with reference to previous-in-time videos taken by cameras adjacent to the centerly located camera and the video taken by the centerly located camera at the at least the immediately previous time.
[42] 42. The decoder of claim 41, wherein in the encoding of the at least one en-
hancement layer video, in the performed temporal-spatial estimation on videos taken by the other cameras, the temporal-spatial estimation was performed at least on previous-in-time videos referred to by the videos taken by the other cameras with reference at least to a number of previous-in-time videos which is equal to a predetermined number of reference pictures.
[43] 43. The decoder of claim 42, wherein the predetermined number of reference
pictures was 5.
[44] 44. The decoder of claim 42, wherein in the encoding of the at least one en-
hancement layer video, in the performed temporal-spatial estimation on videos taken by the other cameras, the temporal-spatial estimation was also performed with further reference to then current videos taken by cameras adjacent to the centerly located camera.
[45] 45. The decoder of claim 42, wherein in the encoding of the at least one en-
hancement layer video, in the performed temporal-spatial estimation on videos taken by the other cameras, the temporal-spatial estimation was performed with reference to videos taken by all of a plurality of cameras that fell within a range of an angle between previous-in-time videos taken by cameras adjacent to the centerly located camera and videos to then currently be estimated.
[46] 46. A decoder for 3-dimensional decoding of videos, comprising:
a demultiplexer to demultiplex a video bitstream into a base layer video and at least one enhancement layer video;
a first decoder to decode the base layer video, by decoding video encoded by referring to a previous-in-time video taken by a camera adjacent to a center of a
video to be then presently encoded; and
a second decoder to decode the at least one enhancement layer video, based on
network resources, by decoding video encoded by performed temporal-spatial
estimation with further reference to as many previous-in-time videos adjacent to the camera adjacent to the center of the video according to a predetermined
number of reference pictures.
[47] 47. The decoder of claim 46, wherein an angle between the camera adjacent to
the center of the video and the video to be then presently encoded varied
according to an interval between adjacent cameras.
[48] 48. A decoder for 3-dimensional decoding of videos, by which a plurality of
videos taken by cameras arranged 2-dimensionally were encoded, comprising: a demultiplexer to demultiplex a video bitstream into a base layer video and at least one enhancement layer video;
a first decoder to decode the base layer video, by decoding video encoded by encoding videos taken by a camera centerly located among other cameras
arranged 2-dimensionally; and
a second decoder to decode the at least one enhancement layer video, based on
network resources, by decoding video encoded by sequentially encoding videos
taken by the other cameras in an order based on shortest distances from the
centerly located camera.
[49] 49. The decoder of claim 48, wherein in the decoding of the sequentially encoded at least one enhancement layer video, if there were a plurality of cameras having a same distance from the centerly located camera, encoding of the plurality of cameras having the same distance was sequentially performed in a spiral manner.
[50] 50. A 3-dimensional encoded signal, comprising:
a base layer video encoded through performed temporal estimation on video
taken by a centerly located camera with reference to videos taken by the centerly located camera at at least an immediately previous time, when a plurality of other cameras were arranged with the centerly located camera being at a central position of the arranged cameras; and
at least one enhancement layer video encoded through performed temporal-spatial estimation on videos taken by the other cameras with reference to previous-in-time videos taken by cameras adjacent to the centerly located camera and the video taken by the centerly located camera at the at least the immediately previous time,
wherein the base layer video and the at least one enhancement layer video are multiplexed to generate the 3-dimensional encoded signal.

51. A method for 3-dimensional encoding of videos, a medium, an encoder for 3-dimensional encoding of videos, a method for 3-dimensional decoding of videos, a computer readable medium, a decoder for 3-dimensional decoding of videos, and a 3-dimensional encoded signal are substantially as herein described with reference to the accompanying drawings.
Dated this 9th day of September, 2005.

RAVI BHOLA
OF K & S PARTNERS
AGENT FOR THE APPLICANT(S)
ABSTRACT

[Abstract of the Disclosure]

Provided is a method for 3-dimensional encoding of videos, which adapts to temporal and spatial characteristics of the videos. The method includes performing temporal estimation on videos taken by a camera located in the center with reference to videos taken by that camera at an immediately previous time, when a plurality of cameras is arranged in a row, and performing temporal-spatial estimation on videos taken by other cameras with reference to previous videos taken by cameras adjacent to the camera located in the center. As described above, according to the present invention, 3-dimensional videos acquired using a number of cameras can be efficiently encoded.

[Representative Drawing]
FIG. 5B



Patent Number: 211318
Indian Patent Application Number: 1011/MUMNP/2005
PG Journal Number: 21/2008
Publication Date: 23-May-2008
Grant Date: 24-Oct-2007
Date of Filing: 15-Sep-2005
Name of Patentee: 1) SEJONG INDUSTRY ACADEMY COOPERATION FOUNDATION 2) SAMSUNG ELECTRONICS CO., LTD.
Applicant Address: 98 KUNJA-DONG, KWANGJIN-GU, SEOUL, 143-747; 416 MAETAN-DONG, YEONGTON-GU, SUWON-SI, GYEONGGI-DO 442-742, REPUBLIC OF KOREA
Inventors:
1. LEE YUNG-LYUL, 1-704 KIKDONG APT., 192 GARAK-DONG, SONGPA-GU, SEOUL 138-160
PCT International Classification Number: H04N7/24
PCT International Application Number: PCT/KR2005/000182
PCT International Filing Date: 2005-01-20
PCT Conventions:
1. Application Number 10-2004-0004423, Date of Convention 2004-01-20, Priority Country: Republic of Korea