Title of Invention

MULTILEVEL IMAGE SEGMENTATION

Abstract

A multilevel image segmentation technique using graph cuts is disclosed. A reduced resolution image is generated from a full resolution image which is to be segmented. The reduced resolution image is then segmented in order to identify a boundary between an object and a background within the image. The identified boundary then identifies a portion of an increased resolution image which is segmented in order to refine the earlier identified boundary. The steps may be iterated for successively increasing image resolutions in order to refine the boundary as required by a particular application. An initial identification of object and background portions of the image may be provided as input by a user. Alternatively, a user may identify only the object portion, and the background portion may be automatically determined.
Full Text

Multilevel Image Segmentation
[0001] This application claims the benefit of U.S. Provisional Application
No. 60/644,825 filed January 18, 2005, which is incorporated herein by reference.
BACKGROUND OF THE INVENTION
[0002] The present invention relates generally to image processing, and
more particularly to multilevel image segmentation.
[0003] Digital image processing is becoming increasingly popular as digital
imaging devices continue to become more powerful. For example, digital cameras
can generate pictures having 10 million pixels, and Computed Tomography (CT)
scanners may produce volume data having more than 100 million voxels.
Processing these images places a large computational burden on the various
devices that perform image processing.
[0004] One type of processing that is often performed on image data is
segmentation, whereby a boundary is determined between different portions of the
image. For example, in digital photography, it is often desirable to define a
boundary between a main object and background, in order to segment out the main
object. After the main object is segmented, the main object and background may
be processed separately. Similarly, in the medical imaging field, it is often
desirable to segment out a particular object, or portion of an object, from a CT scan
image. For example, in the case of a CT scan of a human heart, it may be
desirable to segment out a portion of the heart (e.g., the left atrium) in order to allow a
physician to more easily analyze the image. One example of segmentation is
illustrated in Fig. 1 which shows a rough image 100 of the left side of a human
heart. Assume that the object of interest is the left atrium 102 with the remaining
portion of the image not being of interest. A desirable segmentation is one which
provides a boundary between the object of interest 102 and the remaining portion
of the image. Such a boundary is shown in Fig. 1 as dotted line 104. Thus, an
appropriate segmentation process would generate boundary 104 between the
object of interest 102 and the remainder of the image.

[0005] One well known technique for image segmentation is the use of
graph cuts, as described in Y. Boykov and M. Jolly, Interactive Graph Cuts for
Optimal Boundary & Region Segmentation of Objects in N-D images, Proceedings
of International Conference on Computer Vision, Vol. 1, July 2001, Vancouver,
Canada, pp 105-112. As will be described in further detail below, the graph cuts
technique is an interactive segmentation technique that divides an image into two
segments, an object and background. A user imposes constraints for the
segmentation by indicating certain pixels that are part of the object and certain
pixels that are part of the background. The image is then automatically segmented
using graph cuts to find the globally optimal segmentation of the image.
[0006] The above identified graph cuts technique has become one of the
leading algorithms for interactive image segmentation in 2 dimensions (2D) and 3
dimensions (3D). While this technique provides accurate results for low resolution
images, it is of limited use for high resolution images due to its intense memory
requirements and its superlinear time complexity. For example, to segment a
typical CT volume of 512³ voxels in a medical imaging application, the memory
consumption would be more than 8 GB, which is impractical for current clinical
computers. Further, in a worst case complexity scenario, such segmentation could
require an extremely long processing time in order to complete, which is impractical
for a medical imaging application.
[0007] Thus, what is needed is a computationally efficient segmentation
technique that provides acceptable segmentation results.
BRIEF SUMMARY OF THE INVENTION
[0008] The present invention provides an improved multilevel image
segmentation technique.
[0009] In accordance with an embodiment of the invention, a reduced
resolution image is generated from a full resolution image which is to be
segmented. The reduced resolution image is then segmented in order to identify a
boundary between an object and a background within the image. The identified

boundary then identifies a portion of an increased resolution image which is
segmented in order to refine the earlier identified boundary. By only segmenting a
low resolution image of the entire image, and then segmenting only portions of the
increased resolution image, significant computational resources (e.g., computer
cycles and memory utilization) are saved. The steps may be iterated for
successively increasing image resolutions in order to refine the boundary as
required by a particular application. In an advantageous embodiment, the portion
of the increased resolution image to be segmented may be identified by performing
a dilation operation on the prior identified boundary, and identifying the outer
portion of the dilation results as the background, and identifying the inner portion of
the dilation results as the object.
[0010] An initial identification of object and background portions of the
image may be provided as input by a user. In an alternative embodiment, a user
may identify only the object portion, and the background portion may be
automatically determined. This automatic determination of the background portion
may be performed by using an identified object portion as a seed for performing a
region growing operation, performing a dilation operation on the result of the region
growing operation, and identifying at least one point resulting from the dilation as a
background point.
[0011] In an advantageous embodiment, the segmentation steps are
performed using a graph cut technique.
[0012] These and other advantages of the invention will be apparent to
those of ordinary skill in the art by reference to the following detailed description
and the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] Fig. 1 shows an example of image segmentation;
[0014] Fig. 2 is a high level block diagram of a computer which may be
used to implement the present invention;
[0015] Fig. 3 is a flowchart showing the steps performed in accordance
with an embodiment of the invention;

[0016] Fig. 4 graphically illustrates multilevel segmentation in accordance
with an embodiment of the invention; and
[0017] Fig. 5 illustrates a particular embodiment of the invention which
applies the inventive segmentation technique to the problem of segmentation of the
left atrium in an image volume representing a heart.
DETAILED DESCRIPTION
[0018] The following description describes the present invention in terms
of the processing steps required to implement an embodiment of the invention.
These steps may be performed by an appropriately programmed computer, the
configuration of which is well known in the art. An appropriate computer may be
implemented, for example, using well known computer processor, memory units,
storage devices, computer software, and other components. A high level block
diagram of such a computer is shown in Fig. 2. Computer 202 contains a
processor 204 which controls the overall operation of computer 202 by executing
computer program instructions which define such operation. The computer
program instructions may be stored in a storage device 212 (e.g., magnetic disk,
optical disk, or any other computer readable medium) and loaded into memory 210
when execution of the computer program instructions is desired. Memory 210 may
also be used to store data used during the various steps of the method. Computer
202 also includes one or more interfaces 206 for communicating with other devices
(e.g., locally or via a network). Computer 202 also includes input/output 208 which
represents devices which allow for user interaction with the computer 202 (e.g.,
display, keyboard, mouse, speakers, buttons, etc). One skilled in the art will
recognize that an implementation of an actual computer will contain other
components as well, and that Fig. 2 is a high level representation of some of the
components of such a computer for illustrative purposes. In addition, one skilled in
the art will recognize that the processing steps described herein may also be
implemented using dedicated hardware, the circuitry of which is configured
specifically for implementing such processing steps. Alternatively, the processing
steps may be implemented using various combinations of hardware and software.

Further, in various implementations, the functions described herein may be
performed on a dedicated apparatus, or the functions may be part of a device that
performs other functions as well.
[0019] The present invention provides a multilevel banded graph cut
method for fast image segmentation. In general, the technique according to the
present invention performs segmentation at various resolution levels in order to
identify boundaries between an object and background, and propagates the
segmentation solution from lower levels to higher levels. Advantageously,
segmentations after the first level segmentation are only performed on a portion of
the image. More particularly, segmentations after the first level are performed on
the portion of the image identified by the boundary of the prior level segmentation.
By performing the higher resolution segmentations in only that portion of the
image that needs to be refined (e.g., the boundary between the object and the
background), significantly fewer computing resources are used as compared to the
prior art approaches. This multilevel banded approach makes it possible to
achieve high quality segmentation results on large data sets with faster speed and
less memory consumption, thus allowing it to be used in a wider range of
applications where high performance segmentation of large image data sets is
required.
[0020] The flowchart of Fig. 3 illustrates the steps performed in
accordance with an embodiment of the invention. Starting with a full resolution
image, the first step 302 is to generate a reduced resolution image. The reduced
resolution image is then segmented in step 304 to identify a boundary between an
object and the background of the image. Next, in step 306, a portion of an
increased resolution image is generated. The portion that is generated is based
upon the boundary identified in step 304. Thus, in step 306, the entire higher
resolution image is not needed. Only a portion of the higher resolution image, as
identified by the previously identified boundary, is needed. Then, in step 308, the
generated portion of the increased resolution image is segmented in order to
generate a refined boundary in the increased resolution image. As represented by
arrow 310, steps 306 and 308 may be iterated, each time performing segmentation

on a higher resolution image, until a boundary having a desired resolution is
generated. Thus, by performing multilevel segmentation as shown in Fig. 3,
segmentation is performed in stages, whereby the higher resolution segmentations
are only performed on a portion of the image, thus reducing memory and
computational requirements.
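The loop of steps 302-310 can be sketched as follows. This is a minimal 2D illustration only: a simple intensity threshold stands in for the graph cut segmentation, decimation by 2 stands in for the coarsening operator, and the band is taken as the pixels adjacent to the current boundary. All of these are placeholder assumptions, not the graph construction described later.

```python
import numpy as np

def coarsen(image):
    """Reduce resolution by decimating every other row and column."""
    return image[::2, ::2]

def upsample(mask):
    """Project a coarse array onto the next finer grid (each cell -> 2x2 block)."""
    return np.kron(mask, np.ones((2, 2), dtype=mask.dtype))

def segment_stub(image, band=None):
    """Placeholder segmentation: threshold at the global mean.
    When a band mask is given, labels are only produced inside the band."""
    labels = (image > image.mean()).astype(np.uint8)
    if band is not None:
        labels *= band
    return labels

def boundary_band(labels):
    """Mark pixels whose 4-neighborhood contains both labels (the boundary)."""
    band = np.zeros(labels.shape, dtype=bool)
    v = labels[:-1, :] != labels[1:, :]   # vertical label changes
    h = labels[:, :-1] != labels[:, 1:]   # horizontal label changes
    band[:-1, :] |= v; band[1:, :] |= v
    band[:, :-1] |= h; band[:, 1:] |= h
    return band.astype(np.uint8)

def multilevel_segment(image, levels=2):
    # Step 302/304: build the pyramid and fully segment only the coarsest image.
    pyramid = [image]
    for _ in range(levels):
        pyramid.append(coarsen(pyramid[-1]))
    labels = segment_stub(pyramid[-1])
    # Steps 306/308, iterated (arrow 310): refine only inside the band
    # around the previously found boundary, at each finer level.
    for lvl in range(levels - 1, -1, -1):
        band = upsample(boundary_band(labels))
        refined = segment_stub(pyramid[lvl], band=band)
        # Outside the band, keep the labels projected from the coarser level.
        labels = np.where(band > 0, refined, upsample(labels))
    return labels
```

Note that the full-image segmentation happens only once, on the coarsest level; every finer level touches only the band, which is the source of the memory and time savings.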
[0021] Further details of an embodiment of the invention will now be
described in conjunction with Fig. 4, which graphically illustrates multilevel
segmentation in accordance with an embodiment of the invention. In one
embodiment of the invention, the segmentation performed may be segmentation in
accordance with the graph cuts algorithm described in Y. Boykov and M. Jolly,
Interactive Graph Cuts for Optimal Boundary & Region Segmentation of Objects in
N-D Images, Proceedings of International Conference on Computer Vision, Vol. 1,
July 2001, Vancouver, Canada, pp 105-112, which is hereby incorporated herein
by reference. That algorithm will be described briefly as follows. An N-
dimensional (N-D) image can be specified by a pair (P, I) consisting of a finite
discrete set P of N-D points (pixels in 2D and voxels in 3D), and a mapping I that
maps each point p in P to a value I(p) in some arbitrary value space. From a given
image, a weighted undirected graph G = (V, E, W) can be constructed that consists
of nodes (vertices) v ∈ V, edges e ∈ E, and nonnegative weights (costs) w ∈ W.
There are two special nodes in V: a source node S specifying the "object" terminal
and a sink node T specifying the "background" terminal. The remaining nodes in V
form a subset U = V \ {S, T}, where each node u ∈ U uniquely identifies an image
point in P. The set of edges E consists of two types of undirected edges: n-links
(neighborhood links) and t-links (terminal links). Each image node u ∈ U has two t-
links {u, S} and {u, T} directly connecting it to the terminals S and T, respectively.
The n-links are completely determined by the neighborhood system used
(e.g., 4- or 8-neighborhood system in 2D and 6-, 18-, or 26-neighborhood system
in 3D). It is noted that larger neighborhood systems typically yield better image
segmentation results, but at the expense of both increased computation time and
memory consumption.
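As an illustration of this construction, the following sketch enumerates the edge set E for a small 2D image with a 4-neighborhood system; the node naming and tuple representation are illustrative choices only.

```python
import itertools

def build_graph_edges(height, width):
    """Enumerate the undirected edges of G = (V, E, W) for a
    height x width image: two t-links per pixel node (to the source S
    and sink T) plus 4-neighborhood n-links between adjacent pixels."""
    edges = []
    for y, x in itertools.product(range(height), range(width)):
        u = (y, x)                          # node u in U identifies image point p
        edges.append((u, 'S'))              # t-link to the "object" terminal
        edges.append((u, 'T'))              # t-link to the "background" terminal
        if x + 1 < width:
            edges.append((u, (y, x + 1)))   # horizontal n-link
        if y + 1 < height:
            edges.append((u, (y + 1, x)))   # vertical n-link
    return edges
```

For a 3x3 image this yields 18 t-links and 12 n-links; an 8-neighborhood in 2D or a 26-neighborhood in 3D would add diagonal n-links, at the memory cost noted above.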

[0022] The segmentation of an image into object and background, known
also as hard segmentation, can be expressed as a binary vector
A = (A_1, ..., A_u, ..., A_|U|), where the element A_u gives the binary segmentation label of
an image point identified by the node u. A segmentation A can be uniquely
determined by a cut C on the graph G, where the cut C is defined as a subset of
edges in E such that the terminals become separated on the induced graph G(C) =
(V, E \ C). Hence the image segmentation problem can be solved as a minimum
graph cut problem on the following energy function:

C* = argmin_{C ∈ F} Σ_{e ∈ C} w_e,    (1)

where e_{i,j} denotes the edge e spanning between the nodes v_i, v_j ∈ V, w_e denotes
the weight assigned to the edge e, and F denotes the set of all feasible cuts.
[0023] Assume that O and B denote the subsets of image nodes marked
as object and background seeds by the user, respectively. Then the weight w for
the graph is given by

w_{u_i, u_j} = exp(−(I_i − I_j)² / (2σ²)) / dist(p_i, p_j)   for n-links {u_i, u_j},
w_{u_i, S} = MAX if p_i ∈ O, 0 if p_i ∈ B,
w_{u_i, T} = MAX if p_i ∈ B, 0 if p_i ∈ O,    (2)

where dist(p_i, p_j) is the Euclidean distance between image points p_i and p_j
identified by nodes u_i and u_j respectively, I_i = I(p_i), I_j = I(p_j), and MAX is a
very large positive constant. This energy penalizes cuts that pass through
homogeneous regions and encourages cuts that pass through places where the
intensity discontinuity is large. The constant parameter σ can be chosen
empirically or estimated as a standard deviation over an image sample.
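These weights can be sketched as follows, assuming the Gaussian n-link form that the surrounding text describes (similar intensities rewarded, discontinuities penalized) and hard MAX/0 t-link constraints tying seeds to their terminals; σ and MAX are as defined above, and the function names are illustrative.

```python
import math

# the "very large positive constant" binding seeds to their terminals
MAX = 1e9

def n_link_weight(I_i, I_j, p_i, p_j, sigma):
    """n-link weight: near 1 for similar intensities, near 0 across a
    strong intensity discontinuity, scaled by inverse point distance."""
    return math.exp(-(I_i - I_j) ** 2 / (2 * sigma ** 2)) / math.dist(p_i, p_j)

def t_link_weight(p, terminal, O, B):
    """t-link weight: a seed is tied to its own terminal with MAX so
    no minimum cut can separate a seed from that terminal."""
    if terminal == 'S':                    # "object" terminal
        return MAX if p in O else 0.0
    return MAX if p in B else 0.0          # "background" terminal
```

A cut through a homogeneous region must sever many heavy n-links, so the minimum cut naturally settles on intensity edges.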
[0024] Further details of the multilevel graph cut technique in accordance
with an embodiment of the present invention will now be described in conjunction

with Fig. 4. As described above in conjunction with Fig. 3, a method in accordance
with the principles of the invention includes three stages: coarsening (i.e., reducing
image resolution), initial segmentation, and uncoarsening (i.e., increasing image
resolution).
[0025] During the coarsening stage, a sequence of smaller images
(I_1, ..., I_K) is constructed from the original image I_0 such that the size
constraint M_n^k ≤ M_n^{k-1} is satisfied for each dimension n = 1, ..., N and each level
k = 1, ..., K, where M_n represents one of the dimensions of the image or
volume. Note that this constraint does not require the size in each dimension to be
reduced simultaneously. This image coarsening is represented in Fig. 4, which
shows image I_{K-1} 402 being coarsened to image I_K 404. (The image 402 may be
the result of coarsening a prior image (not shown) at level K-2.) Thus, image
404 is a reduced resolution image as compared to image 402. In addition to
image coarsening, the locations of the object seeds 406 and background seeds 408,
identified by O and B respectively, are also coarsened. The object and
background seeds are provided as user input, for example by a user identifying the
seed points with mouse clicks or mouse drags. The seed coarsening operator must
satisfy the causality constraint so that the discrete topology of both the object and
background seed regions is preserved throughout all levels, i.e., the number of
connected object and background seed regions must be preserved. Therefore,
different coarsening operators should be chosen for coarsening the image and the
seeds. In an advantageous embodiment, the image is coarsened using either a
weighted mean filter followed by a downsampling by 2 operation, or a downsampling
by 2 operation alone. An ad-hoc seed location coarsening operator is chosen
such that the causality constraint is satisfied.
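The two image coarsening operators mentioned above can be sketched as follows, together with one plausible ad-hoc seed coarsening operator that simply maps each seed point to its coarse cell; the specification does not fix a particular seed operator, so this choice is a hypothetical illustration.

```python
import numpy as np

def coarsen_decimate(img):
    """Downsampling by 2: keep every other sample in each dimension."""
    return img[::2, ::2]

def coarsen_mean(img):
    """Weighted mean filter followed by downsampling by 2: each coarse
    pixel is the (uniform) mean of its 2x2 fine block."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    f = img[:h, :w].astype(float)
    return (f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2]) / 4.0

def coarsen_seeds(seeds):
    """Map each seed point to its coarse cell; merging duplicates keeps
    each connected seed region connected, as the causality constraint requires."""
    return {(y // 2, x // 2) for (y, x) in seeds}
```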
[0026] The second stage is the initial segmentation of the coarsest image
I_K 404. First, a coarse graph G_K = (V_K, E_K, W_K) 410 is constructed for I_K 404 as
described above. Next, the minimum cut C_K 412 of the coarse graph G_K 410 is
determined, also as described above. This minimum cut 412 yields a
segmentation of the image I_K.

[0027] During the uncoarsening stage, a binary boundary image J_k is
generated to represent all the image points that are identified by the nodes in the
cut C_k, k ∈ [1, ..., K], and these identified image points are projected onto a higher
resolution boundary image J_{k-1} at level k-1 using an image uncoarsening
operator. It is noted that the uncoarsening operator need not be the dual operator of the
image coarsening operator used in the first stage, due to the binary nature of the
boundary image. In an advantageous embodiment, the uncoarsening operator is
defined as follows:
[0028] J_{k-1}(p) = J_k(α(p)), (3)
where p = (p_1, p_2, ..., p_N) is an N-D point and α(p) = (α_1(p_1), α_2(p_2), ..., α_N(p_N)) is
the reduction mapping used in the coarsening phase to reduce the dimension size
under the size constraint.
[0029] The resulting boundary image J_{k-1} contains a narrow band that
bounds the candidate boundaries of objects to be extracted from I_{k-1}. The width of
the band may be controlled by an optional dilation of the band by a distance d ≥ 0.
The dilation distance parameter plays an important role. If d is small, the algorithm
may not be able to recover the full details of objects with high shape complexity or
high curvature. On the other hand, if d is large, the computational benefits of
banded graph cuts will be reduced, and the wider band may also introduce potential
outliers far away from the desired object boundaries. In an advantageous
embodiment, choosing d = 1 is a good compromise between accuracy and
performance for most real-world 2D and 3D images.
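The projection of equation (3) and the optional band dilation can be sketched as below, assuming a halving reduction mapping α_n(p_n) = p_n // 2 and a square (Chebyshev) dilation neighborhood; both are illustrative assumptions rather than the specification's only options.

```python
import numpy as np

def uncoarsen_boundary(J_k, fine_shape, d=1):
    """Equation (3): J_{k-1}(p) = J_k(alpha(p)), with alpha halving each
    coordinate, followed by an optional dilation of the band by distance d."""
    h, w = fine_shape
    J = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            J[y, x] = J_k[min(y // 2, J_k.shape[0] - 1),
                          min(x // 2, J_k.shape[1] - 1)]
    for _ in range(d):
        # one dilation step: a pixel joins the band if any 8-neighbor is in it
        padded = np.pad(J, 1)
        J = np.max([padded[dy:dy + h, dx:dx + w]
                    for dy in range(3) for dx in range(3)], axis=0)
    return J
```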
[0030] The graph G_{k-1} 412 is then constructed using only the nodes inside
the band from the boundary image J_{k-1}. The band's outer layer is used as the new
background seeds B and the band's inner layer is used as the new object seeds
O. In the degenerate case, where the band contains no inner layer due to either
segmenting small objects or using a large band width, the coarsened object seeds at
level k-1 are used as the object seeds O. The coarsened object seeds are
guaranteed to lie inside the objects to be segmented. Next, weights are assigned to all
edges according to equation (2).

[0031] Once the graph G_{k-1} 412 is constructed, the minimum cut C_{k-1} 414
of G_{k-1} 412 is solved. The same uncoarsening procedure may be repeated
recursively at the next level until the minimum cut C_0 is solved on the banded
graph G_0, yielding the final segmentation result. It is noted that all graphs at levels
k = 0, ..., K-1 have a banded graph structure, except the graph G_K, which is
significantly smaller than the full grid graph constructed for the image at the same
level.
[0032] One particular embodiment of the invention will now be described in
conjunction with Fig. 5, which applies the above described segmentation technique
to the problem of segmentation of the left atrium in an image volume representing a
heart. Fig. 5 represents the processing steps, inputs and outputs according to this
embodiment. It should be understood that Fig. 5, along with its description, is a
high level description of one embodiment of the invention. The description will
proceed at a high level, with an understanding that the details of the various steps
are as described above. Thus, Fig. 5 is meant to provide an overview description
showing how the technique described in detail above may be applied in a medical
imaging implementation. Fig. 5 shows an image 502 of a heart which may be
generated, for example, by a CT scan. The CT scan image volume, along with an
identified object point, as represented by arrow 504, are provided to a coarsening
step 506. It is noted that this embodiment allows a user to identify a single point as
the object, and does not require the user to identify any background points. The
single object point identified by the user is represented as 514. The determination
of background points will be described below.
[0033] In step 506, the image is coarsened (i.e., its resolution is reduced)
as described above. Although a user only needs to identify a single object point,
the segmentation method requires at least one identified background point. In this
embodiment, therefore, there is a need to identify points slightly outside of the left
atrium. The present invention determines these background points automatically
using a combination of region growing and dilation as described below.

[0034] The reduced image volume, along with the object point(s), as
represented by arrow 508, are passed to the region growing step 510. In this step,
a region growing operation is applied from the object point identified by the user. In
an advantageous embodiment, this region growing is a seeded region grow in
which new voxels are added according to priority. Seeded region growing is
described in R. Adams and L. Bischof, Seeded Region Growing, IEEE Trans.
Pattern Anal. Mach. Intell., 16(6):641-647, June 1994, which is incorporated herein
by reference. In one embodiment, a radius of 8 cm may be used during the region
growing step. The resulting image is represented as 512. The boundary of the
region growing selection (as defined by a mask), as represented by arrow 516, is
passed to a dilation step 518, where the boundary is dilated by a fixed length (e.g.,
1 or 2 voxels). The dilation ensures that the boundary of the region growing
selection is outside the left atrium. Points on the boundary are then identified as
the background points. The goal is to mark as background as many neighboring
features as possible (such as the right atrium, the ventricles, and the aorta), which
improves the results of the graph cuts.
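The automatic background determination of steps 510 and 518 can be sketched as follows. A plain BFS flood fill with an intensity tolerance stands in for the priority-ordered seeded region growing of Adams and Bischof, and a 4-connected dilation by d voxels produces the shell that is marked as background; the tolerance parameter and the 2D setting are illustrative simplifications.

```python
from collections import deque
import numpy as np

def grow_region(img, seed, tol):
    """BFS region grow from the user's object point: accept 4-neighbors
    whose intensity is within tol of the seed intensity (a simplified
    stand-in for priority-ordered seeded region growing)."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - float(img[seed])) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

def background_points(img, seed, tol, d=1):
    """Dilate the grown region by d voxels (step 518); the newly added
    shell lies just outside the object and is marked as background."""
    mask = grow_region(img, seed, tol)
    dilated = mask.copy()
    for _ in range(d):
        nxt = dilated.copy()
        nxt[1:, :] |= dilated[:-1, :]
        nxt[:-1, :] |= dilated[1:, :]
        nxt[:, 1:] |= dilated[:, :-1]
        nxt[:, :-1] |= dilated[:, 1:]
        dilated = nxt
    return [tuple(p) for p in zip(*np.nonzero(dilated & ~mask))]
```

The returned shell points play the role of the background seeds B for the subsequent low resolution graph cut.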
[0035] The boundary is passed to the next step, as represented by arrow
520 (where inner/outer boundary represents the dilation results). The graphcut (step
522) segments out the left atrium in low resolution, using the segmentation
techniques described above. The object point selected by the user, along with its
neighbors, is marked as the object. The dilated boundary from step 518 provides
the background points. Since this segmentation operation is performed on a low
resolution image, the segmentation does not require a large amount of
computational resources or memory. The results from the graphcut step 522
provide a rough estimate of the segmentation, which is represented by arrow 524.
This rough segmentation is illustrated as 544.
[0036] In step 526, the rough segmentation 544 is scaled up to the original
image resolution. In step 530, the scaled up rough segmentation received from
step 526 is dilated to generate a banded region (shown in Fig. 5 as 528) around
the left atrium. The inner boundary of the band is inside the left atrium, and the
outer boundary of the band is outside the left atrium. The inner boundary points are

used as the object points for the next segmentation step, and the outer boundary
points are used as the background points for the next segmentation step.
[0037] In step 540, the points in the banded region 528 are segmented in
graphcut step 540. Since the domain of the graph is small and narrow, the
graphcut step 540 can be performed quickly and without a large memory
requirement. The resulting segmentation is shown as 542.
[0038] It is noted that in an alternate embodiment, the order of steps 526
and 530 could be switched, so that the rough segmentation 544 is first dilated in
low resolution to form a band, and then scaled up to the higher resolution.
[0039] Additional control of the segmentation is possible if the user marks
additional points as the object or the background. By providing these additional
points to the region growing step 510 and the low resolution graphcut step 522, the
user's input can be integrated into the segmentation process. In another alternate
embodiment in the case of images with homogeneous intensities in the chambers,
the final banded graphcut step 540 can be replaced with an efficient thresholding
approach which could generate similar segmentation accuracy.
[0040] The foregoing Detailed Description is to be understood as being in
every respect illustrative and exemplary, but not restrictive, and the scope of the
invention disclosed herein is not to be determined from the Detailed Description,
but rather from the claims as interpreted according to the full breadth permitted by
the patent laws. It is to be understood that the embodiments shown and described
herein are only illustrative of the principles of the present invention and that various
modifications may be implemented by those skilled in the art without departing from
the scope and spirit of the invention. Those skilled in the art could implement
various other feature combinations without departing from the scope and spirit of
the invention. For example, the techniques of the present invention could be
extended to segment 4 (or more) dimensional images as well.

CLAIMS:
1. A method for processing a full resolution image comprising:
a) generating a reduced resolution image from said full resolution image;
b) segmenting said reduced resolution image to identify a boundary
between an object and a background; and
c) segmenting a portion of an increased resolution image to generate a
refined boundary, said portion based upon said prior identified boundary.
2. The method of claim 1 wherein said increased resolution image is said
full resolution image.
3. The method of claim 1 further comprising the steps of:
receiving an identification of at least a portion of said object as user input;
and
automatically determining at least a portion of said background.
4. The method of claim 3 wherein said step of automatically determining
at least a portion of said background further comprises:
performing a region growing operation using said at least a portion of said
object as a seed;
performing a dilation operation on a result of said region growing
operation; and
identifying at least one point resulting from said dilation operation as
background.
5. The method of claim 1 further comprising:
iteratively repeating step c using images having successively increasing
resolutions.
6. The method of claim 1 wherein said step b comprises:
generating a graph of said reduced resolution image; and
performing a graph cut on said graph.

7. The method of claim 1 wherein said step c comprises:
generating a graph of said portion of an increased resolution image; and
performing a graph cut on said graph.
8. The method of claim 1 further comprising the step of identifying said
portion by projecting said prior identified boundary onto said increased resolution
image.
9. The method of claim 8 wherein said step of identifying said portion
further comprises:
performing a dilation operation on said projected prior identified boundary.
10. The method of claim 9 further comprising the steps of:
identifying an outer portion of said dilated projected prior identified
boundary as background; and
identifying an inner portion of said dilated projected prior identified
boundary as object.
11. An apparatus for processing a full resolution image comprising:
a) means for generating a reduced resolution image from said full
resolution image;
b) means for segmenting said reduced resolution image to identify a
boundary between an object and a background; and
c) means for segmenting a portion of an increased resolution image to
generate a refined boundary, said portion based upon said prior identified
boundary.
12. The apparatus of claim 11 wherein said increased resolution image is
said full resolution image.
13. The apparatus of claim 11 further comprising:
means for receiving an identification of at least a portion of said object as
user input; and

means for automatically determining at least a portion of said background.
14. The apparatus of claim 13 wherein said means for automatically
determining at least a portion of said background further comprises:
means for performing a region growing operation using said at least a
portion of said object as a seed;
means for performing a dilation operation on a result of said region
growing operation; and
means for identifying at least one point resulting from said dilation
operation as background.
15. The apparatus of claim 11 further comprising:
means for iteratively repeating step c using images having successively
increasing resolutions.
16. The apparatus of claim 11 wherein said means b comprises:
means for generating a graph of said reduced resolution image; and
means for performing a graph cut on said graph.
17. The apparatus of claim 11 wherein said means c comprises:
means for generating a graph of said portion of an increased resolution
image; and
means for performing a graph cut on said graph.
18. The apparatus of claim 11 further comprising means for identifying
said portion by projecting said prior identified boundary onto said increased
resolution image.
19. The apparatus of claim 18 wherein said means for identifying said
portion further comprises:
means for performing a dilation operation on said projected prior identified
boundary.

20. The apparatus of claim 19 further comprising:
means for identifying an outer portion of said dilated projected prior
identified boundary as background; and
means for identifying an inner portion of said dilated projected prior
identified boundary as object.
21. A method for segmenting an image I_0 based on object seeds O and
background seeds B comprising the steps of:
a) generating a plurality (K) of reduced resolution images (I_1, ..., I_K);
b) generating a graph G_K for the lowest resolution image I_K;
c) calculating a minimum cut C_K of said graph G_K based on O and B;
d) generating a binary boundary image J_K to represent the image points
identified by nodes in said minimum cut C_K;
e) projecting said image points onto a higher resolution boundary image
J_{K-1};
f) generating a graph G_{K-1} for said higher resolution boundary image
J_{K-1}; and
g) calculating a minimum cut C_{K-1} for said graph G_{K-1}.
22. The method of claim 21 further comprising the step of:
recursively repeating steps d-g until a minimum cut C_0 is calculated on a
graph G_0 to generate a segmentation of image I_0.
23. The method of claim 21 further comprising the step of:
applying a dilation operation (where distance d ≥ 0) on said higher
resolution boundary image J_{k-1}.
24. The method of claim 23 wherein d = 1.
25. The method of claim 21 wherein said step of projecting said image
points onto a higher resolution boundary image J_{k-1} is performed according to:

J_{k-1}(p) = J_k(α(p)).
26. The method of claim 21 wherein said step of generating a graph G_{k-1}
for said higher resolution boundary image J_{k-1} uses the outer layer of said
boundary image as background seeds (B) and the inner layer of said boundary image
as object seeds (O).
Dated this 3rd day of JANUARY 2006.





Patent Number 239972
Indian Patent Application Number 9/KOL/2006
PG Journal Number 16/2010
Publication Date 16-Apr-2010
Grant Date 16-Apr-2010
Date of Filing 03-Jan-2006
Name of Patentee SIEMENS CORPORATE RESEARCH, INC.
Applicant Address 755 COLLEGE ROAD EAST, PRINCETON, NJ
Inventors:
# Inventor's Name Inventor's Address
1 SUN , YIYONG 16 VOSCEK COURT LAWRENCEVILLE, NJ 08648
2 GRADY, LEO 5 KNOLL DRIVE YARDLEY, PA 19067
3 XU, CHENYANG 3 DIANA COURT ALLENTOWN, NJ 08501
4 LOMBAERT, HERVE 3350 A. MARECHAL AP. 316 MONTREAL, QC H3T 1M9
PCT International Classification Number G06K 9/34
PCT International Application Number N/A
PCT International Filing date
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 60/644,825 2005-01-18 U.S.A.
2 11/313,102 2005-12-20 U.S.A.