Title of Invention

A METHOD FOR TRACKING OF PLANAR MOVEMENT OF MULTIPLE OBJECTS

ABSTRACT
A novel method is disclosed for tracking the motion of multiple objects using special marker geometries. The marker geometry is such that the segmented marker region can be automatically recognized with less computational effort than conventional pattern matching approaches and with greater robustness. The marker geometry allows determination of the orientation from a single marker, thereby eliminating the need for an additional marker merely for determining orientation. The method is made more reliable by the use of an identification tag as part of the marker, which positively identifies a marker from the rest of the markers and overcomes the difficulty encountered by many automatic tracking systems in re-establishing track after it is lost.
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
COMPLETE SPECIFICATION (See section 10)
TITLE OF THE INVENTION
"A Method for Tracking of Planar Movement of Multiple Objects"
(a) INDIAN INSTITUTE OF TECHNOLOGY Bombay (b) having administrative office at Powai, Mumbai 400076, State of Maharashtra, India and (c) an autonomous educational Institute, and established in India under the Institutes of Technology Act 1961
The following specification particularly describes the nature of the invention and the manner in which it is to be performed


FIELD OF THE INVENTION
The present invention relates to a novel method for tracking planar movement of multiple objects from a sequence of digital images using markers.
BACKGROUND OF INVENTION
Tracking of objects from a sequence of images requires automatic identification of the objects of interest. The approaches used to accomplish this include recognition based on the radiation characteristics and reflective properties of the object itself or of a tag or marker attached to the object. Use of markers, when permissible, assists tracking by facilitating automatic recognition of the objects. Marker based tracking systems are already in use for motion analysis of switchgears and other high speed mechanisms, crash testing of automobiles, human and animal gait studies, as an aid to improving the performance of sportsmen and for assisting sports commentators.
Description of Prior Art
US Patent No. 5,008,804, entitled Robotic Television Camera Dolly System, uses checkerboard markers painted or pasted on the floor to position and align a dolly to a specified marker. The camera dollies are conceived as having sensors at the bottom to sense a transition from dark to light. The dolly is operated by first moving in open loop (dead-reckoning motion) to bring the sensors onto the target marker. The patent teaches a method for centering the dolly on the marker and aligning it along the marker. No camera image is used, and four sensors are used to locate the dolly with respect to the marker. The marker is fixed and the system will not work if the marker were moving on an object.
U.S. Patent No. 5,617,335 entitled System for and Method of Recognizing and Tracking Target Mark uses a marker design with a flat white triangular shape fixed above a black circle. It teaches a histogram based approach to determine the three-dimensional coordinates of an object with respect to a tracking camera mounted on a robotic arm. The system suffers from the following shortcomings:
• The marker, being a three-dimensional object, cannot be attached to objects which are moving very close to other objects due to the problem of interference, and will be unsuitable for use on most planar mechanisms.
• The system does not process multiple markers and therefore does not track multiple bodies.
• The patent does not disclose how track can be re-established if it gets lost due to occlusion during a move. Occlusion occurs when one of the markers on an object is not seen fully in the imaging system due to another object coming between the marker and the imaging system.

U.S. Patent No. 5,731,785 entitled System and Method for Locating Objects Including an Inhibiting Feature is aimed at tracking of objects by "an electronic code generating system or device carried by the object in a portable housing". The system treats these active markers or beacons as point sources of energy and is based on triangulation of each unique signal to locate and identify an object. The system suffers from the following drawbacks:
• Each marker or beacon needs a source of power and cannot be attached to objects on the scale of most mechanisms in machines due to space limitations.
• The system is based on use of GPS and is not suitable for tracking small motions such as in machine parts. Also for this reason, the accuracy of motion detection is low.
• The system is not capable of determining the orientation of an object but only its location.
US Patent No. 6,079,862, entitled Automatic Tracking Lighting Equipment, tracks a marker attached to an object based on a video signal. It consists of a tracking apparatus comprising means for picking up an image in a specified area, means for detecting a marker attached to an object to be tracked based on a first video signal from said image pickup means, etc. The patent teaches control of floodlights based on the location of the marker image in the video camera. The aim is to automatically control the lighting system so that the object being tracked remains well lit. In a preferred embodiment, the system uses an infrared source of radiation and an infrared-sensitive camera to accomplish the task. The system suffers from the following shortcoming:
• The system only tracks one marker and is not suitable for multiple object tracking.
U.S. Patent No. 6,567,116 entitled Multiple Object Tracking System, issued to Aman et al. (May 2003), teaches the use of two types of cameras - a tracking type with a special filter that only captures objects printed with special inks, and a filming type that does not use any special filter and captures the visual image. The tracking camera uses narrow-band radiation which is not in the spectrum of visible light. Identification markers made using special ink are placed on the moving objects and are seen by the tracking type of cameras. A multitude of such pairs of cameras and special radiation sources are attached overhead of the region of interest. The system suffers from the following drawbacks:
• The patent does not teach the actual process for identifying a marker or the actual method for determining the position and orientation of the various markers.
• The system requires special energy sources and filters to isolate the markers in the images of the tracking type cameras, which increases the cost of the system.
One of the oldest commercially available motion measurement systems, called SELSPOT, used a number of infrared LEDs mounted on a moving object such as a moving person. Multiple cameras are used to identify the trajectory points of each of the LEDs in the image. In this approach, the LEDs can be turned on in turn so that only one LED emits radiation at a time, and the correspondence of points in the different cameras can be established easily. However, such systems suffer from the following drawbacks:
• The markers require a power source for use and may need wires from the power source that may interfere with the motion.
• The markers cannot be used in most machine motion studies, where the parts are constrained to move in close vicinity due to limitation of space.
• If the LEDs are powered intermittently, then different points of interests are located at different instants of time, causing some uncertainty in the relative locations and orientations of different moving bodies.
More recent commercial systems have been developed by VICON Motion Systems, USA and Vannier Photoelec, France. The Autotrack® ver. 3 tracking software from Vannier Photoelec has been used for crash testing of automobiles. It most commonly uses checkerboard markers and other symmetric markers. Only the location of the markers (i.e., the center point of the markers) is utilized by this system. The system suffers from the following drawbacks:
• Orientation of the markers is not determined and a minimum of two markers are necessary to be able to determine the orientation of an arbitrarily moving object.
• When the markers are occluded, the system may lose automatic track and manual intervention is required.
• The system is also prone to losing track due to variation in light levels or noise in the image.
Many applications require tracking systems for automatically estimating the motion of multiple moving objects that may enter and exit the scene or get partly or fully obscured in some part of the motion. Given that markers provide standard image patterns, there is a long felt need to utilize markers to achieve computationally efficient and robust tracking with estimation of orientation and positive object recognition, but without the requirement of special energy sources and filters for radiation in a non-visible frequency band.
SUMMARY OF INVENTION
The main object of the invention is to determine position and orientation of moving markers using special marker geometries, coupled with the use of tags to positively identify markers, in order to estimate the motion of objects undergoing planar motion.
Another object of the invention is to eliminate the need to have additional markers merely to be able to compute the orientation of an object.

Yet another object of the invention is to enable a multitude of markers in a scene without mixing up of markers.
Yet another object of the invention is to provide a robust method to automatically establish track or re-establish it after the track is lost.
Thus, in accordance with this invention, the method uses markers, which are pasted, painted or attached in some other manner, on one or more objects whose motion, in a plane or parallel planes, is to be estimated. An image acquisition system placed with the image sensor axis near normal to the planes of motion captures the motion of the moving objects in digital form and passes the image information to a data processing system. An appropriate system processes the image data to separate the regions consisting of the markers from the rest of the scene by computing properties of the marker geometry, based on which the marker position and orientation are determined in the image frame of reference, which correspond to the position and orientation of the moving object in question in the world coordinate frame of reference.
As described above, the totality of a marker comprises a geometrical marker shape that has less than two axes of symmetry and an identification tag in a background. The marker shape has an interior of different colour, reflectivity or luminosity as compared to its background, such that a thresholding of the image will yield different binary values for the pixels in the marker and those in the background. The region of the image consisting of the pixels of the marker geometry is called the marker region.
Various properties can be computed for the marker region, including a number of moments which remain invariant for a given shape with respect to translation, rotation and scaling of the shape, called invariant moments. In general, such moment values of any shape do not remain strictly constant in a digital image due to the presence of discrete pixels of finite dimensions. According to this invention, the marker geometry used is such that the invariant moments computed for the marker regions show little variation in their values for changes in orientation and scaling, or have invariant moments well separated from similar invariant moments of other regions present in the image. Such properties of the marker region allow computationally efficient methods to automatically recognize it in spite of variations in orientation and size of the markers.
According to this invention, the marker geometry used is such that once it is detected, its position and orientation can be determined from the locations of all the pixels belonging to the marker region. For determining the orientation without any ambiguity, the marker geometry is chosen to have less than two axes of symmetry. Having determined the position and orientation of the marker, this invention teaches locating a region containing the identification tag, which allows for positive identification of a marker.

DETAILED DESCRIPTION
BRIEF DESCRIPTION OF THE DRAWINGS
Fig 1 shows shapes, which are evaluated for desirable properties as markers. Fig 1(a) shows an asymmetric "L"-shaped marker geometry (1) with the two limbs of constant width and lengths in the proportion of two to one, with the longer limb truncated with a steep sloped line. Fig 1(b) shows an asymmetric "L"-shaped marker geometry (2) with the two limbs of constant widths and lengths in the proportion of two to one, with both the limbs truncated with a sloped line (2). Fig 1(c) shows an asymmetric "L"-shaped marker geometry with two pointing limbs of similar proportions and with outer edges being perpendicular to each other (3). Fig 1(d) shows an asymmetric "L"-shaped marker geometry with two pointing limbs of similar proportions and with one outer and one inner edge perpendicular to each other (4). Fig 1(e) shows an asymmetric right-angle triangular shaped marker geometry with perpendicular sides in the ratio of two to one (5). Fig 1(f) shows a symmetric triangular shaped marker geometry with a single axis of symmetry (6). Fig 1(g) shows a symmetric rectangular shaped marker geometry with two axes of symmetry (7). Fig 1(h) shows a symmetric elliptical shaped marker geometry with a cross inside with two axes of symmetry (8). Fig 1(i) shows a symmetric elliptical shaped marker geometry with a triangular cut out with a single axis of symmetry (9). The major axis of the ellipse is the axis of symmetry in this geometry. Fig 1(j) shows an asymmetric elliptical shaped marker geometry with a quadrant removed (10).
Fig 2 shows the variation in the first four invariant moments for the marker geometries shown in Fig 1 in the form of graphs. Curves labeled (11), (21), (25) and (29) correspond to the shape labeled as (4) in Fig 1. Curves labeled (12), (22), (26) and (30) correspond to the shape labeled as (3) in Fig 1. Curves labeled (13), (23) and (27) correspond to the shape labeled as (1) in Fig 1. Curves labeled (14), (24) and (28) correspond to the shape labeled as (2) in Fig 1. Curve labeled (15) corresponds to the shape labeled as (5) in Fig 1. Curve labeled (16) corresponds to the shape labeled as (6) in Fig 1. Curve labeled (17) corresponds to the shape labeled as (9) in Fig 1. Curve labeled (18) corresponds to the shape labeled as (8) in Fig 1. Curve labeled (19) corresponds to the shape labeled as (10) in Fig 1. Curve labeled (20) corresponds to the shape labeled as (7) in Fig 1.
Fig 3(a) shows one of the embodiments of the marker (1) with a numeric identification tag (33) and the bounding box for the marker (32). The coordinate frame of reference of the image frame is also shown (31). Fig 3(b) shows the outline of the marker geometry (35) and the bounding box for the identification tag (34) separately. Fig 3(c) shows three clearly identifiable points on the marker, namely the concave corner (36), the centroid (37) and the convex corner furthermost from the centroid (38). Fig 3(d) shows the vector (39) joining the centroid (37) to the farthest point from the centroid (38). Fig 3(e) shows the numeric identification tag (40). Fig 3(f) shows the angle (41), representing the orientation of the marker, made in the counter-clockwise direction by the vector (39) with the positive X-axis of the image coordinate frame of reference (31), and the position of the point denoting the marker location (36) with respect to the image coordinate frame of reference (31).
Fig 4 shows different embodiments of the marker geometries and of the identification tags. Fig 4(a) is the same marker geometry shown in Fig 1(a) (1) with numeric identification tag (33) and the bounding box (32) for the marker. Fig. 4(b) is the same marker geometry shown in Fig 1(b) (2) with the identification tags consisting of a number of diamonds (42). Fig 4(c) shows another embodiment of asymmetric marker geometry (44) with a bar-code identification tag (43). Fig 4(d) shows yet another embodiment of asymmetric marker geometry (46) with an alphabetical identification tag (45). Fig 4(e) is the same marker geometry shown in Fig 1(d) (4) with the alphabetic tag in a different orientation (47). Fig 4(f) shows yet another embodiment of asymmetric marker geometry (49) with an alphanumeric identification tag (48).
Figure 5 illustrates an embodiment of the system we propose for multiple object tracking. The means for acquisition of the image is shown in (51). The motion of the mechanism shown above is in plane (52). The line of sight for this means is near normal to the plane of motion of the mechanism. Markers are firmly attached to multiple objects in this mechanism that are to be tracked. The frame (53) is fixed. The driving link for the mechanism is (54). Markers have been affixed on link (55) of the mechanism. Marker (56) is completely visible to the image acquisition means. Marker (57) on link (55) is occluded. The X and Y axes of a world coordinate frame of reference on an immovable link of mechanism (53) are indicated by (58).
Figure 6 illustrates the mechanism shown in figure 5, when viewed near normally. The data processing system (59) takes the digital images of the mechanism as its input. The output (60) from the data processing system (59) is in the form of position in terms of the X and Y co-ordinates and orientation with respect to the X axis for different markers frame by frame.
Figure 7 shows the main steps of the proposed method. The first step is initialization of relevant details related to the marker geometry (100). Then the intensity values of pixels from the first frame are read (101). The threshold is then estimated (102). Using this threshold, the image is segmented (103). Thereafter, identification of markers is done (104). This involves finding the location and orientation of the marker. If there are more frames to process (106), the intensity values of pixels in the next frame are read (107). The steps of estimation of threshold (102), segmentation (103) and identification of markers (104) are then repeated for all subsequent images.
Figure 8 shows the initialization step (100) of figure 7 in more detail. Centralized Hu moments are used as a non-limiting example of a property that is invariant under translation, rotation and scaling. First the marker geometry template is loaded into the data processing system (95). Then the loaded image is segmented (96). The Hu moments are then calculated (97) for the segmented region and stored as the ideal Hu moments.
Figure 9 illustrates step (103) of figure 7 in more detail. The image is read into an array of integer values representing the intensity for each pixel (111). The threshold is obtained from step (102) of figure 7. At the end of one pass through the image frame, all the pixels of a region are assigned a common label distinct from the labels of other regions. Beginning with the topmost row (113), each row is scanned pixel by pixel from left to right. Each pixel is assigned an integer value called 'label'. The procedure for assigning labels is as follows: For the pixel under consideration a comparison is made between the pixel intensity and the threshold limit (114):
1. If the pixel intensity is outside the threshold limit of a marker pixel, it is labeled zero (119).
2. If the intensity is within the threshold limit of a marker pixel, the labels of its left and top neighbours (115) are examined and the label assignment is done based on the labels of these neighbours (116) as described below:
i. If both neighbours have zero labels (124), a new region is created with a new label (125) and the pixel is labeled with this label (118).
ii. If one of the neighbours has a zero label while the other has a non-zero label, the current pixel is added to that region of pixels with the non-zero label (117) and the pixel is assigned the same label (118).
iii. If both have different non-zero labels, the two regions are merged into a single region and the current pixel is added to this region (127). The label for the new region is the label of the top neighbour of the current pixel (118).
The steps (120) and (123) check whether the last column and the last row have been reached, respectively. If there are more columns to process (122), the pixel in the next column is processed. If there are more rows to process (128), the first pixel (121) in the next row is processed. If there are no more rows to process, the method terminates. Whenever a pixel is added to a region, the sums of the row and column numbers of all pixels in the region, as well as a pixel count of the region, are updated (126), wherein,
1. Row number of a pixel is the index of the row in which the pixel belongs, with the topmost row in the frame having the index zero.
2. Column number of a pixel is the index of the column in which the pixel belongs, with the leftmost column in the frame having the index zero.
This enables calculation of the centroid of each region at the end of the scan without the need for another pass through the regions.
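The single-pass labelling just described can be sketched in code. The following is a minimal illustration, assuming a greyscale image held in a NumPy array and a pair of intensity limits lo/hi standing in for the threshold limits of step (114); the function name label_regions and these parameter names are introduced here for illustration and are not taken from the specification.

```python
import numpy as np

def label_regions(image, lo, hi):
    """Single-pass 4-connected labelling in the spirit of Figure 9.
    Pixels whose intensity lies inside [lo, hi] are treated as marker/tag
    pixels; all other pixels keep label 0.  Running sums of row and column
    indices and a pixel count are maintained per region so that centroids
    are available at the end of the scan without another pass."""
    rows, cols = image.shape
    labels = np.zeros((rows, cols), dtype=np.int32)
    parent = {}                    # union-find: label -> parent label
    stats = {}                     # root label -> [sum_row, sum_col, count]

    def find(a):                   # resolve a label to its current root
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    next_label = 1
    for r in range(rows):
        for c in range(cols):
            if not (lo <= image[r, c] <= hi):
                continue                               # background: label 0 (119)
            left = labels[r, c - 1] if c > 0 else 0
            top  = labels[r - 1, c] if r > 0 else 0
            if left == 0 and top == 0:                 # (i) start a new region (125)
                lab = next_label; next_label += 1
                parent[lab] = lab
                stats[lab] = [0, 0, 0]
            elif left != 0 and top != 0:               # (iii) merge if labels differ (127)
                rt, rl = find(top), find(left)
                if rt != rl:
                    parent[rl] = rt                    # keep the top neighbour's label
                    st, sl = stats[rt], stats.pop(rl)
                    st[0] += sl[0]; st[1] += sl[1]; st[2] += sl[2]
                lab = rt
            else:                                      # (ii) copy the non-zero neighbour (117)
                lab = find(top if top != 0 else left)
            labels[r, c] = lab
            s = stats[lab]
            s[0] += r; s[1] += c; s[2] += 1            # running sums and count (126)

    # centroid of every surviving region, directly from the running sums
    centroids = {lab: (s[0] / s[2], s[1] / s[2]) for lab, s in stats.items()}
    return labels, parent, centroids
```

A union-find structure records the region merges of step (127); the per-pixel labels can be resolved to their final region through the returned parent mapping, and the running sums of step (126) yield each centroid directly at the end of the scan.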
Figure 10 illustrates step (104) of figure 7 in more detail. The processing begins with the first region (157) obtained from the segmentation step (103) of figure 7. If the number of pixels in the region is too small (154), the region is rejected (155).

The centralized Hu moments of the region are calculated (145). These centralized Hu moment values are then compared to the centralized Hu moment values for the standard marker geometry. If the centralized Hu moment values for a region are not all within pre-specified tolerance limits of the centralized Hu moment values for the standard marker template (147), the region is rejected as a marker (155). Otherwise the orientation of the potential marker is computed (149). This involves finding the positions of clearly identifiable points of the marker geometry. The tag pattern on the region is identified (150). If the tag pattern is valid (150), the region is accepted as a marker. If the region being processed is the last region (153), the method terminates. Otherwise the next region is considered (156) and the above steps are repeated.
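A sketch of this identification step follows, assuming each region is available as a list of (row, column) pixel coordinates. The tolerance value, the minimum pixel count and the function names are illustrative choices rather than values prescribed by the specification, and the tag-pattern check of step (150) is omitted here.

```python
import numpy as np

def hu_first_four(pixels):
    """First four Hu invariants of a binary region given as (row, col)
    coordinates, using the standard normalised central moments."""
    pts = np.asarray(pixels, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    dx, dy = x - x.mean(), y - y.mean()
    n = float(len(pts))

    def eta(p, q):                               # normalised central moment
        return np.sum(dx ** p * dy ** q) / n ** (1.0 + (p + q) / 2.0)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    phi3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    phi4 = (eta(3, 0) + eta(1, 2)) ** 2 + (eta(2, 1) + eta(0, 3)) ** 2
    return np.array([phi1, phi2, phi3, phi4])

def screen_region(pixels, template_phi, rel_tol=0.15, min_pixels=50):
    """Steps (154)-(149) of Figure 10 in outline: reject small regions,
    compare Hu invariants with the template within a tolerance, and return
    a first estimate of position and orientation (centroid to farthest pixel).
    rel_tol and min_pixels are illustrative values only."""
    pts = np.asarray(pixels, dtype=float)
    if len(pts) < min_pixels:
        return None                              # region too small (155)
    phi = hu_first_four(pts)
    if np.any(np.abs(phi - template_phi) > rel_tol * np.abs(template_phi)):
        return None                              # not the marker geometry (155)
    centroid = pts.mean(axis=0)
    far = pts[np.argmax(np.linalg.norm(pts - centroid, axis=1))]
    v = far - centroid                           # vector (39) of Fig 3(d)
    # angle measured in (row, col) image coordinates; mapping this to the
    # X axis of the image frame of reference is an assumption of the sketch
    angle_deg = np.degrees(np.arctan2(v[1], v[0]))
    return {"centroid": centroid, "farthest": far, "angle_deg": angle_deg, "hu": phi}
```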
Figure 11 illustrates step (102) of figure 7 in more detail, which is the process for estimation of the threshold intensity value for a given frame. An initial value of threshold is chosen for the frame (171). The segmentation process is carried out using this value as threshold (172). Identification of individual markers is then done based on the tag-patterns of the markers (173). A count of the markers identified is kept, corresponding to the intensity value used as threshold (174). The next intensity value (176) is then used as threshold, and the above steps are repeated for this threshold value. These steps are repeated for a set of intensity values (175). Next, those threshold values are identified for which maximum count of the markers is obtained. The median of this set of threshold values is used as an estimate for the threshold for the current frame (177).
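The threshold estimation of Figure 11 can be sketched as follows. The callables segment and identify_markers stand in for the segmentation and identification steps described above and are assumptions of this sketch, as is the set of candidate threshold values supplied by the caller.

```python
import numpy as np

def estimate_threshold(frame, candidates, segment, identify_markers):
    """Figure 11 in outline: segment the frame at each candidate threshold,
    count the markers positively identified from their tag patterns, and
    return the median of the thresholds giving the maximum count."""
    counts = []
    for t in candidates:
        regions = segment(frame, t)                      # step (172)
        counts.append(len(identify_markers(regions)))    # steps (173)-(174)
    counts = np.asarray(counts)
    best = np.asarray(candidates)[counts == counts.max()]
    return float(np.median(best))                        # step (177)
```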
The invention is now illustrated with a non-limiting example.
Referring to the geometrical shapes shown in figure 1, we will use the first four Hu moments, which are invariant with respect to rotation and scaling of the shape. The centralized Hu moments (represented by Φ1, Φ2, Φ3 and Φ4) are computed as follows:
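A sketch of these quantities, assuming the standard definitions of the first four Hu invariants in terms of the central moments μ_pq and normalised central moments η_pq (these symbols are introduced here for illustration and are not reproduced from the original specification):

```latex
% Central moments over the n pixels of a region, their normalisation,
% and the first four Hu invariants (standard definitions; \mu_{00} = n).
\begin{align*}
\mu_{pq} &= \sum_{i=1}^{n} (x_i - \bar{x})^{p}\,(y_i - \bar{y})^{q},
&\eta_{pq} &= \frac{\mu_{pq}}{\mu_{00}^{\,1+(p+q)/2}}, \quad \mu_{00} = n,\\
\Phi_1 &= \eta_{20} + \eta_{02},
&\Phi_2 &= (\eta_{20} - \eta_{02})^{2} + 4\,\eta_{11}^{2},\\
\Phi_3 &= (\eta_{30} - 3\eta_{12})^{2} + (3\eta_{21} - \eta_{03})^{2},
&\Phi_4 &= (\eta_{30} + \eta_{12})^{2} + (\eta_{21} + \eta_{03})^{2}.
\end{align*}
```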



wherein, x and y are the row and column indices respectively for a pixel and n is the total number of pixels in the region.
All the required summations, i.e., Σx, Σx², Σx³, Σxy, Σx²y, Σxy², Σy, Σy², Σy³, involving the row and column numbers are maintained during a pass through the pixels of the segmented regions.
For a continuous geometric shape, the summations are replaced by integral operations and result in constant values when the shape in question is rotated or scaled. However, for an image segment this constancy is not obtained due to the fact that a pixel has finite size. Because of this, a segment boundary that is not aligned with the pixels cannot be represented exactly in the digital image; the rotated images effectively have an altered shape, and therefore the invariant moments actually deviate. This deviation for the shapes given in Fig 1 is plotted for 36 different orientations in the graphs shown in Fig 2.
Referring to Fig 2, it can be seen that for most shapes the higher invariant moments rapidly decrease in value. It is for this reason that invariant moments higher than the fourth are not practically usable in most cases. It can be noted that marker geometries (3) and (4) have significantly higher invariant moments compared to the other shapes considered. Therefore these shapes are easy to distinguish from other shapes even based on comparison of a single invariant moment, and the reliability of recognition is improved by considering more invariant moments.
It is also worth noting that the invariant moments corresponding to marker geometries having curved boundaries (8), (9) and (10) and having more than one axis of symmetry (7) are comparatively smaller and are less easy to distinguish from common shapes that may be found in the scene, and therefore these are not suitable as markers.
Referring to curves (13), (14), (23) and (24), it can be noticed that marker geometry (1) displays the least percentage deviation with change in orientation in the image compared to the other marker geometries. This is a desired property that helps the marker geometry to be recognized reliably. This fact can be better seen from the table below.
Table 1

The steps required in this method to track multiple moving objects are now described.
The image acquisition device is positioned with its optical axis near-normal to the plane of motion of the objects as shown in figure (5). A marker geometry such as (1) is attached to the objects whose motion is to be measured. The markers are black in color while the background is white. Identification tags (33), (48), (43) and (42), located in the quadrant defined by the two limbs, are of numeric, alphanumeric, barcode or symbolic type and are black in color. The high contrast between the marker and its background and between the identification tag and its background ensures that a threshold operation based on the image intensities yields a binary value of one for pixels in the marker and tag regions and a value of zero for the background pixels.
At least one marker that does not get occluded during the motion is pasted on each moving object to be tracked. More markers are pasted on links that get occluded during motion so that at least one marker remains visible in the image. A unique identification tag number is used on each marker. Markers attached to fixed bodies can serve as a reference, as these are not expected to move from frame to frame of the image sequence.
The output of the image acquisition device is timed electrical signals, which are converted into a matrix of intensity values using image-acquisition hardware. The matrix of intensity values in each image is processed using the data processing system. Each image in the sequence is associated with a time, with respect to a datum, at which the image is acquired. In determining the motion of the objects, the information of the position and orientation of the markers and the time instant of the frame are required. If necessary, the image can be preprocessed. In preprocessing, random noise can be reduced and the quality of the image enhanced to improve the fidelity of distinguishing the markers from the background.
The tasks performed by the data processing system are shown in figure 6 and figure 7. The image frame input consists of pixels. A pixel is said to be 'connected' to another if they share a common side. A set of 'connected' pixels, wherein connected pixels are as defined above, is said to form a 'region' if every pixel in this set is connected to at least one other pixel of the set. Any two connected pixels are called 'neighbours'. The first task in the extraction of the markers is to divide the image into regions in the image array composed of groups of interconnected pixels having similar intensity values. This step is known as segmentation and can be accomplished using a standard method such as the Rosenfeld method. For each of the regions, invariant moment values are calculated, as already illustrated.
The values corresponding to each of the regions are compared with the pre-determined values of the invariant moments for the marker geometry being used. If the deviations are within pre-defined limits, the region is treated as a potential marker; otherwise no further computations are performed on this region of the image. A first estimate of orientation is obtained from the location of two clearly identifiable points. A refined estimate of orientation is then obtained for each marker based on the slopes of clearly identifiable straight boundary lines of the marker region. The size of the marker region, in terms of the size of the 'standard marker', is calculated based on the distance between the centroid of the marker region and the pixel farthest from the centroid. From the 'size' of the marker region and the geometry of the 'standard marker', the end points of the clearly identifiable straight boundary lines are located. Next, a least-squares line is fit through those pixels that satisfy the following conditions:
1. They are 'close' to imaginary lines joining the end points. A pixel is said to be 'close' to a line if it lies within a two pixel-distance of that line.
2. They are 'boundary' pixels. A pixel is said to be a boundary pixel if not all of its top, bottom, left and right neighbors are members of the region with the same label.
The slope of this line gives the orientation with respect to a reference line on the standard marker. Based on the geometric features of the marker being used, the computation of the four corners of the bounding box defining the identification tag can be accomplished. Knowing the tag bounding box and the orientation of the marker, the tag bounding box image can be rotated to straighten the characters and symbols of the identification tag.
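A sketch of this refinement step follows. Instead of an ordinary least-squares slope, the sketch uses the principal direction of the selected boundary pixels (a total least-squares fit), which behaves well for near-vertical edges; the function name and the arguments p_start and p_end (the predicted end points of the straight edge) are introduced here for illustration.

```python
import numpy as np

def refine_edge_orientation(region_pixels, p_start, p_end, max_dist=2.0):
    """Fit a line through the boundary pixels lying within 'max_dist' (the
    two pixel-distances of condition 1) of the imaginary line joining the
    predicted end points of a straight marker edge, and return its direction
    as an angle in degrees."""
    pset = {(int(r), int(c)) for r, c in region_pixels}
    pts = np.array(sorted(pset), dtype=float)

    def is_boundary(r, c):          # condition 2: some 4-neighbour lies outside the region
        return any((r + dr, c + dc) not in pset
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    # perpendicular distance of every region pixel from the imaginary line
    p0 = np.asarray(p_start, dtype=float)
    d = np.asarray(p_end, dtype=float) - p0
    d /= np.linalg.norm(d)
    rel = pts - p0
    dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])

    keep = [i for i in range(len(pts))
            if dist[i] <= max_dist and is_boundary(int(pts[i, 0]), int(pts[i, 1]))]
    edge = pts[keep]
    if len(edge) < 2:
        return None                 # not enough edge pixels to fit a line

    centred = edge - edge.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    direction = vt[0]               # unit vector along the fitted edge
    # angle in (row, col) image coordinates, relative to the reference edge
    return np.degrees(np.arctan2(direction[1], direction[0]))
```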


The identification tag is then extracted from the bounding box corresponding to the potential marker, analyzed and interpreted. If interpretation results in a valid tag ID, the potential marker is confirmed; otherwise, it is rejected.
The recognition of a segmented region as a marker is made reliable by the shape of the marker, as the invariant moments of the chosen shapes are distinctly different from those of other objects in the scene and are found to show less variation with respect to changes in orientation and scaling.
Knowing the positions and orientations of a multitude of markers on a body of interest at different times, the motion of the body is inferred from well established principles of Kinematics. From the motion of different bodies in the image sequence, the relationship between motions of different bodies can be accurately inferred.
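As a simple illustration of this inference, the planar displacement of a body between two frames can be obtained directly from the pose reported for one of its markers; the helper below and its example values are hypothetical and not taken from the specification.

```python
def planar_displacement(pose_a, pose_b):
    """Planar displacement of a tracked body between two frames, taken from
    the pose (x, y, theta in degrees) reported for one of its markers in the
    world coordinate frame."""
    dx = pose_b[0] - pose_a[0]
    dy = pose_b[1] - pose_a[1]
    dtheta = (pose_b[2] - pose_a[2] + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return dx, dy, dtheta

# Example with hypothetical poses from two consecutive frames; dividing by the
# frame interval would give velocity and angular velocity estimates.
print(planar_displacement((120.0, 45.5, 10.0), (123.5, 47.0, 355.0)))
# -> (3.5, 1.5, -15.0)
```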
Since each marker contains an identification tag, the method described here automatically establishes track as soon as a new marker enters the scene or an occluded marker becomes visible again.


We claim:
1. A method for tracking planar movement of multiple objects with reference to
world coordinate frame of reference using markers from a sequence of digital
images each having an image coordinate frame of reference wherein the
sequence of digital images is obtained from an image acquisition system
placed such that the line of sight of the image acquisition device is near
normal to the planes of the motion of objects to be tracked and processing of
the sequence of digital images is performed by data processing system in
steps comprising:
• segmenting image into regions based on image pixel intensities
• identifying regions that belong to markers based on a property of marker geometry
• obtaining orientation of markers and identifying individual markers by analyzing the tag pattern located at a known position and orientation with respect to the marker geometry,
wherein segmenting of images involves
• considering each pixel intensity and dividing the entire image into regions
• assigning a label to each pixel such that processed pixels belonging to the same region have a common label
• optionally tracking centroid of each region
wherein identifying a marker region involves
• computing property that is invariant to translation, rotation and scaling, of each region and comparing it with the said properties of the marker geometry to obtain marker regions
• obtaining positions of identifiable points such as centroid and further most corner point from the centroid of the marker region
• obtaining the location and orientation of the marker with respect to the image coordinate frame of reference and thereby obtaining the location and orientation of the object in the world coordinate frame of reference
wherein a marker comprises a marker geometry and a unique tag pattern for identification in a contrasting background wherein the marker geometry has less than two axes of symmetry and facilitates reduction in variance of said property of marker geometry with respect to translation, rotation and scaling.
2. A method for tracking planar movement of multiple objects using markers
from a sequence of digital images as claimed in claim 1 wherein
segmenting of image is carried out in a data processing system using an
intensity threshold value chosen as the median of the range of threshold
values for which the maximum number of markers are identified.


3. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein segmenting of image is carried out in a data processing system using a color threshold value chosen as the median of the range of color threshold values for which the maximum number of markers are identified.
4. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein property for identifying marker region is a set of invariant moments such as Hu invariant moments.
5. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein orientation of marker region is obtained using variant moments.
6. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein a clearly identifiable point of the marker region is a corner point farthest from the centroid of the marker region.
7. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein a clearly identifiable point of the marker region is a corner point nearest to the centroid of the marker region.
8. A method for tracking planar movement of multiple objects using markers from a sequence of digital images of claim 1 wherein orientation of marker is obtained from the line passing through the centroid and a clearly identifiable point of the marker region.
9. A method for tracking planar movement of multiple objects using markers from a sequence of digital images of claim 1 wherein orientation of marker is obtained from the line passing through two clearly identifiable points of the marker region.
10. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein orientation of marker is obtained from the longest straight line of boundary of the marker.
11. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein a measure of orientation of marker is based on first finding the boundary pixels of a straight line edge of the marker region and then these boundary pixels are considered to obtain the best estimate of the slope of the said edge.


12. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein centroid of a region is tracked during the segmentation process.
13. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein the image acquired by the image acquisition device is monochromatic.
14. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein the image acquired by the image acquisition device is in color.
15. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein marker is located using its corner point closest to the centroid.
16. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein marker is located using its corner point farthest from the centroid.
17. A method for tracking planar movement of multiple objects using markers from a sequence of digital images of claim 1 wherein marker is located by the centroid of the region.
18. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein the tag pattern for identifying a marker is numerals.
19. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein the tag pattern for identifying a marker is a set of letters of the alphabet.
20. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein the tag pattern for identifying a marker is a bar code.
21. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein the tag pattern for identifying a marker is the size of an identification mark.


22. A method for tracking planar movement of multiple objects using markers from a sequence of digital images as claimed in claim 1 wherein the tag pattern for identifying a marker is a combination of numerals, letters of the alphabet, barcode and identification mark.
Dated this 14th Day of December 2004
Agent on behalf of Applicant Dr. Prabuddha Ganguli


Patent Number: 206126
Indian Patent Application Number: 1278/MUM/2003
PG Journal Number: 42/2008
Publication Date: 17-Oct-2008
Grant Date: 18-Apr-2007
Date of Filing: 16-Dec-2003
Name of Patentee: INDIAN INSTITUTE OF TECHNOLOGY BOMBAY
Applicant Address: INDIAN INSTITUTE OF TECHNOLOGY, BOMBAY, POWAI, MUMBAI 400 076
Inventors:
1. BHARTENDU SETH, DEPARTMENT OF MECHANICAL ENGINEERING, INDIAN INSTITUTE OF TECHNOLOGY, BOMBAY, POWAI, MUMBAI 400 076
2. RAHUL RAJ, C/O SRI RUP KAMAL, QR. NO. 6066, SECTOR 4/F, BOKARO STEEL CITY, JHARKHAND 827004
3. AMRISH CHANDRAKANT ACHARYA, 002, "ASHWINI" A-WING, APNA GHAR SOCIETY, SWAMI SAAMARTH NAGAR, ANDHERI (WEST), MUMBAI 400 053
4. KOUSTUBH MOHAIR, I-104, MAYFLOWER PARK, MALLAPUR, HYDERABAD 500076
PCT International Classification Number: H04N 7/00
PCT International Application Number: N/A
PCT International Filing Date: N/A
PCT Conventions: NA