Title of Invention

SYSTEM AND METHOD FOR MAINTAINING OBJECTS IN A LOOKUP CACHE

Abstract
A method, computer program product, and a data processing system for maintaining objects in a lookup cache is provided. A primary list is populated with a first plurality of objects. The primary list is an unordered list of the first plurality of objects. A secondary list is populated with a second plurality of objects. The secondary list is an ordered list of the second plurality of objects. Periodically, at least one object of the first plurality of objects is demoted to the secondary list, and at least one object of the second plurality of objects is promoted to the primary list.
Full Text

SYSTEM AND METHOD FOR MAINTAINING OBJECTS IN A LOOKUP CACHE
Technical field
The present invention relates generally to an improved data processing system and in particular to a system and method for maintaining objects in a lookup cache.
Background art
A set associative cache allows data from any address to be stored in a cache location. The cache address is used as a tag. All tags maintained by the cache must be compared with a requested address. If a tag matches the requested address, the matching tag's associated data is then accessed. A set associative cache requires an associative memory to hold the tags. A set associative cache separates objects to be cached into sets based on a unique identifier, and every object will belong to one set only. As objects are added to the cache, each set can only grow to a limited size. Once the set has reached its full size, an existing object must be discarded each time a new object is added. Typically, the discarded object will be the least recently used (LRU) object in the set. A name lookup cache is derived from the general set associative cache architecture. Because a name lookup cache generally contains only a subset of the objects in the system, each set can only grow to a limited size. Once the set has reached its full size, an existing object - typically the least recently used object - must be discarded every time a new object is added. Typically, the set is a linked list that is searched in a linear fashion starting at the head. The search ends when the object is found or the tail object is searched. If the object was found, it is moved from its current position in the list to the head of the list.
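The conventional per-set organization just described can be sketched as a small linked list with move-to-front on a hit and tail eviction on insert. This is an illustrative reconstruction, not the patent's code; all names (`Node`, `LRUSet`, `anchor`) are assumptions.

```python
# Sketch of a conventional name lookup cache set: a singly linked list
# searched linearly from the head; a hit moves the object to the head
# (LRU move-to-front), and a full set discards its tail on insertion.

class Node:
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.next = None          # link to the next object; None marks the tail

class LRUSet:
    def __init__(self, capacity):
        self.capacity = capacity
        self.size = 0
        self.anchor = None        # pointer to the head object of the list

    def lookup(self, key):
        prev, node = None, self.anchor
        while node is not None:           # linear search from the head
            if node.key == key:
                if prev is not None:      # unlink and move to the head
                    prev.next = node.next
                    node.next = self.anchor
                    self.anchor = node    # anchor modified on every hit
                return node.value
            prev, node = node, node.next
        return None                       # search reached the tail: miss

    def insert(self, key, value):
        if self.size == self.capacity:    # full: discard the LRU (tail) object
            prev, node = None, self.anchor
            while node.next is not None:
                prev, node = node, node.next
            if prev is None:
                self.anchor = None
            else:
                prev.next = None
            self.size -= 1
        head = Node(key, value)
        head.next = self.anchor           # new object becomes the head
        self.anchor = head
        self.size += 1
```

Note that both the hit path and the miss path modify the list anchor, which is the behavior criticized in the following paragraph.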
The above-described name lookup cache and LRU implementation can exhibit serious performance problems on modern large symmetric multiprocessor computers. Particularly, the LRU algorithm is implemented by moving objects to the head of the cache list and, thus, for every cache hit or insertion of a new object, the set's list anchor must be modified to point to the new list

head object. Due to various characteristics of modern memory design, global data that is frequently modified presents a serious performance inhibition in a name lookup cache organized by an LRU algorithm.
It would be desirable to devise a method and system for maintaining objects in a lookup cache that addresses some of the disadvantages of existing techniques.
Disclosure of the invention
Accordingly, the present invention provides a method, computer program product, and a data processing system for maintaining objects in a lookup cache. A primary list is populated with a first plurality of objects. The primary list is an unordered list of the first plurality of objects. A secondary list is populated with a second plurality of objects. The secondary list is an ordered list of the second plurality of objects. Periodically, at least one object of the first plurality of objects is demoted to the secondary list, and at least one object of the second plurality of objects is promoted to the primary list.
The claimed organization of the lookup cache into primary and secondary lists may result in less frequent updates to the list of cached objects compared to conventional lookup cache implementations.
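At a high level, the periodic exchange between the two lists can be sketched as follows. This is an illustrative sketch under assumed names (`cache_update`, plain Python lists standing in for the linked lists); index 0 of the secondary list stands for its most recently used head.

```python
# High-level sketch of the disclosed periodic update: promote the most
# recently used secondary objects to the unordered primary list, and demote
# an equal number of primary objects, keeping both list sizes fixed.

def cache_update(primary, secondary, n):
    """Exchange n objects between the lists, keeping both sizes fixed."""
    promoted = secondary[:n]              # most recently used secondary objects
    demoted = primary[-n:]                # e.g. the last n primary slots
    primary = primary[:-n] + promoted     # primary remains an unordered pool
    secondary = demoted + secondary[n:]   # demoted objects enter the ordered list
    return primary, secondary
```

Between updates the primary list is never reordered, so hits on it require no anchor modification.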
Brief description of the drawings
Embodiments of the invention will now be described, by way of example only, with reference to the following drawings:
Figure 1 is a pictorial representation of a data processing system in which a preferred embodiment of the present invention may be implemented;
Figure 2 is a block diagram of a data processing system in which a preferred embodiment of the present invention may be implemented;
Figure 3A is a diagrammatic illustration comprising a linked list data structure as is conventional;
Figure 3B is a diagrammatic illustration of the linked list as reorganized by an LRU algorithm after a call or request for object 304 as is conventional;

Figure 4A is a cache system comprising a primary list and a secondary list implemented according to a preferred embodiment of the present invention;
Figure 4B is a diagrammatic illustration of the cache system of Figure 4A showing a reorganization of the cache system after a cache hit of the secondary list;
Figure 4C is a diagrammatic illustration of the cache system shown in Figures 4A and 4B after a global retrieval of a requested object; Figure 4D is a diagrammatic illustration of the cache system of Figures 4A-4C after a periodic cache update implemented according to a preferred embodiment of the present invention; and
Figure 5 is a flowchart of processing performed by the cache system for returning requested objects implemented according to a preferred embodiment of the
present invention.
Mode(s) for carrying out the invention
With reference now to the figures and in particular with reference to Figure 1, a pictorial representation of a data processing system in which the present invention may be implemented is depicted in accordance with a preferred embodiment of the present invention. A computer 100 is depicted which includes system unit 102, video display terminal 104, keyboard 106, storage devices 108, which may include floppy drives and other types of permanent and removable storage media, and mouse 110. Additional input devices may be included with personal computer 100, such as, for example, a joystick, touchpad, touch screen, trackball, microphone, and the like. Computer 100 can be implemented using any suitable computer, such as an IBM eServer computer or IntelliStation computer, which are products of International Business Machines Corporation, located in Armonk, New York. Although the depicted representation shows a computer, other embodiments of the present invention may be implemented in other types of data processing systems. Computer 100 also preferably includes a graphical user interface (GUI) that may be implemented by means of systems software residing in computer readable media in operation within computer 100.
Referring to Figure 2, a block diagram of a data processing system, such as data processing system 100 in Figure 1, is depicted in accordance with a preferred embodiment of the present invention. Data processing system 200 may be a symmetric multiprocessor (SMP) system including a plurality of processors 202 and 204 connected to system bus 206. Alternatively, a single processor system may be employed. Also connected to system bus 206 is memory controller/cache 208, which provides an interface to local memory 209. I/O bus bridge 210 is connected to system bus 206 and provides an interface to I/O bus 212. Memory controller/cache 208 and I/O bus bridge 210 may be integrated as depicted. Peripheral component interconnect (PCI) bus bridge 214 connected to I/O bus 212 provides an interface to PCI local bus 216. A number of modems may be connected to PCI local bus 216. Typical PCI bus implementations will support four PCI expansion slots or add-in connectors. Communications links to clients 108-112 in Figure 1 may be provided through modem 218 and network adapter 220 connected to PCI local bus 216 through add-in connectors.
Additional PCI bus bridges 222 and 224 provide interfaces for additional PCI local buses 226 and 228, from which additional modems or network adapters may be supported. In this manner, data processing system 200 allows connections to multiple network computers. A memory-mapped graphics adapter 230 and hard disk 232 may also be connected to I/O bus 212 as depicted, either directly or indirectly.
Those of ordinary skill in the art will appreciate that the hardware depicted in Figure 2 may vary. For example, other peripheral devices, such as optical disk drives and the like, also may be used in addition to or in place of the hardware depicted. The depicted example is not meant to imply architectural limitations with respect to the present invention. The data processing system depicted in Figure 2 may be, for example, an IBM eServer pSeries system, a product of International Business Machines Corporation in Armonk, New York, running the Advanced Interactive Executive (AIX) operating system or LINUX operating system.
The present invention provides a cache system that maintains two cache lists, a primary list and a secondary list, each of a fixed size. As objects are added to the cache, they are inserted at the head of the secondary list. At pre-defined intervals, the cache lists are updated by moving some objects from the secondary list to the primary list. Because the cache lists are of a fixed size, an equal number of objects are moved from the primary list to the secondary list during the cache update. Apart from the cache update, the anchor of the primary list is not changed and thus the primary list comprises unordered objects. The secondary list is maintained as an ordered list by an LRU algorithm.
Figure 3A is a diagrammatic illustration of a lookup cache, such

as a directory name lookup cache or file cache, comprising a linked list data structure configured as is conventional. Linked list 300 comprises a plurality of objects 302-305 organized by a least recently used algorithm. Objects 302-305 each have an associated link 306-309. The lookup cache comprises an anchor 310, or handle, that references linked list 300. For example, anchor 310 may be a pointer that comprises an address of a head, or first, object of linked list 300. Links 306-308 comprise a pointer or address of the next object in linked list 300. Tail object 305 comprises the last object of linked list 300 and associated link 309 comprises a null pointer to designate object 305 as the last object of linked list 300. Thus, linked list 300 comprises a logically ordered list having object 302 at the head of the list with objects 303-305 sequentially ordered from the second to last object of linked list 300.
An LRU algorithm maintains objects 302-305 in an organization such that the most recently requested object is positioned at the head of linked list 300 and the least recently requested object is maintained as the tail of linked list 300. Intermediate objects between the head and tail of linked list 300 are positioned in linked list 300 according to the corresponding request order of the associated objects. Thus, in the illustrative example of Figure 3A, object 302 is the most recently requested object and is positioned as the head of linked list 300. Object 305 is the least recently requested object and is positioned as the tail of linked list 300. Thus, objects 302-305 are positioned in linked list 300 from the most recently requested object to the least recently requested object. Figure 3B is a diagrammatic illustration of linked list 300 as reorganized after a request for object 304 as is conventional. The LRU algorithm modifies the organization of linked list 300 to properly indicate that object 304 is currently the most recently used object. For example, the LRU algorithm may modify anchor 310 to point to object 304, and link 308 associated with object 304 is revised to reference the previous head object. In the illustrative example, link 308 is revised by the LRU algorithm to point to object 302 that was positioned as the head of linked list 300 prior to the request for object 304. Additionally, link 307 associated with object 303 is revised to point to object 305, thereby removing object 304 from its previous position in linked list 300.

When a requested object is not located in the lookup cache, a global search may be performed, e.g., by a file management system. In the event the requested object is located globally, it is returned to the requesting entity and inserted as the head object of linked list 300. Accordingly, tail object 305 is discarded in the event that no additional capacity remains for objects within linked list 300. A similar update of linked list 300 is required as that described above with reference to Figure 3B.
In a preferred embodiment of the present invention, an anchor may comprise two pointers, or an implementation as two anchors, that respectively point to two distinct cache lists referred to herein as a primary list and a secondary list. Each list is of a fixed size and is initially empty. As objects are added to the cache, these objects are inserted at the head of the secondary list. At pre-defined intervals, the cache lists are updated by moving some objects from the secondary list to the primary list. Because the cache lists are of a fixed size, an equal number of objects are moved from the primary list to the secondary list during the cache update. Apart from the cache update, the anchor of the primary list is not changed and thus the primary list comprises unordered objects. The secondary list is maintained as an ordered list by an LRU algorithm. Preferably, the object(s) moved from the secondary list to the primary list in a cache update is selected based on its order within the secondary list. Particularly, one or more of the most recently used objects of the secondary list may be selected for promotion to the primary list. Correspondingly, one or more objects of the primary list are selected for demotion to the secondary list. Alternatively, a predetermined number of objects having the smallest hit count may be selected for demotion to the secondary list. In yet another embodiment, the cache system may evaluate all hit counts of objects in the primary list and compare them with hit counts of objects in the secondary list. Any object having a hit count in the primary list that is less than a hit count of an object in the secondary list may be selected for demotion to the secondary list while the object in the secondary list having the hit count exceeding the hit count

of the primary list object is promoted to the primary list. Other implementations for selecting objects of the primary list for demotion to the secondary list may be suitably substituted. Figure 4A is a cache system implemented according to a preferred embodiment of the present invention. The cache system comprises a primary list 400 and a secondary list 410. Primary list 400 comprises one or more objects 402-405 each having a respective link 406-409. Anchor 401 references primary list 400. For example, anchor 401 may be implemented as a pointer comprising an address to one of objects 402-405. In the illustrative example, anchor 401 comprises a pointer to object 402. Links 406-408 reference respective objects within list 400. Link 409 associated with object 405 comprises a null to indicate that the associated object 405 is the tail object of primary list 400. Objects of primary list 400 are not maintained in a logical ordering. That is, an LRU algorithm or other ordering algorithm does not maintain an order of objects 402-405 within list 400.
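One of the selection policies described above, demotion by smallest hit count paired with promotion of the most recently used secondary objects, can be sketched as follows. This is an illustrative reconstruction; the function name `select_exchange` and the dictionary of hit counters are assumptions, not structures named in the patent.

```python
# Sketch of hit-count-based exchange selection: choose n primary-list keys
# with the fewest hits for demotion, and the n most recently used
# secondary-list keys (index 0 = head = most recently used) for promotion.

def select_exchange(primary, secondary, hits, n):
    """Return (demote, promote): n primary keys with the fewest hits and
    the n most recently used secondary keys."""
    demote = sorted(primary, key=lambda k: hits.get(k, 0))[:n]
    promote = secondary[:n]
    return demote, promote
```

The pairwise-comparison embodiment would instead compare each primary hit count against secondary hit counts directly; the sketch above shows only the simpler smallest-hit-count policy.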
Additionally, the cache system comprises secondary list 410 comprising objects 412-415 and associated links 416-419. Secondary list 410 is an ordered object list comprising head object 412, tail object 415, and intermediate objects 413 and 414 positioned therebetween. Anchor 411 comprises a pointer to secondary list 410 by pointing to an object positioned as a head of secondary list 410. In the illustrative example of Figure 4A, secondary list 410 is configured with object 412 positioned as the head object and object 415 positioned as the tail object. In accordance with a preferred embodiment of the present invention, an LRU or other suitable algorithm maintains objects 412-415 sequentially ordered according to their associated usage such that the position of objects 412-415 within secondary list 410 corresponds to the order in which the objects have been requested or otherwise used. Particularly, the head object of secondary list 410 is maintained as the most recently used object, with each remaining object positioned in descending order according to their associated usage.
Notably, objects 402-405 are not maintained in primary list 400 according to any usage order. For example, assume a request is made for an object of primary list 400. A cache system of the present invention first searches primary list 400 to determine if the requested object is maintained in primary list 400. In the event that the requested object is stored in primary list 400, the requested object is returned to the requesting entity, e.g., an application or file system. No update to the primary list is performed due to retrieval of a requested object from primary list 400.

Assume for illustrative purposes that a request is issued for object 414. The cache system first searches primary list 400 and determines that the requested object is not stored on primary list 400. The cache system then proceeds to search secondary list 410. In the illustrative example, object 414 is returned to the requesting entity. Additionally, the secondary list is updated to properly comprise a descending order of objects according to their usage. Thus, the cache system modifies the anchor 411 that points to the head object of secondary list 410 to point to the most recently requested object 414 as shown by the diagrammatic illustration of the cache system implemented according to a preferred embodiment of the present invention in Figure 4B. Additionally, link 418 associated with object 414 is modified to point to previous head object 412 that is now demoted to the second most recently used object in secondary list 410. Likewise, link 417 is revised to point to tail object 415. Thus, secondary list 410 is updated to maintain a sequential ordering of objects in descending order of their associated usage. In the illustrative example of Figure 4B, object 414 is logically positioned as the head object indicating object 414 is the most recently used object within secondary list 410. Objects 412, 413, and 415 are configured in descending order according to their usage.
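The two-stage lookup just walked through can be sketched minimally. Plain Python lists stand in for the linked lists (index 0 = head), and the function name `lookup` is illustrative; the point is that a primary-list hit performs no update, while a secondary-list hit triggers LRU move-to-front.

```python
# Sketch of the lookup path: search the unordered primary list first
# (no reordering on a hit), then the LRU-ordered secondary list
# (a hit moves the object to the head).

def lookup(primary, secondary, key):
    if key in primary:            # primary hit: no anchor or order update
        return key
    if key in secondary:          # secondary hit: move-to-front (LRU update)
        secondary.remove(key)
        secondary.insert(0, key)
        return key
    return None                   # miss in both lists
```

Because most hits are expected to land in the primary list between updates, the frequently modified shared anchor of a conventional LRU list is touched far less often.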
Assume now that an object is requested that is not located in either primary list 400 or secondary list 410. The cache system, after searching the primary list and the secondary list, may attempt a global location of the requested object. For example, the cache system may interface with a file management system or other suitable application that attempts to retrieve the requested object. In the event that the requested object is obtained globally, the requested object is returned to the requesting entity. Additionally, the cache system is updated by adding the globally-retrieved object to secondary list 410. Particularly, the globally-retrieved object is added to secondary list 410 as the head object.
Figure 4C is a diagrammatic illustration of the cache system shown in Figures 4A and 4B after a global retrieval of a requested object in accordance with a preferred embodiment of the present invention. In the illustrative example, assume a requested object (illustratively designated object 421) is not located in primary list 400 and secondary list 410, and a subsequent global search results in location of the requested object. The globally-retrieved object is returned to the requesting entity and is inserted into secondary list 410 as the head object. In the illustrative example, object 421 is the

globally-retrieved object and is inserted into the secondary list 410. Anchor 411 is modified to point to globally-retrieved object 421. Additionally, link 422 associated with object 421 is configured to point to the previous head object 414 and link 417 is revised as a null pointer to indicate associated object 413 is the tail object. The previous tail object (object 415 in Figure 4B) has been discarded to provide sufficient capacity for the newly inserted object 421 within secondary list 410.
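The miss path of Figure 4C reduces to a head insertion with tail eviction on the secondary list. The sketch below is illustrative (list index 0 stands for the head; `insert_global` is an assumed name), using the same object numbering as the figure narrative.

```python
# Sketch of the global-retrieval path: the retrieved object becomes the
# head of the secondary list; the tail object is discarded when the
# list is already at capacity.

def insert_global(secondary, obj, capacity):
    if len(secondary) == capacity:
        secondary.pop()           # discard the tail (least recently used) object
    secondary.insert(0, obj)      # retrieved object becomes the head
    return secondary
```

Starting from the Figure 4B ordering (414, 412, 413, 415), inserting object 421 evicts tail object 415 and yields the Figure 4C ordering (421, 414, 412, 413).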
Figure 4D is a diagrammatic illustration of the cache system of Figures 4A-4C after a periodic cache update implemented according to a preferred embodiment of the present invention. Updates to the cache system are preferably performed at pre-defined intervals. The cache update is designed to exchange objects between the ordered secondary list and the non-ordered primary list. Preferably, objects of the primary list that are exchanged with objects of the secondary list are used less frequently than the objects promoted from the secondary list. Accordingly, a hit count, or other suitable mechanism, of objects within the primary list and/or secondary list may be used for evaluating whether to exchange objects between the primary and secondary lists. Figure 4D is illustrative of a cache update evaluation that resulted in identification of objects 403 and 404 for demotion to secondary list 410 and identification of objects 414 and 421 for promotion from secondary list 410 to primary list 400. In the illustrative example, objects 403 and 404 moved from primary list 400 are positioned at the lower end of secondary list 410. Other implementations may position objects demoted from primary list 400 at the upper end of secondary list 410, or alternatively, objects demoted from primary list 400 may be positioned within secondary list 410 based on their hit counts.
Figure 5 is a flowchart of processing performed by the lookup cache for locating requested objects implemented according to a preferred embodiment of the present invention. The routine may be implemented as a computer-readable instruction set that is fetched, for example, from local memory 209, and executed by a processing unit such as processor 202 or 204. The routine is initialized (step 502) and awaits receipt of a request for an object (step 504). Responsive to receipt of a request for an object, a search for the requested object is then performed on the primary list (step 506). An evaluation is made to determine if the requested object is located in the primary list (step 508). In the event the requested object is located in the primary list, the routine returns the object to the requesting entity
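The flowchart steps recoverable from the text above can be sketched end to end. The source text is truncated after the primary-list branch, so the secondary-list and global-search branches below are inferred from the earlier description of Figures 4A-4C; the function names and step numbering comments are illustrative assumptions.

```python
# Sketch of the Figure 5 request-handling flow: primary search (no update
# on a hit), then secondary search (move-to-front on a hit), then global
# retrieval with head insertion and tail eviction on the secondary list.

def handle_request(primary, secondary, key, fetch_globally, capacity):
    # search the primary list first (step 506)
    if key in primary:                 # located in the primary list (step 508)
        return key                     # return the object; no list update
    if key in secondary:               # secondary hit: LRU move-to-front
        secondary.remove(key)
        secondary.insert(0, key)
        return key
    obj = fetch_globally(key)          # miss in both lists: global search
    if obj is not None:
        if len(secondary) == capacity:
            secondary.pop()            # discard the tail object
        secondary.insert(0, obj)       # retrieved object becomes the head
    return obj
```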






Documents:

184-CHENP-2007 AMENDED PAGES OF SPECIFICATION 06-07-2011.pdf

184-CHENP-2007 AMENDED CLAIMS 06-07-2011.pdf

184-chenp-2007 form-3 06-07-2011.pdf

184-CHENP-2007 OTHER PATENT DOCUMENT 1 06-07-2011.pdf

184-chenp-2007 other patent document 2 06-07-2011.pdf

184-CHENP-2007 CORRESPONDENCE 10-08-2010.pdf

184-CHENP-2007 CORRESPONDENCE OTHERS 06-07-2011.pdf

184-CHENP-2007 CORRESPONDENCE OTHERS 27-06-2013.pdf

184-chenp-2007-abstract.pdf

184-chenp-2007-claims.pdf

184-chenp-2007-correspondnece-others.pdf

184-chenp-2007-description(complete).pdf

184-chenp-2007-drawings.pdf

184-chenp-2007-form 1.pdf

184-chenp-2007-form 26.pdf

184-chenp-2007-form 3.pdf

184-chenp-2007-form 5.pdf

184-chenp-2007-pct.pdf


Patent Number 258717
Indian Patent Application Number 184/CHENP/2007
PG Journal Number 06/2014
Publication Date 07-Feb-2014
Grant Date 31-Jan-2014
Date of Filing 17-Jan-2007
Name of Patentee INTERNATIONAL BUSINESS MACHINES CORPORATION
Applicant Address ARMONK, NEW YORK 10504, USA
Inventors:
# Inventor's Name Inventor's Address
1 MEWHINNEY, GREG 6901 DANWOOD DRIVE, AUSTIN, TEXAS 78759, USA
2 SRINIVAS, MYSORE, SATHYANARAYANA 910 COTTAGE GROVE PASS, AUSTIN, TEXAS 78717, USA
PCT International Classification Number G06F 12/12
PCT International Application Number PCT/EP05/52283
PCT International Filing date 2005-05-18
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 10/870,458 2004-06-17 U.S.A.