Title of Invention

"A METHOD FOR PROTECTING THE FAILURE OFA BORDER ROUTER BETWEEN TWO DOMAINS IN A COMPUTER NETWORK"

Abstract A method for protecting against the failure of a border router between two domains in a computer network, a first domain of the two domains including a next-hop router and the second domain of the two domains including one or more next-hop routers (NNHOPs), the method characterized in: configuring the next-hop router in the first domain to protect the border router (protecting router); learning address prefixes of destinations reachable from the one or more NNHOPs, the learning performed by the protecting router located in the first domain; calculating, using a processor of the protecting router located in the first domain, a backup tunnel to each NNHOP that excludes the border router; associating each backup tunnel with one or more learned address prefixes; and in response to failure of the border router, rerouting traffic directed to the one or more learned address prefixes, via one or more network interfaces of the protecting router located in the first domain, onto an appropriate backup tunnel to a corresponding NNHOP. Figure 6
Full Text AN EFFICIENT MECHANISM FOR FAST RECOVERY IN CASE OF BORDER ROUTER NODE FAILURE IN A COMPUTER NETWORK
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention relates to computer networks and more particularly to protecting against border router failure between domains of a computer network using backup tunnels.
Background Information
A computer network is a geographically distributed collection of nodes interconnected by communication links and segments for transporting data between end nodes, such as personal computers and workstations. Many types of networks are available, with the types ranging from local area networks (LANs) to wide area networks (WANs). LANs typically connect the nodes over dedicated private communications links located in the same general physical location, such as a building or campus. WANs, on the other hand, typically connect geographically dispersed nodes over long-distance communications links, such as common carrier telephone lines, optical lightpaths, synchronous optical networks (SONET), or synchronous digital hierarchy (SDH) links. The Internet is an example of a WAN that connects disparate networks throughout the world, providing global communication between nodes on various networks. The nodes typically communicate over the network by exchanging discrete frames or packets of data according to predefined protocols, such as the Transmission Control Protocol/Internet Protocol (TCP/IP). In this context, a protocol consists of a set of rules defining how the nodes interact with each other. Computer networks may be further interconnected by an intermediate network node, such as a router, to extend the effective "size" of each network.
Since management of interconnected computer networks can prove burdensome, smaller groups of computer networks may be maintained as routing domains or autonomous systems. The networks within an autonomous system (AS) are typically coupled together by conventional "intradomain" routers configured to execute intradomain routing protocols, and are generally subject to a common authority. To improve routing scalability, a service provider (e.g., an ISP) may divide an AS into multiple "areas." It may be desirable, however, to increase the number of nodes capable of exchanging data; in this case, interdomain routers executing interdomain routing protocols are used to interconnect nodes of the various ASes. Moreover, it may be desirable to interconnect various ASes that are operated under different administrative domains. As used herein, an AS or, more particularly, an area is generally referred to as a "domain," and a router that interconnects different domains together is generally referred to as a "border router."
An example of an interdomain routing protocol is the Border Gateway Protocol version 4 (BGP), which performs routing between domains (ASes) by exchanging routing and reachability information among neighboring interdomain routers of the systems. An adjacency is a relationship formed between selected neighboring (peer) routers for the purpose of exchanging routing information messages and abstracting the network topology. The routing information exchanged by BGP peer routers typically includes destination address prefixes, i.e., the portions of destination addresses used by the routing protocol to render routing ("next-hop") decisions. Examples of such destination addresses include IP version 4 (IPv4) and version 6 (IPv6) addresses. BGP generally operates over a reliable transport protocol, such as TCP, to establish a TCP connection/session. The BGP protocol is well known and generally described in Request for Comments (RFC) 1771, entitled A Border Gateway Protocol 4 (BGP-4), published March 1995.
Examples of an intradomain routing protocol, or an interior gateway protocol (IGP), are the Open Shortest Path First (OSPF) routing protocol and the Intermediate-System-to-Intermediate-System (IS-IS) routing protocol. The OSPF and IS-IS protocols are based on link-state technology and, therefore, are commonly referred to as link-state routing protocols. Link-state protocols define the manner with which routing information and network-topology information are exchanged and processed in a domain. This information is generally directed to an intradomain router's local state (e.g., the router's usable interfaces and reachable neighbors or adjacencies). The OSPF protocol is described in RFC 2328, entitled OSPF Version 2, dated April 1998 and the IS-IS protocol used in the context of IP is described in RFC 1195, entitled Use of OSI IS-IS for routing in TCP/IP and Dual Environments, dated December 1990, both of which are hereby incorporated by reference.
An intermediate network node often stores its routing information in a routing table maintained and managed by a routing information base (RIB). The routing table is a searchable data structure in which network addresses are mapped to their associated routing information. However, those skilled in the art will understand that the routing table need not be organized as a table, and alternatively may be another type of searchable data structure. Although the intermediate network node's routing table may be configured with a predetermined set of routing information, the node also may dynamically acquire ("learn") network routing information as it sends and receives data packets. When a packet is received at the intermediate network node, the packet's destination address (e.g., stored in a header of the packet) may be used to identify a routing table entry containing routing information associated with the received packet. Among other things, the packet's routing information indicates the packet's next-hop address.
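As a concrete illustration of the lookup just described, the following sketch performs a longest-prefix match over a small routing table; the prefixes, next-hop addresses, and interface names are assumptions chosen only for the example, not part of the disclosed method.

import ipaddress

# Minimal sketch of a routing table keyed by destination prefix, assuming a
# simple longest-prefix-match lookup; all values here are illustrative only.
routing_table = {
    ipaddress.ip_network("10.1.0.0/16"): {"next_hop": "192.0.2.1", "interface": "eth0"},
    ipaddress.ip_network("10.1.2.0/24"): {"next_hop": "192.0.2.2", "interface": "eth1"},
}

def lookup(destination: str):
    """Return the routing entry whose prefix most specifically matches."""
    addr = ipaddress.ip_address(destination)
    matches = [p for p in routing_table if addr in p]
    if not matches:
        return None                                     # no route: packet would be dropped
    best = max(matches, key=lambda p: p.prefixlen)      # longest prefix wins
    return routing_table[best]

print(lookup("10.1.2.7"))   # -> the /24 entry (more specific than the /16)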
To ensure that its routing table contains up-to-date routing information, the intermediate network node may cooperate with other intermediate nodes to disseminate routing information representative of the current network topology. For example, suppose the intermediate network node detects that one of its neighboring nodes (i.e., adjacent network nodes) becomes unavailable, e.g., due to a link failure or the neighboring node going "off-line," etc. In this situation, the intermediate network node can update the routing information stored in its routing table to ensure that data packets are not routed to the unavailable network node. Furthermore, the intermediate node also may communicate this change in network topology to the other intermediate network nodes so they, too, can update their local routing tables and bypass the unavailable node. In this manner, each of the intermediate network nodes becomes "aware" of the change in topology.
Typically, routing information is disseminated among the intermediate network nodes in accordance with a predetermined network communication protocol, such as a link-state protocol (e.g., IS-IS, or OSPF). Conventional link-state protocols use link-state packets (or "IGP Advertisements") for exchanging routing information between interconnected intermediate network nodes (IGP nodes). As used herein, an IGP Advertisement generally describes any message used by an IGP routing protocol for communicating routing information among interconnected IGP nodes, i.e., routers and switches. Operationally, a first IGP node may generate an IGP Advertisement and "flood" (i.e., transmit) the packet over each of its network interfaces coupled to other IGP nodes. Thereafter, a second IGP node may receive the flooded IGP Advertisement and update its routing table based on routing information contained in the received IGP Advertisement. Next, the second IGP node may flood the received IGP Advertisement over each of its network interfaces, except for the interface at which the IGP Advertisement was received. This flooding process may be repeated until each interconnected IGP node has received the IGP Advertisement and updated its local routing table.
In practice, each IGP node typically generates and disseminates an IGP Advertisement whose routing information includes a list of the intermediate node's neighboring network nodes and one or more "cost" values associated with each neighbor. As used herein, a cost value associated with a neighboring node is an arbitrary metric used to determine the relative ease/burden of communicating with that node. For instance, the cost value may be measured in terms of the number of hops required to reach the neighboring node, the average time for a packet to reach the neighboring node, the amount of network traffic or available bandwidth over a communication link coupled to the neighboring node, etc.
As noted, IGP Advertisements are usually flooded until each intermediate network IGP node has received an IGP Advertisement from each of the other interconnected intermediate nodes. Then, each of the IGP nodes can construct the same "view" of the network topology by aggregating the received lists of neighboring nodes and cost values. To that end, each IGP node may input this received routing information to a "shortest path first" (SPF) calculation that determines the lowest-cost network paths that couple the intermediate node with each of the other network nodes. For example, the Dijkstra algorithm is a conventional technique for performing such a SPF calculation, as described in more detail in Section 12.2.4 of the text book Interconnections Second Edition, by Radia Perlman, published September 1999, which is hereby incorporated by reference as though fully set forth herein. Each IGP node updates the routing information stored in its local routing table based on the results of its SPF calculation. More specifically, the RIB updates the routing table to correlate destination nodes with next-hop interfaces associated with the lowest-cost paths to reach those nodes, as determined by the SPF calculation.
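The SPF computation referenced above can be sketched as a standard Dijkstra run over the aggregated neighbor/cost lists, additionally recording the first hop used to reach each destination; the topology and costs below are illustrative assumptions loosely mirroring the network of Fig. 1.

import heapq

# Minimal sketch of an SPF (Dijkstra) computation over neighbor/cost lists.
topology = {
    "A":    {"ABR1": 6, "ABR2": 7},
    "ABR1": {"A": 6, "B": 1, "C": 1},
    "ABR2": {"A": 7, "B": 2, "C": 2},
    "B":    {"ABR1": 1, "ABR2": 2},
    "C":    {"ABR1": 1, "ABR2": 2},
}

def spf(root):
    """Return {node: (lowest cost, first hop from root)} for every reachable node."""
    best = {root: (0, None)}
    heap = [(0, root, None)]
    while heap:
        cost, node, first_hop = heapq.heappop(heap)
        if cost > best[node][0]:
            continue                                   # stale queue entry
        for neigh, link_cost in topology[node].items():
            new_cost = cost + link_cost
            hop = first_hop if first_hop else neigh    # remember the next hop from root
            if neigh not in best or new_cost < best[neigh][0]:
                best[neigh] = (new_cost, hop)
                heapq.heappush(heap, (new_cost, neigh, hop))
    return best

print(spf("A"))   # e.g. B is reached at cost 7 with first hop ABR1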
Multi-Protocol Label Switching (MPLS) Traffic Engineering has been developed to meet data networking requirements such as guaranteed available bandwidth or fast restoration. MPLS Traffic Engineering exploits modern label switching techniques to build guaranteed bandwidth end-to-end tunnels through an IP/MPLS network of label switched routers (LSRs). These tunnels are a type of label switched path (LSP) and thus are generally referred to as MPLS Traffic Engineering (TE) LSPs. Examples of MPLS TE can be found in RFC 3209, entitled RSVP-TE: Extensions to RSVP for LSP Tunnels dated December 2001, RFC 3784 entitled Intermediate-System-to-Intermediate-System (IS-IS) Extensions for Traffic Engineering (TE) dated June 2004, and RFC 3630, entitled Traffic Engineering (TE) Extensions to OSPF Version 2 dated September 2003, the contents of all of which are hereby incorporated by reference in their entirety.
Establishment of an MPLS TE-LSP from a head-end LSR to a tail-end LSR involves computation of a path through a network of LSRs. Optimally, the computed path is the "shortest" path, as measured in some metric, that satisfies all relevant LSP Traffic Engineering constraints such as, e.g., required bandwidth, availability of backup bypass tunnels for each link and node included in the path, etc. Path computation can either be performed by the head-end LSR or by some other entity operating as a path computation element (PCE). The head-end LSR (or a PCE) exploits its knowledge of network topology and resources available on each link to perform the path computation according to the LSP Traffic Engineering constraints. Various path computation methodologies are available including CSPF (constrained shortest path first). MPLS TE-LSPs can be configured within a single domain, e.g., IGP area or level, or may also span multiple domains, e.g., IGP areas or levels.
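The CSPF methodology mentioned above can be sketched as pruning links that cannot meet a bandwidth constraint and then running an ordinary SPF (such as the Dijkstra sketch earlier) over what remains; the link attributes and bandwidth figures below are purely illustrative assumptions.

# Minimal sketch of a constrained SPF (CSPF): remove links that cannot satisfy
# the requested bandwidth, then compute the shortest path on the remainder.
links = {
    ("A", "ABR1"): {"cost": 6, "avail_bw": 100},
    ("A", "ABR2"): {"cost": 7, "avail_bw": 40},
    ("ABR1", "B"): {"cost": 1, "avail_bw": 100},
    ("ABR2", "B"): {"cost": 2, "avail_bw": 40},
}

def prune(required_bw):
    """Keep only links with enough available bandwidth for the requested TE-LSP."""
    return {edge: attrs for edge, attrs in links.items()
            if attrs["avail_bw"] >= required_bw}

feasible = prune(required_bw=50)
# Only A-ABR1 and ABR1-B survive, so an SPF over 'feasible' yields A->ABR1->B,
# the shortest path that also satisfies the 50-unit bandwidth constraint.
print(sorted(feasible))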

One difficulty that arises in crossing domain boundaries is that path computation at the head-end LSR requires knowledge of network topology and resources across the entire network between the head-end and the tail-end LSRs. Yet service providers typically do not share this information with each other across domain borders. In particular, network topology and resource information do not generally flow across area boundaries even though a single service provider may operate all the areas. Neither the head-end LSR nor any single PCE will have sufficient knowledge to compute a path. Because of this, MPLS Traffic Engineering path computation techniques are required to compute inter-domain TE-LSPs.
The use of PCEs has been adapted to create a distributed PCE architecture, in order to extend MPLS TE-LSPs across domain boundaries. An example of such a distributed architecture is described in commonly-owned copending U.S. Patent Application Serial No. 10/767,574, entitled COMPUTING INTER-AUTONOMOUS SYSTEM MPLS TRAFFIC ENGINEERING LSP PATHS, filed by Vasseur et al. on September 18, 2003, the contents of which are hereby incorporated by reference in its entirety. In a distributed PCE architecture, the visibility needed to compute paths is extended between adjacent domains so that PCEs may cooperate to compute paths across multiple domains by exchanging virtual shortest path trees (VSPTs) while preserving confidentiality across domains (e.g., when applicable to ASes).
Some applications may incorporate unidirectional data flows configured to transfer time-sensitive traffic from a source (sender) in a computer network to a destination (receiver) in the network in accordance with a certain "quality of service" (QoS). Here, network resources may be reserved for the unidirectional flow to ensure that the QoS associated with the data flow is maintained. The Resource ReSerVation Protocol (RSVP) is a network-control protocol that enables applications to reserve resources in order to obtain special QoS for their data flows. RSVP works in conjunction with routing protocols to, e.g., reserve resources for a data flow in a computer network in order to establish a level of QoS required by the data flow. RSVP is defined in R. Braden, et al., Resource ReSerVation Protocol (RSVP), RFC 2205. In the case of traffic engineering applications, RSVP signaling is used to establish a TE-LSP and to convey various TE-LSP attributes to routers, such as border routers, along the TE-LSP obeying the set of required constraints whose path may have been computed by various means.
Generally, a tunnel is a logical structure that encapsulates a packet (a header and data) of one protocol inside a data field of another protocol packet with a new header. In this manner, the encapsulated data may be transmitted through networks that it would otherwise not be capable of traversing. More importantly, a tunnel creates a transparent virtual network link between two network nodes that is generally unaffected by physical network links or devices (i.e., the physical network links or devices merely forward the encapsulated packet based on the new header). While one example of a tunnel is an MPLS TE-LSP, other known tunneling methods include, inter alia, the Layer Two Tunnel Protocol (L2TP), the Point-to-Point Tunneling Protocol (PPTP), and IP tunnels.
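A minimal sketch of the encapsulation idea follows: the original packet becomes the opaque payload of a new packet carrying a new outer header, and only the tunnel tail-end restores it. Field names are illustrative and do not correspond to any particular tunneling protocol's wire format.

# Minimal sketch of tunnel encapsulation/decapsulation with illustrative fields.
def encapsulate(inner_packet: dict, tunnel_src: str, tunnel_dst: str) -> dict:
    return {
        "outer_header": {"src": tunnel_src, "dst": tunnel_dst},
        "payload": inner_packet,              # entire inner packet carried opaquely
    }

def decapsulate(tunnel_packet: dict) -> dict:
    return tunnel_packet["payload"]           # tail-end recovers the original packet

ip_pkt = {"header": {"src": "10.1.1.1", "dst": "10.2.2.2"}, "data": b"..."}
tun_pkt = encapsulate(ip_pkt, tunnel_src="A", tunnel_dst="B")
assert decapsulate(tun_pkt) == ip_pkt         # transit nodes only inspect the outer header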
Reconfiguring a network in response to a network element failure using, e.g., pure IP rerouting, can be time consuming. Many recovery techniques, however, are available to provide fast recovery and/or network configuration in the event of a network element failure, including, inter alia, Fast Reroute (FRR), e.g., IP FRR or MPLS TE FRR. An example of MPLS TE FRR is described in Pan, et al., Fast Reroute Extensions to RSVP-TE for LSP Tunnels, Internet Draft, February 2005, which is hereby incorporated by reference as though fully set forth herein. FRR has been widely deployed to protect against intra-domain network failures, where "backup tunnels" are created to bypass a protected network device (e.g., links, shared risk link groups (SRLGs), and nodes). When the network element fails, traffic is quickly rerouted over a backup tunnel to bypass the failed device (or in the case of MPLS, a set of TE-LSP(s) is/are quickly rerouted). In an inter-domain IP/MPLS environment, however, there is a lack of knowledge of routers surrounding an inter-domain border router regarding the forwarding path of inter-domain traffic traversing that border router. Protection of border routers is particularly critical because of the amount of traffic generally traversing them. While solutions are available to create inter-domain backup tunnels (e.g., manually configuring backup tunnels from each router surrounding the border router in one domain to each router surrounding the border router in another), one difficulty is to dynamically select an appropriate backup tunnel for each destination address prefix. Selecting an improper backup tunnel could result in routing loops, black holing, time delay, or other undesirable outcomes.
SUMMARY OF THE INVENTION
The present invention is directed to a technique for protecting traffic (IP) against the failure of a border router between two domains in a computer network using Fast Reroute and backup tunnels. According to the protection technique, the border router (i.e., the "protected border router") announces/advertises a list of all its adjacent next-hop routers (i.e., its "neighbors") residing in first and second domains interconnected by the protected border router. A neighbor in the first domain that is immediately upstream to the protected border router and that is configured to protect the border router (i.e., the "protecting router") learns address prefixes (i.e., "protected prefixes") reachable from the next-hop routers in the second domain (i.e., "next-next-hops," NNHOPs to the protected prefixes from the protecting router). The protecting router calculates a backup tunnel to each NNHOP that excludes the protected border router, and associates each backup tunnel with one or more protected prefixes accordingly. When the protected border router fails, Fast Reroute is triggered, and the protected prefix traffic is rerouted by the protecting router onto an appropriate backup tunnel to a corresponding NNHOP.
In the illustrative embodiment of the present invention, the protected border router employs an Interior Gateway Protocol (IGP), such as the Open Shortest Path First (OSPF) or Intermediate-System-to-Intermediate-System (IS-IS) protocol, to transmit the advertised list of neighbors. Notably, a specific type of IGP advertisement may be used to limit the transmission to only the neighbors. Upon receiving the list of neighbors, the protecting router parses those NNHOP neighbors that are in the second domain, and learns the address prefixes reachable from each NNHOP. In one aspect of the present invention, the protecting router creates Multiprotocol Label Switching (MPLS) Traffic Engineering Label Switched Paths (TE-LSPs) to the NNHOPs and utilizes extensions to a request/response signaling exchange, such as the Resource ReSerVation Protocol (RSVP), to learn the reachable address prefixes. These TE-LSPs are "fake," in that they are only used to obtain reachability information (e.g., without reserving bandwidth), and may be subsequently torn down.
Using the reachability information, the protecting router calculates backup tunnels to the NNHOPs that are diversely routed around the protected border router (e.g., by traversing another border router between the first and second domains). These backup tunnels are illustratively stored in a shadow table of the protecting router, so that conventional (IP) routing may still be performed by the protecting router using its routing table. Once the protecting router determines that the protected border router has failed, Fast Reroute is triggered, and all traffic destined to a protected prefix is rerouted by the protecting router onto an appropriate backup tunnel to a corresponding NNHOP, e.g., according to the shadow table.
Advantageously, the novel technique protects against the failure of a border router between two domains in a computer network using Fast Reroute and backup tunnels. By creating a limited number of backup tunnels between a protecting router and NNHOP routers, the inventive technique provides Fast Reroute protection of the border router that requires minimal configuration and avoids the deployment of backup tunnels from every neighbor in the first domain of the protected border router to every neighbor in the second domain.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and further advantages of the invention may be better understood by referring to the following description in conjunction with the accompanying drawings in which like reference numerals indicate identically or functionally similar elements, of which:
Fig. 1 is a schematic block diagram of an exemplary computer network of areas that may be used in accordance with the present invention;
Fig. 2 is a schematic block diagram of an exemplary router that may be advantageously used with the present invention;
Fig. 3 is a schematic block diagram of an exemplary IGP Advertisement that may be flooded by the routers;

Fig. 4 is a schematic block diagram of an exemplary computer network as in Fig. 1 with illustrative TE-LSPs in accordance with the present invention;
Fig. 5 is a schematic block diagram of an exemplary routing table that may be advantageously used with the present invention;
Fig. 6 is a schematic block diagram of an exemplary computer network as in Fig. 1 with illustrative backup tunnels in accordance with the present invention; and
Fig. 7 is a flowchart illustrating a sequence of steps for protecting against the failure of a border router between two domains in accordance with the present invention.
DETAILED DESCRIPTION OF AN ILLUSTRATIVE EMBODIMENT
Fig. 1 is a schematic block diagram of an exemplary computer network 100 comprising area A1 having exemplary intradomain router A, and area A2 having exemplary intradomain routers B and C. Illustratively, address prefixes IP1 and IP2 are reachable via routers B and C, respectively. In addition, A1 and A2 share area border routers ABR1 and ABR2. As used herein, an area is a collection of routers that share full network topology information with each other but not necessarily with routers outside the area. A collection of areas may be contained within a single autonomous system (AS). The term area as used herein also encompasses the term "level" which has a similar meaning for networks that employ IS-IS as their interior gateway protocol (IGP), in which case the area border routers ABR1-2 are embodied as level 1/level 2 (L1L2) routers. These examples are merely representative. The terms area and level are used interchangeably herein, as well as the use of ABR, L1L2 routers, and more generally, border routers.
Data packets may be exchanged among the areas A1-A2 using predefined network communication protocols such as the Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Asynchronous Transfer Mode (ATM) protocol, Frame Relay protocol, Internet Packet Exchange (IPX) protocol, etc. Routing information may be distributed among the routers of the areas using predetermined IGPs, such as conventional distance-vector protocols or, illustratively, link-state protocols, through the use of link-state advertisements or link-state packets ("IGP Advertisements").
Fig. 2 is a schematic block diagram of an exemplary router 200 that may be advantageously used with the present invention as, e.g., an intradomain router or a border router. The router comprises a plurality of network interfaces 210, a processor 220, and a memory 240 interconnected by a system bus 250. The network interfaces 210 contain the mechanical, electrical and signaling circuitry for communicating data over physical links coupled to the network 100. The network interfaces may be configured to transmit and/or receive data using a variety of different communication protocols, including, inter alia, TCP/IP, UDP, ATM, synchronous optical networks (SONET), wireless protocols, Frame Relay, Ethernet, Fiber Distributed Data Interface (FDDI), etc.
The memory 240 comprises a plurality of storage locations that are addressable by the processor 220 and the network interfaces 210 for storing software programs and data structures associated with the present invention. The processor 220 may comprise necessary elements or logic adapted to execute the software programs and manipulate the data structures, such as routing table 500 and shadow table 550. A router operating system 242, portions of which are typically resident in memory 240 and executed by the processor, functionally organizes the router by, inter alia, invoking network operations in support of software processes and/or services executing on the router. These software processes and/or services include Routing Information Base (RIB) 245, routing services 247, and RSVP services 249. It will be apparent to those skilled in the art that other processor and memory means, including various computer-readable media, may be used to store and execute program instructions pertaining to the inventive technique described herein.
Routing services 247 contain computer executable instructions executed by processor 220 to perform functions provided by one or more routing protocols, such as OSPF and IS-IS. These functions may be configured to manage a forwarding information database (not shown) containing, e.g., data used to make forwarding decisions. RSVP services 249 contain computer executable instructions for implementing RSVP and processing RSVP messages in accordance with the present invention. RSVP is described in R. Braden, et al., Resource ReSerVation Protocol (RSVP), Request For Comments (RFC) 2205, September 1997, available from the IETF and which is hereby incorporated by reference as though fully set forth herein, and in RFC 3209, entitled RSVP-TE: Extensions to RSVP for LSP Tunnels, as incorporated above.
Changes in the network topology may be communicated among routers 200 using a link-state protocol, such as the conventional OSPF and IS-IS protocols. Suppose, for example, that a communication link fails within an area or a cost value associated with a network node changes. Once the change in the network's state is detected by one of the routers, that router may flood an IGP Advertisement communicating the change to the other routers in the area. In this manner, each of the routers eventually "converges" to an identical view of the network topology.
In one embodiment, the routers described herein are IP routers that implement Multi-Protocol Label Switching (MPLS) and operate as label switched routers (LSRs). In one simple MPLS scenario, at an ingress to a network, a label is assigned to each incoming packet based on its forwarding equivalence class before forwarding the packet to a next-hop router. At each router, a forwarding selection and a new substitute label are determined by using the label found in the incoming packet as a reference to a label forwarding table that includes this information. At the network egress (or one hop prior), a forwarding decision is made based on the incoming label but optionally no label is included when the packet is sent on to the next hop.
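The per-hop label operation described above can be sketched as a lookup in a label forwarding table that either swaps the incoming label or pops it at (or one hop prior to) the egress; the label values and interface names are illustrative assumptions.

# Minimal sketch of a label forwarding table and the per-router forwarding step.
label_table = {
    17: {"out_if": "eth1", "action": "swap", "out_label": 22},
    22: {"out_if": "eth2", "action": "pop",  "out_label": None},   # egress (or one hop prior)
}

def forward(label: int):
    entry = label_table[label]
    if entry["action"] == "swap":
        return entry["out_if"], entry["out_label"]    # forward with the substitute label
    return entry["out_if"], None                      # label removed; IP forwarding follows

print(forward(17))   # ('eth1', 22)
print(forward(22))   # ('eth2', None)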
The paths taken by packets that traverse the network in this manner are referred to as label switched paths (LSPs). Establishment of a TE-LSP requires computation of a path, signaling along the path, and modification of forwarding tables along the path. MPLS TE establishes LSPs that have guaranteed bandwidth under certain conditions. Illustratively, the TE-LSPs may be signaled through the use of the RSVP protocol, and in particular, RSVP TE signaling messages.
In accordance with RSVP, to establish a TE-LSP (data flow) between a sender (e.g., a head-end node A) and a receiver (e.g., a tail-end node C), the sender may send an RSVP path (Path) message downstream hop-by-hop along a path (e.g., a unicast route) to the receiver to identify the sender and indicate, e.g., bandwidth needed to accommodate the data flow, along with other attributes of the TE-LSP. The Path message may contain various information about the data flow including, e.g., traffic characteristics of the data flow.
To compute paths across multiple domains, previously incorporated U.S. Application Serial No. 10/767,574 describes the use of a virtual shortest path tree (VSPT) algorithm in a distributed path computation element (PCE) architecture. Notably, it will be apparent to those skilled in the art that other methods may be used to compute the TE-LSPs (e.g., explicit paths, etc.), and such methods are within the scope of the present invention. Furthermore, the path computation request (and response) can be implemented in accordance with a protocol specified in Vasseur, et al. RSVP Path Computation Request and Reply Messages (draft-vasseur-mpls-computation-rsvp-05.txt), Internet Draft, July 2004, which is hereby incorporated by reference as though fully set forth herein.
Although the illustrative embodiment described herein is directed to MPLS, it should also be noted that the present invention may advantageously apply to Generalized MPLS (GMPLS), which pertains not only to packet and cell-based networks, but also to Time Division Multiplexed (TDM) and optical networks. GMPLS is well known and described in RFC 3945, entitled Generalized Multi-Protocol Label Switching (GMPLS) Architecture, dated October 2004, and RFC 3946, entitled Generalized Multi-Protocol Label Switching (GMPLS) Extensions for Synchronous Optical Network (SONET) and Synchronous Digital Hierarchy (SDH) Control, dated October 2004, the contents of both of which are hereby incorporated by reference in their entirety.
The present invention is directed to a technique for protecting traffic (IP) against the failure of a border router between two domains in a computer network using Fast Reroute and backup tunnels. According to the protection technique, the border router (i.e., the "protected border router") announces/advertises a list of all its adjacent next-hop routers (i.e., its "neighbors") residing in first and second domains interconnected by the protected border router. A neighbor in the first domain that is immediately upstream to the protected border router and that is configured to protect the border router (i.e., the "protecting router") learns address prefixes (i.e., "protected prefixes") reachable from the next-hop routers in the second domain (i.e., "next-next-hops," NNHOPs to the protected prefixes from the protecting router). The protecting router calculates a backup tunnel to each NNHOP that excludes the protected border router, and associates each backup tunnel with one or more protected prefixes accordingly. When the protected border router fails, Fast Reroute is triggered, and the protected prefix traffic is rerouted by the protecting router onto an appropriate backup tunnel to a corresponding NNHOP.
In the illustrative embodiment of the present invention, the protected border router employs an Interior Gateway Protocol (IGP), such as the Open Shortest Path First (OSPF) or Intermediate-System-to-Intermediate-System (IS-IS) protocol, to transmit the advertised list of neighbors. Notably, a specific type of IGP advertisement may be used to limit the transmission to only the neighbors. Fig. 3 illustrates an exemplary IGP Advertisement 300 that may be transmitted (flooded) by the router 200 (e.g., the protected border router). Illustratively, the IGP Advertisement 300 is embodied as a Type-9 Opaque LSA, defined in RFC 2370, entitled The OSPF Opaque LSA Option, dated July 1998, the contents of which are hereby incorporated by reference in its entirety. An Age field 302 stores an age value that may be used for determining whether the IGP Advertisement 300 is valid. The age value is typically initialized to a non-zero integer value, often in units of seconds. The age value may be decremented, e.g., by one every second, until it reaches zero, thereby indicating that the IGP Advertisement has become invalid. That is, every router 200 that stores or floods the IGP Advertisement 300 continuously ages the packet until the age value equals zero. Those skilled in the art will appreciate that other aging mechanisms alternatively may be used, such as incrementing the IGP Advertisement age value from an initial value, e.g., equal to zero, until it reaches a known upper limit.
The advertisement 300 also includes an Options field 304, and an LSA Type field 306 denoting whether the LSA is a Type-9, Type-10, or Type-11 LSA. A Type-11 LSA has AS-wide scope, meaning it is flooded throughout an entire AS, while a Type-10 LSA has area-wide scope, which would not be flooded beyond that area into which the LSA was flooded. A Type-9 LSA, on the other hand, has a link-local scope, which is not flooded beyond the local (sub)network, i.e., only to the adjacent next-hop routers (neighbors). While the present invention illustratively utilizes a Type-9 LSA to limit unnecessary network traffic, those skilled in the art will understand that other LSA Types may be used within the scope of the present invention.
Also included within the IGP Advertisement 300 is an Opaque Type field 308 and an Opaque ID field 310, which signify the type and ID of the Opaque LSA. An Advertising Router field 312 stores a value that identifies the router that generated and originally broadcast the IGP Advertisement 300. A Sequence Number field 314 stores a sequence number indicating the relative version of the IGP Advertisement. Typically, the sequence number stored in the field 314 is incremented, e.g., by one, for every new version of the IGP Advertisement. The IGP Advertisement 300 is therefore considered "stale" (invalid) if its sequence number is less than the sequence number stored in a previously-received version of the IGP Advertisement, i.e., generated by the same advertising node. Accordingly, the routers 200 may be configured to store and forward only the most recent version of an IGP Advertisement, e.g., the version having the largest sequence number.
The IGP Advertisement also includes a Checksum field 316 containing a checksum used to validate the contents of the Advertisement, and a Length field 318 containing a value used to signify either the overall length of the Advertisement or the length of an Opaque Information field 320. The Opaque Information field 320, by definition, is used by the advertising node for transmitting any desired information. According to the invention, the Opaque Information field illustratively includes a list of addresses for the next-hop neighbors adjacent to the protected border router. Using Fig. 1 as an example, the Opaque Information field populated by, e.g., protected border router ABR1, would contain the addresses of adjacent routers A, B, and C.
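As a rough sketch of how the advertisement of Fig. 3 might be serialized, the following packs the named fields into a Type-9 Opaque LSA carrying the neighbor list in the Opaque Information field; the field widths follow the standard 20-byte OSPF LSA header, but the Options value, Opaque Type, checksum handling, and neighbor addresses are assumptions made only for illustration.

import struct

# Minimal sketch of building the advertisement of Fig. 3; checksum left at zero.
def build_neighbor_lsa(age, adv_router, seq, neighbors):
    opaque_info = b"".join(struct.pack("!I", n) for n in neighbors)
    ls_id = (1 << 24) | 0                  # Opaque Type (illustrative) + Opaque ID 0
    length = 20 + len(opaque_info)         # 20-byte LSA header + opaque payload
    header = struct.pack("!HBBIIIHH",
                         age,              # Age field 302
                         0x42,             # Options field 304 (illustrative value)
                         9,                # LSA Type field 306: Type-9, link-local scope
                         ls_id,            # Opaque Type field 308 / Opaque ID field 310
                         adv_router,       # Advertising Router field 312
                         seq,              # Sequence Number field 314
                         0,                # Checksum field 316 (omitted in this sketch)
                         length)           # Length field 318
    return header + opaque_info            # Opaque Information field 320

lsa = build_neighbor_lsa(age=1, adv_router=0x0A000001, seq=0x80000001,
                         neighbors=[0x0A000002, 0x0A000003, 0x0A000004])
print(len(lsa))   # 20-byte header + 3 x 4-byte neighbor addresses = 32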
Alternatively, the IGP Advertisement may be in the form of an IS-IS Link State Packet (not shown), which may include type/length/value (TLV) encoded formats to convey information. The TLV encoded format is used to identify a type (T) of information being communicated (conveyed), a length (L) of information to be conveyed, and a value (V) of the actual information conveyed. The length (L) parameter contained in the length field is typically implementation-specific and can denote the length from the beginning of the Type field of the object to the end. However, the length generally denotes the length of the Value (V) field and not the Type (T) or Length (L) fields. In the alternative embodiment, the IS-IS Packet may include an Extended Object illustratively embodied as a TLV. The Value field of the Extended Object may contain, inter alia, a list of addresses for the next-hop neighbors adjacent to the protected border router, as mentioned above. Notably, however, an IS-IS Link State Packet is generally not limited in scope and is flooded throughout an entire IS-IS level, in which case the non-neighboring routers would be configured to ignore the list of addresses.
Upon receiving the list of neighbors, the protecting router (e.g., a "point of local repair," PLR) parses those NNHOP neighbors that are in the second domain, and learns the address prefixes reachable from each NNHOP. Illustratively, the protecting router parses the NNHOPs from the list of neighbor addresses by comparing the addresses to addresses already known to the protecting router to be in its domain through examination of, e.g., its shortest path tree (SPT). If the protecting router does not have an address listed as an address in its SPT, the address must be in another domain, and hence, an NNHOP reachable through the protected border router. Alternatively, the NNHOPs may be statically configured, e.g., by a system administrator with inter-domain knowledge. This alternative may be practical for use where a protected border router has only a small number of (e.g., two or three) NNHOPs in its surrounding areas.
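The parsing step just described amounts to a set difference between the advertised neighbor list and the nodes already present in the protecting router's SPT; the node names below are illustrative.

# Minimal sketch of NNHOP parsing: any advertised neighbor not in the protecting
# router's own SPT is taken to lie in the other domain, i.e. it is an NNHOP.
advertised_neighbors = {"A", "B", "C"}        # from ABR1's advertisement
own_spt_nodes = {"A", "ABR1", "ABR2"}         # nodes the protecting router sees in its SPT

nnhops = {n for n in advertised_neighbors if n not in own_spt_nodes}
print(nnhops)   # {'B', 'C'} -- the next-next-hops to protect toward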
In one aspect of the present invention, the protecting router creates MPLS TE-LSPs to the NNHOPs and utilizes extensions to a request/response signaling exchange, such as RSVP, to learn the set of address prefixes reachable by those NNHOPs (protected prefixes), as well as to learn which NNHOP would be used by the protected border router to reach such address prefixes. Each TE-LSP is illustratively embodied as an unconstrained two-hop TE-LSP from the protecting router (A) to each NNHOP (B and C) via the protected border router (ABR1). The TE-LSPs are created using any available tunnel establishment protocols, such as explicit path or PCE-based path computation. Also, because they are used only to retrieve reachability information from the NNHOPs, the TE-LSPs may be configured as "fake" TE-LSPs, meaning they reserve no bandwidth and/or transmit no traffic.
During establishment of the TE-LSPs, a variety of techniques are available for the protecting router to learn the address prefixes reachable by the NNHOPs, and to determine which NNHOP to use for each address prefix. Such techniques include, inter alia, static or policy-based routing, or dynamic methods, such as those described in United States Application Serial No. 11/001,349, entitled PROPAGATION OF ROUTING INFORMATION IN RSVP-TE FOR INTER-DOMAIN TE-LSPS, filed by Vasseur et al. on December 1, 2004, the contents of which are hereby incorporated in its entirety. The dynamic methods described therein include the use of extensions to the RSVP signaling exchange to transmit reachability information. For example, a request object is sent to a "target node" along a TE-LSP (e.g., ABR1) to request reachability information, to which the target node responds with an entry object. Illustratively, the reachability information includes one or more address prefixes reachable by the tail-end node (e.g., router C) as determined by the target node. Once such information is obtained from each NNHOP, the protecting router knows which NNHOP would be used by the protected border router to reach certain address prefixes. Alternatively, this information may be dynamically acquired using the IGP advertisement 300 above; however due to the potentially large number of prefixes, this may not be the most optimal method. Of course, the list of NNHOPs, address prefix reachability information, and/or the identity of an NNHOP to use for each reachable prefix may be statically configured, e.g., by a system administrator with inter-domain knowledge of the network topology.
Fig. 4 is a schematic block diagram of an exemplary computer network 100 as in Fig. 1 with illustrative TE-LSPs in accordance with the present invention. Notably, these TE-LSPs are created according to the methods described above. For example, T1 (shown as a dashed line) is a TE-LSP from head-end router A (protecting router) through ABR1 (protected border router) to tail-end router B (which can reach IP1). In addition, T2 (shown as a dotted line) is a TE-LSP from head-end router A through ABR1 to tail-end router C (which can reach IP2).
Using the reachability information, the protecting router calculates backup tunnels to the NNHOPs that are diversely routed around the protected border router (e.g., by traversing another border router between the first and second domains). The backup tunnels are illustratively embodied as any known tunnel type, such as, e.g., MPLS TE-LSPs, L2TP tunnels, PPTP tunnels, or IP tunnels. To assure that the tunnels are diversely routed from the protected border router, network element exclusion or PCE-based inter-domain path computation can be used. In Figs. 1 and 4, a diverse backup tunnel manifests as traversing ABR2. It should be noted that any other border router (not shown) that is separate (diverse) from the protected border router may be used, and that the number of tunnels can be any number up to the number of NNHOPs traversing any available diverse border router.
These backup tunnels are illustratively stored in a shadow table of the protecting router, so that conventional (IP) routing may still be performed by the protecting router using its routing table. Fig. 5 is a schematic block diagram of an exemplary routing table 500 that may be advantageously used with the present invention. Routing table 500 is illustratively stored in memory 240 and includes one or more entries 510, each comprising a plurality of fields for storing a reachable destination address 512, a next-hop interface 514 and next-hop address 516 to reach that destination, and an associated metric (e.g., cost) 518 of reaching the destination. The routing table 500 is illustratively maintained and managed by RIB 245. To that end, the RIB 245 maintains copies of routes (paths) provided by the routing protocols, such as IGP, in order to compute best paths/routes for installation into the routing table 500.
For example, assume that a destination address prefix IP1 is reachable from node A via node B. In addition, the cost of the path A-B connecting node A to node B is "6" (such as via ABR1 of Fig. 1), and the cost of the link(s) from B to the reachable address IP1 is "1." A destination address field 512 of entry 510N contains the reachable address IP1, and the next-hop fields 514, 516 are populated with, e.g., link A-ABR1 and a loopback address of node ABR1, respectively. Note that a loopback address of the next hop node is used as the next-hop address for many reasons, including as a way to avoid depending upon the availability of network interfaces of that node. The cost of IP1 is the cost of all links to the reachable address, i.e., "7."
Associated with IP1 of entry 510N is shadow table 550. As noted, the shadow table 550 illustratively stores backup tunnels created by the protecting router, such that conventional routing still occurs at the protecting router. Fields of the shadow table 550 are populated with the reachability information obtained from the protected border router and corresponding to the backup tunnels. The shadow table 550 essentially comprises the same format as routing table 500, but with destination address prefixes reachable via the NNHOPs of the backup tunnels. Specifically, each entry 560 of the shadow table 550 may include a plurality of fields for storing one or more destination prefixes 562 reachable from the NNHOP, a reference to the tunnel 564 to the NNHOP, the address of the NNHOP 566, and (optionally) a cost metric 568, e.g., "8." Notably, the cost metric for the tunnel may be greater than, less than, or equal to the cost of the IP routing links, and the values "8" and "7" respectively should be taken as examples. Note also that the cost along the protected path may or may not be known by the protecting router, in which case such cost would not be present in the shadow table 550.
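A minimal sketch of the routing table / shadow table arrangement of Fig. 5 follows, keyed per destination prefix; the field values mirror the example costs in the text and are otherwise illustrative assumptions.

# Minimal sketch: the ordinary routing table is used for conventional routing,
# while the shadow table holds the dormant backup-tunnel entries until FRR triggers.
routing_table = {
    "IP1": {"next_hop_if": "link A-ABR1", "next_hop": "ABR1-loopback", "metric": 7},
}
shadow_table = {
    "IP1": {"backup_tunnel": "BT1", "nnhop": "B", "metric": 8},
}

def route(prefix, frr_active=False):
    """Use the routing table normally; fall back to the shadow table under FRR."""
    if frr_active and prefix in shadow_table:
        return shadow_table[prefix]
    return routing_table[prefix]

print(route("IP1"))                    # conventional IP routing via ABR1, cost 7
print(route("IP1", frr_active=True))   # rerouted onto backup tunnel BT1 to NNHOP B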
Fig. 6 is a schematic block diagram of an exemplary computer network 100 as in Fig. 1 with illustrative backup tunnels in accordance with the present invention. Notably, the backup tunnels are created according to the methods described above. For example, BT1 (shown as a dashed line) is a backup tunnel from head-end router A (protecting router) through ABR2 (diverse border router) to tail-end router B (NNHOP to IP1). In addition, BT2 (shown as a dotted line) is a backup tunnel from head-end router A through ABR2 to tail-end router C (NNHOP to IP2). Prior to any change in the state of ABR1 (prior to failure), conventional IP routing (using, e.g., routing table 500) is employed by RIB 245 of protecting router A (i.e., IP packets are IP routed, etc.). The backup tunnels stored in shadow table 550 are "dormant" (unused) until needed, as described below.
Once the protecting router determines that the protected border router has failed (e.g., as denoted by "X" overlaid on ABR1 in Fig. 6), Fast Reroute is triggered, and all traffic destined to a protected prefix is rerouted by the protecting router onto an appropriate backup tunnel to a corresponding NNHOP, e.g., according to the shadow table. The protecting router detects a failure of the protected border router through any known link or node failure detection technique, such as layer 2 failure detection and/or any keep-alive protocol (e.g., IGP or Bidirectional Forwarding Detection, BFD), the latter of which detects, generally, the presence or absence of a connection signal or "heartbeat."
According to the invention, once Fast Reroute has been triggered, the RIB 245 adapts its routing policy to determine whether incoming traffic is destined for the set of protected prefixes (i.e., through the protected border router). Notably, this determination can be made by comparing the destination prefix to the protected prefixes or by comparing the next-hop address of the destination prefix to the protected border router address. Alternatively, upon receiving the traffic, the RIB 245 checks the corresponding routing table entry 510N for a respective shadow table entry 560N, such as through a flag, a pointer, or other indicating means. Should there be no indication of a respective shadow table entry (i.e., the traffic is not destined for a protected prefix), the routing table entry 510N is used in a conventional manner. If, however, the traffic is bound for a protected prefix (e.g., IP1), it is rerouted onto the backup tunnel in the respective shadow table entry 560N, such as in backup tunnel reference field 564 (backup tunnel 1) to the proper NNHOP (router B). Illustratively, a traffic packet (e.g., an IP packet) that is destined for a protected prefix is encapsulated as a tunnel packet (a tunnel label is appended or "pushed" onto the packet), which is rerouted onto a backup tunnel corresponding to the tunnel label (BT1). Once the backup tunnel tail-end node (NNHOP B) receives the tunnel packet, it removes ("pops") the tunnel label, and routes the traffic packet according to the NNHOP's conventional routing table.
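The reroute step can be sketched as follows: traffic bound for a protected prefix has a tunnel label pushed and is diverted onto the backup tunnel, while the NNHOP pops the label and resumes conventional routing; the structures and names are illustrative assumptions.

# Minimal sketch of the push/pop behaviour once Fast Reroute has been triggered.
def reroute(packet, shadow_table):
    entry = shadow_table.get(packet["dst_prefix"])
    if entry is None:
        return packet                                    # not a protected prefix: route normally
    return {"tunnel_label": entry["backup_tunnel"],      # "push" the tunnel label
            "inner": packet}

def nnhop_receive(tunnel_packet):
    inner = tunnel_packet["inner"]                       # "pop" the tunnel label
    return inner                                         # then route by the NNHOP's own table

pkt = {"dst_prefix": "IP1", "data": b"..."}
shadow = {"IP1": {"backup_tunnel": "BT1", "nnhop": "B"}}
print(nnhop_receive(reroute(pkt, shadow)))               # original packet delivered via BT1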
Alternatively, once Fast Reroute has been triggered, the RIB 245 merges the contents of a shadow table entry 560N with a respective routing table entry 510N when the reachable destination addresses match. As a result of the merge, the associated attributes of the routing table entry 510N are updated to reflect attributes of the backup tunnels. For example, the entry 510N of the routing table 500 is updated such that the next-hop interface field 514 contains the backup tunnel reference from entry 564, the next-hop address field 516 contains node B from field 566, and the metric field 518 (if necessary) contains the cost of the backup tunnel (e.g., the value "8") from field 568. It may be desirable, in this alternative embodiment, to store the previous information of the routing table entry 510N, such that in the event the protected border router becomes available, the traffic flow may be restored to its original state.
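The alternative merge behaviour can be sketched as overwriting the routing entry's next-hop attributes with those of the matching shadow entry while retaining a saved copy for later restoration; the entry contents are illustrative.

# Minimal sketch of merging a shadow table entry into the routing table entry.
routing_entry = {"next_hop_if": "link A-ABR1", "next_hop": "ABR1-loopback", "metric": 7}
shadow_entry  = {"next_hop_if": "BT1", "next_hop": "B", "metric": 8}

saved = dict(routing_entry)          # remember pre-failure state for later restoration
routing_entry.update(shadow_entry)   # traffic now follows the backup tunnel attributes
print(routing_entry)                 # {'next_hop_if': 'BT1', 'next_hop': 'B', 'metric': 8}

routing_entry.update(saved)          # protected border router back up: restore original route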
Fig. 7 is a flowchart illustrating a sequence of steps for protecting against the failure of a border router between two domains in accordance with the present invention. The sequence 700 starts at step 705, and continues to step 710, where a protected border router (PBR) (ABR1) advertises a list of next-hop routers to its surrounding neighbors, such as, e.g., through an IGP Advertisement 300 as described above. Upon receiving the advertisement in step 715, a protecting router (PR) configured as such (router A) illustratively establishes a TE-LSP (e.g., a "fake" TE-LSP) to each NNHOP through the protected border router in step 720 (such as to router B via TE-LSP T1 and router C via TE-LSP T2). Using the (fake) TE-LSPs, the protecting router is able to learn the address prefixes reachable from each NNHOP in step 725. For example and referring to Fig. 4, the protecting router learns that IP1 is reachable via router B, and IP2 is reachable via router C. Notably, in the event the reachability information has been statically configured (e.g., by a system administrator), steps 720 and 725 may be omitted.
In step 730, the protecting router computes backup tunnels to the NNHOP(s) (such as backup tunnels BT1 and BT2 to router B and C, respectively). To ensure diversity, each backup tunnel must specifically exclude the protected border router (ABR1), such as by network element exclusion or PCE-based inter-domain path computation as mentioned above. In step 735, the protecting router stores the backup tunnels (e.g., in shadow table 550), and continues conventional routing (e.g., using routing table 500). In step 740, the protecting router detects a failure of the protected border router, and triggers Fast Reroute in step 745. In step 750, the protecting router then determines whether the traffic is destined for the protected prefixes. If not, the traffic is conventionally routed in step 755. Otherwise, all traffic destined for the protected prefixes (i.e., originally through the protected border router) is rerouted onto the corresponding backup tunnels in step 760, e.g., according to the shadow table 550. The sequence ends in step 765.
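Tying the steps of Fig. 7 together, the following sketch builds the shadow table per NNHOP and then diverts protected traffic once the border router failure is detected; the helper names learn_prefixes and compute_backup_tunnel are hypothetical stand-ins for the mechanisms described in the text.

# Minimal sketch of the sequence 700 from a protecting router's point of view.
def protect_border_router(pbr, advertised_neighbors, own_spt_nodes,
                          learn_prefixes, compute_backup_tunnel):
    # Steps 715-725: identify NNHOPs and learn the prefixes reachable via each.
    nnhops = [n for n in advertised_neighbors if n not in own_spt_nodes]
    shadow_table = {}
    for nnhop in nnhops:
        tunnel = compute_backup_tunnel(nnhop, exclude=pbr)     # steps 730-735
        for prefix in learn_prefixes(nnhop):
            shadow_table[prefix] = {"backup_tunnel": tunnel, "nnhop": nnhop}
    return shadow_table

def on_packet(packet, shadow_table, pbr_failed):
    # Steps 740-760: once the PBR failure is detected, divert protected traffic.
    entry = shadow_table.get(packet["dst_prefix"])
    if pbr_failed and entry is not None:
        return ("tunnel", entry["backup_tunnel"], packet)      # Fast Reroute path
    return ("ip", None, packet)                                # conventional routing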
Advantageously, the novel technique protects against the failure of a border router between two domains in a computer network using Fast Reroute and backup tunnels. By creating a limited number of backup tunnels between a protecting router and NNHOPs, the inventive technique provides Fast Reroute protection of the border router that requires minimal configuration and avoids the deployment of backup tunnels from every neighbor in the first domain of the protected border router to every neighbor in the second domain.
While there has been shown and described an illustrative embodiment that protects against the failure of a border router between two domains in a computer network, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the present invention. For example, the invention may also be advantageously used with ASes under applicable circumstances. Alternatively, through modifications to the teachings described herein and/or additional processing, those skilled in the art will understand that the present invention may be adapted for use with ASes generally.
The foregoing description has been directed to specific embodiments of this invention. It will be apparent, however, that other variations and modifications may be made to the described embodiments, with the attainment of some or all of their advantages. For instance, it is expressly contemplated that the teachings of this invention can be implemented as software, including a computer-readable medium having program instructions executing on a computer, hardware, firmware, or a combination thereof. Accordingly, this description is to be taken only by way of example and not to otherwise limit the scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.


We claim:
1. A method for protecting against the failure of a border router between two domains in
a computer network, a first domain of the two domains including a next-hop router and
the second domain of the two domains including one or more next-hop routers
(NNHOPs), the method characterized in:
configuring the next-hop router in the first domain to protect the border router
(protecting router);
learning address prefixes of destinations reachable from the one or more
NNHOPs, the learning performed by the protecting router located in the first domain;
calculating, using a processor of the protecting router located in the first domain, a
backup tunnel to each NNHOP that excludes the border router;
associating each backup tunnel with one or more learned address prefixes; and
in response to failure of the border router, rerouting traffic directed to the one or
more learned address prefixes, via one or more network interfaces of the protecting router
located in the first domain, onto an appropriate backup tunnel to a corresponding
NNHOP.
2. The method as claimed in claim 1, wherein the border router advertises a list of the
one or more adjacent next-hop routers residing in the first and second domains
interconnected by the border router.
3. The method as claimed in claim 2, wherein the border router employs an Interior
Gateway Protocol (IGP) to transmit the advertised list of next-hop routers.
4. The method as claimed in claim 2, wherein the border router specifies a type of
advertisement to limit the transmission to only the next-hop routers.
5. The method as claimed in claim 1, wherein the learning address prefixes
includes creating, at the protecting router, Multiprotocol Label Switching (MPLS) Traffic
Engineering Label Switched Paths (TE-LSPs) to each of the one or more NNHOPs, and
utilizing extensions to a request/response signaling exchange to learn the reachable
address prefixes.
6. The method as claimed in claim 5, wherein the creating TE-LSPs to each of the one or
more NNHOPs creates TE-LSPs without reserving bandwidth.
7. The method as claimed in claim 1, wherein the associating each backup tunnel
includes storing the backup tunnel in a shadow table of the protecting router.
8. A system configured to protect against the failure of a border router between two
domains in a computer network, the system comprising one or more next-hop routers
adjacent to the border router, the one or more adjacent next-hop routers residing in first
and second domains interconnected by the border router, and characterized in that
one of the next-hop routers in the first domain is configured to protect the border
router (protecting router) by utilizing backup tunnels extending from the first domain to
each of the one or more next-hop routers in the second domain (NNHOPs),
each backup tunnel excluding the border router and associated with one or more
address prefixes of destinations reachable from the NNHOP terminating the backup
tunnel, the address prefixes learned from the NNHOPs, where in the event the border
router fails, the learned address prefix traffic is rerouted by the protecting router, via one
or more network interfaces, onto an appropriate backup tunnel to a corresponding
NNHOP.
9. The system as claimed in claim 8, wherein the border router is configured to create
and send an advertisement to the next-hop routers, the advertisement containing a list of
the one or more adjacent next-hop routers.
10. The system as claimed in claim 9, wherein the advertisement employs an Interior
Gateway Protocol (IGP) to transmit the advertised list of next-hop routers.
11. The system as claimed in claim 9, wherein the border router is configured to specify
a specific type of advertisement to limit the transmission to only the next-hop routers.
12. The system as claimed in claim 8, wherein the one or more address prefixes learned
from the NNHOPs are learned using Multiprotocol Label Switching (MPLS) Traffic
Engineering Label Switched Paths (TE-LSPs) created at the protecting router to each of
the one or more NNHOPs, and extensions to a request/response signaling exchange.
13. The system as claimed in claim 12, wherein the TE-LSPs reserve no bandwidth.
14. The system as claimed in claim 8, wherein the protecting router is configured to
maintain a shadow table to store the backup tunnels.


Patent Number 257166
Indian Patent Application Number 4213/DELNP/2007
PG Journal Number 37/2013
Publication Date 13-Sep-2013
Grant Date 06-Sep-2013
Date of Filing 04-Jun-2007
Name of Patentee CISCO TECHNOLOGY, INC.
Applicant Address 170 WEST TASMAN DRIVE, SAN JOSE, CA 95134-1706, USA.
Inventors:
# Inventor's Name Inventor's Address
1 JEAN-PHILIPPE VASSEUR 94 ADAMS STREET, DUNSTABLE, MA 01827, USA.
2 STEFANO B.PREVIDI VIA CITTA' DELLA PIEVE, 88, I-00191 ROMA, ITALY.
PCT International Classification Number H04L 12/26
PCT International Application Number PCT/US2005/041793
PCT International Filing date 2005-11-17
PCT Conventions:
# PCT Application Number Date of Convention Priority Country
1 11/011,814 2004-12-14 U.S.A.