Internet Draft                                            I. Castineyra
Nimrod Working Group                                      J. N. Chiappa
February 1996                                             M. Steenstrup
draft-ietf-nimrod-routing-arch-01.txt              Expires August 1996

                    The Nimrod Routing Architecture

Status of this Memo

This document is an Internet-Draft.  Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas, and
its working groups.  Note that other groups may also distribute working
documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

To learn the current status of any Internet-Draft, please check the
"1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
Directories on ds.internic.net (US East Coast), nic.nordu.net (Europe),
ftp.isi.edu (US West Coast), or munnari.oz.au (Pacific Rim).

Abstract

We present a scalable internetwork routing architecture, called Nimrod.
The Nimrod architecture is designed to accommodate a dynamic
internetwork of arbitrary size with heterogeneous service requirements
and restrictions and to admit incremental deployment throughout an
internetwork.  The key to Nimrod's scalability is its ability to
represent and manipulate routing-related information at multiple levels
of abstraction.

Contents

1  Introduction

2  Overview of Nimrod
   2.1  Constraints of the Internetworking Environment
   2.2  The Basic Routing Functions
   2.3  Scalability Features
        2.3.1  Clustering and Abstraction
        2.3.2  Restricting Information Distribution
        2.3.3  Local Selection of Feasible Routes
        2.3.4  Caching
        2.3.5  Limiting Forwarding Information

3  Architecture
   3.1  Endpoints
   3.2  Nodes and Adjacencies
   3.3  Maps
        3.3.1  Connectivity Specifications
   3.4  Locators
   3.5  Node Attributes
        3.5.1  Adjacencies
        3.5.2  Internal Maps
        3.5.3  Transit Connectivity
        3.5.4  Inbound Connectivity
        3.5.5  Outbound Connectivity

4  Physical Realization
   4.1  Contiguity
   4.2  An Example
   4.3  Multiple Locator Assignment

5  Forwarding
   5.1  Policy
   5.2  Trust
   5.3  Connectivity Specification Chain (CSC) Mode
   5.4  Flow Mode
   5.5  Datagram Mode
   5.6  Connectivity Specification Sequence Mode

6  Security Considerations

7  Authors' Addresses
1  Introduction

Nimrod is a scalable routing architecture designed to accommodate a
continually expanding and diversifying internetwork.  First suggested
by Noel Chiappa, the Nimrod architecture has undergone revision and
refinement through the efforts of the Nimrod working group of the IETF.
In this document, we present a detailed description of this
architecture.

The goals of Nimrod are as follows:

1. To support a dynamic internetwork of arbitrary size by providing
   mechanisms to control the amount of routing information that must be
   known throughout an internetwork.

2. To provide service-specific routing in the presence of multiple
   constraints imposed by service providers and users.

3. To admit incremental deployment throughout an internetwork.

We have designed the Nimrod architecture to meet these goals.  The key
features of this architecture include:

1. Representation of internetwork connectivity and services in the form
   of maps at multiple levels of abstraction.

2. User-controlled route generation and selection based on maps and
   traffic service requirements.

3. User-directed packet forwarding along established paths.

Nimrod is a general routing architecture that can be applied to routing
both within a single routing domain and among multiple routing domains.
As a general internetwork routing architecture designed to deal with
increased internetwork size and diversity, Nimrod is equally applicable
to both the TCP/IP and OSI environments.

2  Overview of Nimrod

Before describing the Nimrod architecture in detail, we provide an
overview.  We begin with the internetworking requirements, follow with
the routing functions, and conclude with Nimrod's scaling
characteristics.

2.1  Constraints of the Internetworking Environment

Internetworks are growing and evolving systems, in terms of number,
diversity, and interconnectivity of service providers and users, and
therefore require a routing architecture that can accommodate
internetwork growth and evolution.  A complicated mix of factors, such
as technological advances, political alliances, and service supply and
demand economics, will determine how an internetwork will change over
time.
However, correctly predicting all of these factors and all of their
effects on an internetwork may not be possible.  Thus, the flexibility
of an internetwork routing architecture is its key to handling
unanticipated requirements.

In developing the Nimrod architecture, we first assembled a list of
internetwork environmental constraints that have implications for
routing.  This list, enumerated below, includes observations about the
present Internet; it also includes predictions about internetworks five
to ten years in the future.

1.  The Internet will grow to include O(10^9) networks.

2.  The number of internetwork users may be unbounded.

3.  The capacity of internetwork resources is steadily increasing, but
    so is the demand for these resources.

4.  Routers and hosts have finite processing capacity and finite
    memory, and networks have finite transmission capacity.

5.  Internetworks comprise different types of communications media --
    including wireline, optical and wireless, terrestrial and
    satellite, shared multiaccess and point-to-point -- with different
    service characteristics in terms of throughput, delay, error and
    loss distributions, and privacy.

6.  Internetwork elements -- networks, routers, hosts, and processes --
    may be mobile.

7.  Service providers will specify offered services and restrictions
    on access to those services.  Restrictions may be in terms of when
    a service is available, how much the service costs, which users may
    subscribe to the service and for what purposes, and how the user
    must shape its traffic in order to receive a service guarantee.

8.  Users will specify traffic service requirements, which may vary
    widely among sessions.  These specifications may be in terms of
    requested qualities of service, the amounts they are willing to pay
    for these services, the times at which they want these services,
    and the providers they wish to use.

9.  A user traffic session may include m sources and n destinations,
    where m, n >= 1.

10. Service providers and users have a synergistic relationship.  That
    is, as users develop more applications with special service
    requirements, service providers will respond with the services to
    meet these demands.  Moreover, as service providers deliver more
    services, users will develop more applications that take advantage
    of these services.

11. Support for varied and special services will require more
    processing, memory, and transmission bandwidth on the part of both
    the service providers offering these services and the users
    requesting these services.  Hence, many routing-related activities
    will likely be performed not by routers and hosts but rather by
    independent devices acting on their behalf to process, store, and
    distribute routing information.

12. Users requiring specialized services (e.g., high guaranteed
    throughput) will usually be willing to pay more for these services
    and to incur some delay in obtaining them.

13. Service providers are reluctant to introduce complicated protocols
    into their networks, because they are more difficult to manage.

14. Vendors are reluctant to implement complicated protocols in their
    products, because they take longer to develop.
Collectively, these constraints imply that a successful internetwork
routing architecture must support special features, such as
service-specific routing and component mobility in a large and changing
internetwork, using simple procedures that consume a minimal amount of
internetwork resources.  We believe that the Nimrod architecture meets
these goals, and we justify this claim in the remainder of this
document.

2.2  The Basic Routing Functions

The basic routing functions provided by Nimrod are those provided by
any routing system, namely:

1. Collecting, assembling, and distributing the information necessary
   for route generation and selection.

2. Generating and selecting routes based on this information.

3. Establishing in routers information necessary for forwarding packets
   along the selected routes.

4. Forwarding packets along the selected routes.

The Nimrod approach to providing this routing functionality includes
map distribution according to the "link-state" paradigm, localization
of route generation and selection at traffic sources and destinations,
and specification of packet forwarding through path establishment by
the sources and destinations.

Link-state map distribution permits each service provider to have
control over the services it offers, through both distributing
restrictions in and restricting distribution of its routing
information.  Restricting distribution of routing information serves to
reduce the amount of routing information maintained throughout an
internetwork and to keep certain routing information private.  However,
it also leads to inconsistent routing information databases throughout
an internetwork, as not all such databases will be complete or
identical.  We expect routing information database inconsistencies to
occur often in a large internetwork, regardless of whether privacy is
an issue.  The reason is that we expect some devices to be incapable of
maintaining the complete set of routing information for the
internetwork.  These devices will select only some of the distributed
routing information for storage in their databases.

Route generation and selection, based on maps and traffic service
requirements, may be completely controlled by the users or, more
likely, by devices acting on their behalf, and does not require global
coordination among routers.  Thus, these devices may generate routes
specific to the users' needs, and only those users pay the cost of
generating those routes.  Locally-controlled route generation allows
incremental deployment of and experimentation with new route generation
algorithms, as these algorithms need not be the same at each location
in an internetwork.

Packet forwarding according to paths may be completely controlled by
the users or the devices acting on their behalf.  These paths may be
specified in as much detail as the maps permit.  Such packet forwarding
provides freedom from forwarding loops, even when routers in a path
have inconsistent routing information.  The reason is that the
forwarding path is a route computed by a single device and based on
routing information maintained at a single device.

We note that the Nimrod architecture and Inter-Domain Policy Routing
(IDPR) [1] share in common link-state routing information distribution,
localized route generation, and path-oriented message forwarding.
In developing the 291 Nimrod architecture, we have drawn upon experience gained in developing and 292 experimenting with IDPR. 294 4 295 2.3 Scalability Features 297 Nimrod must provide service-specific routing in arbitrarily large 298 internetworks and hence must employ mechanisms that help to contain the 299 amount of internetwork resources consumed by the routing functions. We 300 provide a brief synopsis of such mechanisms below, noting that arbitrary use 301 of these mechanisms does not guarantee a scalable routing architecture. 302 Instead, these mechanisms must be used wisely, in order enable a routing 303 architecture to scale with internetwork growth. 305 2.3.1 Clustering and Abstraction 307 The Nimrod architecture is capable of representing an internetwork as 308 clusters of entities at multiple levels of abstraction. Clustering reduces 309 the number of entities visible to routing. Abstraction reduces the amount 310 of information required to characterize an entity visible to routing. 312 Clustering begins by aggregating internetwork elements such as hosts, 313 routers, and networks according to some predetermined criteria. These 314 elements may be clustered according to relationships among them, such as 315 "managed by the same authority", or so as to satisfy some objective 316 function, such as "minimize the expected amount of forwarding information 317 stored at each router". Nimrod does not mandate a particular cluster 318 formation algorithm. 320 New clusters may be formed by clustering together existing clusters. 321 Repeated clustering of entities produces a hierarchy of clusters with a 322 unique universal cluster that contains all others. The same clustering 323 algorithm need not be applied at each level in the hierarchy. 325 All elements within a cluster must satisfy at least one relation, namely 326 connectivity. That is, if all elements within a cluster are operational, 327 then any two of them must be connected by at least one route that lies 328 entirely within that cluster. This condition prohibits the formation of 329 certain types of separated clusters, such as the following. Suppose that a 330 company has two branches located at opposite ends of a country and that 331 these two branches must communicate over a public network not owned by the 332 company. Then the two branches cannot be members of the same cluster, 333 unless that cluster also includes the public network connecting them. 335 Once the clusters are formed, their connectivity and service information is 336 abstracted to reduce the representation of cluster characteristics. Example 337 abstraction procedures include elimination of services provided by a small 338 fraction of the elements in the cluster or expression of services in terms 339 of average values. Nimrod does not mandate a particular abstraction 340 algorithm. The same abstraction algorithm need not be applied to each 341 cluster, and multiple abstraction algorithms may be applied to a single 342 cluster. 344 5 345 A particular combination of clustering and abstraction algorithms applied to 346 an internetwork results in an organization related to but distinct from the 347 physical organization of the component hosts, routers, and networks. When a 348 clustering is superimposed over the physical internetwork elements, the 349 cluster boundaries may not necessarily coincide with host, router, or 350 network boundaries. 
A particular combination of clustering and abstraction algorithms
applied to an internetwork results in an organization related to but
distinct from the physical organization of the component hosts,
routers, and networks.  When a clustering is superimposed over the
physical internetwork elements, the cluster boundaries may not
necessarily coincide with host, router, or network boundaries.  Nimrod
performs its routing functions with respect to the hierarchy of
entities resulting from clustering and abstraction, not with respect to
the physical realization of the internetwork.  In fact, Nimrod need not
even be aware of the physical elements of an internetwork.

2.3.2  Restricting Information Distribution

The Nimrod architecture supports restricted distribution of routing
information, both to reduce resource consumption associated with such
distribution and to permit information hiding.  Each cluster determines
the portions of its routing information to distribute and the set of
entities to which to distribute this information.  Moreover, recipients
of routing information are selective in which information they retain.
Some examples are as follows.  Each cluster might automatically
advertise its routing information to its siblings (i.e., those clusters
with a common parent cluster).  In response to requests, a cluster
might advertise information about specific portions of the cluster or
information that applies only to specific users.  A cluster might only
retain routing information from clusters that provide universal access
to their services.

2.3.3  Local Selection of Feasible Routes

Generating routes that satisfy multiple constraints is usually an
NP-complete problem and hence a computationally intensive procedure.
With Nimrod, only those entities that require routes with special
constraints need assume the computational load associated with
generation and selection of such routes.  Moreover, the Nimrod
architecture allows individual entities to choose their own route
generation and selection algorithms and hence the amount of resources
to devote to these functions.

2.3.4  Caching

The Nimrod architecture encourages caching of acquired routing
information in order to reduce the amount of resources consumed and
delay incurred in obtaining the information in the future.  The set of
routes generated as a by-product of generating a particular route is an
example of routing information that is amenable to caching; future
requests for any of these routes may be satisfied directly from the
route cache.  However, as with any caching scheme, the cached
information may become stale and its use may result in poor quality
routes.  Hence, the routing information's expected duration of
usefulness must be considered when determining whether to cache the
information and for how long.

2.3.5  Limiting Forwarding Information

The Nimrod architecture supports two separate approaches for containing
the amount of forwarding information that must be maintained per
router.  The first approach is to multiplex, over a single path (or
tree, for multicast), multiple traffic flows with similar service
requirements.  The second approach is to install and retain forwarding
information only for active traffic flows.

With Nimrod, the service providers and users share responsibility for
the amount of forwarding information in an internetwork.  Users have
control over the establishment of paths, and service providers have
control over the maintenance of paths.  This approach is different from
that of the current Internet, where forwarding information is
established in routers independent of demand for this information.
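BEGIN COMMENT

    As an illustration only, the Python sketch below (all names are
    our own, not defined by Nimrod) shows the second approach above:
    a router installs forwarding state when a flow is set up and
    discards it once the flow has been idle, so that only active
    traffic flows consume forwarding memory.

        import time

        class ForwardingTable:
            """Per-router state, keyed by path-id, kept on demand."""

            def __init__(self, idle_timeout_s=60.0):
                self.entries = {}   # path_id -> (next_hop, last_used)
                self.idle_timeout_s = idle_timeout_s

            def install(self, path_id, next_hop):
                self.entries[path_id] = (next_hop, time.monotonic())

            def lookup(self, path_id):
                entry = self.entries.get(path_id)
                if entry is None:
                    return None   # no state: report an error upstream
                next_hop, _ = entry
                self.entries[path_id] = (next_hop, time.monotonic())
                return next_hop

            def expire_idle(self):
                now = time.monotonic()
                self.entries = {p: (nh, t)
                                for p, (nh, t) in self.entries.items()
                                if now - t <= self.idle_timeout_s}

END COMMENT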
3  Architecture

Nimrod is a hierarchical, map-based routing architecture that has been
designed to support a wide range of user requirements and to scale to
very large dynamic internets.  Given a traffic stream's description and
requirements (both quality of service requirements and
usage-restriction requirements), Nimrod's main function is to manage,
in a scalable fashion, how much information about the internetwork is
required to choose a route for that stream; in other words, to manage
the trade-off between the amount of information about the internetwork
and the quality of the computed route.  Nimrod is implemented as a set
of protocols and distributed databases.  The following sections
describe the basic architectural concepts used in Nimrod.  The
protocols and databases are specified in other documents.

3.1  Endpoints

The basic entity in Nimrod is the endpoint.  An endpoint represents a
user of the internetwork layer: for example, a transport connection.
Each endpoint has at least one endpoint identifier (EID).  Any given
EID corresponds to a single endpoint.  EIDs are globally unique,
relatively short "computer-friendly" bit strings---for example, small
multiples of 64 bits.  EIDs have no topological significance
whatsoever.  For ease of management, EIDs might be organized
hierarchically, but this is not required.

BEGIN COMMENT

    In practice, EIDs will probably have a second form, which we can
    call the endpoint label (EL).  ELs are ASCII strings of unlimited
    length, structured to be used as keys in a distributed database
    (much like DNS names).  Information about an endpoint---for
    example, how to reach it---can be obtained by querying this
    distributed database using the endpoint's label as key.

END COMMENT

3.2  Nodes and Adjacencies

A node represents a region of the physical network.  The region of the
network represented by a node can be as large or as small as desired:
a node can represent a continent or a process running inside a host.
Moreover, as explained in section 4, a region of the network can
simultaneously be represented by more than one node.

An adjacency consists of an ordered pair of nodes.  An adjacency
indicates that traffic can flow from the first node to the second.

3.3  Maps

The basic data structure used for routing is the map.  A map expresses
the available connectivity between different points of an internetwork.
Different maps can represent the same region of a physical network at
different levels of detail.

A map is a graph composed of nodes and adjacencies.  Properties of
nodes are contained in attributes associated with them.  Adjacencies
have no attributes.  Nimrod defines languages to specify attributes and
to describe maps.

Maps are used by routers to generate routes.  In general, it is not
required that different routers have consistent maps.

BEGIN COMMENT

    Nimrod has been designed so that there will be no routing loops
    even when the routing databases of different routers are not
    consistent.  A consistency requirement would not permit
    representing the same region of the internetwork at different
    levels of detail.  Also, a routing-database consistency
    requirement would be hard to guarantee in the very large internets
    Nimrod is designed to support.

END COMMENT
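BEGIN COMMENT

    To make the map structure concrete, here is a minimal sketch in
    Python of a map as a graph of nodes and directed adjacencies.
    The class and attribute names are our own illustrative
    assumptions; Nimrod itself defines only the concepts and the
    languages for describing them.

        class Node:
            """A region of the network, identified by its locator."""

            def __init__(self, locator):
                self.locator = locator
                self.attributes = {}  # e.g., connectivity specs

        class Map:
            """A graph of nodes and ordered (directed) adjacencies."""

            def __init__(self):
                self.nodes = {}           # locator -> Node
                self.adjacencies = set()  # (from_locator, to_locator)

            def add_node(self, node):
                self.nodes[node.locator] = node

            def add_adjacency(self, src, dst):
                # Traffic can flow from src to dst; adjacencies
                # themselves carry no attributes.
                self.adjacencies.add((src, dst))

END COMMENT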
By "router" we mean a physical 489 device that implements functions related to routing: for example, 490 forwarding, route calculation, path set-up. A given device need not be 491 capable of doing all of these to be called a router. The protocol 492 specification document, see [2], splits these functionalities into specific 493 agents. 495 3.3.1 Connectivity Specifications 497 By connectivity between two points we mean the available services and the 498 restrictions on their use. Connectivity specifications are among the 499 attributes associated with nodes. The following are informal examples of 500 connectivity specifications: 502 o "Between these two points, there exists best-effort service with no 503 restrictions." 505 o "Between these two points, guaranteed 10 ms delay can be arranged for 506 traffic streams whose data rate is below 1 Mbyte/sec and that have low 507 (specified) burstiness." 509 o "Between these two points, best-effort service is offered, as long as 510 the traffic originates in and is destined for research organizations." 512 3.4 Locators 514 A locator is a string of binary digits that identifies a location in an 515 internetwork. Nodes and endpoint are assigned locators. Different nodes 516 have necessarily different locators. A node is assigned only one locator. 517 Locators identify nodes and specify *where* a node is in the network. 518 Locators do *not* specify a path to the node. An endpoint can be assigned 519 more than one locator. In this sense, a locator might appear in more than 520 one location of an internetwork. 522 In this document locators are written as ASCII strings that include colons 523 to underline node structure: for example, a:b:c. This does not mean that 524 the representation of locators in packets or in databases will necessarily 525 have something equivalent to the colons. 527 A given physical element of the network might help implement more than one 529 9 530 node---for example, a router might be part of two different nodes. Though 531 this physical element might therefore be associated with more than one 532 locator, the nodes that this physical element implements have each only one 533 locator. 535 The connectivity specifications of a node are identified by a tuple 536 consisting of the node's locator and an ID number. 538 All map information is expressed in terms of locators, and routing 539 selections are based on locators. EIDs are *not* used in making routing 540 decisions---see section 5. 542 3.5 Node Attributes 544 The following are node attributes defined by Nimrod. 546 3.5.1 Adjacencies 548 Adjacencies appear in maps as attributes of both the nodes in the adjacency. 549 A node has two types of adjacencies associated with it: those that identify 550 a neighboring node to which the original node can send data to; and those 551 that identivy a neighboring node that can send data to the original node. 553 3.5.2 Internal Maps 555 As part of its attributes, a node can have internal maps. A router can 556 obtain a node's internal maps---or any other of the node's attributes, for 557 that matter---by requesting that information from a representative of that 558 node. (A router associated with that node can be such a representative.) A 559 node's representative can in principle reply with different internal maps to 560 different requests---for example, because of security concerns. This 561 implies that different routers in the network might have different internal 562 maps for the same node. 
3.5  Node Attributes

The following are node attributes defined by Nimrod.

3.5.1  Adjacencies

Adjacencies appear in maps as attributes of both nodes in the
adjacency.  A node has two types of adjacencies associated with it:
those that identify a neighboring node to which the original node can
send data, and those that identify a neighboring node that can send
data to the original node.

3.5.2  Internal Maps

As part of its attributes, a node can have internal maps.  A router can
obtain a node's internal maps---or any other of the node's attributes,
for that matter---by requesting that information from a representative
of that node.  (A router associated with that node can be such a
representative.)  A node's representative can in principle reply with
different internal maps to different requests---for example, because of
security concerns.  This implies that different routers in the network
might have different internal maps for the same node.

A node is said to own those locators that have as a prefix the locator
of the node.  In a node that has an internal map, the locators of all
nodes in this internal map are prefixed by the locator of the original
node.

Given a map, a more detailed map can be obtained by substituting for
one of the map's nodes one of that node's internal maps.  This process
can be continued recursively.  Nimrod defines standard internal maps
that are intended to be used for specific purposes.  A node's "detailed
map" gives more information about the region of the network represented
by the original node.  Typically, it is closer to the physical
realization of the network than the original node.  The nodes of this
map can themselves have detailed maps.

3.5.3  Transit Connectivity

For a given node, this attribute specifies the services available
between nodes adjacent to the given node.  This attribute is requested
and used when a router intends to route traffic *through* a node.
Conceptually, the transit connectivity attribute is a matrix that is
indexed by a pair of locators: the locators of adjacent nodes.  The
entry indexed by such a pair contains the connectivity specifications
of the services available across the given node for traffic entering
from the first node and exiting to the second node.

The actual format of this attribute need not be a matrix.  This
document does not specify the format for this attribute.

3.5.4  Inbound Connectivity

For a given node, this attribute represents connectivity from adjacent
nodes to points within the given node.  This attribute is requested and
used when a router intends to route traffic to a point within the node
but does not have, and either cannot or does not want to obtain, a
detailed map of the node.  The inbound connectivity attribute
identifies what connectivity specifications are available between pairs
of locators.  The first element of the pair is the locator of an
adjacent node; the second is a locator owned by the given node.

3.5.5  Outbound Connectivity

For a given node, this attribute represents connectivity from points
within the given node to adjacent nodes.  This attribute identifies
what connectivity specifications are available between pairs of
locators.  The first element of the pair is a locator owned by the
given node; the second is the locator of an adjacent node.

The transit, inbound, and outbound connectivity attributes, together
with a list of adjacencies, form the "abstract map."
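BEGIN COMMENT

    The following sketch (Python; the names are our own assumptions)
    shows one way to hold an abstract map: the three connectivity
    attributes become tables keyed by pairs of locators, held
    alongside the node's list of adjacencies.  The actual format is
    deliberately left open by this document.

        class AbstractMap:
            """Transit, inbound, and outbound connectivity for one
            node, plus its adjacencies."""

            def __init__(self, locator):
                self.locator = locator
                self.adjacencies = []  # locators of adjacent nodes
                # (entry locator, exit locator) -> connectivity specs
                self.transit = {}
                # (adjacent locator, owned locator) -> connectivity specs
                self.inbound = {}
                # (owned locator, adjacent locator) -> connectivity specs
                self.outbound = {}

            def transit_specs(self, entry, exit_):
                """Services across this node from 'entry' to 'exit_';
                conceptually a matrix lookup."""
                return self.transit.get((entry, exit_), [])

END COMMENT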
4  Physical Realization

A network is modeled as being composed of physical elements: routers,
hosts, and communication links.  The links can be either
point-to-point---e.g., T1 links---or multi-point---e.g., ethernets,
X.25 networks, IP-only networks, etc.

The physical representation of a network can have associated with it
one or more Nimrod maps.  A Nimrod map is a function not only of the
physical network, but also of the configured clustering of elements
(locator assignment) and of the configured connectivity.

Nimrod has no pre-defined "lowest level": for example, it is possible
to define and advertise a map that is physically realized inside a CPU.
In this map, a node could represent, for example, a process or a group
of processes.  The user of this map need not necessarily know or care.
("It is turtles all the way down!", in [3], page 63.)

4.1  Contiguity

Locators sharing a prefix must be assigned to a contiguous region of a
map.  That is, two nodes in a map that have been assigned locators
sharing a prefix should be connected to each other via nodes that
themselves have been assigned locators with that prefix.  The main
consequence of this requirement is that "you cannot take your locator
with you."

As an example of this (see figure 1), consider two providers x.net and
y.net (these designations are *not* locators but DNS names) which
appear in a Nimrod map as two nodes with locators A and B.  Assume that
corporation z.com (also a DNS name) was originally connected to x.net.
Locators corresponding to elements in z.com are, in this example,
A-prefixed.  Corporation z.com decides to change providers---severing
its physical connection to x.net.  The connectivity requirement
described in this section implies that, after the provider change has
taken place, elements in z.com will have been, in this example,
assigned B-prefixed locators and that it is not possible for them to
receive data destined to A-prefixed locators through y.net.

              A                 B
           +------+          +------+
           | x.net|          | y.net|
           +------+         /+------+
                           /
                      +-----+
                      |z.com|
                      +-----+

          Figure 1: Connectivity after switching providers

The contiguity requirement simplifies routing information exchange: if
it were permitted for z.com to receive A-prefixed locators through
y.net, it would be necessary that a map that contains node B include
information about the existence of a group of A-prefixed locators
inside node B.  Similarly, a map including node A would have to include
information that the set of A-prefixed locators assigned to z.com is
not to be found within A.  The more situations like this happen, the
more the hierarchical nature of Nimrod is subverted to "flat routing."
The contiguity requirement can also be expressed as "EIDs are stable;
locators are ephemeral."
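BEGIN COMMENT

    The contiguity requirement can be checked mechanically.  The
    sketch below (Python; illustrative, reusing the Map class and
    falls_under function from the comments in sections 3.3 and 3.4)
    verifies that the nodes owning locators under a given prefix form
    a connected subgraph when only prefix-internal adjacencies are
    traversed.  For simplicity it treats adjacencies as
    bidirectional.

        from collections import deque

        def contiguous(map_, prefix):
            """True if all nodes under 'prefix' are mutually reachable
            through nodes that are themselves under 'prefix'."""
            members = {loc for loc in map_.nodes
                       if falls_under(loc, prefix)}
            if len(members) <= 1:
                return True
            start = next(iter(members))
            seen, queue = {start}, deque([start])
            while queue:
                cur = queue.popleft()
                for a, b in map_.adjacencies:
                    if a == cur and b in members and b not in seen:
                        seen.add(b)
                        queue.append(b)
                    elif b == cur and a in members and a not in seen:
                        seen.add(a)
                        queue.append(a)
            return seen == members

END COMMENT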
4.2  An Example

Figure 2 shows a physical network.  Hosts are drawn as squares, routers
as diamonds, and communication links as lines.  The network shown has
the following components: five ethernets---EA through EE; five
routers---RA through RE; and four hosts---HA through HD.  Routers RA,
RB, and RC interconnect the backbone ethernets---EB, EC, and ED.
Router RD connects backbone EC to a network consisting of ethernet EA
and hosts HA and HB.  Router RE interconnects backbone ED to a network
consisting of ethernet EE and hosts HC and HD.  The assigned locators
appear in lower case beside the corresponding physical entity.

[Figure 2: Example Physical Network.  The original line drawing is not
reproduced here.  It shows the topology just described, with locators
HA = a:h1, HB = a:h2, EA = a:e1, RB = b1:r1, RD = b2:r1, RA = b1:r2,
RC = b2:t:r2, EB = b1:t:e1, EC = b2:e1, ED = b3:t:e1, RE = b3:t:r1,
EE = c:e1, HC = c:h1, HD = c:h2.]

Figure 3 shows a Nimrod map for that network.  The nodes of the map are
represented as squares.  Lines connecting nodes represent two
adjacencies in opposite directions.  Different regions of the network
are represented at different detail.  Backbone b1 is represented as a
single node.  The region of the network with locators prefixed by "a"
is represented as a single node.  The region of the network with
locators prefixed by "c" is represented in full detail.

[Figure 3: Nimrod Map.  The original drawing is not reproduced here.
It shows top-level nodes b1, b2, and a, with b1 connected to b2 and b2
connected to a; b1 and b2 both connect to node b3:t:e1, which connects
through node b3:t:r1 to node c:e1; nodes c:h1 and c:h2 are each
connected to c:e1.]

4.3  Multiple Locator Assignment

Physical elements can form part of, or implement, more than one node.
In this sense it can be said that they can be assigned more than one
locator.  Consider figure 4, which shows a physical network.  This
network is composed of routers (RA, RB, RC, and RD), hosts (HA, HB, and
HC), and communication links.  Routers RA, RB, and RC are connected
with point-to-point links.  The two horizontal lines in the bottom of
the figure represent ethernets.  The figure also shows the locators
assigned to hosts and routers.

In figure 4, RA and RB have each been assigned one locator (a:t:r1 and
b:t:r1, respectively).  RC has been assigned locators a:y:r1 and
b:d:r1; one of these two locators shares a prefix with RA's locator,
the other shares a prefix with RB's locator.  Hosts HA and HB have each
been assigned three locators.  Host HC has been assigned one locator.
Depending on what communication paths have been set up between points,
different Nimrod maps result.  A possible Nimrod map for this network
is given in figure 5.

[Figure 4: Multiple Locators.  The original drawing is not reproduced
here.  It shows RA (a:t:r1) and RB (b:t:r1) joined by a point-to-point
link, each also linked to RC (a:y:r1 and b:d:r1).  RC attaches to an
ethernet shared with host HA (a:y:h1, b:d:h2, c:h1), router RD (c:r1),
and host HB (a:y:h2, b:d:h1, c:h2); RD attaches to a second ethernet
shared with host HC (c:h3).]

[Figure 5: Nimrod Map.  The original drawing is not reproduced here.
It shows three top-level nodes a, b, and c.  Node a contains nodes a:t
and a:y; node b contains nodes b:t and b:d; node c is drawn without
internal detail.  The only adjacency drawn is between a:t and b:t.]

Nodes and adjacencies represent the *configured* clustering and
connectivity of the network.  Notice that even though a:y and b:d are
defined on the same hardware, the map shows no connection between them:
this connection has not been configured.  A packet given to node `a'
addressed to a locator prefixed with "b:d" would have to travel from
node a to node b via the arc joining them before being directed towards
its destination.  Similarly, the map shows no connection between the c
node and the other two top level nodes.  If desired, these connections
could be established, which would necessitate setting up the exchange
of routing information.  Figure 6 shows the map when these connections
have been established.

[Figure 6: Nimrod Map II.  The original drawing is not reproduced
here.  It shows nodes a:t:r1 and b:t:r1 connected to each other; nodes
a:y:h1 and b:d:h1 connected through c:h1, and nodes a:y:h2 and b:d:h2
connected through c:h2; nodes c:r1 and c:h3 connected to each other;
and routers a:y:r1 and b:d:r1.]

In the strict sense, Nimrod nodes do not overlap: they are distinct
entities.  But, as we have seen in the previous example, a physical
element can be given more than one locator and, in that sense,
participate in implementing more than one node.  That is, two different
nodes might be defined on the same hardware.  In this sense, Nimrod
nodes can be said to overlap.  But to notice this overlap one would
have to know the physical-to-map correspondence.  It is not possible to
know when two nodes share physical assets by looking only at a Nimrod
map.
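BEGIN COMMENT

    A physical element carrying several locators can be modeled
    simply.  The sketch below (Python; names are our own) records the
    assignment and answers whether two map nodes happen to share
    hardware; as noted above, this requires the physical-to-map
    correspondence and cannot be recovered from a Nimrod map alone.

        class PhysicalElement:
            """A host or router, possibly implementing several nodes."""

            def __init__(self, name, locators):
                self.name = name
                self.locators = list(locators)

        def share_hardware(elements, loc1, loc2):
            """True if some physical element carries both locators."""
            return any(loc1 in e.locators and loc2 in e.locators
                       for e in elements)

        ha = PhysicalElement("HA", ["a:y:h1", "b:d:h2", "c:h1"])
        hb = PhysicalElement("HB", ["a:y:h2", "b:d:h1", "c:h2"])
        assert share_hardware([ha, hb], "a:y:h1", "c:h1")  # both HA
        assert not share_hardware([ha, hb], "a:y:h1", "b:d:h1")

END COMMENT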
5  Forwarding

Nimrod supports four forwarding modes:

1. Connectivity Specification Chain (CSC) mode: In this mode, packets
   carry a list of connectivity specifications.  The packet is
   required to go through the nodes that own the connectivity
   specifications using the services specified.  The nodes associated
   with the listed connectivity specifications should define a
   continuous path in the map.  A more detailed description of the
   requirements of this mode is given in section 5.3.

2. Connectivity Specification Sequence (CSS) mode: In this mode,
   packets carry a list of connectivity specifications.  The packet is
   required to go sequentially through the nodes that own each one of
   the listed connectivity specifications, in the order they were
   specified.  The nodes need not be adjacent.  This mode can be seen
   as a generalization of the CSC mode.  Notice that CSCs are said to
   be *chains* of locators, whereas CSSs are *sequences* of locators;
   this difference emphasizes the contiguity requirement in CSCs.  A
   detailed description of this mode is in section 5.6.

3. Flow mode: In this mode, the packet includes a path-id that indexes
   state that has been previously set up in routers along the path.
   Packet forwarding when flow state has been established is
   relatively simple: follow the instructions in the routers' state.
   Nimrod includes a mechanism for setting up this state.  A more
   detailed description of this mode can be found in section 5.4.

4. Datagram mode: In this mode, every packet carries source and
   destination locators.  This mode can be seen as a special case of
   the CSS mode.  Forwarding is done following procedures as indicated
   in section 5.5.

BEGIN COMMENT

    The obvious parallels are between CSC mode and IPv4's strict
    source route and between CSS mode and IPv4's loose source route.

END COMMENT
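BEGIN COMMENT

    The four modes differ mainly in what routing information a packet
    carries.  The sketch below (Python; the field names are our own
    assumptions, not a wire format) records the mode-dependent fields
    alongside the optional source and destination locators and EIDs
    discussed next.

        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class NimrodPacket:
            mode: str            # "csc", "css", "flow", or "datagram"
            # CSC/CSS: (locator, connectivity-spec id) pairs.
            conn_specs: list = field(default_factory=list)
            # Flow mode: index into state set up along the path.
            path_id: Optional[int] = None
            # Datagram mode (and optionally the other modes).
            src_locator: Optional[str] = None
            dst_locator: Optional[str] = None
            # EIDs: used by receivers and for error reporting,
            # never for forwarding decisions.
            src_eid: Optional[bytes] = None
            dst_eid: Optional[bytes] = None
            payload: bytes = b""

END COMMENT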
In all of these modes, the packet may also carry locators and EIDs for
the source and destinations.  In normal operation, forwarding does not
take the EIDs into account; only the receiver does.  EIDs may be
carried for demultiplexing at the receiver and to detect certain error
conditions.  For example, if the EID is unknown at the receiver, the
locator and EID of the source included in the packet could be used to
generate an error message to return to the source (as usual, this error
message itself should probably not be allowed to be the cause of other
error messages).  Forwarding can also use the source locator and EID to
respond to error conditions, for example, to indicate to the source
that the state for a path-id cannot be found.

Packets can be visualized as moving between nodes in a map.  A packet
indicates, implicitly or explicitly, a destination locator.  In a
packet that uses the datagram, CSC, or CSS forwarding mode, the
destination locator is explicitly indicated.  In a packet that uses the
flow forwarding mode, the destination locator is implied by the path-id
and the distributed state in the network (it might also be included
explicitly).  Given a map, a packet moves to the node in this map to
which the associated destination locator belongs.  If the destination
node has a "detailed" internal map, the destination locator must belong
to one of the nodes in this internal map (otherwise it is an error).
The packet goes to this node (and so on, recursively).
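BEGIN COMMENT

    The recursive step in the previous paragraph can be written down
    directly.  The sketch below (Python; illustrative only, reusing
    the Map sketch of section 3.3 and falls_under of section 3.4)
    descends through internal maps until no further detail is
    available, raising an error if a destination locator is not owned
    by any node of an internal map.

        def resolve(map_, dst_locator):
            """Return the chain of nodes, outermost first, that a
            packet for dst_locator moves through within 'map_' and
            its internal maps."""
            owner = next((node for loc, node in map_.nodes.items()
                          if falls_under(dst_locator, loc)), None)
            if owner is None:
                raise ValueError("locator not owned by any node: "
                                 + dst_locator)
            chain = [owner]
            inner = owner.attributes.get("internal_map")
            if inner is not None:
                chain.extend(resolve(inner, dst_locator))
            return chain

END COMMENT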
5.1  Policy

CSC and CSS modes implement policy by specifying the connectivity
specifications associated with those nodes that the packet should
traverse.  Strictly speaking, there is no policy information included
in the packet.  That is, in principle, it is not possible to determine
what criteria were used to select the route by looking at the packet.
The packet only contains the results of the route generation process.
Similarly, in a flow mode packet, policy is implicit in the chosen
route.

A datagram-mode packet can indicate a limited form of policy routing by
the choice of destination and source locators.  For this choice to
exist, the source or destination endpoints must have several locators
associated with them.  This type of policy routing is capable of, for
example, choosing providers.

5.2  Trust

A node that chooses not to divulge its internal map can work internally
any way its administrators decide, as long as the node satisfies its
external characterization as given in its Nimrod map advertisements.
Therefore, the advertised Nimrod map should be consistent with a node's
actual capabilities.  For example, consider figure 7, which shows a
physical network and the advertised Nimrod map.  The physical network
consists of hosts and a router connected together by an ethernet.  This
node can be sub-divided into component nodes by assigning locators as
shown in the figure and advertising the map shown.  The map seems to
imply that it is possible to send packets to node a:x without these
being observable by node a:y; however, this is actually not
enforceable.

In general, it is reasonable to ask how much trust should be put in the
maps obtained by a router.  Even when a node is "trustworthy," and the
information received from the node has been authenticated, there is
always the possibility of an honest mistake.

                        +--+
                        |RA| a:r1
                        +--+
                          |
                          |
           -------------------------------
              |                     |
             +--+                  +--+
             |Ha| a:x:h1           |Hb| a:y:h2
             +--+                  +--+

                   Physical Network

        a                 |
        +-----------------|-----------------+
        |                 |                 |
        |              +----+               |
        |              |a:r1|               |
        |   a:x        +----+       a:y     |
        |  +------+    /    \   +-------+   |
        |  |      |   /      \  |       |   |
        |  |      |--/        \-|       |   |
        |  |      |             |       |   |
        |  +------+             +-------+   |
        |                                   |
        +-----------------------------------+

                 Advertised Nimrod Map

           Figure 7: Example of Misleading Map

5.3  Connectivity Specification Chain (CSC) Mode

Routing for a CSC packet is specified by a list of connectivity
specifications carried in the packet.  These are the connectivity
specifications that make up the specified path, in the order that they
appear along the path.  These connectivity specifications are
attributes of nodes.  The route indicated by a CSC packet is specified
in terms of connectivity specifications rather than physical entities:
a connectivity specification in a CSC-mode packet would correspond to a
type of service between two points of the network without specifying
the physical path.

Given two connectivity specifications that appear consecutively in a
CSC-mode packet, there should exist an adjacency going from the node
corresponding to the first connectivity specification to the node
corresponding to the second connectivity specification.  The first
connectivity specification referenced in a CSC-mode packet should be an
outbound connectivity specification; similarly, the last connectivity
specification referenced in a CSC-mode packet should be an inbound
connectivity specification; the rest should be transit connectivity
specifications.
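BEGIN COMMENT

    The well-formedness rules of this section translate into a short
    check.  In the sketch below (Python; illustrative only) we
    simplify by representing each connectivity specification as a
    pair of the owning node's locator and its kind; in Nimrod proper,
    specifications are identified by a locator plus an ID number.

        def valid_csc(chain, adjacencies):
            """chain: list of (locator, kind) pairs, where kind is
            'outbound', 'transit', or 'inbound', standing for a
            connectivity specification owned by that node;
            adjacencies: set of (from_locator, to_locator) pairs."""
            if len(chain) < 2:
                return False
            # Consecutive specifications must belong to adjacent
            # nodes, so that the chain defines a continuous path.
            for (a, _), (b, _) in zip(chain, chain[1:]):
                if (a, b) not in adjacencies:
                    return False
            kinds = [k for _, k in chain]
            return (kinds[0] == "outbound" and
                    kinds[-1] == "inbound" and
                    all(k == "transit" for k in kinds[1:-1]))

END COMMENT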
5.4  Flow Mode

A flow mode packet includes a path-id field.  This field identifies
state that has been established in intermediate routers.  The packet
might also contain locators and EIDs for the source and destination.
The setup packet also includes resource requirements.  Nimrod includes
protocols to set up and modify flow-related state in intermediate
routers.  These protocols not only identify the requested route but
also describe the resources requested by the flow---e.g., bandwidth,
delay, etc.  The result of a set-up attempt might be either
confirmation of the set-up or notification of its failure.  The
source-specified routes in flow mode setup are specified in terms of
CSSs.

5.5  Datagram Mode

A realistic routing architecture must include an optimization for
datagram traffic, by which we mean user transactions which consist of
single packets, such as a lookup in a remote translation database.
Either of the two previous modes contains unacceptable overhead if much
of the network traffic consists of such datagram transactions.  A
mechanism is needed which is approximately as efficient as the existing
IPv4 "hop-by-hop" mechanism.  Nimrod has such a mechanism.

The scheme can be characterized by the way it divides the state in a
datagram network between routers and the actual packets.  In IPv4, most
packets currently contain only a small amount of state associated with
the forwarding process ("forwarding state")---the hop count.  Nimrod
proposes that enlarging the amount of forwarding state in packets can
produce a system with useful properties.  It was partially inspired by
the efficient source routing mechanism in SIP [4] and the locator
pointer mechanism in Pip [5].

Nimrod datagram mode uses pre-set flow-mode state to support a strictly
non-looping path, but without a source route.

5.6  Connectivity Specification Sequence Mode

The connectivity specification sequence mode specifies a route by a
list of connectivity specifications.  There are no contiguity
restrictions on consecutive connectivity specifications.

BEGIN COMMENT

    The CSS and CSC modes can be seen as combinations of the datagram
    and flow modes.  Therefore, in a sense, the basic forwarding modes
    of Nimrod are just these last two.

END COMMENT

6  Security Considerations

Security considerations are not addressed in this document.

7  Authors' Addresses

Isidro Castineyra
BBN Systems and Technologies
10 Moulton Street
Cambridge, MA 02138
Phone: (617) 873-6233
Email: isidro@bbn.com

Noel Chiappa
Email: gnc@ginger.lcs.mit.edu

Martha Steenstrup
BBN Systems and Technologies
10 Moulton Street
Cambridge, MA 02138
Phone: (617) 873-3192
Email: msteenst@bbn.com

References

[1] M. Steenstrup, "Inter-Domain Policy Routing Protocol
    Specification: Version 1," RFC 1479, June 1993.

[2] M. Steenstrup and R. Ramanathan, "Nimrod Functionality and
    Protocols Specification," Internet Draft, February 1996.

[3] R. Wright, Three Scientists and Their Gods: Looking for Meaning in
    an Age of Information.  New York: Times Books, first ed., 1988.

[4] S. Deering, "SIP: Simple Internet Protocol," IEEE Network, vol. 7,
    May 1993.

[5] P. Francis, "A Near-Term Architecture for Deploying Pip," IEEE
    Network, vol. 7, May 1993.