OPSAWG                                                         B. Claise
Internet-Draft                                               J. Quilbeuf
Intended status: Informational                       Cisco Systems, Inc.
Expires: May 19, 2020                                  November 16, 2019

       Service Assurance for Intent-based Networking Architecture
           draft-claise-opsawg-service-assurance-architecture-01

Abstract

   This document describes the architecture for Service Assurance for
   Intent-based Networking (SAIN).  This architecture aims at assuring
   that service instances are correctly running.  As services rely on
   multiple subservices provided by the underlying network devices,
   getting the assurance of a healthy service is only possible with a
   holistic view of the network devices.  This architecture not only
   helps to correlate a service degradation with its network root cause
   but also to identify which services are impacted when a network
   component fails or degrades.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 19, 2020.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Terminology
   2.  Introduction
   3.  Architecture
     3.1.  Decomposing a Service Instance Configuration into an
           Assurance Graph
     3.2.  Intent and Assurance Graph
     3.3.  Subservices
     3.4.  Building the Expression Graph from the Assurance Graph
     3.5.  Building the Expression from a Subservice
     3.6.  Open Interfaces with YANG Modules
   4.  Security Considerations
   5.  IANA Considerations
   6.  Open Issues
   7.  References
     7.1.  Normative References
     7.2.  Informative References
   Appendix A.  Changes between revisions
   Acknowledgements
   Authors' Addresses

1.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

   SAIN Agent: Component that communicates with a device, a set of
   devices, or another agent to build an expression graph from a
   received assurance graph and perform the corresponding computation.

   Assurance Graph: DAG representing the assurance case for one or
   several service instances.  The nodes are the service instances
   themselves and the subservices; the edges indicate dependency
   relations.

   SAIN Collector: Component that fetches or receives the computer-
   consumable output of the agent(s) and displays it in a user-friendly
   form or processes it locally.

   DAG: Directed Acyclic Graph.

   ECMP: Equal-Cost Multipath.

   Expression Graph: Generic term for a DAG representing a computation
   in SAIN.  More specific terms are:

   o  Subservice Expressions: expression graph representing all the
      computations to execute for a subservice.

   o  Service Expressions: expression graph representing all the
      computations to execute for a service instance, i.e., including
      the computations for all dependent subservices.

   o  Global Computation Graph: expression graph representing all the
      computations to execute for all service instances (i.e., all
      computations performed).

   Dependency: The directed relationship between subservice instances
   in the assurance graph.

   Informational Dependency: Type of dependency whose score does not
   impact the score of its parent subservice or service instance(s) in
   the assurance graph.  However, the symptoms should be taken into
   account in the parent service instance or subservice instance(s),
   for informational reasons.

   Impacting Dependency: Type of dependency whose score impacts the
   score of its parent subservice or service instance(s) in the
   assurance graph.  The symptoms are taken into account in the parent
   service instance or subservice instance(s) as part of its health
   status.

   Metric: Information retrieved from a network device.

   Metric Engine: Maps metrics to a list of candidate metric
   implementations depending on the target model.

   Metric Implementation: Actual way of retrieving a metric from a
   device.

   Network Service YANG Module: Describes the characteristics of a
   service, as agreed upon with consumers of that service [RFC8199].

   Service Instance: A specific instance of a service.

   Service Configuration Orchestrator: Quoting [RFC8199]: "Network
   Service YANG Modules describe the characteristics of a service, as
   agreed upon with consumers of that service.  That is, a service
   module does not expose the detailed configuration parameters of all
   participating network elements and features but describes an
   abstract model that allows instances of the service to be decomposed
   into instance data according to the Network Element YANG Modules of
   the participating network elements.  The service-to-element
   decomposition is a separate process; the details depend on how the
   network operator chooses to realize the service.  For the purpose of
   this document, the term "orchestrator" is used to describe a system
   implementing such a process."

   SAIN Orchestrator: Component of SAIN in charge of fetching the
   configuration specific to each service instance and converting it
   into an assurance graph.

   Health status: Score and symptoms indicating whether a service
   instance or a subservice is healthy.  A non-maximal score MUST
   always be explained by one or more symptoms.

   Health score: Integer ranging from 0 to 100 indicating the health of
   a subservice.  A score of 0 means that the subservice is broken; a
   score of 100 means that the subservice is operating perfectly.

   Subservice: Part of an assurance graph that assures a specific
   feature or subpart of the network system.

   Symptom: Reason explaining why a service instance or a subservice is
   not completely healthy.

2.  Introduction

   Network Service YANG Modules [RFC8199] describe the configuration,
   state data, operations, and notifications of abstract
   representations of services implemented on one or multiple network
   elements.

   Quoting [RFC8199]: "Network Service YANG Modules describe the
   characteristics of a service, as agreed upon with consumers of that
   service.  That is, a service module does not expose the detailed
   configuration parameters of all participating network elements and
   features but describes an abstract model that allows instances of
   the service to be decomposed into instance data according to the
   Network Element YANG Modules of the participating network elements.
   The service-to-element decomposition is a separate process; the
   details depend on how the network operator chooses to realize the
   service.  For the purpose of this document, the term "orchestrator"
   is used to describe a system implementing such a process."

   In other words, service configuration orchestrators deploy Network
   Service YANG Modules through the configuration of Network Element
   YANG Modules.
   Network configuration is based on those YANG data
   models, with protocols and encodings such as NETCONF/XML [RFC6241],
   RESTCONF/JSON [RFC8040], gNMI/gRPC/protobuf, etc.  Knowing that a
   configuration has been applied does not imply that the service is
   running correctly (for example, the service might be degraded
   because of a failure in the network); therefore, the network
   operator must monitor the service operational data at the same time
   as the configuration.  The industry has been standardizing on
   telemetry to push network element performance information.

   A network administrator needs to monitor her network and services as
   a whole, independently of the use cases or the management protocols.
   With different protocols come different data models, and different
   ways to model the same type of information.  When network
   administrators deal with multiple protocols, the network management
   systems must perform the difficult and time-consuming job of mapping
   data models: the model used for configuration with the model used
   for monitoring.  This problem is compounded by a large, disparate
   set of data sources (MIB modules, YANG models [RFC7950], IPFIX
   information elements [RFC7011], syslog plain text [RFC3164], TACACS+
   [I-D.ietf-opsawg-tacacs], RADIUS [RFC2865], etc.).  In order to
   avoid this data model mapping, the industry converged on model-
   driven telemetry to stream the service operational data, reusing the
   YANG models used for configuration.  Model-driven telemetry greatly
   facilitates the notion of closed-loop automation, whereby events
   from the network drive remediation changes back into the network.

   However, it proves difficult for network operators to correlate a
   service degradation with the network root cause.  For example, why
   does my L3VPN fail to connect?  Why is this specific service slow?
   The reverse, i.e., which services are impacted when a network
   component fails or degrades, is even more interesting for the
   operators.  For example, which service(s) is(are) impacted when the
   power level (dBm) of this specific optic begins to degrade?  Which
   application is impacted by this ECMP imbalance?  Is that issue
   actually impacting any other customers?

   Intent-based approaches are often declarative, starting from a
   statement such as "The service works correctly" and trying to
   enforce it.  Such approaches are mainly suited for greenfield
   deployments.

   Instead of approaching intent in a declarative way, this framework
   focuses on already defined services and tries to infer the meaning
   of "The service works correctly".  To do so, the framework works
   from an assurance graph, deduced from the service definition and
   from the network configuration.  This assurance graph is decomposed
   into components, which are then assured independently.  The root of
   the assurance graph represents the service to assure, and its
   children represent components identified as its direct dependencies;
   each component can have dependencies as well.  The SAIN architecture
   maintains the correct assurance graph when services are modified or
   when the network conditions change.

   When a service is degraded, the framework will highlight where in
   the assurance graph to look, as opposed to going hop by hop to
   troubleshoot the issue.
   Not only can this framework help to
   correlate a service degradation with the network root cause and
   symptoms, but it can also deduce from the assurance graph the number
   and type of services impacted by a component degradation or
   failure.  This added value informs the operational team where to
   focus its attention for maximum return.

3.  Architecture

   The goal of SAIN is to assure that service instances are operating
   correctly and, if not, to pinpoint what is wrong.  More precisely,
   SAIN computes a score for each service instance and outputs symptoms
   explaining that score, especially why the score is not maximal.  The
   score augmented with the symptoms is called the health status.

   As an example of a service, let us consider a point-to-point L2VPN
   connection (i.e., a pseudowire).  Such a service would take as
   parameters the two ends of the connection (device, interface or
   subinterface, and address of the other end) and configure both
   devices (and maybe more) so that an L2VPN connection is established
   between the two devices.  Examples of symptoms might be "Interface
   has high error rate", "Interface flapping", or "Device almost out
   of memory".

   To compute the health status of such a service, the service is
   decomposed into an assurance graph formed by subservices linked
   through dependencies.  Each subservice is then turned into an
   expression graph that details how to fetch metrics from the devices
   and compute the health status of the subservice.  The subservice
   expressions are combined according to the dependencies between the
   subservices in order to obtain the expression graph that computes
   the health status of the service.
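
   As a non-normative illustration of these concepts, the following
   Python sketch shows one possible way to combine subservice health
   statuses along dependencies.  The class names and the min()-based
   combination rule are assumptions of this sketch; the architecture
   does not mandate any particular combination heuristic.

   # Non-normative sketch: one possible way to combine subservice
   # health statuses.  All names and the min()-based rule are
   # assumptions of this sketch, not mandated by the architecture.
   from dataclasses import dataclass, field
   from typing import Callable, List

   @dataclass
   class HealthStatus:
       score: int                    # 0 = broken, 100 = fully healthy
       symptoms: List[str] = field(default_factory=list)

   @dataclass
   class Subservice:
       name: str
       compute: Callable[[], HealthStatus]   # subservice expression
       impacting_deps: List["Subservice"] = field(default_factory=list)
       informational_deps: List["Subservice"] = \
           field(default_factory=list)

       def health(self) -> HealthStatus:
           own = self.compute()
           score, symptoms = own.score, list(own.symptoms)
           for dep in self.impacting_deps:
               dep_health = dep.health()
               # Impacting dependency: the child's score caps the
               # parent's score, and its symptoms are propagated.
               score = min(score, dep_health.score)
               symptoms += dep_health.symptoms
           for dep in self.informational_deps:
               # Informational dependency: symptoms are propagated,
               # but the score is left untouched.
               symptoms += dep.health().symptoms
           return HealthStatus(score, symptoms)

   With this sketch, a service whose physical interface subservice
   reports a score of 70 with the symptom "Interface has high error
   rate" would itself report a score of at most 70, together with that
   symptom.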

   The overall architecture of our solution is presented in Figure 1.
   Based on the service configuration, the SAIN orchestrator deduces
   the assurance graph.  It then sends the assurance graph, along with
   some other configuration options, to the SAIN agents.  The SAIN
   agents are responsible for building the expression graph and
   computing the health statuses in a distributed manner.  The
   collector is in charge of collecting and displaying the current
   health status of the assured service instances and subservices.
   Finally, the automation loop is closed by having the SAIN collector
   provide feedback to the network orchestrator.

        +-----------------+
        |     Service     |
        |  Configuration  |<--------------------+
        |  Orchestrator   |                     |
        +-----------------+                     |
          |       |                             |
          |       | Network                     |
          |       | Service                     | Feedback
          |       | Instance                    | Loop
          |       | Configuration               |
          |       |                             |
          |       V                             |
          |  +-----------------+      +-------------------+
          |  |      SAIN       |      |       SAIN        |
          |  |  Orchestrator   |      |     Collector     |
          |  +-----------------+      +-------------------+
          |          |                          ^
          |          | Configuration            | Health Status
          |          | (assurance graph)        | (Score + Symptoms)
          |          V                          | Streamed
          |   +-------------------+             | via Telemetry
          |   |+-------------------+            |
          |   ||+-------------------+           |
          |   +||       SAIN        |-----------+
          |    +|       agent       |
          |     +-------------------+
          |         ^   ^   ^
          |         |   |   |
          |         |   |   |   Metric Collection
          V         V   V   V
   +-------------------------------------------------------------+
   |                      Monitored Entities                     |
   |                                                             |
   +-------------------------------------------------------------+

                      Figure 1: SAIN Architecture

   In order to produce the score assigned to a service instance, the
   architecture performs the following tasks:

   o  Analyze the configuration pushed to the network device(s) for
      configuring the service instance and decide which information is
      needed from the device(s) (such a piece of information is called
      a metric) and which operations to apply to the metrics to
      compute the health status.

   o  Stream (via telemetry [RFC8641]) operational and config metric
      values when possible; otherwise, continuously poll them.

   o  Continuously compute the health status of the service instances,
      based on the metric values.

3.1.  Decomposing a Service Instance Configuration into an Assurance
      Graph

   In order to structure the assurance of a service instance, the
   service instance is decomposed into so-called subservice instances.
   Each subservice instance focuses on a specific feature or subpart
   of the network system.

   The decomposition into subservices is an important function of this
   architecture, for the following reasons:

   o  The result of this decomposition is the assurance case of a
      service instance, which can be represented as a graph (called
      the assurance graph) to the operator.

   o  Subservices provide a scope for particular expertise and thereby
      enable contributions from external experts.  For instance, the
      subservice dealing with the optics health should be reviewed and
      extended by an expert in optical interfaces.

   o  Subservices that are common to several service instances are
      reused, reducing the amount of computation needed.

   The assurance graph of a service instance is a DAG representing the
   structure of the assurance case for the service instance.  The
   nodes of this graph are service instances or subservice instances.
   Each edge of this graph indicates a dependency between the two
   nodes at its extremities: the service or subservice at the source
   of the edge depends on the service or subservice at the destination
   of the edge.

   Figure 2 depicts a simplistic example of the assurance graph for a
   tunnel service.  The node at the top is the service instance; the
   nodes below are its dependencies.  In the example, the tunnel
   service instance depends on the peer1 and peer2 tunnel interfaces,
   which in turn depend on the respective physical interfaces, which
   finally depend on the respective peer1 and peer2 devices.  The
   tunnel service instance also depends on the IP connectivity, which
   depends on the IS-IS routing protocol.

                        +------------------+
                        |      Tunnel      |
                        | Service Instance |
                        +------------------+
                                 |
             +-------------------+-------------------+
             |                   |                   |
      +-------------+     +-------------+     +--------------+
      |    Peer1    |     |    Peer2    |     |      IP      |
      |    Tunnel   |     |    Tunnel   |     | Connectivity |
      |  Interface  |     |  Interface  |     |              |
      +-------------+     +-------------+     +--------------+
             |                   |                   |
      +-------------+     +-------------+     +-------------+
      |    Peer1    |     |    Peer2    |     |    IS-IS    |
      |   Physical  |     |   Physical  |     |   Routing   |
      |  Interface  |     |  Interface  |     |   Protocol  |
      +-------------+     +-------------+     +-------------+
             |                   |
      +-------------+     +-------------+
      |    Peer1    |     |    Peer2    |
      |    Device   |     |    Device   |
      +-------------+     +-------------+

                   Figure 2: Assurance Graph Example

   Depicting the assurance graph helps the operator to understand (and
   assert) the decomposition.  The assurance graph shall be maintained
   during normal operation, with addition, modification, and removal
   of service instances.  A change in the network configuration or
   topology shall be reflected in the assurance graph.  As a first
   example, a change of routing protocol from IS-IS to OSPF would
   change the assurance graph accordingly.  As a second example,
   assume that ECMP is in place on the source router for that specific
   tunnel; in that case, multiple interfaces must now be monitored, on
   top of monitoring the ECMP health itself.
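
   Continuing the non-normative sketch introduced in Section 3, the
   assurance graph of Figure 2 could be encoded as follows.  The
   subservice names are hypothetical, and an always-healthy
   placeholder stands in for the real subservice expressions.

   # Non-normative encoding of the Figure 2 assurance graph, reusing
   # the Subservice and HealthStatus classes from the earlier sketch.
   healthy = lambda: HealthStatus(100)    # placeholder expression

   peer1_dev  = Subservice("peer1 device", healthy)
   peer2_dev  = Subservice("peer2 device", healthy)
   peer1_phys = Subservice("peer1 physical interface", healthy,
                           impacting_deps=[peer1_dev])
   peer2_phys = Subservice("peer2 physical interface", healthy,
                           impacting_deps=[peer2_dev])
   peer1_tun  = Subservice("peer1 tunnel interface", healthy,
                           impacting_deps=[peer1_phys])
   peer2_tun  = Subservice("peer2 tunnel interface", healthy,
                           impacting_deps=[peer2_phys])
   isis       = Subservice("IS-IS routing protocol", healthy)
   ip_conn    = Subservice("IP connectivity", healthy,
                           impacting_deps=[isis])
   tunnel     = Subservice("tunnel service instance", healthy,
                           impacting_deps=[peer1_tun, peer2_tun,
                                           ip_conn])

   In this representation, the change of routing protocol from IS-IS
   to OSPF mentioned above would amount to replacing the isis node,
   illustrating how the graph is maintained as the network changes.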

3.2.  Intent and Assurance Graph

   The SAIN orchestrator analyzes the configuration of a service
   instance to:

   o  Try to capture the intent of the service instance, i.e., what
      the service instance is trying to achieve;

   o  Decompose the service instance into subservices representing the
      network features on which the service instance relies.

   The SAIN orchestrator must be able to analyze configuration from
   various devices and produce the assurance graph.

   To schematize what a SAIN orchestrator does, assume that the
   configuration for a service instance touches two devices and
   configures a virtual tunnel interface on each device.  Then:

   o  Capturing the intent would start by detecting that the service
      instance is actually a tunnel between the two devices, and
      stating that this tunnel must be functional.  This is the
      current state of SAIN; however, it does not completely capture
      the intent, which might additionally include, for instance, the
      latency and bandwidth requirements of this tunnel.

   o  Decomposing the service instance into subservices would result
      in the assurance graph depicted in Figure 2, for instance.

   In order for SAIN to be applied, the configuration necessary for
   each service instance should be identifiable and thus should come
   from a "service-aware" source.  While Figure 1 makes a distinction
   between the SAIN orchestrator and a different component providing
   the service instance configuration, in practice those two
   components are most likely combined.  The internals of the
   orchestrator are currently out of scope of this standardization.

3.3.  Subservices

   A subservice corresponds to a subpart or a feature of the network
   system that is needed for a service instance to function properly.
   In the context of SAIN, subservice is actually a shortcut for
   subservice assurance, that is, the method for assuring that a
   subservice behaves correctly.

   A subservice is characterized by a list of metrics to fetch and a
   list of computations to apply to these metrics in order to produce
   a health status.  Subservices, like services, have high-level
   parameters that define which object should be assured.

3.4.  Building the Expression Graph from the Assurance Graph

   From the assurance graph is derived a so-called expression graph,
   which is actually a DAG whose sources are constants or metrics and
   whose other nodes are operators.  The expression graph encodes all
   the operations needed to produce health statuses from the collected
   metrics.

   Subservices shall be device independent.  To justify this, let's
   consider the interface operational status.  Depending on the device
   capabilities, this status can be collected by an industry-accepted
   YANG module (IETF, OpenConfig), by a vendor-specific YANG module,
   or even by a MIB module.  If the subservice were dependent on the
   mechanism used to collect the operational status, then we would
   need multiple subservice definitions in order to support all the
   different mechanisms.

   In order to keep subservices independent from the metric collection
   method, or, expressed differently, to support multiple combinations
   of platforms, OSes, and even vendors, the framework introduces the
   concept of a "metric engine".  The metric engine maps each device-
   independent metric used in the subservices to a list of device-
   specific metric implementations that precisely define how to fetch
   values for that metric.  The mapping is parameterized by the
   characteristics (model, OS version, etc.) of the device from which
   the metrics are fetched.
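
   The following self-contained, non-normative sketch illustrates the
   metric engine concept for the interface operational status example
   above.  The YANG paths and MIB object named here are well-known
   identifiers, but the mapping structure, the capability keys, and
   the metric name are assumptions of this sketch.

   # Non-normative sketch of a metric engine: map a device-independent
   # metric to a device-specific metric implementation, keyed on the
   # device's capabilities.  A real engine would also key on model,
   # OS version, supported YANG modules, etc.
   from typing import Dict, Tuple

   METRIC_IMPLEMENTATIONS: Dict[Tuple[str, str], str] = {
       ("interface-oper-status", "ietf-yang"):
           "/ietf-interfaces:interfaces-state/interface/oper-status",
       ("interface-oper-status", "openconfig"):
           "/interfaces/interface/state/oper-status",
       ("interface-oper-status", "snmp-mib"):
           "IF-MIB::ifOperStatus",
   }

   def resolve(metric: str, capability: str) -> str:
       """Return the device-specific way to fetch a device-independent
       metric, so that subservice definitions stay device
       independent."""
       try:
           return METRIC_IMPLEMENTATIONS[(metric, capability)]
       except KeyError:
           raise LookupError(
               "no implementation of %s for %s" % (metric, capability))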

3.5.  Building the Expression from a Subservice

   In addition to the list of metrics, each subservice defines a list
   of expressions to apply to the metrics in order to compute the
   health status of the subservice.  The definition or the
   standardization of those expressions (also known as heuristics) is
   currently out of scope of this standardization.
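
   Although the expressions themselves are out of scope, a non-
   normative sketch may help fix ideas.  The following function,
   reusing the HealthStatus class from the sketch in Section 3,
   computes an interface subservice's health status from two metrics;
   the metric names, the threshold, and the scores are arbitrary
   assumptions of this sketch.

   # Non-normative sketch of a subservice expression for an
   # "interface" subservice.  The 1% error-rate threshold and the
   # scores are arbitrary assumptions, not standardized values.
   def interface_health(oper_status: str,
                        in_error_rate: float) -> HealthStatus:
       symptoms = []
       score = 100
       if oper_status != "up":
           symptoms.append("Interface is not operationally up")
           score = 0
       elif in_error_rate > 0.01:    # hypothetical 1% threshold
           symptoms.append("Interface has high error rate")
           score = 50
       return HealthStatus(score, symptoms)

   Under these assumptions, interface_health("up", 0.05) would yield a
   score of 50 with the symptom "Interface has high error rate".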

3.6.  Open Interfaces with YANG Modules

   The interfaces between the architecture components are open thanks
   to the YANG modules specified in YANG Modules for Service Assurance
   [I-D.claise-opsawg-service-assurance-yang]; they specify objects
   for assuring network services based on their decomposition into so-
   called subservices, according to the SAIN architecture.

   These modules are intended for the following use cases:

   o  Assurance graph configuration:

      *  Subservices: configure a set of subservices to assure, by
         specifying their types and parameters.

      *  Dependencies: configure the dependencies between the
         subservices, along with their types.

   o  Assurance telemetry: export the health status of the
      subservices, along with the observed symptoms.

4.  Security Considerations

   TO BE COMPLETED

5.  IANA Considerations

   This document includes no request to IANA.

6.  Open Issues

   o  Security Considerations to be completed.

7.  References

7.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, <https://www.rfc-editor.org/info/rfc8174>.

7.2.  Informative References

   [I-D.claise-opsawg-service-assurance-yang]
              Claise, B. and J. Quilbeuf, "YANG Modules for Service
              Assurance", draft-claise-opsawg-service-assurance-yang
              (work in progress), November 2019.

   [I-D.ietf-opsawg-tacacs]
              Dahm, T., Ota, A., dcmgash@cisco.com, d., Carrel, D.,
              and L. Grant, "The TACACS+ Protocol", draft-ietf-opsawg-
              tacacs-15 (work in progress), September 2019.

   [RFC2865]  Rigney, C., Willens, S., Rubens, A., and W. Simpson,
              "Remote Authentication Dial In User Service (RADIUS)",
              RFC 2865, DOI 10.17487/RFC2865, June 2000,
              <https://www.rfc-editor.org/info/rfc2865>.

   [RFC3164]  Lonvick, C., "The BSD Syslog Protocol", RFC 3164,
              DOI 10.17487/RFC3164, August 2001,
              <https://www.rfc-editor.org/info/rfc3164>.

   [RFC6241]  Enns, R., Ed., Bjorklund, M., Ed., Schoenwaelder, J.,
              Ed., and A. Bierman, Ed., "Network Configuration
              Protocol (NETCONF)", RFC 6241, DOI 10.17487/RFC6241,
              June 2011, <https://www.rfc-editor.org/info/rfc6241>.

   [RFC7011]  Claise, B., Ed., Trammell, B., Ed., and P. Aitken,
              "Specification of the IP Flow Information Export (IPFIX)
              Protocol for the Exchange of Flow Information", STD 77,
              RFC 7011, DOI 10.17487/RFC7011, September 2013,
              <https://www.rfc-editor.org/info/rfc7011>.

   [RFC7950]  Bjorklund, M., Ed., "The YANG 1.1 Data Modeling
              Language", RFC 7950, DOI 10.17487/RFC7950, August 2016,
              <https://www.rfc-editor.org/info/rfc7950>.

   [RFC8040]  Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF
              Protocol", RFC 8040, DOI 10.17487/RFC8040, January 2017,
              <https://www.rfc-editor.org/info/rfc8040>.

   [RFC8199]  Bogdanovic, D., Claise, B., and C. Moberg, "YANG Module
              Classification", RFC 8199, DOI 10.17487/RFC8199, July
              2017, <https://www.rfc-editor.org/info/rfc8199>.

   [RFC8641]  Clemm, A. and E. Voit, "Subscription to YANG
              Notifications for Datastore Updates", RFC 8641,
              DOI 10.17487/RFC8641, September 2019,
              <https://www.rfc-editor.org/info/rfc8641>.

Appendix A.  Changes between revisions

   v00 - v01

   o  Terminology clarifications

   o  Figure 1 improved

Acknowledgements

   The authors would like to thank Stephane Litkowski, Charles Eckel,
   and Rob Wilton for their reviews.

Authors' Addresses

   Benoit Claise
   Cisco Systems, Inc.
   De Kleetlaan 6a b1
   1831 Diegem
   Belgium

   Email: bclaise@cisco.com

   Jean Quilbeuf
   Cisco Systems, Inc.
   1, rue Camille Desmoulins
   92782 Issy Les Moulineaux
   France

   Email: jquilbeu@cisco.com