Network Working Group                    X. Fu (Ed.), M. Betts, Q. Wang
Internet Draft                                                      ZTE
Intended Status: Informational                                V. Manral
Expires: April 21, 2013                          Hewlett-Packard Corp.
                                             D. McDysan (Ed.), A. Malis
                                                                Verizon
                                                           S. Giacalone
                                                        Thomson Reuters
                                                               J. Drake
                                                       Juniper Networks

                                                       October 22, 2012

        Loss and Delay Traffic Engineering Framework for MPLS

               draft-fuxh-mpls-delay-loss-te-framework-06

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html

This Internet-Draft will expire on April 21, 2013.

Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Abstract

Deployment and usage of cloud-based applications and services that use an underlying MPLS network are expanding, and an increasing number of applications are extremely sensitive to delay and packet loss.
Furthermore, in cloud computing an additional decision problem arises: simultaneously choosing the data center to host applications and the MPLS network connectivity such that the overall performance of the application is met. Mechanisms exist to measure and monitor MPLS path performance parameters for packet loss and delay, but these mechanisms work only after the path has been set up. Cloud-based and performance-sensitive applications would benefit from measurement of MPLS network and potential path information provided for use in path computation before LSP setup and in the subsequent selection of LSPs.

This document provides a framework and architecture that addresses operator problems and requirements using current/proposed approaches, documents a scalability assessment and recommendations, and identifies any needed protocol development.

Table of Contents

1. Introduction...................................................3
   1.1. Scope.....................................................3
2. Conventions used in this document..............................3
   2.1. Acronyms..................................................3
3. Overview of Functional Requirements............................4
4. Augment LSP Requestor Signaling with Performance Parameter Values..4
5. Specify Criteria for Node and Link Performance Parameter Estimation, Measurement Methods..5
6. Support Node Level Performance Information when Needed.........5
7. Augment Routing Information with Performance Parameter Estimates..5
8. Augment Signaling Information with Concatenated Estimates......6
9. Define Significant Performance Parameter Change Thresholds and Frequency..6
10. Define Thresholds and Timers for Links with Unusable Performance..7
11. Communicate Significant Performance Changes between Layers....7
12. Support for Networks with Composite Links.....................8
13. Support Performance Sensitive Restoration, Protection and Rerouting..8
14. Support Management and Operational Requirements...............8
15. Major Architectural and Scaling Challenges....................8
16. Approaches Considered but not Taken...........................9
17. IANA Considerations...........................................9
18. Security Considerations.......................................9
19. References....................................................9
   19.1. Normative References.....................................9
   19.2. Informative References...................................9
20. Acknowledgments..............................................10

1. Introduction

This draft is one of two created from draft-fuxh-mpls-delay-loss-te-framework-05 in response to comments from an MPLS Review Team (RT). This draft focuses on a framework in response to the problem statement and requirements described in a peer document [DELAY-LOSS-PS].

The purpose of this draft is to summarize a framework and architecture that meets these requirements using current/proposed approaches, documents a scalability assessment and recommendations, and identifies any needed protocol development.

However, computing an LSP path to meet the Network Performance Objective (NPO) for delay, loss and delay variation of these QoS classes is an open problem [DELAY-LOSS-PS].
This draft describes a framework for how the MPLS TE architecture can be augmented to use information on configured, measured and/or estimated delay, loss and delay variation for use in LSP path computation and selection.

1.1. Scope

A (G)MPLS network may have multiple layers of packet, TDM and/or optical network technology, and an important objective is to predict end-to-end delay, loss and delay variation with acceptable accuracy, based upon the current state of this network, before an LSP is established.

The (G)MPLS network may cover a single IGP area/level, may be a hierarchical IGP under the control of a single administrator, or may involve multiple domains under the control of multiple administrators.

An MPLS architecture for multicast with awareness of delay, loss and delay variation will be taken up in a future version of this draft.

2. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

2.1. Acronyms

DS-TE     Differentiated Services Traffic Engineering

IGP       Interior Gateway Protocol

(G)MPLS   (Generalized) Multi-Protocol Label Switching

LSP       Label Switched Path

RSVP-TE   Resource Reservation Protocol - Traffic Engineering

3. Overview of Functional Requirements

[DELAY-LOSS-PS] describes the general problem to be solved and a number of requirements, grouped in the following subject areas, for performance-sensitive LSP computation and placement:

o Augment LSP Requestor Signaling with Performance Parameter Values

o Specify Criteria for Node and Link Performance Parameter Estimation, Measurement Methods

o Support Node Level Performance Information when Needed

o Augment Routing Information with Performance Parameter Estimates

o Augment Signaling Information with Concatenated Estimates

o Define Significant Performance Parameter Change Thresholds and Frequency

o Define Thresholds and Timers for Links with Unusable Performance

o Communicate Significant Performance Changes between Layers

o Support for Networks with Composite Links

o Support Performance Sensitive Restoration, Protection and Rerouting

o Support Management and Operational Requirements

The following sections describe aspects of a framework for each of the above requirement sets in terms of functions, protocols and operational scenarios for meeting the requirements. In some cases the descriptions reference current/proposed potentially applicable IETF approaches. Throughout the following sections, certain scalability challenges are identified and, in most cases, a potential resolution approach is described; these are summarized at the end of the document.

4. Augment LSP Requestor Signaling with Performance Parameter Values

As described in [DELAY-LOSS-PS], the LSP requestor must be able to make a request of one of two types, 1) a minimum possible value or 2) a maximum acceptable value, for each performance parameter for each LSP.
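As a non-normative illustration, the two request types can be modeled as a small data structure interpreted differently by a path selection function: a "minimum possible value" request asks for the best path found, while a "maximum acceptable value" request accepts any path at or below the bound. The names and structures below are illustrative only, not protocol elements.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RequestType(Enum):
    MINIMUM_POSSIBLE = 1    # request the best (lowest) achievable value
    MAXIMUM_ACCEPTABLE = 2  # any path at or below the stated bound is fine

@dataclass
class PerformanceRequest:
    parameter: str                 # e.g., "delay", "loss", "delay_variation"
    request_type: RequestType
    bound: Optional[float] = None  # only meaningful for MAXIMUM_ACCEPTABLE

def select_path(candidates, request):
    """Pick from (name, {parameter: value}) candidates per the request type."""
    if request.request_type is RequestType.MINIMUM_POSSIBLE:
        # the best value wins
        return min(candidates, key=lambda c: c[1][request.parameter])
    # MAXIMUM_ACCEPTABLE: any candidate meeting the bound is sufficient
    for c in candidates:
        if c[1][request.parameter] <= request.bound:
            return c
    return None  # no path meets the maximum acceptable value
```

Note the asymmetry this sketch captures: a maximum-acceptable request can stop at the first feasible path, whereas a minimum-possible request must compare all candidates.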
The proposed approach [EXPRESS-PATH], within a single IGP area/level, is that only the origin (or head-end) need be aware of the required performance aspects of the LSP, since the origin has performance information for all of the candidate nodes and links from a performance parameter augmented IGP [OSPF-TE-METRIC-EXT], [ISIS-TE-METRIC-EXT].

For LSPs that traverse multiple areas/levels or multiple domains, what is needed in addition to [EXPRESS-PATH] is knowledge of the node and link level performance to determine a path that meets the concatenated performance estimates as described in [DELAY-LOSS-PS]. Furthermore, information available to the LSP originator (e.g., the request type: minimum possible value or maximum acceptable parameter value) may need to be carried in the RSVP-TE signaling message.

An alternative approach could make the performance information available to a (set of) Path Computation Elements (PCE), which the LSP requestor could consult. In this case, there would likely need to be extensions made to the PCE Protocol to carry LSP performance parameter information.

5. Specify Criteria for Node and Link Performance Parameter Estimation, Measurement Methods

Procedures to measure delay and loss on a path level between measurement points have been specified in ITU-T [Y.1731] and [G.709], and in [RFC 6374]. Ideally, a measurement point would occur within adjacent nodes to measure the delay, loss and delay variation performance for a combination of node and link performance. However, since this method is not universally deployed (and may never be deployed in some nodes), other methods of performance parameter estimation are needed to meet the requirements of [DELAY-LOSS-PS].
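As one non-normative illustration of estimation without in-band measurement, a node's transit delay for a QoS class could be interpolated from a load-to-delay curve characterized in a field test or laboratory. The class name and curve data below are hypothetical:

```python
import bisect

# Hypothetical per-QoS-class empirical model: measured (load, delay)
# points from a lab/field characterization of a node; delay in
# microseconds, load as a fraction of link capacity.
EMPIRICAL_CURVE = {
    "EF": [(0.0, 20.0), (0.5, 25.0), (0.8, 40.0), (0.95, 120.0)],
}

def estimate_node_delay(qos_class, load):
    """Linearly interpolate node delay from the measured curve;
    clamp to the endpoints outside the measured load range."""
    points = EMPIRICAL_CURVE[qos_class]
    loads = [l for l, _ in points]
    i = bisect.bisect_left(loads, load)
    if i == 0:
        return points[0][1]
    if i == len(points):
        return points[-1][1]
    (l0, d0), (l1, d1) = points[i - 1], points[i]
    return d0 + (d1 - d0) * (load - l0) / (l1 - l0)
```

An estimate produced this way could then be inserted into routing or signaling in place of a direct measurement.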
Important assumptions from [DELAY-LOSS-PS] are:

o the timeframe of the performance parameter estimate, which is specified as on the order of minutes

o delay and loss are defined as averages, and delay variation is defined based upon statistical quantiles

These assumptions could allow other methods to estimate performance parameters, such as the use of models to predict values based upon other parameters, such as load, queue thresholds and/or meters. For example, one such method could be a per-QoS-class measurement from the ingress of one port to the egress of another port on a node, as a function of load, in a field test or laboratory, to create an empirical model that could be used to insert performance parameter estimates into routing or signaling.

The switching delay on a node can be measured internally, and multiple mechanisms and data structures to do this have been defined [LEE].

6. Support Node Level Performance Information when Needed

If the IGP structure of link-level advertisements is to be used, then nodal delays can be combined with link-level performance [EXPRESS-PATH]. For example, a solution could provide a configuration knob to add a fixed value for a portion (e.g., one half) of the node delay to the link delay.

Alternatively, IGPs or a PCE information base could be extended with node-level performance parameter estimates.

7. Augment Routing Information with Performance Parameter Estimates

[DSTE-PROTO] and [EXPRESS-PATH] use information regarding bandwidth from an IGP area/level for use by performance-sensitive LSPs. For a single IGP area/level, the IGP could be augmented with estimates of delay, loss and delay variation as described in [OSPF-TE-METRIC-EXT], [ISIS-TE-METRIC-EXT]. This should also apply to a Forwarding Adjacency LSP (FA-LSP) [RFC4206]. [EXPRESS-PATH] describes how to use these augmented IGP performance measures to compute explicit paths, for example, at a path computation entity.

For LSPs that cross an IGP area/level boundary and/or traverse multiple domains, some other solution is needed for LSP path computation and selection, such as augmented PCE information bases. These PCE information bases can then be used by the origin or the path computation engine to decide paths with the desired path properties.

Routing information could use two components to represent performance, "static" and "dynamic". The dynamic component is that caused by traffic load and queuing and would be an approximate value. The static component should be fixed and independent of load (e.g., propagation delay).

8. Augment Signaling Information with Concatenated Estimates

[DELAY-LOSS-PS] cites specific sections/appendices from [ITU-T Y.1541] regarding how performance estimates are to be composed and concatenated.

For LSPs that cross an IGP area/level boundary and/or traverse multiple domains (e.g., Autonomous Systems), if detailed performance parameter information is not provided, then one approach would be to signal the requested performance parameters for the LSP in the RSVP-TE signaling message as described in [DELAY-LOSS-RSVP-TE]. If each area/level and/or domain is unaware of the composition of performance parameters from the prior areas/levels and/or domains, then signaling would also need to carry the concatenation of these composed performance estimates.

Signaling information could use two components to represent performance, "static" and "dynamic". The dynamic component is that caused by traffic load and queuing and would be an approximate value. The static component should be fixed and independent of load (e.g., propagation delay).
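As a non-normative sketch, the concatenation of per-segment estimates can be illustrated with the common composition rules: mean delays add, and loss probabilities combine multiplicatively. The quantile-based composition of delay variation defined in [ITU-T Y.1541] is more involved; a simple additive bound is used below purely as a placeholder.

```python
import math

def concatenate_estimates(segments):
    """
    Compose per-segment {"delay", "loss", "delay_variation"} estimates.
    delay: mean delays add across segments.
    loss: end-to-end loss = 1 - product(1 - loss_i).
    delay_variation: additive placeholder only; [ITU-T Y.1541] defines
    quantile-based composition that is not reproduced here.
    """
    return {
        "delay": sum(s["delay"] for s in segments),
        "loss": 1.0 - math.prod(1.0 - s["loss"] for s in segments),
        "delay_variation": sum(s["delay_variation"] for s in segments),
    }
```

In the multi-domain case described above, each boundary node would apply such a composition to the estimates accumulated so far and carry the running result forward in signaling.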
RSVP-TE signaling across multiple areas/levels or domains could include recording the status of previous attempts and retries, and correlation with end-to-end LSP performance measures, to improve on a trial-and-error approach.

Another approach that could meet the requirements would be a (stateful) PCE listening to each domain and communicating with PCEs in other domains to approximate global state, reducing probing and retries and thereby improving scalability.

9. Define Significant Performance Parameter Change Thresholds and Frequency

In the augmented IGP approach, performance value changes should be updated and flooded in the IGP only when there is a significant change in the value. The LSP originator could determine whether the IGP update affects performance and can decide whether to accept the changed value or request another computation of the LSP.

Since the performance characteristics of links, nodes and FA-LSPs may change dynamically, the amount of information flooded in an augmented IGP approach could be excessive and cause instability. In order to control IGP messaging and avoid instability when the delay, delay variation and packet loss values change, thresholds and a limit on the rate of change should be configured in the IGP control plane.

10. Define Thresholds and Timers for Links with Unusable Performance

For the extended IGP or augmented PCE information base approaches, an acceptable and an unacceptable target performance value could be configured for each link (and node, if supported). This should also apply to a Forwarding Adjacency LSP (FA-LSP) [RFC4206]. If a measured or dynamically estimated (e.g., based upon load) performance value increases above the unacceptable threshold, the link (node) could be removed from consideration for future LSP path computations. If it decreases below the acceptable target value, it can then be considered again for future LSP path computations.
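The two configured thresholds form a simple hysteresis: a link is withdrawn when its value rises above the unacceptable threshold and readmitted only after it falls below the (lower) acceptable target, which prevents flapping around a single threshold. A non-normative sketch, with illustrative names:

```python
class LinkUsabilityState:
    """Two-threshold hysteresis for a link's performance-based usability."""

    def __init__(self, acceptable, unacceptable):
        assert acceptable < unacceptable
        self.acceptable = acceptable      # readmit below this value
        self.unacceptable = unacceptable  # withdraw above this value
        self.usable = True

    def update(self, value):
        """Apply a new measured/estimated value; return current usability."""
        if self.usable and value > self.unacceptable:
            self.usable = False  # remove from future LSP path computations
        elif not self.usable and value < self.acceptable:
            self.usable = True   # eligible for path computation again
        return self.usable
```

A value between the two thresholds leaves the state unchanged, so small oscillations do not cause the link to toggle; rate-of-change timers, as required above, would further damp transitions.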
The originators of performance-sensitive LSPs whose paths traverse links (nodes) whose performance has been deemed unacceptable by this threshold should be notified. The LSP originator can then decide whether it will accept the changed performance or request computation of a new path that meets the performance objective.

The frequency of a link (node) changing from an unacceptable to an acceptable state should be controlled by configurable parameters.

11. Communicate Significant Performance Changes between Layers

The generic requirement is for a lower layer network to communicate significant performance changes to a higher layer network.

An end-to-end LSP (e.g., in an IP/MPLS or MPLS-TP network) may traverse an FA-LSP of a server layer (e.g., an OTN ring). The boundary nodes of the FA-LSP SHOULD be aware of the performance information for this FA-LSP.

If the FA-LSP is used to form a routing adjacency and/or used as a TE link in the client network, the composition of the performance values of the links and nodes that the FA-LSP trail traverses needs to be made available for path computation. This is especially important when the performance information of the FA-LSP changes (e.g., due to a maintenance action or failure in an OTN ring).

The frequency of a lower layer network indicating a significant performance change should be controlled by configurable parameters.

A separate end-to-end performance measurement could be done for an LSP after it has been established (e.g., [RFC 6374]) if it is a lower level FA-LSP used in an LSP hierarchy. The measurement of end-to-end LSP performance may be used to inform the higher layer network of a performance parameter change.

If the performance of an FA-LSP changes, the client layer must at least be notified. The client layer can then decide whether it will accept the changed performance or request computation of a new path that meets the performance objective.

12. Support for Networks with Composite Links

In order to assign the LSP to one of the component links with different performance characteristics [CL-REQ], the RSVP-TE message could carry an indication of the request type (i.e., minimum possible value or maximum acceptable performance parameter value) for use in component link selection or creation. The composite link should be able to take these parameters into account when assigning LSP traffic to a component link.

When Composite Links [CL-REQ] are advertised into an augmented IGP, the desirable solution is to advertise performance information for all component links into the augmented IGP [CL-FW]. Otherwise, if only partial or summarized information is advertised, then the originator or a PCE cannot determine whether a computed path will meet the LSP performance objective, and this could lead to crankback signaling.

13. Support Performance Sensitive Restoration, Protection and Rerouting

A change in the performance of links and nodes (e.g., due to a lower level restoration action) may affect the performance of one or more end-to-end LSPs. Pre-defined protection or dynamic re-routing could be triggered to handle this case.

In the case of predefined protection, large amounts of redundant capacity may have a significant negative impact on the overall network cost. If the LSP performance objective cannot be met after a re-route is attempted, an alarm should be generated to the management plane. The solution should periodically attempt restoration, as controlled by configuration parameters, to prevent excessive load on the control plane.

14. Support Management and Operational Requirements

A separate end-to-end performance measurement should be done for an LSP after it has been established (e.g., [RFC 6374], G.709 or Y.1731). An LSP originator may re-compute and re-signal a path when the measured end-to-end performance is unacceptable. The choice by the originator to re-signal could consider a history of how accurately the implementation delivers performance parameter estimates. The re-computation and re-signaling rates should be controlled by configuration parameters to prevent excessive load on the control plane.

15. Major Architectural and Scaling Challenges

As described in the preceding sections, there are several scaling and architectural challenges, with proposed resolutions as described below:

o Frequency of performance parameter value changes is limited to the order of minutes by definition

o Augmented IGP flooding of performance parameter changes within one area/level is controlled by configuration parameters

o Augmented PCE information base performance parameter change frequency within one area/level is controlled by configuration parameters

o Re-computation and re-signaling of LSPs whose composed performance parameter values change to unacceptable is controlled by configuration parameters

o Declaration of links, nodes and FA-LSPs as unacceptable/acceptable is controlled by configuration parameters

o Frequency of a lower layer network indicating a significant performance change is controlled by configuration parameters

o Re-computation and re-signaling of LSPs whose measured end-to-end performance is unacceptable is controlled by configuration parameters

16. Approaches Considered but not Taken

One approach would be for the PCE to compute paths for use by the LSP originator in signaling. Some measurement method (e.g., [RFC 6374]) could then be used to measure the performance of this path. If the measurement indicates that the performance objective is not met, then another request is made to the PCE for a different path, the originator signals for the LSP to be set up, and the path is measured again. This "trial and error" process is very inefficient, and a more predictable method is required.

17. IANA Considerations

No new IANA considerations are raised by this document.

18. Security Considerations

This document raises no new security issues.

19. References

19.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

19.2. Informative References

[DELAY-LOSS-PS] X. Fu, D. McDysan et al., "Delay and Loss Traffic Engineering Problem Statement for MPLS", draft-fuxh-mpls-delay-loss-te-problem-statement.

[DSTE-PROTO] Le Faucheur, F., Ed., "Protocol Extensions for Support of Diffserv-aware MPLS Traffic Engineering", RFC 4124, June 2005.

[ISIS-TE-METRIC-EXT] S. Previdi, "IS-IS Traffic Engineering (TE) Metric Extensions", draft-previdi-isis-te-metric-extensions.

[OSPF-TE-METRIC-EXT] S. Giacalone, "OSPF Traffic Engineering (TE) Metric Extensions", draft-ietf-ospf-te-metric-extensions.

[EXPRESS-PATH] A. Atlas et al., "Performance-based Path Selection for Explicitly Routed LSPs", draft-atlas-mpls-te-express-path.

[Y.1731] ITU-T Recommendation Y.1731, "OAM functions and mechanisms for Ethernet based networks", February 2008.

[G.709] ITU-T Recommendation G.709, "Interfaces for the Optical Transport Network (OTN)", December 2009.

[RFC 6374] D. Frost, S. Bryant, "Packet Loss and Delay Measurement for MPLS Networks", RFC 6374, September 2011.

[DELAY-LOSS-RSVP-TE] X. Fu, "RSVP-TE extensions for Delay and Loss Traffic Engineering", draft-fuxh-mpls-delay-loss-rsvp-te-ext.

[ITU-T Y.1541] ITU-T Recommendation Y.1541, "Network performance objectives for IP-based services", 2011.

[CL-REQ] C. Villamizar, "Requirements for MPLS Over a Composite Link", draft-ietf-rtgwg-cl-requirement.

[RFC4206] Kompella, K. and Y. Rekhter, "Label Switched Paths (LSP) Hierarchy with Generalized Multi-Protocol Label Switching (GMPLS) Traffic Engineering (TE)", RFC 4206, October 2005.

[CL-FW] C. Villamizar et al., "Composite Link Framework in Multi Protocol Label Switching (MPLS)", draft-ietf-rtgwg-cl-framework.

[LEE] Myungjin Lee, Sharon Goldberg, Ramana Rao Kompella, George Varghese, "Fine-Grained Latency and Loss Measurements in the Presence of Reordering", http://www.cs.bu.edu/fac/goldbe/papers/sigmet2011.pdf.

20. Acknowledgments

The authors would like to thank the MPLS Review Team of Stewart Bryant, Daniel King and He Jia for their many helpful comments and suggestions in July 2012.

Authors' Addresses

Xihua Fu
ZTE
Email: fu.xihua@zte.com.cn

Vishwas Manral
Hewlett-Packard Corp.
191111 Pruneridge Ave.
Cupertino, CA 95014
US
Phone: 408-447-1497
Email: vishwas.manral@hp.com

Dave McDysan
Verizon
Email: dave.mcdysan@verizon.com

Andrew Malis
Verizon
Email: andrew.g.malis@verizon.com

Spencer Giacalone
Thomson Reuters
195 Broadway
New York, NY 10007
US
Phone: 646-822-3000
Email: spencer.giacalone@thomsonreuters.com

Malcolm Betts
ZTE
Email: malcolm.betts@zte.com.cn

Qilei Wang
ZTE
Email: wang.qilei@zte.com.cn

John Drake
Juniper Networks
Email: jdrake@juniper.net