BMWG                                                      L. Huang, Ed.
Internet-Draft                                               R. Gu, Ed.
Intended status: Informational                             China Mobile
Expires: October 30, 2015                                  B. Mandeville
                                                                Iometrix
                                                              B. Hickman
                                                  Spirent Communications
                                                          April 28, 2015


    Benchmarking Methodology for Virtualization Network Performance
            draft-huang-bmwg-virtual-network-performance-01

Abstract

   As virtual networks have been widely established in Internet data
   centers (IDCs), the performance of virtual networks has become an
   important consideration for IDC managers.  This draft introduces a
   benchmarking methodology for virtualization network performance
   based on the virtual switch.

Status of This Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on October 30, 2015.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Test Considerations
   4.  Key Performance Indicators
   5.  Test Setup
   6.  Benchmarking Tests
       6.1.  Throughput
             6.1.1.  Objectives
             6.1.2.  Configuration parameters
             6.1.3.  Test parameters
             6.1.4.  Test process
             6.1.5.  Test result format
       6.2.  Frame loss rate
             6.2.1.  Objectives
             6.2.2.  Configuration parameters
             6.2.3.  Test parameters
             6.2.4.  Test process
             6.2.5.  Test result format
       6.3.  CPU consumption
             6.3.1.  Objectives
             6.3.2.  Configuration parameters
             6.3.3.  Test parameters
             6.3.4.  Test process
             6.3.5.  Test result format
       6.4.  MEM consumption
             6.4.1.  Objectives
             6.4.2.  Configuration parameters
             6.4.3.  Test parameters
             6.4.4.  Test process
             6.4.5.  Test result format
       6.5.  Latency
             6.5.1.  Objectives
             6.5.2.  Configuration parameters
             6.5.3.  Test parameters
             6.5.4.  Test process
             6.5.5.  Test result format
   7.  Security Considerations
   8.  IANA Considerations
   9.  Normative References
   Authors' Addresses

1.  Introduction

   As virtual networks have been widely established in Internet data
   centers (IDCs), the performance of virtual networks has become an
   important consideration for IDC managers.  This draft introduces a
   benchmarking methodology for virtualization network performance
   with the virtual switch as the DUT.

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].

3.  Test Considerations

   In a conventional test setup with non-virtual test ports, it is
   quite legitimate to assume that the test ports provide the gold
   standard for measuring performance metrics.
   If test results are suboptimal, it is automatically assumed that
   the Device Under Test (DUT) is at fault.  For example, when testing
   throughput at a given frame size, if the result is less than 100%
   throughput, we can safely conclude that it is the DUT that cannot
   deliver line-rate forwarding at that frame size.  We never suspect
   that the tester itself could be the issue.

   In a virtual test environment, where both the DUT and the test tool
   itself are VM based, it is quite a different story.  Just like the
   DUT VM, a VM-based tester will have its own performance
   characteristics and its own performance peak under various
   conditions.

   Tester calibration is essential when benchmarking in a virtual
   environment.  Furthermore, to reduce the enormous number of
   possible combinations of conditions, the tester MUST be calibrated
   with exactly the same combination and parameter settings that the
   user wants to measure against the DUT.  A slight variation of
   conditions and parameter values will cause inaccurate measurements
   of the DUT.

   While it is difficult to list every combination and parameter
   setting, the following table gives the most common example of how
   to calibrate a tester before testing a DUT (VSWITCH) under the same
   conditions.

   Sample calibration permutation:

   ---------------------------------------------------------------
   | Hypervisor | VM VNIC |   VM Memory    | Frame |             |
   |    Type    |  Speed  | CPU Allocation | Size  | Throughput  |
   ---------------------------------------------------------------
   |    ESXi    | 1G/10G  |   512M/1Core   |   64  |             |
   |            |         |                |  128  |             |
   |            |         |                |  256  |             |
   |            |         |                |  512  |             |
   |            |         |                | 1024  |             |
   |            |         |                | 1518  |             |
   ---------------------------------------------------------------

             Figure 1: Sample Calibration Permutation

   Key points are as follows:

   a) The hypervisor type is of ultimate importance to the test
   results.  VM tester(s) MUST be installed on the same hypervisor
   type as the DUT (VSWITCH); different hypervisor types influence the
   test results.

   b) The VNIC speed has an impact on test results.  Testers MUST
   calibrate against all VNIC speeds.

   c) VM allocations of CPU resources and memory influence test
   results.

   d) Frame sizes affect test results dramatically due to the nature
   of virtual machines.

   e) Possible extensions of the above table include the number of VMs
   to be created, latency readings, one VNIC per VM vs. multiple VMs
   sharing one VNIC, and uni-directional vs. bi-directional traffic.

   In addition, the compute environment, including the hardware,
   should also be recorded.

   -----------------------------------------------------
   | Compute environment components    |     Model     |
   -----------------------------------------------------
   | CPU                               |               |
   -----------------------------------------------------
   | Memory                            |               |
   -----------------------------------------------------
   | Hard Disk                         |               |
   -----------------------------------------------------
   | 10G Adapters                      |               |
   -----------------------------------------------------
   | Blade/Motherboard                 |               |
   -----------------------------------------------------

                Figure 2: Compute Environment

   It is important that the test environment used for tester
   calibration be as close as possible to the environment the virtual
   DUT (VSWITCH) will be benchmarked in.

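   As an illustration, a calibration sweep over one permutation of
   Figure 1 can be scripted.  The Python sketch below is illustrative
   only; measure_zero_loss_rate() is a placeholder for whatever
   control API the chosen tester exposes, and all field names are
   assumptions rather than part of this methodology.

      # Hypothetical calibration sweep over one Figure 1 permutation.
      # measure_zero_loss_rate() is a placeholder for the tester's
      # control API; assumed to return the zero-loss rate in Gbps.

      FRAME_SIZES = [64, 128, 256, 512, 1024, 1518]  # bytes (Fig. 1)

      def calibrate(measure_zero_loss_rate, hypervisor="ESXi",
                    vnic_speed="10G", vm_allocation="512M/1Core"):
          """Record tester-to-tester zero-loss throughput for one
          permutation of Figure 1, one row per frame size."""
          results = []
          for size in FRAME_SIZES:
              rate = measure_zero_loss_rate(frame_size=size)
              results.append({"hypervisor": hypervisor,
                              "vnic_speed": vnic_speed,
                              "vm_allocation": vm_allocation,
                              "frame_size": size,
                              "throughput_gbps": rate})
          return results

   The recorded rows fill the Throughput column of Figure 1 and should
   be kept with the later DUT results so that both are known to come
   from identical conditions.
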
   Key points which SHOULD be noted in the test setup are listed as
   follows:

   1. One or more VM tester(s) need to be created for both traffic
   generation and analysis.

   2. The vSwitch suffers a performance penalty as extra VMs are
   added.

   3. The VNIC and its type are needed in the test setup, once again
   to account for the performance penalty introduced when the DUT
   (VSWITCH) is created.

   In summary, calibration should be done in an environment that takes
   into consideration all possible factors which may negatively impact
   test results.

4.  Key Performance Indicators

   A number of key performance indicators for virtual networks are
   listed below:

   a) Throughput under various frame sizes: forwarding performance
   under various frame sizes is a key performance indicator of
   interest.

   b) DUT consumption of CPU: when one or more VM(s) are added, the
   DUT (VSWITCH) will consume more CPU.  Vendors can allocate
   appropriate CPU to reach line-rate performance.

   c) DUT consumption of memory: when one or more VM(s) are added, the
   DUT (VSWITCH) will consume more memory.  Vendors can allocate
   appropriate memory to reach line-rate performance.

   d) Latency readings: some applications are highly sensitive to
   latency.  It is important to obtain latency readings under various
   conditions.

   Other indicators, such as the maximum number of VxLANs supported by
   the virtual switch, can be added when VxLAN is in use.

5.  Test Setup

   The test setup is classified into two traffic models: Model A and
   Model B.

   In traffic model A, a physical tester connects to the server
   hosting the DUT (VSWITCH) and a virtual tester in order to verify
   the benchmark of the server.

                       _______________________________________________
                      |                                               |
   -----------------  |  ----------------         ----------------    |
   |Physical tester|--|--|DUT (VSWITCH) |---------|Virtual tester|    |
   -----------------  |  ----------------         ----------------    |
                      |                    Server                     |
                      |_______________________________________________|

                         Figure 3: Test model A

   In traffic model B, two virtual testers are used to verify the
   benchmark.  In this model, the two testers are installed on one
   server.

    ________________________________________________________________________
   |                                                                        |
   |  ----------------          ----------------          ----------------  |
   |  |Virtual tester|----------|DUT (VSWITCH) |----------|Virtual tester|  |
   |  ----------------          ----------------          ----------------  |
   |                                 Server                                 |
   |________________________________________________________________________|

                         Figure 4: Test model B

   In our test, the test bed consists of a Dell physical server with a
   pair of 10GE NICs and a physical tester.  The virtual tester, which
   occupies 2 vCPUs and 8 GB of memory, and the DUT (VSWITCH) are
   installed on the server.  A 10GE switch and a 1GE switch are used
   for test traffic and management traffic, respectively.

   This test setup is also applicable to VxLAN measurement.

6.  Benchmarking Tests

6.1.  Throughput

   Unlike traditional test cases, where the DUT and the tester are
   separate devices, virtual network testing brings unparalleled
   challenges.  In a virtual network test, the virtual tester and the
   DUT (VSWITCH) are located in one server, which means they are
   physically converged: the tester and the DUT (VSWITCH) share the
   CPU and memory resources of the same server.  Theoretically, the
   virtual tester's operation may therefore influence the DUT
   (VSWITCH)'s performance.  However, given the nature of
   virtualization, this method is the only way to test the performance
   of a virtual DUT.

   With existing technology, the throughput concept of a traditional
   physical switch cannot be applied directly when testing a virtual
   switch.  Traditional throughput indicates a switch's maximum
   forwarding capability at a selected frame size under
   zero-packet-loss conditions.  In virtual environments, however,
   measurement variation is much greater than on dedicated physical
   devices, and because the DUT and the tester cannot be separated, a
   result only demonstrates that the DUT (VSWITCH) achieves a given
   level of performance under the specific test circumstances.

   Therefore, we vary the frame size in the virtual environment and
   test the maximum zero-loss rate, which we regard as the throughput
   indicator.  The throughput should be tested with both test models A
   and B.  The measured throughput serves as a reference value for the
   performance of the virtual DUT.

6.1.1.  Objectives

   The objective of this test is to determine the throughput that the
   DUT (VSWITCH) can support.

6.1.2.  Configuration parameters

   Network parameters should be defined as follows:

   a) the number of virtual testers (VMs)

   b) the number of vNICs of the virtual tester

   c) the CPU type of the server

   d) the vCPUs allocated to the virtual tester (VMs)

   e) the memory allocated to the virtual tester (VMs)

   f) the number and rate of the server NICs

6.1.3.  Test parameters

   a) number of test repetitions

   b) test frame length

6.1.4.  Test process

   1. Configure the VM tester to offer traffic to the vSwitch.

   2. Increase the number of vCPUs in the tester until the traffic
   shows no packet loss.

   3. Record the maximum throughput on the vSwitch.

   4. Change the frame length and repeat steps 1 to 3.

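   Steps 1 to 4 can be automated.  The Python sketch below is a
   simplified illustration; offer_traffic() is a hypothetical tester
   control function, assumed to return the loss ratio and the received
   rate in Gbps for one trial, and is not part of any particular
   product's API.

      # Hypothetical throughput search following the process above.
      def max_zero_loss_rate(offer_traffic, frame_size,
                             max_vcpus=8, duration_s=60):
          """Grow the tester's vCPU allocation (step 2) until the
          offered traffic runs without loss, then return the rate
          actually achieved (step 3)."""
          for vcpus in range(1, max_vcpus + 1):
              loss, rx_gbps = offer_traffic(frame_size=frame_size,
                                            vcpus=vcpus,
                                            duration_s=duration_s)
              if loss == 0.0:
                  return rx_gbps
          return 0.0  # zero loss never reached within max_vcpus

      # Step 4: repeat per frame size, e.g.:
      # results = {s: max_zero_loss_rate(tester, s)
      #            for s in (64, 128, 256, 512, 1024, 1518)}

   Values recorded this way populate the result table of
   Section 6.1.5.
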
6.1.5.  Test result format

   --------------------------
   | Byte| Throughput (Gbps)|
   --------------------------
   |  64 |                  |
   --------------------------
   | 128 |       0.46       |
   --------------------------
   | 256 |       0.84       |
   --------------------------
   | 512 |       1.56       |
   --------------------------
   | 1024|       2.88       |
   --------------------------
   | 1518|       4.00       |
   --------------------------

        Figure 5: Test result format

6.2.  Frame loss rate

   Frame loss rate is also an important indicator in evaluating the
   performance of a virtual switch.  As defined in [RFC1242], the
   percentage of frames that should have been forwarded, but were not
   forwarded due to lack of resources, needs to be tested.  Both model
   A and model B are tested.

6.2.1.  Objectives

   The objective of this test is to determine the frame loss rate
   under different data rates and frame sizes.

6.2.2.  Configuration parameters

   Network parameters should be defined as follows:

   a) the number of virtual testers (VMs)

   b) the number of vNICs of the virtual tester

   c) the CPU type of the server

   d) the vCPUs allocated to the virtual tester (VMs)

   e) the memory allocated to the virtual tester (VMs)

   f) the number and rate of the server NICs

6.2.3.  Test parameters

   a) number of test repetitions

   b) test frame length

   c) test frame rate

6.2.4.  Test process

   1. Configure the VM tester to offer traffic to the vSwitch, with
   the input frame rate changing from the maximum rate down to the
   rate with no frame loss, in decreasing steps of 10%, according to
   [RFC2544].

   2. Record the input frame count and the output frame count on the
   vSwitch.

   3. Calculate the frame loss percentage at each frame rate.

   4. Change the frame length and repeat steps 1 to 3.

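   The rate sweep in steps 1 to 3 can be expressed compactly.  In the
   illustrative sketch below, count_frames() is a hypothetical tester
   hook assumed to return the input and output frame counts for one
   trial; the loss percentage follows the [RFC1242] definition,
   (input - output) * 100 / input.

      # Hypothetical frame-loss sweep following the process above.
      def frame_loss_sweep(count_frames, frame_size, max_rate_gbps):
          """Offer traffic from 100% of the maximum rate downward in
          10% steps and return (rate, loss%) pairs."""
          results = []
          for pct in range(100, 0, -10):     # 100%, 90%, ..., 10%
              rate = max_rate_gbps * pct / 100.0
              tx, rx = count_frames(frame_size=frame_size,
                                    rate_gbps=rate)
              loss_pct = 100.0 * (tx - rx) / tx
              results.append((rate, loss_pct))
              if loss_pct == 0.0:
                  break                      # no-loss rate reached
          return results

   Each (rate, loss%) pair maps to one column of the result table in
   Section 6.2.5.
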
6.2.5.  Test result format

   ----------------------------------------------------------------------------
   |Byte|Maximum frame|90% Maximum frame|80% Maximum frame|...|Frame rate with|
   |    | rate (Gbps) |   rate (Gbps)   |   rate (Gbps)   |   | no loss (Gbps)|
   ----------------------------------------------------------------------------
   |  64|             |                 |                 |   |               |
   ----------------------------------------------------------------------------
   | 128|             |                 |                 |   |               |
   ----------------------------------------------------------------------------
   | 256|             |                 |                 |   |               |
   ----------------------------------------------------------------------------
   | 512|             |                 |                 |   |               |
   ----------------------------------------------------------------------------
   |1024|             |                 |                 |   |               |
   ----------------------------------------------------------------------------
   |1518|             |                 |                 |   |               |
   ----------------------------------------------------------------------------

                        Figure 6: Test result format

6.3.  CPU consumption

   The objective of this test is to determine the CPU load caused by
   the DUT (VSWITCH).  The operation of the DUT (VSWITCH) increases
   the CPU load of the host server, and different vSwitches occupy
   different amounts of CPU.  CPU consumption can therefore be an
   important indicator in benchmarking virtual network performance.

6.3.1.  Objectives

   The objective of this test is to verify the CPU consumption caused
   by the DUT (VSWITCH).

6.3.2.  Configuration parameters

   Network parameters should be defined as follows:

   a) the number of virtual testers (VMs)

   b) the number of vNICs of the virtual tester

   c) the CPU type of the server

   d) the vCPUs allocated to the virtual tester (VMs)

   e) the memory allocated to the virtual tester (VMs)

   f) the number and rate of the server NICs

6.3.3.  Test parameters

   a) number of test repetitions

   b) test frame length

6.3.4.  Test process

   1. Configure the VM tester to offer traffic to the vSwitch at the
   throughput value measured in Section 6.1.

   2. At the same throughput, record the CPU load value of the server
   with the DUT (VSWITCH) shut down and bypassed, respectively.

   3. Calculate the increase in CPU load attributable to running the
   DUT (VSWITCH).

   4. Change the frame length and repeat steps 1 to 3.

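   The measurement in steps 2 and 3 reduces to the difference between
   two readings taken at the same offered load; the memory test in
   Section 6.4 follows the same pattern.  In the sketch below,
   read_usage() and run_traffic() are hypothetical hooks for the
   hypervisor's monitoring interface and the tester, respectively.

      # Hypothetical consumption delta for Sections 6.3.4 and 6.4.4.
      def dut_resource_cost(read_usage, run_traffic, frame_size):
          """Return the usage increase attributable to the DUT
          (VSWITCH).  read_usage() may report CPU in MHz
          (Section 6.3) or memory in MB (Section 6.4)."""
          # Reading 1: DUT (VSWITCH) shut down or bypassed.
          run_traffic(frame_size=frame_size, dut_enabled=False)
          baseline = read_usage()
          # Reading 2: same offered load through the DUT (VSWITCH).
          run_traffic(frame_size=frame_size, dut_enabled=True)
          loaded = read_usage()
          return loaded - baseline
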
6.3.5.  Test result format

   --------------------------------------------------------
   | Byte| Throughput(Gbps)| Server CPU(MHz) | VM CPU(MHz)|
   --------------------------------------------------------
   |  64 |                 |                 |            |
   --------------------------------------------------------
   | 128 |       0.46      |       6395      |    5836    |
   --------------------------------------------------------
   | 256 |       0.84      |       6517      |    6143    |
   --------------------------------------------------------
   | 512 |       1.56      |       6668      |    6099    |
   --------------------------------------------------------
   | 1024|       2.88      |       6280      |    5726    |
   --------------------------------------------------------
   | 1518|       4.00      |       6233      |    5441    |
   --------------------------------------------------------

              Figure 7: Test result format

6.4.  MEM consumption

   The objective of this test is to determine the memory load caused
   by the DUT (VSWITCH).  The operation of the DUT (VSWITCH) increases
   the memory load of the host server, and different vSwitches occupy
   different amounts of memory.  Memory consumption can therefore be
   an important indicator in benchmarking virtual network performance.

6.4.1.  Objectives

   The objective of this test is to verify the memory consumption
   caused by the DUT (VSWITCH) on the host server.

6.4.2.  Configuration parameters

   Network parameters should be defined as follows:

   a) the number of virtual testers (VMs)

   b) the number of vNICs of the virtual tester

   c) the CPU type of the server

   d) the vCPUs allocated to the virtual tester (VMs)

   e) the memory allocated to the virtual tester (VMs)

   f) the number and rate of the server NICs

6.4.3.  Test parameters

   a) number of test repetitions

   b) test frame length

6.4.4.  Test process

   1. Configure the VM tester to offer traffic to the vSwitch at the
   throughput value measured in Section 6.1.

   2. At the same throughput, record the memory consumption of the
   server with the DUT (VSWITCH) shut down and bypassed, respectively.

   3. Calculate the increase in memory consumption attributable to
   running the DUT (VSWITCH).

   4. Change the frame length and repeat steps 1 to 3.

6.4.5.  Test result format

   ------------------------------------------------------------
   | Byte| Throughput(Gbps)| Host Memory(MB) | VM Memory(MB) |
   ------------------------------------------------------------
   |  64 |                 |                 |               |
   ------------------------------------------------------------
   | 128 |       0.46      |       3040      |      696      |
   ------------------------------------------------------------
   | 256 |       0.84      |       3042      |      696      |
   ------------------------------------------------------------
   | 512 |       1.56      |       3041      |      696      |
   ------------------------------------------------------------
   | 1024|       2.88      |       3043      |      696      |
   ------------------------------------------------------------
   | 1518|       4.00      |       3045      |      696      |
   ------------------------------------------------------------

               Figure 8: Test result format

6.5.  Latency

   A physical tester takes its time reference from its own clock or
   from another time source, such as GPS, and can achieve an accuracy
   of 10 ns.  In a virtual network, by contrast, the virtual tester
   takes its reference time from the clock of the Linux system, and
   with current methods the clocks of different servers or VMs cannot
   be synchronized accurately.  Although VMs running some recent
   versions of CentOS or Fedora can achieve an accuracy of 1 ms,
   better results can be obtained if the network provides good NTP
   connectivity.

   Instead of seeking better clock synchronization to improve the
   accuracy of the test, we use an echo server to forward the traffic
   back to the virtual switch.

   We use traffic model A as the time delay test model, substituting
   an echo server for the virtual tester; the echo server simply
   echoes the traffic back.  The delay time thus equals half of the
   interval between the physical tester's transmission of the traffic
   and its reception of the returned traffic.

                       _______________________________________________
                      |                                               |
   -----------------  |  ----------------         ----------------    |
   |Physical tester|--|--|DUT (VSWITCH) |---------| Echo server  |    |
   -----------------  |  ----------------         ----------------    |
                      |                    Server                     |
                      |_______________________________________________|

                     Figure 9: Time delay test model

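   Combined with a baseline run in which the DUT is bypassed (see the
   test process in Section 6.5.4), this halving yields the DUT's
   contribution to latency.  A minimal sketch, assuming all timestamps
   are taken by the physical tester:

      # Hypothetical latency derivation for Sections 6.5 and 6.5.4.
      # All timestamps come from the physical tester (about 10 ns
      # accuracy), so no clock synchronization with the server is
      # needed.
      def one_way_latency(tx_time, rx_time, tx_base, rx_base):
          """Attribute half of the round-trip difference to one
          traversal of the DUT (VSWITCH)."""
          rtt_with_dut = rx_time - tx_time  # via DUT and echo server
          rtt_baseline = rx_base - tx_base  # DUT bypassed
          return (rtt_with_dut - rtt_baseline) / 2.0
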
6.5.1.  Objectives

   The objective of this test is to verify the latency of a flow
   through the DUT (VSWITCH).  This can be an important indicator in
   benchmarking virtual network performance.

6.5.2.  Configuration parameters

   Network parameters should be defined as follows:

   a) the number of virtual testers (VMs)

   b) the number of vNICs of the virtual tester

   c) the CPU type of the server

   d) the vCPUs allocated to the virtual tester (VMs)

   e) the memory allocated to the virtual tester (VMs)

   f) the number and rate of the server NICs

6.5.3.  Test parameters

   a) number of test repetitions

   b) test frame length

6.5.4.  Test process

   1. Configure the physical tester to offer traffic to the vSwitch
   at the throughput value measured in Section 6.1.

   2. At the same throughput, record the times at which the physical
   tester transmits and receives the traffic, both with and without
   the DUT.

   3. Calculate the time difference between receiving and
   transmitting the traffic.

   4. Calculate the time delay from the difference between the values
   measured with and without the DUT.

   5. Change the frame length and repeat steps 1 to 4.

6.5.5.  Test result format

   -------------------------
   | Byte| Time delay (us) |
   -------------------------
   |  64 |                 |
   -------------------------
   | 128 |                 |
   -------------------------
   | 256 |                 |
   -------------------------
   | 512 |                 |
   -------------------------
   | 1024|                 |
   -------------------------
   | 1518|                 |
   -------------------------

      Figure 10: Test result format

7.  Security Considerations

   None.

8.  IANA Considerations

   None.

9.  Normative References

   [RFC1242]  Bradner, S., "Benchmarking Terminology for Network
              Interconnection Devices", RFC 1242, July 1991.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC2234]  Crocker, D., Ed. and P. Overell, "Augmented BNF for
              Syntax Specifications: ABNF", RFC 2234, November 1997.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
              for Network Interconnect Devices", RFC 2544, March
              1999.

Authors' Addresses

   Lu Huang (editor)
   China Mobile
   32 Xuanwumen West Ave, Xicheng District
   Beijing  100053
   China

   Email: huanglu@chinamobile.com

   Rong Gu (editor)
   China Mobile
   32 Xuanwumen West Ave, Xicheng District
   Beijing  100053
   China

   Email: gurong@chinamobile.com

   Bob Mandeville
   Iometrix
   3600 Fillmore Street, Suite 409
   San Francisco, CA 94123
   USA

   Email: bob@iometrix.com

   Brooks Hickman
   Spirent Communications
   1325 Borregas Ave
   Sunnyvale, CA 94089
   USA

   Email: Brooks.Hickman@spirent.com