idnits 2.17.1 draft-pedro-nmrg-anticipated-adaptation-02.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** There is 1 instance of too long lines in the document, the longest one being 7 characters in excess of 72. -- The draft header indicates that this document updates draft-pedro-nmrg-anticipated-, but the abstract doesn't seem to mention this, which it should. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == Line 468 has weird spacing: '...rw plid str...' == The document doesn't use any RFC 2119 keywords, yet seems to have RFC 2119 boilerplate text. -- The document date (June 28, 2018) is 2129 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Outdated reference: A later version (-02) exists of draft-song-ntf-01 Summary: 1 error (**), 0 flaws (~~), 4 warnings (==), 2 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 NMRG P. Martinez-Julia, Ed. 3 Internet-Draft NICT 4 Updates: draft-pedro-nmrg-anticipated- June 28, 2018 5 adaptation-01 (if approved) 6 Intended status: Informational 7 Expires: December 30, 2018 9 Exploiting External Event Detectors to Anticipate Resource Requirements 10 for the Elastic Adaptation of SDN/NFV Systems 11 draft-pedro-nmrg-anticipated-adaptation-02 13 Abstract 15 The adoption of SDN/NFV technologies by current computer and network 16 system infrastructures is constantly increasing, and they have become 17 essential for the particular case of edge/branch network systems. The 18 systems supported by these infrastructures must be adapted to 19 environmental changes within a short period of time. The 20 complexity of new systems and the speed at which management and 21 control operations must be performed go beyond human limits, so 22 management systems must be automated. However, in several situations 23 current automation techniques are not enough to respond to 24 requirement changes. Here we propose to anticipate changes in the 25 operating environments of SDN/NFV systems in response to external 26 events and to reflect them in the anticipated amount of resources 27 required by those systems for their subsequent adaptation. The final 28 objective is to avoid service degradation or disruption while keeping 29 resource allocation close to the optimum to reduce monetary and 30 operational costs as much as possible. We discuss how to achieve such 31 capabilities by integrating the Autonomic Resource Control 32 Architecture (ARCA) into the management and orchestration (MANO) of NFV 33 systems.
We showcase it by building a multi-domain SDN/NFV 34 infrastructure based on OpenStack and deploying ARCA to adapt a 35 virtual system based on the edge/branch network concept to the 36 operational conditions of an emergency support service, which is 37 rarely used but cannot leave any user unattended. This document updates draft-pedro-nmrg-anticipated-adaptation-01. 39 Status of This Memo 41 This Internet-Draft is submitted in full conformance with the 42 provisions of BCP 78 and BCP 79. 44 Internet-Drafts are working documents of the Internet Engineering 45 Task Force (IETF). Note that other groups may also distribute 46 working documents as Internet-Drafts. The list of current Internet- 47 Drafts is at https://datatracker.ietf.org/drafts/current/. 49 Internet-Drafts are draft documents valid for a maximum of six months 50 and may be updated, replaced, or obsoleted by other documents at any 51 time. It is inappropriate to use Internet-Drafts as reference 52 material or to cite them other than as "work in progress." 54 This Internet-Draft will expire on December 30, 2018. 56 Copyright Notice 58 Copyright (c) 2018 IETF Trust and the persons identified as the 59 document authors. All rights reserved. 61 This document is subject to BCP 78 and the IETF Trust's Legal 62 Provisions Relating to IETF Documents 63 (https://trustee.ietf.org/license-info) in effect on the date of 64 publication of this document. Please review these documents 65 carefully, as they describe your rights and restrictions with respect 66 to this document. Code Components extracted from this document must 67 include Simplified BSD License text as described in Section 4.e of 68 the Trust Legal Provisions and are provided without warranty as 69 described in the Simplified BSD License. 71 Table of Contents 73 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 74 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4 75 3. Background . . . . . . . . . . . . . . . . . . . . . . . . . 4 76 3.1. Virtual Computer and Network Systems . . . . . . . . . . 4 77 3.2. SDN and NFV . . . . . . . . . . . . . . . . . . . . . . . 5 78 3.3. Management and Control . . . . . . . . . . . . . . . . . 5 79 3.4. The Autonomic Resource Control Architecture (ARCA) . . . 6 80 4. External Event Detectors . . . . . . . . . . . . . . . . . . 8 81 5. Anticipating Requirements . . . . . . . . . . . . . . . . . . 8 82 6. Information Model . . . . . . . . . . . . . . . . . . . . . . 9 83 6.1. Tree Structure . . . . . . . . . . . . . . . . . . . . . 10 84 6.1.1. event-payloads . . . . . . . . . . . . . . . . . . . 10 85 6.1.1.1. basic . . . . . . . . . . . . . . . . . . . . . . 10 86 6.1.1.2. seismometer . . . . . . . . . . . . . . . . . . . 11 87 6.1.1.3. bigdata . . . . . . . . . . . . . . . . . . . . . 11 88 6.1.2. external-events . . . . . . . . . . . . . . . . . . . 11 89 6.1.3. notifications/event . . . . . . . . . . . . . . . . . 12 90 6.2. YANG Module . . . . . . . . . . . . . . . . . . . . . . . 12 91 7. ARCA Integration With ETSI-NFV-MANO . . . . . . . . . . . . . 13 92 7.1. Functional Integration . . . . . . . . . . . . . . . . . 14 93 7.2. Target Experiment and Scenario . . . . . . . . . . . . . 16 94 7.3. OpenStack Platform . . . . . . . . . . . . . . . . . . . 18 95 7.4. Initial Results . . . . . . . . . . . . . . . . . . . . . 19 96 8. Relation to Other IETF/IRTF Initiatives . . . . . . . . . . . 22 97 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 22 98 10. Security Considerations . . . . . . . . . . . . . . . . . . . 22 99 11. Acknowledgements . . . . . . .
. . . . . . . . . . . . . . . 22 100 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 22 101 12.1. Normative References . . . . . . . . . . . . . . . . . . 23 102 12.2. Informative References . . . . . . . . . . . . . . . . . 23 103 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . 24 105 1. Introduction 107 The incorporation of Software Defined Networking (SDN) and Network 108 Function Virtualization (NFV) into current infrastructures to build 109 virtual computer and network systems is constantly increasing. The 110 need to automate the management and control of such systems has 111 motivated us to design the Autonomic Resource Control Architecture 112 (ARCA), as presented in ICIN 2018 [ICIN-2018]. Automation 113 requirements are sufficiently justified by the increasing size and 114 complexity of systems, which are in turn essential in the current 115 digital world. Moreover, the particular requirements and market 116 benefits of network virtualization have crystallized in the 117 rise of SDN/NFV infrastructures. Nowadays, the broad reception 118 of the combined SDN/NFV technology represents a huge leap towards the 119 empowerment and homogenization of virtualization technologies. 120 Therefore, we have modeled ARCA to fit within the reference 121 architecture for management and orchestration of NFV elements, the 122 Virtual Network Functions (VNFs). 124 Behind the scenes, NFV is based on a highly distributed and network- 125 empowered version of the well-known Cloud infrastructures and 126 platforms, also complemented by their centralized counterparts. This 127 brings to virtual networks the high degree of flexibility already 128 found in computer systems. It is highly desirable at a time when NFV 129 is being exploited by many organizations to build their private 130 infrastructures, as well as by network service providers to build the 131 services they later commercialize. However, to actually exploit the 132 potential monetary and operational cost reduction that is associated with 133 such infrastructures, the amount of resources used by production 134 services must be kept close to the optimum, so the physical resources 135 are exploited as much as possible. 137 The fast detection of changes in the requirements of the virtual 138 systems deployed on the aforementioned SDN/NFV infrastructures, and 139 the consequent adaptation of allocated resources to the new 140 situations, becomes essential to actually exploit their cost and 141 operational benefits, while also avoiding service unresponsiveness due 142 to underlying resource overloading. It is widely accepted that the 143 size and complexity of systems and services make it difficult for 144 humans to accomplish such tasks within the required time 145 boundaries. Therefore, these tasks must be automated. Luckily, the 146 architecture and underlying platforms supporting the SDN/NFV 147 technologies enable the required automation. In fact, some solutions 148 already exist to perform several batched or scripted tasks without 149 human intervention. However, those solutions still depend heavily 150 on low-level human involvement. This highlights the 151 challenge found in control and management automation, which is 152 continuously revisited and enlarged. 154 ARCA provides a small step towards the resolution of the 155 aforementioned problem.
It advances the State of the Art in 156 automation of resource control and management by providing a 157 supervised but autonomous mechanism that reduces the time required to 158 perform corrective and/or adaptive changes in virtual computer and 159 network systems from hours/minutes to seconds/milliseconds. 160 Moreover, it is able to take advantage of the event notifications 161 provided by external detectors to anticipate the amount of resources 162 that the controlled SDN/NFV system will require in response to such 163 events. We propose to bring such benefit to the reference 164 architecture promoted by ETSI for the management and orchestration of 165 NFV services (see ETSI-NFV-MANO [ETSI-NFV-MANO]) by integrating ARCA 166 as the Virtual Infrastructure Manager (VIM). We showcase this 167 proposal by discussing the evaluation results obtained by ARCA when 168 running on a real, physical experimentation infrastructure based 169 on OpenStack [OPENSTACK]. We thus justify the need to adapt the 170 interfaces supported by the NFV-MANO to include real-world event 171 detectors, which are external to the virtualization platform and 172 virtual resources. 174 2. Terminology 176 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 177 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 178 document are to be interpreted as described in RFC 2119 [RFC2119]. 180 3. Background 182 3.1. Virtual Computer and Network Systems 184 The continuous search for efficiency and cost reduction, to achieve the 185 optimum exploitation of available resources (e.g. CPU power and 186 electricity), has led current physical infrastructures to move 187 towards virtualization infrastructures. Also, this trend enables end 188 systems to be centralized and/or distributed, so that they are 189 deployed to best accomplish customer requirements in terms of 190 resources and qualities. 192 One of the key functional requirements imposed on computer and 193 network virtualization is a high degree of flexibility and 194 reliability. Both qualities are subject to the underlying 195 technologies but, while the latter has always been enforced on 196 computer and network systems, flexibility is a relatively new 197 requirement, which would not have been imposed without the backing 198 of virtualization and cloud technologies. 200 3.2. SDN and NFV 202 SDN and NFV are conceived to bring a high degree of flexibility and 203 conceptual centralization qualities to the network. On the one hand, 204 with SDN, the network can be programmed to implement a dynamic 205 behavior that changes its topology and overall qualities. On the other hand, 206 with NFV the functions that are typically provided by physical 207 network equipment are now implemented as virtual appliances that can 208 be deployed and linked together to provide customized network 209 services. SDN and NFV complement each other to actually 210 implement the network aspect of the aforementioned virtual computer 211 and network systems. 213 Although centralization can lead us to think of the single-point-of- 214 failure concept, this is not the case for these technologies. 215 Conceptual centralization highly differs from centralized deployment. 216 It brings all the benefits of having a single point of decision while 217 retaining the benefits of distributed systems.
For instance, 218 control decisions in SDN can be centralized while the mechanisms that 219 enforce such decisions in the network (SDN controllers) can be 220 implemented as highly distributed systems. The same approach can be 221 applied to NFV. Although network functions can be implemented in a 222 central computing facility, they can take advantage of several 223 replication and distribution techniques to achieve the properties of 224 distributed systems. Nevertheless, NFV also allows the deployment of 225 functions on top of distributed systems, so they benefit from both 226 distribution alternatives at the same time. 228 3.3. Management and Control 230 The introduction of virtualization into the computer and network 231 system landscape has increased the complexity of both underlying and 232 overlying systems. On the one hand, virtualizing underlying systems 233 adds extra functions that must be managed properly to ensure the 234 correct operation of the whole system, which encompasses not just 235 underlying elements but also the virtual elements running on top of 236 them. Such functions are used to actually host the overlying virtual 237 elements, so there is an indirect management operation that involves 238 virtual systems. Moreover, such complexities are inherited by final 239 systems that get virtualized and deployed on top of those 240 virtualization infrastructures. 242 In parallel, virtual systems are empowered with additional, and 243 widely exploited, functionality that must be managed correctly. This 244 is the case of the dynamic adaptation of virtual resources to the 245 specific needs of their operation environments, or even the 246 composition of distributed elements across heterogeneous underlying 247 infrastructures, and possibly providers. 249 Taking both complex functions into account, either separately or 250 jointly, makes it clear that management requirements have greatly 251 surpassed the limits of humans, so automation has become essential to 252 accomplish most common tasks. 254 3.4. The Autonomic Resource Control Architecture (ARCA) 256 As deeply discussed in ICIN 2018 [ICIN-2018], ARCA leverages the 257 elastic adaptation of resources assigned to virtual computer and 258 network systems by calculating or estimating their requirements from 259 the analysis of load measurements and the detection of external 260 events. These events can be notified by physical elements (things, 261 sensors) that detect changes in the environment, as well as software 262 elements that analyze digital information, such as connectors to 263 sources or analyzers of Big Data. For instance, ARCA is able to 264 consider the detection of an earthquake or a heavy rainfall to 265 overcome the damage it can cause to the controlled system. 267 The policies that ARCA must enforce will be specified by 268 administrators during the configuration of the control/management 269 engine. Then, ARCA continues running autonomously, with no more 270 human involvement unless some parameter must be changed. ARCA will 271 apply the required control and management operations to adapt the 272 controlled system to the new situation or requirements. The main 273 goal of ARCA is thus to reduce the time required for resource 274 adaptation from hours/minutes to seconds/milliseconds.
With the 275 aforementioned statements, system administrators are able to specify 276 the general operational boundaries in terms of lower and upper system 277 load thresholds, as well as the minimum and maximum amount of 278 resources that can be allocated to the controlled system to overcome 279 any eventuality, including the natural crossing of such 280 thresholds. 282 ARCA's functional goal is to run autonomously, while its performance 283 goal is to keep the resources assigned to the controlled system as 284 close as possible to the optimum (e.g. within 5 % of the optimum) while 285 avoiding service disruption as much as possible, keeping the client 286 request discard rate as low as possible (e.g. below 1 %). To achieve 287 both goals, ARCA relies on the Autonomic Computing (AC) paradigm, in 288 the form of interconnected micro-services. Therefore, ARCA includes 289 the four main elements and activities defined by AC, incarnated as: 291 Collector Is responsible for gathering and formatting the 292 heterogeneous observations that will be used in the control 293 cycle. 295 Analyzer Correlates the observations with each other in order to determine 296 the situation of the controlled system, especially the 297 current load of the resources allocated to the system and 298 the occurrence of an incident that can affect the normal 299 operation of the system, such as an earthquake that 300 increases the traffic in an emergency-support system, which 301 is the main target scenario studied in this document. 303 Decider Determines the necessary actions to adjust the resources to 304 the load of the controlled system. 306 Enforcer Requests the underlying and overlying infrastructure, such 307 as OpenStack, to make the necessary changes to reflect the 308 effects of the decided actions in the system. 310 Being a micro-service architecture means that the different 311 components are executed in parallel. This allows such components to 312 operate in two ways. First, their operation can be dispatched by 313 receiving a message from the previous service or an external service. 314 Second, the services can be self-dispatched, so they can activate 315 some action or send some message without being previously stimulated 316 by any message. The overall control process loops indefinitely and 317 it is closed by checking that the expected effects of an action are 318 actually taking place. The coherence among the distributed services 319 involved in the ARCA control process is ensured by enforcing a common 320 semantic representation and ontology on the messages they exchange. 322 ARCA semantics are built with the Resource Description Framework 323 (RDF) and the Web Ontology Language (OWL), which are well-known and 324 widely used standards for the semantic representation and management 325 of knowledge. They provide the ability to represent new concepts 326 without requiring changes to the software, just plugging extensions into 327 the ontology. ARCA stores all its knowledge in the 328 Knowledge Base (KB), which is queried and kept up-to-date by the 329 analyzer and decider micro-services. The KB is implemented with Apache 330 Jena Fuseki, which is a high-performance RDF data store that supports 331 SPARQL through an HTTP/REST interface. Being de-facto standards, 332 both technologies enable ARCA to be easily integrated with 333 virtualization platforms like OpenStack.
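The following non-normative Python sketch illustrates how an ARCA micro-service could query such a knowledge base through the SPARQL HTTP/REST interface exposed by Fuseki. The endpoint URL, dataset name, and query are illustrative assumptions and are not defined by ARCA or this document.

   import requests

   # Hypothetical SPARQL endpoint of the Fuseki server hosting the KB.
   FUSEKI_ENDPOINT = "http://localhost:3030/arca-kb/query"

   # Illustrative query: list a few triples stored in the KB.
   QUERY = """
   SELECT ?s ?p ?o
   WHERE { ?s ?p ?o }
   LIMIT 10
   """

   def query_knowledge_base():
       # SPARQL 1.1 protocol: the query is sent form-encoded and the
       # results are requested in the SPARQL JSON results format.
       response = requests.post(
           FUSEKI_ENDPOINT,
           data={"query": QUERY},
           headers={"Accept": "application/sparql-results+json"})
       response.raise_for_status()
       return response.json()["results"]["bindings"]

   for binding in query_knowledge_base():
       print(binding)
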
335 4. External Event Detectors 337 As mentioned above, current mechanisms used to achieve automated 338 management and control rely only on the continuous monitoring of the 339 resources they control or the underlying infrastructure that hosts 340 them. However, there are several other sources of information that 341 can be exploited to make the systems more robust and efficient. This 342 is the case of the notifications that can be provided by physical or 343 virtual elements or devices that are watching for specific events, 344 hence called external event detectors. 346 More specifically, although the notifications provided by these 347 external event detectors are related to occurrences outside 348 the boundaries of the controlled system, such occurrences can affect 349 the normal operation of controlled systems. For instance, a heavy 350 rainfall or snowfall can be detected and correlated with a huge 351 increase in the amount of requests experienced by some emergency 352 support service. 354 5. Anticipating Requirements 356 One of the main goals of the MANO mechanisms is to ensure that the virtual 357 computer and network system they manage meets the requirements 358 established by their owners and administrators. This is currently 359 achieved by observing and analyzing the performance measurements 360 obtained either by directly asking the resources forming the managed 361 system or by asking the controllers of the underlying infrastructure 362 that hosts such resources. Thus, under changing or eventual 363 situations, the managed system must be adapted to cope with the new 364 requirements, increasing the amount of resources assigned to it, or to 365 make efficient use of available infrastructures, reducing the amount 366 of resources assigned to it. 368 However, the time required by the infrastructure to carry out 369 the adaptations requested by the MANO mechanisms is longer than the 370 time required by client requests to overload the system and make it 371 discard further client requests. This situation is generally 372 undesired but particularly dangerous for some systems, such as the 373 emergency support system mentioned above. Therefore, in order to 374 avoid the disruption of the service, the change in requirements must 375 be anticipated to ensure that any adaptation has finished as soon as 376 possible, preferably before the target system gets overloaded or 377 underloaded. 379 Here we propose to integrate ARCA with NFV-MANO to take advantage of 380 the notifications provided by the aforementioned external event 381 detectors, correlating them with the target amount of resources 382 required by the managed system and enforcing the necessary 383 adaptations beforehand, particularly before the system performance 384 metrics have actually changed. 386 The following abstract algorithm formalizes the workflow expected to 387 be followed by the different implementations of the operation 388 proposed here.
390 while TRUE do 391 event = GetExternalEventInformation() 392 if event != NONE then 393 anticipated_resource_amount = Anticipator.Get(event) 394 if IsPolicyCompliant(anticipated_resource_amount) then 395 current_resource_amount = anticipated_resource_amount 396 anticipation_time = NOW 397 end if 398 end if 399 anticipated_event = event 400 if anticipated_event != NONE and 401 (NOW - anticipation_time) > EXPIRATION_TIME then 402 current_resource_amount = DEFAULT_RESOURCE_AMOUNT 403 anticipated_event = NONE 404 end if 405 state = GetSystemState() 406 if not IsAcceptable(state, current_resource_amount) then 407 current_resource_amount = GetResourceAmountForState(state) 408 if anticipated_event != NONE then 409 Anticipator.Set 410 (anticipated_event, current_resource_amount) 411 anticipated_event = NONE 412 end if 413 end if 414 end while 416 This algorithm considers both internal and external events to 417 determine the necessary control and management actions to achieve the 418 proper anticipation of resources assigned to the target system. We 419 propose that the different implementations follow the same approach so 420 that they can know what to expect when they interact. For instance, a 421 consumer, such as an Application Service Provider (ASP), can expect 422 some specific behavior from the Virtual Network Operator (VNO) from 423 which it is consuming resources. This helps both the ASP and VNO to 424 properly address resource fluctuations. 426 6. Information Model 428 In this section we introduce the basic model needed to support the 429 implementation of the anticipation algorithm. It basically includes 430 the concepts and structures used to describe external events and 431 notify (communicate) them to the interested sink, the network 432 controller/manager, through the control and management plane, 433 depending on the specific instantiation of the system. 435 6.1. Tree Structure 437 module: ietf-nmrg-nict-resource-anticipation 438 +--rw events 439 +--rw event-payloads 440 +--rw external-events 442 notifications: 443 +---n event 445 The main models included in the tree structure of the module are the 446 events and notifications. On the one hand, events are structured into 447 payloads and the events themselves (external-events). On the 448 other hand, there is only one notification, which is the event 449 itself. 451 6.1.1. event-payloads 453 +--rw event-payloads 454 +--rw event-payloads-basic 455 +--rw event-payloads-seismometer 456 +--rw event-payloads-bigdata 458 The event payloads are, for the time being, of three types. 459 First, we have defined the basic payload, which is intended to carry 460 any arbitrary data. Second, we have defined the seismometer payload 461 to carry information about seisms. Third, we have defined the 462 bigdata payload, which carries notifications coming from BigData 463 sources. 465 6.1.1.1. basic 467 +--rw event-payloads-basic* [plid] 468 +--rw plid string 469 +--rw data? union 471 The basic payload is able to hold any data type, so it has a union of 472 several types. It is intended to be used by any source of events 473 that is (still) not covered by another model. In general, any source 474 of telemetry information (e.g. OpenStack controllers) can use this 475 model, as such sources can encode their information in it, which 476 typically is very simple and plain. Therefore, the current model is 477 tightly related to a framework to retrieve network telemetry 478 (see [I-D.song-ntf]). 480 6.1.1.2.
seismometer 482 +--rw event-payloads-seismometer* [plid] 483 +--rw plid string 484 +--rw location? string 485 +--rw magnitude? uint8 487 The seismometer model includes the main information related to a 488 seism, such as the location of the incident and its magnitude. 489 Additional fields can be defined in the future by extending this 490 model. 492 6.1.1.3. bigdata 494 +--rw event-payloads-bigdata* [plid] 495 +--rw plid string 496 +--rw description? string 497 +--rw severity? uint8 499 The bigdata model includes a description of an event (or incident) 500 and its estimated general severity, unrelated to the system. The 501 description is an arbitrary string of characters that would normally 502 carry information that describes the event using some higher-level 503 format, such as Turtle or N3 for carrying RDF knowledge items. 505 6.1.2. external-events 507 +--rw external-events* [id] 508 +--rw id string 509 +--rw source? string 510 +--rw context? string 511 +--rw sequence? int64 512 +--rw timestamp? yang:date-and-time 513 +--rw payload? binary 515 The model defined to encode external events, which encapsulates the 516 payloads introduced above, is completed with an identifier of the 517 message, a string describing the source of the event, a sequence 518 number, and a timestamp. Additionally, it includes a string describing 519 the context of the event. It is intended to communicate the required 520 information about the system that detected the event, its location, 521 etc. Like the description of the bigdata payload, this field can be 522 formatted with a higher-level format, such as RDF. 524 6.1.3. notifications/event 526 notifications: 527 +---n event 528 +--ro id? string 529 +--ro source? string 530 +--ro context? string 531 +--ro sequence? int64 532 +--ro timestamp? yang:date-and-time 533 +--ro payload? binary 535 The event notification inherits all the fields from the model of 536 external events defined above. It is intended to allow software and 537 hardware elements to send, receive, and interpret not just the events 538 that have been detected and notified by, for instance, a sensor, but 539 also the notifications issued by the underlying infrastructure 540 controllers, such as the OpenStack Controller.
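As a non-normative illustration, the following Python sketch builds an instance of the external-events model defined above, carrying a seismometer payload. The field values are invented for the example; since the payload leaf is of type binary, the encoded payload is carried as base64, as prescribed for binary values in the JSON encoding of YANG data. The choice of JSON as carrier is itself an assumption, as the model does not mandate an encoding.

   import base64
   import json
   from datetime import datetime, timezone

   # Illustrative seismometer payload (event-payloads-seismometer).
   payload = {
       "plid": "pl-0001",
       "location": "Koganei, Tokyo",
       "magnitude": 5,
   }

   # External event following the external-event-information grouping.
   event = {
       "id": "ev-0001",
       "source": "seismometer-17",
       "context": "domain-1/hq-building",
       "sequence": 42,
       "timestamp": datetime.now(timezone.utc).isoformat(),
       # The binary payload leaf carries the base64-encoded payload.
       "payload": base64.b64encode(json.dumps(payload).encode()).decode(),
   }

   print(json.dumps(event, indent=2))
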
542 6.2. YANG Module 544 . 546 module ietf-nmrg-nict-resource-anticipation { 547 namespace "urn:ietf:params:xml:ns:yang:ietf-nmrg-nict-resource-anticipation"; 548 prefix rant; 549 import ietf-yang-types { prefix yang; } 551 grouping external-event-information { 552 leaf id { type string; } 553 leaf source { type string; } 554 leaf context { type string; } 555 leaf sequence { type int64; } 556 leaf timestamp { type yang:date-and-time; } 557 leaf payload { type binary; } 558 } 560 grouping event-payload-basic { 561 leaf plid { type string; } 562 leaf data { type union { type string; type binary; } } 563 } 565 grouping event-payload-seismometer { 566 leaf plid { type string; } 567 leaf location { type string; } 568 leaf magnitude { type uint8; } 569 } 571 grouping event-payload-bigdata { 572 leaf plid { type string; } 573 leaf description { type string; } 574 leaf severity { type uint8; } 575 } 577 notification event { 578 uses external-event-information; 579 } 581 container events { 582 container event-payloads { 583 list event-payloads-basic { 584 key "plid"; 585 uses event-payload-basic; 586 } 587 list event-payloads-seismometer { 588 key "plid"; 589 uses event-payload-seismometer; 590 } 591 list event-payloads-bigdata { 592 key "plid"; 593 uses event-payload-bigdata; 594 } 595 } 596 list external-events { 597 key "id"; 598 uses external-event-information; 599 } 600 } 602 } 604 . 606 7. ARCA Integration With ETSI-NFV-MANO 608 In this section we describe how to fit ARCA into a general SDN/NFV 609 underlying infrastructure and introduce a showcase experiment that 610 demonstrates its operation on an OpenStack-based experimentation 611 platform. We first describe the integration of ARCA with the NFV- 612 MANO reference architecture. We contextualize the significance of 613 this integration by describing an emergency support scenario that 614 clearly benefits from it. Then we proceed to detail the elements 615 forming the OpenStack platform and finally we discuss some initial 616 results obtained from them. 618 7.1. Functional Integration 620 The most important functional blocks of the NFV reference 621 architecture promoted by ETSI (see ETSI-NFV-MANO [ETSI-NFV-MANO]) are 622 the system support functions for operations and business (OSS/BSS), 623 the element management (EM) and, obviously, the Virtual Network 624 Functions (VNFs). But these functions cannot exist without being 625 instantiated on a specific infrastructure, the NFV infrastructure 626 (NFVI), and all of them must be coordinated, orchestrated, and 627 managed by the general NFV-MANO functions. 629 Both the NFVI and the NFV-MANO elements are subdivided into several 630 sub-components. The NFVI has the underlying physical computing, 631 storage, and network resources, which are sliced 632 (see [I-D.qiang-coms-netslicing-information-model] and 633 [I-D.geng-coms-architecture]) and virtualized to form the virtual 634 computing, storage, and network resources that will host the VNFs. 635 In addition, the NFV-MANO is subdivided into the NFV Orchestrator 636 (NFVO), the VNF manager (VNFM), and the Virtual Infrastructure Manager 637 (VIM). As their names indicate, all high-level elements and sub- 638 components have their own very specific objectives in the NFV 639 architecture. 641 During the design of ARCA we aligned both operational and 642 interfacing aspects with its main objectives. From the operational 643 point of view, ARCA processes observations to manage virtual 644 resources, so it plays the role of the VIM mentioned above.
645 Therefore, ARCA has been designed with appropriate interfaces to fit 646 in the place of the VIM. This way, ARCA provides the NFV reference 647 architecture with the ability to react to external events to adapt 648 virtual computer and network systems, even anticipating such 649 adaptations, as performed by ARCA itself. However, some interfaces 650 must be extended to fully enable ARCA to perform its work within the 651 NFV architecture. 653 Once ARCA is placed in the position of the VIM, it enhances the 654 general NFV architecture with its autonomic management capabilities. 655 In particular, it relieves the VNFM and 656 NFVO of some responsibilities, so they can focus on their own business while the virtual 657 resources behave as they expect (and request). Moreover, ARCA 658 improves the scalability and reliability of the managed system in 659 case of disconnection from the orchestration layer due to some 660 failure, network split, etc. This is also achieved by the autonomic 661 capabilities, which, as described above, are guided by the rules and 662 policies specified by the administrators and, here, communicated to 663 ARCA through the NFVO. However, ARCA will not be limited to such 664 operation; more generally, it will accomplish the requirements 665 established by the Virtual Network Operators (VNOs), which are the 666 owners of the slice of virtual resources that is managed by a 667 particular instance of NFV-MANO, and therefore ARCA. 669 In addition to the operational functions, ARCA incorporates the 670 necessary mechanisms to engage the interfaces that enable it to 671 interact with other elements of the NFV-MANO reference architecture. 672 More specifically, ARCA is bound to the Or-Vi (see ETSI-NFV-IFA-005 673 [ETSI-NFV-IFA-005]) and the Nf-Vi (see ETSI-NFV-IFA-004 674 [ETSI-NFV-IFA-004] and ETSI-NFV-IFA-019 [ETSI-NFV-IFA-019]). The 675 former is the point of attachment between the NFVO and the VIM while 676 the latter is the point of attachment between the NFVI and the VIM. 677 In our current design we decided to avoid the support for the point 678 of attachment between the VNFM and the VIM, called Vi-Vnfm (see ETSI- 679 NFV-IFA-006 [ETSI-NFV-IFA-006]). We leave it for future evolutions 680 of the proposed integration, which will be enabled by a possible 681 solution that provides the functions of the VNFM required by ARCA. 683 Through the Or-Vi, ARCA receives the instructions it will enforce on 684 the virtual computer and network system it is controlling. As 685 mentioned above, these are specified in the form of rules and 686 policies, which are in turn formatted as several statements and 687 embedded into the Or-Vi messages. In general, these will be high- 688 level objectives, so ARCA will use its reasoning capabilities to 689 translate them into more specific, low-level objectives. For 690 instance, the Or-Vi can specify some high-level statement to avoid 691 CPU overloading and ARCA will use its innate and acquired knowledge 692 to translate it into specific statements that specify which parameters 693 it has to measure (CPU load from assigned servers) and their 694 desired boundaries, in the form of high and low 695 thresholds. Moreover, the Or-Vi will be used by the NFVO to specify 696 which actions can be used by ARCA to overcome the violation of the 697 mentioned policies.
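As a non-normative sketch of how such statements can be represented, the following Python fragment uses RDF (via the rdflib library) to encode an "avoid CPU overloading" objective together with its low-level translation. The namespace and property names are hypothetical and are not defined by this document.

   from rdflib import Graph, Literal, Namespace
   from rdflib.namespace import RDF

   # Hypothetical ARCA ontology namespace, used only for illustration.
   ARCA = Namespace("http://example.org/arca#")

   g = Graph()
   g.bind("arca", ARCA)

   # High-level objective received through the Or-Vi interface.
   policy = ARCA["policy-avoid-cpu-overload"]
   g.add((policy, RDF.type, ARCA.Policy))
   g.add((policy, ARCA.objective, Literal("avoid-cpu-overloading")))

   # Low-level translation derived by ARCA's reasoning: the parameter
   # to measure and its desired boundaries (high and low thresholds).
   g.add((policy, ARCA.measuredParameter, Literal("cpu-load")))
   g.add((policy, ARCA.highThreshold, Literal(0.8)))
   g.add((policy, ARCA.lowThreshold, Literal(0.2)))

   print(g.serialize(format="turtle"))
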
699 All information flowing through the Or-Vi interface is encoded and formatted 700 by following a simple but highly extensible ontology and exploiting 701 the aforementioned semantic formats. This ensures that the 702 interconnected system is able to evolve, including the replacement of 703 components, updating (adding or removing) the supported concepts to 704 understand new scenarios, and connecting external tools to further 705 enhance the management process. The only requirement for this 706 feature is that all elements support the mentioned ontology 707 and semantic formats. Although it is not a finished task, the 708 development of semantic technologies allows the easy adaptation and 709 translation of existing information formats, so it is expected that 710 more and more software pieces will become easily integrable with the ETSI- 711 NFV-MANO [ETSI-NFV-MANO] architecture. 713 In contrast to the Or-Vi interface, the Nf-Vi interface exposes more 714 precise and low-level operations. Although this makes it easier to 715 integrate with ARCA, it also ties it to specific 716 implementations. In other words, building a proxy that enforces the 717 aforementioned ontology on different interface instances to 718 homogenize them adds undesirable complexity. Therefore, new 719 components have been specifically developed for ARCA to be able to 720 interact with different NFVIs. Nevertheless, this specialization is 721 limited to the collector and enforcer. Moreover, it allows ARCA to 722 have optimized low-level operations, greatly improving the 723 overall performance. This is the case of the specific 724 implementations of the collector and enforcer used with Mininet and 725 Docker, which are used as underlying infrastructures in previous 726 experiments described in ICIN 2017 [ICIN-2017]. Moreover, as 727 discussed in the following section, this is also the case of the 728 implementations of the collector and enforcer tied to OpenStack 729 telemetry and compute interfaces, respectively. Hence, it is 730 important to ensure that telemetry is properly addressed, so we 731 insist on the need to adopt a common framework at such endpoints (see 732 [I-D.song-ntf]). 734 Although OpenStack still lacks some functionality regarding the 735 construction of specific virtual networks, we use it as the NFVI 736 functional block in the integrated approach. Therefore, OpenStack is 737 the provider of the underlying SDN/NFV infrastructure and we 738 exploited its APIs and SDK to achieve the integration. More 739 specifically, in our showcase we use the APIs provided by the Ceilometer, 740 Gnocchi, and Compute services as well as the SDK provided for Python. 741 All of them are gathered within the Nf-Vi interface. Moreover, we 742 have extended the Or-Vi interface to connect external elements, such 743 as the physical or environmental event detectors and Big Data 744 connectors, which is becoming a mandatory requirement of the current 745 virtualization ecosystem and constitutes our main extension to the 746 NFV architecture. 748 7.2. Target Experiment and Scenario 750 From the beginning of our work on the design of ARCA we have targeted 751 real-world scenarios, so as to obtain better-suited requirements. In 752 particular, we work with a scenario that represents an emergency 753 support service that is hosted on a virtual computer and network 754 system, which is in turn hosted on the distributed virtualization 755 infrastructure of a medium-sized organization.
The objective is to 756 clearly represent an application that requires high dynamicity and a 757 high degree of reliability. The emergency support service 758 accomplishes this by being barely used when there is no incident but 759 heavily loaded when an incident occurs. 761 Both the underlying infrastructure and the virtual network share the same 762 topology. They have four independent but interconnected network 763 domains that form part of the same administrative domain 764 (organization). The first domain hosts the systems of the 765 headquarters (HQ) of the owner organization, so the VNFs it hosts 766 (servants) implement the emergency support service. We call them 767 ``servants'' because they are Virtual Machine (VM) instances that 768 work together to provide a single service by means of backing the 769 Load Balancer (LB) instances deployed in the separate domains. The 770 amount of resources (servants) assigned to the service will be 771 adjusted by ARCA, attaching or detaching servants to meet the load 772 boundaries specified by administrators. 774 The other domains represent different buildings of the organization 775 and will host the clients that access the service when an incident 776 occurs. They also host the necessary LB instances, which are also 777 VNFs that are controlled by ARCA to regulate the access of clients to 778 servants. All domains will have physical detectors to provide 779 external information that can (and will) be correlated with the load of 780 the controlled virtual computer and network system and thus will 781 affect the amount of servants assigned to it. Although the 782 underlying infrastructure, the servants, and the ARCA instance are 783 the same as those used in the real world, both clients and 784 detectors will be emulated. However, this does not reduce the 785 transferability of the results obtained from our experiments, as it 786 allows the amount of clients to expand beyond the limits of most 787 physical infrastructures. 789 Each underlying OpenStack domain will be able to host a maximum of 790 100 clients, as they will be deployed on a low-profile virtual 791 machine (flavor in OpenStack). In general, clients will be 792 performing requests at a rate of one request every ten seconds, so 793 there would be a maximum of 30 requests per second. However, under 794 the simulated incident, the clients will raise their load to reach a 795 common maximum of 1200 requests per second. This mimics the shape 796 and size of a real medium-sized organization of about 300 users that 797 perform a maximum of four requests per second when they need some 798 support. 800 The topology of the underlying network is simplified by connecting 801 the four domains to the same high-performance switch. However, the 802 topology of the virtual network is built by using direct links 803 between the HQ domain and the other three domains. These are 804 complemented by links between domains 2 and 3, and between domains 3 805 and 4. This way, the three domains have three paths to reach the HQ 806 domain: a direct path with just one hop, and two indirect paths with 807 two and three hops, respectively. 809 During the execution of the experiment, the detectors notify the 810 incident to the controller as soon as it happens. However, although 811 the clients are stimulated at the same time, there is some delay 812 between the occurrence of the incident and the moment the network 813 service receives the increase in the load.
One of the main targets 814 of our experiment is to study such delay and take advantage of it to 815 anticipate the amount of servants required by the system. We discuss 816 this below. 818 In summary, this scenario highlights the main benefits of ARCA when 819 playing the role of the VIM and interacting with the underlying OpenStack 820 platform. This represents an advancement towards the efficient use of 821 resources, thus reducing the CAPEX of the system. Moreover, as 822 the operation of the system is autonomic, the involvement of human 823 administrators is reduced and, therefore, the OPEX is also reduced. 825 7.3. OpenStack Platform 827 The implementation of the scenario described above reflects the 828 requirements of any edge/branch networking infrastructure, which is 829 composed of several distributed micro-data-centers deployed in the 830 wiring centers of the buildings and/or storeys. We chose to use 831 OpenStack to meet such requirements because it is being widely used 832 in production infrastructures and the resulting infrastructure will 833 have the necessary robustness to accomplish our objectives, while 834 it also reflects the typical underlying platform found in any SDN/NFV 835 environment. 837 We have deployed four separate network domains, each one with its own 838 OpenStack instantiation. All domains are totally capable of running 839 regular OpenStack workloads, i.e. executing VMs and networks, but, as 840 mentioned above, we designate domain 1 as the headquarters of 841 the organization. The different underlying networks required by this 842 (quite complex) deployment are provided by several VLANs within a 843 high-end L2 switch. This switch represents the distributed network 844 of the organization. Four separate VLANs are used to isolate the 845 traffic within each domain, by connecting an interface of OpenStack's 846 controller and compute nodes. These VLANs therefore form the 847 distributed data plane. Moreover, another VLAN is used to carry the 848 control plane as well as the management plane, which are used by the 849 NFV-MANO, and thus ARCA. It is instantiated in the physical machine 850 called the ARCA Node, to exchange control and management operations in 851 relation to the collector and enforcer defined in ARCA. This VLAN is 852 shared among all OpenStack domains to implement the global control of 853 the virtualization environment pertaining to the organization. 854 Finally, another VLAN is used by the infrastructure to interconnect the 855 data planes of the separated domains and also to allow all elements 856 of the infrastructure to access the Internet to perform software 857 installation and updates. 859 The installation of OpenStack is provided by the Red Hat OpenStack 860 Platform, which is tightly dependent on the Linux operating system 861 and closely related to the software developed by the OpenStack Open 862 Source project. It provides a comprehensive way to install the whole 863 platform while being easily customized to meet our specific 864 requirements, and it is also backed by operational-quality support. 866 The ARCA node is also based on Linux but, since it is not directly 867 related to the OpenStack deployment, it is not based on the same 868 distribution. It is just configured to be able to access the control 869 and management interfaces offered by OpenStack, and therefore it is 870 connected to the VLAN that hosts the control and management planes.
871 On this node we deploy the NFV-MANO components, including the micro- 872 services that form an ARCA instance. 874 In summary, we dedicate nine physical computers to the OpenStack 875 deployment, all of them Dell PowerEdge R610 with 2 x Xeon 5670 2.96 GHz 876 (6 core / 12 thread) CPUs, 48 GiB RAM, 6 x 146 GiB HDs at 10 kRPM, and 877 4 x 1 GE NICs. Moreover, we dedicate an additional computer with the 878 same specification to the ARCA Node. We dedicate a less powerful 879 computer to implement the physical router because it will not be 880 involved in the general execution of OpenStack nor in the specific 881 experiments carried out with it. Finally, as detailed above, we 882 dedicate a high-end physical switch, an HP ProCurve 1810G-24, to 883 build the interconnection networks. 885 7.4. Initial Results 887 Using the platform described above, we executed an initial but long- 888 lasting experiment based on the target scenario introduced at the 889 beginning of this section. The objective of this experiment is 890 twofold. First, we aim to demonstrate how ARCA behaves in a real 891 environment. Second, we aim to stress the coupling points between 892 ARCA and OpenStack, which will reveal the limitations of the existing 893 interfaces. 895 With such objectives in mind, we define a timeline that will be 896 followed by both clients and external event detectors. It forces the 897 virtualized system to experience different situations, including 898 incidents of various severities. When an incident is found in the 899 timeline, the detectors notify it to the ARCA-based VIM and the 900 clients change their request rates, which will depend on the severity 901 of the incident. This behavior is widely discussed in ICIN 2018 902 [ICIN-2018], highlighting how users behave after a disaster or 903 another similar incident occurs. 905 The ARCA-based VIM will learn of the occurrence of the incident from two 906 sources. First, it will receive the notification from the event 907 detectors. Second, it will notice the change in the CPU load of the 908 servants assigned to the target service. In this situation, ARCA has 909 different opportunities to overcome the possible overload (or 910 underload) of the system. We explore the anticipation approach 911 deeply discussed in ICIN 2018 [ICIN-2018]. Its operation is enclosed 912 in the analyzer and decider and it is based on an algorithm that is 913 divided into two sub-algorithms. 915 The first sub-algorithm reacts to the detection of the incident and 916 the ulterior correlation of its severity with the amount of servants 917 required by the system. This sub-algorithm hosts the regression of 918 the learner, which is based on the SVM/SVR technique, and predicts 919 the necessary resources from two features: the severity of the 920 incident and the time elapsed from the moment it happened. The 921 resulting amount of servants is established as the minimum amount 922 that the VIM can use. 924 The second sub-algorithm is fed with the CPU load measurements of the 925 servants assigned to the service, as reported by the OpenStack 926 platform. With this information it checks whether the system is 927 within the operating parameters established by the NFVO. If not, it 928 adjusts the resources assigned to the system. It also uses the 929 minimum amount established by the other sub-algorithm as the basis 930 for the assignment. After every correction, this algorithm learns 931 the behavior by adding new correlation vectors to the SVM/SVR 932 structure.
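A non-normative Python sketch of the regression performed by the first sub-algorithm, using the SVR implementation of the scikit-learn library, is shown below. The training vectors are invented for the example; in the real system, the learner is trained with the correlation vectors accumulated after every correction.

   from sklearn.svm import SVR

   # Illustrative training data: each vector is (severity, seconds
   # elapsed since the incident); the label is the amount of servants
   # that turned out to be required in that situation.
   features = [[0, 0], [1, 15], [2, 15], [3, 10],
               [3, 15], [4, 15], [4, 30]]
   required_servants = [1, 2, 3, 5, 6, 8, 4]

   # Train the regressor; new correlation vectors are added and the
   # model is retrained after every correction.
   learner = SVR()
   learner.fit(features, required_servants)

   # Anticipate the minimum server assignation (MSA) for an incident
   # of severity 3 that happened 10 seconds ago.
   msa = max(1, round(float(learner.predict([[3, 10]])[0])))
   print("anticipated MSA:", msa)
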
934 When the experiment is running, the collector component of the ARCA- 935 based VIM is attached to the telemetry interface of OpenStack by 936 using the SDK to access the measurement data generated by Ceilometer 937 and stored by Gnocchi. In addition, it is attached to the external 938 event detectors in order to receive their notifications. On the 939 other hand, the enforcer component is attached to the Compute 940 interface of OpenStack, also using its SDK, to request the 941 infrastructure to create, destroy, query, or change the status of a 942 VM that hosts a servant of the controlled system. Finally, the 943 enforcer also updates the lists of servers used by the load balancers 944 to distribute the clients among the available resources. 946 During the execution of the experiment we make the ARCA-based VIM 947 report the severity of the last incident, if any, the time elapsed 948 since it occurred, the amount of servants assigned to the controlled 949 system, the minimum amount of servants to be assigned, as determined 950 by the anticipation algorithm, and the average load of all servants. 951 In this instance, the severities are spread between 0 (no incident) 952 and 4 (strongest incident), the elapsed times are less than 35 953 seconds, and the minimum server assignation (MSA) is below 10, 954 although the hard maximum is 15. 956 With such measurements we illustrate how the learned correlation of 957 the three features (dimensions) mentioned above is achieved. Thus, 958 when there is no incident (severity = 0), the MSA is kept to the 959 minimum. In parallel, regardless of the severity level, the 960 algorithm learned that there is no need to increase the MSA for the 961 first 5 or 10 seconds. This shows the behavior discussed in this 962 document, that there is a delay between the occurrence of an event and 963 the actual need for an updated amount of resources, and it forms one 964 fundamental aspect of our research. 966 By inspecting the results, we know that there is a burst of client 967 demands that is centered (peak) around 15 seconds after the 968 occurrence of an incident or any other change in the accounted 969 severity. We also know that the burst lasts longer for higher 970 severities, and it fluctuates a bit for the highest severities. 971 Finally, we can also notice that for the majority of severities, the 972 increased MSA is no longer required after 25 seconds from the time 973 the severity change was notified. 975 All that information becomes part of the knowledge of ARCA and it is 976 stored both by the internal structures of the SVM/SVR and, once 977 represented semantically, in the semantic database that manages the 978 knowledge base of ARCA. Thus, it is used to predict any future 979 behavior. For instance, if an incident of severity 3 occurred 10 980 seconds ago, ARCA knows that it will need to set the MSA to 6 981 servants. In fact, this information has been used during the 982 experiment, so we can also know the accuracy of the algorithm by 983 comparing the anticipated MSA value with the required value (or even 984 the best value). However, the analysis of such information is left 985 for the future. 987 While preparing and executing the experiment we found several 988 limitations intrinsic to the current OpenStack platform.
First, 989 regardless of the CPU and memory resources assigned to the underlying 990 controller nodes, the platform is unable to record and deliver 991 performance measurements at intervals shorter than 10 seconds, 992 so it is currently not suitable for real-time operations, which is 993 important for our long-term research objectives. Moreover, we found 994 that the time required by the infrastructure to create a server that 995 hosts a somewhat heavy servant is around 10 seconds, which is too far 996 from our targets. Although these limitations can be improved in the 997 future, they clearly justify that our anticipation approach is 998 essential for the proper working of a virtual system and, thus, the 999 integration of external information becomes mandatory for future 1000 system management technologies, especially considering 1001 virtualization environments. 1003 Finally, we found it difficult for the required measurements to be 1004 pushed to external components, so we had to poll for them. 1005 Otherwise, some component of ARCA must be instantiated alongside the main 1006 OpenStack components and services so that it has first-hand and prompt 1007 access to such features. This way, ARCA could receive push 1008 notifications with the measurements, as is the case for the external 1009 detectors. This is a key aspect that affects the placement of the 1010 NFV-VIM, or some subpart of it, in the general architecture. 1011 Therefore, for future iterations of the NFV reference architecture, 1012 an integrated view between the VIM and the NFVI could be required to 1013 reflect the future reality. 1015 8. Relation to Other IETF/IRTF Initiatives 1017 TBD 1019 9. IANA Considerations 1021 This memo includes no request to IANA. 1023 10. Security Considerations 1025 The major security concern of the integration of external event 1026 detectors and ARCA to manage SDN/NFV systems is that the boundaries 1027 of the control and management planes are crossed to introduce 1028 information from outside. Such communications must be highly and 1029 heavily secured, since some malfunction or explicit attacks might 1030 compromise the integrity and execution of the controlled system. 1031 However, it is up to implementers to deploy the necessary 1032 countermeasures to avoid such situations. From the design point of 1033 view, since all operations are performed within the control and/or 1034 management planes, the security level of the current solution is 1035 inherited and thus determined by the security measures established by 1036 the systems forming such planes. 1038 11. Acknowledgements 1040 TBD 1042 12. References 1043 12.1. Normative References 1045 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1046 Requirement Levels", BCP 14, RFC 2119, 1047 DOI 10.17487/RFC2119, March 1997, 1048 . 1050 12.2. Informative References 1052 [ETSI-NFV-IFA-004] 1053 ETSI NFV GS NFV-IFA 004, "Network Functions Virtualisation 1054 (NFV); Acceleration Technologies; Management Aspects 1055 Specification", 2016. 1057 [ETSI-NFV-IFA-005] 1058 ETSI NFV GS NFV-IFA 005, "Network Functions Virtualisation 1059 (NFV); Management and Orchestration; Or-Vi reference point 1060 - Interface and Information Model Specification", 2016. 1062 [ETSI-NFV-IFA-006] 1063 ETSI NFV GS NFV-IFA 006, "Network Functions Virtualisation 1064 (NFV); Management and Orchestration; Vi-Vnfm reference 1065 point - Interface and Information Model Specification", 1066 2016.
1068 [ETSI-NFV-IFA-019] 1069 ETSI NFV GS NFV-IFA 019, "Network Functions Virtualisation 1070 (NFV); Acceleration Technologies; Management Aspects 1071 Specification; Release 3", 2017. 1073 [ETSI-NFV-MANO] 1074 ETSI NFV GS NFV-MAN 001, "Network Functions Virtualisation 1075 (NFV); Management and Orchestration", 2014. 1077 [I-D.geng-coms-architecture] 1078 Geng, L., Qiang, L., Lucena, J., Ameigeiras, P., Lopez, 1079 D., and L. Contreras, "COMS Architecture", draft-geng- 1080 coms-architecture-02 (work in progress), March 2018. 1082 [I-D.qiang-coms-netslicing-information-model] 1083 Qiang, L., Galis, A., Geng, L., 1084 kiran.makhijani@huawei.com, k., Martinez-Julia, P., 1085 Flinck, H., and X. Foy, "Technology Independent 1086 Information Model for Network Slicing", draft-qiang-coms- 1087 netslicing-information-model-02 (work in progress), 1088 January 2018. 1090 [I-D.song-ntf] 1091 Song, H., Zhou, T., and Z. Li, "Toward a Network Telemetry 1092 Framework", draft-song-ntf-01 (work in progress), March 1093 2018. 1095 [ICIN-2017] 1096 Martinez-Julia, P., Kafle, V. P., and H. Harai, "Achieving 1097 the Autonomic Adaptation of Resources in Virtualized 1098 Network Environments", in Proceedings of the 20th ICIN 1099 Conference (Innovations in Clouds, Internet and Networks, 1100 ICIN 2017), Washington, DC, USA: IEEE, 1101 pp. 1--8, 2017. 1103 [ICIN-2018] 1104 Martinez-Julia, P., Kafle, V. P., and H. Harai, 1105 "Anticipating Minimum Resources Needed to Avoid Service 1106 Disruption of Emergency Support Systems", in Proceedings of 1107 the 21st ICIN Conference (Innovations in Clouds, Internet 1108 and Networks, ICIN 2018), Washington, DC, USA: IEEE, 1109 pp. 1--8, 2018. 1111 [OPENSTACK] 1112 The OpenStack Project, "http://www.openstack.org/", 2018. 1114 Author's Address 1116 Pedro Martinez-Julia (editor) 1117 NICT 1118 4-2-1, Nukui-Kitamachi 1119 Koganei, Tokyo 184-8795 1120 Japan 1122 Phone: +81 42 327 7293 1123 Email: pedro@nict.go.jp