none                                                              S. Yan
Internet-Draft                                                    Huawei
Intended status: Informational                         P. Martinez-Julia
Expires: May 3, 2018                                          NICT/Japan
                                                    A. Cabellos-Aparicio
                                      Technical University of Catalonia
                                                        October 30, 2017


           General Considerations of Intelligence Driven Network
                      draft-yan-idn-consideration-00

Abstract

   This document aims to define the scope of work for the Intelligence
   Driven Network (IDN) and to identify potential standardization work.
   First, the problems with existing methods and the new requirements
   they raise are analyzed.  A number of high-value use cases are
   presented as examples to illustrate them.  A benchmarking framework,
   which is important for the machine learning and inference process,
   is then proposed.  Finally, a reference model of IDN is described,
   based on which the potential standardization work is analyzed.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   [RFC2119] when they appear in ALL CAPS.  When these words are not in
   ALL CAPS (such as "should" or "Should"), they have their usual
   English meanings and are not to be interpreted as [RFC2119] key
   words.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).
Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 3, 2018.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Scope and Use Cases
     2.1.  Scope
     2.2.  High Value Use Cases
       2.2.1.  Traffic Prediction
       2.2.2.  QoS Management
       2.2.3.  Deep Reinforcement-Learning Control of the Network
       2.2.4.  QoE Management via Supervised Learning
       2.2.5.  TBD
   3.  Measurement and Data Format
     3.1.  Measurement Tools and Methods
     3.2.  Data Format Analysis
   4.  Benchmarking Framework
   5.  Reference Model and Potential Standardization Points
     5.1.  Reference Model
     5.2.  Measurement
     5.3.  Data Representation, Transport and Aggregation
     5.4.  Legacy Device Route Control
     5.5.  TBD
   6.  Security Considerations
   7.  IANA Considerations
   8.  Acknowledgements
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Authors' Addresses

1.  Introduction

   Recently, AI technology has made great progress and become
   increasingly popular, and the combination of AI and networking is a
   correspondingly hot topic.  The concept of the Intelligence Driven
   Network (IDN) has been proposed to describe schemes that introduce
   AI into the network and provide new solutions to current and future
   network problems.  There has been considerable discussion of AI
   applications in the network in both academia and industry.
However, the detailed work, and especially the potential
   standardization points, is still not clear.

   In this document, we summarize the valuable content from the IDNET
   mailing list and aim to clarify the following:

   o  What are the requirements?  In the network area, which problems
      need AI to solve them?  A common misunderstanding is that AI is
      almighty; in fact, AI has both advantages and disadvantages.  The
      work scope and the scenarios in which AI may be useful and
      perform well will be discussed and analyzed.

   o  What are the gaps when combining AI and the network?  Modern AI
      algorithms were mostly proposed in the image processing area, not
      in networking, and most of them cannot be migrated and used
      directly.  Take the data format as an example: the input and
      output of an AI algorithm may simply be numerical matrices or
      vectors, whereas network data are not entirely formatted and
      regular, and need to be translated or converted before and after
      the algorithm runs.  The gaps, such as data format and data
      orchestration, will be analyzed.

   o  What are the potential new standardization points?  The
      introduction of AI will bring new requirements to the current
      network.  For example, the AI engine may need to be fed with
      high-frequency, high-accuracy data, and these data need to be
      captured and transmitted continuously and in real time.  What
      improvements should be made to existing protocols?  Are new
      protocols required?  Which communication processes are universal,
      and what kinds of data format can be utilized in most scenarios?

   This document aims to become the blueprint for future work.  It is
   organized as follows.  Section 2 describes the work scope of IDNET
   and summarizes the use cases.  Section 3 analyzes measurement and
   data format.  Section 4 discusses the benchmarking of data.
   Section 5 abstracts the IDN architecture and gives a brief analysis
   of potential standardization points.  Section 6 points out the new
   security challenges that AI brings to the network.  Sections 7 to 9
   contain the IANA considerations, acknowledgements, and references.

   TBD

2.  Scope and Use Cases

   TBD

2.1.  Scope

   A general description of what should and should not be the focus of
   the IETF work, clarifying the work boundary.  TBD

2.2.  High Value Use Cases

   There are a number of use cases that have been discussed on the
   IDNET mailing list.  This section describes the scenarios that may
   be useful and valuable; a detailed analysis may be helpful for the
   data and protocol design.

2.2.1.  Traffic Prediction

   Collect historical traffic data and external data that may
   influence the traffic, predict the traffic over a short, long, or
   specific term, and avoid congestion or risk in advance.

   The process, data format, and message needs are listed below; an
   informal sketch follows the list.

   Process:  1. Data collection (e.g., traffic samples of a physical/
   logical port); 2. Model training; 3. Real-time data capture and
   input; 4. Prediction output; 5. Fix errors and go back to step 3.

   Data Format:

      Time:      [Start, End, Unit, Number of Values, Sampling Period]

      Position:  [Device ID, Port ID]

      Direction: IN / OUT

      Route:     [R1, R2, ..., RN] (might be useful for some scenarios)

      Service:   [Service ID, Priority, ...] (not clear how to use it,
                 but it seems useful)

      Traffic:   [T0, T1, T2, ..., TN]

   Message:

      Request:  ask for the data

      Reply:    data

      Notice:   for notifications or other purposes

      Policy:   control policy
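   As a non-normative illustration, the following Python sketch shows
   how a traffic record following the data format above might be
   represented and fed into a very simple predictor.  The field names
   and the moving-average model are assumptions made here for clarity,
   not a proposed schema; a real IDN engine would use a standardized
   format and a trained model.

     # Illustrative only: record layout and a naive short-term predictor.
     from dataclasses import dataclass, field
     from typing import List

     @dataclass
     class TrafficRecord:
         start: float                # Time: start of the sampling window
         end: float                  # Time: end of the sampling window
         unit: str                   # e.g. "Mbps"
         sampling_period: float      # seconds between samples
         device_id: str              # Position: [Device ID, Port ID]
         port_id: str
         direction: str              # "IN" or "OUT"
         route: List[str] = field(default_factory=list)
         traffic: List[float] = field(default_factory=list)  # [T0..TN]

     def predict_next(record: TrafficRecord, window: int = 4) -> float:
         """Naive forecast: average of the last `window` samples."""
         recent = record.traffic[-window:]
         return sum(recent) / len(recent)

     sample = TrafficRecord(start=0, end=40, unit="Mbps",
                            sampling_period=10, device_id="R1",
                            port_id="eth0", direction="IN",
                            traffic=[120.0, 135.0, 150.0, 170.0])
     print(predict_next(sample))   # crude prediction for the next period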
2.2.2.  QoS Management

   It is worthwhile to predict traffic changes in order to avoid
   congestion and ensure QoS.  As shown in the following figure, the AI
   system continuously collects link status data from the network.
   This AI system is responsible for two things: monitoring and
   predicting the traffic on each link, and calculating a usable route
   for any pair of nodes according to the prediction and the current
   link status.  Assume that there is a VPN named VPN_S_D from node S
   to node D that passes through S-A-B-C-D.  According to the
   prediction, there will be a huge traffic flow from node A to node C
   within the next 10 minutes.  This traffic will increase the end-to-
   end delay from S to D so that the QoS can no longer be ensured.

               x      x
           A ---- B ---- C          link status    +------------+
         /                 \       =============>  | IDN Engine |
        /                   \                      +------------+
    S ---- I ---- J ---- K ---- D
        \                   /
         \                 /
           O ---- P ---- Q

   There are at least two solutions.  One is to modify the
   configuration of the object itself to avoid the potential
   congestion; for example, we modify the VPN_S_D route from S-A-B-C-D
   to S-I-J-K-D.  The other is to restrict the transmission of non-
   object traffic so as to protect the object's QoS; for example, we
   increase the reserved bandwidth of VPN_S_D, or modify the route of
   non-object flows from S-A-B-C-D to S-I-J-K-D so that most of the
   traffic will not affect VPN_S_D.

   Here we may face some challenges.  Challenge 1 is that the AI
   prediction and the autonomic decision must respond quickly: the
   whole process has to be finished before the congestion happens,
   otherwise the AI system is meaningless.  The question is how to
   implement such a quick response.  Challenge 2 is whether there are
   existing protocols that can support high-frequency measurement,
   because the AI system needs to be fed with continuous link status
   data, and the real-time data need to be captured frequently,
   otherwise the route change will be worthless.  Protocols that
   support high-frequency measurement and data collection may become
   one of our focus points.

   The process, data format, and message needs are listed below; an
   informal sketch follows the list.

   Process:  1. Data capture (e.g., traffic samples of a physical/
   logical port); 2. Model training; 3. Real-time data capture and
   input; 4. Output percentages; 5. Fix errors and go back to step 3.

   Data Format:

      Time:      [Timestamp, Value Type (Delay/Packet Loss/...), Unit,
                 Number of Values, Sampling Period]

      Position:  [Link ID, Device ID]

      Value:     [V0, V1, V2, ..., VN]

   Message:

      Request:  ask for the data

      Reply:    data

      Notice:   for notifications or other purposes

      Policy:   control policy
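   The following non-normative sketch illustrates the first solution
   above: an alternate path is chosen when the predicted utilization of
   any link on the current path exceeds a threshold.  The function and
   threshold value are assumptions for illustration only.

     # Illustrative only: reroute VPN_S_D when congestion is predicted.
     from typing import Dict, List, Tuple

     Link = Tuple[str, str]

     def pick_route(current: List[str], backup: List[str],
                    predicted_util: Dict[Link, float],
                    threshold: float = 0.8) -> List[str]:
         """Return `backup` if the prediction shows congestion on `current`."""
         links = list(zip(current, current[1:]))
         if any(predicted_util.get(l, 0.0) > threshold for l in links):
             return backup
         return current

     prediction = {("A", "B"): 0.95, ("B", "C"): 0.90}  # from the IDN engine
     route = pick_route(["S", "A", "B", "C", "D"],
                        ["S", "I", "J", "K", "D"], prediction)
     print(route)   # -> ['S', 'I', 'J', 'K', 'D']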
2.2.3.  Deep Reinforcement-Learning Control of the Network

   Recently, important breakthroughs have been achieved in the area of
   Deep Reinforcement Learning (DRL) [REF1] architectures, where agents
   can be trained online to operate complex environments and achieve
   quasi-optimal configurations.  In this context, a DRL agent can be
   used to control the routing of the network and achieve the target
   policy set by the administrators (e.g., [REF2], [REF3], [REF4]).

   The following figure describes a common architecture of a DRL agent
   operating a network.  The agent acts upon the network (action) by
   changing the configuration; this results in the network changing its
   fundamental state (e.g., a different per-link utilization and a
   different traffic load).  Finally, the reward function is defined by
   the operator and represents the target performance (e.g., load-
   balancing the traffic in the network).  The agent will learn how to
   act upon the network to maximize the expected reward.

                        +---------------+
            +---------->|               |
            |           |     Agent     |----------------+
            |  +------->|               |                |
            |  |        +---------------+                |
            |  |                                         |
          State|                                         |
            |  Reward Function (Policy)                Action
            |  |                                         |
            |  |        +---------------+                |
            |  +--------|               |                |
            +-----------|    Network    |<---------------+
                        |               |
                        +---------------+

   The main operational advantages of DRL agents with respect to
   existing optimization techniques are:

   1.  DRL agents are able to learn and generalize from past experience
       to provide solutions to unseen scenarios.  This is not possible
       using existing optimization techniques, which do not learn from
       the past.

   2.  Once trained, either offline or online, DRL agents can optimize
       in one single step.  On the contrary, existing optimization
       techniques need to be run iteratively each time a new scenario
       is found, for instance when a link goes down or the traffic
       changes in a significant way.  It is worth noting that a common
       practice is to run such techniques in advance for common
       scenarios and store their resulting configurations; however, it
       is very complex to consider all the potential scenarios.

   3.  DRL agents see the network as a black box and do not need any
       prior assumptions about the system, whereas heuristics, which
       are very commonly used in optimization strategies, are tailored
       to the problem they are trying to optimize.  With DRL, an
       operator only needs to change the reward function to implement a
       different target network policy.

   In what follows, we describe the process, data format, and messages
   needed, assuming a DRL agent that seeks to load-balance the traffic
   of the network, that is, to minimize the maximally loaded link.
   This is a very common optimization strategy.

   Process:  1. Act upon the network by changing the routing
   configuration, for instance using a standard mechanism; 2. Receive
   the state of the network, that is, the per-link delay and the
   current traffic load; 3. Compute the reward as a function of the
   state; 4. Deep Reinforcement Learning training; 5. Go back to
   step 1.

   Data Format:

      (state)  Per-Link Utilization: [link id, utilization, averaging
               time]

      (action) Change of the routing configuration.  This can be done
               through the SDN controller and/or other standard
               mechanisms.

      (reward) An algorithm that takes the state as input and outputs a
               value that represents how close we are to the target
               policy set by the operator.  More about this can be
               found in the next section.

   Messages:

      State:   Measure the per-link utilization

      Action:  Change the routing configuration

2.2.3.1.  The Reward Function as the Network Policy

   The agent seeks to maximize the expected reward, which represents
   the target policy that the agent will aim to achieve and configure
   on the network.  In this context, the reward function is the
   mathematical representation of the target network policy.
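   As an informal illustration, the load-balancing policy used in this
   section could be encoded as a reward function as simple as the
   following Python sketch, where the state is the per-link utilization
   vector.  The function name and the dictionary representation are
   assumptions made here for clarity only.

     # Illustrative only: reward for a load-balancing policy, defined
     # as the negative of the maximum per-link utilization, so the
     # agent is rewarded for keeping the most loaded link as free as
     # possible.
     from typing import Dict

     def load_balancing_reward(per_link_util: Dict[str, float]) -> float:
         return -max(per_link_util.values())

     state = {"S-A": 0.30, "A-B": 0.95, "B-C": 0.90, "S-I": 0.20}
     print(load_balancing_reward(state))   # -> -0.95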
The entire architecture, however, includes a set of different pieces
   that may come from different vendors but must interoperate; these
   pieces are the agent itself, the reward function, and the state.
   This requires the following standardization efforts:

   1.  The reward function and its translation from the human-readable
       target network policy.  Operators may want to use DRL agents
       from different vendors, and these agents need to understand the
       reward function.  Please note that the reward function depends
       on the representation of the state.

   2.  The state, which includes monitoring information about the
       network, such as the per-link utilization or the traffic load.
       Since the state is an input of the agent and is used in the
       reward function, there is a need for a standard representation
       so that the different pieces can interoperate.

2.2.4.  QoE Management via Supervised Learning

   Networks can measure low-level metrics, such as delay, jitter, and
   losses.  However, users perceive the performance of the network
   through QoE metrics, such as Mean Opinion Scores (MOS).
   Unfortunately, QoE metrics typically cannot be measured directly
   over the wire and, as such, require the subjective views of the
   users.  The challenge is then to operate the network based on low-
   level metrics while fulfilling QoE metrics that cannot be measured.
   One of the main reasons behind this challenge is that the
   relationship between the low-level metrics and the QoE metrics is
   very complex, i.e., multi-dimensional and non-linear.

   +-------------+              +----------------------+
   | Supervised  |   Extract    | Relation between QoE |
   | Learning    +--Knowledge-->| and low-level network+------+
   |             |              | metrics              |      |
   +------^------+              +----------------------+      |
          |                                                   |
        Learn                                      Install Knowledge
          |                                                   |
   +------+-------------------+            +------------------v----+
   | Network Analytics        |            |                       |
   | (including Ground Truth) |            |  Network Management   |
   |                          |            |                       |
   +------+-------------------+            +-----------+-----------+
          ^                                            |
          |              +-------------+               |
          |              |             |               |
          +---Monitor----+   Network   |<---Operate----+
                         |             |
                         +-------------+

   For this, a well-established technique (e.g., see [REF5] and the
   references therein) is to follow the architecture depicted in the
   figure above.  First, the low-level network metrics are measured
   using telemetry, and this information is stored in the Network
   Analytics platform.  In addition, users and/or applications are
   polled to obtain QoE metrics of the network.  The dataset containing
   both the low-level metrics and the QoE metrics is considered the
   ground truth.

   By means of supervised learning (e.g., deep neural networks), we aim
   to learn the relation between the low-level metrics and the QoE
   metrics.  As an example, we aim to learn the relation between the
   amount of losses on different wireless links, the SNR, and the
   utilization on one hand and the perceived MOS on the other.  It has
   been shown that such a relationship is typically non-linear and
   multi-dimensional and, as such, can be captured by a neural network.
   This relationship is the knowledge that we extract from the ground
   truth, and it is used by the Network Management (NM) module.  By
   means of this knowledge, the NM module can understand how to operate
   the network based on low-level metrics (e.g., keep losses below a
   certain threshold) to fulfill QoE requirements.
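   A minimal, non-normative sketch of this supervised-learning step is
   given below, assuming scikit-learn as an example library.  The
   feature choice (loss rate, SNR, utilization), the tiny hand-made
   dataset, and the network size are illustrative assumptions; a real
   system would train on the ground-truth dataset described above.

     # Illustrative only: learn a mapping from low-level metrics to MOS.
     from sklearn.neural_network import MLPRegressor

     # Each row: [packet_loss_rate, snr_db, utilization]
     X = [[0.00, 30.0, 0.20],
          [0.01, 25.0, 0.50],
          [0.05, 15.0, 0.80],
          [0.10, 10.0, 0.95]]
     y = [4.5, 4.0, 2.5, 1.5]          # subjective MOS from user polling

     model = MLPRegressor(hidden_layer_sizes=(16, 16),
                          max_iter=5000, random_state=0)
     model.fit(X, y)

     # The NM module can now estimate QoE from measurable metrics only.
     print(model.predict([[0.02, 20.0, 0.60]]))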
2.2.5.  TBD

3.  Measurement and Data Format

   TBD

3.1.  Measurement Tools and Methods

   Modern AI algorithms are mostly data-driven, which means that the AI
   engine needs plenty of data to be fed and updated.  In other words,
   data of higher frequency and accuracy are required.  The high
   scalability requirement calls for distributed measurement tools that
   can provide such abilities; the traditional methods, even with
   improvements, can hardly support this.

   Firstly, current measurement methods are mostly service-oriented.
   For example, a voice service requires the end-to-end delay and
   jitter to be kept at a low level.  Beyond that, the AI engine may
   need more data from both the network and other sources; for example,
   QoE and identity information may lead the AI engine to make
   different decisions.  The current measurement tools and data models
   cannot support this.  Thus, potentially usable tools and methods,
   such as high-frequency and high-precision measurement, new KPIs, and
   so on, may need to be developed.

   Secondly, current measurement methods mostly cannot support high-
   frequency measurement.  Even when they can, the data feedback scheme
   is commonly closed.  The word "closed" means that the measured data
   are commonly sent to the device that launches the measurement action
   rather than to the data consumer (the AI engine).  Future
   measurement tools require more programmability, especially in the
   data feedback scheme.

   TBD.

3.2.  Data Format Analysis

   There is a huge gap between current network data and algorithm data.
   Network data, such as IP addresses, delay, link utilization, etc.,
   are mostly semantic, meaning that each datum actually describes a
   specific physical or logical entity.  For example, an IP address
   denotes a certain location or a certain host in the network.
   However, the input and output data of an algorithm are usually non-
   semantic, which means they do not correspond to a specific concept,
   action, or device that can be found in the network.  This stems from
   the fundamental design of AI algorithms and can hardly be changed in
   the short term.

   Another issue is that the AI engine potentially needs to obtain data
   from external sources.  For data that can be provided in a one-off
   manner, this is easily solved per application.  For data that need
   to be provided continuously (e.g., real-time external data), it is
   necessary to define a data format that satisfies the algorithm.
   Similarly, the output of the algorithm may need to be translated
   into a specific format that the downstream devices can execute;
   otherwise, it is hard to build the fully autonomic closed loop of
   network management.  In other words, the data aggregation process is
   important, and it is valuable to build a bridge between network data
   and algorithm data.

   TBD.
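   To make the gap described in this section concrete, the following
   non-normative Python sketch converts semantic network records into
   the kind of numeric feature vector an algorithm expects, and maps a
   numeric output back into a network-level action.  The encoding, the
   link index, and the field names are invented here purely for
   illustration and are not a proposed format.

     # Illustrative only: bridge between semantic network data and the
     # numeric vectors consumed/produced by an AI algorithm.
     link_index = {"S-A": 0, "A-B": 1, "B-C": 2, "S-I": 3}  # hypothetical

     def to_feature_vector(measurements):
         """measurements: {link: {"delay_ms": x, "util": y}} -> flat vector."""
         vec = [0.0] * (2 * len(link_index))
         for link, m in measurements.items():
             i = link_index[link]
             vec[2 * i] = m["delay_ms"]
             vec[2 * i + 1] = m["util"]
         return vec

     def to_action(output_vector):
         """Map the algorithm's numeric output back to a semantic action."""
         worst = max(range(len(output_vector)), key=lambda i: output_vector[i])
         link = [k for k, v in link_index.items() if v == worst][0]
         return {"action": "reroute-around", "link": link}

     x = to_feature_vector({"A-B": {"delay_ms": 12.0, "util": 0.95},
                            "S-I": {"delay_ms": 5.0, "util": 0.20}})
     print(to_action([0.1, 0.9, 0.3, 0.0]))   # -> reroute around "A-B"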
4.  Benchmarking Framework

   A standard benchmarking framework is required to assess the quality
   of an AI mechanism when it is used to resolve a specific problem in
   the network management and control area.  It comprises a reference
   set of procedures, methods, models, and boundary values that *must*
   be enforced on the benchmarked mechanism, so that its operation can
   be compared to other mechanisms and users can easily understand what
   to expect from each one.

   Moreover, both the metrics included as a reference within the
   benchmarking framework and the results obtained from its application
   to a new mechanism must follow a standard format.  Therefore, the
   standard formats must be enforced on all data, whether introduced
   into the benchmarking application or system (consumed) or obtained
   from its application (produced).

   A common and decentralized "data market" can (and would) arise from
   the inclusion, dependency, and general relation of all data,
   provided they are represented using the same concepts (ontology) and
   the standard format mentioned here.  As a reference, it is worth
   mentioning that a similar approach has already been applied to
   genome and protein data to build standardized and easily
   transferable data banks [PMJ1][PMJ2][PMJ3], and these have proven to
   be key enablers in their respective work areas.

   The initial scope of input/output data would be the datasets, but
   also the new knowledge items stated as a result of applying the
   benchmarking procedures defined by the framework.  These can be
   collected together to build a database of benchmark results, or
   simply compared with other existing entries in the database to know
   where the solution just evaluated stands.  This increases the
   usefulness of IDNET.
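   As an informal illustration only, one possible shape for a benchmark
   result record in such a common format is sketched below.  The field
   names, mechanism and dataset identifiers, and metric set are
   hypothetical; the actual schema would be defined by the framework.

     # Illustrative only: a benchmark result record in a shared format,
     # so that results from different AI mechanisms can be stored and
     # compared in a common database.
     import json

     result = {
         "mechanism": "example-drl-router-v1",      # hypothetical name
         "problem": "traffic-load-balancing",
         "dataset": "topology-25node-sample",       # hypothetical dataset id
         "metrics": {
             "max_link_utilization": 0.71,
             "decision_time_ms": 12.4,
             "training_time_s": 3600
         },
         "boundary_values": {"max_link_utilization": 0.80}
     }
     print(json.dumps(result, indent=2))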
The 577 information may involve but not limited the content listed in 578 following table. The Terminal/User Device stands for the device that 579 produces and consumes data, which may include PC, smart phone, 580 datacenter, content storage server, cloud and etc. Some of the data 581 produced by terminal/user devices is measurable. This type of data 582 will be captured by the measurement function. Other types of data 583 that cannot be measured directly by network measurement functions is 584 represented as 3rd party datasets, which hopefully can be utilized in 585 the future via 3rd party integration at the intelligence layer. 587 ----------------------------------------------------------------- 588 Type Content 589 ----------------------------------------------------------------- 590 Network Data Delay, Jitter, Packet Lose Rate, 591 Link Utilization, ... 592 Device Data Device Configuration, VPN Configuration, 593 Slicing Configuration, ... 594 User Data QoE Feedback, User Information, ... 596 Data Packet Packet Sample, Packet Character, ... 598 Other Type TBD 599 ----------------------------------------------------------------- 601 The middle layer is Control Layer, which contains Control Function, 602 Dataset Aggregation (Function) and 3rd Party Dataset. The control 603 function stands for entities that can control, configure and operate 604 devices, especially network devices. In SDN, controller and 605 orchestrator are control functions. Classical network devices such 606 as routers integrate the forwarding and control functions (although 607 as of today not with many instances of intelligent control 608 functions). Classical routers therefore include functions from two 609 layers. We foresee that the control function will most likely only 610 perform intelligent inference, but not learn. For example, to 611 execute neural networks, but do not train them. This is only an 612 assumption at this time though and may prove to be wrong in the 613 future when training becomes something easier defined into the 614 control layer. 616 The aggregated dataset function owns the ability to gather and tidy 617 the data. The database or database cluster is the typical example. 618 Some of the control devices, such as SDN controller, integrate this 619 function. Distributed instances aggregate data have also been 620 defined. The network data can be directly sent back to the control 621 function in support of network policies. For example, the controller 622 can adjust the flow table according to the local cache which collects 623 the network data periodically from the devices in its controlled 624 area. The 3rd party dataset involves the data that may be provided 625 by all kinds of applications or services. For example, the content 626 provider may own social contact data and the map service provider may 627 own the geographic data. This information does not belong to the 628 network but could be very helpful for intelligent analytics and 629 decision making in the network - which is why we device in the 630 architecture the ability to communicate it between 3rd parties and 631 the network. 633 The high layer, which is also the main body of IDN, is the 634 Intelligence Layer. This layer is commonly deployed in the 635 datacenter, or large scale computing centre that can support massive 636 storage and computing resources. To the south direction, there are 637 two interfaces which provides external data (3rd party data oriented) 638 and internal data (network data oriented) access. 
5.2.  Measurement

   In IDN, the intelligent system (or database) needs frequent and
   repeated measurements to obtain the link information.  A fast
   measurement and feedback protocol is needed to meet the requirements
   of measurement and data collection; it may be based on SNMP or be an
   entirely new protocol.  The intelligent system needs massive amounts
   of data to be fed and supported in formulating policies and
   decisions.  Therefore, the measurement must satisfy the data
   requirements of IDN.  Firstly, there may be higher-level
   requirements on the existing measurement technology; high timeliness
   is one of the potential points.  The IDN's control function needs
   accurate, global, and highly real-time network data, whereas current
   measurement technology can satisfy at most two of these three
   characteristics at the same time.  Secondly, the IDN may need more
   kinds of data types to be measured: not only the delay, jitter, and
   packet loss rate, but also the link utilization and other necessary
   parameters.
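   The following non-normative sketch illustrates the kind of "open",
   high-frequency feedback discussed in Section 3.1 and required here:
   a collector samples link utilization at a fixed period and pushes
   the samples directly to the data consumer (e.g., the IDN engine)
   instead of keeping them on the measuring device.  The transport
   (plain UDP/JSON), the addresses, and the sampling period are
   assumptions for illustration, not a protocol proposal.

     # Illustrative only: push link-utilization samples at a fixed,
     # high frequency directly to the data consumer over UDP.
     import json, socket, time

     def read_link_utilization():
         # Placeholder for a real counter read (e.g., from the
         # forwarding plane); returns {link_id: utilization in [0, 1]}.
         return {"eth0": 0.42, "eth1": 0.87}

     def run_collector(consumer=("192.0.2.10", 9999), period_s=0.1):
         sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
         while True:
             sample = {"ts": time.time(), "util": read_link_utilization()}
             sock.sendto(json.dumps(sample).encode(), consumer)
             time.sleep(period_s)   # 10 samples per second per device

     # run_collector()   # would stream samples to the consumer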
5.3.  Data Representation, Transport and Aggregation

   The data representation is significant.  Most current AI algorithms
   were born in the pattern recognition area, especially image
   processing.  The advantage of these algorithms is that they are very
   good at dealing with complex problems, especially mining and
   modeling the hidden relationships among non-semantic data.  One of
   the disadvantages is that almost all of these algorithms require the
   training data to have a high degree of uniformity.  Fortunately,
   image files inherently have this character: all images can be
   expressed as uniform binary vectors or can easily be transformed
   into a uniform format.  This condition is hardly satisfied in the
   network area.

   A uniform data format is therefore required, one that supports the
   justification, correlation, and affiliation of the data, so that the
   AI algorithm can achieve its best performance in mining the valid
   patterns hidden in the data.  Since the intelligent system is data-
   driven, and the data sources come from different kinds of vendors
   and device types, the data representation SHALL be consistent so
   that the intelligent system can merge the data and perform the
   analysis/learning.  Also, the data collection interface might need
   to be standardized so that the interface is able to obtain the data
   the intelligent system needs.

   Moreover, it is important to standardize the policy representation.
   Since there may be multiple SDN controller systems, a readable and
   uniform policy representation is valuable to improve policy
   deployment efficiency and simplify the communication between
   controllers in the East-West direction.

5.4.  Legacy Device Route Control

   Similar to the IPv4/IPv6 transition, IDN potentially faces a legacy
   problem, which means that new devices and functions will have to
   work together with legacy devices.  Therefore, it may be necessary
   to design control protocols that solve these transition problems.

5.5.  TBD

   TBD

6.  Security Considerations

   When security-relevant decisions are made based on the use of
   intelligent analytics or automated intelligent decision making, care
   must be taken to understand the new security challenges.  For
   example, when more intelligent decisions are enabled through the
   collection of ever more data, it needs to be analyzed how that
   potentially makes it easier for attackers to feed data that derails
   the intelligent system's ability to distinguish good behavior from
   bad.

   TBD

7.  IANA Considerations

   This document requires no IANA actions.

8.  Acknowledgements

   TBD

9.  References

9.1.  Normative References

   [ISO_IEC10589]
              "Intermediate system to Intermediate system intra-domain
              routeing information exchange protocol for use in
              conjunction with the protocol for providing the
              connectionless-mode Network Service (ISO 8473)", ISO/IEC
              10589:2002, Second Edition, November 2002.

   [RFC1195]  Callon, R., "Use of OSI IS-IS for routing in TCP/IP and
              dual environments", RFC 1195, DOI 10.17487/RFC1195,
              December 1990, <https://www.rfc-editor.org/info/rfc1195>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC5301]  McPherson, D. and N. Shen, "Dynamic Hostname Exchange
              Mechanism for IS-IS", RFC 5301, DOI 10.17487/RFC5301,
              October 2008, <https://www.rfc-editor.org/info/rfc5301>.

   [RFC5304]  Li, T. and R. Atkinson, "IS-IS Cryptographic
              Authentication", RFC 5304, DOI 10.17487/RFC5304, October
              2008, <https://www.rfc-editor.org/info/rfc5304>.

   [RFC5305]  Li, T. and H. Smit, "IS-IS Extensions for Traffic
              Engineering", RFC 5305, DOI 10.17487/RFC5305, October
              2008, <https://www.rfc-editor.org/info/rfc5305>.

   [RFC5308]  Hopps, C., "Routing IPv6 with IS-IS", RFC 5308,
              DOI 10.17487/RFC5308, October 2008,
              <https://www.rfc-editor.org/info/rfc5308>.

9.2.  Informative References

   [PMJ1]     , .

   [PMJ2]     , .

   [PMJ3]     , .

   [REF1]     "Human-level control through deep reinforcement
              learning", Nature, 518(7540), pp. 529-533, 2015.

   [REF2]     "A Deep-Reinforcement Learning Approach for Software-
              Defined Networking Routing Optimization", arXiv preprint
              arXiv:1709.07080, September 2017.

   [REF3]     "A roadmap for traffic engineering in SDN-OpenFlow
              networks", Computer Networks, 71(C):1-30, October 2014.

   [REF4]     "Packet routing in dynamically changing networks: A
              reinforcement learning approach", in Advances in Neural
              Information Processing Systems, pages 671-678, 1994.
   [REF5]     "A machine learning approach to classifying YouTube QoE
              based on encrypted network traffic", Multimedia Tools and
              Applications, January 2017.

Authors' Addresses

   Shen Yan
   Huawei
   Beiqing
   Beijing, Haidian  100095
   China

   Email: yanshen@huawei.com


   Pedro Martinez-Julia
   NICT/Japan

   Email: pedro@nict.go.jp


   Albert Cabellos-Aparicio
   Technical University of Catalonia

   Email: albert.cabellos@gmail.com