2 Network Working Group S. Bush 3 Internet-Draft A. Kulkarni 4 Expires: December 30, 2002 N. Smith 5 GE GRC 6 July 2002 8 In-Line Network Management Prediction 9 draft-bush-inline-predictive-mgt-00 11 Status of this Memo 13 This document is an Internet-Draft and is in full conformance with 14 all provisions of Section 10 of RFC2026.
16 Internet-Drafts are working documents of the Internet Engineering 17 Task Force (IETF), its areas, and its working groups. Note that 18 other groups may also distribute working documents as Internet- 19 Drafts. 21 Internet-Drafts are draft documents valid for a maximum of six months 22 and may be updated, replaced, or obsoleted by other documents at any 23 time. It is inappropriate to use Internet-Drafts as reference 24 material or to cite them other than as "work in progress." 26 The list of current Internet-Drafts can be accessed at http:// 27 www.ietf.org/ietf/1id-abstracts.txt. 29 The list of Internet-Draft Shadow Directories can be accessed at 30 http://www.ietf.org/shadow.html. 32 This Internet-Draft will expire on December 30, 2002. 34 Copyright Notice 36 Copyright (C) The Internet Society (2002). All Rights Reserved. 38 Abstract 40 In-line network management prediction exploits fine-grained models of 41 network components, injected into the communication network, to 42 enhance network performance. Accurate and fast prediction of local 43 network state enables more intelligent network control resulting in 44 greater performance and fault tolerance. Accurate and fast 45 prediction requires algorithmic capability. Active and Programmable 46 Networking have enabled algorithmic information to be dynamically 47 injected into the network allowing enhanced capability and 48 flexibility. One of the new capabilities is enhanced network 49 management via in-line management code, that is, management 50 algorithms embedded within intermediate network devices. In-line 51 network management prediction utilizes low-level algorithmic 52 transport capability to implement low-overhead predictive management. 54 A secondary purpose of this document is to provide general 55 interoperability information for the injection of general purpose 56 algorithmic information into network devices. This document may help 57 in some manner to serve as a temporary bridge between Internet 58 Protocol and Active and Programmable Network applications. This 59 may stimulate some thought as to the content and format of 60 "standards" information potentially required for Active Networking. 61 Management of the Internet Protocol and Active and Programmable 62 Networking is vital. In particular, coexistence and interoperability 63 of active networking and Internet Protocol management is specified 64 in order to implement the injection of algorithmic information into a 65 network. 67 Implementation Note 69 This document proposes a standard that assumes the capability of 70 injecting algorithmic information, i.e. executable code, into the 71 network. Active or programmable capability, as demonstrated by 72 recent implementation results from the DARPA Active Network Program, 73 Active Internet Protocol [8] or recent standards in Programmable 74 Networking [9], help meet this requirement. While in-line predictive 75 management could be standardized via a vehicle other than active 76 packets, we choose to use active networking as a convenient 77 implementation for algorithmic change within the network. 79 1. Introduction 81 This work in progress describes a mechanism that allows a 82 distributed model, injected into a network, to predict the state of 83 the network. The concept is illustrated in Figure 1. The state to 84 be predicted is modeled within each actual network node. Thus, a 85 distributed model, shown in the top plane, is formed within the 86 actual network, shown in the bottom plane. 
The top plane slides 87 ahead of wallclock time, although in an asynchronous manner. This 88 means that each simulated node MAY have its own notion of simulation 89 time. 91 ________________________________________________ 92 / /---------o... / 93 / o----o... / 94 / /------o---o... / 95 /_Distributed Network Model Plane_______________/ 96 (spatially located inside the actual network below, but 97 temporally located ahead of the actual network) 99 -------------------------------------------------------> 100 Wallclock 101 ________________________________________________ 102 / / 103 / /---------o... / 104 / o----o... / 105 / /------o---o... / 106 /_Actual Network Plane__________________________/ 108 Figure 1: The Distributed Model Inside the Network. 110 This concept opens up a set of interoperability issues which do not 111 appear to have been fully addressed. How can distributed model 112 components be injected into an existing network? In-line models are 113 injected into the network assuming the overlay environment shown in 114 Figure 2. In-line models in Figure 1 are designed to run as fast as 115 possible in order to maintain a simulation time that is ahead of 116 wallclock, communicating via virtual messages with future timestamps. 117 What if messages are processed out-of-order because they arrive out- 118 of-order at a node? How long do you wait (and slow your simulation 119 down) to make sure they are not out-of-order? This specification 120 provides a framework that allows synchronization to be handled in any 121 manner; e.g. via a conservative (blocking) or optimistic (Time-Warp) 122 manner within the network. Additionally, how can the models verify 123 and maintain a reasonable amount of accuracy? A mechanism is provided 124 in this document to allow local verification of prediction accuracy. 125 Attempts to adjust accuracy are implementation dependent. How do 126 independent model developers allow their models to work coherently in 127 this framework? Model operation is implementation dependent, however, 128 this specification attempts to make certain that model messages will 129 at least be transported in an inter-operable manner, both across and 130 WITHIN, intermediate network devices. How does one publish their 131 model descriptions? How are predicted values represented and 132 accessed? Suggestion solutions for these questions are presented in 133 this document as well. 135 1.1 Overview 137 In-line predictive network management, which enables greater 138 performance and fault tolerance, is based upon algorithmic 139 information injected into a network allowing system state to be 140 predicted and efficiently propagated throughout the network. This 141 paradigm enables management of the network with continuous projection 142 and refinement of future state in real time. In other words, the 143 models injected into the network allow state to be predicted and 144 propagated throughout the network enabling the network to operate 145 simultaneously in real time and in the future. The state of traffic, 146 security, mobility, health, and other network properties found in 147 typical Simple Network Management Protocol (SNMP) [2] Management 148 Information Bases (MIB) is available for use by the management 149 system. To enable predictive management of applications, new MIBs 150 will have to be defined that hold both current values as well as 151 values expected to exist in the future. 
153 The AgentX [5] protocol begins to address the issue of independent 154 SNMP agent developers dynamically and seamlessly interconnecting 155 their agents into a single MIB under the control of a master agent. 156 AgentX specifies the protocol between the master and sub-agents 157 allowing the sub-agents to connect to the master agent. The AgentX 158 specification complements this work-in-progress, namely, in-line 159 network management prediction. The in-line network management 160 prediction specification provides the necessary interface between 161 agent functionality injected remotely via an Active Packet and 162 dynamically 'linked' into a MIB. The agent code may enhance an 163 existing MIB value by allowing it to return predicted values. 164 Otherwise, coexistence with AgentX is SUGGESTED. The in-line network 165 management prediction specification enables faster development of MIB 166 modules with more dynamic algorithmic capability because Active and 167 Programmable networks allow lower-level, secure, dynamic access to 168 network devices. This has allowed injection of predictive capability 169 into selected portions of existing MIBs and into selected portions of 170 active or programmable network devices resulting in greater 171 performance and fault tolerance. 173 1.2 Outline 175 This document proposes standards for the following aspects of in-line 176 predictive management: 178 o SNMP Object Time Series Representation and Manipulation 180 o Common Algorithmic Description 182 o Multi-Party In-line Predictive Model Access and Control 184 o Common Framework for Injecting Models into the Network 186 o Model Interface with the Framework 188 The high-level components of this proposed standard are shown in 189 Figure 2. The Active Network Framework [10] is a work in progress. 190 In-line Predictive Management is the subject of this document. The 191 Internet Protocol and SNMP are well-known. 193 Figure 2 shows the various ways in which in-line predictive 194 management can be used in an active network given an implementation 195 in a particular execution environment. The in-line predictive 196 management application runs as an active application on an active 197 node. The framework is independent of the underlying architecture of 198 the active network, which can take one of two forms. The protocol 199 stack on the left shows a fully active network in which the Node 200 Operating System runs one or more Execution Environments . Multiple 201 active applications may execute in any Execution Environment. The 202 protocol stack on the right shows the architecture of an active 203 network overlay over IP. Essentially, the overlay scheme uses the 204 Active Network Encapsulation Protocol (ANEP) [7] as a conduit to use 205 the underlying IP network. The predictive management application 206 executes alongside the other active applications and interacts with 207 any managed active applications to provide their future state. Since 208 the predictive management application requires only the execution 209 environment to run in, it is independent of whether the active 210 network is implemented as an overlay or it is available as a fully 211 active network. 
213 +-------+-------+-----------+ +--------+---------+-------------+ 214 |Active |Active | In-line | | Active | Active | In-line | 215 | Appl | Appl | Predictive| | Appl | Appl | Predictive | 216 | | | Management| | | | Management | 217 +-------+-------+-----------+ +--------+---------+-------------+ 218 | Active Net EE | | Active Net EE | 219 +---------------------------+ +--------------------------------+ 220 | NodeOS | | Node OS | 221 +---------------------------+ +------------------+-------------+ 222 | ANEP | | ANEP | 223 +---------------------------+ +------------------+-------------+ 224 | Internet Protocol| SNMP | 225 +------------------+-------------+ 226 Active Network over IP 228 Figure 2: Relationship Among Underlying Assumptions about the 229 Predictive Management Environment. 231 The next section provides basic definitions. Following that, the 232 goals of this proposed standard are laid out. The remainder of the 233 document develops into progressively more detail defining 234 interoperability among algorithmic in-line network management 235 prediction components. Specifically, predictive capability requires 236 careful handling of the time dimension. Rather than change the SNMP 237 standard, a tabular technique is suggested. Then, in order to 238 simplify design of predictive management objects, an extension to 239 Case Diagrams is suggested for review and comment. This is followed 240 by the specification of a distributed predictive framework. It is 241 understood that multiple distributed predictive mechanisms exist, 242 however, this framework is presented for comment and review because 243 it contains all the necessary elements. Finally, the detailed 244 interface between the active or programmable code and IP standard 245 interfaces is presented. 247 1.3 Definitions 249 The following acronyms and definitions are helpful in understanding 250 the general concept of predictive network management. 252 o In-line 254 Located within, or immediately adjacent to, the flow of network 255 traffic. 257 o Predictive Network Management 259 The capability of reliably predicting network events or the state 260 of the network at a time greater than wall-clock time. 262 o Fine-Grained Models 264 Small, light-weight, executable code modules that capture the 265 behavior of a network or application component to enable 266 predictive network management. 268 o Algorithmic Information 270 Information, in the form of algorithms contained inside executable 271 code, as opposed to static, non-executable data. Depending upon 272 the complexity of the information to be transferred, an 273 algorithmic form, or an optimal tradeoff between algorithmic and 274 non-algorithmic form can be extremely flexible and efficient. 276 o Non-Algorithmic Information 278 Information that cannot be executed. Generally requires a highly 279 structured protocol to transfer with well-defined code pre- 280 installed at all points in route including source and destination. 282 o Small-State 284 Information caches that can be created at network nodes, intended 285 for use by executable components of the same application. 287 o Global-State 289 Information caches created at network nodes, intended to be used 290 by executable components of different applications. 292 o Multi-Party In-line Predictive Management Model 294 An in-line predictive management model comprised of multiple in- 295 line algorithmic models that are developed, installed, utilized, 296 and administered by multiple domains. 
298 The following acronyms and definitions are useful in understanding 299 the details of the specific predictive network management framework 300 described in this document. 302 o A (Anti-Toggle) 304 Used to indicate an anti-message. The anti-message is initiated 305 by rollback and is used to keep the system within a specific range 306 of prediction accuracy. 308 o AA (Active Application) 310 An active network protocol or service that is injected into the 311 network in the form of active packets. The active packets are 312 executed within the EE. 314 o Active Network 316 A network that allows executable code to be injected into the 317 nodes of the network and allows the code to be executed at the 318 nodes. 320 o Active Packet 322 The executable code that is injected into the nodes of an active 323 network. 325 o Anti-Message 327 An exact duplicate of a virtual message except that the Anti- 328 toggle bit is set. An Anti-message is used to annihilate an 329 invalid virtual message. This is an implementation specific 330 feature relevant to optimistic distributed simulation. 332 o DP (Driving Process) 334 Generates virtual messages. Generally, the DP is implemented as 335 an algorithm that samples network state and transforms the state 336 into a prediction. The prediction is represented by a virtual 337 message. 339 o EE (Execution Environment) 341 The active network execution environment. The environment that 342 resides on active network nodes that executes active packets. 344 o Lookahead 346 The difference between Wallclock and LVT. This value is the 347 distance into the future for which predictions are made. 349 o LP (Logical Process) 351 An LP consists of the Physical Process and additional data 352 structures and instructions which maintain message order and 353 correct operation as a system executes ahead of real time. 355 o LVT (Local Virtual Time) 357 The LP contains a notion of time local to itself known as LVT. A 358 node's LVT may differ from other nodes' LVT and Wallclock. LVT is 359 a local, asynchronous notion of time. 361 o M (Message) 363 The message portion of a Virtual Message is implementation 364 specific. This proposed standard SUGGESTS that the message 365 contents be opaque, however, an SNMP varbind, intended to 366 represent future state, MAY be transported. Executable code may 367 also be transported within the message contents. 369 o NodeOS (Node Operating System) 371 The active network Operating System. The supporting 372 infrastructure on intermediate network nodes that supports one or 373 more execution environments. 375 o PP (Physical Process) 377 A PP is an actual process. It usually refers to the actual process 378 being modeled, or whose state will be predicted. 380 o QS (Send Queue) 382 A queue used to hold copies of messages that have been sent by an 383 LP. The messages in the QS may be sent as anti-messages if a 384 rollback occurs. 386 o Rollback 388 The process of adjusting the accuracy of predictive components due 389 to packets arriving out-of-order or out-of-tolerance. Rollback is 390 specific to optimistic distributed simulation techniques and is 391 thus an implementation specific feature. 393 o RT (Receive Time) 395 The time at which the message value is predicted to be valid. 397 o RQ (Receive Queue) 399 A queue used in the algorithm to hold incoming messages to an LP. 400 The messages are stored in the queue in order by receive time.
402 o SQ (State Queue) 404 The SQ is used as a LP structure to hold saved state information 405 for use in case of a rollback. The SQ is the cache into which 406 pre-computed results are stored. 408 o Tolerance 410 A user-specified limit on the amount of prediction error allowed 411 by an LP's prediction. 413 o TR (Real Time) 415 The current time as a time-stamp within a virtual message. 417 o TS (Send Time) 419 The LVT that a virtual message has been sent. This value is 420 carried within the header of the message. The TS is used for 421 canceling the effects of false messages. 423 o VM (Virtual Message) 425 A message, or state, expected to exist in the future. 427 o Wallclock 429 The current time. 431 1.4 Goals 433 The goals of this document are... 435 o Simplicity 437 This document attempts to describe the minimum necessary elements 438 for in-line management prediction. Model developers should be 439 able to inject models into the network allowing SNMP Object value 440 prediction. Such models should work seamlessly with other 441 predictive models in the network. The goal is to minimize the 442 burden on the model developer while also insuring model 443 interoperability. 445 o Conformance 447 This document attempts conformance with existing standards when 448 and where it is possible to do so. The concept is to facilitate a 449 gradual transition to the active and programmable networking 450 paradigm. 452 o In-line Algorithmically-Based Management 454 This document attempts to introduce the use of in-line algorithmic 455 management information. 457 2. A Common Representation of SNMP Object Time Series for In-line 458 Network Management Prediction 460 SNMP, as currently defined, has a very limited notion of time 461 associated with state information. The temporal semantics are 462 expected to be applied to the state by the applications reading the 463 information. On the other hand, predictive management requires 464 generation, handling and transport of information that understands 465 the temporal characteristics of the state, i.e. whether the 466 information is current, future, or perhaps past information. In 467 other words, capability for handling the time dimension of management 468 information needs to be extended and standardized in some manner. In 469 this section, we propose a mechanism for handling time issues in 470 predictive management that require minimal changes from the SNMP 471 standard. 473 A proposed standard technique for handling the time dimension in 474 predictive state systems is to build the SNMP Object as a Table 475 Object indexed by time. This is shown in the following excerpt from 476 a Load Prediction MIB... 478 . 479 . 480 ~ 481 . 482 . 483 loadPrediction OBJECT IDENTIFIER ::= { loadPredMIB 1 } 485 loadPredictionTable OBJECT-TYPE 486 SYNTAX SEQUENCE OF LoadPredictionEntry 487 MAX-ACCESS not-accessible 488 STATUS current 489 DESCRIPTION 490 "Table of load prediction information." 491 ::= { loadPrediction 1 } 493 loadPredictionEntry OBJECT-TYPE 494 SYNTAX LoadPredictionEntry 495 MAX-ACCESS not-accessible 496 STATUS current 497 DESCRIPTION 498 "Table of Atropos LP prediction information." 
499 INDEX { loadPredictionPort } 500 ::= { loadPredictionTable 1 } 502 LoadPredictionEntry ::= SEQUENCE { 503 loadPredictionID 504 DisplayString, 505 loadPredictionPredictedLoad 506 INTEGER, loadPredictionPredictedCPUTime INTEGER, 507 loadPredictionPredictedTime 508 INTEGER 509 } 511 loadPredictionID OBJECT-TYPE 512 SYNTAX DisplayString 513 MAX-ACCESS read-only 514 STATUS current 515 DESCRIPTION 516 "The LP identifier." 517 ::= { loadPredictionEntry 1 } 519 loadPredictionPredictedLoad OBJECT-TYPE 520 SYNTAX INTEGER (0..2147483647) 521 MAX-ACCESS read-only 522 STATUS current 523 DESCRIPTION 524 "This is the predicted load on the link." 525 ::= { loadPredictionEntry 2 } 527 loadPredictionPredictedCPUTime OBJECT-TYPE 528 SYNTAX INTEGER (0..2147483647) 529 MAX-ACCESS read-only 530 STATUS current 531 DESCRIPTION 532 "This is the predicted processor time used by a packet 533 on this node." 534 ::= { loadPredictionEntry 3 } 536 loadPredictionPredictedTime OBJECT-TYPE 537 SYNTAX INTEGER (0..2147483647) 538 MAX-ACCESS read-only 539 STATUS current 540 DESCRIPTION 541 "This is the time at which the predicted event will be valid." 542 ::= { loadPredictionEntry 4 } 543 . 544 . 545 ~ 546 . 547 . 549 Figure 3: MIB Structure for Handling Object Values with Predictive 550 Capability. 552 In Figure 4, the result of an SNMP query of the relevant predictive 553 MIB Object is displayed. Because the identifiers are suffixed by 554 time, the object values are sorted temporally. If a client wishes to 555 know the next predicted event after a given time, the 556 query can be formulated as a GET-NEXT with that time appended as the object identifier 557 suffix. The GET-NEXT-RESPONSE will 558 contain the next predicted event along with its time of occurrence. 559 Otherwise, a value outside the table will be returned if no such 560 predicted value yet exists. 562 . 563 . 564 ~ 565 . 566 .
567 loadPredictionTable.loadPredictionEntry.loadPredictionID.1 -> OCTET STRING- (ascii):AN-1 569 loadPredictionTable.loadPredictionEntry.loadPredictionPort.1 -> INTEGER: 3325 571 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedLoad.4847 -> INTEGER: 240 572 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedLoad.20000 -> INTEGER: 420 573 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedLoad.40000 -> INTEGER: 460 574 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedLoad.60000 -> INTEGER: 497 575 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedLoad.80000 -> INTEGER: 540 576 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedLoad.100000 -> INTEGER: 580 577 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedLoad.120000 -> INTEGER: 619 578 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedLoad.140000 -> INTEGER: 660 580 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedTime.4847 -> INTEGER: 4847 581 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedTime.20000 -> INTEGER: 20000 582 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedTime.40000 -> INTEGER: 40000 583 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedTime.60000 -> INTEGER: 60000 584 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedTime.80000 -> INTEGER: 80000 585 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedTime.100000 -> INTEGER: 100000 586 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedTime.120000 -> INTEGER: 120000 587 loadPredictionTable.loadPredictionEntry.loadPredictionPredictedTime.140000 -> INTEGER: 140000 589 loadPredictionTable.loadPredictionEntry.loadPredictionCurrentLoad.1 -> INTEGER: 15949 590 loadPredictionTable.loadPredictionEntry.loadPredictionCurrentTime.1 -> INTEGER: 25639 591 . 592 . 593 ~ 594 . 595 . 597 Figure 4: Output from a Query of the MIB Structure for Handling 598 Object Values with Predictive Capability. 600 This allows SNMP GET-NEXT operations from a client to locate an event 601 nearest to the requested time as well as search in temporal order for 602 next predicted events. 604 3. A Common Algorithmic Description 606 SNMP, as currently defined, assumes that non-algorithmic descriptive 607 information will be generated, handled, or transported. Prediction 608 requires model development and execution. This proposed standard 609 SUGGESTS that models are to be small, low-overhead, and fine-grained. 610 Fine-grained refers to the fact that the models are locally 611 constrained in time and space. In this section, we propose 612 algorithmic descriptions of management models designed to encourage 613 the understanding and use of in-line predictive management 614 techniques. 616 Case Diagrams[4] provide a well-known representation for the relation 617 of management information to information flow as shown in Figure 5. 618 The details of Case Diagrams will not be discussed here (see the 619 previous reference for more information). The purpose of this 620 section is to illustrate an enhancement to the diagram that allows 621 algorithmic information to be specified, particularly for multi-party 622 predictive model interaction. 624 An excerpt of an SNMP Case Diagram serves to provide a flavor of its 625 current format. The diagram below shows packets arriving from a 626 lower network layer. Some packets are determined to have encoding 627 errors and are discarded. 
The remaining packets flow to the upper 628 layer. 630 ^ Upper Layer 631 | 632 ==+== outPackets 633 | 634 ~ 635 | 636 +==> encodingErrors 637 | 638 ~ 639 | 640 ==+== inPackets 641 ^ 642 | Lower Layer 644 Figure 5: An Example Case Diagram. 646 For the purposes of in-line predictive management, models SHOULD be 647 specified and injected into the system. These models MAY coexist 648 with the current SNMP management model, supplementing the information 649 with predictive values. This is denoted by adding algorithmic model 650 information to the Case Diagram. A '+' sign after the name of an 651 Object Identifier identifies the object as one that can return future 652 values. The model used to predict the future information is written 653 within braces near the Object Identifier and incorporates the names of 654 the SNMP object identifiers on which it depends. This document SUGGESTS using a common 655 syntax for the notation, such as the block constructs of the C 656 Programming Language, Java Programming Language 657 blocks, or the notation used by any number of other languages. 658 Standardization of the model syntax is outside the scope 659 of this document. All functions MUST be defined. Operating system 660 function calls MUST NOT be used. The salient point is that the 661 algorithm must be clearly and concisely defined. The algorithm must 662 also be a faithful representation of the actual predictive model 663 injected into the system. As shown in Figure 6, 'encodingErrors' is 664 predictively enhanced to be 10% of 'inPackets' for future values. 665 The predictive algorithm MUST run on the network node and MUST be 666 immediately available as input for other predictively enhanced 667 objects. The predicted value MUST be available as a response to SNMP 668 queries for future state information, or for transfer to other nodes 669 via virtual messages, explained later in this document. SNMP Objects 670 that are enhanced with predictive capability are assumed to always 671 have the actual monitored value at Wallclock time. 673 ^ Upper Layer 674 | 675 ==+== outPackets 676 | 677 ~ 678 | 679 +==> encodingErrors+ { 0.1 * inPackets } 680 | 681 ~ 682 | 683 ==+== inPackets 684 ^ 685 | Lower Layer 687 Figure 6: A Sample Algorithmic Description. 689 If this were a wireless network, a more realistic algorithmic model 690 would likely incorporate channel quality SNMP Objects into the 691 'encodingErrors' prediction algorithm. In many cases, the 692 algorithmic portion of the Case Diagram will involve SNMP objects 693 from other nodes. The syntax should include the ability to identify 694 general topological information in the description of external 695 objects. For example, 'inPackets[adj]' or 'inPackets[edge]' should 696 indicate immediately adjacent nodes or nodes at the topological edge 697 of the network. 699 In the example shown in Figure 7, a 'packetsForwarded' object has 700 predictive capability denoted by the '+' symbol. The predictive 701 capability comes from an algorithmic model specified within the 702 braces next to the object name. In this case, the prediction will be 703 the value of the 'driverForwarded' object from the node closest to 704 the edge of the network. 706 ^ Upper Layer 707 | 708 ==+== outPackets 709 | 710 ~ 711 | 712 +==> packetsForwarded+ { driverForwarded[edge] } 713 | 714 ~ 715 | 716 ==+== inPackets 717 ^ 718 | Lower Layer 720 Figure 7: An Algorithmic Description Using State Generated from 721 Another Node Described in Figure 8.
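The following non-normative sketch illustrates how the Figure 6 model, 'encodingErrors+ { 0.1 * inPackets }', might be rendered as executable, fine-grained model code injected into a node. The Java class and method names are assumptions of the example only and are not defined by this document.

   // Non-normative illustration of the Figure 6 model
   // encodingErrors+ { 0.1 * inPackets }.
   public final class EncodingErrorsModel {

       // Fraction of inPackets assumed to contain encoding errors,
       // taken directly from the braces notation in Figure 6.
       private static final double ERROR_FRACTION = 0.1;

       // 'inPackets' is the (actual or predicted) value of the
       // inPackets object at the time for which a prediction is
       // requested; the return value is the predicted value of
       // encodingErrors at that same time.
       public long predictEncodingErrors(long inPackets) {
           return Math.round(ERROR_FRACTION * inPackets);
       }
   }

Because the model is expressed as ordinary code, its result is immediately available as input to other predictively enhanced objects on the node, as required above.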
723 In the following figure, which is an SNMP diagram of the edge node, 724 the 'driverForwarded' object is predicted by executing the algorithm 725 in braces. This algorithm predicts 'driverForwarded' packets to be a 726 linear approximation of a sample of 'appPackets'. The sample is 727 'epsilon' time units apart and the prediction is 'delta' time units 728 into the future. 730 ^ Upper Layer 731 | 732 ==+== driverPackets 733 | 734 ~ 735 | 736 +==> driverForwarded+ 737 | { delta * (appPackets(t-epsilon) - appPackets(t))/ epsilon } 738 ~ 739 | 740 ==+== inPackets 741 ^ 742 | Lower Layer 744 Figure 8: A Node Generating State Information Used by the Node in 745 Figure 7. 747 4. Multi-Party Model Interaction 749 Multiple developers and administrators of in-line predictive 750 algorithmic models will require mechanisms to ensure correct 751 understanding and operation of each others' models and intentions. 753 4.1 Model Registration 755 It may be necessary to register predictive models. Registration is 756 often an IANA function [6]. Algorithmic model registration needs to 757 be handled more dynamically than AgentX models. Algorithmic models, 758 while not necessary doing so, have the capability to install/de- 759 install at rapid rates. The in-line model installation and de- 760 installation proposed standard is described in Section 7. 762 4.2 Model Interaction 764 Multiple models residing on a node need to inter-operate with one 765 another. This document proposes to use SNMP Object Identifiers as 766 much as possible for communication of state information among models. 767 In addition, multiple Active Application models may choose to 768 communicate with one another via global state. 770 4.3 Co-existence with Legacy SNMP 772 Querying an IP addressable node for SNMP objects that are 773 predictively enhanced should appear transparent to the person polling 774 the node. Multiple ports, etc.. should not be required. A program 775 injected into a node that serves to extend an SNMP MIB MAY do so 776 using global state. A global state cache holds the SNMP object 777 values and responds via an internal port to connect with a master 778 SNMP agent for the node. 780 5. A Common Predictive Framework 782 This section specifies an algorithmic predictive management 783 framework. The framework allows details of distributed simulation, 784 such as time management, state saving, and model development to be 785 implementation dependent while ensuring in-line inter-operability 786 both with, and within, the network. The general predictive network 787 management architecture MUST contain at least one Driving Processes 788 (DP), MAY contain Logical Processes (LP), and MUST use Virtual 789 Messages (VM). 791 Figure 9 illustrates network nodes containing DPs and LPs. The 792 annotation under nodes AH-1 and AN-1 are an SNMP Object Identifier. 793 SNMP Object Identifier 'oid_1' represents state of node AH-1. The 794 predictively enhanced SNMP Object Identifier, 'oid+' on node AN-1 is 795 a function of 'oid_1'. Note that 'f()' is shown as an arbitrary 796 function in the figure, but MUST be well-defined in practice. 798 +------+ 799 +-----+ | LP |-->... 800 | VM | |(node)| 801 |(msg)| /+------+ 802 +------+ +-----+ +------+/ 803 | DP |------------------->| LP | 804 |(node)| |(node)|---->... 805 | AH-1 | | AN-1 | 806 +------+ +------+ 807 oid_1 oid+ {f(oid_1)} \ 808 \ 809 +------+ 810 | LP |-->... 811 |(node)| 812 +------+ 814 Figure 9: Framework Entity Types. 
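As a non-normative illustration of the entities in Figure 9, the sketch below shows one way a Driving Process might periodically sample a local SNMP object and emit a virtual message carrying a predicted future value, extrapolating forward from recent samples in the spirit of the Figure 8 model. The class, record, and callback names, and the use of simple linear extrapolation from the two most recent samples, are assumptions of the example rather than requirements of this framework.

   // Non-normative Driving Process sketch; all names are illustrative.
   import java.util.function.LongSupplier;

   public final class DrivingProcess {

       // Prediction payload: the object identifier, its predicted
       // value, and the send/receive times of the virtual message.
       public record Prediction(String oid, long value,
                                long sendTime, long receiveTime) { }

       // Delivery hook for virtual messages (implementation specific).
       public interface VirtualMessageSink { void send(Prediction p); }

       private final LongSupplier appPackets;   // samples the local SNMP object
       private final VirtualMessageSink sink;   // delivers virtual messages
       private final long epsilonMs;            // sampling interval (epsilon)
       private final long deltaMs;              // lookahead into the future (delta)
       private long previousSample = -1;

       public DrivingProcess(LongSupplier appPackets, VirtualMessageSink sink,
                             long epsilonMs, long deltaMs) {
           this.appPackets = appPackets;
           this.sink = sink;
           this.epsilonMs = epsilonMs;
           this.deltaMs = deltaMs;
       }

       // Called once every epsilon milliseconds.
       public void step() {
           long now = System.currentTimeMillis();
           long sample = appPackets.getAsLong();
           if (previousSample >= 0) {
               // Linear extrapolation delta time units into the future.
               double rate = (double) (sample - previousSample) / epsilonMs;
               long predicted = Math.round(sample + rate * deltaMs);
               sink.send(new Prediction("driverForwarded", predicted,
                                        now, now + deltaMs));
           }
           previousSample = sample;
       }
   }

The send and receive times attached to each Prediction correspond to the TS and RT fields of the virtual message structure defined later in this section.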
816 The framework makes a distinction between a Physical Process and a 817 Logical Process. A Physical Process is nothing more than an 818 executable task defined by program code i.e. it is the 819 implementation of a particular model or a hardware component or a 820 direct connection to a hardware component representing a device. An 821 example of a Physical Process is the packet forwarding process on a 822 router. Each Physical Process MUST be encapsulated within a Logical 823 Process, labeled LP in Figure 9. A Logical Process consists of a 824 Physical Process, or a model of the Physical Process and additional 825 implementation specific data structures and instructions to maintain 826 message order and correct operation as the system executes ahead of 827 current (or Wallclock) time as illustrated in greater detail in 828 Figure 10. The details of the DP and LP structure and operation are 829 implementation specific, while the inter-operation of the DP/LP 830 system must be specified. The LP architecture is abstracted in 831 Figure 10. The flow of messages through the LP is shown by the 832 arrows entering from the left side of the figure. The in-line 833 predictive framework components are shown in Figure 9, where AH-1 and 834 AN-1 are Active Host 1 and Active Node 1 respectively. In this 835 context, active hosts are nodes that can inject new packets into the 836 network while active nodes are nodes that behave as intermediate hops 837 in a network. 839 The Logical Process MUST handle time management for the model. The 840 Logical Process and the model that it implements MAY be implemented 841 in any manner, however, they must be capable of inter-operating. The 842 framework MUST be capable of supporting both conservative and 843 optimistic time management within the network. Conservative time 844 management REQUIRES that the model block when messages MAY be 845 received out-of-order while optimistic time management MAY allow 846 model processing to continue, even when messages are received out-of- 847 order. However, additional implementation specific mechanisms MAY be 848 used to account for out-of-order messages. Such mechanisms MAY be 849 embedded within the Logical Process and this specification does not 850 attempt to standardize them. 852 Virtual input messages directed to a Logical Process MUST be 853 received by the Logical Process, passed to the model, and processed. 854 Virtual output messages MAY be generated as a result. 856 +-------------------------------------------------------------+ 857 | Active Application | 858 | | 859 | +-----------------------------+ | 860 | | Logical Process | | 861 | | (Time Management) | | 862 | | +-------------+ | | 863 | | | Model | | | 864 | Virtual Input Msgs | | Virtual Output Msgs | 865 ========================>| |==========================> 866 | | | | | | 867 | | +------/\-----+ | | 868 | | || State | | 869 | | +--------\/-------+ | | 870 | | | State Queue | | | 871 | +------| Predicted Values|----+ | 872 | | (Small-state) | | 873 | +--------/\-------+ | 874 +-----------------------------||------------------------------+ 875 \/ 876 SNMP 878 Figure 10: A High-Level View of the Logical Process Framework 879 Component within an Active Application. 
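The following non-normative sketch suggests one way the Logical Process of Figure 10 might be organized: a Receive Queue ordered by receive time, a Local Virtual Time (LVT) that advances as messages are consumed, and hooks where an implementation-specific rollback could be triggered either by an out-of-order message or by an out-of-tolerance prediction. The class structure and method names are assumptions of the example, not part of this specification.

   // Non-normative Logical Process sketch; names are illustrative.
   import java.util.Comparator;
   import java.util.PriorityQueue;

   public final class LogicalProcess {

       // Minimal virtual message view: receive time plus opaque payload.
       public record VirtualInput(long receiveTime, Object payload) { }

       // Receive Queue ordered by receive time, as in Figure 13.
       private final PriorityQueue<VirtualInput> receiveQueue =
           new PriorityQueue<>(Comparator.comparingLong(VirtualInput::receiveTime));

       private long localVirtualTime = 0;   // LVT
       private final long tolerance;        // allowed prediction error

       public LogicalProcess(long tolerance) { this.tolerance = tolerance; }

       public void enqueue(VirtualInput message) { receiveQueue.add(message); }

       // Consume the next message in timestamp order.  A message whose
       // receive time is behind LVT arrived out of order; an optimistic
       // implementation rolls back here, while a conservative one would
       // have blocked earlier to avoid the situation entirely.
       public void processNext() {
           VirtualInput next = receiveQueue.poll();
           if (next == null) return;
           if (next.receiveTime() < localVirtualTime) {
               rollback(next.receiveTime());
               return;
           }
           localVirtualTime = next.receiveTime();
           applyToModel(next);
       }

       // Compare a predicted value against the actual value reported by
       // the Physical Process (e.g. via a local SNMP query) and roll
       // back if the error exceeds the configured tolerance.
       public void checkAccuracy(long predicted, long actual, long atTime) {
           if (Math.abs(predicted - actual) > tolerance) rollback(atTime);
       }

       private void applyToModel(VirtualInput message) { /* model specific */ }
       private void rollback(long toTime) { /* implementation specific */ }
   }

Whether rollback restores state from a State Queue, transmits anti-messages, or simply re-synchronizes the model is left implementation specific, as described in the remainder of this section.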
881 Virtual messages contain the following fields: 883 o Send Time (TS), which MUST contain the LVT (local simulation time) 884 at which the message was sent 886 o Receive Time (RT), which MUST denote the time the message is 887 expected to exist in the future 889 o an (optional) Anti-toggle (A) bit, which MAY be present for out-of-order 890 message handling purposes such as message cancellation and 891 rollback 893 o the message content itself (M), which MUST be present and is model 894 specific 896 Thus, a Virtual Message (VM) MUST have the following structure... 898 0 1 2 3 899 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 900 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 901 | Source Address | 902 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 903 | Destination Address | 904 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 905 | Send-Time (TS) | 906 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 907 | Receive-Time (RT) | 908 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 909 | Real-Time (TR) | 910 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 911 |A| . | 912 +-+ . | 913 | . | 914 ~ Message (M) ~ 915 | . | 916 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 918 Figure 11: An In-line Management Prediction Virtual Message. 920 Virtual messages that contain 921 invalid fields because the transmitting Logical Process used an 922 incompatible time management technique MUST be dropped. However, it 923 is SUGGESTED that a count of such packets be maintained in a general 924 in-line predictive management framework MIB. The Receive Time field 925 MUST be filled with the time that this message is predicted to be 926 valid at the destination Logical Process. The Send Time field MUST 927 be filled with the time that this message was sent by the originating 928 Logical Process. The Anti-Toggle (A) field MUST be used for creating 929 an anti-message to remove the effects of false messages as described 930 later. A message MUST also contain a field for the current Real Time 931 (TR). If a message arrives at a Logical Process out-of-order or with 932 invalid information, that is, out of a pre-specified tolerance for 933 prediction accuracy, it is called a false message. The method for 934 handling false messages is implementation specific. The Receive 935 Queue, shown in Figure 13, maintains newly arriving messages in order 936 by Receive Time (RT). The implementation of the Receive Queue is 937 implementation specific. 939 The Driving and Logical Processes MUST communicate via virtual 940 messages as shown in Figure 12. The Driving Process MAY generate 941 predictions based upon SNMP queries of other layers on the local 942 node. The Logical Process MAY check its prediction accuracy via SNMP 943 queries of other layers on its local node. 945 +------------------+/--+ +------------------+/--+ 946 | DP |\-|| SNMP | LP |\-|| SNMP 947 +------------------+ || +------------------+ || 948 | Virtual Messages | || | Virtual Messages | || 949 +------------------+ || +------------------+ || 950 | ANEP |__|| | ANEP |__|| 951 +------------------+--|| +------------------+--|| 952 | IP |__|| | IP |__|| 953 +------------------+---+ +------------------+---+ 954 Driving Process Logical Process 956 Figure 12: Facility for Checking Accuracy with Actual Network SNMP 957 Objects in the In-line Predictive Management Framework.
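As a non-normative illustration of the Figure 11 layout, the following sketch encodes a virtual message as five 32-bit header words (source, destination, TS, RT, TR) followed by the Anti-toggle bit and the opaque message contents. The use of a Java ByteBuffer, and the packing of the Anti-toggle bit into the high-order bit of the octet that precedes the message body, are assumptions of the example; this document does not mandate a particular encoding library or bit packing.

   // Non-normative encoding sketch for the Figure 11 virtual message.
   import java.nio.ByteBuffer;

   public record VirtualMessage(int sourceAddress, int destinationAddress,
                                int sendTime,       // TS
                                int receiveTime,    // RT
                                int realTime,       // TR
                                boolean antiToggle, // A
                                byte[] message) {   // M (model specific)

       public byte[] encode() {
           ByteBuffer buf = ByteBuffer.allocate(5 * 4 + 1 + message.length);
           buf.putInt(sourceAddress);
           buf.putInt(destinationAddress);
           buf.putInt(sendTime);
           buf.putInt(receiveTime);
           buf.putInt(realTime);
           // Anti-toggle bit carried in the high-order bit of the octet
           // preceding the opaque message contents (an assumption of
           // this example).
           buf.put((byte) (antiToggle ? 0x80 : 0x00));
           buf.put(message);
           return buf.array();
       }

       // An anti-message is an exact duplicate of the original virtual
       // message with the Anti-toggle bit set.
       public VirtualMessage toAntiMessage() {
           return new VirtualMessage(sourceAddress, destinationAddress,
                                     sendTime, receiveTime, realTime,
                                     true, message);
       }
   }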
959 The in-line predictive framework MAY allow for prediction refinement 960 and correction by communicating with the actual component whose state 961 is to be predicted via an SNMP query. The asynchronous prediction 962 mechanism has the following architecture for a Logical Process... 964 +-------------------------------------------------------+ 965 | Logical Process | 966 | | 967 | State Queue (MIB) | 968 | +-+ | 969 | | | | 970 | +-+ | 971 | | | 972 | Virtual Message Route +-+ | 973 ========> ]O =============>| |=========> ]O ===============> 974 | Receive Queue +-+ Send Queue | 975 | Model | 976 +--------------------------/\---------------------------+ 977 || SNMP Object Id (oid) 978 || 979 +-------------------------------------------------------+ 980 | Actual Component Whose State is to be Predicted | 981 +-------------------------------------------------------+ 983 Figure 13: A Logical Process Implementation and Interface. 985 All of the Logical Process queues and caches MAY reside in an active 986 node's Small-State. Small-State is a persistent memory cache left 987 behind by an active packet that is available to trailing active 988 packets that have the proper access rights. Typically, any type of 989 information can be stored in Small-State. 991 The Receive Queue MAY maintain virtual message ordering and 992 scheduling. All virtual messages MUST be encapsulated inside Active 993 Packets following the Active Network Encapsulation Protocol [7] 994 format. Once a virtual message leaves the Receive Queue, the virtual 995 time of the Logical Process, known as Local Virtual Time, MUST be 996 updated to the value of the Receive Time from the departing virtual 997 message. Virtual messages MUST originate from Driving Processes, 998 shown in Figure 9, that predict future events and inject them into the 999 system as virtual messages. The development of a Driving Process and 1000 Logical Process is dependent upon the model used to enhance the 1001 desired state of the system with predictive capability. Logical 1002 Processes MUST only operate upon the arrival of virtual input 1003 messages and MUST NEVER spontaneously generate virtual messages. 1005 Following the arrows across Figure 13, virtual messages enter the Receive Queue and are passed to 1006 the model of the Physical Process. The state of the Logical Process is 1007 periodically saved in the State Queue (SQ) shown 1008 in Figure 13. State Queue values are used to restore the Logical 1009 Process to a known safe state when false messages are received. 1010 State values are continuously compared with actual values from the 1011 Physical Process to check for prediction accuracy, which in the case 1012 of load prediction is the number and arrival times of predicted and 1013 actual packets received. If the prediction error exceeds a specified 1014 tolerance, a rollback MAY occur. 1016 An important part of the architecture for network management is the 1017 fact that the State Queue within the in-line management prediction 1018 architecture is the node's Management Information Base. The State 1019 Queue values are the SNMP Management Information Base Object values; 1020 but unlike legacy SNMP values, these values are expected to occur in 1021 the future. The State Queue operation is implementation dependent, 1022 however, it holds the predicted SNMP Objects, is SUGGESTED to be 1023 implemented in small-state, and MUST use the interface specified in 1024 Section 7.2 to respond to SNMP queries.
The current version of SNMP 1025 has no mechanism to indicate that a managed object is reporting its 1026 future state; currently all results are reported with a timestamp 1027 that contains the current time. In in-line predictive 1028 network management there is a need for managed entities to 1029 report their state information at times in the future. These times 1030 are unknown to the requester. A simple means to request and respond 1031 with future time information is to append the future time to all 1032 Management Information Base Object Identifiers that are predicted. 1033 This requires making these objects members of a Management 1034 Information Base table indexed by predicted time as discussed in 1035 Section 2. This can be seen in the loadPredictionTable shown in 1036 Figure 3. Thus a Simple Network Management Protocol client that does 1037 not know the exact time of the next predicted value can issue a get- 1038 next command appending the current time to the known object 1039 identifier. The managed object responds with the requested object 1040 valid at the closest future time. Figure 4 illustrates the output of such 1041 a query. 1043 Future times are the LVT of the Logical Process running on a 1044 particular node. As Wallclock approaches a particular future time, 1045 predicted values MAY be adjusted, allowing the prediction to become 1046 more accurate. The table of future values MAY be maintained within a 1047 sliding Lookahead window, so that old values are removed and the 1048 prediction does not exceed a given future time. Continuing along the 1049 arrows in Figure 13, any virtual messages that are generated as a 1050 result of the Physical Process or model computation proceed to the 1051 Send Queue (QS). 1053 The Send Queue is implementation dependent, however, it MAY maintain 1054 copies of virtual messages to be transmitted in order of their send 1055 times. The Send Queue is required for the generation of anti- 1056 messages during rollback. Anti-Messages annihilate corresponding 1057 virtual messages when they meet to correct for previously sent false 1058 messages. Annihilation is simply the removal of both the actual and 1059 the anti-message. Where the annihilation occurs is implementation 1060 specific and left to the implementor. After leaving the Send Queue, 1061 virtual messages travel to their destination Logical Process. 1062 Further details on the optimistic synchronization mechanism are 1063 implementation dependent and outside the scope of this work in 1064 progress. 1066 6. Summary of In-line Prediction Requirements 1068 An in-line management prediction model developer MUST implement at 1069 least one Driving Process and MAY implement a Logical Process 1070 using the same time management technique. The model developer MAY 1071 include an SNMP client within the model to query the modeled 1072 component in order to improve prediction accuracy. The model 1073 developer's Driving Process MUST generate virtual messages. The 1074 Logical Process MUST receive and process those messages. The Logical 1075 Process MAY respond to virtual messages by generating virtual 1076 message(s). The Logical Process MAY use active network node Small- 1077 state to hold a time series of the SNMP Object Id whose value is 1078 being continuously predicted. The interface to the SNMP MIB small- 1079 state is specified in the following section. 1081 7.
Details of the Active Network Interface 1083 The general active network architectural framework, without any 1084 specific network management paradigm implementation, is shown in 1085 Figure 14. 1087 Active Applications +----+ +----+ +----+ +----+ 1088 |AA 1| |AA 2| |AA 3| |AA 4| 1089 +----+ +----+ +----+ +----+ 1090 EE-specific ____________ ____________ 1091 Programming i/f's 1092 +----------+ +----------+ 1093 Execution | | | | 1094 Environments | EE 1 | | EE 2 | 1095 | | | | 1096 +----------+ +----------+ 1097 NodeOS i/f ========================== 1099 Low-level channels, threads, 1100 Abstractions state storage, ... 1102 Figure 14: The Active Network Framework. 1104 In-line network management prediction requires a general active 1105 network framework that supports active applications to be injected 1106 into the proper execution environments. The in-line management 1107 prediction framework enforces certain minimal requirements on the 1108 execution environment, which are listed below. 1110 7.1 Information Caches 1112 The execution environment MUST provide an information cache called 1113 'Small State' as defined in Section 1.3 to enable information 1114 exchange between active packets, defined in Section 1.3. The 1115 execution environment MAY also provide an information cache called 1116 'Global State', defined in Section 1.3, to enable the in-line 1117 management prediction framework to communicate with a predictively 1118 managed active application to query its current state. The EE MUST 1119 provide an API to be able to store and query both 'Small State' and 1120 also to 'Global State', if it is implemented. The EE SHOULD provide 1121 appropriate access control mechanisms to both 'Small State' and also 1122 to 'Global State', if it is implemented. 1124 7.2 Interface to SNMP 1126 The execution environment MUST provide an interface that enables both 1127 the in-line management prediction values and the values of the actual 1128 component being managed to publish their state to an SNMP MIB. This 1129 enables the in-line management prediction framework to store the 1130 predicted state in a well-known format and also enables legacy SNMP 1131 tools to query the predicted state using SNMP operations. 1132 Additionally, the managed application is also able to update its 1133 current state using SNMP, which the Logical Process will be able to 1134 query. In a particular implementation of such an interface, a 1135 generic SNMP agent coded as an active application MAY be injected 1136 into the active nodes. The agent creates a 'Global State' on the 1137 active node with a well-known name. The agent reads information 1138 coded in a known format that has been written to the 'Global State' 1139 and publishes it to the MIB. Any active application that wishes to 1140 advertise its state uses an interface that enables it to store its 1141 information in the well-known 'Global State' in the given format. 1143 The format of the messages that are posted between the SNMP agent 1144 and an active application are shown below, 1146 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 1147 | Message Type | Object ID | 1148 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 1149 | . | 1150 ~ Value ~ 1151 | . | 1152 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 1154 Figure 15: Message Packet. 1156 The SNMP Agent and the active application MAY use special interfaces 1157 to implement messaging between them. 
A Message Packet, whose format 1158 is shown in Figure 15, is the basic unit of inter-application 1159 communication. Each message consists of a message type. The type 1160 SHOULD assume one of the following values: 1162 o MSG_ADDINT: to add a new MIB Object of type SNMP INTEGER 1164 o MSG_UPDATEINT: to update the value of an MIB Object of type SNMP 1165 INTEGER 1167 o MSG_GETINT: to get the value of an MIB Object of type SNMP INTEGER 1169 o MSG_ADDLONG: to add a new MIB Object of type SNMP LONG 1171 o MSG_UPDATELONG: to update the value of an MIB Object of type SNMP 1172 LONG 1174 o MSG_GETLONG: to get the value of an MIB Object of type SNMP LONG 1175 o MSG_ADDSTRING: to add a new MIB Object of type SNMP STRING 1177 o MSG_UPDATESTRING: to update the value of an MIB Object of type 1178 SNMP STRING 1180 o MSG_GETSTRING: to get the value of an MIB Object of type SNMP 1181 STRING 1183 The active application SHOULD send a message of the valid message 1184 type to the SNMP agent to perform the required operation. On receipt 1185 of a message, the SNMP agent SHOULD attempt to perform the requested 1186 operation. It MUST then respond with an acknowledgment message in a 1187 format shown in Figure 16. 1189 The acknowledgment message has the following format. 1191 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 1192 | Status Code | 1193 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 1194 | . | 1195 ~ Status Message ~ 1196 | . | 1197 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 1199 Figure 16: Acknowledgment Message Packet. 1201 The status code MUST have one of the following values: 1203 o OK: to indicate successful operation 1205 o ERR_DUPENTRY: if for a MSG_ADD operation, an Object identifier of 1206 given name already exists 1208 o ERR_NOSUCHID: if for a MSG_UPDATE operation, an Object identifier 1209 of given name does not exist. 1211 The Status message MAY be any descriptive string explaining the 1212 nature of the failure or SHOULD be "Success" for a successful 1213 operation. 1215 8. Implementation 1217 Models injected into the network allow network state to be predicted 1218 and efficiently propagated throughout the active network enabling the 1219 network to operate simultaneously in real time as well as project the 1220 future state of the network. Network state information, such as 1221 load, capacity, security, mobility, faults, and other state 1222 information with supporting models, is automatically available for 1223 use by the management system with current values and with values 1224 expected to exist in the future. In the current version, sample load 1225 and processor usage prediction applications have been experimentally 1226 validated using the Atropos Toolkit [11]. The toolkit's distributed 1227 simulation infrastructure takes advantage of parallel processing 1228 within the network, because computation occurs concurrently at all 1229 participating active nodes. The network being emulated can be 1230 queried in real time to verify the prediction accuracy. Measures 1231 such as rollbacks are taken to keep the simulation in line with 1232 actual performance. 1234 8.1 Predictive In-line Management Information Base 1236 Further details on the in-line network management prediction concept 1237 can be found in Active Networks and Active Network Management [1]. 1238 The SNMP MIB for the in-line predictive management system described 1239 in this proposed standard follows in the next section. 
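Before the MIB module itself, the following non-normative sketch illustrates the behavior described in Sections 2 and 5: predicted values held in a time-indexed State Queue, a GET-NEXT style lookup that returns the first prediction valid at or after a requested time, and pruning of entries that fall behind Wallclock or beyond the Lookahead window. The class name, method names, and the choice of a TreeMap are assumptions of the example, not requirements of the Atropos implementation or MIB.

   // Non-normative State Queue sketch: predicted values indexed by the
   // time at which they are expected to be valid.
   import java.util.Map;
   import java.util.TreeMap;

   public final class PredictedValueCache {

       // predicted-time (ms) -> predicted value, mirroring a table such
       // as loadPredictionTable indexed by predicted time.
       private final TreeMap<Long, Long> stateQueue = new TreeMap<>();

       public void put(long predictedTime, long predictedValue) {
           stateQueue.put(predictedTime, predictedValue);
       }

       // GET-NEXT style lookup: the first prediction valid at or after
       // the requested time, or null if none exists (analogous to
       // walking off the end of the MIB table).
       public Map.Entry<Long, Long> getNext(long requestedTime) {
           return stateQueue.ceilingEntry(requestedTime);
       }

       // Drop predictions that are now in the past, along with any that
       // lie beyond the sliding Lookahead window.
       public void prune(long wallclock, long lookaheadMs) {
           stateQueue.headMap(wallclock, false).clear();
           stateQueue.tailMap(wallclock + lookaheadMs, false).clear();
       }
   }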
1241 ATROPOS-MIB DEFINITIONS ::= BEGIN 1243 IMPORTS 1244 MODULE-IDENTITY, OBJECT-TYPE, experimental, 1245 Counter32, TimeTicks 1246 FROM SNMPv2-SMI 1247 DisplayString 1248 FROM SNMPv2-TC; 1250 atroposMIB MODULE-IDENTITY 1251 LAST-UPDATED "9801010000Z" 1252 ORGANIZATION "GE CRD" 1253 CONTACT-INFO 1254 "Stephen F. Bush bushsf@crd.ge.com" 1255 DESCRIPTION 1256 "Experimental MIB module for the Active Virtual Network 1257 Management Prediction (Atropos) system." 1258 ::= { experimental active(75) 4 } 1260 -- 1261 -- Logical Process Table 1262 -- 1263 lP OBJECT IDENTIFIER ::= { atroposMIB 1 } 1265 lPTable OBJECT-TYPE 1266 SYNTAX SEQUENCE OF LPEntry 1267 MAX-ACCESS not-accessible 1268 STATUS current 1269 DESCRIPTION 1270 "Table of Atropos LP information." 1271 ::= { lP 1 } 1273 lPEntry OBJECT-TYPE 1274 SYNTAX LPEntry 1275 MAX-ACCESS not-accessible 1276 STATUS current 1277 DESCRIPTION 1278 "An entry (conceptual row) of Atropos LP information." 1279 INDEX { lPIndex } 1280 ::= { lPTable 1 } 1282 LPEntry ::= SEQUENCE { 1283 lPIndex INTEGER, 1284 lPID DisplayString, 1285 lPLVT INTEGER, 1286 lPQRSize INTEGER, 1287 lPQSSize INTEGER, 1288 lPCausalityRollbacks INTEGER, 1289 lPToleranceRollbacks INTEGER, 1290 lPSQSize INTEGER, 1291 lPTolerance INTEGER, 1292 lPGVT INTEGER, 1293 lPLookAhead INTEGER, 1294 lPGvtUpdate INTEGER, 1295 lPStepSize INTEGER, 1296 lPReal INTEGER, 1297 lPVirtual INTEGER, 1298 lPNumPkts INTEGER, 1299 lPNumAnti INTEGER, 1300 lPPredAcc DisplayString, 1301 lPPropX DisplayString, 1302 lPPropY DisplayString, 1303 lPETask DisplayString, 1304 lPETrb DisplayString, 1305 lPVmRate DisplayString, 1306 lPReRate DisplayString, 1307 lPSpeedup DisplayString, 1308 lPLookahead DisplayString, 1309 lPNumNoState INTEGER, 1310 lPStatePred DisplayString, 1311 lPPktPred DisplayString, 1312 lPTdiff DisplayString, 1313 lPStateError DisplayString, 1314 lPUptime INTEGER 1315 } 1317 lPIndex OBJECT-TYPE 1318 SYNTAX INTEGER (0..2147483647) 1319 MAX-ACCESS not-accessible 1320 STATUS current 1321 DESCRIPTION 1322 "The LP table index." 1323 ::= { lPEntry 1 } 1325 lPID OBJECT-TYPE 1326 SYNTAX DisplayString 1327 MAX-ACCESS read-only 1328 STATUS current 1329 DESCRIPTION 1330 "The LP identifier." 1331 ::= { lPEntry 2 } 1333 lPLVT OBJECT-TYPE 1334 SYNTAX INTEGER (0..2147483647) 1335 MAX-ACCESS read-only 1336 STATUS current 1337 DESCRIPTION 1338 "This is the LP Local Virtual Time." 1339 ::= { lPEntry 3 } 1341 lPQRSize OBJECT-TYPE 1342 SYNTAX INTEGER (0..2147483647) 1343 MAX-ACCESS read-only 1344 STATUS current 1345 DESCRIPTION 1346 "This is the LP Receive Queue Size." 1347 ::= { lPEntry 4 } 1349 lPQSSize OBJECT-TYPE 1350 SYNTAX INTEGER (0..2147483647) 1351 MAX-ACCESS read-only 1352 STATUS current 1353 DESCRIPTION 1354 "This is the LP send queue size." 1355 ::= { lPEntry 5 } 1357 lPCausalityRollbacks OBJECT-TYPE 1358 SYNTAX INTEGER (0..2147483647) 1359 MAX-ACCESS read-only 1360 STATUS current 1361 DESCRIPTION 1362 "This is the number of rollbacks this LP has suffered due to out-of-order (causality-violating) messages." 1363 ::= { lPEntry 6 } 1365 lPToleranceRollbacks OBJECT-TYPE 1366 SYNTAX INTEGER (0..2147483647) 1367 MAX-ACCESS read-only 1368 STATUS current 1369 DESCRIPTION 1370 "This is the number of rollbacks this LP has suffered because the prediction error exceeded the configured tolerance." 1371 ::= { lPEntry 7 } 1373 lPSQSize OBJECT-TYPE 1374 SYNTAX INTEGER (0..2147483647) 1375 MAX-ACCESS read-only 1376 STATUS current 1377 DESCRIPTION 1378 "This is the LP state queue size."
1379 ::= { lPEntry 8 } 1381 lPTolerance OBJECT-TYPE 1382 SYNTAX INTEGER (0..2147483647) 1383 MAX-ACCESS read-only 1384 STATUS current 1385 DESCRIPTION 1386 "This is the allowable deviation between process's 1387 predicted state and the actual state." 1388 ::= { lPEntry 9 } 1390 lPGVT OBJECT-TYPE 1391 SYNTAX INTEGER (0..2147483647) 1392 MAX-ACCESS read-only 1393 STATUS current 1394 DESCRIPTION 1395 "This is this system's notion of Global Virtual Time." 1396 ::= { lPEntry 10 } 1398 lPLookAhead OBJECT-TYPE 1399 SYNTAX INTEGER (0..2147483647) 1400 MAX-ACCESS read-only 1401 STATUS current 1402 DESCRIPTION 1403 "This is this system's maximum time into which it can 1404 predict." 1405 ::= { lPEntry 11 } 1406 lPGvtUpdate OBJECT-TYPE 1407 SYNTAX INTEGER (0..2147483647) 1408 MAX-ACCESS read-only 1409 STATUS current 1410 DESCRIPTION 1411 "This is the GVT update rate." 1412 ::= { lPEntry 12 } 1414 lPStepSize OBJECT-TYPE 1415 SYNTAX INTEGER (0..2147483647) 1416 MAX-ACCESS read-only 1417 STATUS current 1418 DESCRIPTION 1419 "This is the lookahead (Delta) in milliseconds for each 1420 virtual message as generated from the driving process." 1421 ::= { lPEntry 13 } 1423 lPReal OBJECT-TYPE 1424 SYNTAX INTEGER (0..2147483647) 1425 MAX-ACCESS read-only 1426 STATUS current 1427 DESCRIPTION 1428 "This is the total number of real messages received." 1429 ::= { lPEntry 14 } 1431 lPVirtual OBJECT-TYPE 1432 SYNTAX INTEGER (0..2147483647) 1433 MAX-ACCESS read-only 1434 STATUS current 1435 DESCRIPTION 1436 "This is the total number of virtual messages 1437 received." 1438 ::= { lPEntry 15 } 1440 lPNumPkts OBJECT-TYPE 1441 SYNTAX INTEGER (0..2147483647) 1442 MAX-ACCESS read-only 1443 STATUS current 1444 DESCRIPTION 1445 "This is the total number of all Atropos packets 1446 received." 1447 ::= { lPEntry 16 } 1449 lPNumAnti OBJECT-TYPE 1450 SYNTAX INTEGER (0..2147483647) 1451 MAX-ACCESS read-only 1452 STATUS current 1453 DESCRIPTION 1454 "This is the total number of Anti-Messages transmitted 1455 by this Logical Process." 1456 ::= { lPEntry 17 } 1458 lPPredAcc OBJECT-TYPE 1459 SYNTAX DisplayString 1460 MAX-ACCESS read-only 1461 STATUS current 1462 DESCRIPTION 1463 "This is the prediction accuracy based upon time 1464 weighted average of the difference between predicted and real 1465 values." 1466 ::= { lPEntry 18 } 1468 lPPropX OBJECT-TYPE 1469 SYNTAX DisplayString 1470 MAX-ACCESS read-only 1471 STATUS current 1472 DESCRIPTION 1473 "This is the proportion of out-of-order messages 1474 received at this Logical Process." 1475 ::= { lPEntry 19 } 1477 lPPropY OBJECT-TYPE 1478 SYNTAX DisplayString 1479 MAX-ACCESS read-only 1480 STATUS current 1481 DESCRIPTION 1482 "This is the proportion of out-of-tolerance messages 1483 received at this Logical Process." 1484 ::= { lPEntry 20 } 1486 lPETask OBJECT-TYPE 1487 SYNTAX DisplayString 1488 MAX-ACCESS read-only 1489 STATUS current 1490 DESCRIPTION 1491 "This is the expected task execution wallclock time for this 1492 Logical Process." 1493 ::= { lPEntry 21 } 1495 lPETrb OBJECT-TYPE 1496 SYNTAX DisplayString 1497 MAX-ACCESS read-only 1498 STATUS current 1499 DESCRIPTION 1500 "This is the expected wallclock time spent performing a 1501 rollback for this Logical Process." 1502 ::= { lPEntry 22 } 1504 lPVmRate OBJECT-TYPE 1505 SYNTAX DisplayString 1506 MAX-ACCESS read-only 1507 STATUS current 1508 DESCRIPTION 1509 "This is the rate at which virtual messages were 1510 processed by this Logical Process." 
1511 ::= { lPEntry 23 } 1513 lPReRate OBJECT-TYPE 1514 SYNTAX DisplayString 1515 MAX-ACCESS read-only 1516 STATUS current 1517 DESCRIPTION 1518 "This is the time until next virtual message." 1519 ::= { lPEntry 24 } 1521 lPSpeedup OBJECT-TYPE 1522 SYNTAX DisplayString 1523 MAX-ACCESS read-only 1524 STATUS current 1525 DESCRIPTION 1526 "This is the speedup, ratio of virtual time to wallclock time, 1527 of this logical process." 1528 ::= { lPEntry 25 } 1530 lPLookahead OBJECT-TYPE 1531 SYNTAX DisplayString 1532 MAX-ACCESS read-only 1533 STATUS current 1534 DESCRIPTION 1535 "This is the expected lookahead in milliseconds of this 1536 Logical Process." 1537 ::= { lPEntry 26 } 1539 lPNumNoState OBJECT-TYPE 1540 SYNTAX INTEGER (0..2147483647) 1541 MAX-ACCESS read-only 1542 STATUS current 1543 DESCRIPTION 1544 "This is the number of times there was no valid state to 1545 restore when needed by a rollback or when required to check 1546 prediction accuracy." 1547 ::= { lPEntry 27 } 1549 lPStatePred OBJECT-TYPE 1550 SYNTAX DisplayString 1551 MAX-ACCESS read-only 1552 STATUS current 1553 DESCRIPTION 1554 "This is the cached value of the state at the nearest 1555 time to the current time." 1556 ::= { lPEntry 28 } 1558 lPPktPred OBJECT-TYPE 1559 SYNTAX DisplayString 1560 MAX-ACCESS read-only 1561 STATUS current 1562 DESCRIPTION 1563 "This is the predicted value in a virtual message." 1564 ::= { lPEntry 29 } 1566 lPTdiff OBJECT-TYPE 1567 SYNTAX DisplayString 1568 MAX-ACCESS read-only 1569 STATUS current 1570 DESCRIPTION 1571 "This is the time difference between a predicted and an 1572 actual value." 1573 ::= { lPEntry 30 } 1575 lPStateError OBJECT-TYPE 1576 SYNTAX DisplayString 1577 MAX-ACCESS read-only 1578 STATUS current 1579 DESCRIPTION 1580 "This is the difference between the contents of an application 1581 value and the state value as seen within the virtual message." 1582 ::= { lPEntry 31 } 1584 lPUptime OBJECT-TYPE 1585 SYNTAX INTEGER (0..2147483647) 1586 --SYNTAX DisplayString 1587 MAX-ACCESS read-only 1588 STATUS current 1589 DESCRIPTION 1590 "This is the time in milliseconds that Atropos has been 1591 running on this node." 1592 ::= { lPEntry 32 } 1594 END 1596 Figure 17: The Atropos MIB. 1598 9. Security Considerations 1600 Clearly, the power and flexibility to increase performance via the 1601 ability to inject algorithmic information also has security 1602 implications. Fundamental active network framework security 1603 implications will be discussed in [10]. 1605 References 1607 [1] Bush, S. and A. Kulkarni, "Active Networks and Active Network 1608 Management (ISBN 0-306-46560-4)", March 2001. 1610 [2] Case, J., Mundy, R., Partain, D. and B. Stewart, "Introduction 1611 to Version 3 of the Internet-standard Network Management 1612 Framework", RFC 2570, April 1999. 1614 [3] Wijnen, B., Harrington, D. and R. Presuhn, "An Architecture for 1615 Describing SNMP Management Frameworks", RFC 2571, May 1999. 1617 [4] Case, J., McCloghrie, K., Rose, M. and S. Waldbusser, 1618 "Management Information Base for version 2 of the Simple 1619 Network Management Protocol (SNMPv2)", RFC 1450, April 1993. 1621 [5] Daniele, M., Wijnen, B., Ellison, M. and D. Francisco, "Agent 1622 Extensibility (AgentX) Protocol Version 1", RFC 2741, January 1623 2000. 1625 [6] Narten, T. and H. Alvestrand, "Guidelines for Writing an IANA 1626 Considerations Section in RFCs", BCP 26, RFC 2434, October 1627 1998. 
1629 [7] University of Pennsylvania, USC/Information Sciences Institute, 1630 University of Pennsylvania, BBN Technologies, University of 1631 Pennsylvania, University of Kansas and MIT, "Active Networks 1632 Encapsulation Protocol", July 1997. 1634 [8] MIT and MIT, "The Active IP Option", September 1996. 1636 [9] IETF, "Proposed IEEE Standard for Application Programming 1637 Interfaces for Networks", October 2000. 1639 [10] Princeton University, "Active Network Framework", July 2002. 1641 [11] 1643 Authors' Addresses 1645 Stephen F. Bush 1646 GE Global Research Center 1647 1 Research Circle 1648 Niskayuna, NY 12309 1649 US 1651 Phone: +1 518 387 6827 1652 EMail: bushsf@crd.ge.com 1653 URI: http://www.crd.ge.com/~bushsf/ 1655 Amit B. Kulkarni 1656 GE Global Research Center 1657 1 Research Circle 1658 Niskayuna, NY 12309 1659 US 1661 Phone: +1 518 387 4291 1662 EMail: kulkarni@crd.ge.com 1664 Nathan J. Smith 1665 GE Global Research Center 1666 1 Research Circle 1667 Niskayuna, NY 12309 1668 US 1670 Phone: +1 518 387 6285 1671 EMail: smithna@crd.ge.com 1673 Index 1675 A 1676 AA 5, 5, 7 1677 Active IP 2 1678 Active Network 8 1679 Active Networking 2, 2 1680 Active Packet 8, 26 1681 AgentX 4 1682 Algorithmic Management 11 1683 Algorithmic Information 1 1684 Algorithmic Information 7 1685 Algorithmic 1686 Change 2 1687 Information 2 1688 Anti-Message 8, 25 1689 Anti-Toggle 7 1690 Atropos Toolkit 33 1692 C 1693 Case Diagram 16 1694 Common Predictive Framework 21 1695 Conformance 11 1697 D 1698 DARPA Active Network Program 2 1699 Descriptions 1700 Algorithmic 16 1701 Driving Process 21 1702 Driving Process 8 1704 E 1705 EE 5, 8 1706 Executable Code 2 1708 F 1709 Fine-Grained Models 6 1711 G 1712 Global-State 7 1714 I 1715 In-line Predictive Network Management 4 1716 In-line Management Code 1 1717 In-line 6 1719 L 1720 Legacy SNMP 20 1721 Local Virtual Time 9 1722 Logical Process 21 1723 Logical Process 8, 21 1724 Lookahead 8, 27 1726 M 1727 Management Algorithms 2 1728 Model Interaction 20 1729 Model Registration 20 1730 Multi-Party In-line Predictive Management Model 7 1731 Multi-Party Model Interaction 20 1733 N 1734 NodeOS 5, 9 1735 Non-Algorithmic Information 7 1737 P 1738 Physical Process 9, 21 1739 Predictive Network Management 6 1740 Predictive Network Management 2 1741 Programmable Networking 2 1742 Programmable Networking 2 1744 R 1745 Real Time 10 1746 Receive Queue 10, 26 1747 Receive Time 10 1748 Rollback 9, 27 1750 S 1751 Send Queue 9, 27, 27 1752 Send Time 10 1753 Simplicity 11 1754 Small-State 7, 26 1755 State Queue 10, 27 1757 T 1758 Tolerance 10 1760 V 1761 Virtual Message 9, 10, 17, 21 1763 W 1764 Wallclock 10 1766 Full Copyright Statement 1768 Copyright (C) The Internet Society (2002). All Rights Reserved. 1770 This document and translations of it may be copied and furnished to 1771 others, and derivative works that comment on or otherwise explain it 1772 or assist in its implementation may be prepared, copied, published 1773 and distributed, in whole or in part, without restriction of any 1774 kind, provided that the above copyright notice and this paragraph are 1775 included on all such copies and derivative works. 
However, this 1776 document itself may not be modified in any way, such as by removing 1777 the copyright notice or references to the Internet Society or other 1778 Internet organizations, except as needed for the purpose of 1779 developing Internet standards in which case the procedures for 1780 copyrights defined in the Internet Standards process must be 1781 followed, or as required to translate it into languages other than 1782 English. 1784 The limited permissions granted above are perpetual and will not be 1785 revoked by the Internet Society or its successors or assigns. 1787 This document and the information contained herein is provided on an 1788 "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING 1789 TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING 1790 BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION 1791 HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF 1792 MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 1794 Acknowledgement 1796 Funding for the RFC Editor function is currently provided by the 1797 Internet Society.