ICN Research Group                                          R. Ravindran
Internet-Draft                                            A. Chakraborti
Intended status: Informational                                   S. Amin
Expires: January 17, 2018                            Huawei Technologies
                                                                 J. Chen
                                               Winlab, Rutgers University
                                                           July 16, 2017

                    Support for Notifications in CCN
                  draft-ravi-icnrg-ccn-notification-01

Abstract

   This draft proposes a new packet primitive called Notification for
   CCN.  Notification is a PUSH primitive and can be unicast or
   multicast to multiple listening points.  Notifications do not expect
   a Content Object response and hence require only FIB state in the
   CCN forwarder.  Emulating Notification as a PULL has performance and
   routing implications.  The draft first discusses the design choices
   associated with using the current Interest/Data abstraction for
   achieving push and the challenges associated with them.  We follow
   this by proposing a new fixed header primitive called Notification
   and a CCN message encoding using the Content Object primitive to
   transport Notifications.  This discussion is presented in the
   context of the CCNx1.0 proposal.
   The draft also provides discussions on various aspects related to
   notifications, such as flow and congestion control, routing and
   reliability considerations, and use case scenarios.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 17, 2018.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Notification Requirements in CCN
   3.  Using Interest/Data Abstraction for PUSH
   4.  Proposed Notification Primitive in CCN
   5.  Notification Message Encoding
   6.  Notification Processing
   7.  Security Considerations
   8.  Annex
     8.1.  Flow and Congestion Control
       8.1.1.  Issues with Basic Notifications
       8.1.2.  Flow and Congestion Control Mechanisms
         8.1.2.1.  End-to-End Approaches
         8.1.2.2.  Hybrid Approaches
       8.1.3.  Receiver Reliability
     8.2.  Routing Notifications
     8.3.  Notification Reliability
     8.4.  Use Case Scenarios
       8.4.1.  Realizing a PUB/SUB System
   9.  Informative References
   Authors' Addresses

1.  Introduction

   Notification is a PUSH primitive used in the Internet today by many
   IoT and social applications.  The nature of notifications varies
   with the application scenario, ranging from mission critical to best
   effort.  Notifications can be unicast or multicast depending on
   whether the notification service is aware of all the consumers or
   not.  A notification service is preceded by a consumer subscribing
   to a specific event, such as a subscription to hash-tag feeds, a
   health emergency notification service, or temperature sensor
   readings from a room in a building; following this subscription,
   the service pushes notifications to consuming entities.  It has to
   be noted that certain IoT applications expect an end-to-end
   notification latency of a few milliseconds [2].  Industrial IoT
   applications have even more stringent requirements in terms of QoS,
   timeliness, and reliability of message delivery.
   Though we term it a Notification, this primitive can also be used
   for transactional exchange between two points.

   CCN optimizes networking around efficiently distributing already
   published content, which consumers learn about through mechanisms
   like manifests containing the names of published content chunks and
   their locations.  Applications relying on notifications require
   event-driven data to be pushed from multiple producers to multiple
   subscribers, for which the current Interest/Data primitive is
   inefficient.  This draft proposes to extend CCN's current primitive
   set with a new notification primitive that can be processed in a
   new way by the CCN forwarder to serve notification objectives.
   Notification here implies a PUSH semantic that is available with IP
   today and supported by other FIA architectures like MobilityFirst
   [3] and XIA [4].

2.  Notification Requirements in CCN

   General notification requirements and features have been discussed
   in protocols such as CoAP's Observe proposal [5], which pushes
   notifications from the server to the clients.  Here we discuss
   basic notification requirements from CCN's network layer
   perspective.  Other requirements related to reliability, low
   latency, and flow control can be engineered by the application or
   through more network layer state once the following requirements
   are met.

   o  Supporting PUSH Intent: CCN should provide efficient and scalable
      support for PUSH, where the application's intent is to PUSH
      content to listening applications without expecting any data in
      return.  Efficiency relates to minimizing control and forwarding
      overhead, and scalability refers to supporting an arbitrary
      number of producers and consumers participating in a general
      pub/sub or multicast service.

   o  Multicast Support: The CCN network should be able to handle
      multicast notifications from a producer to multiple consumers.
   o  Security: Just as a Content Object in the context of the
      Interest/Data primitive provides data authentication and
      privacy, similar features should also be offered by notification
      objects.

   o  Routing/Forwarding Support: Name prefixes over which multicast
      notifications are managed should be handled differently from the
      name prefixes over which the Interest/Data primitive is used for
      content distribution, in order to support the PUSH intent.  This
      differentiation applies to the control as well as the forwarding
      plane.

   o  Minimizing Processing: Notification processing in the forwarder
      should be minimized, considering the application's intent to
      PUSH data to listening consumers.

3.  Using Interest/Data Abstraction for PUSH

   Recent CCN and NDN research [6][7] has studied the problem of
   handling notifications and proposed several solutions.  Here, we
   discuss several of them and point out their benefits and issues:

   Long-lived Interest v.1: The most intuitive solution makes the
      assumption that the consumers know exactly the names of the
      contents that will be published in the future.  Yet, this is not
      easy, since the providers can give arbitrary names to each piece
      of content, even though the contents might share a common prefix
      (i.e., GROUP_PREFIX).  To make it feasible, the providers can
      publish the contents with sequential IDs, e.g., /GROUP_PREFIX/
      SEQUENTIAL_ID[/SEGMENT_ID], so that the consumers can query the
      contents with the names /GROUP_ID/item_1, /GROUP_ID/item_2, ...
      (each name represents a content item).  The consumers can
      pipeline the requests (always keep some unsatisfied requests in
      flight, similar to TCP) to better utilize the network capacity.
      However, this solution has several issues, especially in the
      multi-provider scenario:

      *  Since it is unknown to the consumer (and the network) which
         provider will use which sequential ID, each request has to be
         forwarded to all the possible providers.  This solution might
         use up a large amount of state (PIT entries) in the network,
         as each consumer can keep tens of requests (to all providers)
         in flight for each group.

      *  Since each sequential ID should only be used by one provider,
         many PIT entries will not be consumed until timeout (if there
         is a timeout mechanism).  E.g., if P1 and P2 are two providers
         of a group (/GROUP), the consumers have to send the requests
         /GROUP/item_1 and /GROUP/item_2 to both providers.  Assume
         that P1 publishes first, so it uses the name /GROUP/item_1.
         The PIT entries for /GROUP/item_1 towards P2 will not be
         consumed, since P2 should now publish with the name
         /GROUP/item_2.

      *  When the PIT entries form loops in the network (which can
         happen quite often in the multi-provider, multi-consumer
         scenario), the data packets can waste network capacity while
         following the loops, and get discarded when duplication
         occurs.

      *  Beyond the inefficiencies mentioned above, one major issue
         with this solution is the difficulty of provider
         synchronization.  It is not easy to make sure that different
         providers use different sequential IDs, especially when the
         providers are publishing contents at the same time.

   Polling v.1: To eliminate the requirement for a sequential ID when
      publishing (to address the synchronization issue), the solution
      Polling v.1 makes the providers publish contents with the name
      format /GROUP_ID/TIMESTAMP.  While querying the contents, the
      consumer queries using the name /GROUP_ID/ with an "exclude"
      field covering all versions up to Tx, where Tx is the latest
      version the consumer has received.
      E.g., after receiving a content with the name /GROUP_ID/v_1234
      (v_1234 is the timestamp of the publication time), the consumer
      would send a query with the name /GROUP_ID/ excluding versions
      up to v_1234.  It might get the next piece, with the name
      /GROUP_ID/v_2345 (assuming that there is no content published
      between these two timestamps), without the need to know the
      exact names of the contents.  The content providers do not have
      to be synchronized on the sequential IDs and use the timestamp
      instead.

      While this solution is similar to the one used in NDN for
      getting the "latest" version under a prefix, it has several
      issues when we need to get "all" versions under a prefix:

      *  Ambiguous contents will appear when two providers of the same
         group publish at the same time.

      *  Consumers might miss messages when the clocks are not
         synchronized on the providers.  E.g., one provider (with a
         faster clock) might publish a content with the name
         /GROUP_ID/v_2345 after v_1234.  When the consumer queries for
         the earliest version after v_1234, it will get that content.
         Yet, another provider (with a slower clock) could publish a
         content with the name /GROUP_ID/v_2234 after the consumer
         gets v_2345.  The consumer would miss the content with v_2234,
         as it will query for versions after v_2345.

      *  Consumers might miss messages due to different delivery
         latencies (e.g., cache hit vs. no cache hit) even when the
         clocks on the providers are perfectly synchronized (e.g., via
         GPS signals).  E.g., a client queries for content under
         /GROUP_ID/ excluding versions up to v_1234, and two pieces of
         content exist in the network (v_2234 and v_2345).  It can
         happen that v_2345 is returned earlier (either due to a cache
         hit or because the provider is closer).  The consumer would
         then query for versions after v_2345 and miss v_2234 with
         this solution.

      *  Just as with the previous approach, this mechanism also
         requires the producers to synchronize so that they don't
         produce content using the same name.
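   The clock-skew miss described above can be illustrated with a toy
   simulation (the helper names below are illustrative, not part of
   any CCN implementation):

```python
# Toy model of "Polling v.1": the network returns the earliest stored
# version strictly greater than the consumer's exclusion point.
store = []

def publish(ts):
    store.append(ts)

def query_earliest_after(tx):
    later = sorted(t for t in store if t > tx)
    return later[0] if later else None

# Provider with a faster clock publishes v_1234 and then v_2345.
publish(1234)
publish(2345)

tx, received = 0, []
while (v := query_earliest_after(tx)) is not None:
    received.append(v)
    tx = v                      # advance the exclusion point

# Provider with a slower clock now publishes v_2234 -- too late: the
# consumer already excludes everything up to v_2345.
publish(2234)
print(received)                  # [1234, 2345]
print(query_earliest_after(tx))  # None -> v_2234 is silently missed
```

   The loop mimics the consumer's polling; once the exclusion point
   passes 2234, no later query can ever retrieve that publication.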
   Long-lived Interest v.2: To completely address the issues with
      multiple providers sharing the same prefix (e.g., provider
      synchronization in Long-lived Interest v.1, and clock
      synchronization in Polling v.1), Long-lived Interest v.2 gives a
      prefix to each provider.  The providers in this solution publish
      contents with the name /GROUP_ID/PROVIDER_ID/SEQUENTIAL_ID, and
      the consumers query the full names accordingly (similar to Long-
      lived Interest v.1 but with an extra prefix, PROVIDER_ID).  The
      consumer can still use pipelining to improve the throughput.

      While this solution can avoid the packet losses of the previous
      solution, it has several other issues:

      *  Consumers have to know all the potential providers, which
         might be difficult in some applications where every user can
         send messages in any group that they might be interested in.

      *  Compared to Long-lived Interest v.1, the consumers in this
         solution have to keep multiple pending queries per group per
         provider.  It might consume even more state in the network,
         which makes the solution less scalable.

      *  When a provider has more than one device (e.g., a laptop and
         a smartphone) that can publish contents under the same name
         /GROUP_ID/PROVIDER_ID, the solution has the same
         synchronization issue as Long-lived Interest v.1.  If the
         solution mandates that each device have a separate provider
         ID, it will end up with even more PIT entries (state) in the
         network, and the solution becomes less "information-centric".

   Polling v.2: To reduce the state and the control overhead of Long-
      lived Interest v.2, the solution Polling v.2 allows the provider
      to process the requests in the application layer.  Periodically,
      the consumer queries each provider for "any update after Nx" (Nx
      is the name of the last content the consumer has received).  The
      query would be in the format /GROUP_ID/PROVIDER_ID/Nx/NONCE.
      The provider would reply with aggregated results in one response
      (with different segments, but under the same name), or an
      indication of "no update" if there is no publication after Nx.
      Since the same query for /GROUP_ID/PROVIDER_ID/Nx can get
      different responses ("no update", or aggregated publications), a
      NONCE has to be added to the name to prevent possible cache hits
      in the network.  This solution can be effective in games, since
      the publication rate (actions of the provider in the game) is
      much higher than the polling rate (refresh rate on the
      consumer).  However, it still has some issues (inefficiencies):

      *  There is a tradeoff between timeliness and in-network traffic
         when choosing the polling frequency.  The solution can be
         inefficient when the polling is too frequent: most of the
         polls will get "no update" responses.  This can consume a
         large amount of traffic in the network and extra computation
         on both the providers and the consumers.  Timeliness is
         impaired when the polling is infrequent, since a publication
         can only reach the consumer when the consumer queries.  The
         average delivery time of a publication in such a solution is
         half of the polling period.

      *  In-network caches cannot be used, since the response to the
         same query (without the nonce) can differ according to the
         time (and maybe the consumer).

      *  Consumers still have to know all the potential providers,
         similar to Long-lived Interest v.2.
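   The "half of the polling period" figure above can be checked with a
   quick Monte Carlo sketch (function and parameter names are
   illustrative only):

```python
import random

# Toy estimate of the timeliness cost of polling: publications occur
# at random times, and the consumer polls every `period` seconds,
# picking up everything published since the previous poll.
def average_delivery_delay(period, n_events=100_000, horizon=1000.0, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_events):
        t = rng.uniform(0.0, horizon)               # publication time
        next_poll = (int(t / period) + 1) * period  # first poll after t
        total += next_poll - t                      # waiting time
    return total / n_events

for period in (1.0, 2.0, 4.0):
    # The mean delay converges to period / 2, matching the text.
    print(period, round(average_delivery_delay(period), 2))
```

   Halving the delay therefore requires doubling the polling rate, and
   with it the volume of "no update" traffic, which is exactly the
   tradeoff described above.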
      In this solution, the consumers do not have to poll each provider
      for updates, which reduces the overhead in the network.  With
      the aggregated response on the server, the network traffic is
      further reduced.  However, it still has several issues:

      *  Similar to all server-based solutions like Facebook and
         Twitter, the server has to deal with all the polls.  This
         creates a single point of failure.

      *  It is not easy for the providers to "publish contents to the
         server".  This becomes another notification problem and has
         to be solved by the other solutions mentioned in this section.

      *  Caches are not used in this solution, similar to Polling v.2.

      *  This solution is not really "information-centric", as the
         consumers have to get the location of the content rather than
         the content itself.

   Interest Overloading: Since all the aforementioned query/response
      solutions have issues with efficiency, scalability, and/or
      timeliness, Interest Overloading modifies the communication
      pattern by using Interest packets to deliver publications
      directly.  The consumers in this solution propagate a FIB entry
      for /GROUP_ID to all potential providers (or simply flood the
      network).  When a provider sends a publication, it sends an
      Interest with the name /GROUP_ID/NONCE/ and the lifetime set to
      zero.  Since traditional Interest packets do not have a payload,
      the solution has to embed (e.g., URL encode [1]) the payload in
      the name of the Interest.  The NONCE is used to prevent PIT
      aggregation, since providers may publish contents with the same
      payload (e.g., sensor readings).  This solution can address the
      timeliness and scalability issues of the Polling and Long-lived
      Interest solutions, yet there are still some issues:

      *  This solution creates ambiguity in the meaning of Interest
         packets (and the corresponding forwarding behaviors on the
         routers).
         For a normal Interest packet, the forwarding engines should
         perform an anycast (send it to only one of the providers)
         according to the FIB.  However, in this solution, the
         forwarding engines should use multicast logic for the prefix
         /GROUP_ID (and avoid PIT storage).  The solution in [8]
         specifies some multicast prefixes so that the forwarding
         engines can distinguish the publications from the normal
         requests.  Yet, this places higher overhead on both the
         forwarding engines and the network management.  It also
         prevents providers from creating contents under the /GROUP_ID
         prefix (since the query will be forwarded using multicast,
         and not kept in the PIT).

      *  Routing is also a concern in this solution.  When the
         consumers propagate the FIB entry, it should reach all
         potential providers (most of the time it will flood the
         network, since all the users can be potential providers).
         Naturally, in a multi-provider, multi-consumer scenario, the
         FIB entries would form a mesh in the network.  It is less
         scalable compared to the tree-based routing in IP multicast
         (PIM-SM).  The network has to specify another routing policy
         specifically for these prefixes, which places even higher
         overhead on network management.

      *  As mentioned in [9], it is not efficient to embed a large
         amount of data into the name of the Interest packets.  It
         adds more computation and storage overhead in the forwarding
         engines (PITs).

   Interest Trigger: Similar to Interest Overloading, Interest Trigger
      uses an Interest packet as the notification.  To eliminate the
      overhead of embedding the content in the Interest, this solution
      places the name of the publication in the name of the
      notification (Interest) packet.  On receiving the notification,
      the consumers can extract the content name and send another
      query (Interest) for the real content.
      While this solution reduces the overhead of embedding the
      payload, it still has the ambiguity and routing issues of the
      Interest Overloading solution.  It also incurs an additional
      round-trip delay before the produced data arrives at the
      listening consumer.

   To summarize, CCN and NDN operate on a PULL primitive optimized for
   content distribution applications.  Emulating a PUSH operation over
   PULL has the following issues:

   o  There is a mismatch between an application's intent to PUSH data
      and the PULL APIs currently available.

   o  Unless Interests are marked distinctly, Interests overloaded
      with notification data will undergo PIT/CS processing and are
      also subject to the same routing and forwarding policies as
      regular Interests, which is inefficient.

   o  Another concern in treating PUSH as PULL is the effect of local
      strategy layer routing policies, where the intent to experiment
      with multiple faces to fetch content is not required for
      notification messages.

   This motivates the need for treating notifications as a separate
   class of traffic, which would allow a forwarder to apply the
   appropriate routing and forwarding processing in the network.

4.  Proposed Notification Primitive in CCN

   Notification is a new type of packet and hence can be subjected to
   different processing logic by a forwarder.  By definition, a
   notification message is a PUSH primitive and hence is not subjected
   to PIT/CS processing.  This primitive can also be used by any other
   transactional or content distribution application for service
   authentication or for exchanging contextual information between end
   points and the service.

5.  Notification Message Encoding

   The wire packet format for a Notification is shown in Fig. 1 and
   Fig. 2.  Fig. 1 shows the Notification fixed header considering the
   CCNx1.0 encoding, and Fig. 2 shows the format for the CCN
   Notification message, which is used to transport the notification
   data.  We next discuss these two packet segments of the
   Notification message.

                        1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +---------------+---------------+---------------+--------------+
   |    Version    |  PacketType=  |         PacketLength         |
   |               | Notification  |                              |
   +---------------+---------------+---------------+--------------+
   |   HopLimit    |   Reserved    |     Flags     | HeaderLength |
   +---------------+---------------+---------------+--------------+
   /              Optional Hop-by-hop header TLVs                 /
   +---------------+---------------+---------------+--------------+
   /           Content Object as Notification Message             /
   +---------------+---------------+---------------+--------------+

               Figure 1: CCN Notification fixed header

                        1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +---------------+---------------+---------------+--------------+
   | MessageType = Content Object  |        MessageLength         |
   +---------------+---------------+---------------+--------------+
   |                           Name TLV                           |
   +---------------+---------------+---------------+--------------+
   |                    Optional MetaData TLVs                    |
   +---------------+---------------+---------------+--------------+
   |     Message Payload Type      |     Message Type Length      |
   +---------------+---------------+---------------+--------------+
   |            Payload or Optional Content Object                |
   +---------------+---------------+---------------+--------------+
   /           Optional CCNx ValidationAlgorithm TLV              /
   +---------------+---------------+---------------+--------------+
   / Optional CCNx ValidationPayload TLV (ValidationAlg required) /
   +---------------+---------------+---------------+--------------+

                 Figure 2: CCN Notification Message

   Notification Fixed Header: The fields in the fixed header that have
   a new meaning in the context of notifications are discussed next,
   while the other fields follow the definitions in [1].

   o  Packet Type: This new type code identifies that the packet is of
      type Notification [TBD].

   o  Optional Hop-by-hop header TLVs: These encode any new hop-by-hop
      headers relevant to notifications [TBD].

   CCN Notification Message: The CCN Notification message is a Content
   Object as in [1].  Notifications are always routed on the top-level
   Content Object (outer CO) name.  The Notification itself can be
   encoded in two forms, depending on the application requirement:

   o  Notification with a single name: In this case the notification
      contains a single Content Object.  Here the producer generates
      the notification using the same name on which the consumers
      listen.

   o  Notification with two names: In this case the notification
      contains a top-level Content Object (outer CO) that encapsulates
      another Content Object (inner CO).  With an encapsulated Content
      Object, the meaning is that notification producers and consumers
      operate on different name-spaces, requiring separate name-data
      security bindings.  A good application of the encapsulation
      format is a PUB/SUB service, where the consumer learns the
      notification service name offline, and the producer, which is
      decoupled from the consumer, generates a new Content Object
      using its own name and pushes the notification to the consumer.

   The interpretation of the fields shown in Fig. 2 is as follows:

   o  MessageType: The CCN message type is of type Content Object.

   o  Name TLV: The Name TLV in the Content Object is used to route
      the Notification.

   o  Optional Metadata TLVs: These TLVs carry metadata used to
      describe the Notification payload.

   o  Message Payload Type: This is of type T_PAYLOADTYPE defined in
      CCNx1.0, or a new encapsulation type (T_ENCAP) that indicates
      the presence of another encapsulated Content Object [TBD].
   o  Optional Encapsulated Content Object: This is an optional
      encapsulated Content Object newly defined for the Notification
      primitive.  The name in the encapsulated Content Object
      corresponds to the producer's name-space, or anything else based
      on the application logic.  The rationale for an encapsulated
      Content Object was discussed earlier.

   o  Optional Security Validation data: The Content Object optionally
      carries a security validation payload as per CCNx1.0.

6.  Notification Processing

   The following steps are followed by a CCN forwarder to process a
   Notification packet:

   o  The Notification packet type is identified in the fixed header
      of a CCN packet by a new type code.  The Notification carries a
      Content Object, whose name is used for routing.  This name is
      matched against the FIB entries to determine the next hop(s).
      Novel strategy layer routing techniques catering to notification
      traffic can be applied here.

   o  The CCN forwarder also processes the optional metadata
      associated with the Notification meant for the network to help
      with the forwarding strategy; e.g., mission-critical
      notifications can be given priority over all other traffic.

   o  As mentioned earlier, the CCN forwarder MUST NOT cache the
      Content Objects in the notifications.

7.  Security Considerations

   The proposed processing logic of Notifications, which bypasses the
   processing of the PIT/CS, has the following security implications:

   Flow Balance: PIT state maintains the per-hop flow balance over all
   the available faces by enforcing a simple rule: one Content Object
   is sent over a face for a single Interest.  Bypassing PIT
   processing compromises this flow balancing property.  For scenarios
   where the notification traffic volume is not high, such as for IoT
   applications, the impact may not be significant.
   However, this may not be the case considering the plethora of
   social networking and emerging IoT applications in a general
   Internet scenario.  This flow balance tradeoff has to be weighed
   against an application's intent to PUSH data and the latency
   introduced by processing such traffic if a PULL primitive is used.
   Also, the PIT offers a natural defense mechanism by throttling
   traffic at the network edge, considering the provisioned PIT size,
   and bypassing it could exacerbate DDoS attacks on producing end
   points.

   Cache Poisoning: This draft doesn't recommend caching the Content
   Object in the Notification payload, though doing so might help
   increase the availability of notification information in the
   network.  A possible exception would be if the inner CO is a
   nameless object [10], as those can only be fetched from the CS by
   hash.  We leave this possibility of applying policy-based caching
   of Notification Content Objects for future exploration.  The
   rationale for not caching these Content Objects is that, in a
   regular Interest/Content Object exchange, content arrives at the
   forwarder and is cached as a result of per-hop active Interest
   expression.  Unsolicited Content Objects, as in the case of the
   Notification, violate this rule, which could be exploited by
   malicious producers to mount a DDoS attack against the cache
   resources of a CCN infrastructure.

8.  Annex

8.1.  Flow and Congestion Control

8.1.1.  Issues with Basic Notifications

   As mentioned in the previous sections, one of the main issues with
   notification is flow and congestion control.  One naive way to
   address this issue is for the routers to drop the packets of
   aggressive flows.  Flow-based fair queueing (and its variation,
   stochastic fairness queueing) maintains queues for flows (or hashes
   of flows) and tries to give a fair share to each flow (or hash).
Flows can be classified by name prefixes in the ICN case.  However,
according to [11], the overall network throughput will be affected
when there are multiple bottlenecks in the network.  Therefore, [11]
promotes an end-to-end solution for congestion control.  Flow balance
is a key requirement for end-to-end (or end-driven) flow and
congestion control.  In the case of CCN query/response, flow balance
entails that an Interest pulls at most one Data object from upstream.
The data consumer can therefore control the amount of traffic coming
from the data source(s), whether that is a data producer or an
in-network cache.  However, basic notification does not follow the
rule of flow balance: each Subscription can result in more than one
Notification being disseminated in the network.  In the absence of a
proper feedback mechanism to inform the data sender or the network of
the available bandwidth and local resources the consumer has, the
sender can easily congest the bottleneck link of the receivers
(causing congestion collapse) and/or overflow the buffer on the
receiver side.  In the following sections, we describe possible
congestion control mechanisms in ICN and how to deal with packet loss
when both congestion control and reliability are required.

8.1.2.  Flow and Congestion Control Mechanisms

Here we discuss broad approaches towards achieving flow and
congestion control in CCN as applied to Notification traffic.
Since the forwarding logic of Notification packets is quite similar
to that of IP multicast, existing multicast congestion control
solutions are candidates to solve the flow/congestion control issue
with Notification.  In addition, we summarize recent ICN research
addressing this issue.

8.1.2.1.  End-to-End Approaches

In multicast communication, a direct receiver-to-sender feedback loop
similar to TCP does not scale, since it would result in each receiver
sending ACKs (or NACKs) to the data sender and cause ACK (NACK)
implosion.  To address the ACK implosion issue, two types of
solutions have been proposed in multicast congestion control, namely,
sender-driven approaches and receiver-driven approaches.

8.1.2.1.1.  Sender-driven Multicast

In the first category, the sender controls the sending rate; to
ensure network friendliness, the sender usually aligns the sending
rate to that of the slowest receiver.

To avoid the ACK implosion issue, TCP-Friendly Multicast Congestion
Control (TFMCC [12]) uses a rate-based solution.  It uses TCP-
Friendly Rate Control (TFRC) to derive a proper sending rate based on
the RTT between the sender and each receiver.  The sender only needs
to collect the RTTs periodically instead of per-packet ACKs.
Similarly, in ICN, the sender can create another channel (namespace)
to collect RTT measurements from the receivers.  However, due to the
dynamics on each path, it is difficult to calculate the proper
sending rate.

To address the rate-calculation issue, pgmcc [13], a window-based
solution, was proposed.  It uses NACKs to detect the slowest receiver
(the ACKer).  The ACKer sends an ACK back to the sender on receiving
each multicast packet.  A feedback loop similar to TCP is formed
between the sender and the ACKer to control the sending rate.
Since the ACKer is the slowest receiver, the sender adapts its
sending rate to the available bandwidth of the slowest receiver; the
solution can therefore ensure network friendliness.  In the ICN case,
the receivers can send NACKs in the form of Notification packets
through another namespace, and the ACKer can use the same mechanism
to send ACKs.

However, since the sender always aligns the sending rate to the
slowest receiver to ensure network friendliness, the performance of
these solutions can be dramatically affected by a single very slow
receiver.

8.1.2.1.2.  Receiver-driven Multicast

Unlike the sender-driven solutions, the receiver-driven solutions
[14] use layered multicast to satisfy heterogeneous receivers.  The
sender first initiates several multicast groups (namespaces in the
case of ICN) with different sending rates.  Each receiver then joins
the multicast group with the highest sending rate that it can afford.
The sender can also adapt the sending rate of each multicast group
according to receiver status.

These solutions can support applications like video streaming (with
layered codecs) efficiently.  However, they also have some issues:
1) they complicate the sender and receiver logic, especially for
simple applications like file transfer; and 2) the receivers are
limited to the sending rates initiated by the provider and may
therefore under-utilize the available bandwidth.

8.1.2.2.  Hybrid Approaches

In this approach, flow balance of Notification is achieved by the
receivers notifying the network (rather than the sender or other
receivers) about the capacity they can receive.  Here, we take
advantage of operating the Notification service through a receiver-
driven approach with support from the network.

A solution based on this approach is proposed in [15], which we
summarize next.
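As a reference point for the summary that follows, the receiver-side
behavior such a solution assumes (one subscription per expected
Notification, with the window of outstanding subscriptions adjusted
by AIMD) can be sketched in Python.  This is a minimal illustrative
sketch only; the class name, callback, and window constants are
assumptions of this document, not part of [15].

```python
# Illustrative sketch: a receiver that keeps a congestion window of
# outstanding one-shot subscriptions and adjusts it with AIMD.
class AimdReceiver:
    def __init__(self, send_subscription, initial_window=1.0):
        self.send_subscription = send_subscription  # emits one one-shot subscription
        self.cwnd = initial_window                  # window, in subscriptions
        self.outstanding = 0                        # subscriptions awaiting a Notification

    def fill_window(self):
        # Keep as many one-shot subscriptions outstanding as the window allows.
        while self.outstanding < int(self.cwnd):
            self.send_subscription()
            self.outstanding += 1

    def on_notification(self):
        # Additive increase: roughly +1 subscription per window, as in TCP.
        self.outstanding -= 1
        self.cwnd += 1.0 / self.cwnd
        self.fill_window()

    def on_congestion_signal(self):
        # Multiplicative decrease on a congestion indication (e.g., a RED mark).
        self.cwnd = max(1.0, self.cwnd / 2.0)
```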
To retain flow balance, the consumers in this solution send out one
subscription for only the next Notification, instead of the original
logic of receiving all Notifications after a single subscription.
Similar to the flow and congestion control in query/response, the
receivers can now maintain a congestion window to control the amount
of traffic coming from upstream.

Here, instead of maintaining a (name, outgoing face) pair in the FIB
(or subscription table), the routers add a third field --
accumulated count -- to each entry.  The accumulated count is
increased by 1 on receiving such a subscription and decreased by 1 on
sending a Notification to that face.  The routers should also
propagate the maximum accumulated count upstream, up to the first-hop
router of the provider (or the rendezvous point in the network).  The
subscribers send a subscription for every successfully received
notification.  We also assume that the subscribers operate based on
an AIMD scheme.

If the dissemination of Notifications follows a tree topology in the
network, we define the branching point of a receiver R (BP_R) as the
router closest to R which has another outgoing face that can receive
data faster than R.  For receivers that have the bandwidth/resources
to receive all the data from the provider, BP_R is the first-hop
router of the provider (or the rendezvous point).

In this solution, we can prove that there is a feedback loop between
each receiver and its branching point.  Therefore, when a receiver
maintains its congestion window size using AIMD, the traffic between
the branching point and the receiver behaves similarly to TCP.  The
receiver can get a fair share at the bottleneck on the path, even if
the bottleneck is not directly under the branching point.  In the
multicast tree, the solution can ensure fairness with other
(TCP-like) flows on each branch.
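The per-face accumulated-count bookkeeping described above can be
sketched as follows.  This is an illustrative sketch only; the table
layout and method names are assumptions of this document, not a
normative forwarder data structure.

```python
from collections import defaultdict

# Illustrative sketch: per-(prefix, face) subscription credits at a router.
class CreditTable:
    def __init__(self):
        self.count = defaultdict(int)  # (prefix, face) -> accumulated count

    def on_subscription(self, prefix, face):
        # +1 credit for this face; the maximum over all faces is what the
        # router propagates upstream toward the provider's first-hop router
        # (or the rendezvous point).
        self.count[(prefix, face)] += 1
        return self.max_count(prefix)

    def on_notification(self, prefix, faces):
        # Forward only on faces that still hold credit, consuming one unit each.
        eligible = [f for f in faces if self.count[(prefix, f)] > 0]
        for f in eligible:
            self.count[(prefix, f)] -= 1
        return eligible

    def max_count(self, prefix):
        return max((c for (p, _), c in self.count.items() if p == prefix),
                   default=0)
```

A receiver that keeps exactly one subscription outstanding per
received Notification thus never lets its credit at the branching
point grow beyond its window, which is what yields the per-branch
feedback loop claimed above.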
The solution can thus allow the sender to send at an application-
efficient rate rather than being constrained by the slowest receiver,
as in pgmcc [13].

It is true that the solution requires more packets and more state in
the network compared to the basic notification solution, but the cost
is comparable to (and no greater than) that of query/response.  Since
we are using a one-notification-per-subscription pattern, the amount
of traffic overhead is the same as for query/response.  As for the
state stored in the router, the solution only requires one entry per
prefix per face, which is smaller than that of query/response, which
requires one entry per packet per face.  Therefore, the overhead of
the solution is acceptable in CCN.

8.1.2.2.1.  Other Challenges

o  Sender Rate Control: The sender in this solution does not have to
   limit the sending rate to the slowest receiver to maintain network
   friendliness.  Therefore, the choice of sending rate is a tradeoff
   between network traffic and session completion time.  In cases
   where the application does not require a certain sending rate
   (like file transfer), the sender can align the sending rate to the
   slowest receiver (similar to pgmcc) to minimize the repair
   traffic, but at the cost of a longer session completion time.  It
   can also send at the rate of the fastest receiver and rely on peer
   repair in the network.  This allows faster receivers to finish the
   session earlier, but causes higher network traffic due to the
   repair.  An ACKer-based solution similar to pgmcc can be adopted
   to allow the sender to align the rate with a chosen proportion of
   users (e.g., the top 30%).  The sender can collect feedback
   (throughput, latency, etc.) from all the receivers periodically
   and pick an ACKer according to the proportion it desires.  On
   receiving a Notification packet, the ACKer would send an ACK just
   like TCP.
   The sender can then maintain a TCP-like congestion window.  The
   feedback loop between the sender and the ACKer aligns the sending
   rate with the ACKer's available bandwidth.

o  Receiver Window Control: Slightly different from the one-sender,
   one-receiver window control in TCP, the sending rate in the hybrid
   approach is not controlled by any single receiver.  Receiving
   intermittent packets can indicate either congestion (similar to
   TCP) or an insufficient window size (when the sending rate is
   higher).  In the first case, the receiver should reduce the window
   size, while in the second case, the receiver should increase it.
   An indication of congestion (e.g., Random Early Detection, RED)
   should be provided directly by the network.  Receivers with
   available bandwidth higher than the sending rate would otherwise
   grow an overly large window, since they do not see any packet
   loss.  Please refer to [15] for a detailed solution to this issue.

8.1.3.  Receiver Reliability

A receiver will miss packets when its available bandwidth/resources
are lower than the sending rate of the Notification provider.  Some
applications (like gaming and video conferencing) can tolerate such
packet loss while others (like file transfer) cannot.  Therefore,
another module that ensures reliability is needed.  However,
reliability should be separated from flow and congestion control
since it is not a universal requirement.

With the solution described in the receiver-driven or the hybrid
approach, slower consumers would receive intermittent packets since
the sending rate can be faster than their fair share.  Applications
that require reliable transfer can query for the missing packets
similarly to normal query/response.  This also requires that each
content item in the Notifications have a unique Content Name (or
hash in the nameless scenario).
The clients should also be able to detect missing packets, either
based on sequence numbers or on a pre-acquired meta-file.  Caching in
CCN can be leveraged to achieve availability and reliability.

The network can forward the requests (Interests) for the missing
packets towards the data provider, the other consumers and/or the
in-network caches to optimize the overall throughput of the
consumers.  This solution is similar to Scalable Reliable Multicast
(SRM [16]).  However, as mentioned in [17], solutions like SRM
require the consumers to communicate directly with each other and
therefore compromise privacy and trust.  CCN can ensure privacy since
the providers cannot learn the identities of the consumers.  Trust
(data integrity) is also maintained through the signature in the Data
packets.

8.2.  Routing Notifications

Appropriate routing policies should be employed to ensure reliable
forwarding of a notification to its one or many intended receivers.
The name in the notification identifies a host or a multicast service
listened to by the multiple intended receivers.  Two types of routing
strategies can be adopted to handle notifications, depending on
whether or not explicit pub/sub state is maintained in the forwarder.

o  Stateless forwarding: In this case, the forwarder relies only on
   the CCN FIB state to route the notification.  The FIB entries are
   populated through a routing control plane, which distinguishes the
   FIB states for the notification service from the content-fetching
   FIB entries.  Through this logical separation, a Notification can
   be routed by matching its name against the corresponding FIB
   entries in the CCN forwarder, and hence be processed as
   notification multicast.

o  Stateful forwarding: In this case, specific subscription state is
   managed in the forwarder to aid notification delivery.
   This is required to scale notifications while at the same time
   applying notification policies, such as filtering notifications,
   or to improve notification reliability and efficiency for
   subscribing users [18].

8.3.  Notification Reliability

This proposal doesn't provide any form of reliability.  Reliability
can be realized by the specific application using the proposed
notification primitive, for instance using the following potential
approaches:

Caching: This proposal doesn't propose any form of caching, but
caching can be explored to improve notification reliability; this is
a subject of future study.  For instance, consumers that expect
notifications and use external means (such as periodic updates or
received manifests) to track them can recover lost notifications
using the PULL feature of CCN.

Notification Acknowledgment: If the producer maintains per-receiver
state, then the consumer can send a notification ACK or NACK back to
the producer to indicate whether notifications were received.

8.4.  Use Case Scenarios

Here we discuss the use of Notification in different scenarios.

8.4.1.  Realizing a PUB/SUB System

A PUB/SUB system provides a service infrastructure for subscribers to
request updates on a set of topics of interest, with publishers
multicasting content on those topics.  A PUB/SUB system maps the
subscribers' interests to published contents and pushes them as
Notifications to the subscribers.  A PUB/SUB system has many
requirements, as discussed in [19], which include low latency,
reliability, fast recovery, scalability, security, and minimizing
false (positive/negative) notifications.

Current IP-based PUB/SUB systems suffer from interoperability
challenges because of application-defined naming approaches and the
lack of multicast support in the data plane.
The proposed Notification primitive can be used to realize a large-
scale PUB/SUB system, as it unifies naming at the network layer and
supports name-based multicasting.

Depending on the routing strategy discussed earlier, two kinds of
PUB/SUB approaches can be realized: 1) a Rendezvous-style approach;
2) a Distributed approach.  Each of these approaches can use the
Notification primitive to implement its PUSH service.

In the Rendezvous-style approach, a logically centralized service
maps the subscribers' topic interests to the publishers' content and
pushes it as notifications.  If stateless forwarding is used, the
routing entries contain the specific application-IDs requesting a
given notification; to handle scalability, a group of these
applications can share a multicast-ID, reducing the state in the FIB.

In the Distributed approach, the CCN/NDN protocol is further enhanced
with a new subscription primitive for interested consumers.  When a
consumer explicitly subscribes to a multicast topic, its subscription
request is forwarded to the upstream forwarder, which manages the
state mapping between subscription names and the downstream faces
that have expressed interest in Notifications pushed under that
prefix.  An example of the network-layer-based approach is the COPSS
notification proposal [19].  Here, PUB/SUB multicast state, called
the subscriber interest table, is managed in the forwarders.  When a
Notification arrives at a forwarder, the content descriptor in the
notification is matched against the PUB/SUB state in the forwarder to
decide the faces over which the Notification has to be forwarded.

9.  Informative References

[1]   CCN Wire format, CCNX1., "http://www.ietf.org/id/
      draft-mosko-icnrg-ccnxmessages-00.txt.", 2013.
[2]   Osseiran, A., "Scenarios for 5G Mobile and Wireless
      Communications: The Vision of the METIS Project.", IEEE
      Communications Magazine, 2014.

[3]   NSF FIA project, MobilityFirst., "http://www.nets-fia.net/",
      2010.

[4]   NSF FIA project, XIA., "https://www.cs.cmu.edu/~xia/", 2010.

[5]   Observing Resources in CoAP, observe.,
      "https://tools.ietf.org/html/draft-ietf-core-observe-16.",
      2015.

[6]   Amadeo, M., Campolo, C., and A. Molinaro, "Internet of Things
      via Named Data Networking: The Support of Push Traffic",
      Network of the Future (NOF), 2014 International Conference and
      Workshop on the, 2014.

[7]   Shang, W., Bannis, A., Liang, T., and Z. Wang, "Named Data
      Networking of Things.", IEEE IoTDI 2016, 2016.

[8]   Zhu, Z. and A. Afanasyev, "Let's chronosync: Decentralized
      dataset state synchronization in named data networking", The
      21st IEEE International Conference on Network Protocols ICNP,
      2013.

[9]   Moiseenko, I. and D. Oran, "TCP/ICN: Carrying TCP over Content
      Centric and Named Data Networks", Proceedings of the 3rd ACM
      Conference on Information-Centric Networking ICN, 2016.

[10]  Mosko, M., "Nameless Objects.", IETF/ICNRG, Paris Interim 2016,
      2016.

[11]  Floyd, S. and K. Fall, "Promoting The Use of End-to-End
      Congestion Control in The Internet.", IEEE ToN vol. 7(4), pp.
      458-472, 1999.

[12]  Widmer, J. and M. Handley, "TCP-Friendly Multicast Congestion
      Control (TFMCC): Protocol Specification.", IETF RFC 4654, 2006.

[13]  Rizzo, L., "pgmcc: A TCP-Friendly Single-Rate Multicast
      Congestion Control Scheme.", SIGCOMM CCR vol. 30(4), pp. 17-28,
      2000.

[14]  McCanne, S., Jacobson, V., and M. Vetterli, "Receiver-driven
      Layered Multicast.", SIGCOMM CCR pp. 117-130, 1996.

[15]  Chen, J., Arumaithurai, M., Fu, X., and KK.
Ramakrishnan, "SAID: A Control Protocol for Scalable and Adaptive
      Information Dissemination in ICN.", arXiv vol. 1510.08530,
      2015.

[16]  Floyd, S., Jacobson, V., Liu, C., McCanne, S., and L. Zhang,
      "A Reliable Multicast Framework for Light-Weight Sessions and
      Application Level Framing.", IEEE TON vol. 5(6), pp. 784-803,
      1997.

[17]  Floyd, N., Grossglauser, M., and KK. Ramakrishnan, "Distrust
      and Privacy: Axioms for Multicast Congestion Control.",
      NOSSDAV, 1999.

[18]  Francois, J., et al., "CCN Traffic Optimization for IoT",
      Proc. of NoF, 2013.

[19]  Chen, J., Arumaithurai, M., Jiao, L., Fu, X., and K.
      Ramakrishnan, "COPSS: An Efficient Content Oriented
      Publish/Subscribe System.", ACM/IEEE Symposium on Architectures
      for Networking and Communications Systems (ANCS 2011), 2011.

Authors' Addresses

Ravishankar Ravindran
Huawei Technologies
2330 Central Expressway
Santa Clara, CA 95050
USA

Email: ravi.ravindran@huawei.com

Asit Chakraborti
Huawei Technologies
2330 Central Expressway
Santa Clara, CA 95050
USA

Email: asit.chakraborti@huawei.com

Syed Obaid Amin
Huawei Technologies
2330 Central Expressway
Santa Clara, CA 95050
USA

Email: obaid.amin@huawei.com

Jiachen Chen
Winlab, Rutgers University
671, U.S 1
North Brunswick, NJ 08902
USA

Email: jiachen@winlab.rutgers.edu