INTERNET-DRAFT
Internet Engineering Task Force (IETF)                  Choong Seon Hong
Category: Standards Track                           Kyung Hee University
Expires: October 10, 2020                                       Kyi Thar
                                                    Kyung Hee University
                                                              Ki Tae Kim
                                                    Kyung Hee University
                                                           Seok Won Kang
                                                    Kyung Hee University
                                                            October 2020

      Edge AI Assisted Partial Content Caching with Smart Content
                           Prefetching Scheme
                        draft-edge-ai-cache-00.txt

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on August 09, 2021.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Abstract

   Watching videos (contents) on mobile devices accounts for most
   network traffic, and this traffic is projected to keep growing
   exponentially.
   Numerous content-based and chunk-based caching schemes have
   therefore been proposed to handle the increasing traffic.  These
   schemes cache whole videos at the edge nodes, but most users watch
   only the beginning of a video.  Hence, caching a complete video at
   the edge node is an ineffective way to reduce network traffic or
   to improve cache utilization.  A chunk-level caching scheme that
   stores popular videos partially, together with a smart prefetching
   scheme that supplies the missing chunks, is therefore needed.

Table of Contents

   1. Introduction . . . . . . . . . . . . . . . . . . . . . . . .   2
      1.1. Terminology and Requirements Language  . . . . . . . .   2
   2. System Model . . . . . . . . . . . . . . . . . . . . . . . .   3
   3. Sending the Learning Model to Predict Popularity . . . . . .   4
   4. Content Retrieval from the Edge Node . . . . . . . . . . . .   5
   5. Content Retrieval from the Content Server via the Edge Node    6
   6. Content Prefetching  . . . . . . . . . . . . . . . . . . . .   7
   7. IANA Considerations  . . . . . . . . . . . . . . . . . . . .   8
   8. Security Considerations  . . . . . . . . . . . . . . . . . .   8
   9. References . . . . . . . . . . . . . . . . . . . . . . . . .   8
      9.1. Normative References . . . . . . . . . . . . . . . . .   8
      9.2. Informative References . . . . . . . . . . . . . . . .   8
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . .   9

1. Introduction

   According to Cisco, watching videos on mobile devices accounts for
   most network traffic, and this traffic is projected to keep
   growing exponentially [a].  Researchers have therefore proposed
   numerous caching schemes, based on both reactive and proactive
   approaches, to handle the growing video traffic.  In reactive
   caching, the edge node decides whether to store a video when the
   request or the video arrives [b].  In the proactive approach,
   popular videos are cached, based on prediction results, before any
   user requests them [c][d].

   The performance of the proactive approach depends on the accuracy
   of the prediction model.  Deep learning models have recently
   attracted great attention for content popularity prediction
   because of advances in big data and high-performance computing.
   The caching schemes mentioned above store complete popular videos
   at the edge nodes (i.e., base stations).  The main issue is that
   most users watch only the beginning of a video, because they stop
   watching when they do not like the opening.  Hence, caching the
   whole video is an ineffective way to reduce network traffic or to
   improve the users' Quality of Experience (QoE).

   Therefore, edge Artificial Intelligence (AI) assisted partial
   video caching can improve cache performance.  Additionally, an
   edge-AI-based smart prefetching scheme can reduce the latency of
   accessing the missing chunks.  The goal of this work is to
   minimize the latency users experience when accessing videos from
   their devices.
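The chunk-level partial caching idea above can be sketched in a few lines of Python. This is an illustrative, non-normative sketch: the class name, the popularity threshold, and the `head_chunks` parameter are assumptions chosen for illustration, not part of this specification.

```python
# Illustrative sketch of chunk-level partial caching (not normative).
# A video is cached only partially: its first `head_chunks` chunks,
# reflecting the observation that most users watch only the
# beginning of a video.

class PartialChunkCache:
    def __init__(self, capacity_chunks, head_chunks=3):
        self.capacity = capacity_chunks   # total chunks the edge can hold
        self.head_chunks = head_chunks    # how many leading chunks to keep
        self.store = {}                   # (video_id, chunk_no) -> chunk data

    def admit(self, video_id, chunk_no, data, popularity):
        """Cache a chunk only if the video is popular enough and the
        chunk belongs to the beginning of the video."""
        if popularity < 0.5 or chunk_no >= self.head_chunks:
            return False
        if len(self.store) >= self.capacity and self.store:
            # Naive eviction: drop the oldest inserted chunk.
            self.store.pop(next(iter(self.store)))
        self.store[(video_id, chunk_no)] = data
        return True

    def lookup(self, video_id, chunk_no):
        return self.store.get((video_id, chunk_no))
```

Under this sketch, tail chunks of even a popular video are never admitted, which is the essence of partial caching.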
1.1. Terminology and Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL"
   in this document are to be interpreted as described in RFC 2119
   [RFC2119].

2. System Model

   Figure 1 shows an overview of the system components needed to
   implement the proposed scheme.  As shown in Figure 1, the cache
   storage space is divided into two partitions: i) Content Storage
   and ii) Prefetching Buffer.  The Content Storage partition stores
   the partially cached popular videos, and the Prefetching Buffer
   stores the chunks currently being prefetched.  The Popularity
   Prediction module predicts video popularity with the help of a
   deep learning model.  The Cache Decision module decides whether to
   store the chunks of a video based on the popularity profile and
   historical data.  The Prefetching Decision module retrieves the
   missing chunks.  Note that both the Cache Decision and Prefetching
   Decision modules use deep reinforcement learning.

   +-----------+             +------------+
   | Collected |<----------->| Popularity |
   |   Data    |             | Prediction |
   +-----------+             +------------+
         ^                         |
         |                         v
   +-------------------+    +-------------+    +---------+
   |       Cache       |<-->|    Cache    |<-->| Content |
   | +---------------+ |    |   Decision  |    | Server  |
   | |    Content    | |    +-------------+    +---------+
   | |    Storage    | |          ^                 ^
   | +---------------+ |          |                 |
   | +---------------+ |    +-------------+         |
   | |  Prefetching  | |<-->| Prefetching |<--------+
   | |    Buffer     | |    |   Decision  |         |
   | +---------------+ |    +-------------+         |
   +-------------------+          ^                 |
         ^                        |                 |
         |                  +----------+            |
         +----------------->| Request  |<-----------+
                            | Handler  |
                            +----------+

                     Figure 1: System Model
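The two-partition cache storage in the system model can be sketched as follows. This is a non-normative illustration; the class and method names are assumptions, and the promotion of a prefetched chunk into Content Storage on its first real request is one plausible policy, not one mandated by this document.

```python
# Illustrative sketch (not normative) of the cache storage split into
# the two partitions of the system model: Content Storage for
# partially cached popular videos and a Prefetching Buffer for
# chunks fetched ahead of user requests.

class EdgeCacheStorage:
    def __init__(self, content_capacity, prefetch_capacity):
        self.content_storage = {}    # (video_id, chunk_no) -> data
        self.prefetch_buffer = {}    # chunks fetched ahead of requests
        self.content_capacity = content_capacity
        self.prefetch_capacity = prefetch_capacity

    def put_content(self, key, data):
        if len(self.content_storage) < self.content_capacity:
            self.content_storage[key] = data

    def put_prefetched(self, key, data):
        if len(self.prefetch_buffer) < self.prefetch_capacity:
            self.prefetch_buffer[key] = data

    def get(self, key):
        # A chunk may be served from either partition; as an assumed
        # policy, a prefetched chunk is promoted to Content Storage
        # on its first real request.
        if key in self.content_storage:
            return self.content_storage[key]
        if key in self.prefetch_buffer:
            data = self.prefetch_buffer.pop(key)
            self.put_content(key, data)
            return data
        return None
```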
3. Sending the Learning Model to Predict Popularity

   Figure 2 shows the process of sending the learning models from the
   cloud data center to the edge node.  The initial learning models
   are constructed at the cloud data center.  The edge node then uses
   the received learning models to predict the popularity of contents
   and chunks.

        +----------+                 +-------+
        |  Cloud   |                 | Edge  |
        |Datacenter|                 | Node  |
        +----------+                 +-------+
             |  Stage-1                  |
             | ----------------------->  |
             | +---------------------+   |
             | | Send Deep Learning  |   |
             | | Model               |   |
             | +---------------------+   |
             |                           |

     Figure 2: Sending the Learning Model from the Cloud Datacenter
                             to the Edge Node

4. Content Retrieval from the Edge Node

   Figure 3 shows the content retrieval process from the edge node in
   the case where the requested chunk of the content is cached at the
   edge node.  When a user's retrieval of a content reaches a certain
   chunk-level threshold, the Prefetching Decision module
   pre-downloads the subsequent chunks before users request them.
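The threshold-triggered prefetching described above can be sketched as follows. This is a non-normative illustration; the threshold of 2 chunks, the prefetch window of 3 chunks, and all names are assumptions for illustration only.

```python
# Illustrative sketch (not normative) of threshold-triggered
# prefetching: once a user's sequential retrieval of a content
# passes a chunk-level threshold, the edge node fetches the next
# chunks before the user asks for them.

PREFETCH_THRESHOLD = 2   # assumed: prefetch after 2 chunks served
PREFETCH_AHEAD = 3       # assumed: fetch the next 3 chunks

def handle_request(video_id, chunk_no, cache, origin_fetch, progress):
    """Serve one chunk; trigger prefetching when the user's progress
    in this video crosses the threshold.

    cache:        dict (video_id, chunk_no) -> chunk data
    progress:     dict video_id -> highest chunk served so far
    origin_fetch: function (video_id, chunk_no) -> chunk data
    """
    data = cache.get((video_id, chunk_no))
    if data is None:                       # cache miss: go to origin
        data = origin_fetch(video_id, chunk_no)
        cache[(video_id, chunk_no)] = data
    progress[video_id] = max(progress.get(video_id, -1), chunk_no)

    if progress[video_id] + 1 >= PREFETCH_THRESHOLD:
        # Pre-download the next few chunks that are not yet cached.
        for n in range(chunk_no + 1, chunk_no + 1 + PREFETCH_AHEAD):
            if (video_id, n) not in cache:
                cache[(video_id, n)] = origin_fetch(video_id, n)
    return data
```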
   +----------+                +-------+               +-------+
   |   User   |                | Edge  |               |Content|
   |          |                | Node  |               |Server |
   +----------+                +-------+               +-------+
        |  Stage-1                  |                      |
        | ------------------------> |                      |
        | +--------------------+    |                      |
        | | Request chunk 1 of |    |                      |
        | | content A          |    |                      |
        | +--------------------+    |                      |
        |  Stage-2                  |                      |
        | <------------------------ |                      |
        | +------------------------+|                      |
        | | Return the chunk if the||                      |
        | | requested chunk of     ||                      |
        | | content A is in cache  ||                      |
        | +------------------------+|                      |

           Figure 3: Content Retrieval from the Edge Node

5. Content Retrieval from the Content Server via the Edge Node

   Figure 4 shows the content retrieval process from the content
   server via the edge node in the case where the edge node does not
   have the requested chunk of the content.  The edge node predicts
   content popularity using the deep learning model and constructs
   the popularity profile of the videos.  The edge node then makes a
   cache decision based on the collected video-access data and the
   predicted popularity profile.

   +----------+                +-------+               +-------+
   |   User   |                | Edge  |               |Content|
   |          |                | Node  |               |Server |
   +----------+                +-------+               +-------+
        |  Stage-1                  |                      |
        | ------------------------> |                      |
        | +--------------------+    |                      |
        | | Request a chunk of |    |                      |
        | | content A          |    |                      |
        | +--------------------+    |                      |
        |                           |  Stage-2             |
        |                           | -------------------> |
        |                           | +------------------+ |
        |                           | | Forward the      | |
        |                           | | request because  | |
        |                           | | the content is   | |
        |                           | | not in cache     | |
        |                           | +------------------+ |
        |                           |  Stage-3             |
        |                           | <------------------- |
        |                           | +------------------+ |
        |                           | | Return the       | |
        |                           | | requested chunk  | |
        |                           | | of content A     | |
        |                           | +------------------+ |
        |  Stage-4                  |                      |
        | <------------------------ |                      |
        | +------------------------+|                      |
        | | Cache (if popular) and ||                      |
        | | return the requested   ||                      |
        | | chunk of content A     ||                      |
        | +------------------------+|                      |

     Figure 4: Content Retrieval from the Content Server via the
                               Edge Node

6. Content Prefetching

   Figure 5 shows the content prefetching process, in which the edge
   node autonomously retrieves the chunks that follow the currently
   requested chunk of the content.

   +----------+                +-------+               +-------+
   |   User   |                | Edge  |               |Content|
   |          |                | Node  |               |Server |
   +----------+                +-------+               +-------+
        |  Stage-1                  |                      |
        | ------------------------> |                      |
        | +--------------------+    |                      |
        | | Request chunk 1 of |    |                      |
        | | content B          |    |                      |
        | +--------------------+    |                      |
        |                           |  Stage-2             |
        |                           | -------------------> |
        |                           | +------------------+ |
        |                           | | Forward the      | |
        |                           | | request for      | |
        |                           | | chunk 1 and      | |
        |                           | | chunk 1+n because| |
        |                           | | the content is   | |
        |                           | | not in cache     | |
        |                           | +------------------+ |
        |                           |  Stage-3             |
        |                           | <------------------- |
        |                           | +------------------+ |
        |                           | | Return chunk 1   | |
        |                           | | and chunk 1+n of | |
        |                           | | content B        | |
        |                           | +------------------+ |
        |  Stage-4                  |                      |
        | <------------------------ |                      |
        | +------------------------+|                      |
        | | Cache and return the   ||                      |
        | | requested chunk 1 and  ||                      |
        | | chunk 1+n of content B ||                      |
        | +------------------------+|                      |

               Figure 5: Content Prefetching Process

7. IANA Considerations

   This document has no IANA considerations.

8. Security Considerations

   This note touches on communication security issues similar to
   those in M2M communications and the CoAP protocol.

9. References

9.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.
   [a]  Cisco Visual Networking Index (VNI). Accessed: Feb. 7, 2019.

   [b]  Saeed Ullah, Kyi Thar, and Choong Seon Hong, "Management of
        Scalable Video Streaming in Information Centric Networking,"
        Multimedia Tools and Applications, 28 pages, October 2016.

   [c]  Anselme Ndikumana and Choong Seon Hong, "Self-Driving Car
        Meets Multi-access Edge Computing for Deep Learning-Based
        Caching," The International Conference on Information
        Networking (ICOIN 2019), Jan. 9-11, 2019, Kuala Lumpur,
        Malaysia.

   [d]  K. Thar, T. Z. Oo, Y. K. Tun, D. H. Kim, K. T. Kim, and
        C. S. Hong, "A Deep Learning Model Generation Framework for
        Virtualized Multi-Access Edge Cache Management," IEEE Access,
        vol. 7, pp. 62734-62749, 2019.
        doi: 10.1109/ACCESS.2019.2916080

9.2. Informative References

Authors' Addresses

   Choong Seon Hong
   Computer Science and Engineering Department, Kyung Hee University
   Yongin, South Korea
   Phone: +82 (0)31 201 2532
   Email: cshong@khu.ac.kr

   Kyi Thar
   Computer Science and Engineering Department, Kyung Hee University
   Yongin, South Korea
   Phone: +82 (0)31 201 2987
   Email: kyithar@khu.ac.kr

   Ki Tae Kim
   Computer Science and Engineering Department, Kyung Hee University
   Yongin, South Korea
   Phone: +82 (0)31 201 2987
   Email: glideslope@khu.ac.kr

   Seok Won Kang
   Computer Science and Engineering Department, Kyung Hee University
   Yongin, South Korea
   Phone: +82 (0)31 201 2987
   Email: dudtntdud@khu.ac.kr