2.1.15 Web Intermediaries (webi)

NOTE: This charter is a snapshot of the 50th IETF Meeting in Minneapolis, Minnesota. It may now be out-of-date. Last Modified: 14-Mar-01


Chair(s):

Ian Cooper <icooper@equinix.com>
Mark Nottingham <mnot@akamai.com>

Applications Area Director(s):

Ned Freed <ned.freed@innosoft.com>
Patrik Faltstrom <paf@cisco.com>

Applications Area Advisor:

Patrik Faltstrom <paf@cisco.com>

Mailing Lists:

General Discussion: webi@equinix.com
To Subscribe: webi-request@equinix.com
In Body: (un)subscribe
Archive: http://www.wrec.org/webi-archive/

Description of Working Group:

This working group addresses issues specific to intermediaries in the World Wide Web infrastructure, providing generic mechanisms which are useful in several application domains (proxies operated by access providers, content delivery surrogates, etc).

Intermediaries are commonly deployed to help scale the WWW. Lack of mechanisms to control and communicate with them brings about scalability issues with intermediaries themselves, and lack of strong, scalable coherence mechanisms limits their adoption by both end users and content publishers.
Furthermore, access providers who wish to provision caching proxies in their networks have no standardized mechanism to announce such devices to user agents. As a result, many access providers resort to the use of interception proxies, which break the end-to-end relationship between client and server at the transport layer, leading to undesired behaviors.
Accordingly, the group's work items are to:
1) Develop a resource update protocol.
2) Gather requirements for an intermediary discovery and description mechanism.
It is expected that after requirements for intermediary discovery and description are gathered and evaluated, the working group will re-charter to continue that work.
Issues pertaining to coordination between multiple administrative domains are explicitly out of scope in this group's work items. Work associated with the modification of messages by intermediaries is also out of scope. Additionally, this group will only address application-level (e.g., HTTP) intermediaries.

Goals and Milestones:

Feb 01   Submit Requirements for Resource Update Protocol as an Internet-Draft

Mar 01   Meet at Minneapolis IETF

Jul 01   Submit Requirements for Intermediary Discovery and Description as an Internet-Draft

Aug 01   Meet at London IETF

Nov 01   Submit Resource Update Protocol as an Internet-Draft

Dec 01   Meet at Salt Lake City IETF

Feb 02   Submit Resource Update Protocol to the IESG for consideration as a standards-track publication

No Request For Comments

Current Meeting Report

Minutes of the WEBI Working Group meeting, IETF50 (Minneapolis)
Thursday 22 March 2001

Notes taken by Phil Rzewski <philr@inktomi.com>
converted to minutes by Ian Cooper <icooper@equinix.com>

(Also see slides)

The meeting provided a discussion forum for WEBI's current goals - the production of requirements documents for the "Resource Update Protocol" and "Intermediary Discovery and Description" work.

Dan Li provided a presentation and Q&A session on her experience with WCIP:

Ted Hardie commented that the WEBI RUP requirements document contains a base requirement of cache invalidation. Strong/eventual/delta consistency differ from invalidation alone, since they deal with how a replacement gets there. He queried whether there was some way to adapt the consistency model so that the loading of the new data is considered a process completely separate from the core invalidation requirement. There was some miscommunication between Ted and Dan and the conversation was taken off-line.

Oskar Batuner commented that when the origin knows a change time in advance and sends an advance notification, strong consistency can be provided. There is an issue of clock synchronization; it is not clear whether the notion of bringing new content into production at a predetermined time should be covered. This is useful not only for mirror sites, but also in CDNs and perhaps other cache networks.
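The advance-notification idea can be sketched as follows. This is an illustrative model only (the function and variable names are invented, not from WCIP or any WEBI draft), and it shows why clock synchronization matters: the freshness decision compares the cache's local clock against the origin-announced changeover time.

```python
import time

# Hypothetical sketch of advance notification: the origin announces, ahead
# of time, when new content goes into production; each cache then treats
# its copy as stale once that moment passes on its own local clock.
def advance_notice(changeover, url, change_at):
    """Record the origin-announced time at which the content of `url` changes."""
    changeover[url] = change_at

def is_fresh(changeover, url, now=None):
    """Fresh only while the local clock is before the announced change time.
    Clock skew between origin and cache directly weakens the guarantee."""
    if now is None:
        now = time.time()
    return url in changeover and now < changeover[url]
```

In this model a cache whose clock runs a few seconds behind the origin's would keep serving the old copy for that long past the changeover, which is the synchronization concern raised above.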

Mark Nottingham asked whether this was over and above cache control headers in HTTP. Dan commented that cache control directives are separate from invalidation messages; how one refreshes the system and how the content is obtained should not be included in the protocol.

Lisa Dusseault noted that folks might be confusing delta consistency with delta encoding.

In determining who drives invalidation, two options were presented: server driven (where invalidation is pushed to clients, providing "immediate" invalidation at the latency of network delay) and client driven (where a client polls on a volume of objects). Joe Touch questioned whether a client has any business asking about updates if the server has given the content a validity for a set time (even if the content has actually changed). Dan replied that there is an assumption that when content providers change something they usually want people to see the new version. Joe commented that this is not always the case, and that since content providers largely misunderstand the current semantics, they wouldn't necessarily do any better with a new set of semantics.
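The two models can be contrasted in a small sketch (the class and function names are hypothetical, not from any WEBI draft): the server-driven origin pushes invalidations to subscribed caches as soon as content changes, while the client-driven cache polls a per-volume version number at its own interval.

```python
class Cache:
    """Toy intermediary cache; each entry is url -> (body, fresh?)."""
    def __init__(self):
        self.entries = {}

    def store(self, url, body):
        self.entries[url] = (body, True)

    def invalidate(self, url):
        if url in self.entries:
            body, _ = self.entries[url]
            self.entries[url] = (body, False)

    def is_fresh(self, url):
        entry = self.entries.get(url)
        return entry is not None and entry[1]


class Origin:
    """Toy origin tracking subscribed caches and a volume version."""
    def __init__(self):
        self.version = 1
        self.subscribers = []

    def subscribe(self, cache):
        self.subscribers.append(cache)

    # Server-driven: push an invalidation to every subscriber, at the cost
    # of one message per cache and per-client state at the origin.
    def publish(self, url):
        self.version += 1
        for cache in self.subscribers:
            cache.invalidate(url)


# Client-driven: the cache polls the volume version at its own interval,
# so staleness is bounded by the polling period rather than network delay.
def poll(cache, origin, known_version):
    if origin.version != known_version:
        for url in list(cache.entries):
            cache.invalidate(url)
    return origin.version
```

The trade-off discussed above is visible directly: the pushed cache is stale only for the network delay, while the polled cache can serve stale content for up to one polling interval, and the origin must hold a subscriber list for the push model.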

WCIP separates transport from semantics, defining a channel abstraction. So long as an implementation conforms to the channel abstraction, WCIP should be able to run on it. An HTTP transport is currently specified, with BEEP and possibly PGM planned.

??? (Joe Touch?) commented that, for the presented URIs, the client would need to parse the entire URL before it could open a channel. Perhaps "beep://" would be better? The BEEP folks pointed out that "beep://" was inappropriate. Perhaps "beepw://" or "wcip-beep://". (Identified as a rathole; group moved on.)

It is unclear whether the protocol should work on a per-object basis (which has obvious scaling issues). When manipulating groups of objects, should the grouping be different from current notions of volumes? Also, the caching nature of the clients breaks a server's attempt to maintain state for each client.

It is unclear whether the protocol should support delta encoded updates, or whether it should simply provide hints as to where to fetch the updates. Fred Douglis suggested we shouldn't give up on updates just yet, though this breaks the clean design of just passing notifications. Oskar Batuner pointed out that mixing signalling with data leads to problems, including scaling, object consistency and security issues; the lack of a clear choice of delta encoding schema adds a lot of uncertainty to the protocol. Lisa Dusseault also commented that there was no need to reinvent delta encoding methodology and pointed out the rsync work. Mark Day suggested that the signalling system shouldn't be tied to a particular delta encoding scheme; the delta encoding technique could then evolve as necessary.
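One way to keep signalling separate from data, as suggested in the discussion, is for the notification to carry at most a hint of where to refetch, never the replacement bytes themselves. A minimal sketch (the message fields here are invented for illustration, not from any draft):

```python
def apply_notification(cache, msg):
    """Toy handler for a signalling-only channel: the core requirement is
    invalidation; an optional 'hint' field tells the cache where it may
    refetch a fresh copy. Any delta-encoded transfer would happen out of
    band, so the encoding scheme (rsync-style or otherwise) can evolve
    independently of the signalling protocol."""
    url = msg["url"]
    cache.pop(url, None)        # invalidate unconditionally
    return msg.get("hint")      # None when the origin offers no hint
```

A cache holding `/logo.png` would drop its copy on receipt and, if a hint is present, know where a replacement may be fetched, without the signalling channel ever carrying object data.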

Oskar Batuner raised concern about WCIP's use of HTTP, an asymmetric client-server protocol, as a symmetric protocol and for framing techniques it was not intended for. Dan suggested that WCIP might use multiple channels to overcome some of these issues, but pointed out there may be problems with interception proxies.

There was concern in relation to authenticated content, and that WCIP as proposed did not seem to consider this. Mark Nottingham commented that he viewed authentication and cache consistency as two separate issues in HTTP.

Brad Cain questioned whether the protocol would only carry cache invalidations, or whether it would be a wider-scope meta-data update protocol; certain types of meta-data (e.g. authentication) might require a reliability model with stronger acknowledgements.

Ian Cooper led a discussion on the proposed Resource Update Protocol requirements and use cases:

Use cases identified were Internet proxy, Surrogates (CDN), intranet proxy/surrogate, and non-intermediary uses.

There appears to be a scaling problem in the case of general Internet proxies, though Phil Rzewski pointed out that where a contractual agreement exists it would be possible to use relays to facilitate scaling.

Mark Nottingham asked if an open update protocol would be used within (intra) a single CDN. Phil Rzewski noted that he knew of CDNs currently using non-standard protocols that would be willing to adopt a standard.

In relation to the proposed CDI working group, Ian commented that there was some indication at the WG formation phase that WEBI might act as an umbrella group where a required protocol appeared common between WGs. CDI would continue to work on its own requirements documents, and WEBI should work closely with that group; it was not clear that WEBI would act as an umbrella. Gary Tomlinson commented that he viewed RUP as applying to the intra-CDN rather than the inter-CDN case. Phil Rzewski noted that RIP existed before BGP; if RUP were available, CDI might use it as a basis for its work, but that group might just watch RUP developments closely.

With reference to interception proxies, Joe Touch commented that trying to design a protocol standard to work with people who ignore protocol standards is a losing battle.

Mark Day commented that the "distribution" document being produced by CDI covered many of the use-case scenarios, and that WEBI should read that closely to ensure there were no major rifts between the CDI and WEBI documents.

Commenting on possible non-intermediary uses of RUP (e.g. within clients interested in receiving updates on material), Ted Hardie warned of the inherent scaling problems. However, within the CDI model the use of gateways may enable sufficient scaling. (CDI and WEBI need to look at content signalling requirements sooner rather than later.) Gary Tomlinson also noted that in the intranet proxy case there would be reasonable scaling opportunities. Ted Hardie repeated his warning, pointing out that while it may be relatively easy to do, it was still the wrong thing to do: "non-intermediary uses" is too vague.

??? raised the question of whether updates should be included within RUP, or whether we should simplify and stick with invalidations. Phil Rzewski commented that it may be possible to simply provide hints - direct the intermediary to fetch a new copy. Mark Day asked whether the protocol could send updates/invalidations/hints or whether the recipient had to obey them - these were identified as separate issues.

Mark Nottingham led a discussion on the use cases of the Intermediary Discovery and Description work:

Brad Cain asked whether it was possible to address just the User Agent components and leave aside mesh building and surrogates; perhaps we should be giving input to the DHCP/SLP groups? Mark asked if there was anyone from the client (browser) community in the room; while there were none, there were members of the CDN community present. Gary Tomlinson noted that surrogates look like origins, and that the notion that we may be discovering them implies a replacement for Request Routing. Mark commented that this was a case where we may be able to serve multiple needs.

??? commented that bootstrapping is a complex problem, agreeing with Brad on the notion of passing this work to DHCP or possibly zeroconf. There may be some interesting uses for this work in the inter-CDN case, to pass descriptions of what a particular CDN has knowledge of.

??? (Erik Guttman?) mentioned that he didn't fully understand all the requirements. There are three ways standard Internet protocols are used today to do this kind of configuration. Two of them treat services as interchangeable: one server will do as well as any other. With DNS, you find a list of services under a domain name. Similarly with DHCP, if there were a DHCP option (though new options are not actively encouraged in the DHCP working group), you would configure all the DHCP clients with the same proxy. SLP takes a different approach, saying that services have a set of characteristics, with multiple services described by URLs; this sounds very similar to what is being talked about here. It isn't clear how many URLs would need to carry characteristics, but the idea is that SLP is a directory-based model, so you actually discover things based on LDAP requests.
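The contrast drawn here can be illustrated with a toy directory in the spirit of SLP (the service-type strings and attribute names are invented for illustration, not registered SLP service types): services register a URL together with characteristics, and a client selects by attribute rather than receiving one fixed answer, as it would from a DHCP option or a bare DNS name.

```python
# Toy directory-based discovery in the spirit of SLP: each service registers
# a service: URL plus descriptive attributes, and clients filter on those
# attributes instead of all being handed the same single proxy.
directory = []

def register(url, **attrs):
    """Advertise a service URL with its characteristics."""
    directory.append((url, attrs))

def lookup(srvtype, **required):
    """Return service URLs of the given type whose attributes match."""
    prefix = "service:" + srvtype
    return [url for url, attrs in directory
            if url.startswith(prefix)
            and all(attrs.get(k) == v for k, v in required.items())]
```

For example, two proxies registered with different `location` attributes would let a client in the head office discover only the proxy appropriate to it, which the one-answer DNS/DHCP models cannot express.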

??? mentioned that a large organization he knew of ran into problems with proxy.pac files. While able to use existing function calls to make a smart decision on closest exit points, this was not as good as it could be. He didn't care whether it was DHCP/SLP/something else; a lot of corporations are looking for the User-Agent component of the proposed work. This could be particularly beneficial in environments where mobile users connect, via VPN, to corporate networks where previously configured proxies may no longer be appropriate.

It is currently unclear whether there is sufficient interest to continue with IDD (differences between hum levels for and against were too close for a call on rough consensus).


WCIP: Do's and Don'ts