2.1.9 WWW Distributed Authoring and Versioning (webdav)

NOTE: This charter is a snapshot of the 45th IETF Meeting in Oslo, Norway. It may now be out-of-date. Last Modified: 04-Jun-99


Chair(s):

Jim Whitehead <ejw@ics.uci.edu>

Applications Area Director(s):

Keith Moore <moore@cs.utk.edu>
Patrik Faltstrom <paf@swip.net>

Applications Area Advisor:

Keith Moore <moore@cs.utk.edu>

Mailing Lists:

General Discussion:w3c-dist-auth@w3.org
To Subscribe: w3c-dist-auth-request@w3.org
In Subject: subscribe
Archive: http://www.w3.org/pub/WWW/Archives/Public/w3c-dist-auth/

Description of Working Group:

This working group will define the HTTP extensions necessary to enable distributed web authoring tools to be broadly interoperable, while supporting user needs.

The HTTP protocol contains functionality which enables the editing of web content at a remote location, without direct access to the storage media via an operating system. This capability is exploited by several existing HTML distributed authoring tools, and by a growing number of mainstream applications (e.g. word processors) which allow users to write (publish) their work to an HTTP server. To date, experience from the HTML authoring tools has shown they are unable to meet their users' needs using the facilities of the HTTP protocol. The consequence of this is either postponed introduction of distributed authoring capability, or the addition of nonstandard extensions to the HTTP protocol. These extensions, developed in isolation, are not interoperable.

An ad-hoc group has analyzed the functional needs of several organizations, and has developed requirements for distributed authoring and versioning. These requirements encompass the following capabilities, which shall be considered by this working group:


*Locking: lock, lock status, unlock

*Name space manipulation: copy, move/rename, resource redirection (e.g. 3xx response codes)

*Containers: creation, access, modification, container-specific semantics

*Attributes: creation, access, modification, query, naming

*Notification of intent to edit: reserve, reservation status, release reservation

*Use of existing authentication schemes

*Access control

*Unprocessed source retrieval

*Informing proxies of an action's impact

*History graph

*Automatic Merging

*Naming and accessing resource versions

Further information on these requirements can be found in the document "Requirements for Distributed Authoring and Versioning on the World Wide Web" <http://www.ics.uci.edu/~ejw/authoring/webdav-req-00.html>.

While the scope of activity of this working group may seem rather broad, much of the functionality under consideration is well understood and has been considered before. This working group will leverage previous work where it is applicable. Discussion of the security issues concerning distributed authoring and versioning is essential to the creation of a protocol which implements this functionality.

Though the feature set described above bears a resemblance to the capabilities provided by a network file system, the intent of this working group is not to create a replacement distributed file system (e.g. NFS, CIFS). The WEBDAV emphasis on collaborative authoring of resources which are not necessarily stored in a file system, and which have associated metadata in the form of links and attributes, differentiates WEBDAV from a distributed file system.

Many decisions have been made to reduce the scope of effort of this working group. It is the intent of this working group to avoid the inclusion of the following functionality, unless it proves impossible to create a useful set of distributed authoring capabilities without it:


*Definition of core attribute sets, beyond those attributes necessary for the implementation of distributed authoring and versioning functionality

*Creation of new authentication schemes

*HTTP server to server communication protocols

*Distributed authoring via non-HTTP protocols (except email)

*Implementation of functionality by non-origin proxies

Eventually, it is desirable to provide access to WEBDAV capability by disconnected clients, or by clients whose only connectivity is via email. However, given the scope of developing requirements and specifications for disconnected operation, the initial target user group of fully connected clients, and the desire to work swiftly, the working group will address this issue by ensuring the protocol specification does not preclude a future body from developing an interoperability specification for disconnected operation via email.


The final output of this working group is expected to be three documents:

1. A scenarios document, which gives a series of short descriptions of how distributed authoring and versioning functionality can be used, typically from an end-user perspective. Ora Lassila, Nokia, currently visiting with the World Wide Web Consortium, is editor of this document.

2. A requirements document, which describes the high-level functional requirements for distributed authoring and versioning, including rationale. Judith Slein, Xerox, is editor of this document.

3. A protocol specification, which describes new HTTP methods, headers, request bodies, and response bodies, to implement the distributed authoring and versioning requirements. Del Jensen, Novell, is editor of this document.

The most recent versions of these documents are accessible via links from the WEBDAV Web page.

Goals and Milestones:

Mar 97  (Specification) Produce revised distributed authoring and versioning protocol specification. Submit as Internet Draft.

Apr 97  (Meeting, Specification, Requirements) Meet at Memphis IETF and hold working group meeting to review the protocol specification and requirements document.

Apr 97  (Scenarios) Revise scenarios document. Submit as Internet Draft.

Aug 97  (Scenarios) Create final scenarios document. Submit as Informational RFC.

Aug 97  (Requirements) Create final version of distributed authoring and versioning requirements document. Submit as Informational RFC.

Aug 97  (Specification) Produce revised distributed authoring and versioning protocol specification. Submit as Internet Draft.

Dec 97  (Specification) Complete revisions to distributed authoring and versioning specification. Submit as a Proposed Standard RFC.


Request For Comments:

Requirements for a Distributed Authoring and Versioning Protocol for the World Wide Web

HTTP Extensions for Distributed Authoring -- WEBDAV

Current Meeting Report

WebDAV WG Minutes
Oslo IETF 45
Thursday, July 15, 1999

A meeting of the WebDAV working group was held on Thursday, July 15, 1999, from 9:00AM to 11:30AM. Geoff Clemm was the acting chair for the duration of the meeting. Minutes were recorded by Lisa Lippert.


There was not enough consensus on what a direct reference should be, or on how it should interact with locking, collections, and versioning, so direct references were dropped and redirect references were used instead.

New method: BIND

Larry Masinter: It seemed that the model for these representations did map into the underlying architecture -- is that true? Is it widespread? Do BIND and ordered collections map to existing capabilities, or is this new capability that people are comfortable adding?

Geoff Clemm: People want this behaviour. The proposal already takes into account some subtleties of existing implementations.

Yaron Goland: Essentially it started with Unix 'ln'-type functionality. How can we best represent it in the protocol, as closely as possible to existing linking functionality? On the other hand, ordered collections have apparently been implemented in very advanced document management systems; nevertheless, Judy Slein did a good job of specifying this functionality.

Geoff: There are a couple of other people on the ML saying this is required functionality.

Geoff returned to discussing bindings: bindings allow you to bind "/a/b/foo.html" to "/a/b/bar.html", so that a resource has two names, a PROPPATCH will affect both. It does cause interesting issues with LOCK -- if you lock one, can you MOVE the other?

Explanation of how a binding to a collection works -- rather than have a shadow resource created for every resource within the collection, just the collection link is created. The purpose of the collection binding is, however, to have available all the collection resources under the new link.
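The binding model being discussed can be sketched in toy form: a server keeps a table from URIs to resources, and BIND simply adds a second table entry pointing at the same resource object, so a PROPPATCH through either name is visible through the other. This is an illustrative model only; the class names and URIs below are invented here, not taken from the minutes or any specification.

```python
# Toy model of WebDAV bindings: multiple names map to one shared resource.
# All names (/a/b/foo.html, etc.) and classes are illustrative assumptions.

class Resource:
    def __init__(self):
        self.properties = {}

class Server:
    def __init__(self):
        self.bindings = {}          # URI -> Resource

    def put(self, uri):
        self.bindings[uri] = Resource()

    def bind(self, new_uri, existing_uri):
        # BIND: give the same resource a second name; no copy is made.
        self.bindings[new_uri] = self.bindings[existing_uri]

    def proppatch(self, uri, name, value):
        self.bindings[uri].properties[name] = value

server = Server()
server.put("/a/b/foo.html")
server.bind("/a/b/bar.html", "/a/b/foo.html")

# A PROPPATCH through one name is visible through the other,
# because both bindings point to the same underlying resource.
server.proppatch("/a/b/foo.html", "author", "ejw")
print(server.bindings["/a/b/bar.html"].properties["author"])  # -> ejw
```

Note that in this model a binding to a collection is just another table entry for the collection itself, which matches the point above: no shadow resources are created for the members.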

Jules (?) worried about whether this was specifying the implementation; Geoff clarified that the implementation discussion was only to ensure that a compliant implementation is possible, and that any other way of implementing it consistent with the protocol is valid.

Geoff explained how a collection binding requires that new resources in the source collection also become available under the link collection.

Jules asked whether the protocol violated the reversibility of the bindings, and Geoff does not think this has been violated.

Geoff discussed circular bindings:

BIND /x/ to /x/circle-x/

This results in an infinite set of mappings -- /x/circle-x/, /x/circle-x/circle-x/, and so on -- all naming the same collection.

We strongly distinguish bindings, which connect a name and a resource, from mappings, which connect some legal name to a resource. The creation of one new binding can induce an infinite number of mappings, i.e. names that are now linked to a resource. Deleting a binding can remove an infinite number of mappings.

Because of this, a circularity check is required. PROPFIND depth infinity includes a circularity check that only needs to be performed by servers that support bindings. A new response code is returned rather than a 500 server error.
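A sketch of the kind of circularity check being discussed: a depth-infinity traversal that tracks which collections are already on the current path, and reports a loop instead of recursing forever. The data layout and the exception are invented here for illustration; per the minutes, a real server would return a distinct response code rather than a generic 500.

```python
# Sketch (not any server's actual code): detecting a binding loop during a
# depth-infinity traversal by tracking collection identity along the path.

def traverse(collection, members, on_path=None):
    """Yield member names; stop with an error if a collection reappears
    on its own path.

    `members` maps a collection id to a list of (name, child_id,
    is_collection) tuples -- a stand-in for a server's binding table.
    """
    on_path = on_path or set()
    if collection in on_path:
        # Binding loop: report it instead of recursing forever.
        raise RuntimeError(f"binding loop detected at {collection}")
    on_path = on_path | {collection}
    for name, child, is_collection in members.get(collection, []):
        yield name
        if is_collection:
            yield from traverse(child, members, on_path)

# /x/ contains an ordinary file and a binding "circle-x" back to /x/ itself.
members = {"x": [("readme.txt", "r1", False), ("circle-x", "x", True)]}
try:
    list(traverse("x", members))
except RuntimeError as e:
    print(e)  # -> binding loop detected at x
```

The cost trade-off mentioned later in the minutes is visible here: this check is paid only on deep traversals, whereas checking at BIND time would require a traversal on every binding creation.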

Yaron pointed out that servers may just ignore this requirement and return a 500 anyway, treating the two responses as roughly equivalent.

Nick Shelness: I have a hunch this breaks the model where resources know what they belong to. You might have an infinite number of memberships. Geoff thinks this is OK though. Nick will think more about this.

Question: Why do we even allow circularity?

Geoff: It was argued that it was tougher to prevent circularity than to deal with it, but the more important argument is that it should be possible to classify a collection as part of a set which contains itself -- there were contains-itself relationships that people wanted to model. An example from "include" files: a collection which contained all the files included might include itself; these might be excluded by some kind of #ifdef behaviour. Everybody on the design team had their own favourite example of why circularity should be allowed (otherwise it would be faked in some way more painful than asking the server to do the circularity check).

Yaron: We've created a situation where any PROPFIND depth infinity request will generate an error if there is a circularity.

Geoff: We thought of some custom stuff to tell the server to "ignore circularities", but figured it's most useful for clients to be able to deal with the circularity error by controlling depth more carefully in requests.

Yaron: These relationships have a kind of poisoning effect. E.g., depth infinity DASL requests will be the rule, and these will now result in errors because of a cycle somewhere, and that seems problematic. I would like to see some discussion of what circularities would mean for other areas of DAV, like searching, versioning, etc.

Geoff: Yes, we wanted to open this up for discussion. If anybody else requires this feature, add your vote.

Question: Note that circular references might be much more complex, involving many levels before the circularity is discoverable. Detecting this when the binding is created is a hard problem: a complete tree traversal would be required every time a binding is created. It might make sense, when doing these traversals, to tell the server not to traverse links, just as in Unix.

Geoff: This was one of the reasons for removing support for direct bindings. There is no difference between the original URL of a resource and the new bound URL; they are both bindings to the actual resource. The result is that instead of paying on deep traversals, you pay on every binding. Requiring the server to do this check on every creation of a binding would actually be a more serious implementation cost than doing it only on PROPFIND depth infinity.

Lisa Lippert: The price for not checking when the binding is created could be more than we can know now, covering more than just PROPFIND depth infinity.

Geoff: Yes.

Yaron: MOVE and COPY are like this.

Geoff: MOVE is very like BIND, it just creates a new link between a name and a resource. If anybody else can identify the methods which concern them...

Yaron: That's why I'm pounding on the model. I think we just have to say we're running with scissors with this feature.

John Stracke: But since a MOVE is a COPY followed by a DELETE...

Yaron: NO. In the WebDAV spec, MOVE is not defined as a COPY followed by a DELETE. We said that a MOVE is logically equivalent to an atomically performed (with fixup) COPY then DELETE. It was an attempt to inherit certain behaviours that applied to COPY, without specifying implementation. We knew that certain servers would end up deleting a resource (i.e. in the case of a MOVE from one server to another). We needed a way to say that if a MOVE implementation did involve a DELETE, it would operate in a certain way to comply with requirements from the military.

Geoff: I need to explain how MOVE works with respect to bindings. A MOVE is, in the case of a binding, like a BIND followed by a DELETE. The difference is in all the other bindings to the thing that is being moved: the other bindings (other than the ones being moved) are unchanged; they continue to have the same name and point to the same resource. This is important, because the result does not involve creating a brand new copy of the resource, which is what a COPY then DELETE would cause.
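Geoff's distinction can be illustrated with a small sketch: modeling resources as shared objects shows why MOVE (rebinding a name) leaves other bindings pointing at the same resource, while COPY followed by DELETE produces a brand-new resource. The URIs and the dict-based model below are illustrative assumptions, not taken from the spec.

```python
# Sketch of the MOVE-versus-COPY+DELETE distinction under bindings.
# A resource is modeled as a plain dict; bindings map URIs to resources.

bindings = {}                       # URI -> resource

resource = {"body": "hello"}
bindings["/a/doc"] = resource
bindings["/a/alias"] = resource     # a second binding to the same resource

def move(src, dst):
    # MOVE as BIND + DELETE of the old name: rebind, same object.
    bindings[dst] = bindings.pop(src)

def copy_then_delete(src, dst):
    # COPY then DELETE: a fresh copy, i.e. a brand-new resource.
    bindings[dst] = dict(bindings[src])
    del bindings[src]

move("/a/doc", "/b/doc")
# The other binding still names the *same* resource:
print(bindings["/b/doc"] is bindings["/a/alias"])   # -> True

copy_then_delete("/b/doc", "/c/doc")
# After COPY+DELETE the alias no longer shares identity with the new URI:
print(bindings["/c/doc"] is bindings["/a/alias"])   # -> False
```

This is the point made above: under MOVE, bindings other than the one being moved keep their names and keep pointing at the same resource.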

Summary: the core of bindings is pretty simple, but it has some interesting implications. Modulo a few problems, the WebDAV base has been a good one; it's a very effective foundation for both of the design teams I'm on.


Question: Do you feel the ordering sections of advanced collections are complete?

Geoff: We (the authors, at least) only agree that clients should be able to tell the server that this should be after that. We don't know what server-maintained orderings are or mean. Client-defined orderings are less controversial.

Question: It sounds like there are actually three independent specifications in advanced collections, at different levels of completion...

Geoff: Yes, the status of ordering is that not many cared, and the few who did only agree on client-side ordering. It would be a shame to hold up the rest of the protocol while we're resolving that issue. There's less administrative need to separate the redirect references from the binding functionality; on the other hand their connection is gratuitous.

Keith: Just put redirect references in a separate document.

Larry: Historically, redirect references were a response to requirements for collection-based functionality.

Geoff: I am happy to follow Keith's direction, so if anybody else has a problem, take it up in mail. Are there other issues with respect to these specifications -- is it OK for them to fall under the "finishing up WebDAV" WG?

Keith: Is it close to being done?

Geoff: Yes

Yaron: No

Keith: Take 3 months to finish, then decide what to do, but don't go under the assumption that the current WebDAV WG will continue to exist to do this. I bet you could get the bindings done in under 3 months. Do them in the order of how you think you can get them done.

Larry: Just split up the document and last-call the pieces.

Geoff: Whichever are done in 3 months fall under this WG...

Larry: It's not that there is less consensus for ordering, there are just fewer people interested in seeing this functionality. That doesn't mean it will drag out any longer than bindings.

Geoff: Any other issues?

Larry: There are two elements of functionality which I think need to be worked out before we can say we've accomplished a protocol that can be used to do distributed authoring:
- variants
- compound documents

This doesn't meet the charter until we've resolved those.

Keith: A lot of people have been chewing over those for decades. I'm not sure you can do anything.

Larry: I'm interested in pursuing protocol elements that would further this functionality, whether within this WG or elsewhere. This could be done in DMA.

Yaron: I would much rather this happen in IETF. We would have to do a lot of work before the work could continue in DMA with the same kind of openness.

Geoff: One could use the same gating function for a subset of the versioning work that is of sufficient maturity that it could be closed off in 3 months.

Keith: No.