2.2.2 Resource Capabilities Discovery (rescap) BOF

Current Meeting Report

Resource Capabilities (RESCAP) BOF/WG Minutes

46th IETF, 9-Nov-1999, 15:45-18:00
Reported by Graham Klyne.


Introduction and charter review

Review of requirements
- draft-beck-rescap-req-02.txt

Discussion and resolution of disposition of two drafts
- draft-hoffman-rescap-protocol-01.txt
- draft-kempf-slp-rescap-00.txt

Other document
- draft-hoffman-rescap-mua-01.txt


Just before the meeting, James Kempf withdrew his proposal from consideration by the working group, so there is currently only one resolution protocol proposal before the group.

Charter Review

The Charter has been submitted to Keith Moore, Applications Area Director. We expect approval prior to the next IETF meeting.

The Charter was distributed on the elist (archives available at cs.utk.edu) and many people had a chance to review it. Nonetheless, the key points of the scope of our work were reviewed.

In particular, many things on the Internet are identified by "resource identifiers". Our focus is to associate attributes with DNS-based URLs to define a protocol that:
- is highly scalable
- has low overhead (client and server)
- accepts DNS-based URIs as inputs
- has inheritance to ease administration of large numbers (millions) of entries
- uses DNS to locate the RESCAP server (probably with SRV records)
- has a means to register and extend attributes
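As a sketch of the server-location point above, a client could derive a DNS SRV owner name from the resource identifier and then look that name up. The "_rescap._tcp" service label and the helper name are illustrative assumptions; no such service name was registered at the time of these minutes.

```python
from urllib.parse import urlparse

def rescap_srv_name(uri: str) -> str:
    """Build the DNS SRV owner name that would locate a RESCAP server
    for a DNS-based URI. The "_rescap._tcp" label is hypothetical."""
    parsed = urlparse(uri)
    if parsed.scheme == "mailto":
        # mailto:user@example.com -- the domain follows the last "@"
        domain = parsed.path.rsplit("@", 1)[-1]
    else:
        domain = parsed.netloc or ""
    if not domain:
        raise ValueError("URI has no DNS-based authority: %r" % uri)
    return "_rescap._tcp." + domain

print(rescap_srv_name("mailto:alice@example.com"))  # _rescap._tcp.example.com
```

The resulting name would then be resolved via an ordinary SRV query to find the host and port of the RESCAP server.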

The definition of attributes is currently out of scope, except that the specification will include a "use-case" to show how the framework is intended to be used. Our use-case will be resolving attributes for an email client. However, we want RESCAP to be a generalized service, so we encourage other working groups to review whether RESCAP is appropriate for their use. Some wordsmithing of the charter may be needed around this point. Keith Moore (the designated AD) agreed to complete this work item.

We need to collaborate with ENUM with a view to sharing protocol elements. The same was suggested for IMPP, but the relationship was less clear. The result was to accept that dealing with the specific issues of other protocols is generally out of scope.

The following question was deferred to the requirements document: "What sort of attribute update rate should be supported (cf. possible IMPP interactions)?"

There was also consensus that locating a RESCAP server should be separate from the RESCAP protocol.

Specific milestones are indicated in the Charter. Our goal is to complete all work, including the administrative update protocol, by December 2000. (This is, unsurprisingly, an aggressive schedule.)

Review of requirements

[ Chairs' Note: The agenda had proposed only 15 minutes of discussion of the requirements, believing that the existing document was stable owing to the lack of discussion on the elist. However, those present at the meeting felt otherwise and the remainder of our meeting time was dedicated to discussing the requirements.

What follows is a list of the questions and observations raised during the discussion. The editor of the requirements document is responsible for responding to all of these issues in the next revision of the document. ]


Progress on the resolution protocol should not be held hostage to progress on administrative protocol issues.

Do we need to say something about the properties of the system as a whole, and derive protocol requirements from these? We could say something about the underlying assumptions, but trying to be too detailed in the requirements gets us too deep into protocol-specific territory.

Suggested that a definition of the scope of the term "capability" would be useful.

For information that falls outside the assumed properties for RESCAP information, a referral to some other information service could be provided.

Multiple protocols? That is not our goal, but referral to other protocols is a possibility.

Should default values be distinguished as such? Where did a provided value come from? What is its associated authority? This needs to be discussed on the mailing list. Metadata about capabilities?



Do we need to add some words about update frequency requirements? This may just be an administration protocol issue.

What about attribute lifetime? Is the service intended to support volatile information? This is in part a property of the system as a whole. But what about lifetime granularity?

The general assumption seems to be that RESCAP information is relatively stable. Even after extensive experience with DNS there is not a clear consensus about the appropriateness of serving rapidly changing information from DNS.

A distinction between "resolution" and "signalling" -- signalling deals with rapidly changing data, and is understood to be a "rathole".

Data that changes on the order of minutes rather than seconds seems to be reasonably doable. A maximum update rate of 10 seconds was suggested.

If the primary goal is email recipients, an update frequency of minutes/hours/days would probably be OK.

A lesson from history: capabilities advertised in one place, but not kept "adequately" up to date, force clients to go figure things out for themselves (cf. the WKS record). It is suggested that the working group review the history of the WKS record.

Are we talking about update frequency, or resynchronization time? Is it OK to lose updates if there are many frequent updates?

It is claimed that rapid updates occur in an email environment.

Chair proposal: express expiration time in seconds, and state that we plan to deal with relatively stable data, without stating exactly what is meant by "relatively stable".
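The chair's proposal above (expiration expressed in seconds, relatively stable data) might look like this on the client side. The cache layout and the injectable clock are illustrative assumptions, not anything the group has specified:

```python
import time

class RescapCache:
    """Minimal client-side cache honouring a per-entry expiration
    time expressed in seconds, as proposed by the chair. Sketch only."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._entries = {}          # uri -> (value, expires_at)

    def put(self, uri, value, ttl_seconds):
        self._entries[uri] = (value, self._clock() + ttl_seconds)

    def get(self, uri):
        entry = self._entries.get(uri)
        if entry is None:
            return None
        value, expires_at = entry
        if self._clock() >= expires_at:
            # Stale: drop it so the caller performs a fresh resolution.
            del self._entries[uri]
            return None
        return value
```

A client resolving email-recipient capabilities would consult the cache first and query the RESCAP server only on a miss or after expiry.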

Reply data

How much data may be returned? Responses may be short or long, text or binary. Attempt to use UDP when possible, with fallback to TCP.

Proposed to be able to request more than one item at a time. OK.

BUT: what about a single request to request data about multiple resources? Does this carry a presumption of TCP usage? One request for multiple resources might result in multiple distinct responses (multiple UDP packets).

Scalability issue: we may need to be able to handle responses from servers other than the one to which the request was sent -- to allow distribution of query handling.

Concern about complexities of dealing with multiple TCP sessions.

Requirements are: (a) support for "cheap queries"; (b) support for queries that return large amounts of data.

Suggestion to support request/response tagging for matching purposes.
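A minimal sketch of the request/response tagging suggestion, assuming a simple dict-based message layout (the field names "tag" and "body" are hypothetical, not part of any proposal):

```python
import itertools

class RequestTagger:
    """Attach a monotonically increasing tag to each outgoing request
    and match replies back to their requests, as suggested in the
    meeting. The wire format here is purely illustrative."""

    def __init__(self):
        self._next_tag = itertools.count(1)
        self._pending = {}          # tag -> original request

    def send(self, request):
        tag = next(self._next_tag)
        self._pending[tag] = request
        return {"tag": tag, "body": request}

    def receive(self, response):
        request = self._pending.pop(response["tag"], None)
        if request is None:
            raise KeyError("response tag %r matches no pending request"
                           % response["tag"])
        return request, response["body"]
```

Tagging like this also covers the earlier point that one request for multiple resources might produce multiple distinct responses: each response carries the tag of the request it answers.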

CONNEG issue: how to return a complicated statement of capabilities where the user does not necessarily care about the detail, just an identifier. This is useful if backed by an auxiliary protocol to resolve (expand) details of the composite. The "auxiliary protocol" could be another RESCAP resolution.

Use of UDP has implementation complexities if security data is too large for a UDP packet (say 512 bytes).

Question: is there any work done on analysis of when it is cheaper to use TCP from the start, rather than trying to use UDP?

TCP/UDP argument is an implementation issue, not a requirement.
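Even though the TCP/UDP choice is an implementation issue, the "UDP when possible, fall back to TCP" behaviour discussed above can be sketched as a simple size heuristic. The 512-byte bound echoes the figure mentioned earlier; the heuristic itself is an assumption, not a requirement:

```python
MAX_UDP_PAYLOAD = 512   # conservative bound raised in the discussion

def choose_transport(encoded_request: bytes,
                     expected_response_size: int) -> str:
    """Pick UDP for cheap queries, TCP otherwise. A sketch of the
    'UDP when possible, fall back to TCP' idea; the size heuristic
    is an illustrative assumption."""
    if (len(encoded_request) <= MAX_UDP_PAYLOAD
            and expected_response_size <= MAX_UDP_PAYLOAD):
        return "udp"
    return "tcp"
```

An implementation might also start over UDP and retry on TCP when a response arrives truncated, which is where the security-data size concern above comes in.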



Granularity of request
- Nothing specific not already covered.


- Referral to another server for certificates?

- Distinction here between trying to duplicate transport services and signed/encrypted attribute data which may be part of a response.
- Transport level is out of scope, other than deferral to TLS, etc.
- Need to identify threats and indicate how they are handled?
- There is a clear desire for signatures/encryption that can apply to just part of a response. In the limiting case, they may apply to an entire response -- but this is a special case, not the only case.
- There may be a requirement to restrict the dissemination of attributes. But this requirement is regarded as out of scope for the working group. There is some charter language that should be copied into the requirements document (the paragraph about the purpose being public information rather than private).
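The desire for signatures that apply to just part of a response could look like the following per-attribute scheme. HMAC-SHA-256 and this data layout are illustrative assumptions; a real design would more likely use public-key signatures:

```python
import hashlib
import hmac

def sign_attributes(attrs: dict, key: bytes) -> dict:
    """Sign each attribute value separately, so protection can apply
    to part of a response rather than only the whole. Sketch only."""
    signed = {}
    for name, value in attrs.items():
        mac = hmac.new(key, ("%s=%s" % (name, value)).encode(),
                       hashlib.sha256)
        signed[name] = {"value": value, "sig": mac.hexdigest()}
    return signed

def verify_attribute(name: str, entry: dict, key: bytes) -> bool:
    """Check one signed attribute independently of the others."""
    mac = hmac.new(key, ("%s=%s" % (name, entry["value"])).encode(),
                   hashlib.sha256)
    return hmac.compare_digest(mac.hexdigest(), entry["sig"])
```

Signing the whole response then falls out as the special case of signing every attribute, matching the "limiting case" noted above.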

Server location
- Don't want requirements to be too specific about DNS record usage.

- Ranking is an attribute that is not a capability.
- There is some concern about the handling of ranking. This could be information embedded in a capability response, rather than part of the generic protocol. Should this be removed from the protocol requirements? Can individual parts of a capability be signed, or updated, or returned separately?
- Negotiation within RESCAP is NOT a requirement.
- There may be some value in introducing an "additional information" concept as a performance optimization (sort of like DNS, but simpler so it's less prone to bugs).
- Requirement language should be tightened so that the response is sensibly related to the original request, rather than just whatever the server wants to send. It must be clear to the requester if and where the response contains the requested information.
- Proposal: remove preference information from the protocol requirements. The hum of the room indicated consent.
- Be clear that the protocol requests capabilities by name, and not by reference to content or content types of the capabilities. It's important to be clear about this when the server can return values which were not explicitly requested.
- Examples of RESCAP are provided in the MUA draft.

- Must be able to implement client or server in low cost device.


Access control
- Updates should be applied by authorized parties.

- Must be able to define a default that applies to a number of entries. Default values may be changed; retroactive applicability is not necessarily a goal -- discuss on the elist.
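The default/inheritance behaviour above can be sketched as a walk over a chain of scopes, from the entry itself out to shared defaults. The scope chain and the attribute names used here are illustrative assumptions:

```python
def resolve_attribute(name, *scopes):
    """Return the first value found, walking from the most specific
    scope (the entry itself) out to shared defaults. This models the
    inheritance requirement: one default can cover millions of
    entries, while any entry may override it. Sketch only."""
    for scope in scopes:
        if name in scope:
            return scope[name]
    return None

# Hypothetical data: one entry overrides a domain-wide default.
entry = {"max-message-size": "10M"}
domain_defaults = {"supports-mdn": "yes", "max-message-size": "5M"}

print(resolve_attribute("max-message-size", entry, domain_defaults))  # 10M
print(resolve_attribute("supports-mdn", entry, domain_defaults))      # yes
```

Changing a default is then a single write to the shared scope; whether it applies "retroactively" is just a question of when clients next resolve, which is the point deferred to the elist.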

- Should there be a scalability requirement?
- Should there be a replication requirement? Is this really a scalability requirement?
- Or, should there be a fault tolerant requirement?
- Is there really a need for an administration _protocol_? A detailed administrative model may be sufficient, but having done that one might as well define a protocol. Note that an existing protocol may be used [indeed, preferred?].
- Two kinds of administrative protocol? (a) to get information into the system, (b) to do replication.
- Requirement for atomicity (transactions) in the administrative protocol? This is an important point for discussion. Two dimensions of atomicity: all-or-nothing updates; consistent client visibility of a set of updates. Impact on replication?
- Cheap updates are not necessarily a requirement.
- Discussion of updates applied directly on the RESCAP server, versus updates to information accessed from elsewhere, such as LDAP. Achieving atomicity in the latter case could be problematic.
- Discovery of administration servers: same mechanism to be used as that for discovering resolution server? To be covered in the resolution server discovery document.
- Scalability requirement needs to be an access protocol/administrative model issue. Make statement about update frequency/propagation latency? Also, cover replication issues.
- Need to discuss atomicity of replication (view consistency during replication).
- Many of the above points can be folded into a larger discussion of reliability and robustness. Look to other protocols that have addressed these issues, and pick a model.
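The all-or-nothing flavour of atomicity discussed above can be sketched as "apply to a private copy, then swap". The flat per-entry data model and the delete-via-None convention are illustrative assumptions:

```python
import copy

def apply_updates_atomically(store: dict, updates: dict) -> dict:
    """All-or-nothing update semantics: apply every update to a
    private copy and install the copy only if all succeed, so readers
    never observe a half-applied set. A value of None deletes an
    attribute. Data model is an illustrative assumption."""
    working = copy.deepcopy(store)
    for name, value in updates.items():
        if value is None:
            if name not in working:
                raise KeyError("cannot delete missing attribute %r" % name)
            del working[name]
        else:
            working[name] = value
    store.clear()
    store.update(working)   # swap in the fully-applied set
    return store
```

The second dimension raised above, consistent client visibility across replicas, is the harder problem and is not addressed by this single-store sketch.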

Discussion and resolution of disposition of two drafts
(Discussion deferred due to lack of time.)

Other document
(discussion deferred due to lack of time.)

Wrapping up -- next steps
Requirements completion goal: January 2000

The next revision of the requirements draft is targeted for the Monday after Thanksgiving.

