[vwrap] Comments on http://tools.ietf.org/html/draft-ietf-vwrap-intro-00

Cristina Videira Lopes <lopes@ics.uci.edu> Tue, 07 September 2010 15:56 UTC


This is my first time participating in an IETF WG, so apologies in 
advance for any missteps in the process. I believe it's ok to send 
comments at this point, but I understand that the work has been going on 
for about a year without me being involved, so my comments may be out of 
order. This is one year's worth of comments, so it's a long and heavy 
email :-)
I leave it to the Chairs of this WG to dismiss them if they see fit, and 
I'll accept it if that happens.

I'm still not entirely sure the world is ready for "standards" for 
virtual world interoperability, but given that this working group 
exists, I hope my comments help strengthen the technical aspects of the 
work that has been going on here, so that, in the end, this document 
will appeal to others who have absolutely nothing to do with the Linden 
Lab family of virtual worlds -- so as to make good on the word 
"interoperability."

The document seems to be establishing the underlying assumptions and 
scope for the protocols recommended by this group. That's great.

The gist of my comments is that the particular assumptions established 
by the document do not accurately reflect the underlying assumptions and 
goals of the OpenSimulator platform in important ways, and I suspect 
they also don't reflect many other VW platforms. I don't know if that is 
a good thing or a bad thing or an irrelevant thing for the purposes of 
this working group; I'm just making this observation. Assuming that the 
Chairs accept my commentary for discussion, I delegate to the group the 
judgment on the consequences of this observation.

Let me expand on it by going through the document, and pointing out the 
differences in assumptions and goals, as well as the parts of the text 
that aren't clear to me, the parts that I think are good, and some 
general technical commentary.

----- Comment 1 -----
Section 1: "...This document introduces the
   Virtual World Region Agent Protocol (VWRAP) suite.  This protocol
   suite is intended to carry information about a virtual world: its
   shape, its residents and objects existing within it.  VWRAP's goal is
   to define an extensible set of messages for carrying state and state
   change information between hosts participating in the simulation of
   the virtual world."

This opening statement introduces the concept of a single virtual world; 
reading further, it seems to mean that the ecosystem of VWRAP is, 
indeed, treated here as one single system involving a multitude of 
organizations. Section 2.1, par. 5:
"The VWRAP suite assumes network hosts, likely operated by distinct
   organizations will collaborate to simulate the virtual world."

The OpenSimulator platform has been designed with a plurality of virtual 
worlds in mind, not just one. This plurality is a fundamental assumption 
of our platform, which makes aggressive use of plugins for two important 
aspects of virtual worlds: (1) the interaction between the scene server 
and the scene renderer (the client); (2) the interaction between the 
scene service and the resource services.
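
To make these two plug points concrete, here is a minimal sketch 
(Python pseudocode with hypothetical names; OpenSimulator itself is 
written in C# and its actual plugin interfaces look different):

    from abc import ABC, abstractmethod

    # Plug point 1: how the scene server talks to the scene renderer (client).
    class ClientProtocolModule(ABC):
        @abstractmethod
        def send_scene_update(self, client_id: str, update: dict) -> None: ...

    # Plug point 2: how the scene service reaches the resource services
    # (assets, inventory, and so on).
    class ResourceServicesConnector(ABC):
        @abstractmethod
        def get_asset(self, asset_id: str) -> bytes: ...

        @abstractmethod
        def get_inventory(self, user_id: str) -> dict: ...

    # Each world chooses its own implementations of both interfaces;
    # nothing here dictates a wire format or a deployment topology.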

What the use of these plugins means is that we strongly believe that 
each virtual world is an independent system that should be designed and 
implemented in the way that best fits that world's goals. We already act 
on this belief for OpenSimulator-based worlds by providing those 
important aspects as plugins, and we believe it equally for virtual 
worlds that aren't based on OpenSimulator software. The decisions made 
by, say, Blue Mars, with respect to how to send scenes to client 
software and how to manage their resources, are to be respected and 
treated independently of the decisions made by, say, ReactionGrid. But 
that doesn't mean that worlds operated by these organizations can't 
interoperate. They can, given a minimum interoperability protocol. In 
summary, and this will recur throughout my commentary:

OpenSimulator does not assume the existence of one single virtual world 
involving a multitude of organizations; it assumes the existence of 
multiple virtual worlds, each one under the authority of one single 
organization -- which, in turn, may make its own internal decisions 
about sharing ownership and resources of parts of that world with other 
organizations, but that is an internal decision that is irrelevant for 
purposes of virtual world interoperability.
For some background on the virtual world model assumed by OpenSimulator, see
http://opensimulator.org/wiki/Virtual_World_Model

----- Comment 2 -----
Section 2.1, the list of 3 characteristics:

"1. The virtual world exists independent of the participating
   clients." ... "VWRAP assumes the state virtual world is "always on"... "

OpenSimulator does not make these assumptions. While OpenSimulator 
worlds are usually run on servers to which clients connect over the 
network, it is possible to take the OpenSimulator framework and merge it 
with a renderer component, producing a virtual world that is a client 
and, if that world is to support multiple users over a network, also a 
server.

Also, OpenSimulator does not assume that the virtual world is "always 
on" -- quite the contrary. Many existing OpenSimulator-based virtual 
worlds seem to be run on people's personal computers in their home 
networks, which are often turned off.

"2. Avatars have a single, unique presence in the virtual world."

OpenSimulator does not make this assumption. It is possible to have 
multiple presences (sessions) for the same user in the same virtual 
world. This supports use cases where users want to be in two different 
places in the world at the same time, e.g. attending two different 
events. The decision on whether to allow this belongs entirely to the 
virtual world operator.

----- Comment 3 -----
Section 2.2, architectural patterns:

"1. Systems implementing virtual world services must be distributed."
...
"But however large (or small) a virtual world deployment is, or
       how many distinct organizations contribute to its operation,
       software implementing virtual world services MUST assume
       resources required to perform its function are distributed
       amongst multiple hosts."

OpenSimulator does not impose this. In fact, the most popular 
configuration of OpenSimulator-based worlds is a "standalone" 
configuration where the scene service and all the resource services 
of that world execute in a single process on a single machine. In a 
standalone VW system, none of the services (e.g. assets, inventory, 
etc.) are "distributed." Furthermore, as explained above, the 
OpenSimulator framework also supports the existence of single-process 
virtual worlds that include the renderer itself.

The only two requirements imposed by OpenSimulator, and indeed by any 
software system, are: (1) in cases where these worlds wish to serve 
multiple users over the network, then network endpoints must exist that 
serve the scene to the clients used by those users; (2) in cases where 
these worlds wish to interoperate, then network endpoints must exist 
that serve certain resources from those worlds to other worlds.
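
A hedged sketch of what those two requirements amount to, with 
hypothetical paths and payloads (this is not an actual OpenSimulator or 
VWRAP interface):

    # A single world process that may, or may not, expose two kinds of
    # network endpoints: one serving the scene to its own clients, one
    # serving resources to other worlds.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SERVE_CLIENTS = True       # requirement (1): only if there are remote users
    SERVE_OTHER_WORLDS = True  # requirement (2): only if the world interoperates

    class WorldHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if SERVE_CLIENTS and self.path == "/scene":
                # World-specific scene representation; the format is this
                # world's own business.
                body = json.dumps({"objects": []}).encode()
            elif SERVE_OTHER_WORLDS and self.path.startswith("/interop/assets/"):
                asset_id = self.path.rsplit("/", 1)[-1]
                body = json.dumps({"asset_id": asset_id, "data": None}).encode()
            else:
                self.send_error(404)
                return
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 9000), WorldHandler).serve_forever()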

"2. Services supporting collaboration are hosted on 'central'
   systems."

No such assumption in OpenSimulator. How virtual world operators decide 
to design and implement their collaborative features is entirely up to 
them.

For example, in OpenSimulator the default implementation of Instant 
Messaging is peer-to-peer, with only presence being looked up centrally; 
there is no central IM server. This feature is implemented as a plugin, 
so other implementations (e.g. centralized) are possible.
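
A minimal sketch of that default behavior, with hypothetical endpoint 
names and payloads (an illustration of the idea, not the actual 
OpenSimulator wire protocol):

    # Peer-to-peer instant messaging: only presence is looked up centrally;
    # the message itself goes straight to the simulator currently hosting
    # the recipient. There is no central IM server.
    import json
    import urllib.request

    PRESENCE_SERVICE = "http://presence.example-grid.org/lookup"   # hypothetical

    def send_im(sender_id: str, recipient_id: str, text: str) -> None:
        # 1. Ask the (central) presence service where the recipient is.
        with urllib.request.urlopen(f"{PRESENCE_SERVICE}?user={recipient_id}") as resp:
            presence = json.load(resp)
        simulator_url = presence["simulator_url"]   # e.g. http://sim7.example-grid.org

        # 2. Deliver the IM directly to that simulator.
        payload = json.dumps({"from": sender_id,
                              "to": recipient_id,
                              "text": text}).encode()
        req = urllib.request.Request(f"{simulator_url}/im", data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)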

Maybe this architectural pattern pertains to inter-world collaboration? 
If that is the case, then this is clearly not a desired goal, as most 
OpenSimulator virtual worlds want to operate their own collaboration 
sub-systems, and do not wish to depend on third parties, unless there is 
no alternative. As an example, take the Groups service, which is 
implemented outside of the OpenSimulator core distribution. While there 
is an instance of that Groups service run by an individual (the original 
developer) that can support groups for many virtual worlds, most grid 
operators choose to run their own instance.

"3. Virtual world services default to being 'open'."
...
"In other words, requiring
       two services (e.g. - physics simulation and asset storage) be
       managed by the same organization's servers is an issue of local
       policy, not of protocol."

I don't understand what 'open' means. Does it mean that the services 
default to being available on the Internet? The quoted sentence doesn't 
seem to relate to the headline, so I'm not sure what this architectural 
pattern is saying; I think it needs to find its message.

----- Comment 4 -----
Section 3.1:

This is the best section of the document, but it needs a lot more depth. 
I would go so far as to suggest that protocol flexibility (and not just 
data presentation flexibility) may very well be the single most 
important contribution that an interoperability standard in this area 
might make.

Indeed, this document fails to address one critical question: what 
assumptions do we make wrt how the server serves scenes to the clients? 
Do we assume that this is a fixed part of interoperability (i.e. 
there's only one protocol for doing this in the entire collection of 
interoperable virtual worlds), or do we assume that each virtual world 
does it in whichever way it wants?

To illustrate the issue, let me point out the spectrum of possibilities.

On one extreme we have what is currently emerging on the Web: on-line, 
multi-user 3D scene servers that simply send their "viewer code" to the 
web browsers in JavaScript. You go from one game to another, and you get 
radically different pieces of JavaScript code that get the data from the 
servers and render it; these JavaScript "viewers" are black boxes that 
have nothing to do with each other.

On the other extreme of this spectrum, we have systems like Second Life 
and World of Warcraft, with customized, non-programmable viewers that do 
things in exactly one way, tightly coupling the protocol with the fixed 
game-specific UI.

Somewhere in the middle, we have Flash-based worlds (and others) that 
run on the general-purpose Web browser and that are identified by 
certain MIME types (e.g. application/x-shockwave-flash).

So where does this group stand wrt this critical issue?

As I said above, in principle, OpenSimulator assumes that each virtual 
world has its own client-server protocol, which is in line with what is 
emerging on the Web. Unfortunately we still don't have viewers capable 
of doing this well: Web browsers still can't render the kinds of content 
we need, and existing VW clients aren't general enough. But that should 
not be an impediment to accepting the underlying principle of letting VW 
systems do it their way, and having that be a programmable component of 
a possible interoperability framework.
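
As a purely illustrative sketch of what such a programmable component 
could look like (the discovery URL, content types and renderer names 
below are all hypothetical):

    # A general-purpose client that asks a world how its scenes are served
    # and then dispatches to whatever renderer module claims that content
    # type, much like browsers dispatch on MIME types. Nothing here is a
    # proposed VWRAP mechanism.
    import json
    import urllib.request

    RENDERERS = {
        "application/x-example-udp-scene": "udp_renderer",       # hypothetical
        "application/x-shockwave-flash": "flash_renderer",
        "text/javascript": "js_viewer_sandbox",
    }

    def connect(world_url: str) -> str:
        # Hypothetical discovery document describing this world's own
        # client-server protocol.
        with urllib.request.urlopen(f"{world_url}/.well-known/scene-protocol") as resp:
            desc = json.load(resp)
        content_type = desc["content_type"]
        renderer = RENDERERS.get(content_type)
        if renderer is None:
            raise RuntimeError(f"no renderer available for {content_type}")
        return renderer   # a real client would load and hand control to this module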

With this, we set up an interoperability framework that accommodates the 
variety of protocols that already exist, and that their implementers are 
unlikely to let go of. We also set the stage for accommodating the 
variety of protocols that don't exist yet but that are bound to appear 
as the Web starts experimenting with 3D immersion.

----- Comment 5 -----
Section 3.2:

This section expands more on the basic VWRAP premise of "the single 
virtual world" composed of multiple services under the authority of 
multiple organizations. As explained before, this is very different from 
the VW model assumed by OpenSimulator, where a myriad of virtual worlds, 
each under one authority, are assumed to exist, and where the internals 
of those virtual worlds are out of scope.

----- Comment 6 -----
Section 3.3:

This section is contrary to the design philosophy of OpenSimulator, 
where very few conditions are imposed on the internal implementation of 
virtual worlds.

----- Comment 7 -----
Section 4.1:

This section seems to be a summary of the document
http://tools.ietf.org/html/draft-ietf-vwrap-authentication-00

That document seems to describe a straightforward login procedure, 
similar to all login procedures out there that authenticate a user onto 
a service running on a server. It defines specific on-the-wire data 
representations, and a specific protocol for error handling.

I understand that the introduction of the seed CAP, and subsequent 
invocation, is a new thing that most login procedures don't have.
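
For readers coming at this cold, my reading of that draft is roughly 
the flow sketched below; the URLs, field names and JSON serialization 
are illustrative only, since the draft defines its own representations:

    # Rough shape of a capability-based login as I read it: authenticate,
    # receive a "seed capability" URL, then ask that seed capability for
    # further capability URLs by name. All URLs and field names here are
    # hypothetical.
    import json
    import urllib.request

    def post_json(url: str, payload: dict) -> dict:
        req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # 1. Initial authentication against the world's login service.
    login = post_json("https://world.example.org/login",
                      {"username": "alice", "password": "secret"})
    seed_cap = login["seed_capability"]       # an unguessable URL

    # 2. Ask the seed capability for the capabilities the client wants.
    caps = post_json(seed_cap, {"capabilities": ["teleport", "inventory"]})
    teleport_cap = caps["teleport"]           # yet another unguessable URL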

But I'm at a loss as to why this matters for interoperability purposes. 
Each virtual world should be free to do the initial user authentication 
in whichever way it finds most suitable, and to send whichever 
information it needs to send, in whichever way, to the client.

Maybe CAPs have a special role in inter-world authentication in VWRAP? 
Unfortunately, I read all the available documents, and I couldn't find 
anything that explains inter-world authentication in VWRAP.

----- Comment 8 -----
Section 4.2:

This section introduces concepts that haven't been explained before and 
that aren't explained here either, specifically "agent domain" and 
"region domain". I know what these are, as I've read the AWG documents 
on the Web; but a less informed reader won't.

Many other things in this section are unclear because the referenced 
materials (e.g. the "VWRAP Teleport specification") don't exist yet, so 
perhaps these summaries shouldn't be in this intro document at all. But 
since they are, let me offer some comments.

In 4.2.2: this is unclear, but I'm assuming the section pertains to 
inter-world movements? That should be made clear, because intra-world 
movements should be out of scope for interoperability purposes: the way 
that virtual worlds choose to move agents within themselves, if they do 
so at all, is irrelevant.

I believe there is some confusion here between model and mechanism, but 
maybe I know too much. I'm assuming that the word "capability" in VWRAP 
means "capability URL"; if I'm wrong, and if it means just "capability" 
as given by the Webster dictionary (roughly, authorization), then that 
would be great!
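
For concreteness, by "capability URL" I mean the mechanism sketched 
below (all names hypothetical): an unguessable URL whose possession is 
itself the authorization, as opposed to the dictionary sense of 
"capability":

    # A capability URL: an unguessable URL whose possession is the
    # authorization. The mapping below is illustrative only.
    import secrets

    ACTIVE_CAPS: dict[str, tuple[str, str]] = {}    # token -> (agent_id, operation)

    def grant_capability(base_url: str, agent_id: str, operation: str) -> str:
        token = secrets.token_urlsafe(32)           # effectively unguessable
        ACTIVE_CAPS[token] = (agent_id, operation)
        return f"{base_url}/cap/{token}"

    def invoke(token: str) -> tuple[str, str]:
        # Knowing the URL is the credential; there is no separate ACL check.
        return ACTIVE_CAPS[token]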

Also in 4.2.2 there is an underlying assumption that the client keeps a 
direct connection to the "agent domain". For example: "The client 
signals to the agent domain its desire to move..."
The standard Linden client does not do this. More important than the 
Linden client, though: Web browsers are stateless clients and don't keep 
such a connection either. So if we ever were to have a Web-browser-based 
viewer for a web of virtual worlds, we wouldn't be able to do what is 
suggested in this entire section.

The protocols hinted at in this section are the crux of 
interoperability. I understand this is just the intro document and it's 
not supposed to have all the details; that's fine. But if you go ahead 
with the protocol hinted at here, then it is important to note somewhere 
in this intro document that it assumes stateful clients, and that Web 
browser viewers (other than browser plugins) are ruled out.

As I said in comment #4, I would like to see a better analysis of what 
virtual world interoperability demands from the viewer software. I used 
to think that Web browsers were an evolutionary dead-end, but with the 
introduction of HTML5 and all the exciting work that is going on with 
running verifiably-secure native code in Web browsers, I have changed 
my mind about this. Since the Web browser is the ubiquitous client 
software, it seems like a bad move to exclude it from virtual world 
interoperability, because we're only going to see more 3D immersion, 
not less, in Web browsers.

----- Comment 9 -----
Section 4.3:

This has nothing to do with interoperability and seems to be focusing 
exclusively on how the Linden Lab world works. Statements like "The host
   in the region domain responsible for managing spatial chat applies a
   proximity algorithm to the chat to determine which avatars or objects
   are close enough to hear it."
are too specific to be worth mentioning in this document. Other virtual 
worlds may apply different schemes wrt chat.

----- Comment 10 -----
Section 4.4:

I suggest replacing the ad-hoc terminology "at rest" and "in world" with 
more standard terminology. Assets are stored in persistent storage. 
There's technical terminology for "in world": assets that are 
referenced by a 3D scene. There's also technical terminology for "at 
rest": assets that are not referenced by a 3D scene directly but that 
are referenced by other resources like a user's inventory or scripts.

This section is full of details that should not be here:
(1) How the scene server decides to manage the assets is an internal 
decision of each virtual world. The scene server may hold its own 
assets, it may have its own asset storage server in the same data 
center, or it may use Amazon. It may pre-fetch assets or fetch them 
lazily. It may cache them or not. None of this matters for the purposes 
of interoperability.
(2) How the scene server decides to make the assets available to the 
clients is also an internal decision of each virtual world; in fact, it 
is a central part of the client(viewer)-server protocol, of which there 
should be many. It may send them inline over UDP; it may zip them up 
and send them as an archive; it may tell the client to fetch them from 
another server. It doesn't matter.
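
A minimal sketch of point (1), with hypothetical backends, just to show 
that the interoperability surface can be a single "fetch asset by id" 
operation no matter how the world stores or caches assets internally:

    # The interoperability-facing operation stays the same regardless of
    # how the world stores assets internally. Backend names are hypothetical.
    from abc import ABC, abstractmethod
    import urllib.request

    class AssetBackend(ABC):
        @abstractmethod
        def fetch(self, asset_id: str) -> bytes: ...

    class LocalDiskBackend(AssetBackend):
        def fetch(self, asset_id: str) -> bytes:
            with open(f"/var/assets/{asset_id}", "rb") as f:    # same-host storage
                return f.read()

    class RemoteHttpBackend(AssetBackend):
        def __init__(self, base_url: str) -> None:
            self.base_url = base_url                # e.g. an S3-style asset store
        def fetch(self, asset_id: str) -> bytes:
            with urllib.request.urlopen(f"{self.base_url}/{asset_id}") as resp:
                return resp.read()

    def serve_asset_to_another_world(backend: AssetBackend, asset_id: str) -> bytes:
        # Interoperability only needs this operation to exist; caching,
        # pre-fetching, and choice of storage are the world's own business.
        return backend.fetch(asset_id)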

Interoperability should not impose any conditions whatsoever on how 
these things are done, or the word "interoperability" will start moving 
into the realm of internal virtual world architecture, which is 
something this group probably should not be doing, lest it alienate 
potential parties. Even in the small ecosystem of OpenSimulator, we are 
already seeing a variety of new clients being developed that have 
nothing to do with the Linden Lab client (e.g. Unity3D). That's a 
direction that we very much encourage.

----- Comment 11 -----
Section 4.2.2:

This section seems to imply that host-based trust carried by X.509 / 
PKIX certificates will ensure that the receiving party will honor the 
asset's metadata. These (i.e. certificates and honoring the metadata) 
are two different things. It should be made clear that, just because all 
parties are who they say they are, it doesn't follow that they will 
honor the metadata.
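
To make the distinction concrete: the TLS step in the sketch below 
establishes who the peer is, and nothing in it says anything about what 
the peer will do with, say, a hypothetical "no_copy" flag in asset 
metadata:

    # Identity vs. policy: certificate verification proves who the remote
    # host is; honoring asset metadata is a separate, out-of-band promise.
    import socket
    import ssl

    def connect_verified(host: str, port: int = 443) -> ssl.SSLSocket:
        ctx = ssl.create_default_context()          # verifies the certificate chain
        sock = socket.create_connection((host, port))
        # After the handshake we know *who* the peer is...
        return ctx.wrap_socket(sock, server_hostname=host)   # ...and that the name matches

    # ...but whether the peer enforces metadata such as
    #     {"asset_id": "...", "no_copy": True}     # hypothetical metadata flag
    # is a matter of its local policy and of our trust in its operator,
    # not something the certificate can guarantee.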

Again, I wish I knew more about what people in this group have in mind 
for authentication, but I couldn't find any information.