Meeting notes from ALTO WG meeting, IETF 84, Vancouver, BC, Canada
Tuesday, July 31, 2012 1700-1830

About 100 attendees

This report has been distilled from the detailed meeting notes taken
by Fabio Picconi and Brian Trammell (thanks, guys!).

Chairs: Enrico Marocco and Vijay Gurbani.

Agenda
------

Meeting materials: 
  slides, MP3 (2m:02s, 983 KBytes), Meetecho recording (HTML5) 

Chairs bashed the agenda.  Progress since the Paris IETF: the ALTO
requirements draft is now with the RFC Editor.  The chairs would like
to focus on the ALTO protocol document and send it to the IESG after
the Vancouver meeting.  The document has been through WGLC.  The chairs
asked the WG to focus on the protocol document so that any open issues
can be closed shortly.  Certain extensions (i2aex, CDN, etc.) depend on
the protocol document being stable, so it is important to get it
completed.

ALTO protocol
-------------

Meeting materials: 
  slides, MP3 (6m:42s, 3.21 MBytes), I-D, Meetecho recording (HTML5)

Richard Alimi went through the modifications to the protocol document
since WGLC.  A new requirement was added on operations and management
(RFC5706).  Richard solicited feedback from the working group on which
aspects of RFC5706 should be in scope for the ALTO protocol.

Martin Stiemerling indicated that logging and failure discovery are in
scope, whereas IPFIX does not seem to be.  He indicated that the WG
should go through the list of operational and management issues to
determine which are relevant.  Richard asked what sort of events should
be logged --- all of them?  Martin: no, more likely only events of
interest to the operator are worth logging --- server failure, the
database being down, etc.

Richard indicated that more discussion would follow on the list about
what should be included from RFC5706.

Martin (as an individual) asked how many implementations have been
updated to support -11 or -12.  One hand went up.  Time for an interop?

Vijay exhorted Richard to open up a discussion on the mailing list to
determine which aspects of RFC5706 should be included in the ALTO
protocol document (PS: Richard has since opened a mailing list thread
on this topic, please see
http://www.ietf.org/mail-archive/web/alto/current/msg01555.html, and,
more importantly, participate!).

Server discovery
----------------

Meeting materials: 
  slides, MP3 (5m:20s, 2.56 MBytes), I-D, Meetecho recording (HTML5)

Michael Scharf went through the changes since the last version.  The
reverse DNS query has been removed, and new text on agent/proxy/VPN
scenarios has been added.  The major new addition to the draft is a PPP
extension for access networks.

The authors of the draft believe that it is now ready to move forward.
Further feedback is sought.

Enrico mentioned the need to keep an eye on a new WG (weirds) in the
APP area that is standardizing the whois service, as it may impact this
draft.  However, he felt that this draft was progressing in a fashion
consistent with the decisions made last time (keeping third-party
discovery out).

Vijay mentioned that this draft is a chartered item and asked the
WG to look at it with an eye towards moving it to WGLC after 
Vancouver.

Incremental updates
-------------------

Meeting materials:
  slides, MP3 (15m:51s, 7.61 MBytes), I-D, Meetecho recording (HTML5)

Michael Scharf presented this draft on behalf of the authors, who could
not be in Vancouver, and went through the updates in -02.  Michael
presented a numerical comparison between JSON Patch and an ALTO
extension for incremental updates.  The results appear to show a little
over a 2x efficiency advantage in favour of the ALTO extension.

Architecturally, JSON Patch will be a standardized solution, whereas
the ALTO extension is syntactically equivalent to an existing ALTO
service --- filtered maps.
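
For readers who did not see the slides, the contrast between the two
approaches can be sketched roughly as follows.  This is a minimal,
purely illustrative Python sketch: the PID names, costs and patch paths
are invented, and neither snippet reproduces the exact message syntax
of JSON Patch for ALTO maps or of the proposed ALTO extension.

  # Baseline cost map held by an ALTO client (values invented).
  cost_map = {
      "PID1": {"PID2": 5, "PID3": 10},
      "PID2": {"PID1": 5, "PID3": 7},
  }

  # Style 1: JSON Patch -- a list of generic operations, each addressing
  # one element of the JSON document by its path.
  json_patch = [
      {"op": "replace", "path": "/PID1/PID3", "value": 12},
      {"op": "replace", "path": "/PID2/PID3", "value": 9},
  ]

  # Style 2: filtered-map style -- a partial cost map with the same
  # schema as the full map, overlaid onto the client's copy.
  partial_map = {
      "PID1": {"PID3": 12},
      "PID2": {"PID3": 9},
  }

  def apply_partial(full, partial):
      """Overlay a partial cost map onto the full one, per source PID."""
      for src, dsts in partial.items():
          full.setdefault(src, {}).update(dsts)

  apply_partial(cost_map, partial_map)
  assert cost_map["PID1"]["PID3"] == 12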

In summary, there is no clear winner and WG input is sought on next
steps.

Enrico asked (as a participant) whether the messages were compressed in
the evaluation.  Michael indicated that they were uncompressed.  Enrico
felt that JSON Patch might have a higher compression rate.

Diego Lopez felt that JSON Patch would be more future-proof with
respect to the evolution of the ALTO protocol.  Diego was not sure
whether extensions such as multiple cost types can be accommodated by
the ALTO extension for incremental updates.

Michael noted that this may be true, but JSON Patch has some
limitations and processing overhead as well.

Richard Alimi felt that the major savings in the ALTO extension
come from overlaying one object onto another one with the same schema.  
JSON Patch is a more general solution that requires a path to an element 
and some operation to perform on the element.  Would it be possible to 
come up with equivalent generality for the ALTO extension yet retain
its speed?  If we can make this happen, then we can make it future-
proof as well.

Enrico expressed the opinion that in the IETF we sometimes build large
protocols with an eye on reuse, and at other times we suffer from the
Not-Invented-Here syndrome and reinvent the wheel.  He felt that JSON
Patch should cover our use cases, and that we should consider getting
on the same page with the authors of JSON Patch and seek input on
whether the ALTO use case is distinct enough to require a separate
approach rather than reusing JSON Patch.  Or maybe we can provide the
JSON Patch authors with input to improve what they are doing.

Martin Stiemerling felt that feedback can always be had by asking
for an expert review or opinion.

No further discussion on this.

Websocket-based notifications
-----------------------------

Meeting materials: 
  slides, MP3 (18m:57s, 9.09 MBytes), I-D, Meetecho recording (HTML5)

Jan Seedorf presented the material and noted that some of this work
derives from the i2aex BoF at the Paris IETF.  The draft proposes using
websockets for server-initiated notifications.  He mentioned that the
thoughts in the draft are at a very high level and that the authors'
intention is to present some early thoughts to the WG to start a
discussion on design tradeoffs.  Websockets are only one option; there
are others (XMPP, BGP, SNMP).

Websockets have certain advantages: they were explicitly designed to
add bi-directionality to HTTP (and ALTO is based on HTTP), the HTTP
authentication framework can be reused, etc.  Jan went through an
example in which a ws:// URI is returned in an ALTO Information
Resource Directory response (at this point Jan started to dance to a
cool snazzy ringtone, but the owner of the phone silenced it, depriving
the WG of Jan's excellent dancing).
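
To give a flavour of the idea in these minutes, the sketch below shows,
in Python, the kind of Information Resource Directory fragment Jan
described.  The resource names, the media type of the update resource
and the URIs are invented for illustration and do not reproduce the
draft's exact syntax.

  import json

  # Illustrative-only IRD fragment: an ALTO server advertising, next to
  # a normal HTTP(S) resource, a websocket endpoint for server-initiated
  # update notifications.  Names and media types here are assumptions.
  ird = {
      "resources": {
          "my-network-map": {
              "uri": "https://alto.example.net/networkmap",
              "media-type": "application/alto-networkmap+json",
          },
          "update-notifications": {
              "uri": "ws://alto.example.net/updates",  # the ws:// URI
              "media-type": "application/alto-update+json",
          },
      }
  }
  print(json.dumps(ird, indent=2))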

Jan noted that there have been some discussions on the mailing list on
whether websockets are a good fit when updates are infrequent, and on
the number of websocket connections that the server would be forced to
open and keep alive.  He then opened the floor for discussion.

Stefano Previdi noted that this is an interesting approach, especially
in relation to incremental updates --- you need websockets for
incremental updates.  Jan asked if Stefano felt that websockets made
sense here, and Stefano answered that his feedback was on the
functionality, not on a specific solution.

Someone (Emile?) mentioned that NETCONF provides similar functionality
and asked the authors to add it to the list of candidate protocols.
NETCONF (already an RFC) already specifies subscriptions for
notifications over websockets.  Jan and Enrico agreed to take a look at
this.

Michael Scharf noted that while incremental updates are a pull model,
it makes sense for them to be "pushed" over websockets.  Stefano later
agreed with this, especially since routing protocols have been doing
something similar (pushing routing updates) for the last 30 years.
Ben Niven-Jenkins expressed the opinion that the pull model works fine
for incremental updates, and that it really depends on the timeliness
of the updates required by clients: if you want fast updates, the push
model is better, but if updates are infrequent, then pull is adequate.
Jan added another dimension by noting that the frequency of updates
also depends on how valuable the information is to the client.  In the
CDN domain push would be better, whereas in the P2P domain pull may
win.  Ben noted that there may be some cases in CDN where pull may well
be sufficient.

Richard Yang agreed that using websockets makes a lot of sense.
Browser-based P2P clients are a use case, and these use websockets to
get information from the server.  At issue is the fact that current
ALTO information is static; once ALTO provides more dynamic information,
websockets will be useful.  He asked the authors to characterize these
scenarios.  Jan noted that for CDNs he sees the need for a more dynamic
setting.

Richard Alimi commented that the probability of the server being able
to make outbound connections to millions of clients is very low due
to NAT issues.  In such cases, the client has to open a connection to
the server and keep it open.  Richard does not see a use case for a
server initiating connections to clients.

Enrico noted again that the main goal of the work is to start a
discussion, and while he and Jan like this option, he would like to
have a larger discussion on other protocols.  Enrico felt that XMPP
would also be a good candidate protocol.

Diego Lopez said that while he was an early proponent of XMPP, the
problem is that it would require us to rethink the ALTO transport
protocol, because it would then make sense to specify the pull model
over XMPP as well; otherwise we would end up needing two connections
--- one for pull and one for push.  Enrico agreed that the
bi-directional nature of websockets makes it easy to fit in here, but
felt that XMPP may have other benefits that should be discussed.  Ditto
for NETCONF.  Diego noted that he likes that XMPP allows for a forward
(?) channel.

Enrico asked people to put forward proposals for other protocols.

SDN use case
-------------

Meeting materials: 
  slides, MP3 (19m:04s, 9.15 MBytes), I-D, Meetecho recording (HTML5)

Diego Lopez provided an overview of the application of ALTO to SDN.

Diego thinks that a single SDN controller controlling a whole network
is not feasible; instead there will be multiple controllers.  SDN
partitioning, therefore, is inevitable and in fact already a common
practice (see FlowVisor-enabled slices).

The main idea is that the various SDN controllers become the main
source of network information for ALTO.  The vertical architecture
depicted on slide 5 of the presentation is the preferred one.  Diego
foresees that the SDN controllers can publish information into ALTO.
This is outside the current specification of ALTO, but something that
may be possible in the future.  On the downward flow, one controller
can use information from ALTO that has been pushed by another
controller.  Security is obviously important.  Diego finished by
showing two use cases --- one regarding on-demand bandwidth and the
other about choosing an appropriate CDN.

Richard Alimi asked if ALTO was envisioned as THE protocol for
indicating what gets programmed in the controller, or just one of the
sources.  Diego indicated that ALTO is one source.

Martin Stiemerling would like some focus at the beginning to flesh out
exactly what an SDN is, what its demands on ALTO are, etc.

Discussion ensued on the northbound interface and on the need for it to
be defined concretely in the SDN community before thinking about its
impact on ALTO.  Diego noted that he has already seen cases of an
OpenFlow controller going to a RADIUS server to collect data; here,
ALTO would simply be used instead.

Nitin Bahadur noted that the presentation was very focused on OpenFlow.
Furthermore, there are many models of controllers, and if the current
work was based on the OpenFlow controller, then the model is probably
right.  Diego noted that they were not assuming that the SDN controller
was the only source of information for ALTO; there will be others.

Volker Hilt noted that ALTO provides some benefits in an SDN model.
However, this draft is too broad and talks about many models without
homing in on a problem that needs to be solved.  Why do we need an
interface from ALTO to SDN when we are not standardizing other
interfaces that feed ALTO?  Diego agreed that the current version of
the draft is fairly wide.  The idea is that an ALTO server coordinates
several independent SDN controllers.  Each SDN controller uploads some
state that other SDN controllers can avail themselves of.

Stefano Previdi said that there is already a protocol that does what
the ALTO-SDN draft is proposing: see the BGP-LS draft, which is an
extension to BGP to carry topology information.  BGP-LS has the right
granularity.

The chairs asked for more discussion to be moved to the list.

Extensions for data center information
--------------------------------------

Meeting materials: 
  slides, MP3 (9m:18s, 4.47 MBytes), I-D, Meetecho recording (HTML5)

Young Lee presented an extension, discussed at previous IETF meetings,
for large bandwidth use cases.  The problem deals with datacenters
connected through networks, which present an opportunity to look at
both datacenter resources and network resources and to select the
servers and locations that data may migrate to.

Today's ALTO summary of a path vector may not be sufficient to schedule
an optimal selection of resources.  The authors propose that ALTO
collect the right information at the ALTO client, which interacts with
an application orchestrator to deliver the right service (see slide 3).

Young introduced new cost types --- summary and graph --- used as
constraints during filtering.  Richard Alimi asked whether "summary"
corresponded to the current semantics of ALTO costs and "graph" to the
new one, to which Young Lee replied in the affirmative.  An example
showing the constraints is provided in the slide deck.
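
To make the distinction concrete in these minutes, the sketch below
contrasts the semantics behind the two cost types in Python.  All
names, numbers and the data layout are invented for illustration; they
are not the encoding defined in the draft.

  # "summary": today's ALTO semantics -- one abstract cost per
  # (source, destination) pair, e.g. available bandwidth in Gb/s.
  summary_costs = {
      "DC1": {"DC2": 40, "DC3": 40},
  }

  # "graph": exposes enough (abstract) structure to reveal shared
  # bottlenecks.  Here DC1->DC2 and DC1->DC3 share link L1, so the two
  # 40 Gb/s figures above cannot be used simultaneously.
  graph_costs = {
      "links": {"L1": 40, "L2": 100, "L3": 100},   # per-link capacity
      "paths": {
          ("DC1", "DC2"): ["L1", "L2"],
          ("DC1", "DC3"): ["L1", "L3"],
      },
  }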

Next, Young showed how ALTO can collect datacenter resource
information.  An example was provided as illustration.  Future work
includes federation aspects (multi-domain).

Extension for hi-bandwidth information
--------------------------------------

Meeting materials: 
  slides, MP3 (9m:24s, 4.51 MBytes), I-D, Meetecho recording (HTML5)

Greg Bernstein noted that in traditional "small bandwidth" applications
such as P2P, each request will not be blocked due to lack of bandwidth;
the network administrator can simply change the cost map to compensate
when there is indeed a lack of bandwidth.  But in a large-scale optical
core, where big pipes carry a lot of bandwidth, optimization is a
harder problem.

Richard Alimi asked whether the bandwidth demands are a function of
time and can change rapidly.  Indeed that is the case, Greg replied,
adding that technologies like the server notifications discussed
earlier are beneficial for disseminating such information rapidly.

Greg then showed a data model for representing paths (slide 6).  He
went on to state that constructing minimum spanning trees from a graph
may be a beneficial reduction strategy.  This looks almost like
topology aggregation but differs in that the focus here is on
bottleneck paths.
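
As a rough sketch of the kind of reduction Greg described, assuming the
networkx library and an invented topology: Greg spoke of minimum
spanning trees; for bottleneck (widest-path) bandwidth specifically,
the analogous construct used below is a maximum spanning tree over
available bandwidth, and a minimum spanning tree over a link-cost
metric would be built the same way with nx.minimum_spanning_tree.

  import networkx as nx  # assumed available: pip install networkx

  # Invented abstract topology with per-link available bandwidth (Gb/s).
  G = nx.Graph()
  G.add_edge("A", "B", bw=100)
  G.add_edge("B", "C", bw=40)
  G.add_edge("A", "C", bw=10)
  G.add_edge("C", "D", bw=100)

  # Reduction: keep only a spanning tree instead of the full mesh.  A
  # maximum spanning tree over bandwidth preserves the bottleneck
  # bandwidth between every pair of nodes.
  T = nx.maximum_spanning_tree(G, weight="bw")

  def bottleneck_bw(tree, src, dst):
      """Bottleneck bandwidth along the unique tree path src -> dst."""
      path = nx.shortest_path(tree, src, dst)
      return min(tree[u][v]["bw"] for u, v in zip(path, path[1:]))

  print(bottleneck_bw(T, "A", "D"))  # 40, limited by the B-C link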

In summary, there are two ways to share bandwidth constraint
information: one based on abstracted graphs and the other based on path
properties coupled with shared links.  The latest revision of the draft
takes a large number of technologies into account; further feedback is
sought.

Enrico urged attendees to read the draft and comment on list.