Re: [aqm] Question re draft-baker-aqm-recommendations recomendation #1

"Fred Baker (fred)" <fred@cisco.com> Wed, 24 April 2013 20:10 UTC

From: "Fred Baker (fred)" <fred@cisco.com>
To: Dave Taht <dave.taht@gmail.com>
Date: Wed, 24 Apr 2013 20:10:08 +0000
Message-ID: <8C48B86A895913448548E6D15DA7553B820E7D@xmb-rcd-x09.cisco.com>
References: <8C48B86A895913448548E6D15DA7553B81F454@xmb-rcd-x09.cisco.com> <CAA93jw650dTcMm2C2n+s9c4ksJAyuXd_ebCpFU1zUfsRfEXxUA@mail.gmail.com>
In-Reply-To: <CAA93jw650dTcMm2C2n+s9c4ksJAyuXd_ebCpFU1zUfsRfEXxUA@mail.gmail.com>
Cc: "aqm@ietf.org" <aqm@ietf.org>
Subject: Re: [aqm] Question re draft-baker-aqm-recommendations recomendation #1

On Apr 24, 2013, at 12:38 PM, Dave Taht <dave.taht@gmail.com> wrote:

> On Wed, Apr 24, 2013 at 7:00 AM, Fred Baker (fred) <fred@cisco.com> wrote:
>> Do we generally agree with the recommendation of http://tools.ietf.org/html/draft-baker-aqm-recommendation-01#section-4.1?
> 
> Sure. If we have to struggle for consensus on that, we're in for an
> uphill battle. :)

Yes. But it still pays to check one's assumptions.

> I don't make the solid distinction between packet scheduling
> algorithms and queue length management algorithms as you do.  the
> phrase "active queue management" is a bit overloaded and I tend to
> blend together packet scheduling with a queue length management drop
> strategy when thinking about it.

Well, yes, you do tend to blend the conversations, and that makes it hard when one is trying to have a specific conversation. For example, if I am looking specifically at the effects of CoDel, PIE, AVQ, or pick-your-approach, and the response I get blends in queuing effects, I can't tell whether the result I see is due to mark vs. drop, randomness vs. determinism, or the queuing methodology. It's as if we were trying to talk about how the taste of oranges affects people's preferences, and someone kept yammering about their color and bug resistance.

The reason I approach it this way - and I'm open to other viewpoints, but I'm expressing my own here - is that (a) queuing is rather a different mechanism, and (b) for each queue in a WRR system, or each class in a calendar-based WFQ system, I still need some mechanism to signal to TCP/SCTP in the form of dropping or marking. In a WRR system, I might apply AQM to each queue, and I might even apply a *different* AQM algorithm or set of parameters to each queue. So I would prefer to have the dropping/marking methodology described separately from the queuing methodology.
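
To make that separation concrete, here is a rough Python sketch of what I mean by per-queue drop/mark in a WRR scheduler. The class names, the toy RED-like dropper, and its parameters are purely illustrative assumptions, not any shipping implementation:

    import random
    from collections import deque

    class ToyRedLikeAqm:
        """Toy probabilistic dropper keyed only to its own queue's depth."""
        def __init__(self, min_th, max_th, max_p):
            self.min_th, self.max_th, self.max_p = min_th, max_th, max_p

        def should_drop(self, depth):
            if depth <= self.min_th:
                return False
            if depth >= self.max_th:
                return True
            p = self.max_p * (depth - self.min_th) / (self.max_th - self.min_th)
            return random.random() < p

    class WrrQueue:
        def __init__(self, weight, aqm):
            self.pkts = deque()   # FIFO for this class
            self.weight = weight  # packets served per WRR round
            self.aqm = aqm        # this queue's own drop/mark policy

    def wrr_round(queues):
        """One WRR round: each queue gets its turn, and each queue's own
        AQM decides whether to drop a packet it is about to send."""
        sent = []
        for q in queues:
            for _ in range(q.weight):
                if not q.pkts:
                    break
                pkt = q.pkts.popleft()
                if q.aqm.should_drop(len(q.pkts)):
                    continue      # the signal lands on this flow only
                sent.append(pkt)
        return sent

The only point of the sketch is that the AQM object is a member of the queue, so each queue or class can carry different parameters, or a different algorithm entirely.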

Let me comment for a moment on what queuing does. The logic behind packet-pair and packet-burst is that as data segments go in the forward direction, they will be spaced out at the rate of the slowest link en route, which is the actual bit rate of the link (at least ideally) reduced by the mean rate of competing traffic at that instant. As a result, the ratio of the number of bytes or segments acknowledged to the interval between first ack received and last ack received gives you a transmission rate, and the resulting ack pacing has the sender sending (after slow-start and in the normal case) at precisely the rate of the bottleneck as so measured. If I use FIFO queuing, it is possible that my segments arrive in a burst, and so measure the rate of the link rather than the rate of the link less the traffic I'm competing with; it won't be until the third or fourth RTT that I can clear that out of the measurement. If I use a fair queuing technology (WRR or WFQ), I'm explicitly allowing competing traffic to take transmission slots between the segments in the segment burst, so that I am more likely to measure something akin to the rate of the link less the traffic I'm competing with. So any form of fair queuing will have a superior effect on the sender to FIFO. War stories on request. It's a very effective technology.
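
The arithmetic behind that rate measurement is simple enough to show. Here is a toy Python illustration with made-up timestamps and segment sizes, not code from any actual stack:

    def ack_rate_estimate(acks):
        """acks: list of (arrival_time_in_seconds, bytes_newly_acked)."""
        if len(acks) < 2:
            return None
        first_t, last_t = acks[0][0], acks[-1][0]
        bytes_after_first = sum(b for _, b in acks[1:])
        return bytes_after_first / (last_t - first_t)

    # Four ACKs for 1460-byte segments arriving 1 ms apart imply
    # 3 * 1460 / 0.003 = 1,460,000 bytes/sec: the share of the bottleneck
    # actually available to this flow, provided competing traffic was
    # interleaved between the segments (which fair queuing encourages).
    print(ack_rate_estimate([(0.000, 1460), (0.001, 1460),
                             (0.002, 1460), (0.003, 1460)]))

With FIFO at the bottleneck, the burst can arrive back-to-back, the ACK spacing collapses, and the same arithmetic overestimates the rate actually available.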

BUT - in any queuing system, whether WRR or WFQ and whether per-microflow or by some other classification schema, you want to apply the drop/mark technology to an individual queue or class, not to the entire system. Imagine, if you will, a WRR system with N queues, one of which has a stampeding elephant in it. Let's further imagine that you are dropping or marking packets after the total system has more than some number of packets in queue. If you apply that logic to the queue (or a packet dequeued from the queue) that has the stampeding elephant, you have some probability (perhaps 1) of signaling to the elephant. If you drop/mark from any other queue, you have a probability of 0 of hitting the session that is currently making life difficult. So restrict the drop/mark algorithm, whatever it is, to the deepest queue in the system. Implementers of other approaches can transfer the logic to their paradigms.
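
A minimal sketch of that rule, with an invented aggregate threshold and queues represented as plain lists, might look like this:

    def signal_congestion(queues, total_threshold):
        """queues: one list of packets per WRR class.  When the aggregate
        backlog crosses the threshold, drop (or mark) from the deepest
        queue, so the signal lands on the flow building the backlog."""
        total = sum(len(q) for q in queues)
        if total <= total_threshold:
            return None
        elephant = max(queues, key=len)   # the stampeding elephant's queue
        return elephant.pop(0)            # drop/mark from its head

Whatever the actual drop/mark algorithm is, the point is that its decision is scoped to that one queue rather than spread across the system.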

Coming specifically to flow queuing, as you call it, what you have is a WRR system in which zero or more microflows go to individual queues that are expected to fill and remain full, while everything else goes to a distinguished queue that is normally empty or not very deep. The distinguished queue might be given priority (if it has traffic, that traffic always goes "next", but the expectation is that it is empty a significant percentage of the time), or it might simply take its turn with the other WRR queues. The WRR queues operate in rotation, either each getting a packet to send or each sending up to some number of octets, DWRR-style. Flows with allocated queues give up their queues when those queues go empty, and flows in the distinguished queue get moved to other queues when the distinguished queue grows deeper than some threshold and one of the other queues is available. In a CoDel world, I would expect a packet to be dropped/marked if it waits for more than some interval, which is most likely to happen in the per-flow queues. In a PIE world, I would expect only those queues to exceed the threshold of interest, with the same effect. But I would expect neither to apply to the distinguished queue, at least in the normal case, because that queue rarely attains anything resembling a deep queue depth or implied latency.
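
For illustration only, here is a loose Python sketch of that arrangement. The quantum, the sojourn-time target standing in for the CoDel check, and all of the names are my own assumptions rather than any particular implementation:

    import time
    from collections import deque

    QUANTUM = 1514          # bytes a per-flow queue may send per DRR round
    TARGET_DELAY = 0.005    # sojourn limit standing in for the CoDel check

    class FlowQueue:
        def __init__(self):
            self.pkts = deque()     # entries: (enqueue_time, size, payload)
            self.deficit = QUANTUM

    def enqueue(q, size, payload):
        q.pkts.append((time.monotonic(), size, payload))

    def dequeue_one(distinguished, flow_queues):
        """Serve the distinguished (normally shallow) queue first, then give
        the per-flow queues DRR turns, policing each with the delay check."""
        if distinguished.pkts:
            _, _, pkt = distinguished.pkts.popleft()
            return pkt              # no drop/mark check: it stays shallow
        while flow_queues:
            q = flow_queues[0]
            if not q.pkts:
                flow_queues.popleft()        # empty flow gives up its queue
                continue
            if q.deficit <= 0:
                q.deficit += QUANTUM
                flow_queues.rotate(-1)       # next queue's turn
                continue
            t_enq, size, pkt = q.pkts.popleft()
            q.deficit -= size
            if time.monotonic() - t_enq > TARGET_DELAY:
                continue                     # this queue built delay: drop
            return pkt
        return None

Usage would have flow_queues as a deque of FlowQueue objects and distinguished as a single FlowQueue; the detail that matters for this discussion is that the delay check applies to the per-flow queues and not to the distinguished one.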

I'm glad you like flow queuing, and I'm all for implementations like yours. That doesn't make the methodology for the entire queuing system or a sub-class of it pertinent to the drop/mark policy of a single queue within it. In my view.



> IMHO:
> 
> "flow queue-ing" + "a drop strategy" is better than either alone and
> allows for better techniques to be applied (notably drop head of a
> given flow and the "fast/slow" queue concept in fq_codel and fq_pie.
> So far.)
> 
> Many of the problems mentioned in section 3 in particular are
> mitigated by better scheduling.
> 
> So I can live with "active queue management" but feel obligated to
> bring up the visibility of scheduling in the draft, if you can live
> with that?
> 
> Related to that, the definition of "active queue management" is rather
> late in the draft:
> 
> "The solution to the full-queues problem is for routers to
>   drop packets before a queue becomes full, so that end nodes can
>   respond to congestion before buffers overflow.  We call such a
>   proactive approach "active queue management".  By dropping packets
>   before buffers overflow, active queue management allows routers to
>   control when and how many packets to drop."
> 
> I have a few days free to take a stab at this draft coming up.
> 
>> 
>> 
>> _______________________________________________
>> aqm mailing list
>> aqm@ietf.org
>> https://www.ietf.org/mailman/listinfo/aqm
> 
> 
> 
> -- 
> Dave Täht
> 
> Fixing bufferbloat with cerowrt: http://www.teklibre.com/cerowrt/subscribe.html
> _______________________________________________
> aqm mailing list
> aqm@ietf.org
> https://www.ietf.org/mailman/listinfo/aqm