Network Working Group                                          A. Morton
Internet-Draft                                           G. Ramachandran
Intended status: Informational                               G. Maguluri
Expires: April 23, 2010                                        AT&T Labs
                                                        October 20, 2009


Reporting Metrics: Different Points of View
draft-ietf-ippm-reporting-metrics-00

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79. This document may contain material from IETF Documents or IETF Contributions published or made publicly available before November 10, 2008. The person(s) controlling the copyright in some of this material may not have granted the IETF Trust the right to allow modifications of such material outside the IETF Standards Process. Without obtaining an adequate license from the person(s) controlling the copyright in such materials, this document may not be modified outside the IETF Standards Process, and derivative works of it may not be created outside the IETF Standards Process, except to format it for publication as an RFC or to translate it into languages other than English.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as “work in progress.”

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on April 23, 2010.

Copyright Notice

Copyright (c) 2009 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents in effect on the date of publication of this document (http://trustee.ietf.org/license-info). Please review these documents carefully, as they describe your rights and restrictions with respect to this document.

Abstract

Consumers of IP network performance metrics have many different uses in mind. This memo categorizes the different audience points of view. It describes how these categories affect the selection of metric parameters and options when seeking information that serves their needs. The memo then proceeds to discuss "long-term" reporting considerations (e.g., days, weeks, or months, as opposed to 10 seconds).

Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].



Table of Contents

1.  Introduction
2.  Purpose and Scope
3.  Effect of POV on the Loss Metric
    3.1.  Loss Threshold
        3.1.1.  Network Characterization
        3.1.2.  Application Performance
    3.2.  Errored Packet Designation
    3.3.  Causes of Lost Packets
    3.4.  Summary for Loss
4.  Effect of POV on the Delay Metric
    4.1.  Treatment of Lost Packets
        4.1.1.  Application Performance
        4.1.2.  Network Characterization
        4.1.3.  Delay Variation
        4.1.4.  Reordering
    4.2.  Preferred Statistics
    4.3.  Summary for Delay
5.  Test Streams and Sample Size
    5.1.  Test Stream Characteristics
    5.2.  Sample Size
6.  Reporting Results
    6.1.  Overview of Metric Statistics
    6.2.  Long-Term Reporting Considerations
7.  IANA Considerations
8.  Security Considerations
9.  Acknowledgements
10.  References
    10.1.  Normative References
    10.2.  Informative References
    Authors' Addresses





1.  Introduction

When designing measurements of IP networks and presenting the results, knowledge of the audience is a key consideration. To present a useful and relevant portrait of network conditions, one must answer the following question:

"How will the results be used?"

There are two main audience categories:

  1. Network Characterization - describes conditions in an IP network for quality assurance, troubleshooting, modeling, etc. This point of view looks inward, toward the network, and the consumer intends to act there.
  2. Application Performance Estimation - describes the network conditions in a way that facilitates determining their effects on user applications, and ultimately on the users themselves. This point of view looks outward, toward the user(s), accepting the network as-is. This consumer intends to estimate a network-dependent aspect of performance, or to design some aspect of an application's accommodation of the network. (These are *not* application metrics; they are defined at the IP layer.)

This memo considers how these different points of view affect both the measurement design (parameters and options of the metrics) and the statistics reported to serve their needs.

The IPPM framework [RFC2330] and other RFCs describing IPPM metrics provide the background for this memo.




2.  Purpose and Scope

The purpose of this memo is to clearly delineate two points-of-view (POV) for using measurements, and describe their effects on the test design, including the selection of metric parameters and reporting the results.

The current scope of this memo is primarily limited to the design and reporting of the loss and delay metrics [RFC2680] [RFC2679], but the delay variation and reordering metrics are also discussed where applicable. Sampling, or the design of the active packet stream that is the basis for the measurements, is also discussed.




3.  Effect of POV on the Loss Metric

This section describes the ways in which the Loss metric can be tuned to reflect the preferences of the two audience categories, or different POV. The waiting time to declare a packet lost, or loss threshold, is one area where there would appear to be a difference, but the ability to post-process the results may resolve it.




3.1.  Loss Threshold

[RFC2680] defines the concept of a waiting time for packets to arrive, beyond which they are declared lost. The text of the RFC declines to recommend a value, instead saying that "good engineering, including an understanding of packet lifetimes, will be needed in practice." Later, in the methodology, the RFC gives reasons for waiting "a reasonable period of time" and leaves the definition of "reasonable" intentionally vague.




3.1.1.  Network Characterization

Practical measurement experience has shown that unusual network circumstances can cause long delays. One such circumstance is when routing loops form during IGP re-convergence following a failure or drastic link cost change. Packets will loop between two routers until new routes are installed, or until the IPv4 Time-to-Live (TTL) field (or the IPv6 Hop Limit) decrements to zero. Very long delays on the order of several seconds have been measured [Casner] [Cia03].

Therefore, network characterization activities prefer a long waiting time in order to distinguish these events from other causes of loss (such as packet discard at a full queue, or tail drop). This way, the metric design helps to distinguish more reliably between packets that might yet arrive, and those that are no longer traversing the network.

It is possible to calculate a worst-case waiting time, assuming that a routing loop is the cause. We model the path between Source and Destination as a series of delays in links (t) and queues (q), as these two are the dominant contributors to delay. The normal path delay across n hops without encountering a loop, D, is


           n
          ---
          \
D = t  +   >   t  + q
     0    /     i    i
          ---
         i = 1

 Figure 1: Normal Path Delay 

and the time spent in a loop of L hops is



      i + L-1
       ---
       \                         (TTL - n)
R = C   >   t  + q  where C    = ---------
       /     i    i        max       L
       ---
        i

 Figure 2: Delay due to Rotations in a Loop 

and where C is the number of times a packet circles the loop; in the worst case, C = C_max and the packet's TTL expires in the loop.

If we take the delays of all links and queues as 100ms each, the TTL=255, the number of hops n=5 and the hops in the loop L=4, then

D = 1.1 sec and R ~= 50 sec, and D + R ~= 51.1 seconds

We note that link delays of 100ms would span most continents, and a constant queueing delay of 100ms is also very generous. When a loop occurs, it is almost certain to be resolved in 10 seconds or less. The value calculated above is therefore an upper limit for almost any realistic circumstance.
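As a cross-check, the arithmetic above can be reproduced in a few lines. This is an illustrative sketch only, using the 100ms per-hop link and queue delays assumed in the text:

   # Worst-case waiting time sketch (Python), reproducing Figures 1-2.
   # All per-hop delays are the illustrative 100 ms values from the text.
   t = q = 0.100            # link delay t and queue delay q, in seconds
   TTL, n, L = 255, 5, 4    # initial TTL, path hops, hops in the loop

   D = t + n * (t + q)      # normal path delay, Figure 1
   C_max = (TTL - n) / L    # maximum number of loop rotations
   R = C_max * L * (t + q)  # delay due to rotations in the loop, Figure 2

   print(D, R, D + R)       # -> 1.1, 50.0, 51.1 seconds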

A waiting time threshold parameter, dT, set consistent with this calculation would not truncate the delay distribution (possibly causing a change in its mathematical properties), because the packets that might arrive have been given sufficient time to traverse the network.

It is worth noting that packets that are stored and deliberately forwarded at a much later time constitute a replay attack on the measurement system, and are beyond the scope of normal performance reporting.




3.1.2.  Application Performance

Fortunately, application performance estimation activities are not adversely affected by the estimated worst-case transfer time. Although the designer's tendency might be to set the Loss Threshold at a value equivalent to a particular application's threshold, this specific threshold can be applied when post-processing the measurements. A shorter waiting time can be enforced by locating packets with delays longer than the application's threshold, and re-designating such packets as lost. Thus, the measurement system can use a single loss threshold and support both application and network performance POVs simultaneously.
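A minimal sketch of this post-processing step follows. The list name, the None convention for lost packets, and the threshold values are illustrative assumptions, not part of any metric definition:

   # Re-designate packets that exceed an application's delay threshold
   # as lost. 'delays' holds one-way delays in seconds, with None for
   # packets already declared lost at the long measurement threshold.
   def apply_app_threshold(delays, app_threshold):
       return [d if d is not None and d <= app_threshold else None
               for d in delays]

   delays = [0.021, 0.019, None, 0.450, 0.020]  # hypothetical sample
   print(apply_app_threshold(delays, 0.100))    # 0.450 becomes lost

With this step, one measurement using a single long waiting time yields results for the network characterization POV directly, and for any number of application thresholds afterward.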




3.2.  Errored Packet Designation

RFC 2680 designates packets that arrive containing errors as lost packets. Many packets that are corrupted by bit errors are discarded within the network and do not reach their intended destination.

This is consistent with applications that would check the payload integrity at higher layers and discard the packet. However, some applications prefer to deal with errored payloads on their own; for them, even a corrupted payload is better than no packet at all.

To address this possibility, and to make network characterization more complete, it is recommended to distinguish between packets that do not arrive (lost) and errored packets that arrive (conditionally lost).
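The distinction might be recorded as in the following sketch; the function and its inputs are hypothetical, for illustration only:

   # Classify a packet's outcome into the three recommended categories.
   def classify(arrived, payload_errored):
       if not arrived:
           return "lost"
       if payload_errored:
           return "conditionally lost"  # errored, but did arrive
       return "received"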




3.3.  Causes of Lost Packets

Although many measurement systems use a waiting time to determine if a packet is lost or not, most of the waiting is in vain. The packets are no longer traversing the network, and have not reached their destination.

There are many causes of packet loss, including:

  1. Queue drop, or discard
  2. Corruption of the IP header, or other essential header info
  3. TTL expiration (or use of a TTL value that is too small)
  4. Link or router failure

After waiting sufficient time, packet loss can probably be attributed to one of these causes.




3.4.  Summary for Loss

Given that measurement post-processing is possible (and even encouraged in the definitions of IPPM metrics), measurements of loss can easily serve both points of view:

  1. Use a sufficiently long waiting time to serve network characterization, and revise the results with specific application delay thresholds as needed.
  2. Distinguish between packets that are truly lost and errored packets that arrive, whenever possible.




4.  Effect of POV on the Delay Metric

This section describes the ways in which the Delay metric can be tuned to reflect the preferences of the two consumer categories, or different POV.




4.1.  Treatment of Lost Packets

The Delay Metric [RFC2679] specifies the treatment of packets that do not successfully traverse the network: their delay is undefined.

" >>The *Type-P-One-way-Delay* from Src to Dst at T is undefined (informally, infinite)<< means that Src sent the first bit of a Type-P packet to Dst at wire-time T and that Dst did not receive that packet."

It is an accepted, but informal, practice to assign infinite delay to lost packets. We next look at how these two different treatments align with the needs of measurement consumers who wish to characterize networks or estimate application performance. We also look at the way that lost packets have been treated in two other metrics: delay variation and reordering.




4.1.1.  Application Performance

Applications need to perform different functions, depending on whether or not each packet arrives within some finite tolerance. In other words, a receiver's packet processing takes one of two directions (or "forks" in the road):

  1. Packets that arrive within the expected tolerance are handled by processes that remove headers, restore smooth delivery timing (as in a de-jitter buffer), restore sending order, check for errors in payloads, and so on.
  2. Packets that do not arrive when expected spawn processes that attempt to recover from the apparent loss, such as retransmission requests, loss concealment, or forward error correction.

So, it is important to maintain a distinction between packets that actually arrive, and those that do not. Therefore, it is preferable to leave the delay of lost packets undefined, and to characterize the delay distribution as a conditional distribution (conditioned on arrival).




4.1.2.  Network Characterization

In this discussion, we assume that both loss and delay metrics will be reported for network characterization (at least).

Assume that non-arriving packets are reported as Lost, usually as a fraction of all packets sent. If these lost packets are assigned undefined delay, and statistics are reported on the Delay distribution conditioned on the event of packet arrival (within the Loss waiting time threshold), then the network's inability to deliver packets in a timely way is captured only in the loss metric. We can say that the Delay and Loss metrics are orthogonal, in that they convey non-overlapping information about the network under test.

However, if we assign infinite delay to all lost packets, then:

  1. the Delay metric results are influenced by the Loss metric, and the two metrics are no longer orthogonal (lost packets are effectively counted twice), and
  2. simple statistics on the combined distribution, such as the sample mean, cannot be computed (they are infinite).

As further evidence of overlap, consider the Cumulative Distribution Function (CDF) of Delay when the value positive infinity is assigned to all lost packets. Figure 3 (Cumulative Distribution Function for Delay when Loss = +Infinity) shows a CDF where a small fraction of packets are lost.



 1 | - - - - - - - - - - - - - - - - - -+
   |                                    |
   |          _..----''''''''''''''''''''
   |      ,-''
   |    ,'
   |   /                         Mass at
   |  /                          +infinity
   | /                           = fraction
   ||                            lost
   |/
 0 |_____________________________________

   0               Delay               +oo

 Figure 3: Cumulative Distribution Function for Delay when Loss = +Infinity 

We note that a Delay CDF that is conditioned on packet arrival would not exhibit this apparent overlap with loss.

Although infinity is a familiar mathematical concept, it is somewhat disconcerting to see any time-related metric reported as infinity, in the opinion of the authors. Questions are bound to arise, and tend to detract from the goal of informing the consumer with a performance report.




4.1.3.  Delay Variation

[RFC3393] excludes lost packets from samples, effectively assigning an undefined delay to packets that do not arrive in a reasonable time. Section 4.1 of [RFC3393] describes this specification and its rationale (in the quote below, ipdv = inter-packet delay variation).

"The treatment of lost packets as having "infinite" or "undefined" delay complicates the derivation of statistics for ipdv. Specifically, when packets in the measurement sequence are lost, simple statistics such as sample mean cannot be computed. One possible approach to handling this problem is to reduce the event space by conditioning. That is, we consider conditional statistics; namely we estimate the mean ipdv (or other derivative statistic) conditioned on the event that selected packet pairs arrive at the destination (within the given timeout). While this itself is not without problems (what happens, for example, when every other packet is lost), it offers a way to make some (valid) statements about ipdv, at the same time avoiding events with undefined outcomes."




4.1.4.  Reordering

[RFC4737] defines metrics that are based on evaluation of packet arrival order, and includes a waiting time to declare a packet lost (to exclude it from further processing).

If lost packets were assigned (infinite) delay values instead, then the reordering metric would declare them reordered, because their sequence numbers will surely be less than the "Next Expected" threshold when (or if) they arrive. This practice would fail to maintain orthogonality between the reordering metric and the loss metric. Confusion can be avoided by designating the delay of non-arriving packets as undefined, and reserving delay values only for packets that arrive within a sufficiently long waiting time.
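The following sketch illustrates the arrival-order test (a simplification of the [RFC4737] procedure): packets that never arrive are simply never evaluated, so they cannot inflate the reordering count.

   # Count reordered packets: a packet is reordered when its sequence
   # number is below the Next Expected value at the time it arrives.
   # Lost packets (undefined delay) never appear in 'arrival_seq'.
   def count_reordered(arrival_seq):
       next_exp, reordered = 1, 0
       for s in arrival_seq:
           if s >= next_exp:
               next_exp = s + 1
           else:
               reordered += 1
       return reordered

   print(count_reordered([1, 2, 4, 3, 5]))  # -> 1 (packet 3)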




4.2.  Preferred Statistics

Today in network characterization, the sample mean is one statistic that is almost ubiquitously reported. It is easily computed and understood by virtually everyone in this audience category. Also, the sample is usually filtered on packet arrival, so that the mean is based on a conditional distribution.

The median is another statistic that summarizes a distribution, and it has somewhat different properties from the sample mean. The median is stable whether or not a distribution contains a few outliers. However, this stability prevents the median from indicating when a large fraction of the distribution changes value: 50% or more of the values would need to change for the median to capture the change.

Both the median and sample mean have difficulty with bimodal distributions. The median will reside in only one of the modes, and the mean may not lie in either mode range. For this and other reasons, additional statistics such as the minimum, maximum, and 95%-ile have value when summarizing a distribution.

When both the sample mean and median are available, a comparison will sometimes be informative, because these two statistics are equal only when the delay distribution is perfectly symmetrical.

Also, these statistics are generally useful from the Application Performance POV, so there is a common set that should satisfy both audiences.
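A sketch of this common set, computed on the conditional (arrived-only) distribution as recommended above, follows. The None convention for lost packets is an assumption carried over from the earlier sketches, and the percentile is a simple nearest-rank approximation:

   import statistics

   # Summarize the delay distribution conditioned on packet arrival.
   def conditional_stats(delays):
       arrived = sorted(d for d in delays if d is not None)
       return {
           "mean":   statistics.mean(arrived),
           "median": statistics.median(arrived),
           "min":    arrived[0],
           "max":    arrived[-1],
           # simple empirical 95th percentile (nearest-rank style)
           "p95":    arrived[int(0.95 * (len(arrived) - 1))],
       }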




4.3.  Summary for Delay

From the perspectives of:

  1. application/receiver analysis, where subsequent processing depends on whether the packet arrives or times out,
  2. straightforward network characterization without double-counting defects, and
  3. consistency with Delay variation and Reordering metric definitions,

the most efficient practice is to distinguish between truly lost and delayed packets with a sufficiently long waiting time, and to designate the delay of non-arriving packets as undefined.




5.  Test Streams and Sample Size

This section discusses two key aspects of measurement that are sometimes omitted from the report: the description of the test stream on which the measurements are based, and the sample size.




5.1.  Test Stream Characteristics

Network Characterization has traditionally used Poisson-distributed inter-packet spacing, as this provides an unbiased sample. The average inter-packet spacing may be selected to allow observation of specific network phenomena. Other test streams are designed to sample some property of the network, such as the presence of congestion, link bandwidth, or packet reordering.

If measuring a network in order to make inferences about application or receiver performance, then there are usually efficiencies derived from a test stream with characteristics similar to the sender's traffic. In some cases, it is essential to synthesize the sender's stream, as with Bulk Transfer Capacity estimates. In other cases, it may be sufficient to sample with a "known bias", e.g., a Periodic stream to estimate real-time application performance.
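For illustration, the two stream types mentioned above differ only in how the inter-packet gaps are drawn. This sketch assumes a 100 ms average gap, a value chosen here purely for the example:

   import random

   # Poisson stream: exponentially distributed gaps give an unbiased
   # sample of network conditions [RFC2330].
   def poisson_gaps(count, mean_gap=0.100):
       return [random.expovariate(1.0 / mean_gap) for _ in range(count)]

   # Periodic stream: a fixed gap samples with a "known bias", e.g.,
   # to mimic a real-time application's packet timing.
   def periodic_gaps(count, gap=0.100):
       return [gap] * count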




5.2.  Sample Size

Sample size is directly related to the accuracy of the results, and plays a critical role in the report. Even if only the sample size (in terms of number of packets) is given for each value or summary statistic, it imparts a notion of the confidence in the result.

In practice, the sample size will be selected taking both statistical and practical factors into account. Among these factors are:

  1. The estimated variability of the quantity being measured
  2. The desired confidence in the result (although this may depend on assumptions about the underlying distribution of the measured quantity).
  3. The effects of active measurement traffic on user traffic
  4. etc.

A sample size may sometimes be referred to as "large". This is a relative and qualitative term. It is preferable to describe what one is attempting to achieve with the sample. For example, stating an implication may be helpful: this sample is large enough that a single outlying value, at ten times the "typical" sample mean (the mean without the outlying value), would influence the mean by no more than X.
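The example above can be made concrete. If a sample of n packets has "typical" mean m, then including one outlier at 10m shifts the mean by 9m/(n+1), so keeping the shift below a fraction X of m requires n >= 9/X - 1. A sketch, where the 1% target is an illustrative choice:

   import math

   # Smallest sample size n such that one outlier at ten times the
   # "typical" mean shifts the overall mean by at most a fraction X:
   # new_mean = (n*m + 10m)/(n+1), so the shift is 9m/(n+1) <= X*m.
   def min_sample_size(X):
       return math.ceil(9 / X - 1)

   print(min_sample_size(0.01))  # -> 899 packets for at most a 1% shift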




6.  Reporting Results

This section gives an overview of recommendations, followed by additional considerations for reporting results in the "long-term".




6.1.  Overview of Metric Statistics

This section gives an overview of reporting recommendations for the loss, delay, and delay variation metrics based on the discussion and conclusions of the preceding sections.

The minimal report on measurements MUST include both Loss and Delay Metrics.

For Packet Loss, the loss ratio defined in [RFC2680] is a sufficient starting point, especially with its guidance for setting the loss threshold waiting time. We calculated above a waiting time of 51 seconds that should be sufficient to differentiate between packets that are truly lost and those with long finite delays, under general measurement circumstances. Knowledge of specific conditions can help to reduce this threshold, but 51 seconds is considered manageable in practice.

We note that a loss ratio calculated according to [Y.1540] would exclude errored packets from the numerator. In practice, the difference between these two loss metrics is small, if any, depending on whether the last link prior to the destination contributes errored packets.

For Packet Delay, we recommend providing both the mean delay and the median delay, with lost packets designated undefined (as permitted by [RFC2679]). Both statistics are based on a conditional distribution, where the condition is packet arrival prior to a waiting time dT, and dT has been set to take maximum packet lifetimes into account, as discussed above. Using a long dT helps to ensure that delay distributions are not truncated.

For Packet Delay Variation (PDV), the minimum delay of the conditional distribution should be used as the reference delay for computing PDV according to [Y.1540] or [RFC3393]. A useful value to report is a pseudo range of delay variation, based on calculating the difference between a high percentile of delay and the minimum delay. For example, the 99.9%-ile minus the minimum gives a value that can be compared with the objectives in [Y.1541].
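A sketch of the pseudo range calculation follows, again using the None convention for lost packets and a simple nearest-rank percentile approximation:

   # Pseudo range of delay variation: a high percentile of the
   # conditional delay distribution minus the minimum delay, which
   # also serves as the PDV reference delay.
   def pdv_pseudo_range(delays, percentile=0.999):
       arrived = sorted(d for d in delays if d is not None)
       high = arrived[int(percentile * (len(arrived) - 1))]
       return high - arrived[0]  # e.g., 99.9%-ile minus minimum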




6.2.  Long-Term Reporting Considerations

[I-D.ietf-ippm-reporting] describes methods to conduct measurements and report the results on a near-immediate time scale (10 seconds, which we consider to be "short-term").

Measurement intervals and reporting intervals need not be the same length. Sometimes, the user is only concerned with the performance levels achieved over a relatively long interval of time (e.g., days, weeks, or months, as opposed to 10 seconds). However, there can be risks involved with running a measurement continuously over a long period without recording intermediate results:

  1. A temporary power failure may cause the loss of all results to date.
  2. The measurement system's clock synchronization may experience a temporary outage, rendering subsets of the measurements invalid.
  3. Maintenance may be needed on the measurement system, or on its connectivity to the network under test.

For these and other reasons, such as

  1. the constraint to collect measurements on intervals similar to user session lengths, or
  2. the dual use of measurements in monitoring activities, where results are needed on a period of a few minutes,

there is value in conducting measurements on intervals that are much shorter than the reporting interval.

There are several approaches for aggregating a series of measurement results over time in order to make a statement about the longer reporting interval. One approach requires the storage of all metric singletons collected throughout the reporting interval, even though the measurement interval stops and starts many times.

Another approach is described in [I-D.ietf-ippm-framework-compagg] as "temporal aggregation". This approach would estimate the results for the reporting interval based on the individual measurement interval statistics (results) alone. The result would ideally appear in the same form as though a continuous measurement had been conducted. A memo addressing the details of temporal aggregation is yet to be prepared.

Yet another approach requires a numerical objective for the metric, and the results of each measurement interval are compared with that objective. Every measurement interval where the results meet the objective contributes to the fraction of time with performance as specified. When the reporting interval contains many measurement intervals, it is possible to present the results as "metric A was less than or equal to objective X during Y% of the time."
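A sketch of this comparison follows; the objective value and the per-interval results are purely illustrative, since (as noted below) the IETF does not set numerical objectives:

   # Fraction of measurement intervals whose result met the objective.
   def fraction_meeting_objective(interval_results, objective):
       met = sum(1 for r in interval_results if r <= objective)
       return 100.0 * met / len(interval_results)

   loss_ratios = [0.001, 0.0, 0.02, 0.0, 0.003]  # hypothetical intervals
   print("objective met during %.0f%% of time"
         % fraction_meeting_objective(loss_ratios, 0.01))  # -> 80%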

NOTE that numerical thresholds are not set in IETF performance work and are explicitly excluded from the IPPM charter.




7.  IANA Considerations

This document makes no request of IANA.

Note to RFC Editor: this section may be removed on publication as an RFC.




8.  Security Considerations

The security considerations that apply to any active measurement of live networks are relevant here as well. See [RFC4656].




9.  Acknowledgements

The authors would like to thank Phil Chimento for his suggestion to employ conditional distributions for Delay, and Steve Konish Jr. for his careful review and suggestions.




10.  References




10.1. Normative References

[RFC2119] Bradner, S., “Key words for use in RFCs to Indicate Requirement Levels,” BCP 14, RFC 2119, March 1997.
[RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, “Framework for IP Performance Metrics,” RFC 2330, May 1998.
[RFC2679] Almes, G., Kalidindi, S., and M. Zekauskas, “A One-way Delay Metric for IPPM,” RFC 2679, September 1999.
[RFC2680] Almes, G., Kalidindi, S., and M. Zekauskas, “A One-way Packet Loss Metric for IPPM,” RFC 2680, September 1999.
[RFC3393] Demichelis, C. and P. Chimento, “IP Packet Delay Variation Metric for IP Performance Metrics (IPPM),” RFC 3393, November 2002.
[RFC4656] Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M. Zekauskas, “A One-way Active Measurement Protocol (OWAMP),” RFC 4656, September 2006.
[RFC4737] Morton, A., Ciavattone, L., Ramachandran, G., Shalunov, S., and J. Perser, “Packet Reordering Metrics,” RFC 4737, November 2006.



10.2. Informative References

[Casner] “A Fine-Grained View of High Performance Networking,” NANOG 22 Conf., May 20-22, 2001, <http://www.nanog.org/mtg-0105/agenda.html>.
[Cia03] “Standardized Active Measurements on a Tier 1 IP Backbone,” IEEE Communications Mag., pp. 90-97, June 2003.
[I-D.ietf-ippm-framework-compagg] Morton, A., “Framework for Metric Composition,” draft-ietf-ippm-framework-compagg-09 (work in progress), December 2009.
[I-D.ietf-ippm-reporting] Shalunov, S. and M. Swany, “Reporting IP Performance Metrics to Users,” draft-ietf-ippm-reporting-04 (work in progress), July 2009.
[Y.1540] ITU-T Recommendation Y.1540, “Internet protocol data communication service - IP packet transfer and availability performance parameters,” December 2002.
[Y.1541] ITU-T Recommendation Y.1541, “Network Performance Objectives for IP-Based Services,” February 2006.



Authors' Addresses

  Al Morton
  AT&T Labs
  200 Laurel Avenue South
  Middletown, NJ 07748
  USA
Phone:  +1 732 420 1571
Fax:  +1 732 368 1192
Email:  acmorton@att.com
URI:  http://home.comcast.net/~acmacm/
  
  Gomathi Ramachandran
  AT&T Labs
  200 Laurel Avenue South
  Middletown, New Jersey 07748
  USA
Phone:  +1 732 420 2353
Email:  gomathi@att.com
  
  Ganga Maguluri
  AT&T Labs
  200 Laurel Avenue
  Middletown, New Jersey 07748
  USA
Phone:  732-420-2486
Email:  gmaguluri@att.com