2.6.13 Robust Header Compression (rohc)

NOTE: This charter is a snapshot of the 48th IETF Meeting in Pittsburgh, Pennsylvania. It may now be out-of-date. Last Modified: 17-Jul-00


Chair(s):

Carsten Bormann <cabo@tzi.org>
Mikael Degermark <micke@sm.luth.se>

Transport Area Director(s):

Scott Bradner <sob@harvard.edu>
Allison Mankin <mankin@east.isi.edu>

Transport Area Advisor:

Allison Mankin <mankin@east.isi.edu>

Mailing Lists:

General Discussion: rohc@cdt.luth.se
To Subscribe: majordomo@cdt.luth.se
In Body: subscribe
Archive: http://www.cdt.luth.se/rohc/

Description of Working Group:

Note: Erik Nordmark (nordmark@eng.sun.com) is serving as the Technical Advisor to the group.

Due to limited bandwidth, IP/UDP/RTP/TCP packets sent over cellular links benefit considerably from header compression. Existing header compression schemes (RFC 1144, RFC 2508) do not perform well over cellular links due to high error rates and long link roundtrip times, particularly as topologies and traffic patterns become more complex. In addition, existing schemes do not compress TCP options such as SACK or Timestamps.
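The bandwidth argument can be made concrete with a little arithmetic. The following sketch assumes a ~20-byte voice frame as payload, which is illustrative rather than taken from the charter:

```python
# Rough arithmetic behind the charter's motivation: uncompressed
# IPv4/UDP/RTP headers total 20 + 8 + 12 = 40 bytes (60 with IPv6),
# while a low-rate voice codec frame may be only ~20 bytes of payload
# (assumed value, for illustration only).
IPV4, UDP, RTP = 20, 8, 12
headers = IPV4 + UDP + RTP          # 40 bytes of headers per packet
payload = 20                        # assumed voice frame size

overhead = headers / (headers + payload)
print(f"header overhead: {overhead:.0%}")   # about two thirds of each packet

# Compressing the headers to ~2 bytes (RFC 2508-style) changes the picture:
compressed = 2
print(f"after compression: {compressed / (compressed + payload):.0%}")
```

On a cellular link where every octet of air-interface capacity is expensive, this is the difference the WG is chartered to deliver robustly.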

The goal of ROHC is to develop header compression schemes that perform well over links with high error rates and long roundtrip times. The schemes must perform well for cellular links built using technologies such as WCDMA, EDGE, and CDMA-2000. However, the schemes should also be applicable to other future link technologies with high loss and long roundtrip times. Ideally, it should be possible to compress over unidirectional links.

Good performance includes both minimal loss propagation and minimal added delay. In addition to generic TCP and UDP/RTP compression, applications of particular interest are voice and low-bandwidth video.

ROHC may develop multiple compression schemes, for example, some that are particularly suited to specific link layer technologies. Schemes in addition to those listed in the milestones below may be added in consultation with the area directors.

A robust header compression scheme must:

* assure that when a header is compressed and then decompressed, the result is semantically identical to the original;

* perform well when the end-to-end path involves more than one cellular link;

* support IPv4 and IPv6.

Creating more thorough requirements documents will be the first task of the WG.

The working group shall maintain connections with other standardization organizations developing cellular technology for IP, such as 3GPP and 3GPP-2, to ensure that its output fulfills their requirements and will be put to good use.

In addition, the WG should develop a solid understanding of the impact that specific error patterns have on the compression schemes, and document guidelines to Layer 2 designers regarding what Layer 2 features work best to assist Layer 3 and Layer 4 header compression.

Finally, working group documents will address interactions with IPSEC and other security implications.

Goals and Milestones:

Mar 00


Submit I-D on Requirements for IP/UDP/RTP header compression.

May 00


Submit I-D of layer-2 design guidelines.

May 00


Submit I-D(s) proposing IP/UDP/RTP header compression schemes.

May 00


Submit I-D of Requirements for IP/TCP header compression.

Jun 00


Requirements for IP/UDP/RTP header compression submitted to IESG for publication as Informational.

Jul 00


Requirements for IP/TCP header compression submitted to IESG for publication as Informational.

Jul 00


Resolve possibly multiple IP/UDP/RTP compression schemes into a single scheme.

Aug 00


Submit I-D on IP/TCP header compression scheme.

Sep 00


Layer-2 design guidelines submitted to IESG for publication as Informational.

Sep 00


IP/UDP/RTP header compression scheme submitted to IESG for publication as Proposed Standard.

Dec 00


IP/TCP compression scheme submitted to IESG for publication

Jan 01


Possible recharter of WG to develop additional compression schemes.


No Request For Comments

Current Meeting Report

THURSDAY, August 3, 2000, 1530-1730

* WG admonishments

Scott Bradner gave a brief presentation on IPR issues in the IETF. The main message is that if the authors know of any patents or patent applications relevant to the presented technology, this has to be disclosed. If that cannot be done, the technology should not be presented at all.

Another issue he emphasised is the IETF way of working: the work is to be done only on mailing lists and in IETF WG meetings. It was emphasised that work is not done by phone calls among 'relevant' persons or in interim meetings. The IETF has spent a lot of time and effort developing the IETF way of working, and it has to be respected and maintained.

Carsten Bormann repeated the process and IPR issues that were presented at the Adelaide IETF and at the Stockholm interim meeting. The main points were the same as in Bradner's presentation: the IETF way of working, IPR issues, etc. The IPR policy is defined in RFC 2026. He also mentioned that unencumbered solutions are preferred over encumbered technology in the IETF.

* Agenda bashing

Bormann proposed an agenda for the Thursday and Friday WG meetings. There were no comments to the proposed agenda.

* WG document status

Bormann reviewed the charter. None of the robust TCP header compression milestones can be met. The robust IP/UDP/RTP header compression milestones are in much better shape: the requirements document is mostly done, and there were no comments on it in the meeting either. The other RTP-related documents are progressing. The schedule is tight, but with hard work the milestones can be reached.

** draft-ietf-rohc-lower-layer-guidelines-00.txt

Krister Svanbro presented the recent changes to the lower layer guidelines document, including some that still have to be made. The document is mostly in good shape. The items missing from the current version are: packet duplication, support for feedback packets, robust TCP/IP header compression aspects, and a high-level description of the generated header stream. The document needs to clarify that packet duplication is not allowed on the link, i.e., between compressor and decompressor. The lower layer guidelines do not currently identify the link requirements related to feedback packets, i.e., from decompressor to compressor; those have to be added before the document can be considered ready. Furthermore, it would be beneficial to add a description of the generated header stream; it is still missing from the current version and will be added within a month.

Robust TCP/IP header compression aspects are not treated in the current lower layer guidelines document at all, and there has been no activity or interest in doing so. These issues are postponed until activity on TCP/IP header compression increases. The document will need to be partitioned into two parts, one for the RTP part and one for the TCP part, to better match the progress of those two tracks of header compression development.

The last item to be added is a description of a produced header stream, using VoIP as an example. Finally, unequal error detection and protection have to be clarified. They exist in the document, but due to confusion, e.g. in 3GPP, they have to be rewritten. The required clarification is that neither is required, but both might benefit header compression.

** draft-ietf-rohc-rtp-requirements-02.txt

Mikael Degermark, the author of the RTP ROHC requirements document, was not present, so Bormann simply asked whether there were any comments on the current version of the document. The only comment was about the transparency of the produced headers. The question related to the 0-byte header compression proposals and whether they are to be accepted as part of the ROHC WG. This was discussed at the end, during the medium-term WG schedule.

** draft-ietf-rohc-rtp-01.txt

Bormann made a short presentation of the document. He described the size of the document (>100 pages) and the text that has been added since the Stockholm meeting. The document includes state models, different techniques to provide efficiency and robustness in different modes, and list-based compression. The document requires significant technical and editorial work to meet the milestones.

* Context Status Transfer

The next section of the meeting was about context status transfer between compressors in the case of handover (this section was dealt with early to accommodate some flight schedules). Bormann explained that while defining the protocol that carries out the context transfer is out of scope for this WG, it may be useful to specify the information that makes up the context to be transferred.

** draft-koodli-rohc-hc-relocate-00.txt

Rajeev Koodli presented the problem of context state transfer. Unless the new router gets the compressor context from the previous node, the 'downlink' (where the compressor receives packets from the network, compresses them, and sends them to the mobile node) will end up sending full headers in the beginning, wasting capacity. In the 'uplink' direction (where the decompressor receives packets from the mobile node), received packets have to be discarded until the new context has been obtained from the mobile node. The proposal in the presentation was that the new router would request the context from the previous router when the context relocation happens. Koodli explained that some parameters need to be defined by ROHC to enable context state transfer work in other WGs (e.g., the Mobile IP WG). The parameters required by other groups are: the compression profile type, i.e., information about high-level items such as IP/UDP/RTP/audio/video, etc. (note that this profile concept is not the same as the one in the ROCCO draft); the CID, which in the presentation was a logical identifier not required to be carried over the radio in all scenarios (e.g., a highly optimised VoIP service) and which needs to be made visible to the IP layer so that IP-layer signalling can be performed; the compression context itself, which has to be defined and should be of fixed size; and finally a filter for a context, which identifies the IP stream undergoing compression.
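The four parameters Koodli listed can be pictured as one transferable record. This is a hypothetical sketch (all field names invented for illustration; the actual encoding was explicitly left to other WGs):

```python
# Hypothetical sketch of the context-transfer parameters from the
# presentation: the old router would hand this record to the new router
# so that compression can resume without falling back to full headers.
from dataclasses import dataclass

@dataclass
class TransferableContext:
    profile: str          # high-level profile type, e.g. "IP/UDP/RTP"
    cid: int              # logical context identifier, visible at the IP layer
    context: bytes        # the compression context itself, fixed size
    flow_filter: tuple    # identifies the IP stream undergoing compression

# Example record for one compressed VoIP flow (all values illustrative):
ctx = TransferableContext(
    profile="IP/UDP/RTP",
    cid=3,
    context=bytes(64),    # fixed-size context blob, 64 bytes assumed
    flow_filter=("10.0.0.1", "10.0.0.2", 5004, 5004),
)
```

The fixed context size matters because the relocation protocol (defined elsewhere) needs to budget for the transfer without parsing the blob.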

Comments after the presentation:

Bormann made an attempt at a summary by saying that the presentation can have two kinds of impacts:

1) Do we see a requirement to be able to freeze the compressor state during handover? If the answer is yes, another question is how long does the compressor need to stay in the frozen state? This depends on the handover delay. (Discussion indicated that, for cdma2000, a 100-200ms freeze is not needed -- 20-30ms might be reasonable.)
2) Do we see a need to standardize the interchange format for the context state transfer? Should the requirements document state the requirement for functionality to relocate the context state?

* Report of ROCCO and CRTP over WCDMA air interface - field trial results (Svanbro)

Krister Svanbro explained that the simulations previously carried out to verify the operation of the ROCCO proposal have now been confirmed by field trials. These also give practical information to be used in algorithm development. The field trials were carried out in Chiba with Japan Telecom.

Field trial settings: laptops with VoIP and implementations of both the CRTP and ROCCO header compression schemes. Header compression was run on top of PPP; the VoIP packets were not necessarily an exact fit to the physical frames. The radio bearer used was a typical real-time bearer, no retransmissions, BER 1E-3. The network was unloaded, essentially without interference. The base station antenna height was 50 m; the mobile station was mounted in a vehicle (tested both stationary and moving, in the city, no 'elevator' events). The round trip time was 400 ms (this may be unrealistic, though). There were no handovers.

The results showed the number of packet losses and the average header sizes:

                           Mean header (octets)   Packet loss
   Stationary mobile:
     ROCCO 1-octet         1.0073                 0-3.69%
     ROCCO 2-octet         2.0068                 0.02-3.77%
     CRTP                  2.0038-2.6422          0-34.88%
   Moving mobile:
     ROCCO 1-octet         1.005                  -
     CRTP                  2.0253-2.2204          -

The variations are due to different BER settings.

The longest consecutive loss was 4 packets; most were 1 or 2.

In summary, the results basically match the simulations done previously.


* Unidirectional/optimistic SO format

The next section of the WG meeting was dedicated to resolving which format/algorithm (or formats/algorithms) should be used for the unidirectional and optimistic cases. draft-ietf-rohc-rtp-01.txt contains text for two formats/algorithms, commonly referred to as the keyword and CRC approaches. The advocates of the two approaches took 20 minutes each to present their case.

** Keyword approach (Burmeister)

Carsten Burmeister presented a comparison/question list for the keyword and CRC approaches. He proposed using a strong CRC in FO packets, as it prevents damage propagation and detects errors. In SO packets, a long (6-bit) sequence number together with a keyword bit is enough to be robust against bit errors. Two different packet types should be defined: update and non-update packets. Update packets would be indicated by a flag and can exist only in the FO state. Non-update packets can exist in SO or FO; in the FO state, they are recognised by the flag not being set. Burmeister described two different strategies for updating the sequence number window: (1) send FO packets after 60 SO packets, or (2) compression stays in the SO state. The latter is susceptible to bit errors, so it is not being proposed.

Burmeister then described that the keyword approach copes with long losses better than CRC: the keyword approach would withstand losses of up to 60 packets, while CRC could withstand a loss of only 12 packets. When he reminded the audience that 3GPP states a hard handover may take 200 ms, a comment was made that 200 ms is longer than what users will tolerate, so the more likely case should have losses more like 80 ms.

In further comparison, he mentioned that an undetected bit error in the link layer leads to the following problems: the CRC approach may suffer context invalidation, while the keyword approach will only suffer one incorrect decompression (the next packet will be decompressed correctly again).

Burmeister arrived at the following conclusions: the keyword approach is robust and suffers no damage propagation; CRC is weak in the case of residual bit errors (a 3-bit CRC cannot detect all errors in a 320-bit header), and the 4-bit sequence number is too short for long loss events. He proposed either using a 2-byte SO header with a strong CRC and a long sequence number, or a one-byte header with a keyword bit and a 6-bit sequence number, depending on the conditions (possibly decided adaptively).
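The keyword mechanism as presented can be sketched roughly as follows. Details such as the two-context interpretation of the keyword bit are assumptions drawn from the discussion, not from a published draft:

```python
# Hypothetical sketch of the keyword idea: an SO header carries a 6-bit
# sequence number plus a one-bit "keyword" selecting one of two stored
# contexts. The contexts are changed only by explicit FO update packets,
# which is why an undetected SO error cannot stick in the context.
class KeywordDecompressor:
    def __init__(self):
        self.contexts = {0: None, 1: None}   # contexts indexed by keyword bit

    def update(self, kw_bit, context):
        """FO update packet: install a context under a keyword bit."""
        self.contexts[kw_bit] = context

    def decompress(self, kw_bit, sn_lsb, last_sn):
        """SO packet: pick the context by keyword bit, extend the 6-bit SN."""
        ctx = self.contexts[kw_bit]
        if ctx is None:
            return None                      # no context yet: discard packet
        # Reconstruct the full SN from its 6 LSBs; this is where the
        # "up to 63 lost packets between keyword updates" figure comes from.
        sn = (last_sn & ~0x3F) | sn_lsb
        if sn <= last_sn:
            sn += 0x40                       # LSBs wrapped past the reference
        return ctx, sn
```

With a fixed keyword, any burst shorter than 64 packets leaves the SN unambiguous; a loss spanning a keyword update is where the scheme degrades, matching the non-uniform robustness discussed later.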

At this point there were no comments, as Jonsson asked to make his presentation first in order to answer the questions and comments.

** SO headers based on CRC compared to keyword approach (Jonsson)

Lars-Erik Jonsson briefly explained the purposes of the CRC: catch long loss events, enable correct decompression even if residual bit errors remain after link-layer error protection, protect against errors introduced by external mechanisms (e.g., timestamp generation), continuously shift the context forward, and enable reverse decompression. The normal environment in which the compressor/decompressor operates is: no undetected errors (i.e., no residual bit errors), long losses uncommon, CRC verified and context updated correctly.

Long loss: in the CRC approach, 12-16 packets can be lost without entering a long-loss situation. Simulations have shown that the probability of the CRC not detecting a long loss event is 1/24 instead of 1/8; the reason is that errors are not randomly distributed. Furthermore, if the detection of a long loss fails, the error will not propagate, as it will be detected in the next packet. It was asked what kind of error models were used. Jonsson answered: typical models for the WCDMA environment. He also explained that long loss detection can be improved by pre-verifying the CRC, changing polynomials and calculation methods, and by using timers or wall clocks. (A question on how to choose a better polynomial in three bits revealed that you really can't.) If pre-verification is in use and it fails, more information has to be sent than in a normal SO packet. (A question was asked on what to do at the CRC pre-verification stage when the pre-verification fails (every 24 packets): do you send more info? No.) When compared to the keyword approach, both mechanisms work if timers are available, but CRC is more reliable. When timers are not available, the keyword approach cannot detect long loss at all in some cases.
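The CRC-based SO mechanism being debated can be sketched as below. The polynomial, the 4-bit LSB field, and the header rebuilding are assumptions for illustration; the draft's actual choices may differ:

```python
# Sketch of CRC-verified SO decompression: the SO header carries the SN
# LSBs plus a 3-bit CRC over the original uncompressed header. The
# decompressor extends the LSBs against its context and uses the CRC to
# verify that the reconstruction (and hence the context) is correct.
def crc3(data, poly=0b1011):
    """Toy 3-bit CRC (x^3 + x + 1); the draft's polynomial may differ."""
    reg = 0
    for byte in data:
        for i in range(7, -1, -1):
            reg = ((reg << 1) | ((byte >> i) & 1)) & 0xF
            if reg & 0x8:
                reg ^= poly
    return reg & 0x7

def decompress_so(sn_lsb, crc, ref_sn, rebuild_header):
    """Try SN candidates in the interpretation window; accept on CRC match."""
    base = ref_sn & ~0xF
    for wrap in (0, 0x10):                  # current LSB window, then the next
        sn = base + wrap + sn_lsb
        if sn <= ref_sn:
            continue
        header = rebuild_header(sn)         # regenerate the full header
        if crc3(header) == crc:
            return sn, header               # CRC ok: context verified
    return None                             # CRC failure: discard, maybe request update
```

A loss of 16+ packets pushes the true SN outside both candidate windows, which is exactly the long-loss situation the CRC (with probability about 1 - 1/8, or 1 - 1/24 in the measured error models) is relied upon to detect.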

Residual BER:
Usually residual errors are detected by the CRC mechanism. The CRC can only fail to detect bit errors if several bit errors occur; in such a case, the context may be updated incorrectly. However, the probability of that happening is very low: P(several errors)*P(undetected)*P(wrap around). If an undetected error and an incorrect context update occur, the error will not propagate, but it may require a context update. Additional reconstruction attempts can be used to avoid context update requests. Comments from the audience at this point: Burmeister argued that undetected errors invalidate the context and lead to error propagation. Jonsson repeated the points above. Tmima Koren stated that the sequence number wrap-around case can be detected using timers.

Robustness comparison:
In the CRC approach, consecutive packet losses can be handled as long as the number of losses is less than 15. In the keyword approach, up to 63 packet losses can be handled, but only if they happen between keyword updates, i.e., the robustness of the keyword approach is not uniform over time. If the loss happens during the context updates (i.e., FO packets), the limit is (number of updates)-1. To the question of typical talk spurt length, Bormann commented that we are looking at an average talkspurt length of about a second of audio, in a distribution with a long tail. (Discussion ensued of how likely longer losses were to occur.)

Efficiency comparison:
A time stamp jump in the CRC approach requires FO packets to be sent a sufficient number of times that one can be assumed to have been received (with rather high probability). The keyword approach is similar, but a time stamp jump causes overhead if it happens directly after a KW update, since a new keyword cannot be created immediately and the FO packet structure has to be used for an extended time.

Summary by Jonsson:


Q: what happens when there is not a packet occurring every frame time because of silence suppression and comfort noise generation?
A: then you have to send an FO packet because the timestamp changes.

Q: How can you say errors will never propagate? If you don't detect an error with the CRC, you will update the context incorrectly; won't that cause the next packet to look invalid even if it has no error?
A: You need to judge the undetected packet as still being wrong, because the change in SN is different from what is reasonable.

Q: But this assumes small jitter on the stream as well as some additional intelligence in decision making.

After the two presentations, a long debate between the audience and the authors of the presented mechanisms ensued, without a pronounced conclusion. Bormann asked who understands the schemes and who prefers one over the other (without asking which one is preferred). A minority of the audience indicated that they understand the schemes (apparently very few outside the two sides of the present argument). Bormann diagnosed a lack of communication. In particular, it seems hard to evaluate the tradeoff until the various "intelligent" algorithms the proponents have in mind are written down, so that one can know what the compressor and decompressor would do in the various circumstances being discussed. Bormann then charged the two groups to document, before the Friday morning meeting, the cases where the differences occur, and to produce a list of questions for Friday morning to be used to select a mechanism. He also announced that all interested parties should discuss these issues on Thursday evening so that a reasonable consensus could be found on Friday morning and technical work could proceed, in order for the WG to be able to meet the milestones.

FRIDAY, August 4, 2000, 0900-1130

Bormann presented the results of Thursday night's ad-hoc discussions (two slides). The ad-hoc group had performed a brief analysis of both approaches, covering the issues discussed in the Thursday WG meeting. During the discussions, a number of problems with the CRC scheme came up whose solutions are not covered in the draft. First, if the CRC check indicates an error, this may be because the present header is damaged or because an incorrect context was installed from the previous header. To better disambiguate this, the group described a scheme they called window reconsideration: after each window update, save the previous window; if the CRC detects an error and the SN LSBs in the compressed header would have a different interpretation in the previous SN window, try that as an alternative context, and go back to that context if the CRC check succeeds. Second, after a time gap during which nothing was received, the receiver may attempt to update the MSBs according to the elapsed time; this was termed local repair.
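The two ad-hoc repair mechanisms can be sketched as follows. This is an illustrative reconstruction of the two slides, with details such as the packet interval and the verification callback assumed:

```python
# Sketch of the two ad-hoc repair mechanisms (details assumed).
# "Window reconsideration": keep the previous SN window so a CRC failure
# can be retried against the older context. "Local repair": after a
# reception gap, advance the SN MSBs according to elapsed time.
class RepairingDecompressor:
    def __init__(self, context, packet_interval=0.02):
        self.ctx = context                  # current context (holds ref SN)
        self.prev_ctx = None                # saved at each window update
        self.interval = packet_interval     # e.g. 20 ms voice frames (assumed)

    def window_update(self, new_ctx):
        self.prev_ctx, self.ctx = self.ctx, new_ctx

    def on_crc_failure(self, sn_lsb, verify):
        """Window reconsideration: retry the LSBs in the previous window."""
        if self.prev_ctx is not None:
            sn = (self.prev_ctx["sn"] & ~0xF) | sn_lsb
            if verify(self.prev_ctx, sn):   # verify() re-runs the CRC check
                self.ctx = self.prev_ctx    # roll back to the older context
                return sn
        return None

    def local_repair(self, gap_seconds):
        """After a reception gap, advance the SN by the elapsed time."""
        self.ctx["sn"] += round(gap_seconds / self.interval)
        return self.ctx["sn"]
```

Both mechanisms disambiguate "damaged header" from "wrong context" without extra bits on the air, which is why adding them changed the loss-propagation comparison below.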

With these mechanisms added to the CRC scheme, the results of the analysis show that the KW overhead is slightly higher for long talkspurts and that KW uses FO for short adjacent talkspurts, creating about 2 bytes more overhead per packet in that case. On the other hand, the CRC complexity is higher, as the CRC calculation has to be carried out over 10 to 14 non-static header bytes for each packet (this assumes that the algorithm is chosen so that the CRC is computed over the static bytes first; otherwise it has to be computed over a large part of 40 bytes).
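The complexity point relies on a standard property of CRCs: they are computed sequentially, so if the static header bytes come first, the CRC register state after them can be cached per context and only the changing bytes processed per packet. A minimal sketch, using a toy 3-bit CRC (byte counts assumed for illustration):

```python
# A CRC is a bit-serial state machine, so the register state after the
# static header bytes can be computed once per context and reused; each
# packet then only processes the 10-14 changing bytes.
def crc3_update(reg, data, poly=0b1011):
    """Advance a toy 3-bit CRC register over `data` (MSB-first)."""
    for byte in data:
        for i in range(7, -1, -1):
            reg = ((reg << 1) | ((byte >> i) & 1)) & 0xF
            if reg & 0x8:
                reg ^= poly
    return reg

static_part = bytes(26)                 # static header bytes (count assumed)
dynamic_part = bytes(range(14))         # 10-14 changing bytes per packet

cached = crc3_update(0, static_part)    # computed once per context
per_packet = crc3_update(cached, dynamic_part) & 0x7

# Identical to running the CRC over the entire header each time:
assert per_packet == crc3_update(0, static_part + dynamic_part) & 0x7
```

This is why the field ordering in the header-to-be-CRC'd matters: with static bytes first, the per-packet cost is bounded by the dynamic part.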

Estimates were given for the loss propagation in three cases:

1) hard handover case (4 to 10 packets lost): CRC 0, KW 10/64 to 40/64 on average (less in unidirectional case)

2) 12+ packet loss case: CRC 3, KW 50/64 average ((n-2)*5/64)

3) "elevator event" long loss case: CRC 3, KW 5 (10.5 in unidirectional case)

Bormann asked for comments about these results.

One person asked how the improvements from last night's meeting would affect patents. A general discussion about IPR issues started. Bradner explained that it can never be known whether IPR exists, even if the developing person/company does not know of any. RFC 2026 is to be followed in the IETF process. The question was raised again about the status of the new techniques that were discussed in the ad-hoc meeting and are not in the draft. Bradner pointed out that, unless the speaker said something about IPR for those new ideas, it is assumed they are not covered by IPR claims known to the speaker; this is reinforced by the yellow sheet of paper that is now part of the IETF attendee package.

With respect to the IPR statement required from the contributors, Bradner reminded the audience that an IPR statement needs to be submitted to the IETF secretariat. Bormann reiterated that unencumbered solutions are preferred over encumbered solutions; one way to arrive at an unencumbered solution would be a waiver from the IPR holder, the pivotal point being that a separate license does not need to be executed by every implementer. Bradner pointed to RFC 1822 as one example of an IPR statement that would have this effect and protect the IPR holder against other IPR claims.

On the technical side, Casner pointed out that the increased computational complexity of the CRC approach may be an issue for concentrator-like devices that compress a large number of streams.

It was agreed by the WG that the CRC approach is technically better, but the point was made that a more favorable IPR situation for the KW approach might be cause for reconsideration of this decision.

Bradner pointed out that we cannot expect to get all the IPR statements that might exist, and that we can't delay work to wait for that point. We need to proceed with the technical work.


The consensus of the WG is that there are technical benefits of the CRC approach over the keyword approach. A show of hands was clearly in favor of proceeding with CRC as the working assumption for the technical approach while IPR statements are collected. Bormann promised to generate a mail clarifying the IPR issues.

* WG short term schedule

Bormann proposed that the WG last call for RTP ROHC must be on September 13 to be able to make the end-of-September IESG submission target of the WG. August 21 was given as the deadline for filling in the missing parts of the IP/UDP/RTP draft.

Bormann asked whether there was a need for an editing meeting. Bradner explained that editing meetings are not an accepted IETF method as it is too hard to keep the process open.

* Issues to be discussed

Issue: Context IDs (CIDs)

Bormann presented the issue of the CID size. So far, context IDs had not been part of the discussion of the packet formats. Obviously, adding one or two bytes for the context ID is neutral for the packet format that follows it. Bormann argued that we want to allow multiple streams on one channel even if secondary channels might be created only later for a new stream. He also argued that there is a need for a zero-bit CID format for the streamlined voice case. He proposed using a 4-bit CID in IR/FO packets, but a 0-bit CID in at least one SO format.

In summary:

One comment was that having formats with and without a CID, or with short and long CIDs, means that you need to expend bits on two packet type codes. Bormann explained that the idea is that for each channel, there is one context that is streamlined, and the others require longer headers. Jonsson: if a 4-bit CID is chosen, how will the remaining 4 bits be used?

Jonsson: some link layers always provide the muxing you need, so allowing negotiation to always use 0 for those layers would be useful. New channels could be created when appropriate. This raised the following questions: How expensive is it to set up a new channel? What limits are there on the number of channels? There is a need to look at the real links. Bormann proposed to study how expensive it is to set up additional channels (i.e., whether the lower layers can do the multiplexing).

Bradner proposed that the CID length should be more than four bits, and that it should be negotiated at the beginning of context establishment. Jonsson said that at least two SO packet types would be required in that case (one with a 0-bit CID for one data flow, while the other data flow(s) would use more than 0 bits for the CID).

While the discussion was somewhat inconclusive, it is probably fair to say that there was little support for the 4-bit CID approach.

Issue: Negotiation and announcement

Negotiation and announcement is related to the CID discussion. The problem identified was that, if the CID length has to be negotiated at the beginning of the connection, which protocols would be used to carry this information throughout the network?

Bormann reminded the audience of the history of RFC 2509, which describes the negotiation for RFC 2507. For ROHC, a 'son-of-2509' would define the set of information that needs to be negotiated. He also proposed that this could be a separate document, produced independently of the main document. The ADs support having this in the charter. They also warn of the IPR problem on PPP compression negotiation (Motorola).

3GPP and 3GPP2 might have different viewpoints: 3GPP requires only the information that has to be negotiated, while 3GPP2 would prefer having both the information and the mechanisms to negotiate it in the standard. Svanbro proposed that the WG define what needs to be negotiated and write that into the main draft; for 3GPP2 purposes a separate draft could be developed that would define the mechanisms. The agreement was to include the parameters to be negotiated in the ROHC specification, and to generate a separate document for the PPP negotiation.

Sub-Issue: Announcement protocols

The algorithm may need a hook for negotiation to set up the channels and contexts. A long discussion ensued about how to set up the correct types of contexts before having any knowledge of the stream. An announcement protocol may be needed to give boxes at the ends of links in the middle of the network information about the specific needs for the compression of a flow. Not only does such a box need to know the flow is RTP, but perhaps also what is above RTP, because different profiles may be needed for different media. Question: can we still just do it by inference?

Finally, Bormann proposed that the scheme should start in a mode which does not damage the flow and is efficient for voice and reasonably efficient for other traffic modes, rather than trying to put in too much mechanism to optimize the others. After obtaining more information about the stream, the compression can change its characteristics to provide a more efficient and robust solution. Svanbro proposed that the context-specific compression options be signalled in the full header (including any details of the mode that is going to be used). This is possible since the full header has to be received in order to start decompression of more compressed headers.

Issue: List based compression

Not discussed due to shortage of time. Bormann just informed that the newest version includes a mechanism for list based compression.

* Future

Bormann pointed out that the WG is scheduled to complete its current focus (RTP ROHC) between the current and the next WG meeting, and that we should have some discussion of the WG's future to be able to plan for the San Diego WG meeting.

As an introduction Bormann reviewed the charter and mentioned that it allows development of multiple solutions but does not require it. ROHC IP/UDP/RTP compression is optimised for typical 3G wireless voice links and for transparency.

** 0-byte solutions

The basic idea of zero-byte solutions (i.e., solutions that do not normally transmit information beyond the RTP payload) is to utilize radio frame timing information to advance the SN/timestamp. The limitations of such a scheme are:

Svanbro pointed out that using such a proposal would be equivalent to circuit-switched voice, in which case why not use a circuit-switched voice service directly. Some discussion of the transparency of the algorithm ensued. The point was made that in the spring, non-transparent solutions were not allowed, yet a 0-byte solution would require non-transparency. The reasoning behind this is that the WG already has a transparent solution that is progressing, and can therefore also look at other solutions.
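The 0-byte idea can be sketched in a few lines. The frame interval and timestamp stride are assumed values for illustration; the essential point is that nothing is carried in-band during a talkspurt:

```python
# Sketch of the 0-byte idea: when the link layer delivers one radio
# frame per 20 ms (assumed), the decompressor can regenerate the RTP
# SN and timestamp from frame timing alone, with no compressed header
# transmitted during a talkspurt.
FRAME_MS = 20            # assumed radio frame interval
TS_STRIDE = 160          # assumed RTP timestamp increment per frame

def regenerate(ref_sn, ref_ts, frames_elapsed):
    """Advance SN/TS purely from the number of elapsed link-layer frames."""
    return ref_sn + frames_elapsed, ref_ts + frames_elapsed * TS_STRIDE

# e.g. 5 frames after the last full header:
sn, ts = regenerate(1000, 16000, 5)
print(sn, ts)            # 1005 16800
```

The scheme's dependence on link framing is also its limitation: any deviation of the stream from the assumed stride (silence suppression, timestamp jumps) forces in-band signalling again, which is where the transparency concerns above come from.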

Instead of further discussing 0-byte solutions in the abstract, a specific proposal was presented:

** draft-hiller-rohc-gehco-00.txt

Hiller presented the proposal termed "Good Enough Header COmpression" (GEHCO). The motivation for the GEHCO proposal has been the integrated support of VoIP and multimedia directly to the mobile in cdma2000. In cdma2000, PPP is extensively used. The problem it poses is that, with 2 bytes of PPP overhead and a 1-byte compressed IP/UDP/RTP header, the overhead is significant. Even without the PPP overhead, the spectrum loss would be 13%.

Hiller first presented approaches often termed "header stripping and regeneration": by not sending headers at all, they maximise spectral efficiency. In a gateway approach, the mobile sets up a circuit path to a gateway. For non-transparent header compression, all necessary data and an associated circuit identifier are sent at the beginning of the connection.

The basic structure of GEHCO is:

For GEHCO to be usable, the following assumptions need to be made about the link:

One comment was that 3 types of channels are needed, not just 2:

It was identified that there is a possible problem if the RTP stream is not recognised by the compressor or if its type (e.g., cdma2000 audio) is misjudged. There was also concern and discussion about the required or recommended number of channels; however, this is outside the scope of this WG.

Finally, Hiller concluded the presentation:

Svanbro commented that GEHCO is a very specific solution to a very specific problem. Hiller disagreed, referring to the assumptions listed at the beginning. Some support was given by WG members for making GEHCO a working group item. Svanbro commented about semantically identical solutions. Bradner concluded that the charter does not allow link-specific solutions, and he proposed not to decide right now whether GEHCO and 0-byte solutions will be WG items.

Hiller noted that his proposal needs a PPP codepoint, which is why he brought it to the IETF. Bradner commented that a PPP codepoint can be obtained without a WG, but that he would recommend considering this approach more broadly for a variety of links, and that is why the ROHC WG should consider it.

Bormann summarized: there is some support for having this area of work being included in this WG, but not necessarily taking this particular document as a WG item.

* Medium term schedule

Bormann proposed to:

The work on 0-byte solutions should not interfere with the tight time schedule of the non 0-byte solution.

With respect to TCP ROHC, Bormann commented:

and concluded that a wider community than the people working on RTP ROHC needs to provide input on next-generation TCP compression. There was some concern whether it would be possible to actually get any new input, as RFC 1144 and, recently, RFC 2507 are in wide use. It was decided to solicit submissions on TCP ROHC for the San Diego IETF.


Header Compression Context Relocation in IP Mobile Networks
ROCCO and CRTP over WCDMA Air Interface - Field Trial Results
Good Enough Header Compression (GEHCO)
Lower Layer Guidelines for Robust Header Compression
SO Headers Based on CRC
SO Headers Based on Keywords