
    IETF 109 Post-Meeting Survey

• Jay Daley, IETF Executive Director

    7 Dec 2020

The results from our IETF 109 post-meeting survey are now available.

The results of the IETF 109 post-meeting survey are now available on a web-based interactive dashboard. Once again we are very grateful for the detailed feedback we have received, which we will continue to process over the next few months. The commentary below highlights areas where changes we have made based on feedback have been a success, and areas we still need to work on.

    Analysis

In total, 258 responses were received. Of those, 249 came from people who participated in IETF 109, out of a population of 1,282, giving a margin of error of ±5.6%. The regional weighting of respondents (Q1) was somewhat more towards Europe, and less towards the US and Canada, than we usually see, reflecting a change in regional participation most likely brought about by the time zone of the meeting.
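
For readers who want to cross-check the quoted figure, the margin of error can be reproduced with the standard formula for a sample proportion plus a finite population correction. A minimal sketch in Python, assuming a 95% confidence level and the conventional worst case p = 0.5 (both assumptions on our part):

```python
import math

def margin_of_error(n, population, p=0.5, z=1.96):
    """95% margin of error for a sample proportion, with finite population correction."""
    se = math.sqrt(p * (1 - p) / n)                       # worst-case standard error
    fpc = math.sqrt((population - n) / (population - 1))  # finite population correction
    return z * se * fpc

# 249 respondents out of 1,282 IETF 109 participants
print(f"{margin_of_error(249, 1282):.1%}")  # -> 5.6%
```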

The results for satisfaction questions include a mean and standard deviation using a five-point scoring system: Very satisfied = 5, Satisfied = 4, Neither satisfied nor dissatisfied = 3, Dissatisfied = 2, Very dissatisfied = 1. While there’s no hard and fast rule, a mean above 4.50 is sometimes considered excellent, 4.00 to 4.49 good, 3.50 to 3.99 acceptable, and below 3.50 poor, or very poor if below 3.00. The satisfaction score tables also include a top box (the total of Satisfied and Very satisfied) and a bottom box (the total of Dissatisfied and Very dissatisfied), both in percentages.
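
To make the scoring concrete, here is a minimal sketch of how a satisfaction question’s mean, standard deviation, and top/bottom boxes are computed. The response counts below are hypothetical, chosen only so the output reproduces the reported overall satisfaction figures (Q12: mean 3.84, 76% top box); the real per-question counts are on the dashboard.

```python
from statistics import mean, stdev

# Hypothetical response counts on the five-point scale
# (1 = Very dissatisfied ... 5 = Very satisfied)
counts = {1: 6, 2: 11, 3: 42, 4: 148, 5: 42}

scores = [score for score, n in counts.items() for _ in range(n)]
total = len(scores)

print(f"mean = {mean(scores):.2f}, sd = {stdev(scores):.2f}")  # mean = 3.84
print(f"top box = {(counts[4] + counts[5]) / total:.0%}")      # Satisfied + Very satisfied = 76%
print(f"bottom box = {(counts[1] + counts[2]) / total:.0%}")   # Dissatisfied + Very dissatisfied = 7%
```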

In this commentary, a comparison is made with the IETF 108 Meeting Survey using a comparison of means that assumes the two samples are independent; in practice they are neither fully independent nor fully dependent, and a dependent-means calculation may give a different result. A few comparisons are also made using a comparison of proportions.
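
For anyone wanting to replicate the significance testing, a sketch of the independent-samples comparison of means is below. The standard deviations and the IETF 108 respondent count used here are hypothetical placeholders, not figures from the survey:

```python
import math

def independent_means_z(m1, sd1, n1, m2, sd2, n2):
    """z statistic for a difference in means, treating the samples as independent."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (m1 - m2) / se

# Overall satisfaction, IETF 109 vs IETF 108; the SDs and the IETF 108 n are assumptions
z = independent_means_z(3.84, 0.90, 249, 3.95, 0.90, 220)
print(f"z = {z:.2f}, significant at the 95% level: {abs(z) > 1.96}")
```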

Q34 uses a technique that is new for our surveys: importance-satisfaction gap analysis. We ask the same question in two ways, once to gauge importance and once to gauge satisfaction, and then calculate the gap between the two means to provide a priority order for respondents’ views. This approach avoids us being misdirected by areas of low satisfaction that are also of low importance, and ensures that we focus on those where the gap between importance and satisfaction is greatest.
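
A sketch of the gap calculation, using three of the Q34 gaps reported later in this post; the individual importance and satisfaction means shown here are hypothetical, chosen only so that the gaps match the reported values:

```python
# Hypothetical importance/satisfaction means (five-point scales); the gap
# (importance minus satisfaction) matches the reported Q34 results.
features = {
    "Reliability":       {"importance": 4.70, "satisfaction": 3.69},  # gap +1.01
    "Audio quality":     {"importance": 4.50, "satisfaction": 4.05},  # gap +0.45
    "IPv6 connectivity": {"importance": 3.20, "satisfaction": 3.51},  # gap -0.31
}

# Largest gap first: these are the highest priorities to address.
gaps = {name: f["importance"] - f["satisfaction"] for name, f in features.items()}
for name, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:18} gap = {gap:+.2f}")
```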

    Overall satisfaction 

    The mean satisfaction score for IETF 109 (Q12) was 3.84 with 76% either ‘Satisfied’ or ‘Very satisfied’.  There is no statistically significant difference between this result and the 3.95 mean satisfaction score for IETF 108.  

    Preparation

This time around we asked a four-point scale question (Q10) on preparedness rather than a yes/no question, with 33% saying they were well prepared and 50% sufficiently prepared, for a total of 83%, down from the 93% who said they were prepared for IETF 108. Only 5% reported that they were definitely under-prepared, with the explanatory comments giving a variety of reasons why.

    Satisfaction with the agenda

We added an ‘N/A’ option this time to counter a problem we believe affected the last survey, where people scored parts of the agenda they did not attend; this makes these results much more reliable. The highest satisfaction score (Q13) was 4.00 for the “Newcomers’ sessions” and the lowest by far, at 2.80, was for “Opportunities for social interaction”. More on that below. The main WG/RG parts of the agenda generally received acceptable satisfaction scores in the 3.94-3.97 range, with BOFs (3.60) and HotRFC (3.55) slightly lower and “Side meetings” much lower at 3.08. Most of these are above the scores for IETF 108, but none of the differences are statistically significant.

Overall satisfaction with the agenda was 3.86, down slightly from 3.89 for IETF 108, but again not significantly.

    Satisfaction with the meeting structure

The overall satisfaction score for the meeting structure (Q16) was 3.88, with some elements (Q15) rated better, including the “5 day meeting” at 4.12, “60/120 minute session lengths” at 4.11 and the “30 minute break” at 4.10. The latter two show a statistically significant improvement over the equivalent questions for IETF 108, indicating that those changes were well made.

    The “8 parallel tracks” were rated noticeably lower at 3.64 with the “Overall length of each day” in the middle at 3.86. 

We asked two questions about the time zone to ensure that we respond correctly to the feedback. The first, on the “Bangkok time zone”, had a very poor satisfaction score of 2.89, while the second, on “The policy of scheduling online meetings in the timezone of the in-person meeting they replace”, was much higher at 3.45, though still poor. By comparison, the “Madrid time zone” question from IETF 108 had a satisfaction score of 3.86.

    To help understand if more changes are needed we asked a question (Q17) “Does this structure need a rethink for IETF 110 in March 2021?” to which 64% answered “No”, 25% “Yes - some minor tweaks” and 11% “Yes - a major rethink”.  This seems pretty conclusive and work is now needed to go through the comments to see what minor tweaks can be made, if any, to improve the meeting structure.

    Sessions

68% of participants joined 2-10 sessions, with 28% joining 11 or more (Q19). This is consistent with the answers from IETF 108, though we do not have data from in-person meetings to make a comparison.

35% experienced no conflicts at all (Q20), down from 42% for IETF 108, though this is not a statistically significant difference. Likewise, the distribution of the number of conflicts people experienced is similar (Q21). To help us understand whether we are improving the conflict situation, a new satisfaction question has been added (Q22), with an initial score of 3.81. The list of session conflicts that respondents provide is very valuable and we are considering ways to use it on an ongoing basis, not just from one meeting to the next.
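
To illustrate how the conflict proportions compare, here is a sketch of a two-proportion z test. The IETF 108 respondent count is an assumption (we use a sample size similar to IETF 109’s 249 respondents):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 35% conflict-free at IETF 109 (n = 249) vs 42% at IETF 108 (n assumed ~220)
z = two_proportion_z(0.35, 249, 0.42, 220)
print(f"z = {z:.2f}, significant at the 95% level: {abs(z) > 1.96}")
```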

39% of participants reported sessions that ran out of time (Q23), down from 58% for IETF 108, another indicator that the switch from 50/100 minute sessions to 60/120 minute sessions has produced the desired effect.

    Participation mechanisms

IETF 109 had a number of technical issues with the participation mechanisms, and we expected this to be reflected in the satisfaction scores (Q24). This was indeed the case: the Meetecho satisfaction score was 3.73, a statistically significant drop from 4.01 for IETF 108. This is clearly an area where significant improvement is required, which will be the subject of a future blog post.

The satisfaction score for jabber was 3.68. We did not ask about jabber for IETF 108, as no changes had been made, so we need to go back to IETF 107, when the score was 3.54; the difference is not significant. We perhaps could have asked about the overall group chat experience but instead asked specifically about the Matrix and Zulip trials, which scored 3.87, though with only 23 respondents.

    The standout result was for the YouTube streams, which gained a satisfaction score of 4.20, the highest for any element of the IETF 109 meeting.

To assist with planning future choices around Remote Participation Services, we asked a set of importance-satisfaction gap questions (Q34 in the dashboard, though not that number in the survey).

• Top results (the biggest gaps, and therefore the highest priorities to address): Reliability 1.01, Audio quality 0.45, Usability 0.39, Native client 0.26.
• Bottom results (satisfaction higher than importance, so nothing more needs to be done on these, and perhaps too much already has been): IPv6 connectivity -0.31, Customizability of the user interface -0.29, Chat integrated with jabber rooms -0.28, Built on open standards -0.22.
• Middle results, for completeness: Cross platform support 0.15, Browser client 0.12, Access to meeting materials 0.08, Support 0.07, Video quality -0.05.

The satisfaction score for Gather was 3.59, up slightly but not significantly from 3.51 for IETF 108. We added a new question, suggested by a participant, about how people used Gather (Q27); the most common answer was “To socialise”, though the majority of respondents, at 55%, did not use Gather at all. We clearly need to do more to encourage its use for social interaction.

    Problem reporting

For the second meeting in a row, we asked about participants’ experiences with our support services (Q28). 13% reported a problem, down from 19% for IETF 108. The anecdotal experience of the team is that this drop was largely due to changes made to Datatracker to handle people who registered for the meeting with one email address and then tried to join with another.

The satisfaction score for this meeting was 3.63, well down from 4.18 for IETF 108, with only 63% either Satisfied or Very satisfied. This is a concerning fall, and significant changes are already planned for our problem reporting, with rollout starting before the end of the year.

    Final feedback

    As always we provide an open-ended question at the end (Q33).  Many of the comments echoed those from earlier in the survey and again we have a number of people noting that an online meeting can never properly replace an in-person meeting.  The many thank you comments are much appreciated.

    Thanks to our sponsors!

Finally, a big thanks to the sponsors that made this meeting possible:

• Cisco, our Meeting Host
• Huawei, our Gold Sponsor
• Fastly, our Silver Sponsor of Diversity & Inclusivity
• Afilias, a Bronze Sponsor
• ICANN, a Bronze Sponsor of Diversity & Inclusivity, and Running Code
• Cisco, Google and FutureWei, for sponsoring fee waivers
• Cisco and Juniper, our equipment sponsors
• and our other Global Hosts: Comcast, Ericsson, Huawei, Juniper, NBC Universal, Nokia
