  • Time to update the Network Time Protocol

    The Network Time Protocol (NTP) is a foundational Internet standard that has provided clock synchronization between computer systems since the early 1980s. It was first standardized as RFC 958 in September 1985, with several revisions in the following years. Discussions have been ongoing in the NTP working group for a few years now about updating NTPv4 to NTPv5. This update is motivated by lessons learned, ongoing vulnerabilities, and emerging use cases.

    • Karen O'Donoghue, NTP Working Group Chair
    • Dieter Sibold, NTP Working Group Chair
    30 Sep 2022
  • Report from IAB workshop on Analyzing IETF Data (AID) 2021 published

    The Internet Architecture Board (IAB) has published the official report from the workshop on Analyzing IETF Data (AID) 2021.

    28 Sep 2022
  • Applied Networking Research Prize presentations at IETF 115

    The Internet Research Task Force (IRTF) open session at the IETF 115 meeting will feature presentations on research into domain hijacking, the IETF's organizational culture, and DDoS attack detection and mitigation.

    27 Sep 2022
  • Supporting diversity and inclusion at IETF meetings by providing childcare

    Thanks to the generous support of IETF Diversity & Inclusion sponsors, onsite childcare at an IETF meeting was provided for the first time ever during IETF 114. The successful experience and continued support of sponsors means it will again be offered at the IETF 115 meeting on 5-11 November 2022.

    14 Sep 2022
  • IETF Annual Report 2021

    The IETF Annual Report 2021 provides a summary of Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Research Task Force (IRTF), and RFC Editor community activities from last year.

    9 Sep 2022


          IETF 114 post-meeting survey

• Jay Daley, IETF Executive Director

          18 Aug 2022


The results of the IETF 114 post-meeting survey are now available on a web-based interactive dashboard. As always, we are very grateful for the detailed feedback we have received, which we will continue to process over the next few months. The commentary below highlights where changes made in response to feedback have been a success, and areas we still need to work on.

          Analysis

In total, 242 responses were received, all from people who participated in IETF 114. At the time of writing, we are still confirming the observed number of participants (distinct from those who registered), so no margin of error for this survey is available yet. The table below shows participation in the meeting and the survey for the last six meetings:

Participant numbers for the last six IETF meetings and survey response rate

| Meeting | Registered | Observed | Onsite | Remote | Survey responses | Margin of error |
| --- | --- | --- | --- | --- | --- | --- |
| IETF 114 Philadelphia | 1427 | - | - | - | 242 | - |
| IETF 113 Vienna | 1428 | 1273 | - | - | 164 | +/- 7.15% |
| IETF 112 Online | 1344 | 1175 | - | 1175 | 157 | +/- 7.28% |
| IETF 111 Online | 1411 | 1329 | - | 1329 | 164 | +/- 7.17% |
| IETF 110 Online | 1329 | 1196 | - | 1196 | 287 | +/- 5.05% |
| IETF 109 Online | 1279 | 1282 | - | 1282 | 249 | +/- 5.6% |
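For readers curious how these margins are derived: the published figures are consistent with a 95% confidence interval at the most conservative proportion (p = 0.5), with a finite population correction against the observed participant count. The exact formula is an assumption inferred from the published numbers, not something stated in this post; a sketch:

```python
import math

def margin_of_error(sample_n: int, population_n: int,
                    z: float = 1.96, p: float = 0.5) -> float:
    """95% margin of error for a proportion, with finite population correction."""
    se = z * math.sqrt(p * (1 - p) / sample_n)  # standard error at worst-case p = 0.5
    fpc = math.sqrt((population_n - sample_n) / (population_n - 1))  # finite population correction
    return se * fpc

# IETF 113: 164 survey responses from 1273 observed participants
print(round(margin_of_error(164, 1273) * 100, 2))  # matches the +/- 7.15% in the table
```

The same calculation reproduces the other rows (e.g. 157 of 1175 for IETF 112 gives 7.28%), which is why no margin can be published for IETF 114 until the observed participant count is confirmed.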

The results for satisfaction questions include a mean and standard deviation using a five-point scoring system: Very satisfied = 5, Satisfied = 4, Neither satisfied nor dissatisfied = 3, Dissatisfied = 2, Very dissatisfied = 1. While there’s no hard and fast rule, a mean above 4.50 is sometimes considered excellent, 4.00 to 4.49 good, 3.50 to 3.99 acceptable, and below 3.50 poor (or very poor if below 3.00). The satisfaction score tables also include a top box (the total of satisfied and very satisfied) and a bottom box (the total of dissatisfied and very dissatisfied), both in percentages.
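As a concrete illustration of this scoring, the sketch below computes the mean, standard deviation, and top/bottom box percentages from a distribution of answers on the five-point scale. The response counts are made up for illustration, not taken from the survey:

```python
# Hypothetical response counts, Very dissatisfied (1) .. Very satisfied (5)
counts = {1: 4, 2: 10, 3: 20, 4: 120, 5: 88}

n = sum(counts.values())
mean = sum(score * c for score, c in counts.items()) / n
variance = sum(c * (score - mean) ** 2 for score, c in counts.items()) / n
std_dev = variance ** 0.5

top_box = 100 * (counts[4] + counts[5]) / n      # Satisfied + Very satisfied
bottom_box = 100 * (counts[1] + counts[2]) / n   # Dissatisfied + Very dissatisfied

print(f"mean={mean:.2f} sd={std_dev:.2f} top={top_box:.1f}% bottom={bottom_box:.1f}%")
```

With these invented counts the mean lands at 4.15, which the scale above would classify as Good.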

In this commentary, comparisons with the IETF 113 Meeting Survey results use a comparison of means that treats the two samples as independent, even though they are neither fully independent nor fully dependent; a dependent-means calculation might give a different result. Some comparisons may be made using a comparison of proportions.
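The independent-samples comparison described here can be sketched as a two-sample z-test on the means. The standard deviation used below is a hypothetical stand-in (the per-question values live in the dashboard), so the printed result is illustrative only:

```python
import math

def z_statistic(mean_a: float, sd_a: float, n_a: int,
                mean_b: float, sd_b: float, n_b: int) -> float:
    """Two-sample z statistic for a difference in means, assuming independence."""
    se = math.sqrt(sd_a ** 2 / n_a + sd_b ** 2 / n_b)  # combined standard error
    return (mean_a - mean_b) / se

# IETF 113 (mean 4.36, n=164) vs IETF 114 (mean 4.19, n=242);
# 0.8 is an assumed standard deviation, not a survey figure
z = z_statistic(4.36, 0.8, 164, 4.19, 0.8, 242)
significant = abs(z) > 1.96  # 5% two-tailed threshold
print(round(z, 2), significant)
```

A dependent-means (paired) test would use the per-respondent differences instead of the combined standard error, which is why it could give a different result.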

          Demographics

          The geographic spread of participants (Q1) was quite different from previous surveys, reflecting the location of this meeting. As the demographics only refer to those people who responded to the survey, not the full set of participants, they will only be used for dimensional analysis.

          Satisfaction

Overall satisfaction for the meeting (Q10) was 4.19, with 89.40% either ‘Satisfied’ or ‘Very satisfied’. This is a statistically significant drop from the IETF 113 mean of 4.36 (with 90.85% either), which in turn was a statistically significant increase from the IETF 112 mean of 4.15.

          As we now have a good dataset of survey answers, here's a table that shows satisfaction scores for the last six meetings:

Satisfaction scores for the last six meetings

| | IETF 114 | IETF 113 | IETF 112 | IETF 111 | IETF 110 | IETF 109 |
| --- | --- | --- | --- | --- | --- | --- |
| Overall satisfaction | 4.19 | 4.36 | 4.15 | 4.13 | 4.20 | 3.84 |
| AGENDA | | | | | | |
| Overall agenda | 4.06 | 4.16 | 4.11 | 3.91 | 4.04 | 3.86 |
| Sessions for new WGs | 4.15 | 4.18 | 4.10 | 4.03 | 4.00 | 3.94 |
| Sessions for existing WGs | 4.10 | 4.24 | 4.19 | 4.04 | 4.18 | 3.97 |
| BOFs | 4.09 | 4.04 | 3.92 | 4.01 | 3.87 | 3.60 |
| Sessions for existing RGs | 3.95 | 4.13 | 4.05 | 3.99 | 4.10 | 3.94 |
| Plenary | 3.98 | 3.94 | - | 3.91 | 4.03 | 3.76 |
| Side meetings | 3.73 | 3.52 | 3.46 | 3.84 | 3.22 | 3.08 |
| Hackathon | 4.30 | 4.09 | 3.83 | 4.14 | 4.10 | 3.79 |
| HotRFC | 3.94 | 4.17 | 3.54 | - | - | 3.55 |
| Office hours | 4.09 | 3.96 | 3.91 | 4.12 | 3.96 | 3.86 |
| Opportunities for social interaction | 3.89 | 3.51 | 2.79 | 2.90 | 3.11 | 2.80 |
| STRUCTURE | | | | | | |
| Overall meeting structure | 4.19 | 4.26 | 4.23 | 4.08 | 4.20 | 3.88 |
| Start time | 4.20 | 4.12 | 3.95 | 3.01 | 3.96 | 2.89 |
| Length of day | 4.10 | 4.20 | 4.21 | 3.93 | 4.12 | 3.86 |
| Number of days | 4.30 | 4.23 | 4.36 | 4.14 | 4.26 | 4.12 |
| Session lengths | 4.25 | 4.31 | 4.26 | 4.12 | 4.17 | 4.11 |
| Break lengths | 4.25 | 4.16 | 4.15 | 4.09 | 4.16 | 4.10 |
| Number of parallel tracks | 3.86 | 3.92 | 3.92 | 3.60 | 3.58 | 3.64 |

          Satisfaction with the agenda and meeting structure goes up and down between meetings, with the only noticeable trend being the improvements from the early fully online meetings. There are some parts that repeatedly score Acceptable or below, including side meetings and opportunities for social interaction, and some highly variable parts such as HotRFC.

New for this dashboard are some visualisations and tables providing a breakdown of these views by participation type: onsite or remote. (Be warned that, due to limitations in the product, some tables read Very satisfied to Very dissatisfied while all visualisations read in the other direction.) These clearly show the difference in perceptions between onsite and remote participants. When reading these tables, please note that the sample size for remote participants is much smaller than that for onsite participants, so the margin of error is about double (see above for more details).

          Preparedness and newcomers

As with the last survey, this section was reduced to just three questions, with none about the specific resources provided for preparation. Overall satisfaction with everything we provide was Good at 4.46 (Q6), and preparedness (Q8) was 3.11, down from 3.18 for IETF 113 (this is a four-point-scale question, so the score does not fit the scale above).

Q52, shown only to newcomers (those who have participated in fewer than six meetings), asks about the sessions and materials provided for them, with some great results: the Onsite newcomers' overview at 4.55, the Newcomers' feedback session at 4.50, and the Blog post on sessions for newcomers at 4.35. The only result below Good was the Onsite newcomers' dinner.

          Sessions and conflicts

29.77% of respondents experienced no session conflicts (Q18), the lowest figure for the last six meetings; however, satisfaction with conflict avoidance, at 3.78 (Q20), was roughly in line with the trend over the last six meetings. Full details are in the table below:

Conflict avoidance key metrics for the last six meetings

| | IETF 114 | IETF 113 | IETF 112 | IETF 111 | IETF 110 | IETF 109 |
| --- | --- | --- | --- | --- | --- | --- |
| No session conflicts | 29.77% | 35.53% | 36.67% | 40.79% | 34.68% | 35.09% |
| Satisfaction with conflict avoidance | 3.78 | 3.89 | 4.00 | 3.76 | 3.73 | 3.81 |

A new question (Q20a) was added to solicit views on possible changes to the meeting structure in order to make more session slots available. Each of the measures had around 20% objecting, suggesting that any of them would be possible. To assist with comparisons, a mean score is calculated for each option with Yes = 3, Only if needed = 2, No = 1; this should not be compared to other means in this survey.
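The per-option score is just a weighted average over those three answers; a sketch with illustrative (made-up) response counts:

```python
WEIGHTS = {"Yes": 3, "Only if needed": 2, "No": 1}

def option_mean(responses: dict) -> float:
    """Mean score for one option, with Yes=3, Only if needed=2, No=1."""
    n = sum(responses.values())
    return sum(WEIGHTS[answer] * count for answer, count in responses.items()) / n

# Hypothetical counts for one proposed change to the meeting structure
votes = {"Yes": 90, "Only if needed": 80, "No": 40}
print(round(option_mean(votes), 2))
```

Because the scale runs 1 to 3 rather than 1 to 5, these means are not comparable to the satisfaction means elsewhere in this survey.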

While "Have a longer day" appears to be the preferred option overall, when broken out by participation type, onsite participants prefer "Have a longer day" and remote participants prefer "Add a session on Friday afternoon". The option of "Add another track" came last.

          Participation mechanisms

The following table shows the satisfaction scores for our various participation mechanisms over the last six meetings. Gather remains a problem, but we have been unable to find any alternative that could replace it (suggestions most welcome). Satisfaction with Zulip remains only Acceptable, and we will track whether that improves as the service matures.

          For the first time in some years, we asked about the onsite network and will track that score over time. This meeting saw some external challenges that are likely reflected in the score for the network.

Satisfaction with participation mechanisms for the last six meetings

| | IETF 114 | IETF 113 | IETF 112 | IETF 111 | IETF 110 | IETF 109 |
| --- | --- | --- | --- | --- | --- | --- |
| Meetecho | 4.23 | 4.36 | 4.36 | 4.29 | 4.30 | 3.73 |
| Gather | 3.06 | 3.04 | 3.40 | 3.77 | 3.90 | 3.59 |
| Jabber | - | 3.80 | 3.75 | 3.68 | 3.85 | 3.68 |
| Zulip | 3.56 | 2.91 | - | - | 3.90 (trial) | - |
| Audio streams | 4.05 | 4.14 | 4.41 | 3.84 | 4.22 | 3.90 |
| YouTube streams | 4.22 | 4.25 | 4.41 | 4.09 | 4.37 | 4.20 |
| Onsite network and WiFi access | 3.82 | - | - | - | - | - |

          COVID management

This was our second onsite meeting with COVID management restrictions in place, and we tweaked our question on COVID management (Q24a) to add views on the mask-wearing policy, with satisfaction at 4.08 (Good). Satisfaction with compliance with the mask-wearing policy was 4.02, still Good but down from 4.44 for IETF 113, though in that case the rules were legal regulations rather than IETF policy. Satisfaction with compliance with individual social-distancing preferences was 4.04, also down from 4.40 for IETF 113. Finally, satisfaction with communications regarding COVID was 4.24, also down from 4.59. Clearly, more needs to be done to get these scores back to the IETF 113 level.

A new question was added (Q24c) about usage of the COVID-specific facilities (tests, masks, badges), with 64% reporting using the free COVID tests, 59% the free FFP2 masks, and 53% the free social-distancing badges (note the text of that last question was cut off in the survey, but it appears that people understood it). The one issue identified was that 8% did not know about the free social-distancing badges.

          Problem reporting

The table below shows the percentage of survey respondents who reported a problem (Q23) and their satisfaction with the response received. As can be seen, this has jumped up and down, with a satisfaction peak at IETF 112, our last fully online meeting, and a decline since. This may well reflect the larger than usual number of local network and connectivity issues the NOC had to contend with. The request in the comments for queue visibility has been noted; it is something we had hoped to have a couple of meetings ago, but issues with the ticketing system have meant it has not yet been achieved.

          When broken out by participation type (onsite, remote) there is a noticeable difference in satisfaction with problem resolution.

Key metrics on problem reporting for the last six meetings

| | IETF 114 | IETF 113 | IETF 112 | IETF 111 | IETF 110 | IETF 109 |
| --- | --- | --- | --- | --- | --- | --- |
| Reported a problem | 21.86% | 14.47% | 11.84% | 12.50% | 5.24% | 13.49% |
| Satisfaction with the response | 4.15 | 4.25 | 4.43 | 4.00 | 4.31 | 3.63 |

          Final feedback

          A lot of detailed feedback was provided and it will take some time to digest this and act on it.

The comments about the mechanics of mask wearing have made it through into the current consultation on IETF 115. The feedback that we should stop imposing a mask policy has been listened to, but the consultation proposes continuing with one. If you are one of those who commented that masks should not be required, please make your views known in that consultation.

The comments about seating areas and the hotel location/quality have been listened to. As we book our meetings some years in advance (it would be impractical not to), there is always a risk that a hotel declines in quality after booking. That is rare, and this is one of the few examples so far.

The comments about pets have been noted; this was something unexpected, and we probably need to open it up to a broader community discussion.

          Finally, everyone loved the coffee carts!

