
      IETF 117 post-meeting survey

• Jay Daley, IETF Executive Director

      11 Aug 2023

      IETF 117 San Francisco was held 22-28 July 2023

The results of the IETF 117 San Francisco post-meeting survey are now available on a web-based interactive dashboard. Thank you to everyone who responded to this survey; we use your views to continually adjust the meeting experience.

      Analysis

We received 253 responses, of which 251 were from IETF 117 participants: 192 onsite and 59 remote. As only 2 of the respondents did not participate in IETF 117, the questions specific to them are not shown in the dashboard, but their views were read and considered. With 1579 registered participants, this gives the survey a maximum margin of error of +/- 5.66%.
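The quoted margin of error is consistent with the usual conservative estimate (p = 0.5, roughly 95% confidence) combined with a finite population correction. The short Python sketch below is only an illustration of that calculation, using our own hypothetical helper name and the participation figures above; it is not the tooling used to produce the survey results.

```python
import math

def max_margin_of_error(sample_size, population, z=1.96, p=0.5):
    """Conservative margin of error at ~95% confidence,
    with a finite population correction (hypothetical helper)."""
    standard_error = math.sqrt(p * (1 - p) / sample_size)
    fpc = math.sqrt((population - sample_size) / (population - 1))
    return z * standard_error * fpc

# 253 responses from 1579 registered participants
print(f"+/- {max_margin_of_error(253, 1579):.2%}")  # ~5.65%, close to the quoted +/- 5.66%
```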

The results for satisfaction questions include a mean and standard deviation using a five point scale scoring system of Very satisfied = 5, Satisfied = 4, Neither satisfied nor dissatisfied = 3, Dissatisfied = 2, Very dissatisfied = 1. While there's no hard and fast rule, a mean above 4.50 is sometimes considered excellent, 4.00 to 4.49 good, 3.50 to 3.99 acceptable, below 3.50 poor, and below 3.00 very poor. The satisfaction score tables also include a top box (the total of satisfied and very satisfied) and a bottom box (the total of dissatisfied and very dissatisfied), both as percentages. Please note that a small number of questions use a four point scale.
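As a concrete, entirely hypothetical example of how those figures are derived, the sketch below computes a mean, standard deviation, top box, and bottom box from an invented tally of responses on the five point scale; the counts are not real survey data.

```python
from statistics import mean, pstdev

# Five point scale described above
SCALE = {"Very satisfied": 5, "Satisfied": 4,
         "Neither satisfied nor dissatisfied": 3,
         "Dissatisfied": 2, "Very dissatisfied": 1}

# Invented response counts for one question (illustration only)
counts = {"Very satisfied": 70, "Satisfied": 90,
          "Neither satisfied nor dissatisfied": 25,
          "Dissatisfied": 10, "Very dissatisfied": 3}

scores = [SCALE[label] for label, n in counts.items() for _ in range(n)]
total = len(scores)

print(f"mean {mean(scores):.2f}, std dev {pstdev(scores):.2f}")  # mean 4.08 -> 'good'
print(f"top box {(counts['Satisfied'] + counts['Very satisfied']) / total:.1%}")
print(f"bottom box {(counts['Dissatisfied'] + counts['Very dissatisfied']) / total:.1%}")
```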

      Question changes since the last survey

For this survey we added some new questions about the venue and the onsite experience, in addition to the questions on accommodation added in the last survey. We also added some new questions specifically for those who participated online, to understand more about their experience.

      Actions taken following the last survey

      For this meeting, we made the following changes, prompted by survey feedback:

      • Rented a significant amount of hallway seating that was delivered to the venue and distributed to key areas.
• Provided each room with printed QR codes in a form that could be handed around. We also undertook a manual count of people in each session so that we could compare it to the Meetecho records and gauge how big a problem the non-recording of participation is.
• Set up a standard remote participation service (Webex) for the side meeting rooms and purchased remote microphones for the Owl360 A/V system already provided.

      Satisfaction

Overall satisfaction is 4.30, which is again a good result. With some key exceptions, the satisfaction scores remain high, reflecting the various improvements made since we returned to onsite meetings.

The table below shows the satisfaction scores for the last six meetings, along with colour-coded indicators for the five point scale above: excellent (🔵), good (🟢), acceptable (🟡), poor (🔴), very poor (⚫️).

Satisfaction scores for the last six meetings

| | IETF 117 San Francisco | IETF 116 Yokohama | IETF 115 London | IETF 114 Phila. | IETF 113 Vienna | IETF 112 Online |
| --- | --- | --- | --- | --- | --- | --- |
| Overall satisfaction | 4.30 🟢 | 4.30 🟢 | 4.28 🟢 | 4.19 🟢 | 4.36 🟢 | 4.15 🟢 |
| AGENDA | | | | | | |
| Overall agenda | 4.16 🟢 | 4.18 🟢 | 4.22 🟢 | 4.06 🟢 | 4.16 🟢 | 4.11 🟢 |
| Sessions for new WGs | 4.19 🟢 | 4.17 🟢 | 4.12 🟢 | 4.15 🟢 | 4.18 🟢 | 4.10 🟢 |
| Sessions for existing WGs | 4.22 🟢 | 4.22 🟢 | 4.22 🟢 | 4.10 🟢 | 4.24 🟢 | 4.19 🟢 |
| BOFs | 3.95 🟡 | 4.11 🟢 | 4.10 🟢 | 4.09 🟢 | 4.04 🟢 | 3.92 🟡 |
| Sessions for existing RGs | 4.12 🟢 | 4.14 🟢 | 4.10 🟢 | 3.95 🟡 | 4.13 🟢 | 4.05 🟢 |
| Plenary | 3.99 🟡 | 3.98 🟡 | 3.98 🟡 | 3.98 🟡 | 3.94 🟡 | - |
| Side meetings | 3.75 🟡 | 3.73 🟡 | 3.81 🟡 | 3.73 🟡 | 3.52 🟡 | 3.46 🔴 |
| Hackathon | 4.25 🟢 | 4.34 🟢 | 4.35 🟢 | 4.30 🟢 | 4.09 🟢 | 3.83 🟡 |
| HotRFC | 3.89 🟡 | 3.84 🟡 | 4.21 🟢 | 3.94 🟡 | 4.17 🟢 | 3.54 🟡 |
| Pecha Kucha | 4.15 🟢 | - | - | - | - | - |
| Office hours | 3.98 🟡 | 4.23 🟢 | 4.00 🟢 | 4.09 🟢 | 3.96 🟡 | 3.91 🟡 |
| Opportunities for social interaction | 4.11 🟢 | 3.72 🟡 | 3.98 🟡 | 3.89 🟡 | 3.51 🟡 | 2.79 ⚫️ |
| STRUCTURE | | | | | | |
| Overall meeting structure | 4.28 🟢 | 4.28 🟢 | 4.28 🟢 | 4.19 🟢 | 4.26 🟢 | 4.23 🟢 |
| Start time | 4.28 🟢 (9:30am) | 4.16 🟢 (9:30am) | 4.28 🟢 (9:30am) | 4.20 🟢 (10:00am) | 4.12 🟢 (10:00am) | 3.95 🟡 (12:00pm) |
| Length of day | 4.30 🟢 | 4.30 🟢 | 4.32 🟢 | 4.10 🟢 | 4.20 🟢 | 4.21 🟢 |
| Number of days | 4.27 🟢 (5+2) | 4.30 🟢 (5+2) | 4.32 🟢 (5+2) | 4.30 🟢 (5+2) | 4.23 🟢 (5+2) | 4.36 🟢 (5) |
| Session lengths | 4.41 🟢 (60/90/120) | 4.36 🟢 (60/90/120) | 4.32 🟢 (60/90/120) | 4.25 🟢 (60/120) | 4.31 🟢 (60/120) | 4.26 🟢 (60/120) |
| Break lengths | 4.32 🟢 (30/90) | 4.38 🟢 (30/90) | 4.36 🟢 (30/90) | 4.25 🟢 (30/90) | 4.16 🟢 (30/60) | 4.15 🟢 (30) |
| Number of parallel tracks | 4.08 🟢 (8) | 4.01 🟢 (8) | 3.90 🟡 (8) | 3.86 🟡 (8) | 3.92 🟡 (8) | 3.92 🟡 (8) |
| PARTICIPATION MECHANISMS | | | | | | |
| Meetecho | 4.35 🟢 | 4.45 🟢 | 4.45 🟢 | 4.23 🟢 | 4.36 🟢 | 4.36 🟢 |
| Gather | 3.52 🟡 | 3.46 🔴 | 3.37 🔴 | 3.06 🔴 | 3.04 🔴 | 3.40 🔴 |
| Zulip | 3.66 🟡 | 3.77 🟡 | 3.73 🟡 | 3.56 🟡 | 2.91 ⚫️ | - |
| Jabber | - | - | - | - | 3.80 🟡 | 3.75 🟡 |
| Audio streams | 4.02 🟢 | 4.21 🟢 | 4.04 🟢 | 4.05 🟢 | 4.14 🟢 | 4.41 🟢 |
| YouTube streams | 4.32 🟢 | 4.36 🟢 | 4.25 🟢 | 4.22 🟢 | 4.25 🟢 | 4.41 🟢 |
| CONFLICTS | | | | | | |
| Conflict avoidance | 3.90 🟡 | 3.94 🟡 | 3.91 🟡 | 3.78 🟡 | 3.89 🟡 | 4.00 🟢 |
| VENUE & ACCOMMODATION | | | | | | |
| Overall accommodation | 4.07 🟢 | 4.09 🟢 | - | - | - | - |
| Overall venue | 3.90 🟡 | - | - | - | - | - |
| Location | 3.60 🟡 | - | - | - | - | - |
| Venue facilities | 4.07 🟢 | - | - | - | - | - |
| Cost of rooms | 2.87 ⚫️ | - | - | - | - | - |
| Availability of rooms | 4.07 🟢 | - | - | - | - | - |
| ONSITE | | | | | | |
| Overall | 4.29 🟢 | - | - | - | - | - |
| Badge collection | 4.69 🔵 | - | - | - | - | - |
| WiFi | 3.98 🟡 | 4.06 🟢 | 4.10 🟢 | 3.82 🟡 | - | - |
| QR codes | 4.11 🟢 | - | - | - | - | - |
| Break F&B | 4.44 🟢 | - | - | - | - | - |
| Breakout seating | 4.08 🟢 | - | - | - | - | - |
| Signage | 4.22 🟢 | - | - | - | - | - |
| Coffee carts | 4.56 🔵 | - | - | - | - | - |
| Childcare | 4.06 🟢 | - | - | - | - | - |

      Areas for improvement

      Gather / Social interaction for remote participants

      This is an ongoing issue that we are struggling to address. Please contact me directly if you have any thoughts on this.

      Zulip

Satisfaction with Zulip is still below target, but we continue to see this largely as a lack of familiarity that will change over time, rather than a fundamental issue with the product.

      Side meetings

While we added the remote participation service identified in previous feedback, it does not seem to have made any difference to the satisfaction score. We will continue to watch this to see if things improve as people become more familiar with the service.

      Conflict avoidance

We again followed the new process to reduce conflicts, as described in our blog post, but this again failed to improve the satisfaction score. We will consider the feedback and look at other ways to improve this.

      Office Hours

This score took an unexpected and significant dip, which we are in the process of trying to understand.

      Individual comments

      The individual comments covered a number of key themes:

• The location of the venue. The feedback is that this area of town, and SF generally, is too run down and unsafe to meet in. Unfortunately, this meeting and IETF 127 in 2026, which will be in the same venue, were both booked many years ago and then deferred due to concerns about people being able to attend. In other words, we're stuck with them, but we will watch how this area develops, and if it does not appear to have improved closer to 2026 we may consider taking the financial hit of a cancellation.
• Options for dining. The feedback is twofold: firstly, that the hotel had poor evening dining options, and secondly, that this meant people weren't able to bump into each other and chat over dinner. The latter problem was particularly felt by some new participants.
• WiFi. The feedback is that the WiFi was unreliable. This did not become clear until late in the week because few reports reached the helpdesk, which may be a result of us removing the physical helpdesk due to lack of usage. It appears this was caused by a bug either in the network kit or in the network stack on certain machines, and the NOC are still tracking it down.
• Room rates. Satisfaction with these was very low, backed up by multiple comments. There were an unusual number of complications with the hotel room rates and we did not communicate these well enough, which we will correct in future. We've heard the calls for us to recommend a nearby, much cheaper hotel, but we're reluctant to do this as the only way we can stop a rate from being raised when people book is by committing to a minimum number of rooms. However, when we have done this we've seen poor take-up and in some cases suffered a loss. We will look at this again to see if we can manage it any other way.

      Why people participated remotely

For the first time, we asked people who participated remotely why they did so (Q5) and whether they would have preferred to participate onsite (Q5a). We heard a lot of feedback during the meeting that visas were a factor, and in the survey 9 of 58 people identified them as such. We need data from more meetings to see how this compares.

The major factor, though, cited by 36 of the 58, was a lack of funding to travel; all of those 36 would have participated onsite if they could have. The margin of error here is 12.31% (58 of the 674 remote participants answered this question), which suggests that between 335 and 501 remote participants would have participated onsite with funding.
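As an illustration of how that range can be reproduced, the sketch below applies the same conservative margin-of-error formula (p = 0.5, roughly 95% confidence, finite population correction) to the 36-of-58 proportion and scales it to the 674 remote participants; it is our reconstruction with hypothetical variable names, not the survey tooling.

```python
import math

def margin_of_error(sample_size, population, z=1.96, p=0.5):
    """Conservative margin of error with a finite population correction (hypothetical helper)."""
    return (z * math.sqrt(p * (1 - p) / sample_size)
            * math.sqrt((population - sample_size) / (population - 1)))

remote_participants = 674
respondents = 58
cited_lack_of_funding = 36

moe = margin_of_error(respondents, remote_participants)   # ~12.31%
share = cited_lack_of_funding / respondents                # ~62%
low = (share - moe) * remote_participants
high = (share + moe) * remote_participants
print(f"margin of error +/- {moe:.2%}")
print(f"estimated {low:.0f} to {high:.0f} remote participants")  # roughly 335 to 501
```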

      And finally

Thank you to everyone who responded to this survey; your feedback is much appreciated.

