
IETF 110 post-meeting survey

25 Mar 2021

The results from our IETF 110 post-meeting survey are now available.

The results of the IETF 110 post-meeting survey are now available on a web-based interactive dashboard. We are always very grateful for the detailed feedback that we have received and will continue to process it over the next few months. The commentary below highlights a couple of areas where changes we have made based on feedback have been a success, and areas we still need to work on.

Analysis

In total, 299 responses were received. Of those, 287 respondents participated in IETF 110, out of a total of 1196 participants, giving a margin of error of +/- 5.05%.
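
For anyone who wants to reproduce that figure, a minimal sketch of the calculation is below. It assumes the standard margin-of-error formula at 95% confidence with the most conservative proportion p = 0.5 and a finite population correction; under those assumptions it reproduces the +/- 5.05% quoted above.

```python
import math

# Assumed inputs: 287 respondents who attended IETF 110, out of 1196 participants.
N = 1196   # meeting participants
n = 287    # survey respondents who attended
z = 1.96   # 95% confidence level
p = 0.5    # most conservative proportion

standard_error = math.sqrt(p * (1 - p) / n)
fpc = math.sqrt((N - n) / (N - 1))        # finite population correction
margin_of_error = z * standard_error * fpc

print(f"margin of error: +/- {margin_of_error:.2%}")  # ~5.05%
```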

The results for satisfaction questions include a mean and standard deviation using a five-point scoring system: Very satisfied = 5, Satisfied = 4, Neither satisfied nor dissatisfied = 3, Dissatisfied = 2, Very dissatisfied = 1. While there's no hard and fast rule, a mean above 4.50 is sometimes considered excellent, 4.00 to 4.49 good, 3.50 to 3.99 acceptable, and below 3.50 poor, or very poor if below 3.00. The satisfaction score tables also include a top box (the total of 'Satisfied' and 'Very satisfied') and a bottom box (the total of 'Dissatisfied' and 'Very dissatisfied'), both as percentages.
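
As an illustration of how those figures are derived, here is a small sketch. The response counts for the single question shown are invented for the example; the real per-question tallies are on the dashboard.

```python
from statistics import mean, stdev

# Five-point scale used for satisfaction questions.
SCALE = {
    "Very satisfied": 5,
    "Satisfied": 4,
    "Neither satisfied nor dissatisfied": 3,
    "Dissatisfied": 2,
    "Very dissatisfied": 1,
}

# Hypothetical response counts for one question (287 responses in total).
counts = {
    "Very satisfied": 110,
    "Satisfied": 130,
    "Neither satisfied nor dissatisfied": 30,
    "Dissatisfied": 12,
    "Very dissatisfied": 5,
}

# Expand the tally into individual scores and summarise.
scores = [SCALE[answer] for answer, c in counts.items() for _ in range(c)]
total = len(scores)

top_box = (counts["Very satisfied"] + counts["Satisfied"]) / total
bottom_box = (counts["Dissatisfied"] + counts["Very dissatisfied"]) / total

print(f"mean       {mean(scores):.2f}")
print(f"std dev    {stdev(scores):.2f}")
print(f"top box    {top_box:.0%}")     # Satisfied + Very satisfied
print(f"bottom box {bottom_box:.0%}")  # Dissatisfied + Very dissatisfied
```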

In this commentary, a comparison is made with the IETF 109 Meeting Survey results using a comparison of means that assumes the two samples are independent. They are not fully independent, but neither are they fully dependent, and a dependent-means calculation may give a different result. A few comparisons are also made using a comparison of proportions.
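
To make those significance comparisons concrete, here is a rough sketch of an independent-samples z test for means, plus the analogous test for proportions. The two means in the example are the overall satisfaction scores discussed below, but the standard deviations and the IETF 109 sample size are placeholders rather than values taken from either survey.

```python
import math

def compare_means(mean1, sd1, n1, mean2, sd2, n2):
    """z statistic for a difference of means, treating the samples as independent."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

def compare_proportions(p1, n1, p2, n2):
    """z statistic for a difference of proportions, pooling the two samples."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative only: means from this post, but the standard deviations (0.7, 0.9)
# and the IETF 109 sample size (287) are placeholders, not real survey values.
z = compare_means(4.21, 0.7, 287, 3.84, 0.9, 287)
print(f"z = {z:.2f}; significant at the 95% level if |z| > 1.96")
# compare_proportions() is used the same way for percentage-based questions.
```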

Overall satisfaction 

The mean satisfaction score for IETF 110 (Q12) was 4.21, with 89% either ‘Satisfied’ or ‘Very satisfied’. This is a statistically significant improvement from the 3.84 mean satisfaction score for IETF 109.

Preparedness

This time only 1.45% were definitely under-prepared (Q10), down from 5% for IETF 109. In a future survey we should unpick the various components of preparation, including reading drafts, learning the technology, and so on.

Satisfaction with the agenda

All of the IETF 110 scores (Q13) were up on IETF 109, and most of those improvements were statistically significant. Many of the scores tipped into the >4.00 range, indicating a good level of satisfaction, though still not excellent. The highest was 'Sessions for existing working groups' at 4.18, up from 3.97, and the lowest was 'Opportunities for social interaction' at 3.11, up from 2.80. Overall agenda satisfaction was 4.04, up from 3.86, another statistically significant improvement.

Satisfaction with the meeting structure

Satisfaction with most parts of the IETF 110 meeting structure (Q15), including the local timezone and the timezone policy, was up on IETF 109. The notable standout, where satisfaction dropped, though not in a statistically significant way, was the switch from 8 parallel tracks to 9 parallel tracks. Given that satisfaction was up on almost every other measure, we can probably read this as a meaningful reaction to this switch.

Sessions

35% experienced no session conflicts (Q20) at IETF 110, the same as for IETF 109, and satisfaction with conflict avoidance (Q22), at 3.73, was down from 3.81 for IETF 109, a statistically significant drop. We will need to look harder at conflict avoidance for IETF 111.

38% of people reported sessions running out of time (Q23), only slightly down from 39% for IETF 109, with similar reasons given in the comments, again indicating that we need to look harder at this.

Participation mechanisms

With all of the effort put into improving this area of the IETF 110 meeting experience, it was good to see satisfaction with Meetecho (Q24) jump to 4.30, from 3.73 for IETF 109. All of the other scores rose too. This time round we separated the Matrix trial from the Zulip trial, with Zulip ahead and even slightly higher than Jabber, though not significantly.

Problem reporting

This is another area that had a lot of effort put into it before and during the meeting, and satisfaction with the response to problem reports (Q29) jumped from 3.63 to 4.31.

Final feedback

The final open-ended question again included some very useful feedback, such as multiple people wishing for a return to face-to-face meetings, and some areas that require further investigation, including one very concerning comment about sexist behaviour that I've quoted here in full:

There were a couple of uncomfortable comments in the plenary session chat. Someone commented that "women at the IETF don't often serve in leadership roles because they know better than to stick their heads up out of their foxholes," and someone responded (I assume sarcastically) "yeah, because the problem with women at the IETF is foxes and holes..." It seemed kind of rude and sexist for a professional meeting.

Thanks to our sponsors!

Finally, a big thanks to our sponsors:

  • Google, our Meeting Host
  • Comcast as a Bronze Sponsor of Diversity & Inclusivity
  • ICANN as a Bronze Sponsor of Diversity & Inclusivity and Running Code
  • Cisco and Juniper, our equipment sponsors
  • and our critically important Global Hosts: Comcast, Ericsson, Huawei, Juniper, NBC Universal, Nokia
