The Internet Engineering Task Force (IETF) Live Transcription

IETF 88 takes place November 3-8 in Vancouver, BC, Canada. Please see the IETF 88 Meeting page for more information about the event.

IETF 88 Technical Plenary Transcription

On Wednesday, November 6, 2013, from 9:00-11:30am Pacific Time (17:00-19:30 UTC) the Technical Plenary focused on the topic of "Internet Hardening". The complete transcript can be found below.

You can also view the video recording, listen to the audio recordings, and view other materials related to the Technical Plenary.

For some background information about the topic of pervasive monitoring, please see:

If you are active in social media, you can see updates and photos of the Technical Plenary via:

The complete transcript of the technical plenary is below:

>>Russ Housley: Good morning. If people would start finding a seat, please. And since this session will be recorded and live-streamed, please silence your cell phones.

To minimize the amount of stage adjustments during the plenary, I would like all the speakers to come to the stage now. That's Lars and Heather and Alexey, and Alissa, and Bruce, and Brian, and Stephen.

Thank you.

From up here, it looks rather odd, because this part of the room is densely packed, and there's lots of empty chairs over there.

So if you're looking for a seat, there's plenty over here (indicating).

Good morning. If you could -- the people in the back would please close the doors to get the noise down.

Again, there's lots of open seats at the right hand from where I'm standing side of the room.

Today's agenda has three pieces of reporting at the beginning: The IAB chair report, the IRTF chair report, the RSE and RSOC chair report.

Because of the amount of time we expect to spend on the rest of the agenda, we are going to keep those to an absolute minimum.

The technical topic is Internet hardening. And we have three speakers to queue up that topic. At the end of those three speakers, we will have the open mic. We have a longer time than normal for open mic because we think that this community will need it.

And the IAB will join the people who are already on stage for the open mic session.

I want to first share a few highlights of things that have happened since the last time the IETF met.

First, the IAB appointed Russ Mundy to the ICANN nominating committee for 2014. We want to thank Ole Jacobsen for serving in that role for the two previous years.

And then the IAB, along with the Internet Society, the IEEE, and the W3C, worked together on a message that says, basically, the OpenStand principles are the ones that can be used to help make sure that no participant is inappropriately modifying the outcome of a consensus process. Basically, those principles were discussed last year, and we think that they are the foundation of a trustworthy standards process, and so that statement was announced. And from the links at the bottom of the slide, if you fetch them from the Web site, you can see the full set of principles and the statement that the IAB made with the other three organizations.

The IAB chair, along with leaders from nine other Internet organizations, signed a statement that has now become known as the Montevideo Statement. I want to point out that the positions regarding IANA are consistent with RFC 6220 and previous IAB statements, including those made to NTIA in their notice of inquiry and further notice of inquiry. And, again, you can see the full statement by following the link on the slide.

The IAB also published RFC 6950, on architectural considerations of application features in the DNS.

And we sent some comments to the U.S. National Institute of Standards and Technology supporting the reopening of a comment period on their Special Publication 800-90A, which is the standard that we believe has some security issues with it. And we believe that the proposed changes should be adopted going forward for all cybersecurity and cryptographic standards used by NIST.

Again, you can get the full statement from the link.

The IAB has two upcoming workshops. The first one will be in December, in Cambridge. This was discussed last plenary. And it's Internet technology adoption and transition. And it actually probably has a little bit to do with the discussion topic of today in terms of the economic aspects of it.

We also have a workshop planned for the Friday before the next IETF meeting. We expect a call for papers to come out later this month on that. And today's plenary will inform what actually comes out in that call for papers.

ICANN is reinvigorating a thing called the Technical Liaison Group. It is something that has existed for a long time but has actually been dormant. The IAB, in March 2005, decided to leave our seats in the Technical Liaison Group vacant. Since ICANN is reinvigorating it, we are going to name two people for this, one with a one-year term, and one with a two-year term.

We sent out a call for volunteers, and, again, the link on the slide is -- has the whole announcement there. But if you are interested and know a lot about the IETF and ICANN and the technical standards that are related -- associated with that relationship, please consider volunteering.

I'm not going to walk through every document on the slide. It's there for you if you need it. We -- These are the RFCs that we've completed in 2013 and the status of the Internet drafts that we're working on right now.

Since the last IETF, we have not received any appeals.

[ Applause ]

>>Russ Housley: The IAB has programs that support our getting the work done, essentially. I'm not going to walk through any of the reports. They're in the slides, if you're interested in what one of those is doing.


>>Lars Eggert: Good morning.

I'm Lars Eggert, I'm the IRTF chair. As Russ said, we're trying to keep this really short. So the slides are even shorter than they normally are. If you're interested in what I left out, it's in the proceedings of the IRTF open meeting which happened yesterday. So you can look at the slightly longer deck of slides. And I think Russ is finding them now.

So we had five meetings this week. This is sort of our usual set, about half of our groups; we have nine chartered at the moment. Some of the meetings have already happened. We already had our open meeting as well.

There's a new Network Coding Research Group that is sort of in the proposed category. I think after this IETF they will have met three times. Then we'll make a decision whether we are going to charter them. I think they're on a good track.

There's a lot of IPR in the network coding space, so we're trying to work on a charter that will make it very clear to all the participants, especially the new participants that this group is attracting, what the rules of the IETF and the IRTF are. But they certainly have been having very good discussions.

At Thursday morning's breakfast, the IAB reviews one of the research groups. This time, we're reviewing the Crypto Forum Research Group, which is also quite timely.

We had one publication on our stream, which is a document out of the Scalable Adaptive Multicast Research Group. They have one more RFC with the RFC Editor at the moment, and then they will close.


On the Applied Network Research Prize side, you missed the last prize talk of 2013, by Idilio Drago, on his measurements of how Dropbox performs. I found this very interesting. On his university network, for example, Dropbox was already, two years ago, one third of the traffic that YouTube caused on that same network. So it's not a small amount of traffic. And there's not a lot of data about how that client behaves. So, again, the minutes of the IRTF open meeting have the talk. And I think it got recorded by the (indiscernible) guys.

At the moment, we actually have the nomination period open for the 2014 awards. That closes on November 30, so in a few weeks. If you've come across any good research work that is related to the IETF and the IRTF, please nominate the authors of that paper. There's a link on the slide here that basically gets you to a Web form where you can, with a few clicks, nominate a paper. And then we'll consider it for the prize next year.

We're handing out these prizes jointly with ISOC. For 2013, you can see we had 36 nominations. We picked four. This is quite selective. People get invited to the IRTF open meeting, they hang around for the week, and they get exposed to the IETF and the IRTF. We've been doing this for a few years now, and several of the talks have actually led to new work in the IETF and the IRTF, which I consider great.


And that's it. Thank you.

>>Alexey Melnikov: Hello. I am Alexey Melnikov, the chair of RSOC.

RSOC oversees the work of the RFC Series Editor and the RFC Series in general. And on this slide, you can see the membership of the committee.

Since the last IETF, the two major things that happened are: RSOC helped review and made recommendations about the statement of work for the RFC Production Center and RFC Publisher, and also the contract for the RFC Series Editor.

>>Heather Flanagan: And I'm Heather Flanagan, the RFC Series Editor.

On the screen are the avenues of communication that people can most easily use to find out what's going on, what I'm doing, and ask general questions.

Next slide.

One of the things I've been working on since the last meeting is, of course, the style guide, highlighting some of the points that I've paid particular attention to. But, to be honest, this has not been the biggest thing I've been working on since Berlin.

Next slide.

No, that would be the RFC format.

So in Berlin, we chose a design team, and I thank them very much for their time and effort. The list of the members of the design team, as well as all of our work, is available at the Wiki link that's listed at the bottom of the page.

It's been a very active, very engaged group of people. I think we peaked out at 130 messages a day as we discussed the different requirements we were looking at for changes in format to the series.

So it was a very exciting, very exciting time.

We're going to go into more detail about this on -- at the BOF. And I strongly encourage you to actually look at the content of the Wiki, because the BOF is one hour. And there's a lot of material there. If you've actually looked at it ahead of time, it will be a much more efficient and effective conversation to have. But....

One more.

And there we go. Let's go on to the more interesting discussions today.

>>Russ Housley: I've asked Alissa to chair and facilitate the next part of the meeting, as she's the lead in the IAB's privacy program.

>>Alissa Cooper: Okay. So I'm sure you are all here -- I think this is the fullest plenary, technical plenary session, that I've ever seen -- so you're all here to hear about hardening of the Internet. That's the technical topic for today. And what that means is that we're focusing on what can be done to protect the Internet and its users from pervasive surveillance by well-funded adversaries. We'd like to focus on who needs to be doing work in the technical community specifically.

So there's lots of different constituencies that are impacted by these recent developments. But we are specifically interested in what can be done within the technical community, both here in the IETF and also elsewhere, where it relates to work within the IETF or IETF participants.

So we can do that through protocol design and development in the IETF. Within the IAB, we can do that by thinking about longer-term architectural issues, opportunities for stronger security and potential barriers to stronger security. And we can do that all together as a community. So we want people to really think about what their role can be in this process.

We have three speakers here today to help us think about that.

First we'll hear from Bruce Schneier, who is a fellow at the Berkman Center for Internet and Society, among many other things. Bruce has been working intensively on this issue, writing about it, drawing on his access to the tens of thousands of documents recently obtained by Edward Snowden.

Next we'll have Brian Carpenter, who's the former IETF chair, former IAB chair, and former chair of the ISOC Board of Trustees. He's going to give us an overview of the IETF history that's related to this topic.

And then we'll have Stephen Farrell, who's the -- one of the sitting security area directors, who's going to guide our discussion about potential IETF activities.

So with that, I will turn it over to Bruce.

>>Bruce Schneier: Good morning.

So my favorite part about all of this are the code names. The most recent one we got was MUSCULAR, which is the NSA code name for getting traffic out of Google, Yahoo!, probably others, by exploiting links between their data centers. This is different from their program of getting data -- exploiting the links from the users to the servers, which we don't know a code name for, or PRISM, which is their program of getting that same data by asking the companies directly.

The generic name that we're using for a lot of this collecting data on the fiber is Upstream. My guess is that's not a code name, but it includes a lot of code names, usually different companies. Fairview, Blarney. That's AT&T. Oak Star, Little. Little is Level 3 Communications. Remedy. Remedy is BT. There are lots of those.

Quantum is the NSA code name for packet injection attacks from the backbone. This runs on things that are called Tumult, Turbulence, Turmoil, I can't tell if those are three different generations or different types.

Another cool code name is Fox Acid. Fox Acid is an exploit orchestrator, which is basically like Metasploit with a budget. Different exploits that Fox Acid can serve include Validator, United Rake, and also my nomination for stupidest code name, Egotistical Giraffe.

[ Laughter ]

>>Bruce Schneier: There are lots more. The particular exploit that you get is determined by what I think is one of the most cool code names, Ferret Cannon.

And in addition to those exploits that just own machines, there are various implants that do different things, Black Heart, Mineral Eyes, Highlands, Vagrant.

Data, when collected, is dropped into a variety of analysis tools: Marina, Pin Whale, Main Way, X Key Score, lots more. Bull Run. Bull Run is the NSA's program for subverting security products. Right? Covertly inserting back doors. And there's a lot more. There's a lot more I haven't done. There's a lot more to come.

The basic takeaway here is that the NSA has turned the Internet into a giant surveillance platform. This is robust. It is robust politically, it is robust legally, and it is robust technically.

There's a lot of details we don't know, and there's a lot we're never going to know. Details on cryptography, near as I can tell, are not in the documents. Company names are largely not in the documents. Right? This is ECI, exceptionally controlled information, which basically means it's not written down. So we're probably not going to know which products have been subverted.

In some ways, that's okay. I mean, knowing the details, I think, is going to lead us to chase yesterday's problems. And the real issue is to understand and fix today's, and especially tomorrow's problems.

So what are the trends here?

I think it's important to understand how we got here. All right? Fundamentally, data is a by-product of the information society. All computer processes produce data. All right? This data is being increasingly stored and increasingly searched, simply because of Moore's Law. It is cheaper to save it than to throw it away.

All right. This is exacerbated by cloud computing. This is exacerbated by user devices where the vendor controls a lot more. And the results are what we know, wholesale surveillance, surveillance backwards in time, the loss of ephemeral conversation, systems that never forget. Right? And this is not a question of malice on anybody's part. This is the way computers work.

So we've ended up with basically a public-private surveillance partnership. There's a basic alliance of government and corporate interests.

NSA surveillance largely piggybacks on corporate capabilities, through cooperation, through bribery, through threats, through compulsion. All right? Fundamentally, surveillance is the business model of the Internet.

The NSA didn't wake up and say, "Let's just spy on everybody." They looked up and said, "Wow, corporations are spying on everybody. Let's get ourselves a copy."

And this is not -- one of the arguments I hear again and again is it's only metadata. All right? Metadata equals surveillance. Metadata is extraordinarily important.

Do the thought experiment. If you hire a private detective to eavesdrop on somebody, you know what he'll do: Put a bug in their room, their car, tap their phone, he'll get the conversations.

You hire that same private detective to surveil somebody, you get a report, where he went, who he spoke to, what he purchased, what he read. That's all metadata. Metadata equals surveillance.

So when President Obama says it's only metadata, he's really saying, you are all really only under surveillance. And this is not just about the NSA. Right? We have an extraordinary window into the NSA's activities. But this is also the FBI, the CIA. This is what any well-funded nation-state adversary would do.

The United States has a privileged position on the Internet which allows it to do more, and it has enormous budget, but we know that other countries do the same things. And we also know that technology democratizes. Today's secret NSA program becomes tomorrow's Ph.D. thesis, becomes the next year's high school science fair projects.

[ Laughter ]

>>Bruce Schneier: So we have a choice, we have a choice of an Internet that is vulnerable to all attackers or an Internet that is secure for all users.

All right? We have made surveillance too cheap, and we need to make it more expensive.

Now, there's good news-bad news. All right? Edward Snowden said in the first interview after his documents became public, he said, "Encryption works. Properly implemented, strong cryptosystems are one of the few things that you can rely on."

All right? We know this is true. This is the lesson from the NSA's attempts to break Tor. They can't do it, and it pisses them off.

This is the lesson of the NSA's attempt to collect contact lists from the Internet backbone. They got about ten times as much information from Yahoo! users as from Google users, even though I'm sure the ratio of users is the reverse. The reason? Google uses SSL by default; Yahoo! does not.

This is the lesson from MUSCULAR. You look at the slides. They deliberately targeted the data where SSL wasn't protecting it. Encryption works.

The bad news is Snowden's next sentence in that same interview. He said, unfortunately, end-point security is so terrifically weak that the NSA can frequently find ways around it. All right, encryption works, but it's hard to get it to work.

What I wrote when I was writing about this is that the math works, but math has no agency.

All right. So what does this mean? We do know that there are some pieces of cryptanalysis that the NSA has that we don't. All right? The NSA makes a huge investment in mathematics, and presumably, it's not all wasted.

In the black budget that was released, DNI James Clapper has a quote. He said, "We are investing in ground-breaking cryptanalytic capabilities to defeat adversarial cryptography and exploit Internet traffic."

That doesn't sound like we're hiring a bunch of mathematicians and hope we get lucky. All right? That sounds like we have something at the edge of practicality and we're investing to make it practical.

We don't know what that is. We're likely not to. I mean, I have a few guesses. First one is elliptic curves. It's perfectly reasonable that they have some advances that we don't in elliptic curve cryptography, either generic techniques to break it quicker or classes of curves that they know about that if they can steer us to use, they have an advantage.

The second are general factoring and discrete log advances. We in the academic community get advances every few years. Assume they're a decade or so ahead.

The third is a practical attack against RC4. And my ordering of these changes depending on what day it is. But those are my guesses. A long shot is something against (indiscernible)ES. I really doubt it, but, you know, I wouldn't put it beyond possibility.

But we do know that most of how the NSA breaks cryptography is by getting around it. Right? They exploit bad implementations. They exploit default or weak keys. They deliberately insert back doors in encryption products. And they have a group that exfiltrates keys, which means stealing them. That is primarily how they break cryptography.

So solutions here are varied. And I think they necessarily are. There's not one solution that's going to solve this. First, I think there are going to be some internal self-corrections. As amazing as it seems, the NSA had no contingency plans in place for the secrets being completely and totally leaked. It took them two months to come up with a viable response. Right? That's over.

And I think the cost-benefit analysis has changed.

I mean, the NSA is going to have to incorporate the risk of exposure when they decide whether to exploit something or not.

By the nature of secrecy -- the nature of secrecy is changing. The Internet has done that. I think everything they do will eventually become public. This wouldn't be a problem if the Snowden documents said the NSA was spying on North Korea and the Taliban. It's that we're spying on Belgium, or I guess that the U.K. was spying on Belgium, which is like Connecticut spying on Nebraska.

Corporations also have a new cost-benefit analysis. Pre-Snowden, there was no cost to cooperate because your cooperation would be secret. Now companies have to assume that it will be public. There's been huge losses of sales, mostly foreign, hardware manufacturers, software, cloud providers. And there is a PR benefit in fighting. And more companies are realizing that.

You know, oddly, near as I can tell, the NSA doesn't weigh the costs and benefits of their programs. In some ways, it's not surprising, because the TSA doesn't, either. But I'm always amazed when I just don't see the cost-benefit analyses.

Anyway, I think that's going to happen internal to NSA regardless.

There's also going to be external self-corrections. There's been enormous political blow-back to the NSA surveillance. In the United States, that's where the political push-back is happening.

And I think this will limit what the NSA does. All right? To a lesser extent, the effectiveness of bulk collection has been challenged. The previous NSA directors, both General Alexander and General Hayden, have been of the attitude of, "Collect everything."

And it's not obvious that this is effective, for a whole bunch of reasons.

There are limitations of intelligence, fundamental limitations of what we can get for intelligence. A lot of examples. But it's -- really seems like a lot of what the NSA's getting is pure voyeurism. There's not a lot we can do about it.

And remember, the NSA has a dual mission. They're charged with protecting communications and exploiting communications. And more and more, these missions are coming into conflict, post-9/11, the exploiting side won, won completely. I think there's going to be some natural rebalancing as our fear subsides.

So that's inside the government.

There are a lot of technical things we can do.

The goal is to make eavesdropping expensive. That's the way to think of it. To force the NSA to abandon wholesale collection in favor of targeted collection. So ubiquitous encryption on the Internet backbone, that will do an enormous amount of good, provide some real security, right, cover traffic for those who need to use encryption. But the more you can encrypt data as it flows around the Internet, the better we'll do.

Second, target dispersal.

We were safer when our email was at 10,000 ISPs than when it's at ten. Fundamentally, it makes it easier for the NSA and others to collect. So anything to disperse targets makes sense.

Usable application-layer encryption. And there's a lot here. The lessons of PGP are, you know, one-click encryption is too hard. On the other hand, OTR is doing great, and so is disk encryption.

More open standards, more open source; these are harder to subvert. More end-point security. The NSA name for these is PSPs, personal security products. And they irritate them a lot. So the more we can do, the better.

Better integrated anonymity tools. And really, assurance. This is a hard one. This is an important one. Right?

We need some way to guarantee, to determine, to be -- have some confidence that the software we have does what it's supposed to do and nothing else. The goal here is to leverage the economics, leverage the physics, leverage the math. The NSA has a big budget, but they are not magical. They have the same limitations as everybody else. Largely, though, this is a political problem. The political solutions are stuff we know. Transparency. Oversight accountability. No surprise.

We know the problems. Laws have lagged technology. General Hayden talked about this in some interview maybe a year ago. What he said is, "Give me a box, and I'll operate to the very edges of that box. Tell me the law, and I'll operate right up to the law." The problem is, when laws lag technology, there's increasing empty space in this rapidly growing box. I'm not sure how we fix that, but I think we need to think about it.

And there are laws that are going onto the books. My worry is that the ones being talked about now are just point solutions; right? They address individual programs and individual authorities. And they won't actually change anything.

The other problem, of course, is that reining in the U.S. -- the NSA only affects the United States, right, mostly doesn't affect non-U.S. persons. Fundamentally, any law the U.S. passes is going to completely ignore any international persons. Right? It doesn't affect the actions of other countries.

And this is really an arms race. We haven't heard this, but we're going to. "You can't rein in the NSA, because if you do, China wins." That's the arms race talking. It's a zero sum game, and it's us versus them.

I mean, long term, we need to get everyone to understand that a secure Internet is in everyone's best interest, that it's not us versus them, that it's secure Internet versus a vulnerable Internet. That turns a zero sum game into a positive sum game and actually makes political solutions possible.

So once you do that, you have laws and treaties to support it, you have technologies to support the legal regime, and you have laws and technologies to deal with noncompliant actors.

Right? We're probably not going to win the "stop doing this" argument. We're probably going to win the "tell us about it."

But, again, the problem is robust.

It is robust politically, legally, and technically. And we need to solve it, not just for the NSA, but for everybody. For other governments, for cyber criminals, for rogue actors. We need to do this while preventing the balkanization of the Internet. Probably the worst blow-back of all of this is how we seem to be fracturing the Internet or how it emboldens even worse governments. And we need to figure out a good new governance model. The Internet has largely been run as the United States' benign dictatorship, because everyone kind of believed the United States was acting in the world's best interests. That's over. You know, so we need something good, or it's going to be the ITU.

[ Laughter ]

>>Bruce Schneier: Right? So we need to figure this out. And, lastly, this problem actually is bigger than the NSA, this is a fundamental problem of data, of data sharing, of surveillance as a business model, about the societal benefits of big data versus the individual risks of big data. This is, fundamentally, the same problem when you look at behavioral data of advertising, of health data, of education data, of movement data. I mean, how do we design systems that benefit society as a whole but, at the same time, protecting people individually? I believe this is the fundamental issue of the information age. This right here.

And this is an example of it. It's an important example of it. Solving it is going to take decades, and I want to start. So thank you.

[ Applause. ]

>>Brian Carpenter: Okay. Smooth transition, if Russ can find my slides.

Okay. So I'm Brian Carpenter. I would like to mention that my affiliation is University of Auckland. I'm also a consultant for Huawei. I have no security clearances, and I've never had any security clearances. And I have never signed the Official Secrets Act.

The image on the slide is from Wikipedia who stole it from the movie "1984." It's a plenary session with Big Brother on the screens.

And, in fact, I brought the manual to read on the plane. This is my copy of 1984. I did find that the solution is in the manual. Winston Smith is talking to an older man who has not been brainwashed in any shape or form and still has his own brain working. Winston says, "There's no telescreen."

"Ah," said the old man "I never had one of those things. Too expensive, and I never seem to feel in need of it somehow."

So there is a solution, which is not using the Internet. Right? If you don't like that solution, we'd better go into the next slide.

So in the IETF -- I want to talk about what the IETF should do, not what the rest of the world should do. And I want, therefore, to review previous IETF interactions with this issue and with public policy in this area.

So we need to look at the ancient traditions. The ancient tradition, as far as I could determine, started in approximately RFC 1126, which is the first one that contains this famous phrase, "Security issues are not addressed in this memo." And in other RFCs, you'll find "Security issues are not discussed in this memo," so you have to use grep carefully to find them.

As far as I can see, from looking at the RFCs, there were one or two RFCs slightly earlier than 1126, which also said we don't talk about security or even had a few words of security considerations. And there were, of course, a few RFCs that actually talked about security from the very beginning. But, basically, the tradition started in 1989 of saying we aren't even talking about security.

In 1993, Jon Postel published a version of the guidelines for RFC authors, which said for the first time that all RFCs must contain a section near the end of the document that discusses security considerations. From then on, the phrase "Security issues are not discussed in this memo" became very popular, because authors just cut and pasted it from the previous RFC. In 1998, RFC 2316 said, "If we wish to eliminate the phrase 'Security issues are not discussed in this memo' from future RFCs, we must provide guidance."

Next slide.

So okay. It's easy to sneer at the IETF for not taking security seriously before 1998, but it's actually unfair. The Site Security Handbook came out in 1991. The first IPsec RFCs and the first S/MIME RFCs came out in 1995. And also, at a certain point, SSL and PGP, which had come from the community, came into the IETF.

But there was still a general tendency to ignore security issues, including confidentiality and privacy issues until the late 1990s. That's a fact. And that left a legacy of protocols and operational practices that were unfavorable to privacy.

And I guess we're still paying the price for that today.

Next, please.

So, in fairness to the IAB, since this is the IAB's plenary, the first security workshop was held in 1994, produced a report. The second security workshop was in 1997, which produced a report. And it led -- it took a while, but it led to RFC 3365, strong security requirements for IETF standard protocols, which means at least since then we've laid down some guidelines on how to get this a bit closer to right.

There was a privacy workshop a couple of years ago which led to RFC 6973, Privacy Considerations for Internet Protocols. You'll notice in both cases there's quite a long gap between the IAB workshop and the resulting substantive RFC.

And I think that's indicative -- these are actually complex issues, and it takes a while to get them right. And it also takes a while to motivate people to get them right.

Anyway, next, please.

So now we come to the two instances when the IETF, as a whole, has found itself interacting with public policy issues in this area.

So in the mid '90s, it was extremely clear that a secure Internet needed strong cryptography, and so did secure e-commerce, which was a popular buzzword at the time. If you remember the context of the mid 1990s, people were trying to make e-commerce a reality at exactly the time, of course, when the web was taking off and the Internet entered its maximum growth phase.

However, at that time, also, many governments wanted to restrict the use of strong cryptography. And, of course, the reason they wanted to restrict the use of strong cryptography is because the signals intelligence agencies had told them that, if they didn't restrict the use of strong cryptography, the signals intelligence agencies would not be able to read people's traffic. So surveillance is not a new phenomenon. The desire of the signals intelligence agencies to surveil traffic is not exactly a new phenomenon.

And this shackled the IETF in many discussions. I remember distinctly people saying we can't do that in some document because it would be illegal in France. I mention France not to discriminate against France, but simply so that you don't have the impression it's just the NSA and just the U.S. government that doesn't like the idea of strong cryptography getting out into the community.

Next, please.

So what do we do about it?

Well, we had a long debate, lots of discussion on the mailing list and lots of discussion in plenary sessions. We even had a speaker from the NSA once. I think he just left the NSA or something, so he was allowed to speak. But anyway. There it was.

The plenary discussion took place in Danvers, Massachusetts, which was once known as Salem Village, famous in American history as the home of the witch trials of 1692-1693. It's somehow appropriate that's where we had this plenary discussion. The result was RFC 1984. The RFC number was, of course, one of Jon Postel's little jokes.

The RFC is signed by the IAB and the IESG. I think that's quite important. It really was designed to indicate a very strong consensus in the community. Its key recommendation is to encourage policies -- the word "policies" -- and it didn't mean technical policies. It meant political policies -- that allow ready access to uniform strong cryptographic technology for all Internet users in all countries. There are a lot of other words, but that's the key point. We said we want to use strong cryptography on an international basis. We don't want to be hindered by export controls or legal restrictions.

Now, it's not for me to judge whether that RFC really changed reality. What's clear to me is that the signals intelligence agencies around the world were certainly not amused by it. And, since my name is on that RFC, I've always assumed ever since that they're reading my email. I don't care, because I have nothing to hide from them. But I've certainly worked on that assumption.

So next, please.

The second time we got into a debate that impacted public policy in this area was around the turn of the century. There were a lot of requests coming in to IETF working groups to document features for wiretapping, politely known as lawful intercept, in IETF specifications.

It was quite obvious that many governments, influenced by both the signals intelligence agencies and the normal police, wanted to observe traffic and, therefore, wanted to be able to do this and, therefore, would have liked the standards to tell them how to do this.

And this led to a very complex and difficult debate in the IETF. Actually, more difficult, I think, than the strong cryptography debate. And there were, obviously, privacy concerns and obvious potential for misuse. You know, even if you admit that the police are allowed to perform wiretapping, the fact that you provide features for it means that bad people might be able to do wiretapping as well. And it was felt that wiretapping features built into protocols would intrinsically increase security loopholes because they are, by definition, security loopholes. And so this debate was quite complicated.


Next slide, please.

It was said that in some countries operators and vendors would be legally forced to provide wiretaps. I think the reality is in pretty much every country, vendors and operators will be legally forced to provide wiretaps.

And the result of all that was RFC 2804, signed again by the IAB and the IESG. It's a very subtle RFC. You probably need to read all the words to get all the subtlety.

But the key recommendation was that the IETF has decided not to consider requirements for wiretapping as part of the process for creating and maintaining IETF standards.

It also says the IETF does not take a moral position. We wanted to keep out of the debate about whether wiretapping was good, bad, or indifferent.

And it also said that wiretapping mechanisms should be openly described. So there are words in that RFC that would absolutely permit informational RFCs discussing wiretapping techniques, even though we agreed not to consider requirements for wiretapping as part of the process of creating and maintaining standards.

Once again, I assume that the police and the signals intelligence agencies were not amused.

Next, please.

So is there an underlying principle linking those two RFCs? This is my wording, but I think what I see in those two RFCs is that IETF technology should be able to make the Internet secure, including protecting privacy, but should be neutral with respect to the varying cultural views of legality and privacy.

Next, please.

So my personal comment is I expect we'll have another long debate. It seems to have started. I personally hope there will be significant improvements in privacy protection in future IETF specifications. I also assume, once again, that the police and signals intelligence agencies will not be amused. And that's something we have to accept and deal with. That's it.

[ Applause. ]

>>Stephen Farrell: Okay. I also don't have any clearances. I'm not even sure the Irish government does them.

So this is a bit more about what do we think of it? What can we do about it? And hopefully, to tee up some of the open mic. The photograph is a place in Wales in the U.K. It's a National Park called Snowdonia, which is kind of where we are.

[ Laughter ]

>>Stephen Farrell: So what I think is it's, basically, an attack. And that's the way we should treat it. So, if you look -- forget the motives. Forget the political kind of stuff. If you look at the actions that NSA and their partners are doing, whether coerced or not, it's, essentially, a multi-faceted form of attack.

And, even if you think it's not, it's indistinguishable from it. So, if a piece of equipment somewhere on the Internet or an application is doing something, there's kind of no way to tell if -- you know, if whoever has been exfiltrating a key is doing that for some kind of supposedly good reason. It's not unique. I think that's been said. NSA and their partners are doing it. Others are probably doing the same. I'm quite sure that a whole bunch of people, now with all the media, will decide to start doing the same or trying to do it. Perhaps on a smaller scale, but nonetheless.

The scale, I think, is interesting. While a lot of the specific techniques, as far as we know or can guess them, seem to be things that we've known about, doing all of those things at that scale is, arguably, a threat model that we haven't really considered when designing protocols. So I think there is some work for us to do when we design protocols to consider this threat model and to work it out, because we haven't worked it out fully. I think you'll see, probably, when the discussion starts, that none of us has really very good terminology for dealing with this situation. We need to work on that and understand it better.

But, while there are things we can do, we can't solve the problem. So we shouldn't really be setting out to solve the problem. But we should be doing stuff. So the question is what will we end up doing, and will we end up with rough consensus on the content of the slide? The main thing is that it's an attack. I hope so.

Next slide.

So there are things we can do. I think the main focus of what we can do is to say that there are these pervasive monitoring attacks, and there are technical things we can do that can significantly affect the cost of doing pervasive monitoring. For example, if we can encrypt things at the right place, perhaps opportunistically, then we might be able to increase the cost by requiring much, much more work. There are a couple of backup slides, which I won't show, that try to define how to significantly affect the cost. So, if you download the slides from the agenda page, you should see those.

Some of the things we can do are small and relatively easy, although getting changes made in the IETF is not easy at all. But there are some things that are fairly obvious and are point changes we can make. Other ones will take longer to figure out, take longer to get agreed and get deployed, and so on.

But, for at least some of it, if we're serious about attacking the problem, some changes will probably affect IETF processes or, you know, long-held positions. For example, in the security community in the IETF, we've insisted on going from plain text straight to mutually authenticated, highly -- not assured, but highly secure things. And maybe we did too much of that, and we need to start off not with plain text, but maybe with something like opportunistic encryption. Getting these changes is not trivial. And it affects long-held positions that security people have as well. And this will affect deployments, or operational things that people have to do, and business models. Some of the business models around require the companies who make loads and loads of money to read all that email, in some sense of "read."
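The opportunistic model described above -- prefer encryption when the peer offers it, and fall back rather than fail -- can be sketched as a small policy function. This is only an illustrative sketch, not any IETF-specified algorithm; the capability labels and the function name are hypothetical.

```python
# Sketch of an opportunistic-encryption policy: prefer the strongest
# channel the peer advertises, fall back to cleartext only as a last
# resort, and refuse to downgrade when policy demands encryption.
# The capability labels below are illustrative, not from any RFC.

PREFERENCE = ["tls-authenticated", "tls-unauthenticated", "cleartext"]

def choose_channel(peer_capabilities, require_encryption=False):
    """Pick the best mutually supported channel.

    peer_capabilities: labels the peer advertises (hypothetical names).
    require_encryption: if True, never fall back to cleartext.
    """
    offered = set(peer_capabilities)
    for channel in PREFERENCE:
        if channel in offered:
            if channel == "cleartext" and require_encryption:
                break
            return channel
    raise ConnectionError("no acceptable channel with this peer")

# Opportunistic mode: unauthenticated TLS beats cleartext, and
# cleartext beats failing -- raising a bulk eavesdropper's cost
# without breaking interoperability with old peers.
print(choose_channel(["cleartext", "tls-unauthenticated"]))  # tls-unauthenticated
print(choose_channel(["cleartext"]))                         # cleartext
```

The point of the sketch is the ordering: an attacker watching everything must now actively interfere with each session, rather than passively recording cleartext.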

So there's a bunch of things that we need to look at. There's a bunch of tradeoffs we have to make. One is whether we can do better than RFC 3365 -- which says you must have mandatory-to-implement security -- and go beyond that, to try to get these things actually used. It's too much to say that we can insist that everybody encrypts everything all the time. It would be kind of tricky for DHCP, for example. So it's not always possible. But getting further along might be something we can do, though it will probably take ages for us to figure that out. There are tradeoffs between confidentiality and wanting to look at the internals of packets for network management reasons, which are not nefarious. So there's a bunch of tradeoffs there. And sometimes people are looking at packets for reasons that are nefarious. There was a really good discussion in the HTTPbis working group about that yesterday, and about what the impact would be if you start turning on TLS much more ubiquitously. And there's a lot of non-trivial things to think about.

And then there's anonymity and pseudonymity, versus authentication requirements we might like to have, versus law enforcement or advertising. That ends up touching on business models in some of the things we might do. We're not going to stay away from business models.

So next slide. So we should do them. All right. Brian's point about the gap between the IAB workshop and the resulting RFC is interesting. But, in this case, it's quite possibly the case that there's a bunch of things we can do. And, if we start now, while everything is fresh in the news and so on, we might get changes made more easily and agreed more easily. So go for it. Obviously, we shouldn't go too quickly; we don't want to go about it wrong. Also, we should do things that seem not to be damaging. And there's a PR angle here, so we probably should be aiming to be seen to be doing stuff while we're doing stuff.

Most important is to do stuff.

And the "we" in all of the above is not just us here doing IETF stuff. The "we" also means going back to companies and sponsors of various types and trying to get them to do stuff. I imagine that a lot of companies are looking again at their source code. A lot of operators are perhaps looking at their networks again and saying, who's been doing what? So it's not just what we can do in the IETF; there are also things outside the IETF that we're related to.

Next slide. Quick example. So one of the issues that's been raised is that in security we have an idea of a trusted computing base. And, quite often, you might assume that the implementation of crypto was part of that. And one way of looking at some of this -- speculation mostly, but not all -- is that we are running on a dodgy computing base. People are worried about that, and some of that paranoia is useful. A bunch of people got together -- mostly Leif, Linus, Lucy, and Randy -- and are going to try this kind of activity to put together a kind of open hardware crypto engine. That's very early days, but it's just an example of the kind of thing that can be done. And there's a side meeting tomorrow at lunchtime; if you're interested, come along. Russ has done a bit of work to try to get money for it. There are a couple of people interested in sponsoring this. And, hopefully, when we come up with something more concrete, we should get support for that. That's just an example of how a bunch of people can do stuff. In this case there might be a really good use case in the RPKI, to help with the assurance that some of the people doing signatures in the RPKI need, and also outside of that. It's not an IETF activity, but it's related. And there are other ones, like the IAB commenting on the NIST stuff.

>>Russ Housley: Special Publication 800-90A.

>>Stephen Farrell: Special pub number, number, number. The IAB kind of commenting on this and looking together at their processes of how to do cryptographic documents is another example. There's a whole bunch of them. We should look for more.

Next slide.

So, back in the IETF context, there's a bunch of -- in scare quotes -- "easy" things we can do. So, discuss the topic. We have a mailing list that's been really quiet, not much happening at all. No crazies.

[ Laughter ]

>>Stephen Farrell: So, you know, we set up a mailing list. We can discuss the issues. It's mostly intended to try out issues and find the right place where a piece of work can be done. And this kind of discussion is happening -- a lot of the sessions this week have been discussing this topic. The IAB workshop, which may be jointly run with the W3C, is talking about that. And so on.

Maybe the IRTF. I think Lars would be kind of keen if somebody comes up with a good topic for a research group. There you go.

So there are things we can do, and talking about it openly is part of it. I think we should. I mean, I think there has been, to some extent, a slight tendency for people to say nothing. Some people say too much. But that's always true, right? But I get a sense there's a reticence or self-censoring going on here. That's an unfortunate thing. Not too many, but some. So we should work those out.

So we have a very early draft on documenting the threat model, which is new. We're looking at that. We're looking at better ways of deploying TLS with forward secrecy, which is to increase the cost as a measure against pervasive monitoring. There was a good session at the apps area meeting on Monday. A bunch of people went to dinner last night and figured out maybe a charter for it, and maybe a name for it? They have a cute name. But anyway.

So the idea there is to look at IMAP, POP, and mail submission over TLS and come up with a description of best current practice. A lot of this can be done a lot better than it is. We can help by documenting some of this and giving people guidance on how to turn this stuff on.
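As a concrete starting point for that kind of guidance, the well-known ports already distinguish the implicit-TLS variants of the mail protocols from their cleartext-plus-STARTTLS counterparts. The port numbers below are the IANA-registered ones; the lookup helper itself is just a hypothetical illustration, not part of any BCP.

```python
# IANA-registered ports for mail access and submission: cleartext with
# an optional STARTTLS upgrade, versus implicit TLS from the first byte.
# The port/mode table is factual; the helper is only an illustration.
MAIL_PORTS = {
    "imap":       (143, "STARTTLS upgrade"),
    "imaps":      (993, "implicit TLS"),
    "pop3":       (110, "STARTTLS upgrade"),
    "pop3s":      (995, "implicit TLS"),
    "submission": (587, "STARTTLS upgrade"),
}

def tls_port_for(service):
    """Return (port, TLS mode) for a mail service name."""
    port, mode = MAIL_PORTS[service]
    return port, mode

print(tls_port_for("imaps"))       # (993, 'implicit TLS')
print(tls_port_for("submission"))  # (587, 'STARTTLS upgrade')
```

A best-current-practice document in this space would go on to say which of these modes to prefer and how clients should react when the STARTTLS upgrade fails.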

Operational changes are something that I think one of the Internet-Drafts suggested, so we have this bullet. Maybe you have more localized IXPs, more direct fiber, and so on. Or maybe there are things we can do to help operators run their networks a bit better. Maybe they can tell us about the things we can do. And there's a lot of good problem statement text. A bunch of people have posted drafts. You'll find them if you go look at the perpass list archive. There's a kind of summary, which you'll find if you look at the agenda for the BoF following lunch today. We've got lots of good problem statement text. It's harder when you get to what to do next.

Next slide.

So then there are a couple of trickier things. One is we can write a BCP about being better with privacy. Alissa, myself, and John have drafted a thing that says, essentially, care about personally identifying information and try to minimize it. And, if you have it, then do encryption. Do use confidentiality.

There is also a discussion about doing more than mandatory to implement -- going beyond RFC 3365. And that's a very interesting discussion that really hasn't reached a clear outcome, because it's tricky. I'm not sure we can yet envision coming to solutions there. There are other relevant issues going from hard to very hard. What happens if we turn on TLS everywhere for the web? There's a lot of subtlety there. There's a lot we don't know. So one of the issues is, if you turn on TLS everywhere, and if you do it in some opportunistic encryption way, there's a bunch of people who would have gone and bought certificates, or got certificates for free, which you can, who would now use this opportunistic thing instead. So there's a set, a population, maybe, that is going to be less secure than otherwise, and maybe a bigger population that would be more secure. It's a tradeoff. And we don't know what the facts are, because we are not there yet. The same is true for other layers, looking at IPsec in an opportunistic mode. Those are things we can work on. There's a bunch of hard issues. Then, going to end-to-end security, it turns out, you know, S/MIME and PGP are really hard. Doing the same for XMPP or SIP is not something we've succeeded in so far. We need to look at it and try to do it again. But that's a hard problem. And even harder: WebRTC and Internet of Things stuff. Do we want the NSA to be spying on every toilet flush? That's what's going to happen.

Fingerprinting and traffic analysis: even harder, right? So, for a lot of the kinds of things we do, people are still able to extract metadata regardless of all the crypto stuff I've talked about already. I don't know how you deal with fingerprinting, anywhere from the radio right up to the application. It's a hard problem. And IP addresses are things we cannot avoid using, but they can be personally identifying in some cases. Am I going to encrypt the source address of all the packets? That's not going to be easy. And then there's another layer, which is probably getting more outside the IETF's scope, which is this corporate cloudy stuff that people do. And it's privacy-unfriendly. Some of that would be affected if we succeed. If we succeed in getting a workable end-to-end solution, some of the big mail providers wouldn't be able to scrape our mail and serve ads. So there would be impact there.
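One of the standard partial mitigations for the traffic-analysis problem mentioned above is padding messages up to a small set of fixed sizes, so that an observer of the encrypted traffic learns less from message lengths. A toy sketch follows; the bucket sizes are arbitrary choices for illustration, not from any specification.

```python
# Pad plaintext lengths up to the next fixed bucket so that an observer
# of the (encrypted) traffic sees only a few distinct sizes instead of
# exact message lengths. Bucket sizes here are arbitrary examples.

BUCKETS = [256, 1024, 4096, 16384]

def padded_length(n):
    """Smallest bucket that fits n bytes; multiples of the largest
    bucket for anything bigger."""
    for bucket in BUCKETS:
        if n <= bucket:
            return bucket
    top = BUCKETS[-1]
    return ((n + top - 1) // top) * top  # round up to a multiple of top

print(padded_length(300))    # 1024
print(padded_length(20000))  # 32768
```

Padding trades bandwidth for resistance to fingerprinting, and it only narrows the leak rather than closing it, which is part of why this is the "even harder" end of the problem.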

Next slide.

So I hope that last slide is the kind of thing we have discussion about at the open mic.

Conclusion: It's an attack. It is a new scale of attack. I think the right response for us is, as usual, to develop technical kind of mitigations.

The goal is not to try to solve the problem, but just to make it significantly harder to do this widely pervasive kind of monitoring. We can do stuff, so let's go ahead and do it.

That's all I have. Thank you.

[ Applause ]

>>Russ Housley: For the open mic, I'd like the IAB to join the stage. And, Alissa, over to you.

>>Alissa Cooper: So while folks are making their way up here, a couple notes on the open mic session.

We want to hear from as many of you as possible who are interested and have something to say. So, please, please, please, keep your comments brief. Think about what you're going to say in advance.

Also remember that we are being recorded, so if you can speak slowly and be brief -- good luck -- but, honestly, you know, there's a lot of folks out there in the world who are listening and watching. And they want to be able to understand what you say.

We'd really like to keep the -- if you're going to come up and talk about security and surveillance, please stay focused on what we in the technical community can do.

And I will cut the mic at some point before we actually hit the end of our time so that Russ can do a little wrap up at the end.

So let's start.

Sorry. We should do introductions of the IAB. I know you all so well. Do you want to start at that end, Marc.

>>Marc Blanchet: Marc Blanchet.

>>Joel Halpern: Joel Halpern.

>>Dave Thaler: Dave Thaler.

>>Andrew Sullivan: Andrew Sullivan.

>>Xing Li: Xing Li.

>>Eliot Lear: Eliot Lear.

>>Lars Eggert: Lars Eggert.

>>Alexey Melnikov: Alexey Melnikov, not IAB.

>>Heather Flanagan: Heather Flanagan, RFC Series editor.

>>Russ Housley: Russ Housley, IAB chair.

>>Alissa Cooper: Alissa Cooper.

>>Brian Carpenter: Brian Carpenter, nobody at all these days.

>>Stephen Farrell: Still Stephen Farrell.

>>Bruce Schneier: Still Bruce Schneier.

>>Bernard Aboba: Bernard Aboba, IAB.

>>Jari Arkko: Jari Arkko, IETF chair.

>>Mary Barnes: Mary Barnes, IAB executive director.

>>Erik Nordmark: Erik Nordmark, IAB.

>>Ross Callon: And Ross Callon, IAB.

>>Alissa Cooper: Great.

Let's start at the far end.

>> Franck Martin: Hi. Franck Martin.

With the rise of security and privacy, won't we see a dying breed, which is the Internet vigilantes, because they won't be able to access the information that helps them stop attacks?

You know, the law enforcement agencies don't have too many resources. They are focusing only on the big stuff. So we are relying a lot on Internet vigilantes to keep the Internet secure.

>>Alissa Cooper: I have an answer.

So, I mean, I think some of what we heard, particularly from Bruce, is that technology cuts all ways; right?

So anything that you do to make it easier for one constituency to either be anonymous or not, or obtain access to a resource or not, it's very difficult to prevent a different actor with a different set of motivations from making use of that as well. And that's just a challenge; right? To give the TSA example, it's the same as when you go to the airport: we all have to go through security, even though most of us are not trying to launch attacks. And that's a reality of the situation that we have to deal with and keep in mind, but one I don't think we're going to necessarily be able to solve.


>>Phil Hallam-Baker: Phil Hallam-Baker, not necessarily for COMODO.

I'd urge people not just to look at the defensive use of cryptography and so on, but also at the economics, because, if you want to build a billion-dollar infrastructure, you've got to deliver a billion dollars of value to the people who are going to pay for it. So, when you're thinking about how we can do more cryptography on the Internet, look at the economic benefits that it can provide as well.

If you look at what SSL has done, you know, 3.5% of global GDP is now attributed to the Internet. That is in large part because we've got TLS. And one of the real problems here with Bull Run is that we have a government agency that has been spending time and money and effort undermining the trust in the global economic system. And that's unforgivable.

>>Alissa Cooper: Thanks.


>>Dave Crocker: Dave Crocker.

First of all, Stephen, Brian, and Bruce, thank you. I think that for this moment in time, under these conditions and in this place, your talks were exactly the right set of things and in the right way.

So also Alissa and whoever helped you put this together, thank you. This is really quite good.

I do have a specific question for Bruce.

[ Applause ]

>>Alissa Cooper: I should say that the organization was really not mostly me; it was the IAB, Russ, Eliot, Hannes, who unfortunately isn't here. Lots of people helped with things.

>>Dave Crocker: So sometimes it is possible to reach consensus.

>>> I'm not so sure about that.

>>> Let's not go there.

[ Laughter ]

>>Dave Crocker: I was making a reference to something else, Bruce, that other people are tracking.

Bruce, a point I wanted to ask you about, and it's, again, an issue for us, because we, as a community, love dealing with things at the level of crypto and let's do TLS more. And all of that strikes me as clearly good stuff to do and a lot of detail. But -- and this is -- this is the question I have to you -- what you cited was corporate cooperation, which is the ability to get at the data when it's not encrypted. And so if you could somehow help us understand what the encryption will help with and what it won't help with that we also need to focus on, that would be great.

>>Bruce Schneier: The first thing encryption does is it makes it harder. If it is easy for the NSA to grab all of Google's traffic as it goes between its data centers, it will do it. If it's encrypted, it can't do that. Right? Encryption helps against bulk collection. Encryption will force the NSA to go after only high-value targets. I mean, they will go in and grab keys. If there's some kind of VPN connection between two links -- and that's important -- they'll go in and grab the keys. But, if you're just using MS Chat, they'll grab it off the backbone, and they'll grab everybody's, because it's easy.

The more we use encryption, the more we raise the cost of surveillance and the more we eliminate the possibility of bulk surveillance. Some places we can't. It makes no sense to encrypt your Facebook posts, because everyone has to see them. And we're stuck there.

But the more we can encrypt, the more channels we deny.

And, I mean, the Google story's a really good one. Because we know right now of three different ways, under at least two different laws, that the NSA's getting at Gmail traffic. Right?

So the more we can eliminate them, the more we force the NSA -- and anybody else -- into narrower channels, and to go after the targets that are valuable and not to go after everybody else. That's the value of encryption. It raises the cost.

Even if it's lousy encryption, it raises the cost.

>>Marc Blanchet: Alissa. A comment.

I think we should discuss -- or understand -- the issues that are at hand. And also I think it's important, for all of us during this discussion, that we discuss what we can do. Right? Because we can speak a lot about all those things, but I think we should come back to what we can do as a community.

>>Alissa Cooper: Agreed.

This one.

>>James Woodyatt: Hi. I'm James Woodyatt, currently between jobs.

And one of the things that Mr. Schneier said that I thought was very important that didn't seem to show up in Mr. Farrell's presentation was that Mr. Schneier made -- and just now again he reinforced the idea that it's very important to improve target dispersal. He said we were a lot -- the Internet user community was a lot more secure when our email traffic was handled by 1,000 different ISPs, but now there's ten. And that's a big deal.

Well, the Internet of things, probably going to make that a lot more difficult, because now it's your thermostat and all of the metadata coming out of your house and industrial plants, all of that stuff is basically going through the Internet backbones and first mile links that we've created that are built around the idea of a lot of centralized computing.

And if the IETF really wants to make a difference here, I think that you need to hear what Mr. Schneier said, which was, find a way to encourage more target dispersal.

>>Alissa Cooper: Eliot.

>>Eliot Lear: James, thanks very much for your comment. And I took note of Bruce's point about target dispersal. And I want to reiterate something that I think Alissa just said, which is that there are tradeoffs here, cybersecurity tradeoffs in particular.

When we talk about ten big email service providers, that's a whole lot of concentration. On the other hand, I know that those guys are pretty professional in terms of maintaining their code. And as a -- when this thing broke, I was thinking about a thought exercise. I think everybody always likes to jump to the conclusion that, yeah, let's all run our own services. And in order to do that, oh, goodness, yes, of course, we need IPv6 everywhere, by the way, for that.

But if we had all of these little boxes that were handling all of our mail and they were as well-maintained as all the other little boxes that we have in our house, how will they be --

[ Laughter ]

>>Eliot Lear: -- how will they be exploited?

Now, what I'm trying to point out -- yeah. Yes, Russ just said, "Easily."

What I'm trying to point out is that there are tradeoffs here. It's -- this is not, I think, the thing that I really think this organization should focus too much on, at least in terms of the work we have to do. Which leads me to a question that was also asked, and I want to reiterate that, which is, for every person in the room, what are you going to do to help address this problem of surveillance and help address the tradeoffs?

Please, if you're at the microphone, maybe take one sentence and answer that question first, and then ask your question.

>>Alissa Cooper: One more right here. I'm going to cut the mic line very soon, because there are a lot of people. So if you want to say something, now is the time to stand up and wait in line for a long time.

[ Laughter ]

>>Russ Mundy: Hi, I'm Russ Mundy.

What an opportunity, Eliot, because that was one of the things I wanted to do: point out that my little box here is running IPsec and TLS, using a DANE certificate and DNSSEC.

And I urge everybody in this room, because they probably have some type or some amount of this capability themselves on their own machines: turn it on. Use it right now. Go back to all the mailing lists that you participate in in the IETF. Examine them. From a security perspective, what are they or are they not doing in those particular areas? Some people know me as the DNSSEC campaigner. And I guess that's probably true.
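The DANE part of the setup just mentioned boils down to comparing a hash of the certificate the server presents against the association data published in a (DNSSEC-signed) TLSA record. A simplified sketch of only the matching step, for matching type 1 (SHA-256 over the full certificate); a real validator also handles the usage and selector fields and DNSSEC validation itself, and the certificate bytes below are a placeholder, not real DER.

```python
import hashlib

# Simplified DANE/TLSA matching: does the certificate the server
# presented match the association data published in DNS? This models
# only matching type 1 (SHA-256 over the full certificate); usages,
# selectors, and DNSSEC validation are out of scope for the sketch.

def tlsa_matches(cert_der: bytes, tlsa_association_hex: str) -> bool:
    digest = hashlib.sha256(cert_der).hexdigest()
    return digest == tlsa_association_hex.lower()

fake_cert = b"...placeholder for DER-encoded certificate bytes..."
record = hashlib.sha256(fake_cert).hexdigest()  # as published in DNS

print(tlsa_matches(fake_cert, record))           # True
print(tlsa_matches(b"a different cert", record)) # False
```

The security of the comparison rests entirely on the TLSA record arriving via a validated DNSSEC chain, which is why the two technologies travel together.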

And one of the things I'll point out there is that one of the absolute hardest places where we've had to make progress in DNSSEC is getting the registrar community to move forward. The registrar community's normal response is, there is no demand for DNSSEC.

So, in response to that, I say: everybody here, start using what you have. Work with your own organizations. And, when you think it's right, push your providers to give you the additional security information or capabilities, whether it's DNSSEC or anything else. Urge for it. Ask for it. Because it's important that the vendors hear the needs and the requests.

>>Alissa Cooper: Thank you.

[ Applause ]

>>Alissa Cooper: So the mic lines are closed. And we're going to go to Charlie.

>>Charlie Perkins: I'm Charlie Perkins. And I have two comments, very brief.

First one is not technical, but I think it's important. Recently, after all this news, I've been asking people, "Well, what do you think your privacy will be like in five years?"

And almost unanimously, the answer was, "Well, I won't have any."

I think there's a public relations problem. In particular, I asked my sons. And they seem to be fatalistic about it, not that they're looking for that, but that's just the way they think it's going to be.

So we really need to, I believe, first of all, emphasize why it's important, and second of all, have a credible story to make people believe in it again.

And the second thing is, separately, having been involved with mobility groups for a while, I was following our parallel development while watching Brian's history slides. And I had to say, we, I guess, had it pounded into our heads quite early the importance of making signaling secure; and specifically, with mobility management, you cannot do it without identity management of some sort. So we did have a lot of discussion about that.

And then during -- you know, for this workshop, I guess, and perhaps in other ways, I think that mobility management, everybody here is using mobile devices, without exception, I reckon. And if you're not using one, I'd be surprised.

But, anyway, so that means that we all actually should care about the way that our mobility management happens. And mostly, the ISPs keep track of every single byte that you send, because they want to charge you for it, at least that's the cover story. And reasonably so. I mean, that's their business.

So I think that the way that movement is handled throughout the Internet and the way that the protocols are designed can make it easier or harder to accomplish what I think we share as an objective.

>>Alissa Cooper: Thanks.


>>Erik Kline: Thank you.

Me? Erik Kline.

I was wondering, what can the technical community do about certain economic and sort of, you know, other disincentives. I -- for example, I remember that it was sort of considered very expensive to even implement your own crypto, because everyone was waiting for RSA patents to expire in 2001. And then even though some SSL certs may be free in certain places and whatever you think about the security of SSL, it's still not the default; right? Everybody -- most of the people I know charge more for SSL. And so there are just economic incentives stacked against people who might even want to try to do security.

Is there anything that the technical community can do about things like that?

>>Eliot Lear: That's a great question.

That's a great question.

We're having a workshop on Internet technology adoption and transition, which, in part, looks at the -- at economic incentives. We're looking at different forms of modeling along these lines.

But, actually, I would sort of like Bruce, maybe you could talk a little bit about WEIS, this is the Workshop on the Economics of Information Security.

Do you want to say a few words about that?

>>Bruce Schneier: Only that, yes, this is hard. We fundamentally have -- we do have incentives issues. And it's not just in that, it's in a lot of security protocols that the benefits accrue only when a lot of people use things, and the first adopters do pay a penalty that is effectively a subsidy of the people who wait. So there's an incentive to wait.

I mean, I don't see any way around this other than to somehow force people to do things either by sunsetting protocols or making things mandatory.

This is a fundamentally difficult problem in a lot of areas, not just security. I don't have any magic here.

>>Alissa Cooper: Andrew, did you want to jump in?

>>Andrew Sullivan: So I'm just a little worried, though, that we're saying there's an economic thing, so we just want to change the economics.

I want to change the economics, too. But I wonder whether the technology even lets us go back; that is, you know, the world has changed. We've introduced certain kinds of technologies that have reshaped the world.

If you think about the automobile, you know, we now have a situation in the real world where the automobile has reshaped our geography. And we can't go back from that. We just -- there's no way to sort of change the economics so that, you know, the suburban development that we've got all over North America, for instance, disappears. Something will happen to it. And if you don't believe that, ask the people in Detroit.

So I'm wondering if what we've done here is introduced a change, a fundamental change, in the human environment. And so concentrating too much on the -- on the economics, or concentrating exclusively on the economics is going to -- is going to set us up for failure. I think maybe this is in response partly to what Charlie said about his kids just being fatalistic about it.

>>Alissa Cooper: Thanks.

Let's go to the back corner.

>>Kerry Lynn: Kerry Lynn.

I think the challenge for us working in the Internet of things and cyber physical systems is that we're trying to fit as much functionality as we can in the smallest possible processor. So I guess this is input, but I'd like to get comments from the panel.

What are the threats, you know, to devices as opposed to laptops and phones that, you know, obviously identify us?

And are you thinking in terms of some sort of a road map about what's the most bang for the buck in terms of security to put into devices, realizing that Moore's Law is going to allow us to throw more gates at this problem as time goes on?

>>Alissa Cooper: Does one of you want to respond to that?

>>Stephen Farrell: Yes, just think about it. There's crypto available that works on all these things. Go use it. I mean, I don't think it's -- a lot of these IoT things can run a lot of the security protocols. I mean, Taro (phonetic) has done some great work with IPSec and demonstrated that, okay, ignoring the certificate handling, you can do IPSec in a tiny amount of code and it can work. Don't think that just because the device is small that you can't use the technology.
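
[Editorial sketch] Stephen's point that standard crypto fits on small devices can be illustrated minimally: authenticating a sensor reading needs nothing more than a shared key and one hash primitive. The key and message below are hypothetical examples, and a real deployment would also include a nonce or counter to prevent replay.

```python
import hmac
import hashlib

# Shared symmetric key provisioned onto the device (hypothetical example).
DEVICE_KEY = b"per-device-secret-key"

def tag_reading(reading: bytes) -> bytes:
    """Authenticate a sensor reading with HMAC-SHA256.

    The entire mechanism is one hash primitive plus a key,
    which is small enough for very constrained devices.
    """
    return hmac.new(DEVICE_KEY, reading, hashlib.sha256).digest()

def verify_reading(reading: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(tag_reading(reading), tag)

msg = b"temp=21.5C"
tag = tag_reading(msg)
assert verify_reading(msg, tag)
assert not verify_reading(b"temp=99.9C", tag)  # tampered reading rejected
```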

>>Bruce Schneier: I'm actually very worried about the Internet of things, because I think it multiplies the amount of sensors out there by several orders of magnitude. It's not just one thing. I mean, it's not just drones. It's drones plus cameras, plus facial recognition, plus Facebook's tagged database file, plus everything else.

And when you think about cell phones, right, this is our -- this is our ubiquitous tracking device we put in our pocket every day. And, you know, now they're getting better motion chips, and we're going to see more and different sensors in our appliances, our cars. You know, all of this, each one by itself -- and it's all going to be looked at individually -- is not going to do much, but together, it paints a very compelling picture.

And we're not very good in policy at looking at big pictures. We're better at, you know, we're going to talk about drones now, we're going to talk about cameras now, we're going to talk about something else, where the real issue is everything together. And that's my worry about the Internet of things, that it just -- it disperses sensors in a way that policy will ignore them.

>>Jari Arkko: I would actually add that I'm not too worried about the devices themselves. I'm perhaps more worried about what happens with the data coming from devices. So, you know, if you think of an entity like Gmail and its role -- you know, if in the future we have this huge database, a single entity that has all the sensitive data from all of us, I think that's a scary thought, at least for me. So trying to make things that are distributed not just in terms of the devices, but where the data goes is really, really important.

>>Alissa Cooper: Yep. Let's stay there.

>>Michael Abrahamsson: I'm Michael Abrahamsson.

So one of my biggest concerns with this is the usability. Like, how do we expose this to the user? The failure mode of DNSSEC is basically nothing works and you get a cryptic error message (indiscernible). That's what happened to me last time.

So how do we expose the failure modes to the user? And how do they interact with all this upcoming cryptography?

I mean, if it doesn't work, the customer's going to turn it off or the end user is going to turn it off.

I think someone needs to create, basically, a story, a systemwide view, where do we want to be in five years? How should this work from the user's point of view so that it's actually usable. I mean, if you want to do cryptography properly, it's really hard. You need manual interaction. You need all that.

So I think that -- I don't know if there are any usability experts active in the IETF. I don't think there are that many. But I think we will need some.

>>Alissa Cooper: Go ahead.

>>Ross Callon: I think that's a very good point. I think that the least technically knowledgeable and savvy person in this room is an order of magnitude more savvy and technologically knowledgeable than the overwhelming vast majority of not only users, but even I.T. department staff. And, you know, we want this stuff to go in the home, and we want it to go to companies that have 20 or 30 or 50 people, where the I.T. staff can sort of barely get a laptop out of the box and plug it in.

And so I think -- I think you have a very good point. And I think it's a very strong challenge.

>>Alissa Cooper: Thanks. Murray.

>>Murray Kucherawy: Hi. Murray Kucherawy channeling a question from home.

The scope of the revelations that Snowden precipitated was staggering. To what degree do we need to start thinking about security and privacy in a different way than we have been versus just continuing to do the same thing, but harder?

And what are the major takeaway lessons from this? Have we been too slack in doing it so far? Have we just been thinking about it all wrong? To both Bruce and the IAB.

>>Bruce Schneier: I mean, I think -- To me, the biggest takeaway is how robust the system is, I mean, how surprisingly robust, that if you have an enormous budget, you don't make A/B choices, you make A and B choices. And that's really why I think this is a hard problem. There are just so many ways that these adversaries are using to get at our data simultaneously.

>>Stephen Farrell: The other thing is that we need to kind of actually work on describing the threat model in a way that would be useful to people doing work in the IETF. I think that's maybe the first thing to do. You're more than welcome to jump in and help.

>>Murray Kucherawy: I plan to.

I'm curious that -- does that mean that the threat model we have had so far has been wrong or too narrowly focused or something like maybe both of those things?

>>Stephen Farrell: I think the only kind of new technical attack not quite in recent times was related to FLAME; right? So a lot of the times, it's other things we knew could happen, but just all of them happening at once under single control, with international collabora- -- blah, blah, blah.

>>Murray Kucherawy: Just think bigger? Right?

>>Stephen Farrell: Yeah. I mean, to some extent, we would consider, you know, a passive attacker on our link, but now it's on, like, every link.

>>Alissa Cooper: Sam.

>>Sam Hartman: Hi. Sam Hartman.

I've been listening to Brian's and Bruce's presentations and Stephen's presentation. I'm noticing -- I'm going to try to summarize something that seems to be falling out of all of this, which is that the Internet is different depending on where you stand. So I may decide that the NSA is not really someone I'm worried about, that they're friendly.

There is almost certainly someone out there who I don't think is friendly. And that's true for everyone.

Similarly, I may have an idea about a business model. I don't like clouds, because, you know, they concentrate my data. Yet, there's someone else who's out there who has -- you know, who says, "Yeah, but it's so cool that my phone automatically knows about my airline trips because Gmail scraped that for me. It's so cool that I don't get spam because, you know, I have this huge cloud that's dealing with that for me."

And I think that we, as technologists, are enablers. But it is -- we're trying to enable a lot of different things and to enable the Internet to be what people need it to be from all of those different standpoints.

So, okay, Sam, that's great. I've tracked your crap. How do you actually plan on getting us something that we can do from that?

Okay. Here are some specific suggestions.

First of all, I think it's pretty clear that in the past, we have -- we have considered certain attacks improbable. I think it is now clear that any attack we can imagine is sufficiently probable that we can -- should consider it. And that when someone goes, "Oh, yeah, but who would ever actually do that?" the answer is, "No, we're quite certain they already are," no matter what it is. If it's an attack.

And --

[ Applause ]

>>Sam Hartman: -- that is going to change the conversation. You can clap now. But whenever I'm sitting there doing a review of your document --

[ Laughter ]

>>Sam Hartman: -- and I'm saying, "No, you don't get to ignore this issue because it's obscure, it is a real security attack," you may not be clapping as much. But I think that is an important change we need to make.

Number two, I think we have a pretty clear answer that, yes, encryption, even outside of authentication, has value. That isn't to say we should do it everywhere. I'm not going to go so far. Like, there are places where I think opportunistic encryption is valuable. But I think the basic statement that we need to understand that encrypting has value even if that's all you get is another big change.
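
[Editorial sketch] The distinction Sam draws between encryption and authentication shows up directly in TLS APIs. As a rough illustration of what an "opportunistic" configuration means in practice, here is how Python's standard `ssl` module can be set up either way; the trade-off is that the unauthenticated context resists passive monitoring but not an active man in the middle.

```python
import ssl

def make_context(opportunistic: bool) -> ssl.SSLContext:
    """Build a TLS client context.

    With opportunistic=True you get encryption without
    authentication: traffic is protected from passive monitoring,
    but an active man-in-the-middle is not detected.
    """
    ctx = ssl.create_default_context()
    if opportunistic:
        # check_hostname must be disabled before verify_mode can
        # be relaxed to CERT_NONE.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx

strict = make_context(opportunistic=False)
loose = make_context(opportunistic=True)
assert strict.verify_mode == ssl.CERT_REQUIRED  # full authentication
assert loose.verify_mode == ssl.CERT_NONE       # encryption only
```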

Number three, I would encourage us to enable things, not to -- in all of our technologies. So rather than -- like in the HTTP discussion, yes, maybe it turns out that -- maybe it turns out that, you know, enabling opportunistic encryption pushes people towards -- you know, means some people don't get TLS certs.

We probably aren't the people who decide what the market should do. We should enable the technologies and allow the people sitting on the Internet from different standpoints to make that decision themselves and be enablers of technology.

[ Applause ]

>>Sam Hartman: And that's the same for cloud stuff. We should make sure that our protocols do not force you towards central stuff. If we try to make sure that they didn't permit central stuff, it wouldn't -- you know, our participants would fix that for us real quick.

So, basically, we should make sure that our protocols are deployable, both with and without, you know, large central stuff in the middle. And those are my specific suggestions for the IETF.

>>Alissa Cooper: Thank you. We all look forward to your reviews of our documents. Brian. And I, actually, I should note that we have less than three minutes per speaker left in the line, which is, roughly, what we've been doing so far. So thank you, everyone, for keeping it brief. But please keep that in mind both with your questions and with responses from the stage. Go ahead, Brian.

>>Brian Trammell: Hi. Brian Trammell. Yeah, I also intend to clap the next time I get a review from Sam. I want to respond. Actually, I changed what I wanted to come up here and say. I wanted to respond to a point that Bruce made that policy is not very good at looking at the big picture, specifically, with respect to the Internet of things.

We're not that good at looking at the big picture either. Right? We're moving in a whole lot of directions at once. Each area, each working group has a pretty good handle on the problem they're trying to solve, and they're running off in the direction of trying to solve that problem.

When we're now talking about moving sort of the entire focus toward hardening the Internet, toward making pervasive surveillance more costly, we're trying to move a lot of these sort of independent movements all kind of in one direction at once. And that's creating a lot of tension. So there's a lot of tradeoffs. We're talking about the cost of encryption. Encryption also increases latency, and we're putting a lot of effort into trying to decrease latency all over the stack. And, when we talk about decreasing observability for pervasive surveillance, we're also decreasing manageability and debug-ability.

So, when we layer things, we're adding robustness. But we're also increasing siloing. So people at the security layer don't see what's going on on the app side. The apps guys think the security guys are going to say here's a review from Sam, great.

So I just sort of would like to ask the panel how can we decrease that sort of -- the fact that we're a little bit fractured, that we're not doing a really good job of the big picture here, I think we need to do that better before we can address this problem sort of in a unified way.

So I just would like to ask for thoughts on that.

>>Stephen Farrell: I guess I'd push back a bit. I'm not sure we should step back waiting for the big picture to emerge.

>> No, no, no.

>>Stephen Farrell: Okay. You didn't say that. Also, if you look at the TLS 1.3 stuff that was presented yesterday, at the expense of some complexity, you can increase the security and reduce latency. We've learned some things about how to do that better. And we'll see if we do it well this time.

>>Alissa Cooper: Xing, did you want to respond?

>>Xing Li: I'd like to mention a little bit about globalization. Because, actually, the people in the world are the same. All people want to protect their privacy. However, there are tradeoffs. One danger I feel is, if we put too strong encryption in there, probably some governments will not try to connect to the Internet directly. And maybe that will split the Internet. That's not a good thing. So, actually, I encourage global collaboration from our engineers and the IETF community.

>>Alissa Cooper: Let's go to the front mic.

>>Bhumip Khasnabish: My name is Bhumip Khasnabish. I have a question for all three panelists. I don't think it's a technical issue. The people trying to monitor and attack have virtually unlimited technical and financial resources. So, with that in mind, what kind of socioeconomic business policies do you think society as a whole, or users, should be aware of? Especially at the Berkman Center, are there any projects or anything going on so that people become aware of it and can prevent this from happening, and is there any funding for those projects? Thank you.

>>Bruce Schneier: So I disagree that this is not a technical issue. Yes, we're dealing with adversaries with effectively unlimited budgets. But they cannot break the laws of physics. They cannot break the laws of mathematics. And those are on our side. The neat thing about crypto is math is on the defender's side. And the more we use that, the better off we'll be. With security, it's more balanced. But there are still technical things you can do to make the job harder. I agree with you that a lot of this is political. And it's internationally political, which makes it really hard. And, you know, I think long term we are going to move into a world where there will be more privacy and less surveillance. I think it's a generation away, but I do think it's there. Specific projects: Nothing comes to mind. I know there are things, but nothing comes to mind right now.

>>Stephen Farrell: I don't have much to add to that. But, ironically, Brian, the last guy at the mic, used to have a project called Prism, which is increasing privacy.

>> Could he sue for copyright?

>>Brian Carpenter: I'll just add the best thing we can do, as a technical community, to defuse the political debate, which is not our business, is to provide the technology that makes the thing secure and private.

>>Wes Hardaker: Wes Hardaker. So I'd like to start by telling you about a private conversation I had with my wife. Actually, I didn't have it. And that's really the point. On my way here to the IETF, I arrived at the airport nice and early, like I always try to do. I was going to send her a text. And I got out my phone, and I started typing it. And I realized I can't send this. Even though I was sending a private message to my wife, who would very much understand the point of my message, the statement "This airport is dead" just made me wonder, okay, who else is going to read it while I'm standing in the airport? And is it safe, you know, to actually make that statement these days? And it's sad, right, that I felt that I actually had to stop and think about what I was saying and what I was typing when, really, the only recipient was one person that I trusted to get the context right.

And, you know, there really was nobody in the airport.

But the fundamental problem is that in the IETF, we have been architecting something under two thoughts that are now functionally no longer true. And, if you look at all of our architectures, yes, as everybody else pointed out, we centralize everything. We're always sending messages to something else for it to eventually get to the end user. As Bruce said, well, you have to post stuff publicly to Facebook. But we're also posting private messages through Facebook. And that's bad.

The reality is two things have changed. One, with the ubiquitous deployment of IPv6, we can actually now get to them directly, right? So one of the fundamental scares recently has been NATs. If you have gone into any XMPP discussion or anything else: we have to send it to a central server because we can't get to them directly. Hopefully, that's going to change. But we need to start architecting the protocols to make use of that as opposed to still doing it the old way.
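
[Editorial sketch] As a small illustration of the reachability point above, Python's standard `ipaddress` module can classify whether an address is globally routable at all, or private/link-local space that implies a NAT or relay in the path. This is only a coarse first check; real reachability also depends on firewalls and routing.

```python
import ipaddress

def directly_reachable(addr: str) -> bool:
    """Rough check: is this address globally routable, so a peer
    could in principle connect to it directly? Private (RFC 1918)
    and link-local space implies a NAT or relay in the path."""
    return ipaddress.ip_address(addr).is_global

assert not directly_reachable("192.168.1.10")  # RFC 1918, behind a NAT
assert not directly_reachable("fe80::1")       # IPv6 link-local
assert directly_reachable("2001:4860:4860::8888")  # global IPv6 unicast
```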

But the other thing that's changed drastically: if you go back and look at the beginning of SMTP, why do we send mail through mail servers instead of directly? Because recipients were not online, and we had no idea whether they were online or offline. The reality now is the phone is always online. We need to change our thinking to take into account two things: One, we probably can get to them; and, two, they probably have something that will be online for them, now. Everybody is in that boat now. So, when we do have to send through a central server, we should also encrypt it, and not to the central server. TLS over everything actually doesn't help too much when you're going to be decrypted in the middle by everything.

Or everybody, as the case may be. So we really need to encrypt all the messages that we do send through private traffic. Not to them. Forget the on-the-wire transmission. But to the end recipient. And I know that's hard because key problems. But thank you.
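
[Editorial sketch] Wes's architecture -- encrypt to the end recipient, so a relay in the middle forwards only ciphertext it cannot read -- can be sketched with a toy symmetric cipher. The HMAC-counter keystream below is illustrative only; a real design would use an AEAD such as AES-GCM plus an actual key exchange with the recipient, and the key and nonce here are hypothetical placeholders.

```python
import hmac
import hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: HMAC-SHA256 in counter mode as a keystream.

    Illustrative only -- unauthenticated, and a real deployment
    would use an AEAD (e.g., AES-GCM) with a proper key exchange.
    """
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         hashlib.sha256).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

# The sender encrypts to the *recipient's* key before handing the
# message to any relay; the relay only ever sees ciphertext.
recipient_key = b"shared-with-recipient-only"  # hypothetical
nonce = b"unique-per-message"                  # must never repeat per key
ciphertext = keystream_xor(recipient_key, nonce, b"This airport is dead")
plaintext = keystream_xor(recipient_key, nonce, ciphertext)  # same op decrypts
assert plaintext == b"This airport is dead"
```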

>>Alissa Cooper: Eliot.

>>Eliot Lear: Wes, we look forward to your drafts and implementations to move that forward. Thank you.

>>Alissa Cooper: Matt.

>>Matt Mathis: There's sort of a knee-jerk reaction: What we need here is more security, stronger end-to-end security, stronger encryption, and so on. But we need something more complicated, which is modulated security. And let me illustrate what I mean by describing a problem that I'm aware of. There are a number of providers that provide strong TLS security for all their users. In the U.S., at least, corporations have a legal right to intercept all of their employees' email. These two facts are in direct conflict with each other. And the result is that the dogmatic point of view that we can't do proxy support in TLS means that corporations are forced to do something rather kludgy to make up for that fact. And that tends to be something in the form of forging credentials, forging root certs, so that their own employees can connect to the proxies. This is considerably lower security than actually building in an architecture that says: even though we think we're doing TLS, there are use cases for which we need to have proxies. And there are use cases for which we need to modulate the security. These need to be sort of modulated and understood. And the dogmatic position that more security is always better forces some use cases to work around the security.

>>Stephen Farrell: I'm going to disagree with you. The example you chose wasn't a great one, because there are examples for mail that have been around for quite a while. If you want to propose doing a man-in-the-middle attack on TLS as a standard, that will be about the sixth time. And you can get rid of me first before that's going to happen, basically. Yes, that's dogmatic.

[ Applause ]

>>Matt Mathis: So Chrome has cert pinning. And this makes deploying certain kinds of secure applications in enterprises hard to do on Chrome because of the cert pinning.

[ Speaker off microphone ]

>> That's a good thing.

>>Eliot Lear: I just want to highlight, Matt, that your ideas are, I think, not unique. I think they're good ideas. And I think the person two behind you in the queue has been actually doing a very good job of working. And that's Mark Nottingham who has been bringing those issues to light in the HTTPbis working group. Mark has a draft which I think you should read.

>>Alissa Cooper: Let's move to the next person.

>>Greg Mirsky: Greg Mirsky. I think it was already mentioned that this seems to be not a technical, and not even a national, but an international problem. And the problem is that legislation is spotty in terms of the rights of certain organizations to eavesdrop. And that's where I'm concerned; I think Jari said that in the Internet of things, we don't know where the data will get collected and where they end up. And with virtualization of the names, that becomes even more ambiguous.

So I think that that's important to keep in mind when we create these virtual domains and environments that -- to know where the data get all sent to and where they end up.

>>Alissa Cooper: Do you want to respond, Jari?

>>Jari Arkko: Yes. So, having information about where data lives is, obviously, important. I think there are other issues, obviously, in this space, I think. As was commented earlier, the ability to communicate directly and, you know, think hard before you use the cloud kind of advice is -- that also needs to be taken into account. And I guess for us at the IETF, the challenge is to make sure that we have the tools to be able to talk direct and distribute our data in several places as opposed to always having to put it in one place in order to do the functions that we actually do need as users. So, I mean, there's a lot of work there. But yeah. Understanding where data is is one part of it.

>>Alissa Cooper: Let's go to the far mic.

>>Doug Otis: Doug Otis from Trend Micro. We do depend on the Internet for what we're doing. And, in fact, we depend heavily on the cooperation of ISPs and law enforcement to take action when we find there's something bad happening. Maybe it's malware being loaded up to our mobile phones and that kind of stuff. So we need to be able to keep track of what's going on even though we're not really interested in what any one person needs to say. But, when there's a problem, that needs to be handled in kind of a cooperative manner. One of the things we need to consider when we consider putting encryption on everything -- and I think that's a great thing -- for server to server we need to take a different view of how that should work. We have the capability within all these protocols to exchange the client certificate with the server certificate. And I think that needs to be more of a common practice so that when we're exchanging information on a larger scale, that, if there is a problem, we can track it down.
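
[Editorial sketch] What Doug describes -- exchanging a client certificate along with the server certificate for server-to-server traffic -- is what TLS calls mutual authentication, and on the accepting side it is roughly a one-flag change. A minimal sketch with Python's standard `ssl` module; the CA bundle path is a hypothetical parameter.

```python
import ssl

def server_context_requiring_client_certs(ca_file=None):
    """Server-side TLS context that refuses peers presenting no
    certificate (mutual TLS). If ca_file is given, only client
    certificates signed by that CA are accepted."""
    ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
    ctx.verify_mode = ssl.CERT_REQUIRED  # demand a client certificate
    if ca_file:
        # Hypothetical path to the CA bundle trusted for clients.
        ctx.load_verify_locations(cafile=ca_file)
    return ctx

ctx = server_context_requiring_client_certs()
assert ctx.verify_mode == ssl.CERT_REQUIRED
```

In use, the server would additionally load its own certificate with `ctx.load_cert_chain(...)` before wrapping sockets; the point of the sketch is only that requiring the client's certificate is a configuration choice, not new protocol machinery.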

And I think that's one of the reasons why we're only seeing a handful of open services that are capable of existing. Because, once you get to a certain size, no one can stop you. And I think that's why you end up seeing only a handful of people existing in that environment.

If we had something that was more automatic, you know, at the client certificate level, I think then you could start seeing a greater diversity of aggregation points, plus, an ability to handle a problem.

>>Alissa Cooper: Thanks.

>>Doug Otis: One more thing. As we go to the IPv6 realm -- and that's coming -- we can't use the IP address any more.

And that's what we've been relying on, and that goes away.

>>Alissa Cooper: Tim.

>>Tim Polk: Tim Polk from NIST. On the category of what we're going to do: NIST announced on Friday that it's going to do a public review of its crypto standards process. I am actually going to be talking about that tomorrow in the 1:00 SAAG session. And so I would invite people who are interested to come to that.

I would like to -- I wanted to mention that, since Stephen had mentioned the NIST special pub 800-90a, I would like to add one observation. That was just an advertisement. But I would like to add the observation that we do have -- we can do better with the tools, but we have a lot of tools that are available that are not being used today. And, until we can make customers, whether it's home users or enterprises, believe that security is something that they're willing to invest in, it's not going to improve things. Because security does have a cost. It's going to have a cost in usability. It's going to have a cost in terms of performance. And it has a real cost when you have to decide whether or not you want to fail secure or not. People said oh, gee, if I turn on DNSSEC, then I can't get places I want to go.

Well, yeah, but you don't know that you were going to get where you wanted to go. Maybe that was something we need to consider. So I don't know how we solve that problem. But I want to point out that this is only a first step and that there's a real awareness issue that has to happen as well. Thanks.

>>Jari Arkko: Yeah. So cost of security and getting the users to actually use the tools that exist or will exist is key. But I think we have a unique opportunity now. And this is because all of this publicity. This is like a one-time opportunity that we have this decade or in a long time to actually do a major change. Because the users are motivated. Various organizations are motivated. It makes actual business sense for organizations to turn on more security because they can say, look, we're doing everything we can to protect your data. So now it is the time. We need to use the time very, very wisely.

>>Tim Polk: I would agree that it's a short window. And I hope that people will take advantage of it. And part of it will gain momentum if the customers respond to those sorts of things.

>>Alissa Cooper: Thanks. Harald.

>>Harald Alvestrand: Hi, Harald Alvestrand. A little bit philosophical. Three meanings of the word -- of the color black. One, sadness. When I put on a T-shirt this morning, I was a bit sad about what we are coming here to do. Because second meaning of black, what goes on in the darkness, what needs to be hidden. We are making common cause with those who have reason to hide, those who oppose regimes we don't like, and those who oppose regimes we like. We are seeking to empower those who want to hide. And that's a real cost to society. It's a cost we have to take. But I think it's worth it.

So the action we have to take is the third meaning of the word "black." Go dark.

>>Alissa Cooper: Thanks, I think.

[ Laughter ]

>>Alissa Cooper: Yeah, I just have -- go ahead. I'll respond after you.

>>Brian Carpenter: You know, I wanted to repeat something I said before. Definitely, this is personal opinion. I think our technology and the way we develop it should be neutral with respect to varying cultural views of legality and privacy. I don't think we should actually have the discussion that Harald sort of opened the rat hole door to.

>>Alissa Cooper: I was just going to say I think I agree that, you know, delving into the details doesn't help. But I don't think we should underestimate the contextual nature of privacy. So we shouldn't just think about this problem vis-a-vis, you know, dissidents who are located in oppressive regimes who are going to get thrown in jail if someone finds out what they're doing.

When I call my doctor, that's, perhaps, something that I don't want lots of people to know. I might not want my employer to know that I called my doctor or that my doctor called me. And that's a situation that we've all been in most likely, if you go to the doctor.

I don't think we should underestimate the extent to which the kinds of changes we're talking about here will affect a very, very, very broad base of anyone who ever wants any kind of confidentiality in any context. Because we have the potential to have that effect. Let's go to the back corner.

>>Terry Davis: Terry Davis. I want to take just a slightly different tack on the privacy issue. I first spoke on Internet hardening in '98 at Interop and the need for it.

I spent the last 15 years working in critical infrastructure. SCADA, ICS, and aviation communication to aircraft in flight.

We really don't provide any really good guidance on building these types of networks. This is an opportunity, as well as creating privacy, to help us build networks that are truly adequate for critical infrastructure.

In a decade, the aircraft that you flew to this meeting will be talking over the Internet, getting guidance from ground control. I think we want to do that very well. I would encourage the IESG again to consider forming a working group on critical infrastructure networking.

[ Applause. ]

>>Alissa Cooper: Do you want to respond?

>>Eliot Lear: Terry, Eliot here.

Just briefly, I couldn't agree more with your comments.

And I would highlight, there is a recent article about how ADS-B, which is one of the protocols for ground-to-air, needs substantial work in terms of its security profile. And perhaps there is an opportunity for collaboration right there.

>>> I'm well aware of that problem.

>>Alissa Cooper: Dean.

>>Dean Willis: Dean Willis.

I'm one of those people Stephen talks about that talks too much. But first I'd like to translate a lot of what I have heard today into Texan.

Encryption is like birthday cake. Generally speakin', the more layers to the cake, the better it tastes, but the messier it gets to eat.

[ Laughter ]

>>Dean Willis: Let's talk about the economics of that birthday cake for just a second.

In the U.S. and in most of the western world, the officers of publicly traded companies have a fiduciary responsibility to their shareholders.

They have to operate the company according to generally accepted practices to maintain a safe harbor that keeps them from being sued for losses that occur to the shareholders.

So, for example, if a company runs its accounting system not according to GAAP, the generally accepted accounting principles, and there's a loss to shareholders, the next thing that happens is there's a shareholder lawsuit. And those company officers can end up paying out of pocket to compensate the losses to the shareholders.

We sit in a unique position now, partially because of the highlight of recent disclosures, but also because of where we've been in the past, to help say what those generally accepted best practices for running a network are and to build a referenceable set of documents that companies can use to say, yeah, we built this right. You know, we did what they told us to do. If we have a major data breach, we're not in violation.

And that's what we need to do to get the large-scale adoption that we need to have these things happen.

>>Bernard Aboba: I'd like to reply. I think you're making an excellent case, because in addition to Sarbanes-Oxley, there's also HIPAA. And so there actually is a very large market for these things. And so when people said there's no money in it, that's actually not true. Having to deal with this on a daily basis, I can tell you that the number of solutions available are dramatically lower than what people actually want to buy. So there actually is a market, even for the level of security we have today, let alone what we probably should be doing.

>>Dean Willis: The third point on that is to make sure that those practices are kind of where we want them to be. I think we, as a community, need to adopt an important concept. And that is that in protocol design, susceptibility to pervasive surveillance is as much harm to the Internet as, say, poor congestion control might be. This needs to be a critical principle that we use in evaluating every proposal.

>>Alissa Cooper: Thank you.

Go ahead.

>>Joe Salowey: Joe Salowey.

I like the way Stephen had characterized this current situation we're in as an attack. One of the things when you're dealing with incident response to vulnerability disclosures, et cetera, is you never let a good vulnerability go to waste, because it lets you motivate people to focus on problems that need to be addressed and educate people as to what the problems are that we're facing today.

So I think we do have an opportunity, just like Jari had pointed out, this is a good time, because people are going to be more receptive to some of the changes we would want to make and probably need to make.

So, you know, a lot of the ideas that I've heard today are really good. So I think it's very promising, you know, and I think certainly increasing the amount of encryption would be a good thing.

One of the things that for me is -- working in product security space that keeps me awake at night is that there's a lot of soft targets out there, and it has nothing to do with encryption. It has more to do with the quality of the implementations. And one of the things I'd like to ask the group up here -- and we don't need a complete answer now -- is what sort of things can we do in our protocol specifications to make that -- the end result of these implementations more hardened and stronger? Thanks.

>>Alissa Cooper: Thoughts?

Go ahead.

>>Bruce Schneier: I think -- I mean, the more idiot-proof, the better. I mean, you see things fall apart at every step of the process. But the more it is prescribed, the less someone can mess it up. And so you'll see random number generators that are lousy because they're not specified as part of whatever the standard is. You see software choice, I mean lots of places it falls apart. The fewer options you give the person who doesn't know security, the less chance he's going to mess it up. That would be the thing I would say is most important.
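A minimal Python sketch of the failure mode Bruce describes -- when a standard leaves the RNG unspecified, an implementer may reach for a seeded general-purpose PRNG, whose entire output anyone who recovers the seed can reproduce, while a CSPRNG drawn from the operating system does not have that weakness. This is an illustration, not code from the discussion:

```python
import random
import secrets

def bad_session_key(seed):
    """What happens when the RNG is left unspecified: a general-purpose
    PRNG (here, Python's Mersenne Twister) is fully determined by its
    seed, so an attacker who guesses the seed regenerates every key."""
    rng = random.Random(seed)
    return bytes(rng.getrandbits(8) for _ in range(16))

def good_session_key():
    """Key material from the OS CSPRNG: unpredictable even to an
    observer of earlier outputs."""
    return secrets.token_bytes(16)

# A seed guessable from context (e.g., a timestamp) yields the same
# "random" key every time.
assert bad_session_key(1383724800) == bad_session_key(1383724800)
```

Prescribing the secure source in the specification, rather than leaving it as an option, is exactly the "fewer options" point.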

>>Alissa Cooper: Eliot.

>>Eliot Lear: I agree with everything Bruce just said, Joe.

And there's a joke that I use, which is, you know, the stuff should be designed by geniuses, not for geniuses.

And to be more direct, I think, Joe, that we need to be a lot more considered in terms of the user. I'm less concerned about hardware implementations and more concerned about usability and actual code -- and the actual deployment, not the -- yes, there are going to be bugs. And that is a big problem, especially in the Internet of things, where things won't ever get updated probably. But users and using the stuff have a difficult time today.

>>Alissa Cooper: Did you -- Stephen, did you want to respond?

>>Stephen Farrell: I think I kind of agree with what's been said. I think if we can kind of move more towards having whatever security and privacy features are defined in the protocols be the things that get used when people do interop testing. So make them more -- you know, so the mandatory-to-implement doesn't always get that done. You know, we do a lot of interop testing on the clear text version and then ignore the supposedly secure version, and then eventually we fix it and add a secure version which may be different from the first two.

So the more we can get people designing protocols where during the interop you're using the security and privacy features, the better.

>>Alissa Cooper: I think we're --

Go ahead.

>>Marc Blanchet: Short comment.

We've been saying that the scope of our work is usually protocol on the wire, if there are bits on the wire.

It seems to me that the comment is maybe that we should think about the code as well, because we are too narrow, and the other parts that we don't care about or don't work on are actually where the security weaknesses are.

>>Alissa Cooper: And this is, I mean, something we have discussed in the IAB, right, which is, we have tremendous mind share here, but it doesn't necessarily get sort of evangelized out to the people who have to go implement things, and is there a way to do that better after things have been specified.


>>Mark Nottingham: I came to the mic to disagree with something Eliot said or maybe modify it.

You said that we shouldn't encourage target dispersal. And --

>>Eliot Lear: (Off mic.)

>>Mark Nottingham: Okay. Well, then I misheard you. I don't think we should encourage it, but I do think we need to allow it in that I don't use Gmail. I used to run my own mail server. Now I use some guy in Florida who I trust to keep it secure. And I'm looking for somebody in Switzerland for some reason. Good country. And I'm able to do that because we have open protocols that allow us to do that.

Now, there's another situation. You know, one of the big emerging applications in the last couple of years has been this kind of file sync thing, you know, Dropbox, iCloud, whatever. We don't have any standard open protocols for that.

And some people came to the IETF in Berlin, and we had a bar BOF about -- they had an open source project. They wanted to standardize it. They wanted to have something interoperable. This is a hard problem. The economics are hard. But the turnout was really low. And these are the problems I think we need to be able to solve.

It's ridiculous that I am being shoved into using Dropbox when I want to run my own server or use my employer's server. And that's the stuff I'd like to see us, as a community, focus on.

Regarding the larger discussion, and then the previous comment about enterprises, there are a lot of things we want. I'm really unhappy with the current situation over all. I want things to be very secure. I don't want to have to think about this stuff as an Internet user, or my family.

But then there's what we can get. And I'm concerned here -- and I've heard this from a number of people -- that the perfect is the enemy of the good, that there are people who want to take a principled approach here and say this has to be perfectly secure, we have to have a very high bar. What I'm concerned about is that the line we have to walk as a community is the one between really good security and the potential for fragmenting the Internet or fragmenting the Web and having whole communities drop off, whether it's countries or companies, or having people blocking Web sites because they don't know them and they can't inspect the traffic. I don't know what the right answer is there. But I don't yet know that we have the luxury of being principled about everything.

And I'm sure Eliot's going to want to respond to that.

>>Alissa Cooper: Go ahead.

>>Eliot Lear: Sorry, very quickly, there is a spectrum between very high concentration of a small number of servers and total dispersion. And what I wanted to point out is that there is a tradeoff to be had in between. I went all the way to the other extreme when I was talking about little home devices.

>>Mark Nottingham: I'm with you there.

>>Eliot Lear: So there's this middle ground.

And to your other point, I certainly agree with you that there is a problem with the -- or the potential for good being the enemy of perfect. But there's an interesting thing going on in your working group, which is that people are looking to propel adoption through the incentive -- propel adoption of TLS through the incentive of gaining the benefits of HTTP2. And I think that's a very interesting thing for people to be studying. And it's a bit of a roll of the dice; right?

>>Mark Nottingham: Yeah.

>>Eliot Lear: If people accept HTTP2, then, man, are we going to see a whole lot of TLS deployment in that environment is essentially what's being said.

>>Mark Nottingham: Okay. Thank you.

>>Suresh Krishnan: Suresh Krishnan.

So this is a question really for Bruce.

One of the things you said is, like, if you do a lot of encryption, like we make it much more difficult for the passive monitoring. But there's kind of an ethical issue there in my mind, okay?

So I thought about this a bit. So you said, like, Tor as an example of something that works fine. So I could use Tor to start a flame war on Reddit. I'm not saying I did that, but, like, I'm just saying it's --

[ Laughter ]

>>> We thought it was you.

>>Suresh Krishnan: But the thing is, like, Tor's got (indiscernible) capacity. It's not a technical problem, but, like, it takes, like, balls of steel to run a Tor exit node. Okay? Maybe NSA is running a few. I don't know; right?

But it is -- When I'm doing something, like, that's, like, kind of trivial, maybe I still want it to be confidential, like, I don't know, Alissa said talking to a doctor or something.

But there is, like, different levels of protection people need when doing this. And I don't know how to balance this, like, with somebody who is, like, a whistle blower in some repressive country, for example; right?

So how do you present these ethical issues to people? I know of people who run, like, BitTorrent on Tor, which really, like, takes away the capacity for these things. So encrypt everything is not, like, a solve-everything solution, really, to do something.

>>Bruce Schneier: It's not, but I think it helps a lot. I think the more we encrypt, the better we do.

I mean, yes, you're right that some of the tools are kludgey. Running a Tor exit node is annoying. But the more people who do it, the safer it is for everybody who's using Tor. So I would like to see as many of us run Tor exit nodes as we can. And the more of us who run it, of course, less traffic load on each. So --
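What "running a Tor exit node" amounts to in configuration terms can be sketched in a few torrc lines. This is an illustrative sketch, not something from the discussion; the nickname and contact values are placeholders, and the directive names follow Tor's configuration manual. A reduced exit policy like this one limits the abuse exposure that makes exit operation "annoying":

```
# Illustrative torrc sketch for a relay that allows exit traffic.
# Nickname and ContactInfo are placeholder values.
Nickname ietf88example
ContactInfo operator@example.net
ORPort 9001
# A reduced exit policy: allow common web ports, reject everything
# else, which cuts down on the abuse complaints exit operators receive.
ExitPolicy accept *:80
ExitPolicy accept *:443
ExitPolicy reject *:*
```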

And, I mean, can we put that into some server code? Can we make it easier? I mean, Tor is someone else's project, but, again, that gets back to the usability. I think OTR is a really good example. That's Off-the-Record, an encrypted add-on for chat. It's really nice, really easy to use, works well, gives people security. What level of security? You give people a good level. Because people aren't going to be able to modulate, well, I need level 3 security for this, level 5 for that. That's too much to ask of the user. The user wants to go online and expect, you know, their Dropbox to be secure, their email to be secure, whatever that means.

So, again, the less we granularize this, I think the better we'll be because the more usable it will be.

>>Suresh Krishnan: And the second question is that so since you talked about, like, everybody should run an exit node, I actually thought about doing it. Okay? I started reading about it. And I have the technical knowledge to run a Tor exit node. Okay? But the thing is, like, if you look at the legal repercussions of running an exit node, it looks scary. Right? I become, like, liable for so many things that --

>>Bruce Schneier: Has anyone been prosecuted for that? I actually don't think it's scary at all.

Yes, there is safety in numbers here. It's scary to be a whistle blower. It's scary to build privacy tools. It's scary to use encryption. And as long as it stays that way, we lose. Yeah, if you use encryption, it is a red flag, which is why we should all use it even when we don't have to.

>>Suresh Krishnan: Okay. Thank you very much.

[ Applause ]

>>Alissa Cooper: So we're doing great on the time. Please don't forget to be brief.

>>Paul Wouters: Paul Wouters.

Speaking as a document editor, I see two kinds of messages coming to me now more than in the past. One is people telling me: your opportunistic encryption drafts are really awesome, they're great, please continue. But I'm telling you this in private, because I don't want to fight with my former colleagues from the company we shared ten years ago that failed to become a big company.

Second, I'm getting comments saying, Paul, your drafts are really awesome and nice. I would say so publicly, but I have in the past taken NSA money, and therefore I think if I publicly comment on your draft, then everything is tainted.

So I would like to remind people that we are technical people. Whatever you say, I don't care if you work for the NSA, I still encourage your comments to all of my drafts and encourage you to be as misleading as possible in reviewing these drafts, because if we can't figure out the technical things that are wrong, then we have lost anyway. Technology is our basis, and we should really trust in that process.

So please keep reviewing drafts publicly, no matter who funded you now or in the past.

>>Stephen Farrell: Well said.

>>Alissa Cooper: Thank you.

>>Jari Arkko: And there is -- our process is public. And as long as we have, you know -- everything's done in the open and there's a large number of people reviewing things, we have not so much to worry about, you know, someone affecting our standards in inappropriate ways.

So the more of you who can comment on things, the better.

>>Russ Housley: That was the point the IAB was trying to make in the OpenStand comment: review by a broad community is what's going to keep protecting the community's standards.

So we have to work together.

>>Bob Moskowitz: Bob Moskowitz, Verizon Enterprise Services.

In the Detroit area -- and maybe Duncan can pull off his second turnaround, we'll see. But I am an internal consultant to a lot of organizations inside Verizon. The Terremark people tend to ignore me, but I've been told getting HIPAA compliance in the Florida data center was really hard, and it constrained a lot of their typical cloud operations. And maybe that has some bearing on what's been happening recently. But, again, I'm not connected with the Terremark people, so don't ask me.

But I am connected with our wire line people, and I've been working and trying to get deployment of 802.1AE MAC layer security. But at the same time, I've been involved in discussions with a commercial off-the-shelf product that they can put on the fiber so they can get management and information that's occurring on that particular fiber link. So they don't want to turn encryption on, because whoever plugged that device in would then have to get somebody else to turn the encryption off so they can do the monitoring. So we've got this classic -- and that goes back to '97, when I tried to get IPsec inside of Chrysler for a finance application, and the network people said, "No, you've stopped our management capability."

So, again, these are these pressures that you get inside. But we are getting some movement to get that.

In terms of certificate usage -- and, Bruce, I sent you a private message. I hope you respond to it on elliptic curve. You'll read it later, hopefully.

We have two groups, our UIS and our MCS group, and I'm working with them to get more end user type certificates, 802.1AR certificates for devices in SDN use and so forth. So we are seeing -- And I'm working with other companies in this area also to get some movement on that.

But this is a hard problem also in terms of how to get the costs down, how to leverage existing certificates of individuals for medical, how we can get medical certificates based on a person's medical license to just say that this is, indeed, a doctor or nurse, pharmacist, to leverage that as saying that, okay, we now have a medical certificate, and our UIS people have been working on deploying that particular model.

But how do we get that now in a broader scope, beyond professionals, to the general public guy out there as well to leverage that identity for a trust model for certificates to get the cost down for those certificates?

These are challenging problems that we are working internally, and advice on this and guidelines would be useful.

And also, I also have a SAAG presentation coming up on application security tomorrow.

But my comment and interest -- again, I'm kind of stuck here wearing two hats, between the fact that I have worked for a major ISP for the past seven years, which limits me but gives me opportunities as well. But we need to work more on identity attacks -- not necessarily the crypto stream, but the identity attacks, as was mentioned: the forged certificates and the change (indiscernible) identity model, which makes it so you really don't have it. Just more guidelines on these things and more efforts on it.

>>Alissa Cooper: Thanks.

>>Hirotaka Nakajima: Hi. Hirotaka Nakajima.

In the IETF, we are working on the protocols, so maybe this might talk a bit out of scope. But I would like to share my experience, which may -- could be a potential threat of data and privacy.

So I arrived in Vancouver Saturday night and crossed the Canadian border. I picked up my baggage. Then the Canadian customs and border protection officers called me to the booth, and they forced me to open up my laptop, because they needed to scan my laptop and my iPhone, iPad, any digital device, for child pornography. Of course I don't have that. But they forced me to tell them the password. And they wrote the password down on a piece of paper, like a memo pad. And they took my laptop and went to a hidden secret room, and it took half an hour, and then they said I could go.

I think that's quite an experience. I knew that this happens, but this was my first time, and I had never been through this kind of experience. And, of course, we are working on protocols and we are technical people, so this is not about the protocol. But I think that once a password or any credential leaks -- once I tell the password -- there is no way to secure my privacy or data.

So I don't have any plans, but I think we should be working on those kinds of issues as well. Thank you.

>>Alissa Cooper: Bruce, do you want to --

>>Bruce Schneier: We know the U.S. does that. We know the U.K. does that. This is the first time I've heard of Canada doing that.

Since I've been involved with the Snowden documents, I travel abroad with a burner phone and burner computer. And, unfortunately, we are now living in a world where I have to do that. And you're right, this is not something that computer security or cryptography will fix, because an officer of the court is demanding that you hand over your keys. And in some jurisdictions, they can put you in jail if you don't. So now the solution is to minimize our data.

So this is a wholly different problem, but it would be nice if protocols would support that.

>>Alissa Cooper: Ted.

>>Ted Hardie: Ted Hardie.

I came up to disagree with Brian very strongly on two points.

The first point is, he looked forward to a long debate. And I hope we don't need one.

[ Laughter ]

>>Ted Hardie: I think this was quite a long line to stand in. And I think probably this is enough debate for it.

And the second is about the level of neutrality we need.

I think Stephen laid out very clearly that we are under attack. I think Wes's anecdote and Alissa's elaboration of that anecdote demonstrate one avenue by which that attack hinders the communication that people could have across the Internet. It harms our users and it harms the value of the network.

At the current scale of the Internet, it really harms humanity to have pervasive public surveillance. And I believe we need to do something about it.

Now, there's an old piece of IETF lore not invoked very often that the IETF in plenary can commit itself to something. And I would like to ask Jari, after Peter Saint-Andre has the last word, to take a hum where that hum asks the following question: Is the IETF willing to take the technical steps required to respond to the current attack, yes or no?

Thank you.

[ Applause ]

>>Alissa Cooper: So I see that Russ is -- Russ is adding -- yes, Brian, you will respond. Russ, I believe, is adding that one to the list of potential hums that we had perhaps planned to take.

>>Brian Carpenter: This is Brian. Just respond directly.

I will be delighted if the debate is short. I didn't mean to say it was a good thing that it will be long.

Secondly, yeah, I have no problem with regarding, you know, the news of the last couple of months as a very interesting paradigm of an attack model. I just think that doesn't mean that our technical response should be focused only on that attack model, because there are other attacks now and in the future, including, for example, the attacks on privacy by business rather than by governments that we should be equally strong in resisting.

So I don't think that changes my view of what we should do next.

>>Alissa Cooper: The last word, Peter.

>>Peter Saint-Andre: Hello. My name is Peter Saint-Andre. I'm in charge of practical actions.

As I announced on Monday morning, we are working to ensure that there is ubiquitous hop-by-hop encryption on the XMPP network by May 19, 2014. So if you're running a jabber server, please go to the test site, where you can run a report on how good or bad the security of your server is. And you will, through that, perhaps discover that you have improvements to be made.
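The kind of check such a server report performs can be sketched against the XMPP stream features defined in RFC 6120: after the stream header, a server advertises whether STARTTLS is offered and whether it is mandatory. A minimal Python sketch; the features stanza below is a constructed example, not output from any real server:

```python
import xml.etree.ElementTree as ET

# XML namespace for STARTTLS negotiation, per RFC 6120.
TLS_NS = "urn:ietf:params:xml:ns:xmpp-tls"

def starttls_status(stream_features_xml):
    """Given a <stream:features> stanza from an XMPP server, return a
    pair (offered, required) describing its STARTTLS posture."""
    root = ET.fromstring(stream_features_xml)
    starttls = root.find("{%s}starttls" % TLS_NS)
    if starttls is None:
        return (False, False)
    required = starttls.find("{%s}required" % TLS_NS) is not None
    return (True, required)

# Constructed example: a server that mandates encryption before anything else.
features = (
    '<features xmlns="http://etherx.jabber.org/streams">'
    '<starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls"><required/></starttls>'
    '</features>'
)
```

A server whose features omit `<starttls/>` entirely, or omit `<required/>`, is the kind of deployment such a report would flag for improvement.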

We are working toward ubiquitous encryption. This means that in the future, if your server doesn't do the right things, you might not be able to join certain chat rooms -- if I can get those folks to sign our little manifesto.

I'm also working with Paul Wouters. He didn't mention, but we are working on an Internet draft documenting OTR so that we can have more implementations. And, hopefully, that will be (indiscernible) as an informational RFC sometime in the future. I don't particularly like OTR. I think it's kind of a messy protocol. But it's good, it works, and let's get it documented.

I think that we have gone quite a long ways with the architectures that we assumed back in the early days of the Internet, and I do not know that those are sustainable. We are working -- you know, we're working to encrypt the XMPP network, OTR is nice end to end. I think more peer-to-peer technologies might be something that we really need to investigate in the future. And I know a lot of people who are hacking on such technologies right now.

To echo what Wes and Ted said, we have a free society, to some extent, and that free society depends on the fact that not everything is public. And if everything is public, if everything is recorded, people will self-censor. And we cannot have a free society under self-censorship.

[ Applause ]

>>Alissa Cooper: Thanks. Thank you, everyone. I think this was a really productive discussion. And when they're done talking to each other, I'm going to turn this over to Jari and Russ.

>>Russ Housley: So I have the difficult question of trying to do exactly what Ted has been doing. And those of you who might have been watching us, I've been jotting down some hums that we might want to take based on the things that were said here.

I'm down to five. And if I even attempt to do any of these and a line forms to discuss what the hum means, we will not learn anything.

So I'm going to ask indulgence that we don't do that, so that we have a possibility of learning whether there is consensus to do something.

So I will start with Ted's hum, which I believe is: Is the IETF willing to respond to the pervasive surveillance attack? Yes or no?

Hum now for yes.


>>Russ Housley: Hum now for no.


>>Russ Housley: I think that's overwhelming.

[ Applause. ]

>>Pete Resnick: Bad hum!

[ Speaker off microphone ]

>> I think he has an opinion.

>>Russ Housley: You think?

So, if we all believe that pervasive surveillance is an attack, then the IETF needs to adjust our threat model to consider it when developing standards track specifications. So should we consider this evolved threat model when deciding whether standards track specifications are acceptable or not? Yes or no. Hum now for yes.


>>Russ Housley: Hum now for no.


>>Russ Housley: That, too, was overwhelming.

Next question: The IETF should include encryption even outside of authentication where practical. Yes or no?

Hum for yes.


>>Russ Housley: Hum for no.


>>Russ Housley: That, too, is overwhelming.

[ Applause ]

>>Russ Housley: The IETF should strive for end-to-end encryption even when there are middle boxes in the path. The IETF should strive for end-to-end encryption even when there are middle boxes in the path. Hum for yes.


>>Russ Housley: Hum for no.


>>Russ Housley: That's mixed, but more yesses.

>>Russ Housley: And, finally, the IETF should create secure versions of popular non-secure protocols.

[ Laughter ]

>>Russ Housley: This is a response to the Dropbox-like comments. And so where we know there are insecure but highly-used protocols, should the IETF take on the creation of secure alternatives?

>> Like SMS?

>>Russ Housley: Hum for yes.


>>Russ Housley: Hum for no.


>>Russ Housley: Mostly yes, a couple nos. Thank you very much for indulging me by not having a debate about each of those. We do have three minutes. And I guess Olaf really wants to say something.

>>Olaf Kolkman: Olaf Kolkman. I really want to ask what are the next steps, because these questions are broad. And there's a lot of nuance in this discussion. Having a sense of the room, which I understood your question was, is what you now got.

>>Russ Housley: Exactly.

>>Olaf Kolkman: But, getting the nuances right, how are we going to do that?

>>Russ Housley: There are at least two things we are still going to do this week. The first is right after lunch: the perpass BoF, where we will discuss some potential work that the IETF might take on in that area. And the IESG and the IAB have a, quote -- we call it our hot spots discussion -- at the end of the week. And one of the topics we'll be discussing at that point is how this information about what the community wants to do informs our actions.

>>Jari Arkko: Russ.

>>Russ Housley: Yes, Jari.

>>Jari Arkko: So, in terms of what the next steps are for the hums or the positions the room seems to be taking today, I think that's where Brian's long process comes in, I guess. And, you know, of course Olaf is right. This is very nuanced, and it's more detailed than we heard today. I think this gives us very good direction on how to move forward. I think we probably want to document some of this as an RFC and actually get text written out. And, if you look at some of the older RFCs, it's not just a single sentence, it's pages and pages of nuance. So that is the task that we have to look at as well in the coming months.

>>Russ Housley: Yes, Derek.

>>Derek Atkins: Derek Atkins of Mocana. A question to you Russ and to everybody else. The IETF has in the past already standardized on certain opportunistic technologies which have never been implemented. So, if we're just creating standards, how do we get them to be implemented and then deployed in the wild if we've already failed at doing so? And what do we need to change to get those implementations out there when we haven't in the past?

>>Russ Housley: Clearly, that has to be part of the discussion. And I certainly don't have an answer off the top of my head. And BTNS is one example of the work you're talking about.

>>Eliot Lear: And, Derek, that's actually part of the topic of the ITAT workshop is how do we get protocols implemented.

>>Russ Housley: v6 aligns with what you just said as well and has nothing to do with security. I'm sorry. We are out of time.

>>Xiaohong Deng: Xiaohong Deng. One last question. I'm really thinking out of the box. Consider this: Shipping things from one place to another around the globe is not something new. Why were people not complaining that their mail was stolen or their goods were lost during shipping? It's something we have been doing for hundreds of years. One thing I can think of is that maybe encryption was one factor. And, second, the mail carriers are all identified, so they are less likely to steal the mail. And also consider this: if your mail were stolen or lost, you would sue your delivery company, right? Those are the things I can think of.

>>Russ Housley: Okay. Well, that's certainly outside -- the last part is certainly outside the technical part that we came here to talk about. But, as we said in the beginning, there are things that other people need to do as well.

Enjoy your lunch.

[ Applause. ]


