LawBytes Podcast – Episode 22. This transcript was generated automatically and may contain errors.
Michael Geist:
This is Law Bytes, a podcast with Michael Geist.
Dan Albas:
Minister, number nine of your guidelines: freedom from hate and violent extremism. The prime minister has obviously been talking a lot about protecting Canadians from hate and violent extremism, as well as disinformation. Now, I believe no one here defends hate speech, and all Canadians deserve to feel safe in their communities and online. My question is, how will you enforce this measure? How will you monitor these platforms while also protecting free speech?
Navdeep Bains:
So free speech is absolutely essential. It's part of our charter rights and freedoms. This is why I became a Liberal. And this is really core to our democracy and what it means to be Canadian. But at the same time, there are clear limitations to that when it comes to hate, for example. And we see newspapers and broadcasters that hold themselves to account when it comes to not spewing that kind of hate on their platforms. So clearly, these digital platforms that have emerged also have a responsibility. We are all very aware of the 51 individuals that were killed in Christchurch, New Zealand. And that really prompted this call to action, where the prime minister was in Paris to say platforms need to step up. If they have the technology, if they have the ability to bring people together, to connect people, and they are investing in AI and all these different technologies, they need to deploy those technologies to prevent those platforms from being used as means to disseminate extremism, terrorism or hate. And so what we're trying to do as a government is really apply pressure to these platforms to hold them to account. And those platforms recognize they need to step up as well. And that's one key mechanism of how we want to deal with this.
Michael Geist:
The debates over intermediary liability, which focus on what responsibility should lie with Internet platforms and service providers for content posted by their users, have been taking place around the world in parliaments, op-ed pages and the broader public debate. Much like the exchange you just heard between Canadian Conservative MP Dan Albas and Innovation, Science and Economic Development Minister Navdeep Bains from earlier this spring, there are no easy answers, with policy choices that have implications for freedom of expression, online harms, competition and innovation. To help sort through the policy options and their implications, I'm joined on the podcast this week by Daphne Keller, the director of Intermediary Liability at the Stanford Center for Internet and Society. Daphne, who served as associate general counsel for Google until 2015, worked on groundbreaking intermediary liability litigation and legislation around the world while at the company, and her work at Stanford focuses on platform regulation and Internet users' rights. She recently posted an excellent article on the Balkinization blog that provided a helpful guide to intermediary liability lawmaking, and agreed to chat about how policymakers can adjust the dials on new rules to best reflect national goals.
Michael Geist:
Daphne, thanks so much for joining me on the podcast.
Daphne Keller:
Thank you. It’s good to be here.
Michael Geist:
Great. So as you know, there's been a lot of momentum lately towards regulating online speech and establishing more liability for the large Internet platforms. That's an issue that we've been grappling with, really, I think, since the popularization of the Internet back in the 1990s. But today there seem to be more and more countries expressing concern about online harms and looking especially to the large platforms, the Googles and Facebooks and others, to do more, with real legal and regulatory threats if they don't. So before we get into some of the challenges inherent in these kinds of 'do something' demands, I thought we could set the groundwork a little bit from a couple of perspectives: both what the law says now and what the platforms have been doing. Why don't we start with the laws, recognizing there are differences, of course, between lots of different countries. Where have we been for the last number of years, even going back a couple of decades, with respect to some of these liability questions?
Daphne Keller:
Well, a lot of countries never enacted Internet-specific content liability laws. So depending where you are in the world, it might be that these things get resolved just based on existing defamation laws or existing copyright law. But in the U.S. and the European Union, the law has been relatively stable going back two decades-ish. In the US, we've had two very major statutes that occupy almost the whole field. We have the DMCA, the Digital Millennium Copyright Act, for copyright, and that sets out a detailed takedown process with a lot of prescriptive steps. And then the other major U.S. law is Communications Decency Act Section 230, generally known as CDA 230, which is a very broad immunity for most other kinds of claims, for anything that's not intellectual property or a federal crime. So for things like defamation or invasion of privacy claims, the platforms are just immunized. In Europe – is it useful if I go into some detail about Europe, or is that wandering off topic for you?
Michael Geist:
I think it's really useful, because from a Canadian perspective in particular, we've on the one hand now got the USMCA, which seems to put some of the U.S. rules in place in Canada, at least at a high level, via trade. But at the same time, I don't think there's any question that what's been taking place in Europe is influencing a lot of the thinking amongst some Canadian politicians.
Daphne Keller:
Yeah, okay. So the main law on platform liability at the EU level is the e-commerce directive, which was passed in 2000 and is implemented slightly differently in different member state laws. But the basic concept is you get limited immunity if you are a certain kind of intermediary: you have to be a hosting, caching or transit provider. So it's a little bit of a funny immunity in that it's not clear if it covers search engines or some other things you might expect to be covered by intermediary liability protections. But if you're eligible for those safe harbors, the rule is basically you have to take down unlawful content if you know about it. However the member states and the courts implement the law, they can't compel you to go out and proactively monitor. It's just a reactive, knowledge-based obligation. And that, I think, has had some real shortcomings, just because it lacks a lot of the procedural protections that you see in something like the DMCA, where, for example, the person who's being accused of copyright infringement is supposed to be able to get a notice that it's happened and be able to challenge it and so forth. There isn't that kind of detail in most European laws. And so platforms have an even greater incentive to just take an accuser's word for it and go ahead and take content down, even if it's not at all clear that it's illegal, because it's much safer to take things down and avoid risk for yourself. And what empirical data we have shows this happening: lots and lots of unfounded allegations and lots and lots of erroneous takedowns.
Michael Geist:
Right. I mean, I think the situation can sometimes be somewhat similar in Canada, where, without some of the clear-cut procedural safeguards, faced with the question of what might or might not be unlawful content, large platforms may sometimes err on the side of taking down just because it's simpler to do that. So we've got large platforms having some amount of protections, safe harbors, in both the major jurisdictions, with stronger procedural protections such that, I suppose, the bias is more to leave up in the United States unless the process is met, whereas in Europe that may reverse. How do the companies handle some of those kinds of differences? Is it as simple as: in the US they're more likely to err to leave content online, and in Europe and perhaps similar countries without those procedural protections they're more likely to take things down?
Daphne Keller:
Certainly. I mean, if we're talking about the big platforms like the Facebooks and Googles and Twitters of the world, they all have nationally specific versions that are targeted to users in a particular jurisdiction and often are optimized for them in ways that are about commercial success. There will be a Google Doodle that's relevant for a local holiday that's shown just in that country, for example. But also, by having different versions of the service for different countries, you can sort of sandbox legal compliance and say, OK, we've established that this content is illegal in France, so we're going to take it down from the French version of the service, but we're not going to apply French law globally.
Michael Geist:
Right. So that, of course, gets us into the question of things like the Equustek case that we had in Canada, where you get a single country like Canada trying to make those decisions not just for its own citizens, but effectively for others via a court order. But we'll park that for the moment, because this stuff gets so complicated so quickly, and stick primarily to the pressure for more regulation: the sense that somehow the rules, as you've just articulated them, are, at least in the minds of certainly some politicians, and we certainly see it as part of the discourse, not good enough. That erring on one side or the other still has left us, in a sense, with a certain amount of harm online, and I think there's a greater concern and appreciation for that. So there is unquestionably mounting pressure to do more from a regulatory perspective as a way of requiring, in effect, these large platforms to do more. Now, you've been really prolific on the issue and written all different kinds of things on it, but it was a piece on the blog Balkinization that really crystallized it for me, because it highlights the challenges of intermediary liability laws. I guess as a starting point, what are we often trying to balance when it comes to these laws?
Daphne Keller:
So there are generally three goals that legislatures are trying to balance. One is to prevent harm: to take down content if it's defaming someone or if it's movie piracy or, you know, causing harm. Another is to protect free expression. And obviously, there's this tradeoff where, if platforms are very afraid of liability, they're likely to err on the side of taking things down, and so controversial speech gets suppressed and so forth. And then the third goal is protecting technical innovation and the economic growth that can come with it. So, you know, if you are a small startup, it's really important to have immunities and to know that you're not going to be required, for example, to build a $100 million content filter. Because right now, at least in the US and in Europe, if you start a new platform and you want to compete with Facebook or compete with Twitter, you can know with relative certainty what kind of legal exposure you're setting yourself up for and what it is you're going to have to take down and potentially pay lawyers for. But if that becomes less certain, then it's harder and harder for small companies to enter the market and for people to experiment with new technologies. So just to recap, the three goals being balanced are harm prevention, free expression protection, and innovation.
Michael Geist:
Sure. And I guess before we get into how you move some of those dials with those three goals, I'm going to assume that many countries will look at each of those three policy objectives somewhat differently. Some may have constitutional norms that provide very strong protections on the freedom of expression side and are more willing to give, let's say, on the innovation side.
Daphne Keller:
Yeah, absolutely. And that kind of manifests in two ways. One is that some countries prohibit more speech than others. So, you know, they strike a balance, for example, that protects privacy more and sacrifices free expression in exchange, or vice versa. But it also manifests in how countries set up their platform liability rules. You know, whatever it is that you are prohibiting, your platform liability rules are going to lead platforms to err on one side or the other. And so if you are starting from less speech-protective goals, then maybe you're more tolerant of a rule that's going to lead platforms to take down a little bit too much speech, or a lot too much speech.
Michael Geist:
Right. And certainly we've seen, at least in some places, with or without some of the constitutional norms around freedom of expression, a great deal of emphasis lately on the harm side.
Daphne Keller:
Absolutely.
Michael Geist:
And if that's the priority, then, if we're trying to deal with each of these three things, there may be real implications, I think, is what you're getting at, whether for fostering innovative competitors in this space or for the safeguards around freedom of expression.
Daphne Keller:
Yeah. And I think right now we're seeing a big tendency for policies from Europe to get exported to the rest of the world, either via other countries adopting similar laws or via platforms taking European law and just applying it globally. But that's kind of problematic, not just because of the conflict with United States law, which is what you hear about the most, but because of the conflict with a lot of other countries' laws. If you compare, in human rights law, the European Convention or the EU charter, they prioritize things like privacy and personality rights protection relatively highly, compared to the Inter-American convention, which is explicitly set up to prioritize free expression more highly.
Michael Geist:
Right. And we ran into some of those questions in Canada last year around website blocking-related issues, where again it was freedom of expression versus copyright versus privacy versus even net neutrality-type issues, and you've got to grapple with each of those kinds of competing objectives. Why don't we stay for a moment with the implications for freedom of expression, because, at least as part of the discourse lately, there's been a tendency amongst some, certainly, to sideline that, to sort of say, well, listen, of course it may have some implications, but we're now focused more on the harm. As we start getting into intermediary liability-type rules, what ultimately are some of the potential negative effects, I suppose, for freedom of expression?
Daphne Keller:
Well, I mean, already we see things like governments abusing copyright takedown systems to suppress criticism. The Government of Ecuador got caught using DMCA requests to try to take down police brutality videos and critical journalism. So, you know, even with the systems that we have now, there's a lot of opportunity for abuse. Sometimes it silences really important political speech. Other times, the abusive takedown requests are one commercial competitor trying to silence another, which is also a problem. But beyond that, there's just a lot of room for important speech to disappear.
Daphne Keller:
Maybe the most politically consequential shift that I see right now is the tremendous emphasis in Europe and in some other regions on terrorist content, because I think as platforms err on the side of taking down too much to be safe, the thing that's kind of adjacent to so-called terrorist content is likely to be political speech about tough issues, you know, about American military policy in the Middle East or about immigration policy in Europe. And so that sort of erring on the side of taking down too much, when what you're looking for is potentially violent extremist-supporting speech, threatens some really important stuff.
Michael Geist:
That's interesting. I mean, in Canada, we've largely avoided the takedown rules in copyright that you referenced. Successive governments have, in a sense, I think, looked at the experience elsewhere and seen some of these kinds of implications, such as the Ecuadorian example that you just provided, and largely avoided adopting that, though many of those platforms, which of course are very popular in Canada, still effectively use takedown systems. So Canadians find themselves subject to it at a certain level, even if it isn't found within our laws. It's striking to talk about some of these decisions and the removal of content. What role, if any, do the courts play in all of this, or does it just fall to the platforms, and they are the ones making these calls?
Daphne Keller:
Well, it depends where you are. There are some very interesting rulings internationally saying the courts have to be involved in some countries. So in Argentina, the Supreme Court ruled that for most kinds of content, a platform doesn't have any legal obligation to take it down until a court has looked at it, given it full and fair due process, and adjudicated that it's illegal, because they didn't want to put platforms in the position of being the arbiters of speech rules. There is a similar ruling from the Supreme Court of India saying you need an adequate government authority to decide what's illegal, and you shouldn't put it in the hands of platforms. That, of course, isn't how it has worked in the US with copyright, or in Europe with the knowledge-based takedown systems that they have. And that created a sort of asymmetry in the access to remedies for the people who are affected by takedowns. If you're somebody who is a victim of defamation, or a rights holder whose copyright is being infringed, and a platform doesn't do what you want, you can sue them. You can take it to court and get your rights enforced. But if you're someone who's an online speaker and you have been wrongly silenced by a false accusation or an error, in most countries you don't have standing to go to court and challenge that. So there isn't a way to correct the errors of over-removal. There's only a way, using courts, to correct the errors of under-removal.
Michael Geist:
I mean, for those that are accustomed to seeing due process as a core part of protecting freedom of expression, the notion that we would ultimately leave these decisions to large platforms can be pretty frightening. And with the site blocking issue in Canada again, the proposal that was put forward was one that did not involve direct court oversight, which was a part of where the real concerns lay when you start vesting so much responsibility in these platforms to make these kinds of decisions. There are those that say that's appropriate, in part because they are increasingly likening the platforms to publishers and saying, this sure looks a lot like a conventional publisher, shouldn't they have the same kind of responsibility? What are some of the implications, as you see it, of treating large Internet platforms as akin to a conventional publisher?
Daphne Keller:
Well, I think it would be impossible for them to function if they were treated like publishers. Publishers do pre-publication review of the editorials that they put up or the, you know, TV shows that they air. And if there is something controversial in there, they have a lawyer look at it and decide if it's legal. You can't layer a process like that on top of Twitter or Facebook. You know, what are they supposed to do? Hold all of our tweets while they have their legal team evaluate them? There just isn't a model where truly publisher-like legal responsibility can be put on platforms while we still get to post things instantaneously and communicate and have a soapbox or talk to our friends. You know, all of the uses that we value that come from having an instantaneous communication platform on the Internet depend on those intermediaries not having to vet everything we say.
Michael Geist:
I mean, that does highlight the particular challenge that I know you've seen, and I saw at least at one of the content moderation at scale conferences, when you start getting into just the sheer amount of content that exists and what it ultimately means to put responsibility on a platform potentially to vet all of that. Even without vetting, even just trying to deal with all of it is something that we haven't really seen before in publishing or content history. It's everybody having the opportunity, in a sense, to speak, and using these platforms to do it. What are some of the implications if you move towards almost a one-size-fits-all approach, saying that we are going to have this requirement, whether it's vetting beforehand or even taking action after, given the scope and size of what's taking place? If we treat the Facebooks as akin to, you know, other platforms or large sites that have a lot of user-generated content out there, the Wikipedias or Reddits of the world?
Daphne Keller:
Yeah. Well, I mean, I do want to be clear that I'm not saying our only choices are give them complete immunity or, you know, lose the Internet. That is the point of the Balkinization piece: there are a lot of knobs and dials you can turn in the laws. You could have an accelerated TRO process to get something taken down, or you could have some kinds of content where we do expect platforms to know it when they see it and take it down, and others where you wait for a court, which is what the law de facto does anyway right now. You know, platforms even in the US have to take down child sex abuse material immediately if they see it. They're not supposed to wait for a court to assess it. But the rule is very different for defamation, you know, where it's often very difficult to know the correct legal assessment. So, with that background, I don't think we need an all-or-nothing system, and I'm not saying lawmakers in the 90s got things perfect and we should never re-ask any questions. But whatever the obligations are that we put on platforms, the kinds of things we might reasonably ask Facebook or YouTube to do are very different from the kinds of things we might reasonably ask a small local blog or a two-person company developing a chat app or, you know, smaller competitors to do.
Daphne Keller:
And I think lawmakers are often falling into a trap where they say we need to regulate platforms, and what they have in mind is Facebook and YouTube. And they know that YouTube can do things like spend a hundred million dollars developing a copyright filter, and they know that Facebook can do things like hire, is it 20 or 30 thousand people at this point, to do content moderation. You know, they can just sort of really move mountains and put tremendous resources into this. And so they craft laws accordingly. They say, well, platforms should have to filter, platforms should have to have very rapid human review when they're notified that something is unlawful. And that's tolerable for Google and Facebook. I think those laws are very likely to change the major platforms so that they take down a lot more lawful speech, but they're not going to go out of business. But if you are Medium or Automattic or even Pinterest or Reddit – Reddit has 500 employees. They don't have a multi-tens-of-thousands-of-people moderation team. And so the kinds of rules that might plausibly be imposed on very large platforms just won't work for small platforms.
Michael Geist:
No, I think that's a good point. And I think we're certainly past the prospect of saying there are no rules out there. As you've highlighted, there always have been some, and in some places there's been an expectation of even more aggressive takedown and moderation. But it's clear we're moving more and more; the question is, I think, as you've put it, how you adjust the dial at a certain level. One of the things that was striking to me is how much emphasis there has been on platform responsibility for harmful speech, let's say, as opposed to the focus on individuals themselves. So, you know, in the aftermath of Christchurch, for example, a terrible event, almost all the focus seemed to be on what Facebook did or didn't do, or YouTube did or didn't do, as opposed to the individual who did this and other people around that who might have been involved. Do you have thoughts on what we might do to focus not just on platform responsibility here, but individual responsibility as well, where there are people purveying hate or engaging in things that are illegal under various laws?
Daphne Keller:
Yeah, I think the focus on platforms is on the one hand understandable, because they represent a choke point. You know, they can shut down a lot of individuals in situations where it's hard for plaintiffs and law enforcement to go find those individuals. But they're a pretty bad choke point, because they won't stand up for the individual speakers' interests outside of, you know, relatively special circumstances. But on the other hand, focusing on the platforms really risks failing to address the underlying issues. And we've seen this in the EU terrorism context. There's been tremendous energy put into making platforms take down videos that are recruitment videos or terrorist violence videos. And then when civil society organizations in Europe have asked the police, well, how many of those uploaders did you go try to find, or how many of the video creators did you prosecute, how many actual investigations came out of this, there don't seem to be a lot of efforts being put in that direction. And so, you know, it's not that all enforcement should move off of platforms and onto individuals. But it certainly is the case that focusing so excessively on platforms is missing out on really important pieces of solving the problem.
Daphne Keller:
I mean, for many cases there is another complication here, as you know well from the copyright context and from other contexts where you work, which is that online speakers who are sharing illegal content are often anonymous. And so if we say the law should go after the speakers more, you know, that starts inviting lawmakers to strip away at anonymity rights or propose that platforms should have to retain the real ID of people who post content. So, you know, there are huge policy tradeoffs in any direction there.
Michael Geist:
Yeah. I think it's really striking just how, each time you peel back a little bit on some of these policy choices, it's not the slam dunk that you sometimes hear about as part of the discussion, the "you just regulate that; they broke it, they've got to fix it" sort of thing, because there are those kinds of choices. I assume you don't have much of a crystal ball and it's tough to know where we're necessarily going. So rather than closing by asking what this landscape is going to look like in 12 months or 24 months, I guess I'm curious: are you optimistic that, as there is action, because I think it certainly feels like there's a lot of momentum there, countries and politicians are going to get the complexity that you're highlighting here? Or are we at a moment in time where there's the so-called tech-lash and strong momentum towards "you've got to do something", such that some of those implications will simply get lost in the rush to do something?
Daphne Keller:
I'm not optimistic in the US, and this is part of why I put up that Balkinization piece, because I see people proposing laws that are just ignorant of the known doctrines that can be deployed in intermediary liability. You know, they say, oh, well, let's just tell platforms to be reasonable, without looking at what a platform is likely to do. If they have a vague standard, well, they're likely to just take everything down to avoid risk. So I think we are at risk in the US of getting laws that are so badly drafted that they might just be unconstitutional. But going through the process of passing a law and then litigating to figure out if it's unconstitutional is not a really good way to arrive at standards. In Europe, I'm in a way more optimistic. It's not that I like most of the legal proposals that have been coming out of Europe now, but that's mostly because they represent a sort of policy tradeoff that I wouldn't make, between free expression protection and harm prevention, for example. But European civil society has been very active on intermediary liability issues for quite a while. And so you tend to see in the legal proposals coming out of the EU at least process protections, you know, at least ideas like: if you're going to use a technical filter to identify supposedly unlawful content, you should have some humans double-check to make sure that filter didn't make a mistake. Or you see legal proposals saying things like, you should notify users and give them an opportunity to challenge. Or the latest draft of the terrorist content regulation, which is very close to becoming law there, has some really impressive transparency provisions for governments, saying not just that platforms have to be transparent about what they're taking down and why, but also that if governments are requesting that content be taken down, they need to tell the public what it is that they're doing.
So we are slowly moving toward knowing what the dials and knobs are and what the things are that we can do to help create more protections. And in a way, slowing things down seems like our best chance of building up more education in the policymaking community so that we get better laws.
Michael Geist:
Well, I think you've done a lot to try to help educate, because the stuff that you've been working on, the large databases that highlight the kinds of cases that are out there and allow for a more comparative look, as well as some of the analysis, is in many ways where people need to start once they've concluded that there need to be some kind of policy or regulatory measures taken. There has to be recognition that that's step one, not the end of the story. That's really, in many ways, just the beginning of trying to craft things that are both effective but also reflect the sort of values that exist domestically, as well as constitutional norms and all the other policy priorities that you say can be fiddled with, I suppose, with those knobs and dials.
Daphne Keller:
Yeah, well, hopefully we’ll do a good job.
Michael Geist:
Daphne, thanks so much for joining me on the podcast.
Daphne Keller:
Thank you, Michael.
Michael Geist:
That's the Law Bytes podcast for this week. If you have comments, suggestions or other feedback, write to lawbytes@pobox.com. That's lawbytes at pobox dot com. Follow the podcast on Twitter at @lawbytespod, or Michael Geist at @mgeist. You can download the latest episodes from my website at michaelgeist.ca, or subscribe via RSS, at Apple Podcasts, Google, or Spotify. The LawBytes Podcast is produced by Gerardo LeBron Laboy. Music by the Laboy brothers: Gerardo and Jose LeBron Laboy. Credit information for the clips featured in this podcast can be found in the show notes for this episode at michaelgeist.ca. I'm Michael Geist. Thanks for listening and see you next time.