
The Capitol riot and its aftermath make the case for tech regulation more urgent, but no simpler


Last week and throughout the weekend, technology companies took the historic step of deplatforming the president of the United States in the wake of a riot in which the US Capitol was stormed by a collection of white nationalists, QAnon supporters, and right-wing activists.

The decision to remove Donald Trump, his fundraising and moneymaking apparatus, and a large portion of his supporters from their digital homes because of their incitements to violence at the nation’s Capitol on January 6th and beyond has led a chorus of voices to call for the regulation of the giant tech platforms.

They argue that private companies shouldn’t have the sole power to erase the digital footprint of a sitting president.

But there’s a reason why the legislative hearings in Congress, and the pressure from the president, have not produced any new regulations. And there’s also a reason why — despite all of the protestations from the president and his supporters — no lawsuit has been successfully brought against the platforms for their decisions.

The law, for now, is on their side.

The First Amendment and freedom of speech (for platforms)

Let’s start with the First Amendment. The protections of speech afforded to American citizens under the First Amendment only apply to government efforts to limit speech. While the protection of all speech is assumed as something enshrined in the foundations of American democracy, the founders appear to have only wanted to shield speech from government intrusions.

That position makes sense if you’re a band of patriots trying to ensure that a monarch or dictator can’t abuse government power to silence its citizens or put its thumb on the scale in the marketplace of ideas.

The thing is, that marketplace of ideas is always open, but publishers and platforms have the freedom to decide what they want to sell into it. Ben Franklin would never have published pro-monarchist sentiments on his printing presses, but he would probably have let Thomas Paine have free rein.

So, the First Amendment doesn’t protect an individual’s right to access any platform and say whatever the hell they want. In fact, in many cases it protects businesses from having their own freedom of speech violated by a government that would force them to publish something they don’t want to on their platforms.

Section 230 and platform liability 

BuT WhAt AbOUt SeCTiOn 230, one might ask (and if you do, you’re not alone)?

Canceling conservative speech is hostile to the free speech foundation America was built on.

There is no reason why social media organizations that pick & choose which speech they allow to be protected by the liability protections in 47 US Code Section 230.

230 must be repealed

— Greg Abbott (@GregAbbott_TX) January 10, 2021

Unfortunately for Abbott and others who believe that repealing Section 230 would lead to less suppression of speech by online platforms, they’re wrong.

First, the cancellation of speech by businesses isn’t actually hostile to the foundation America was built on. If a group doesn’t like the way it’s being treated in one outlet, it can try to find another. Essentially, no one can force a newspaper to print their letter to the editor.

Second, users’ speech isn’t what is protected under Section 230; it protects platforms from liability for that speech, which indirectly makes it safe for users to speak freely.

Where things get complicated is in the difference between the letter to an editor in a newspaper and a tweet on Twitter, post on Facebook, or blog on Medium (or WordPress). And this is where U.S. Code Section 230 comes into play.

Right now, Section 230 protects all of these social media companies from legal liability for the stuff that people publish on their platforms — a liability that traditional publishers do face. The gist of the law is that since these companies merely provide a distribution channel for user content rather than authoring it, they can’t be held accountable for what’s in the posts.

The companies argue that they’re exercising their own rights to freedom of speech through the algorithms they’ve developed to highlight certain pieces of information or entertainment, or in removing certain pieces of content. And their broad terms of service agreements also provide legal shields that allow them to act with a large degree of impunity.

Repealing Section 230 would make platforms more restrictive rather than less restrictive about who gets to sell their ideas in the marketplace, because it would open up the tech companies to lawsuits over what they distribute across their platforms.

One of the authors of the legislation, Senator Ron Wyden, thinks repeal is an existential threat to social media companies. “Were Twitter to lose the protections I wrote into law, within 24 hours its potential liabilities would be many multiples of its assets and its stock would be worthless,” Senator Wyden wrote back in 2018. “The same for Facebook and any other social media site. Boards of directors should have taken action long before now against CEOs who refuse to recognize this threat to their business.”

Others believe that increased liability for content would actually be a powerful weapon to bring decorum to online discussions. As Joe Nocera argues in Bloomberg BusinessWeek today:

“… I have come around to an idea that the right has been clamoring for — and which Trump tried unsuccessfully to get Congress to approve just weeks ago. Eliminate Section 230 of the Communications Decency Act of 1996. That is the provision that shields social media companies from legal liability for the content they publish — or, for that matter, block.

The right seems to believe that repealing Section 230 is some kind of deserved punishment for Twitter and Facebook for censoring conservative views. (This accusation doesn’t hold up upon scrutiny, but let’s leave that aside.) In fact, once the social media companies have to assume legal liability — not just for libel, but for inciting violence and so on — they will quickly change their algorithms to block anything remotely problematic. People would still be able to discuss politics, but they wouldn’t be able to hurl anti-Semitic slurs. Presidents and other officials could announce policies, but they wouldn’t be able to spin wild conspiracies.”

Conservatives and liberals clamoring for the removal of Section 230 protections may find that it would restore a level of comity online, but the fringes would be even further marginalized. If you’re a free speech absolutist, that may or may not seem like the best course of action.

What mechanisms can legislators use beyond repealing Section 230? 

Beyond the blunt instrument that is repealing Section 230, legislators could take other steps to mandate that platforms carry speech and continue to do business with certain kinds of people and platforms, however odious their views or users might be.

Many of these steps are outlined in Daphne Keller’s piece for the Hoover Institution, “Who Do You Sue?”

Most of them hinge on some reinterpretation of older laws relating to commerce and the provision of services by utilities, or on the “must-carry” requirements put in place in the early days of 20th-century broadcasting, when radio and television were distributed over airwaves licensed by the federal government.

These older laws involve either designating internet platforms as “essential, unavoidable, and monopolistic services to which customers should be guaranteed access”; or treating the companies like the railroad industry and mandating compulsory access, requiring tech companies to accept all users and not modify any of their online speech.

Other avenues could see lawmakers use variations on the laws designed to limit the power of channel owners to edit the content they carried — including things like the fairness doctrine from the broadcast days or net neutrality laws that are already set to be revisited under the Biden Administration.

Keller notes that the existing body of laws “does not currently support must-carry claims against user-facing platforms like Facebook or YouTube, because Congress emphatically declined to extend it to them in the 1996 Telecommunications Act.”

These protections are distinct from Section 230, but their removal would have similar, dramatic consequences on how social media companies, and tech platforms more broadly, operate.

“[The] massive body of past and current federal communications law would be highly relevant,” Keller wrote. “For one thing, these laws provide the dominant and familiar model for US regulation of speech and communication intermediaries. Any serious proposal to legislate must-carry obligations would draw on this history. For another, and importantly for plaintiffs in today’s cases, these laws have been heavily litigated and are still being litigated today. They provide important precedent for weighing the speech rights of individual users against those of platforms.”

The establishment of some of these “must-carry” mandates for platforms would go a long way toward circumventing or refuting platforms’ First Amendment claims, because courts have already decided cases against cable carriers that could correspond to claims against platforms.

This is already happening, so what could legislation look like?

At this point the hypothetical scenario that Keller sketched out in her essay, where private actors throughout the technical stack have excluded speech (although the legality of the speech is contested), has, in fact, happened.

The question is whether the deplatforming of the president and of services that were spreading potential calls to violence and sedition is a one-off, or a new normal in which tech companies will act increasingly to silence voices that they — or a significant portion of their user base — disagree with.

Lawmakers in Europe, seeing the actions from U.S. companies over the last week, aren’t wasting any time in drafting their own responses and increasing their calls for more regulation.


In Europe, that regulation is coming in the form of the Digital Services Act, which we wrote about at the end of last year.

On the content side, the Commission has chosen to limit the DSA’s regulation to speech that’s illegal (e.g., hate speech, terrorism propaganda, child sexual exploitation, etc.) — rather than trying to directly tackle fuzzier “legal but harmful” content (e.g., disinformation), as it seeks to avoid inflaming concerns about impacts on freedom of expression.

A beefed-up self-regulatory code on disinformation is also coming next year, as part of a wider European Democracy Action Plan. That (voluntary) code sounds like it will be heavily pushed by the Commission as a mitigation measure platforms can put toward fulfilling the DSA’s risk-related compliance requirements.

EU lawmakers also plan on regulating online political ads in time for the next pan-EU elections, under a separate instrument (to be proposed next year), and are continuing to push the Council and European Parliament to adopt a 2018 terrorism content takedown proposal (which will bring specific requirements in that area).

Europe has also put in place rules for very large online platforms with more stringent requirements around how they approach and disseminate content, but regulators on the continent are having a hard time enforcing them.

Keller believes that some of those European regulations could align with thinking about competition and First Amendment rights in the context of access to the “scarce” communication channels — those platforms whose size and scope mean that there are few competitive alternatives.

Two approaches that Keller outlines would require relatively little regulatory lift and might be the most tenable for platforms to pursue. The first would push platforms to make room for “disfavored” speech, but tell them that they don’t have to promote it or give it any ranking.

Under this solution, the platforms would be forced to carry the content, but could limit it. For instance, Facebook would be required to host any posts that don’t break the law, but it doesn’t have to promote them in any way — letting them sink below the stream of constantly updating content that moves across the platform.

“On this model, a platform could maintain editorial control and enforce its Community Guidelines in its curated version, which most users would presumably prefer. But disfavored speakers would not be banished entirely and could be found by other users who prefer an uncurated experience,” Keller writes. “Platforms could rank legal content but not remove it.”

Perhaps the regulation that Keller is most bullish on is one that she calls the “magic APIs” scenario. Similar to the “unbundling” requirements imposed on telecommunications companies, this regulation would force big tech companies to license their hard-to-duplicate resources to new market entrants. In the Facebook or Google context, this would mean requiring the companies to open up access to their user-generated content; other companies could then launch competing services with new user interfaces and content ranking and removal policies, Keller wrote.

“Letting users choose among competing ‘flavors’ of today’s mega-platforms would solve some First Amendment problems by leaving platforms’ own editorial decisions undisturbed,” Keller writes.
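To make the unbundling idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical: no platform exposes a raw-feed API like this today, and the Post type, blocklist, and engagement-based ranking are stand-ins for whatever removal and ranking policies a competing client might adopt.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int

# Hypothetical third-party client in the "magic APIs" scenario: it pulls
# the platform's raw, unranked feed and applies its OWN moderation and
# ranking rules before showing anything to its users.
BLOCKLIST = {"slur1", "slur2"}  # stand-in for a real policy engine

def moderate(posts):
    """Drop posts this client's policy deems unacceptable."""
    return [p for p in posts if not (BLOCKLIST & set(p.text.lower().split()))]

def rank(posts):
    """Order the remaining posts however this client prefers."""
    return sorted(posts, key=lambda p: p.likes, reverse=True)

def build_feed(raw_feed):
    # A rival "flavor" of the same platform is just a different
    # moderate()/rank() pair applied to the same shared content.
    return rank(moderate(raw_feed))

raw = [
    Post("a", "great news today", 3),
    Post("b", "this contains slur1", 99),
    Post("c", "hello world", 10),
]
feed = build_feed(raw)  # post "b" is removed by this client's policy
```

The point of the sketch is that the platform’s editorial decisions stay untouched in its own client, while competitors differentiate purely on their `moderate` and `rank` policies over the same shared content.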

Imperfect solutions are better than none 

It’s clear to speech advocates on both the left and the right that having technology companies control what is and is not permissible on the world’s largest communications platforms is untenable and that better regulation is needed.

When the venture capitalists who have funded these services — and whose politics lean toward the mercenarily libertarian — are calling for some sort of regulatory constraints on the power of the technology platforms they’ve created, it’s clear things have gone too far. Even if the actions of the platforms are entirely justified.

However, in these instances, much of the speech that’s been taken down is clearly illegal, to the point that even free speech services like Parler have deleted posts from their service for inciting violence.

The deplatforming of the president brings up the same points that were raised back in 2017 when Cloudflare, the service that stands out for being more tolerant of despicable speech than nearly any other platform, basically erased the Daily Stormer.

“I know that Nazis are bad, the content [on The Daily Stormer] was so incredibly repulsive, it’s stomach turning how bad it is,” Cloudflare CEO Matthew Prince said at the time. “But I do believe that the best way to battle bad speech is with good speech, I’m skeptical that censorship is the right scheme.

“I’m worried the decision we made with respect to this one particular site is not particularly principled but neither was the decision that most tech companies made with respect to this site or other sites. It’s important that we know there is convention about how we create principles and how contraptions are regulated in the internet tech stack,” Prince continued.

“We didn’t just wake up and make some capricious decision, but we could have and that’s terrifying. The internet is a really important resource for everyone, but there’s a very limited set of companies that control it and there’s such little accountability to us that it really is quite a dangerous thing.”
