WASHINGTON — On Wednesday, Free Press Action Vice President of Policy and General Counsel Matt Wood testified during a House Energy and Commerce Committee hearing on “Holding Big Tech Accountable.”

Wood’s full written testimony is available here.

What follows is his testimony as delivered to the committee:

Regarding “Holding Big Tech Accountable: Targeted Reforms to Tech’s Legal Immunity”
Dec. 1, 2021


Chairmen Doyle and Pallone; Ranking Members Latta and McMorris Rodgers: Thank you for having me back. Chairman Doyle, I must especially thank you, my hometown congressman, for your leadership and kind attention to my input over the years, if this is the last time I have the honor to appear before you.

Today’s hearing proposes holding “big tech” accountable through four bills described as targeted reforms to Section 230. That framing is understandable in light of the testimony you’ve heard from others here about the harms platforms allow or cause.

Free Press Action has not endorsed or opposed any of these bills. We see promising concepts in them — but some cause for concern too. That’s because Section 230 is a foundational and still fully necessary law. It benefits not just tech companies large and small, but the hundreds of millions of people who use their services and share ideas online.

That’s why Congress must strike the right balance, preserving the powerful benefits of this law while considering revisions to better align court outcomes with the statute’s plain text.

Section 230 lowers barriers to people posting their own content, ideas, and expression without needing the pre-clearance platforms would demand if they could be liable for everything users say. This law protects platforms from being treated as publishers of other parties’ information, yet also permits platforms to make content-moderation decisions while retaining that protection.

Section 230 thus encourages the open exchange of ideas, but also takedowns of hateful and harmful material. Without those paired protections, we’d risk losing moderation and risk chilling expression too. That risk is especially high for Black and Brown folks, LGBTQ+ people, immigrants, religious minorities, dissidents, and all ideas targeted for suppression by powerful people willing and able to sue just to silence statements they don’t like.

But as you have heard today, members of those same communities can suffer catastrophic harms online and off from platform conduct too. So it’s not just in the courtroom that marginalized speakers must fear being silenced, harassed, and harmed — it’s in the chat room too, in social media, comment sections, and other interactive apps.

Repealing Section 230 outright is a bad idea and wouldn’t fix all of these problems. We need privacy laws that protect against abusive data practices, and other positive civil-rights protections applied to platforms. Without 230, there might be tort remedies or criminal sanctions in a few cases for the underlying content, but there would be no remedy for amplification if the underlying speech is protected by the First Amendment and not tortious.

Yet while the First Amendment is a check on claims that speech incited another’s violent and wrongful acts, and a constraint on speech torts like defamation too, those torts are not per se unconstitutional.

230’s current text should allow injured parties to hold platforms liable for the platforms’ own conduct, and even for content platforms themselves create when that is actionable too. And courts have let some suits go forward: for platforms posing their own discriminatory questions; layering content over user posts that encouraged users to drive at reckless speeds; or taking part in transactions in ways that go beyond letting third-party sellers post their wares.

But most courts have read the statute far more broadly, starting with Zeran v. AOL, which held that the prohibition on publisher liability precluded distributor liability too, even once a platform has actual knowledge of the unlawful or harmful character of material it distributes. People ranging from Justice Thomas to Professor Jeff Kosseff agree this is not the only plausible reading of 230’s plain text. When new cases call on courts to interpret the statute, decisions like Zeran prevent plaintiffs from testing liability for platforms’ conduct, not just their decision to host others’ content.

That’s why we’re interested in bills like Representative Banks’ H.R. 2000, or the Senate’s bipartisan PACT Act. They’d clarify the meaning of 230’s present text by reversing Zeran, or otherwise allow suits for platform conduct, including continued distribution of harmful content once platforms have actual knowledge of the harm it causes.

While bills like JAMA and PADAA take aim at that same laudable goal of deterring harmful amplification, we are concerned about legislating the technology this way. It could lead to hard questions about definitions and exemptions rather than focusing on providers’ knowledge and liability.

We don’t want to chill amplification that is benign or beneficial, but we also don’t want to prevent accountability when platforms’ actions cause harm even in the absence of personalized recommendations, or outside of carve-outs for important subjects like civil rights. The fact that a platform receives payment for publishing or promoting content could be highly relevant in determining its knowledge and culpability for any distinct harm that distribution causes, but monetizing content or using algorithms should not automatically switch 230 off.

Unfortunately, the SAFE TECH Act tips even further toward the chilling effects risked by any broad change to 230, by putting its protections at risk any time a platform receives any payment at all, and by dropping the liability shield any time a platform is served with a request for injunctive relief.

We look forward to continuing this conversation on these important ideas in your questions today and in the legislative process ahead.
