Eroding Section 230 Immunity for Platforms

In an as-yet-unpublished article titled "Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security," Bobby Chesney and Danielle Citron argue convincingly that liability for "deep fakes" should fall to content platforms (i.e., Web sites that present information—think YouTube, Google, Facebook, Craigslist, eBay, Twitter, and, I would argue, group messaging apps such as GroupMe or Bubble). In other words, they argue Congress should legislate away the liability shield provided to platforms by Section 230(c)(1) of the Communications Decency Act. This section states, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." 47 U.S.C. § 230(c)(1). Courts have interpreted this provision liberally, giving platforms virtually unlimited immunity from liability for anything posted to their sites, even when those sites have solicited or knowingly hosted content regarding illegal and tortious activities. In fact, the U.S. Court of Appeals for the Second Circuit recently held that Section 230 bars civil terrorism claims against a social media company. Force v. Facebook, Inc., No. 18-397 (2d Cir. 2019).

Chesney and Citron are not alone. In a recent hearing on "Protecting Innocence in a Digital World," Senator Richard Blumenthal called immunity under Section 230 "the elephant in the room" (ironically, that immunity was also the target of many witnesses' complaints). (You can listen to Senator Blumenthal's comments at about 1:40:00.) He called the broad immunity under Section 230 the reason platforms do not take additional steps to eliminate content such as "deep fakes" from their platforms. Following up in agreement, Senator Lindsey Graham stated, "things would change tomorrow if you could get sued." Graham went on to say that platforms should "earn liability protection" through best business practices for protecting children that give a "safe haven from liability," and that Congress should focus on developing those "best business practices." While politicians find easy and agreeable messaging in "protecting children," the discussion about making platforms liable for the content they host should cause the technology community great concern, especially given the increasing risks posed by "deep fakes."

"Deep fakes" are images or videos in which one person's face and voice are superimposed over the face and voice of the person actually appearing in the original footage. The YouTube channel "Ctrl Shift Face" recently released an amazing deep fake of Bill Hader morphing into Tom Cruise and Seth Rogen as he impersonated them in an old clip from The David Letterman Show. I oversimplify to say that generating deep fakes requires a large number of images of the target, which is why famous people are frequently the targets of deep fakes, especially actresses, such as Scarlett Johansson, who has spoken out against deep fake pornography. (As a side note, the technology used to create deep fakes is fascinating.)
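For readers curious about the mechanics, most face-swap deep fakes rest on a simple architectural trick: a single shared encoder learns pose and expression from images of both people, while a separate decoder is trained for each identity; decoding person A's encoded face with person B's decoder renders B's face wearing A's expression. The sketch below is a toy illustration only, with random linear maps standing in for trained neural networks and random vectors standing in for face images; all names are mine, not drawn from any real deep fake tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face" vectors standing in for aligned face images of two people.
# Real tools train on thousands of photos; these are random stand-ins.
faces_a = rng.normal(size=(100, 64))   # person A
faces_b = rng.normal(size=(100, 64))   # person B

dim, latent = 64, 8

# One shared encoder, one decoder per identity -- the core deep fake idea.
# Here each is an untrained random linear map, purely for shape/flow.
encoder   = rng.normal(size=(dim, latent)) / np.sqrt(dim)
decoder_a = rng.normal(size=(latent, dim)) / np.sqrt(latent)
decoder_b = rng.normal(size=(latent, dim)) / np.sqrt(latent)

def swap_to_b(face_of_a):
    """Encode a face of person A, then decode with person B's decoder.

    After real training, the shared encoder captures pose and expression
    while each decoder reproduces its own person's appearance, so this
    cross-decoding step is what produces the "swapped" face.
    """
    latent_code = face_of_a @ encoder      # compress to shared latent space
    return latent_code @ decoder_b         # reconstruct as person B

fake = swap_to_b(faces_a[0])
print(fake.shape)  # a 64-dimensional stand-in for the swapped face image
```

The point of the sketch is the data flow, not the math: the scarcity of training images for ordinary people is exactly why, until recently, deep fakes mostly targeted celebrities.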

The need for a vast number of images has prevented most "normal" people from being victimized by deep fakes. That ended in June 2019, when a developer released an app called "DeepNude." The app allowed a user to upload a photo of a fully dressed person and, in about thirty seconds, render an image of that person nude. The app had numerous flaws, but reports explain that when a photo of a Caucasian woman in a bathing suit was run through the app, the rendered image looked very close to real.

Non-consensual pornography is not the only area of concern when it comes to deep fakes. There are concerns about deep fake videos or images of politicians in compromising positions emerging the day before an election. Any effort to prove the video or image was not authentic would likely be futile, as high-quality deep fakes are very difficult to identify quickly. Or consider a deep fake video or image, released the night before a draft or a contract signing, that shows an athlete ingesting drugs or in a domestic altercation. Such an image could cost an athletic prospect millions of dollars. (Or, perhaps more interesting, a deep fake released by the athlete himself so he could fall in the draft to a better team.)

Given the rapidly changing landscape of technology and a culture showing growing suspicion of "big tech," Congress is likely to feel increasing pressure to strip the blanket liability protection of Section 230 from content platforms. Platforms need to develop strategies that protect them against lawsuits and against increased regulation and enforcement actions by state and federal governments. Businesses need legal counsel experienced in litigation and in regulatory compliance and enforcement to help protect them as their immunity from liability is slowly chipped away.