Amending Section 230 to Reform Social Media and Address Political Extremism

April 11, 2023


Insurrectionists assemble outside of the U.S. Capitol on January 6th, 2021

Social media’s hold on society is undeniable. In 2022, Facebook had approximately 300 million new photos posted daily; Twitter saw 6,000 new Tweets every second; popular YouTube channels drew 14 billion views each week; and the messaging app Telegram had over 500 million users. Simultaneously, the United States confronted unprecedented waves of political violence, ranging from the Jan. 6 insurrection to racist mass shootings in grocery stores. The two are closely linked: social media companies exploit users’ confirmation biases by serving information that agrees with their beliefs, regardless of its validity, thereby nurturing polarization, eroding civic trust, and driving some individuals to commit violent crimes. 


Yet platforms like Facebook and Twitter, despite their role in instigating extremism, evade legal responsibility by relying on Section 230 of the Communications Decency Act, which states that social media companies are not liable for the content of their users’ posts. Policymakers on both sides of the aisle advocate for reform but have repeatedly failed to reach a consensus. The question remains: how can Congress meaningfully regulate social media companies in order to lessen their adverse effects on the country’s political environment and overall well-being? While there is no silver bullet, the country cannot afford to stand idle. Implementing pieces of current proposals, whether amending Section 230 to regulate sponsored speech or giving the courts more power to specify when social media platforms have immunity, would be a strong start. 


A Rise in Political Violence


Domestic terrorism is a growing threat to the U.S. From 2014 to 2021, America witnessed a sharp increase in domestic terrorist attacks and fatalities, much of it tied to political demonstrations. Incidents connected to protests began rising modestly in 2011, but the major spike occurred in 2020, when 47 percent of all domestic terrorist attacks involved political demonstrations, a 45 percentage-point increase from the previous year and clear evidence of the correlation between domestic terrorism and political ideology.


Nowadays, political violence, whether related to conspiracy theories or law enforcement, is ubiquitous and seemingly normal in the news. Last year, there were over 9,000 recorded threats against members of Congress and their families, nearly a tenfold increase from 2016. Most notably, in late 2022, an intruder broke into Speaker Nancy Pelosi’s home and attacked her husband, Paul Pelosi, in what the San Francisco District Attorney described as “politically motivated.” More broadly, the percentage of domestic terrorist attacks tied to demonstrations increased again in 2021, to 53 percent. On Feb. 19, 2022, for example, Benjamin Smith opened fire on demonstrators at a protest against police violence in Normandale Park in Portland, Oregon, after becoming enraged at the Black Lives Matter movement, COVID-19 restrictions, and what he saw as an uncontrolled homelessness problem. Smith killed one woman and hospitalized four others with gunshot wounds. Similarly, on May 14, 2022, Payton Gendron opened fire at a grocery store in Buffalo, New York, killing ten people and hospitalizing three others. Investigators found that the “Great Replacement” conspiracy theory, which alleges that white people are intentionally being replaced and has gained popularity on mainstream platforms such as Fox News, was the underlying motivator. Because 11 of the 13 people shot were Black residents of the area, the Department of Justice (DOJ) identified the attack as a hate crime and racially motivated violent extremism. The Global Terrorism Database reveals a new pattern illustrated by these two examples: individuals, rather than organized and hierarchical terrorist groups, now perpetrate the majority of violent incidents. 


Connection to Social Media


The rise in individual actions speaks to a new danger the country faces: the rapid spread of violent ideologies on the Internet via social media platforms. In 2016, sites like Instagram and Facebook played key roles in radicalizing and mobilizing about 90 percent of lone terrorist actors, up from roughly 50 percent of attacks in the prior decade. Users can spread conspiracy theories, militia tactics, and white supremacist ideas on YouTube channels, blogs, Facebook pages, and more. Minimal regulation from governing bodies and social media companies, combined with over 302 million social media users in the U.S. as of 2023, enables the spread of extremist ideologies, creating a new reality in which millions of Americans can undertake, support, or excuse political violence. 


However, propagating violent ideas does not end with the rapid spread of information. Rather, algorithms from corporations like Facebook and Twitter exploit users’ confirmation bias, the tendency to favor information that confirms or strengthens one’s existing beliefs, in order to retain the largest possible number of users and drive profits in the process. Because the public valuation of tech companies is closely tied to their number of users, they will often do whatever they can to maximize engagement. Consequently, in the absence of reliable fact-checking and meaningful content moderation, users become susceptible to spreading misinformation, false content shared unintentionally, as well as disinformation, inaccurate content spread deliberately with the intent to deceive. 


Malicious content creators can spread their ideologies by exploiting both the rapid spread of misinformation on social media platforms and companies’ incentive to keep users engaged with content that reinforces and polarizes their beliefs. Disinformation can move rapidly through platforms and reach tens of thousands of users, yet only a few narratives need to take hold to erode trust in facts and evidentiary standards. Social media companies’ algorithms then take over for the content creators, amplifying the stories that gain the most traction, regardless of their truth, in order to keep users engaged. 


Uncertain political outcomes that depend on public support, such as elections, are uniquely susceptible to both misinformation and disinformation because actors rely on disinformation campaigns to discredit the opposition and secure backing for their side. Often, the parties involved organize sustained, coordinated, and sophisticated efforts to manipulate public sentiment that go well beyond sporadic misinformation posts. The most infamous example in the U.S. was the lead-up to and aftermath of the 2020 presidential election. In late September 2020, before the election, a disinformation story about mail-in ballots framed a photo of empty envelopes from the 2018 election as evidence of voter fraud. Within a single day, more than 25,000 Twitter users shared the false ballot-dumping story, including Donald Trump Jr., who had over 5.7 million followers. Stories like this one strengthened the case for denying the election results and fueled the movement that culminated in the Jan. 6 insurrection.


By amplifying information disorder, social media drives political polarization and pushes individuals to violent extremes. Misinformation campaigns weaken overall societal cohesion and separate individuals into increasingly isolated political and social communities, with few opportunities to encounter counter-narratives or other sources of information. Disinformation campaigns can target group leaders with smear campaigns or false accusations of corruption in order to undermine their credibility. Platforms then allow this dehumanizing and polarizing discourse to spread widely, normalizing the perception of political opponents as untrustworthy sources of information and threats to one’s own values. Continued exposure to this rhetoric fosters a sense of marginalization that isolates users from those they disagree with. Reinforcing this further, disinformation can build a collective identity around false perceptions of persecution, placing fear and grievance at the core of that identity. Organizers may also present one’s group as the authentic defenders of important values, making individuals feel they are engaged in a righteous struggle, with potentially devastating consequences. As a result, there is minimal space for compromise with the opposition, and social media pushes its users to ideological extremes. 


Section 230 of the Communications Decency Act: Social Media Companies Evading the Blame


Social media’s involvement in the rise of political violence and misinformation raises the question of why neither Congress nor any other regulatory body has held these corporations accountable. In short, the answer lies in 26 words of the 1996 Communications Decency Act, which aimed to allow the Internet to develop free of government intervention. Section 230 of the Act gives companies the ability to regulate content on their websites as they see fit and stipulates that they cannot be held responsible for the content of their users’ posts. Social media companies like Facebook fine-tune their own algorithms that rank, recommend, and remove posts with minimal oversight, keeping their users engaged for longer and optimizing revenue in the process. The placement of profit above societal well-being was evident prior to the 2020 elections, when Mark Zuckerberg agreed to adjust Facebook’s algorithm to limit the spread of fake news. However, by the end of November, user numbers had declined, and the company reverted to its prior algorithm. Even as the political environment in the nation’s capital descended into disarray around 2020, Congress and the DOJ could not have charged social media companies under existing law, even if they had wished to. 


Methods for Reform


Now, politicians on both the left and right argue for revising or repealing Section 230. Former Congressman Christopher Cox (R-CA), who co-authored the Section in 1996, recently urged amendments because “the original purpose of this law was to help clean up the Internet, not to facilitate people doing bad things.” Similarly, before the 2020 election, then-candidate Joe Biden declared that it needed to be “revoked, immediately.” Despite the shared desire for change, crafting meaningful regulation that will actually curb the rise in disinformation and violence remains a daunting task with no clear solution. 


Organic vs. Sponsored Speech 


One solution that avoids conflict with the First Amendment’s protection of free speech is to establish separate liability protections for organic and sponsored speech. Corporations would not be responsible for the information users post independently, but the law would hold platforms accountable for the content they promote via paid advertisements. In 2018, MIT researchers found that a new Facebook advertising system that monitors accuracy reduced false information by 75 percent. While this decrease may appear to resolve the issue, the study noted that an immense amount of inaccurate content still circulated on the platform, signifying that regulating sponsored speech is not sufficient on its own. Despite the 2018 effort, the number of comments and shares from false content providers still tripled between the third quarter of 2016 and the third quarter of 2020. 


Moreover, the majority of social media platforms already rely on algorithms to curate user experiences and surface content; as part of regulating promoted disinformation, legislators could remove Section 230 immunity in cases where a company’s algorithm spreads disinformation or content tied to violence. The government would then be able to prosecute companies whose automated systems are linked to acts of terrorism, extremism, and civil rights violations. Former Representative Tom Malinowski introduced legislation aimed at achieving this, the Protecting Americans from Dangerous Algorithms Act of 2020, but it failed to receive a vote. Other legislators have attempted to introduce similar frameworks, but a significant barrier is lawmakers’ inability to fully understand these automated systems. Fearing the loss of a competitive edge, companies refuse to release their algorithms, leaving lawmakers in the dark about how platforms actually function and serve content. Consequently, meaningful regulation of automated systems remains difficult.  


Exemptions for Specific Types of Speech


Section 230 already contains exemptions for certain types of content, including federal crimes like criminal fraud as well as intellectual property violations. The most recent addition was sex trafficking in 2018, and Democrats in Congress hope to add more. In 2021, Senators Mark Warner (D-VA), Mazie Hirono (D-HI), and Amy Klobuchar (D-MN) sought to add claims relating to civil rights, stalking and harassment, antitrust, international human rights, and wrongful death. More recently, other members of Congress have advocated for exempting violence, hate speech, and disinformation. While this type of legislation would directly target the underlying issue with social media, critics argue it would severely infringe on First Amendment rights and favor other solutions. Politicians already argue over which speech the Constitution should protect; conservatives, for instance, have repeatedly criticized social media companies like Twitter for censoring posts about election mis- and disinformation or COVID-19 restrictions. Corporations would therefore struggle to meaningfully distinguish between types of speech, potentially inviting sharp rebuke and constitutional violations. 


Repeal Section 230 and Give Power to the Courts


Repealing Section 230 as a whole would give the courts full power to determine if and when social media companies are responsible for the content on their platforms. In fact, in late February, the Supreme Court heard Gonzalez v. Google LLC, which examined the role Google’s algorithms played in spreading ISIS content connected to the 2015 terrorist attacks in Paris that left 130 people dead, including an American. The case posed the question of whether the company acted as a publisher and speaker through automated algorithms that promoted ISIS-related content. The decision could set a precedent for social media companies, fundamentally alter Section 230, and give the judiciary more power to dictate when corporations must intervene. 


Proponents of repeal argue that because the Internet has developed significantly since the late 1990s, the law now does more harm than good: it protects malicious content creators, enables disinformation to spread, and provides immunity to all companies online, even those not tied to social media. Both Republicans and Democrats have advocated for this approach, but for contrasting reasons. Republicans such as former President Trump believe social media companies unjustifiably censor conservative viewpoints and rely on Section 230 to create a biased Internet that impedes certain individuals’ First Amendment rights while evading legal responsibility. Democrats such as President Biden, on the other hand, believe social media’s insufficient censorship of hate speech and mis- and disinformation justifies repeal. 


Consequently, repealing Section 230 would heavily politicize the courts and any further legislation that moderates social media’s content. Moreover, it could stifle the growth of smaller companies lacking the resources to regulate their content in the same manner as major corporations like Facebook. 


Establish a Threshold for Immunity


Introducing a threshold for immunity based on the size of a platform’s audience would strip larger corporations of Section 230’s protections while maintaining them for companies with smaller user bases. While not eliminating the Section entirely, this proposal would limit the scope of disinformation and malicious content by incentivizing major companies to moderate their content in order to avoid legal liability. Arguments for this proposal are rooted in the fact that the Section states online services “shall not be treated as the publisher or speaker” of content on their platforms. Nowadays, however, companies like Twitter and Facebook act as both publisher and speaker: their algorithms promote certain information and exercise direct control over the content users interact with. Critics believe these companies have evolved far beyond what Congress envisioned of the Internet when it implemented Section 230, and that the law should therefore no longer apply to them. However, this proposal also raises an antitrust concern, as it could cripple competition in two ways: (1) pressuring smaller companies to stay below the threshold, avoiding the cost of complex content-moderation programs at the expense of their own growth, or (2) incentivizing larger companies, which already have moderation systems in place, to acquire smaller ones.  


Mix and Match


Because none of these proposals independently offers a sufficiently concrete solution, lawmakers should aim to implement a combination of them. For instance, establishing guidelines for paid advertisements across social media has already been shown to slow the spread of false information; extending this to automated algorithms, even with limited knowledge of how they operate, could prove beneficial. Similarly, new federal regulations for certain types of speech related to human and civil rights could limit the spread of hate speech online. Nonetheless, Congress must be cautious of the antitrust implications and should tailor regulations to the size of the platform: developing companies with smaller audiences need the freedom to grow before they can compete with larger corporations. Lawmakers will also need to carefully monitor the effects of new regulations and make appropriate changes in a timely manner.


Social media’s hold on society continues to grow each year, while the immunity granted by Section 230 enables platforms to adversely affect the country. The exploitation of confirmation bias, the spread of mis- and disinformation, and weakened societal cohesion have created an online reality in which individuals turn to extremes such as violence. The number of deaths associated with these actions has risen significantly over the past decade, and threats aimed at politicians have skyrocketed. The Supreme Court’s pending decision on Google’s role in spreading ISIS ideologies in 2015 could set a new precedent, but recent political turmoil demonstrates that the country cannot afford to wait before taking further action. Congress must act immediately to establish new regulations and guidelines for social media companies. The absence of a perfect solution and the ongoing evolution of social media’s influence mean that this process will likely involve a period of trial and error, which only reinforces the immediate need for a substantive start. 






The image featured in this article is licensed for reuse under the Creative Commons Attribution-Share Alike 4.0 International license. No changes were made to the original image, which was taken by Tyler Merbler and can be found here.



Elliot Sher

