Retweets are not endorsements? The debate around the US ‘Section 230’ law
By Bernardo Amaro Monteiro, MA Middle Eastern Studies and Intensive Language
‘Retweets are not endorsements’ is a piece of Twitter jargon that has been legally redundant since 1996, ten years before the platform was even created in 2006. Now, this might be about to change, and Daesh (the self-proclaimed Islamic State) has played a crucial role in it.
The US Supreme Court is currently questioning the responsibility of social media platforms for hosting and recommending harmful speech. Section 230 of the Communications Decency Act of 1996 shields internet platforms from legal liability for the content on their services, or as the law puts it, ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.’ This is the logic that has powered the internet, from search engines to social media platforms, algorithms and machine learning software. It is not hard to understand, but much harder to put into practice, particularly when tech-savvy terrorists use it to their own advantage. Recruitment and the dissemination of ideology through online marketing campaigns were key to Daesh’s success. As a result, the Supreme Court is now trying to strike a balance between protecting free speech and preventing the spread of dangerous ideologies online.
‘Twitter, Inc. v. Taamneh’ and ‘Gonzalez v. Google LLC’ are two cases that are rocking the foundations of the Internet. Section 230 has been the ‘one solution to all problems’ law for defending big tech firms’ immunity from the consequences their products create. The case against Twitter, however, is based on the allegation that the platform provides a key service to terrorist organizations, meaning that Twitter has been aware of how terrorists use the platform and has failed to kick them off. The accusation against Google, meanwhile, claims that YouTube’s video recommendation algorithm promotes terrorist content, implying that the platform endorses this sort of discourse. The cases are being heard together, as the outcome of one affects the other. The accusations came after Daesh’s attacks in Istanbul in 2017 and Paris in 2015, and they assign indirect responsibility to Twitter and Google under US antiterrorism law.
What is at stake in the trials?
Judges are trying to understand how social media works and how algorithms help users find the content and conversations they are looking for. Automated tools have proved efficient at filtering unimaginable quantities of data and are essential to how we use the Internet. Despite these advantages, the internet’s underbelly also contains undesirable content used for exploitation and criminality. The Supreme Court is revisiting Section 230 because it has come to recognise the need for laws as advanced as the technologies we use. This process will require the court to decide whether the service providers’ immunity helps establish equal footing for everyone to raise their voice in a free environment, or whether its main effect is the creation of misinformation and hate speech bubbles.
Another element the court is considering is the behaviour of algorithms and how they interact with human social behaviour. The Cambridge Analytica scandal, for example, exposed how targeted advertisements could polarize opinions and shape election results. Acknowledging that the Trump campaign profited from Facebook’s recommendation algorithm also means recognising that the algorithm can efficiently surveil and manipulate users by promoting certain discourses and excluding others. It is very hard to know how such an algorithm works, and whether there is truly a difference between one that recommends what it thinks we want to see and one that tells us how to think. This is an important question to ask, since it is here that the difference lies between a tool for efficiency and a tool for mass control.
The scale of human interaction on the internet brings additional complexity to the decision-making process. Proving that viewing harmful content leads to an increase in violent activity requires investigating an abstract level of human social relations. Both cases will have to argue that there is a provable pattern linking the people who saw certain content and those among them who went on to carry out an attack. If there is, service providers will become legally liable for hosting and promoting harmful speech. Making this case will require agreement on who gets to define ‘harmful speech’, and that won’t come free of complications. Until now, Section 230 has allowed service providers to decide independently, through their terms of service, what speech they allow. A legal outcome that establishes the boundaries of acceptable speech is, indeed, a conversation about free speech.
What about amending Section 230?
If the Supreme Court rules against Big Brother, service providers will have to come up with much tighter policies for how we use social media. On the one hand, this could mean that users start choosing platforms according to their content moderation policies, leading to a more diverse range of platforms. It is more likely, however, that the need for regulation will empower the platforms’ legal departments, as they will have the first say in establishing the rules for moderating content. The big platforms will evidently try to stay out of the courtroom, but they are much better placed to contest cases, while smaller platforms may find it hard to defend themselves.
A much scarier consequence of changing Section 230 is that retweets may indeed become endorsements, changing the way we interact online.
Photo Credit: Lex Villena and Arturaliev.