Staff writer Angela Alberti discusses the limits of free speech and considers the impact of new social media regulations on the topic.
In recent years, hate speech has become increasingly normalised and, in some cases, even condoned. It is an issue unlikely to be squashed during Donald Trump’s second term in office. His presidency has shown that one can use hate speech, face multiple legal battles, and even be accused of sexual assault, yet still hold immense power.
This sets a dangerous precedent: individuals can say or do anything online without facing any consequences, regardless of the harm caused.
It is unsettling to see a world in which the status of an artist like Kanye West remains intact despite his violently antisemitic rhetoric. His hateful posts have now escalated to merchandise. During the Super Bowl, he placed an ad directing people to his website, which currently sells only one item: a white T-shirt featuring a large black swastika.
West has now deactivated his X account after sharing multiple antisemitic posts. Before doing so, he thanked Elon Musk for “allowing [him] to vent,” calling it “cathartic.” His account had been banned from X (formerly Twitter) several times before, only to be reinstated shortly afterward. This suggests that the platform’s actions are more about virtue signalling than genuine condemnation of his words and behaviour.
In 2023, West apologised to the Jewish community after making antisemitic remarks. Today, however, there is no such apology in sight.
Platforms such as X are not holding their users accountable for their actions, and the platforms themselves never seem to answer for the role they play in the rise of hate speech and the real-world violence that too often follows it.
In fact, if history tells us anything, it is that speech can be extremely harmful and lead to dreadful consequences. Some of the worst chapters in history, such as the Holocaust and the Rwandan genocide, were enabled by the degradation of peoples through language. A horrifying number of people were murdered because they were no longer considered human, a dehumanisation that began, and was carried out, through speech.
A much more recent example is the rise of online violence against Asian communities in the United States during the COVID-19 pandemic, which led to brutal attacks on innocent people because of their presumed ethnicity.
Trump and his allies often defend their rhetoric under the banner of free speech. But when speech impedes others’ rights to speak or exist—especially when it incites violence—should it still be protected?
Determining the exact limits of free speech is challenging. However, the overarching boundary should be that no one can be allowed to incite hatred or fuel unfounded discriminatory prejudices that lead to offline violence. Most importantly, clear legal consequences should be in place for anyone who crosses that boundary.
Some countries have taken a firm stance on the issue. In the French-speaking region of Belgium, media outlets do not allow political parties that promote discrimination to appear live on television or radio. Instead, such speech is aired with a delay, allowing journalists to provide context and opposing viewpoints.
However, there is ever less protection against harmful language. Big tech companies such as X want us to believe that hate speech is simply a free speech issue. But it is something else entirely.
First, removing fact-checking policies reduces the need for content moderation, cutting costs for these companies. Second, unreliable news sources and misleading headlines tend to contain more hateful language, often designed to provoke emotional reactions and boost engagement.
In fact, social media platforms are now prioritising traction over safety.
A study has found a positive association between misinformation and hateful language. This type of speech spreads confusion online, undermining experts, empirical evidence and productive discussion, and ultimately leading people to lose their grip on reality.
Free speech is often defended as a pursuit of truth. Yet, in light of this evidence, it seems that the harmful speech these platforms so fervently defend is only taking us further from it.
With social media algorithms, it is now easier than ever to radicalise people through misinformation. These algorithms create echo chambers that ensure users never have to hear an opinion that challenges their belief system. People are thus led to dig themselves so deep into their own biases that it becomes impossible to crawl back out.
This has widened the divide between people, for whom it has become extremely hard to maintain contact with friends or family who hold diverging political beliefs. This lack of communication easily leads to the vilification of others, making the divide harder to bridge every day.
“I consistently meet people who are eager to categorise others as either members of ‘my side’ or the ‘enemy’. I think people experience emotional outrage online, usually from seeing misrepresented or extremist versions of opposing views, and then project this onto those they meet in real life. It’s very disheartening; everyone has unique and valid life experiences which shape their beliefs.”
– Daniel Guzman, KCL student
With the new generation spending most of their time, and getting most of their information, on social media platforms, shouldn’t these platforms be held to a higher standard in order to limit this dangerous division?
Hate speech doesn’t exist in a vacuum. It has real-world consequences, from fuelling division to inspiring acts of violence. The question we must ask is not whether free speech should exist but whether it should remain unchecked when it directly harms others. Until social media companies take meaningful action, hate will continue to spread under the guise of “free expression”—with devastating effects on our society.