TL;DR: Freedom of speech is essential, but communication platforms become harmful when misinformation and hate speech are given the same weight as expert insight and empathy. Contrary to modern social media's ethos, not all voices are equal on every topic posted online—formal education, compassion, and real-life experience provide a deeper and more nuanced understanding of a subject than a few minutes of online research and misplaced personal beliefs.
One without the other is harmful.
Freedom of speech without moderation creates nesting grounds for virulent hate speech that benefits no one and hurts everyone. Left unchecked, its ideologies can permeate into the outside world and affect the lives of real people.
Moderation without freedom of speech suppresses the voice of the people, leaving no room for growth and understanding. The power of having a voice gives anyone and everyone the opportunity to be heard, understood, and represented inside and outside of the digital world. Taking that away would reduce the human experience within the digital age to the point that we may as well have never invented the internet in the first place.
Public forums, or social media, have become the modern agora, or "public space".
The internet serves as the primary conduit for communication, where ideas are exchanged, debates are waged, and communities are formed. These platforms have democratized access to information and empowered the average person to voice their opinions and ideas on a scale never before seen.
However, with this level of connectivity comes a new challenge: The Predominance of Hate.
I'm going to argue that moderation of public forums is not just a technical necessity but also a moral and social imperative, essential for maintaining the integrity of discourse, protecting users from real-life harm, and fostering a healthy space for human beings to socialize.
At its core, moderation ensures that these modern "digital agoras" remain spaces of constructive dialogue, rather than devolving into destructive echo chambers.
The situation as it is: Some rhetoric is getting more attention than it deserves, and people are suffering because online discourse leaks out into the world through its users.
Social media is a double-edged sword. While it enables the free flow of ideas, it also provides a platform for misinformation, hate speech, and recruitment into behaviors that damage others or encourage self-harm. It has created, and will continue to create, monsters out of ordinary people.
Incels - self-proclaimed 'involuntarily celibate' men - are a perfect example of what access to unmoderated digital spaces does to the impressionable. These men learn to hate themselves to a degree that becomes unsalvageable, and are quick to prey on other vulnerable men and convert them into similarly miserable people.
To have an expressive outlet for this internal loathing, they encourage each other to embrace behaviors and political stances that are damaging to women and to themselves. Anyone who is familiar with the reality of incel culture and its participants knows that these men are hateful, intolerant, nihilistic, and frequently dangerous people. Because of the accessibility of unmoderated spaces, there has even emerged a collection of influencers, grifters, websites, YouTubers, blogs, and online forums colloquially known as 'the Manosphere', where the promotion of toxic masculinity, misogyny, and deeply ingrained self-hatred drives this group of men toward tearing down women's rights and destroying their safety.
The misogynistic cult of the Manosphere - like many other racist, misogynistic, or fascist online groups - is possible in our current digital ecology because, without moderation, social media spaces can quickly become grounds for hostility, where the loudest and most aggressive voices dominate, silencing anyone with anything else to say.
We have to recognize that freedom of speech doesn't justify or protect hate speech; it requires us to reckon with what is being said in ways other than violence. Then, if, after people have reckoned with it as required of them, the speech is judged to be opposed to humane values and modern ethics (for example, someone advocating for stripping away the rights of women), the person who exercised that speech can and should face consequences.
In order to combat the further creation of radical hate-culture groups, the removal of the propagators of hate (the loud accounts shaming, berating, or slandering the marginalized) should be viewed as necessary, and performed in a professional and holistic manner akin to pruning. By removing such content creators, and addressing disruptive online behaviors, we can create an environment that benefits everyone, where diverse voices can coexist and engage in healthy exchanges.
Moderation is crucial for preventing the dissemination of misinformation.
A pervasive issue in the information age is that everyone's parents, grandparents, and impressionable younger siblings believe the video an influencer posted about the current president or some foreign political figure is true.
"A lie can run around the world before the truth has got its boots on" - Terry Pratchett.
Misinformation spreads at unimaginable speed. The entire world can believe something tomorrow that is totally, verifiably, and demonstrably untrue.
Falsity that spreads this quickly cannot be counteracted effectively - millions of people may have already interacted with a post spreading an idea that simply isn't grounded in reality by the time someone who is actually knowledgeable about the topic can even respond to it.
What that means is that information, true or not, is being published too fast for people to have time to determine its validity. Fact-checking and healthy discourse alone cannot combat the sheer volume of misinformation online - taking the time to communicate the truth to every social media user is too slow to stem the never-ending typhoon of misinformation.
By the time the truth has been discovered, verified, and produced into a video or article online, the public has likely already moved on to the next sensationalized controversy. The information age is, by its very nature, recklessly and unreliably informative.
Because of this reality, moderators should play a vital role in identifying and addressing such content, or terminating accounts that repeatedly post misinformation. Platforms could maintain exhaustive lists of news sources that have been studied and determined to be unreliable, automatically flag that content with a warning to other users that it is published by an unreliable source, or simply not allow it to be posted at all, and flag or ban the users who try to post it.
(NOTE: It would be an ethical practice to also provide a public record of the flagged or banned accounts, with a collection of publicly available evidence. AKA "the reason for the ban" should be public information. Also, users should be able to make public appeals against their bans.)
As the speed and volume of posts is an issue in and of itself, instead of forcing the responsibility of fact-checking and rebutting onto a platform's users, it is more equitable to identify accounts that are constant violators and stop the flow of misinformation.
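To make the idea concrete, here is a minimal sketch in Python of what such a pipeline could look like. The BLOCKLISTED_DOMAINS set, the strike limit, and the review_post function are all hypothetical illustrations for this post, not any real platform's implementation:

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical blocklist of domains that have been vetted and judged unreliable.
BLOCKLISTED_DOMAINS = {"fabricated-news.example", "totally-real-facts.example"}

STRIKE_LIMIT = 3  # arbitrary number of strikes before an account is banned

strikes = defaultdict(int)   # account id -> misinformation strikes so far
ban_log = {}                 # account id -> public reason for the ban

def review_post(account_id: str, link_url: str) -> str:
    """Flag posts linking to a blocklisted source; ban repeat offenders."""
    domain = urlparse(link_url).netloc.lower()
    if domain not in BLOCKLISTED_DOMAINS:
        return "allowed"
    strikes[account_id] += 1
    if strikes[account_id] >= STRIKE_LIMIT:
        # Per the note above: "the reason for the ban" is public information.
        ban_log[account_id] = f"repeatedly linked to unreliable source: {domain}"
        return "banned"
    return "flagged: published by a source judged unreliable"

print(review_post("user42", "https://fabricated-news.example/shocking-story"))
# -> flagged: published by a source judged unreliable
```

The point of the sketch is that none of this requires a human to chase down every post: the expensive, careful work goes into building the source list, and enforcement follows automatically.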
We can all surely agree that it is obvious "AryanKingTheJewKiller1488" should be prevented from posting illicit pictures of women with captions objectifying them and encouraging others to find their own women to harass, or constantly talking about how Jews secretly run the world, or how "rape isn't bad when an aryan man is doing it", or posting articles quoting fabricated crime statistics designed to make minorities seem inherently violent.
The 'digital death penalty' (banning an account) should be re-evaluated as a punishment, and come to be viewed as a necessary means of keeping any digital platform healthy and free of people trying to make others hate each other, or believe in things that are not true.
We owe it to ourselves and each other to make compassion and truth the foundation of our online experience.
How can moderation be performed in the information age? There are millions of accounts!
Community guidelines are a beautiful example of how to cultivate a healthy online space. However, most places on the internet have lackluster or weak guidelines because they are afraid of scaring away potential users. Or they have well-made guidelines but do not enforce them because of how disruptive account-banning can feel to existing users.
Or, more realistically, if people are arguing loudly on your platform, it will attract a crowd, and therefore more users will log into your website. That specific style of generating engagement is called rage-baiting: controversial content is permitted on a platform because people will go online to argue with it. Needless to say, it's bad practice to let a Nazi or a rapist preach on your website just because it will make people angry enough to leave a comment. Ad revenue shouldn't be worth letting hate propagate on any website.
Now, with the negative consequences of social media being felt by billions around the world, it should be agreeable that "Do not post the address of someone you don't like", "Do not advocate for the removal of someone's rights or for their murder", and "Do not post articles from a news source that has been repeatedly proven to fabricate reports" are reasonable requests, and that you should definitely be flagged or banned if you break those terms.
You made the account by clicking 'Accept'. The guidelines should be presented to you, and you should read them, because you are agreeing to respect them in order to make the account itself. After that, it should be the platform's obligation to enforce the guidelines if you break them. Making an account should be a transactional agreement between a person and a platform, based on respect for themselves and the others who will use it.
But the information age comes with great technical tools! Automated banning of accounts that break the guidelines should be practiced. Users should be able to report any content or account they think breaks the guidelines, perhaps even giving a reason why they think it does, and an auto-moderator should interpret that data and validate whether it breaks the guidelines or not.
Moderation of hundreds of millions of posts can be achieved through the collaborative efforts of users and automated moderation systems: if you see something, say something, and a moderator will sort it out from there.
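As a rough illustration of that report-and-triage loop, here is a minimal sketch in Python. The Report structure, the violates_guidelines classifier, and every other name below are invented for the example, not taken from any real platform:

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Report:
    post_id: str
    reporter_id: str
    reason: str  # the reporter's own explanation, e.g. "doxxing"

report_queue = Queue()  # reports waiting for the auto-moderator

def violates_guidelines(report: Report) -> bool:
    """Stand-in for an automated classifier. A real system would inspect
    the reported post's actual content, not just the stated reason."""
    return report.reason in {"doxxing", "incitement", "fabricated source"}

def submit_report(post_id: str, reporter_id: str, reason: str) -> None:
    # "If you see something, say something."
    report_queue.put(Report(post_id, reporter_id, reason))

def triage() -> None:
    # "...and a moderator will sort it out from there."
    while not report_queue.empty():
        report = report_queue.get()
        if violates_guidelines(report):
            print(f"flagging {report.post_id} for review: {report.reason}")
        else:
            print(f"dismissing report on {report.post_id}")

submit_report("post-901", "user42", "doxxing")
triage()  # -> flagging post-901 for review: doxxing
```

In practice the auto-moderator would be far more involved than a keyword check, but the shape is the same: users supply the reports, and the system does the sorting.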
It can be a measurable good. It can improve lives.
Another critical aspect of moderation is the protection of users, particularly vulnerable populations, from harm. The anonymity afforded by the internet often emboldens individuals to engage in cyberbullying, harassment, and other forms of abusive behavior. For many, especially young people and members of marginalized communities, such experiences can have devastating psychological or real-world effects. Moderators act as a first line of defense, intervening to prevent harm and enforcing policies that prioritize user safety. By creating a culture of accountability, moderation helps to deter harmful behavior and ensures that public forums remain inclusive and welcoming spaces for all.
Healthy moderation ensures a community will last longer.
Moderation contributes to the long-term sustainability of online communities. Unmoderated forums have always fallen victim to spam, trolling, and incitement of violence, which alienates users and diminishes the quality of everyone's experience. Moderation helps to maintain the focus and relevance of these spaces, ensuring that they continue to serve their intended purpose: connecting people. This is particularly important for niche communities, where shared interests and values form the foundation of interaction. By fostering a sense of order and purpose, moderators enable these communities to thrive and evolve over time. As much fun as it is to yell at each other online all day, we could be doing better things, like teaching each other how to cook, or discussing a book you don't think enough people have read.
People with healthy communities, in life and online, live better lives. Connection to others is a human need, and the internet is a perfect place to fulfill it. We should treat these digital places with respect, and their users with dignity.
But what about my free speech?
You still have it. You can still quote Adolf Hitler, post articles about how the government is putting microchips in vaccines, and tell everyone you know that all foreign countries are trying to destroy America with terrorists and immigrants.
But you will have fewer places to do it. Hopefully the only place you will be able to do it is in public. And hopefully someone around you tells you that you are a hateful idiot.
Oversight of a public place is not inherently suppression of free speech (especially if you agreed to its guidelines).
Mutual responsibility.
That being said, it is also our responsibility to push back on a platform if we, the users, think it is overstepping and committing censorship that is harmful to us.
If you are being silenced for speaking on issues that are hurting people in the real world, it's time to fight for the balance between Free Speech and Moderation.
I hope this was at least a little bit interesting to someone. Thanks for reading.