Better Online Living through Content Moderation
Anti-content control rhetoric supplants widely available psychological and sociological facts with misinformed opinions that are not only insufficient for helping others manage their own mental states, but offer wholly inadequate solutions for online abuse.
Content control features — block and ignore functions, content/trigger warnings, blocklists and privacy options — are valuable to people who need to moderate their time online. Some users may suffer from PTSD and need to avoid topics and people that trigger their anxiety. Others may simply understand the limits of their patience, and choose to make their online experience less irritating. These are all valid reasons for employing content control tools. In fact, there is no such thing as an invalid reason: nobody should be required to read or listen to content if they do not want to.
Yet users of those tools face constant cultural opposition, often maligned as “weak” and “too sensitive.” By criticizing the mere use of optional social moderation tools, opponents are creating a culture that pressures people to expose themselves to experiences far more catastrophic than they can handle. Somehow, it becomes entirely the victim’s problem when they are attacked online, no matter the situation, and they should “just deal with it.” This advice is generalized to the point of uselessness, because not every “disagreement” is a simple difference of opinion: there are online aggressors that genuinely provoke anxiety attacks, or subject people to threats of violence. Content control is helpful in limiting the worst of these attacks, which themselves can cause PTSD if severe or long-term enough. While content control features are not guaranteed to stop the effects of abuse, they do help, and their use should not be disparaged or discouraged.
Computer-Chair Psychology
Photo CC-BY GorissenM, filtered.
One of the major arguments against content control is that people blow the abuse and harassment they receive out of proportion; they should just try being “less sensitive.” These arguments draw an informal parallel to Exposure Therapy – a type of therapy designed to combat severe anxiety through gradual and controlled exposure to its source, to inure an individual to these triggers and lessen the disruptions they can cause. The misapplication of this concept to content control discussions represents a misunderstanding of human psychology: Exposure Therapy is not about having random internet strangers hurl insults and threats at someone in the hope that they somehow come out more mentally durable. Without controlled exposure, someone suffering from PTSD is likely to have their trauma magnified rather than reduced when faced with triggering content.
The struggle to protect the psyches of vulnerable individuals is not limited to online interactions; it runs up against a broader pushback in Western culture against “sensitivity” and “political correctness.” Despite such rhetoric, there is evidence that younger generations may actually be more open to difficult, complex and emotional content. In an article examining generational pushback against content warnings in university settings, Maddy Myers postulates:
“Millennials are not afraid of these conversations. Quite the opposite, in my experience. The reason why trigger warnings and content warnings have become a mainstay in progressive blogging spaces is because young people have finally begun to acknowledge how many of us have dealt with trauma and violence, and have craved places to discuss how our stories get depicted in media.”
Arguments against content control also rely on the myth that online harassment is merely mean words said on the internet, with no real threat to the safety of targets or their families. The idea is that there is no way harassment can cause PTSD: according to popular culture, this is something only veterans suffer. This misunderstanding is predicated on the same ignorance that yields faulty analogies to exposure therapy: the fact is, threats of violence online can be a cause of PTSD in and of themselves. As noted by Caleb Lack, a licensed clinical psychologist and psychology professor who specializes in treating anxiety disorders:
“Bullying has long been known to have a severe impact on mental health, particularly if the bullying is repeated and prolonged… So, given what we know about PTSD, and given what we know about the effects of bullying (cyber and otherwise) on mental health, I think it’s relatively safe to say that ‘Yes, you can “get” PTSD from Twitter.’ One needs to be careful, though, to be specific about this: it’s the bullying and harassment that could lead to PTSD or PTSD symptoms (as well as depression, increased suicidality, and so on), not anything inherent to Twitter itself.”
The fact is, long-term exposure to threatening situations, such as online harassment, is one of the major causes of PTSD. It is particularly shameful that these misconceptions about content control and trauma exist, when it takes so little web research from credible sources to broaden one’s knowledge of PTSD.
Threatening Legal Recourse
Photo CC-BY David Goehring, filtered.
Blocklists are one of the more recent content control tools to come into prominence, rising in direct response to hate groups such as Gamergate, TERFs, MRA/PUAs, and white supremacists. The use of blocklists has inspired vehement objection and reproach of their users; in fact, one of the more direct attempts to fight blocklists comes from legal action. Some people argue that they are being defamed for statements and opinions that they did not make when they are added to mass blocklists, such as the Good Game Auto Blocker by Randi Harper. However, claims of defamation do not really hold up when the blocklist makes its filtering methodology and appeal process clear. Randi Harper has explicitly stated how one is flagged for blocking, and is absolutely correct in her claim that it blocks the majority of interactions with Gamergate on Twitter. Further, to use a blocklist, someone has to opt into it – which means finding one that is suitable for their needs, subscribing to it, and often installing a small app, as the sketch below illustrates.
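To make those opt-in mechanics concrete, here is a minimal sketch of how a third-party blocklist subscription might work: the subscriber fetches a curated list of account IDs and filters their own incoming mentions against it. Everything here – the function names, the JSON format, the placeholder URL – is an assumption for illustration, not the actual workings of Good Game Auto Blocker or any real tool.

```python
# A minimal, hypothetical sketch of an opt-in blocklist subscription.
# The function names, JSON format, and URL are illustrative assumptions,
# not the implementation of Good Game Auto Blocker or any real service.

import json
from urllib.request import urlopen

def fetch_blocklist(url):
    """Download a subscribed blocklist, assumed to be a JSON array of user IDs."""
    with urlopen(url) as resp:
        return set(json.load(resp))

def filter_mentions(mentions, blocked):
    """Hide mentions authored by anyone on the subscribed list.

    The filtering is one-sided: blocked users can still post publicly;
    the subscriber simply never sees those posts in their own feed.
    """
    return [m for m in mentions if m["author_id"] not in blocked]

# A subscriber opts in by choosing a list that fits their needs, e.g.:
#   blocked = fetch_blocklist("https://example.com/some-blocklist.json")
blocked = {"harasser_1", "harasser_2"}  # stand-in for a fetched list
mentions = [
    {"author_id": "harasser_1", "text": "a threatening message"},
    {"author_id": "friend_42", "text": "hello!"},
]
print(filter_mentions(mentions, blocked))  # only friend_42's mention remains
```

The key design point is that the filtering happens entirely on the subscriber’s side: it changes only what the subscriber sees, not what anyone else can say or read.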
The most compelling arguments against blocklists come from people who do not harass or threaten others, yet somehow see any defense by targets of harassment as equivalent to the harassment itself. It is a “middle of the road” opinion that paints both sides as unreasonable, and instead suggests that a “dialogue” is in order. In this view, blocklists are bad because they mean subjecting one’s internet experience to the whims of another.
What these points of view fail to recognize, however, is how vicious and pervasive online harassment can be. Gamergate, for example, is notorious for doing everything in its power to threaten people into silence – from calling and threatening family members, to posting pictures of their targets’ homes and addresses online. Asking that victims not take steps to protect themselves with content control tools is nothing short of demanding that the abused spend more time with their abusers. Such demands conflate the response to abuse (i.e., blocking abusive users) with the abuse itself – completely failing to differentiate between the aggressor and their targets.
Towards More Personal Agency Over Online Experiences
Photo CC-BY Donnie Nunley, filtered.
Siccing a hate mob on someone, threatening them and their families with physical violence, or stalking them: these are all intimidation tactics. They are designed to silence people – and are clearly illegal. Blocking the people who attempt to do these things is one way to dampen the assault.
This abuse is not rare. Women especially are considered fair game for these types of attacks, particularly women who tread in areas that are considered ‘male-dominated,’ like the tech industry or video game culture. Visibly challenging sexism in these areas brings about severe and chronic abuse that can easily cause PTSD. Online abuse that specifically targets women is so well-documented and pervasive that the UN recently hosted an event where women including Zoe Quinn (video game developer, and co-founder of Crash Override) and Anita Sarkeesian (creator of the YouTube channel “Feminist Frequency” and the video series “Tropes vs Women in Video Games”) were able to share details of the abuse that they have personally received.
With such an abundance of evidence freely available, women should not have to repeat these stories ad nauseam, and nobody should question the utility that blocking and content control tools can offer. Yet criticism of these tools, and of the people who use them, remains raucous. Perhaps only a relative few are willing to acknowledge how awful it can get or, more likely, the loudest voices on these issues belong to those who are not the regular targets of digital abuse. Either way, giving others the power to personally moderate the worst of the internet in no way violates anyone else’s rights, and is often the best option victims have.
When common tools like trigger or content warnings, blocklists and “muting” features are casually disparaged, it demonstrates a lack of empathy for people who suffer from psychological trauma and need a method of defense other than disappearing from the internet. No attempted legal recourse or pseudo-scientific judgement changes the fact that block functionality protects a vulnerable group of people. Blocking, even with a list curated by a third party, is not a silencing tactic; refusing to listen to someone is not the same as silencing them. It is neither defamation, nor taking the coward’s way out. It is someone making a decision based on very personal reasons. People should be allowed to set their own personal boundaries, and disregarding those boundaries should be seen as disrespectful at best. Yet privileged folk continue to use their own experience of relative safety online as an excuse for shaming others into situations that can cause them extreme emotional distress.
Ultimately, easy one-size-fits-all solutions ignore the diversity of human psyches and experiences. Content control tools take this fact into account, and give people more room to act on behalf of their own mental and emotional needs. Not everyone is able to employ the old adage of “don’t feed the trolls,” and sometimes the “trolling” (abuse and terrorism) is itself impossible to ignore. On a personal level, nobody has a responsibility to weather outright harassment, and everyone should be allowed and encouraged to mitigate what they cannot handle. Telling people otherwise is complicity in a system of violence against marginalized people: anti-content control rhetoric supplants widely available psychological and sociological facts with misinformed opinions that are not only insufficient for helping others manage their own mental states, but offer wholly inadequate solutions for increasingly pervasive and harmful patterns of online abuse.