The useful idiot syndrome is amplifying online hate.
In this blog post, you’ll get a deeper insight into a strange social media phenomenon that turns us all into useful idiots.
By pointing fingers at a specific instance of bad behaviour, we amplify the perceived momentum of that behaviour instead, even if the behaviour never actually existed in the first place.
Let’s dive right into it:
Like Flies on Dirt, We Gravitate Towards Conflict
I’ve cultivated something of an online pastime for the last couple of years.
Whenever I see something in the feed that I reckon could be perceived as offensive to some random cultural identity group, I head straight for the comments.
While strangers arguing in the comment section might be entertaining, there’s a rather strange communicative phenomenon that often occurs.
For instance, in one of my social feeds, I might find a Gary Larson comic strip that depicts God in front of a bunch of animals while declaring, “Well, now I guess I’d better make some things to eat you guys.” 1
Aha! A joke that points out one of many logical flaws of religious texts. That ought to attract some religious fundamentalists. “Let’s see if some wacky creationists are going to town in the comment section,” I suggest to myself.
Whenever I get the impulse to dive into a public comment section looking for people fighting each other, I get two distinct notions:
One is the notion that this behaviour probably isn’t very productive.
The other is the notion that I’m not the only one heading for the comment section to enjoy some expected mayhem.
When “Good” Samaritans Are Crying Wolf
Whenever I dive into a comment section to entertain myself by enjoying people stating just how offended they are, I’m typically able to find a few such comments. But rarely as many as I would’ve predicted.
Sometimes, and not counting obvious bot- or troll accounts, I have to scroll through hundreds of comments to find one single commenter who seems to be genuinely offended.
In a sense, this could indicate something positive about social media. Maybe the popular concept of “everyone on the internet is offended by everything on the internet” is vastly overestimated and blown out of proportion?
Still, the comments made by people who have taken offence, however many or however few, aren’t what interests me here.
What truly interests me is that I’m finding droves of comments from two types of people:
1. Online Meta Samaritans. The first group complains about those who are taking offence. “People who are offended by this content have no right whatsoever to be offended by this content,” they say. This group won’t hesitate to express harsh opinions even when there isn’t a single comment from anyone who is genuinely offended.
2. Online Double-Meta Samaritans. The second group complains about those who complain about those who are taking offence. And this group is typically just as nasty and hateful as the first group.
Now, we have an online fight on our hands. But for what reason?
Stirring Up Online Hate on a Cold Brew of Nothingness
Sure, some meta samaritans are just politely pointing out that taking offence might be an overreaction, but there’s typically a disproportionate amount of unwarranted ridicule, ad hominem attacks, and plain hate.
And sure, some double-meta samaritans are just politely pointing out that taking offence isn’t an overreaction. Still, their ridicule, ad hominem attacks, and plain hate are just as disproportionate.
Is all this hate between hundreds or thousands of commenters warranted when there are only a few comments (and sometimes none) made by people genuinely taking offence?
I often find posts with hundreds of comments from angry mobs who are furiously fighting each other over claims never stated by anyone in the first place.
I’ve even seen numerous content creators being forced to publicly delete their content and apologise despite no evidence of anyone who took actual offence.
Why the Useful Idiot Syndrome is a Force Majeure
At face value, the useful idiot syndrome seems intrinsic to human nature. When we feel at odds with the world, we tend to overcompensate.
Overcompensating by signalling virtue might be a result of feeling that our personal morals are under attack. But instead of making the world a better place, we stir up more online hate, not less.
And some might be actively seeking to pick a fight because it’s socially safe. The useful idiot syndrome might be a psychological version of the Bandwagon Effect:
A rather significant percentage of people who comment on posts made by people or organisations they don’t know personally get triggered merely by seeing a position they believe is offensive to a cultural identity group they stereotype as overly sensitive or morally deplorable.
Being triggered, meta samaritans preemptively rush to the comments to aggressively condemn the expected behaviour of that identity group, often without seeing any actual such reactions from other people.
It could be non-Christians expressing their hate against Creationists for not having any sense of humour, or angry males attacking feminists for being vengeful and mean; it could be almost anything related to identity politics.
Consequences of the Useful Idiot Syndrome
The useful idiot syndrome, if it is indeed a natural phenomenon, can have serious consequences. It could be a social media post linking to a news story about the first person born in Africa to win a gold medal in a Scandinavian winter sport. While there might not be many actual racist comments to be found, there might be hundreds and hundreds of words brimming with hate aimed at racist comments the commenters have only imagined. Then, in the next news cycle, the story transforms into how the gold medalist’s accomplishment resulted in racist attacks.
Aside from partly ruining a triumphant moment for the athlete in the above scenario, a media situation is manufactured where, in this case, real-life racists might feel empowered by a disproportionate amount of attention that sits way above their numerical significance in society. In conjunction with the conversion theory, cultural groups could effectively be pitted against each other, drenched in hatred, without that hatred being accurately represented.
As this moral-war animosity potentially sparks higher engagement, it becomes a compelling proposition for news organisations and social media algorithms to favour news stories that fuel this phenomenon.
Also, the useful idiot syndrome might result in fertile breeding grounds for targeted attacks perpetrated by destabilising interests using various destructive social engineering tactics.
There’s a risk that many of us, at least those of us who are actively commenting and engaging with people outside our circles, are acting like accelerants for polarisation — despite good intentions. By overcompensating to signal our moral value, we might be acting like useful idiots for those who don’t support our side in the righteous war.
Still, this is only an anecdotal observation at this point.2 I cannot stress that enough. I could be wrong for many reasons, and we need academic studies to determine whether or not this is an actual phenomenon. The good news is that it should be a testable hypothesis, I believe.
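To make that testability claim concrete, here is a minimal sketch in Python of the kind of measurement a study might formalise. Everything here is a made-up illustration: the keyword rules, function names, and example comments are hypothetical stand-ins for what would, in a real study, be trained human annotators coding a random sample of comment sections.

```python
# Toy sketch of how the hypothesis could be quantified.
# All labels, keyword rules, and thresholds are hypothetical illustrations.
from collections import Counter

def classify(comment: str) -> str:
    """Naively bucket a comment; a real study would use human coders."""
    text = comment.lower()
    if "offended people are" in text or "stop whining" in text:
        return "meta"          # complains about the (expected) offended
    if "stop attacking" in text or "let people be offended" in text:
        return "double-meta"   # complains about the complainers
    if "this offends me" in text or "disrespectful" in text:
        return "offended"      # genuinely offended
    return "other"

def amplification_ratio(comments) -> float:
    """Ratio of meta-level comments to genuinely offended ones.

    The hypothesis predicts this ratio is high: lots of fighting
    about offence, very little actual offence.
    """
    counts = Counter(classify(c) for c in comments)
    meta_total = counts["meta"] + counts["double-meta"]
    return meta_total / max(counts["offended"], 1)

comments = [
    "This offends me deeply.",
    "Stop whining, nobody is actually hurt by a cartoon.",
    "Offended people are ruining the internet.",
    "Stop attacking people for having feelings.",
    "Let people be offended if they want.",
]
print(amplification_ratio(comments))  # 4 meta-level comments vs 1 offended -> 4.0
```

The point isn’t the toy classifier, but the shape of the claim: the syndrome exists if, across many randomly sampled comment sections, the meta-level volume consistently dwarfs the genuine offence it claims to respond to.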
Update: A few weeks after this post was published, a telling event took place in Sweden: the police issued a public warning to Swedish parents to watch out for a specific TikTok challenge in which boys are encouraged to sexually assault girls and share the video on TikTok. 3
This then turned into a national news item and a vivid social media discussion, and many Swedish schools sent out a warning to parents urging them to discuss this matter with their children.
Was there ever any such challenge? Even if there wasn’t one to begin with, the useful idiot syndrome spawns opportunities for people to post such challenges just to provoke further discussion.
And, as a result, we scare young children using fake news and frame young boys as sexual predators — actions that might be orchestrated by reactionary agendas operating in the shadows.
1. I love Gary Larson, so the scenario is entirely plausible; the algorithms indeed seem to have figured this out already.
2. The various comment sections I encounter aren’t randomly selected, since the algorithms choose them for me. By engaging with a specific type of discussion, I might be reinforcing a systemic bias. Furthermore, I haven’t codified the various comments or counted exact ratios.
3. For more context in Swedish and sound advice to parents, read Elza Dunkel’s blog post.