
Social Media Algorithms and How They Rule Our Lives

A social media algorithm is not your friend — and it must be managed.

We all know how social media algorithms work, right?

You interact with social media, and the platform owner collects your user data to serve you more content, which in turn keeps you engaged and increases your exposure to third-party advertising. This is quite necessary, of course, since there’s a lot of content to structure:

[Image] In a wired world of online abundance, gatekeeping is key.

Unfortunately, the true inner workings of a social media algorithm have a much darker side. And yes, “darkness” is a reasonable analogy, because these algorithms are kept secret for many reasons.

And behind these curtains of secrecy, we don’t find myriad layers of complex computing, but rather man-made filters designed by real people with personal agendas.

Always shaping, suggesting, nudging, presenting.

The algorithms are not “personal”

As users, we have a general idea of how algorithms work, but it seems that only a handful of people actually know. To prevent industrial espionage, we can safely assume that most social networks make sure that no single developer has full access to the entirety of an algorithm.

And even if you’re a Facebook programmer, how would you know exactly how Google’s algorithm works?

One might assume that you have a personal Facebook algorithm stored on a server somewhere. An algorithm that tracks you personally and that learns about you and your behaviour. And the more it learns about you, the better it understands you. But this is not exactly how it works — for good reason.

As humans, we are notoriously bad at consciously knowing ourselves. And at understanding others. And our thinking is riddled with unconscious biases. Applying various types of machine learning, however complex, to learn about users and their interactions on the individual level would be both slow and expensive. Few social media users would be patient enough to endure such a lengthy process of trial and error.

Anyone familiar with data mining¹ will know that more advanced analytics techniques, like sentiment analysis from social media monitoring, require large data sets. Hence the term “big data”.
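To make that concrete, here’s a minimal Python sketch with an entirely made-up 30 per cent “true” positivity rate: sentiment estimates drawn from small samples swing wildly, while estimates from large samples converge on the underlying rate.

```python
import random

# Toy illustration: estimating the share of positive posts from samples
# of different sizes. The 30% "true" rate is hypothetical.
TRUE_POSITIVE_RATE = 0.30

def estimate_sentiment(sample_size: int) -> float:
    """Sample posts at random and return the observed positive share."""
    positives = sum(random.random() < TRUE_POSITIVE_RATE for _ in range(sample_size))
    return positives / sample_size

for n in (10, 100, 10_000, 1_000_000):
    # Small n: estimates bounce around. Large n: they settle near 0.30.
    print(f"n={n:>9,}: estimated positivity = {estimate_sentiment(n):.3f}")
```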

This is challenging to explain in non-technical terms, but a social media algorithm gets its immense power primarily from harvesting data from large volumes of users, simultaneously and over time, not from creating billions of self-contained algorithms.

Put another way: the master algorithm is figuring out humanity, not your personal behaviour.
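As a toy sketch of that idea (all users, topics, and the recommend helper below are hypothetical, not any platform’s actual method): one co-occurrence model is learned from everyone’s pooled interactions, and your “personal” recommendations are just a thin lookup against those shared patterns.

```python
from collections import defaultdict
from itertools import combinations

# Pooled interaction histories (user -> items they engaged with).
histories = {
    "user_a": ["cats", "memes", "science"],
    "user_b": ["cats", "memes"],
    "user_c": ["science", "politics"],
}

# One shared co-occurrence model, learned from everyone at once.
co_occurrence = defaultdict(lambda: defaultdict(int))
for items in histories.values():
    for a, b in combinations(set(items), 2):
        co_occurrence[a][b] += 1
        co_occurrence[b][a] += 1

def recommend(user: str, k: int = 3) -> list[str]:
    """Score unseen items by how often they co-occur with the user's items."""
    seen = set(histories[user])
    scores = defaultdict(int)
    for item in seen:
        for other, count in co_occurrence[item].items():
            if other not in seen:
                scores[other] += count
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("user_b"))  # ['science'] - inferred from everyone's data
```

The personalisation lives in the lookup at the end, not in a model of you.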

They find your lack of consistency disturbing

No one disputes that you are an individual. But is your behaviour consistent from interaction to interaction? The answer is probably … no. How you act and react is likely to be much more contextual and situational, at least in relation to your perceived uniqueness and self-identification.

This is the backdrop to why social networks limit your options to tweak “your personal algorithm” yourself. Personally, I would be all over the possibility of adjusting Facebook’s newsfeed or Google’s search results in fine detail using various boolean rulesets, but that would probably only teach the master social algorithm a bit more about human pretensions, and little else.

True personal algorithms that follow us around from service to service could potentially outperform all other types of algorithms, and they might be doing so in the near future, but only if we actually design them to lie to us: I would probably ask my algorithm to show me only serious articles in peer-reviewed publications, or pieces by well-educated authors with proven track records. But if the algorithm didn’t take it upon itself to show me some funny cat memes or weird YouTube clips every now and then, I would get bored quite quickly.

When we understand that social media algorithms aren’t trying to figure you out, the natural follow-up question is: how well are they doing at figuring out humanity?

Why social media algorithms aren’t … better

Social media algorithms are progressing — slowly. Understanding human behaviour at the macro-level is not a task to be underestimated.

Google is struggling to show relevant search results, and it still isn’t uncommon for users to have to search quite a bit before finding the information they seek. Facebook is struggling with users complaining about what they’re being shown in their newsfeeds. Spotify is struggling to suggest new music, quite often missing the mark by a mile.

LinkedIn is struggling to be business-relevant while at the same time being personally engaging (i.e. not boring). Despite being such a basic image-sharing platform, Instagram is criticised for making people feel bad about themselves. Pinterest is struggling to interpret personal visual taste and intent. Netflix is struggling to suggest what to watch (“Why on Earth would I want to see Jumanji 2?”), and Amazon is struggling to suggest what to buy (“Please stop, I regret clicking on those purple bath towels by mistake a year ago!”).

The practical engineering approach to this problem is simple and straightforward:

Just take the guesswork out of the equation.

Taking the guesswork out with real-time testing

A dominating feature of today’s social media algorithms is real-time testing. If you publish anything, the algorithm will use its data to test your content on a small statistical subset of users. If their reactions are favourable, the algorithm will then show your published content to a slightly larger subset — and then test again. And so on.

If your published content has viral potential, and your track record as a publisher has granted you enough platform authority to surpass critical mass, your content will spread like rings on water throughout larger and larger subsets of users.
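A minimal sketch of this staged rollout might look like the Python below. The audience sizes, growth factor, and engagement threshold are invented for illustration; real platforms tune such parameters constantly.

```python
import random

AUDIENCE = 1_000_000          # total reachable users
SEED_SIZE = 500               # initial statistical subset
GROWTH_FACTOR = 5             # how much each successful round expands reach
ENGAGEMENT_THRESHOLD = 0.05   # minimum engagement rate to keep spreading

def simulate_engagement(shown_to: int, true_appeal: float) -> float:
    """Stand-in for real user reactions: noisy samples around the
    content's underlying appeal."""
    reactions = sum(random.random() < true_appeal for _ in range(shown_to))
    return reactions / shown_to

def staged_rollout(true_appeal: float) -> int:
    """Test on a small subset; expand to a larger one after each pass."""
    reach = SEED_SIZE
    while reach < AUDIENCE:
        rate = simulate_engagement(reach, true_appeal)
        if rate < ENGAGEMENT_THRESHOLD:
            return reach          # content stops spreading here
        reach = min(reach * GROWTH_FACTOR, AUDIENCE)
    return AUDIENCE               # content "went viral"

print(staged_rollout(true_appeal=0.02))   # likely stalls early
print(staged_rollout(true_appeal=0.20))   # likely reaches everyone
```

Content with low underlying appeal stalls in the first rounds; content with high appeal climbs subset by subset until it has reached the whole audience.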

From a programming standpoint, this type of testing algorithm isn’t as mathematically complex as one might think: as discussed before, the most powerful approach to increasing virality is to find ways to reduce cycle times, which isn’t that hard to facilitate as long as there’s enough content to keep users engaged.
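A back-of-the-envelope way to see why cycle times matter so much: if each viewer brings in K new viewers per sharing round, and a round takes ct days, reach after t days grows roughly as K^(t/ct). The numbers below are purely illustrative.

```python
# Halving the cycle time at the same viral coefficient K has a far
# bigger effect than most people intuit.

def viral_reach(seed: int, k: float, cycle_time_days: float, days: float) -> float:
    """Approximate reach after `days`, given K new viewers per round."""
    rounds = days / cycle_time_days
    return seed * (k ** rounds)

# Same appeal (K), very different cycle times:
print(viral_reach(seed=100, k=1.5, cycle_time_days=2.0, days=14))  # ~1,700
print(viral_reach(seed=100, k=1.5, cycle_time_days=0.5, days=14))  # ~8.5 million
```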

YouTube’s algorithm arguably does well in terms of cycle times, but the real star of online virality is the Chinese platform TikTok.

And this is where it gets pitch black. Because the complexity and gatekeeping prowess of today’s social media algorithms don’t primarily stem from creative use of big data and high-end artificial intelligence; they stem from the blunt use of man-made filters.

The social media algorithms could be made much more complex using machine learning, natural language processing, and artificial intelligence combined with neural network models of human psychology. Especially if we allowed them to be individual across services, and to lie to us just a bit.

But, no. Instead, the social media algorithms of today are surprisingly straightforward and based on real-time iterative testing. Today, virality is largely controlled via the use of added filters.

We’re underestimating the effects of filters

The algorithmic complexity is primarily derived from actual humans manually adding filters to algorithms in their control. These filters are then tested on smaller subsets before being rolled out at larger scales.

Most of us have heard creators on Instagram, TikTok, and sometimes YouTube complain about being “shadow banned” when their reach suddenly dwindles from one day to the next — for no apparent reason. Sometimes this might be due to changes to the master algorithm, but most creators are probably affected by newly added filters.

Make no mistake about it: filters are powerful. No matter how well a piece of content negotiates the master algorithm, if it gets stuck in a filter, it’s going absolutely nowhere. And these filters aren’t the output of some ultra-smart algorithm; they’re added by humans with corporate or ideological agendas.
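A minimal sketch, with entirely hypothetical filter rules, of why a filter beats everything else in the pipeline: applied before distribution, it zeroes out reach no matter how highly the ranking model scores the content.

```python
from typing import Callable

FilterRule = Callable[[dict], bool]

# Man-made rules, added by people - not learned by any model.
filter_rules: list[FilterRule] = [
    lambda post: "banned_topic" in post["tags"],
    lambda post: post["author_flagged"],
]

def distribute(post: dict, model_score: float) -> float:
    """Return the post's effective reach multiplier."""
    if any(rule(post) for rule in filter_rules):
        return 0.0            # stuck in a filter: going absolutely nowhere
    return model_score        # otherwise, the ranking model decides

post = {"tags": ["banned_topic"], "author_flagged": False}
print(distribute(post, model_score=0.97))   # 0.0 - the score never mattered
```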

“There is no information overload, only filter failure.”
— Clay Shirky

And TikTok might serve as one of the darkest examples of what this actually means: leaked internal documents revealed how TikTok was adding filters to limit content by people deemed unattractive, or by people producing content in locations that looked poor.

And yes, this is where the darkness comes into full effect: it’s when human agendas get added into the algorithmic mix.

“One document goes so far as to instruct moderators to scan uploads for cracked walls and “disreputable decorations” in users’ own homes — then to effectively punish these poorer TikTok users by artificially narrowing their audiences.”

Invisible Censorship — TikTok Told Moderators to Suppress Posts by “Ugly” People and the Poor

The grim irony here is that adding filters is rather straightforward from a programming perspective. We often think of algorithms as advanced black boxes of superhuman code that operate almost beyond human comprehension. But with fairly straightforward algorithms, it is the man-made filters we need to watch out for and take into account.

Gatekeeping is the ultimate power in society

For the sake of argument, think about what would happen if Google and Facebook decided to filter away a specific day completely. Everything that refers to that day wouldn’t pass any iterative tests anymore. Any content from that day would be shadow banned. And search engine results pages would deflect anything related to that particular day.

To paraphrase a popular TikTok meme, “How would you know?”

No man or woman makes decisions based on actual reality; we all make decisions based on our limited understanding of that reality. Hence, if you control parts of that reality, you indirectly control parts of what we all do, say, or even think.²

“Since we cannot change reality, let us change the eyes which see reality.”
— Nikos Kazantzakis

This ultimate gatekeeping power is in no way checked or balanced: if a social network wants to manipulate billions of people by addicting them to dopamine-inducing feedback loops, they can. And they do. If they want to censor something deemed immoral despite being legal, they can. And they do.

If social media filters are designed to turn us into passive consumers and ad viewers, then that’s … dark. But social networks also actively comply with dominating political agendas in exchange for continued control of their gatekeeping powers. When social media algorithms are legislated by ideological institutions to filter our world views, it’s just lights out.

And this is the truth about how social media algorithms are controlling our lives:

Their filters govern our world-views, plain and simple.

The first rule of social media algorithms

Social networks don’t want us talking and asking questions about their algorithms, despite these being at the core of their businesses. Because 1) they need to keep the algorithms secret, 2) the algorithms are more blunt than we might think, 3) their complexity is manifested primarily through added man-made filters, and 4) they don’t want to direct our attention to just how much gatekeeping power they wield.

And neither journalists nor legislators are exactly hard at work exposing these apparent democratic weaknesses: journalists because they want their lost gatekeeping power back, and legislators because they see ideological opportunities to gain control over these filters.

[Image] Our world views are more heavily filtered than we might realise.

Any PR professional knows that the media has an agenda that must be managed so it doesn’t spin out of control. We know that politicians must be told your story — because others will tell them theirs. The same is true for managing your content and your reputation on social networks. Social networks are “good” in exactly the same way the news media is “objective” or politicians are “altruistic”.

A social media algorithm can be successfully negotiated and sometimes even be made to work for you or your organisation. But an algorithm with its filters will never be your friend. As public relations professionals, we should act accordingly and manage the social media algorithms — just as we manage journalists and legislators.

“In the digital space, attention is a currency. We earn it. We spend it.”
— Brian Solis

Cover photo by Jerry Silfwer (Prints/Instagram)

---------------------

  1. Batrinca, B., & Treleaven, P. C. (2015). Social media analytics: a survey of techniques, tools and platforms. AI & Society, 30, 89–116. https://doi.org/10.1007/s00146-014-0549-4
  2. Lippmann, W. (1960). Public Opinion [1922]. New York: Macmillan.
