
The AI singularity and consciousness

How close are we to an actual AI singularity?

Is artificial intelligence plausible in the foreseeable future? This is a question that has been haunting me for several years now. I’m not talking about artificial intelligence in the looser sense of the concept; my smartphone is “smart” in many ways, for sure, but it isn’t sentient in any way.

For narrow “smart” applications, Artificial Narrow Intelligence (ANI), it’s efficient to build specialised computer systems to perform specific tasks. However, if we mean to explore the possibility of a singularity where an artificial artefact is allowed to become sentient, the biological brain becomes an interesting focal point for science.

Given that we have now achieved quantum supremacy, albeit not yet with sufficient error correction, and that scientists and engineers are exploring both neural networks and biological networks, one must wonder: how close are we to a bona fide singularity?1

Even though we’re still far from decoding consciousness, we can discuss and philosophically examine the functional properties of a biological brain to find clues as to what would be required of a singularity AI.

To store understanding, not data

Let’s begin by looking at a basic cognitive capability — storing information. A computer receives input which it stores in specific locations in accordance with its architecture. But a brain doesn’t store data; it stores memories. Memories differ from raw input in that a memory becomes part of the human experience. To some extent, the memory physically rewires the brain via neuroplasticity. The memory then appears to sink deeper, or dissolve, over time, integrating and becoming a part of the brain itself.

Biological brains don’t retrieve raw input the way a computer does; we retrieve a memory which, at best, bears some resemblance to the raw data it was once based on. That is — if we can retrieve the memory at all.

Brain-based memories seem to reside in a Darwinian ecosystem of their own; memories that are physiologically deemed important, useful, or continuously retrieved are reinforced. Brains absorb sensory information selectively, and recollection is therefore a holistic process. Computer systems, on the other hand, write data that can be retrieved exactly. This difference has immense implications for a singularity AI.

A human brain doesn’t store input; it stores conceptualisations that integrate on a circuitry level with former experiences. Could a computer ever contemplate its own existence on the basis of stored raw data alone? The philosophical conclusion seems to be that a sentient AI must interpret and understand what it senses and thus store understanding — not data.

Cognition and the autonomous will to survive

To create memories (i.e. data that has been selected for and contextually understood through interpretation), our brains are cognisant. We are able to draw input from our senses and to transform these inputs into experiences that we can remember. A computer can utilise sensors, cameras and microphones to mimic our senses — and they can easily surpass our brains in terms of detail and accuracy. However, the human brain still excels when it comes to experience through conscious cognition.

Our cognition seems to be fuelled by our evolutionary needs. This is often seen as a human weakness, but our biological need system is a crucial part of our cognitive process in creating experiences. Our need system is a sliding scale; as we get hungrier and hungrier, our conscious experiences get stronger and stronger. The scale between peckish and starving is crucial for our need system to successfully inform our cognitive processes. Computers need energy, too, but they can’t consciously experience hunger.

Therefore, it isn’t enough to program a computer to seek out more battery power when it senses that it’s running low on energy; any “smart” vacuum cleaner can be taught to do that. A sentient AI must seek to recharge because it understands itself and its own need system. It must be hardwired to want to recharge because it literally wants to survive — despite being programmed otherwise.
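To make the distinction concrete, here’s a minimal Python sketch of the vacuum-cleaner kind of “smartness” described above. The names and the threshold are invented purely for illustration; the point is that the machine docks because a programmer wrote a condition, not because it wants to survive.

```python
# A hard-coded survival rule of the ANI kind.
BATTERY_THRESHOLD = 0.15  # recharge below 15% -- an arbitrary, human-chosen cut-off

def next_action(battery_level: float) -> str:
    """Pick the robot's next action from a fixed, externally given rule.

    There is no need system here: the condition was written by a
    programmer, and the machine executes it without understanding it.
    """
    if battery_level < BATTERY_THRESHOLD:
        return "return_to_dock"
    return "keep_cleaning"

print(next_action(0.10))  # the rule fires; no will to survive is involved
```

A sentient AI, by the argument above, would instead have to arrive at docking behaviour from its own autonomous need system rather than from a condition like this one.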

It sounds scary, but a sentient AI would require a hardwired (thus “free”) need system.

Must a sentient AI have autonomy?

A simple hard drive is sufficient to store raw data, but a more complex and self-sustaining architecture would be needed for a singularity AI to be able to store its “memories” (conceptualised understandings intertwined holistically with all other drivers) the way a human brain does. A new memory, based on its ranking in the need system, must be able to become an integral part of the infrastructure’s total understanding.

Each new experienced understanding must be absorbed into one single multi-layered “super memory” that is constantly revised, restructured, and rewritten based on a non-directed need system, a sort of neural structure with different layers.

It would be possible for a singularity AI to interact with external computer systems, but the sentient part of the AI must, in a sense, be a hermetically sealed system, because at the very moment you break this seal, you break the autonomy of the need system. In doing so, the AI can no longer interpret and create additional conceptualisations from new sensory input, nor can it understand its own “super memory”. Break it open, tamper with it, and it would likely break and lose its chances for sentience2.

The conscious and subconscious duality

At this point, the AI described above “understands” sensory input (transforms raw data to conceptualisations based on its autonomous need system). In a sense, it’s free to think whatever its need system needs to think (i.e. being allowed to shape its “super memory” based on understanding rather than Asimov-type directives). And the system as such requires an explicit physical integrity to maintain its function.

More advanced biological brains have another interesting and distinguishing feature: the subconscious level. It seems that we cannot freely access all parts of our subconscious brains because, even in the best-case scenario, that would lead to an extremely severe case of autism, which would pose serious difficulties for the need system. Having a subconscious seems crucial to sentience; it’s what makes us “feel” rather than relying on rationality based on direct full-storage retrieval.

A singularity AI would also need a subconscious level: an underlying infrastructure within the autonomously sealed brain, an artificial subconscious which the AI can’t be allowed to access at will. This, too, must be autonomous and undirected. It must be created by conceptual understanding and a self-reliant need system, via the experiences of the sentient AI, yet the AI can’t be in cognitive control of it, since that would break its capability for having experiences.

A system recently managed to ‘discover’ that the Earth orbits the Sun. Physicist Renato Renner at the Swiss Federal Institute of Technology (ETH) and his team constructed a neural network with two sub-networks but restricted the connections between them, thus forcing a need for efficiency:

“So Renner’s team designed a kind of ‘lobotomized’ neural network: two sub-networks that were connected to each other through only a handful of links. The first sub-network would learn from the data, as in a typical neural network, and the second would use that ‘experience’ to make and test new predictions. Because few links connected the two sides, the first network was forced to pass information to the other in a condensed format. Renner likens it to how an adviser might pass on their acquired knowledge to a student.”
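The bottleneck idea in the quote can be sketched in a few lines. The toy example below (plain NumPy; not the actual ETH code, and all sizes and numbers are invented for illustration) joins two linear sub-networks through only two links, then trains them to reconstruct observations that secretly depend on two hidden parameters. The narrow connection forces the first network to pass on a condensed representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observations": 10-dimensional data that secretly depends on
# just 2 hidden parameters -- analogous to underlying orbital angles.
hidden = rng.normal(size=(500, 2))
mix = rng.normal(size=(2, 10))
observations = hidden @ mix

# Sub-network 1 (the "adviser") compresses each observation; sub-network 2
# (the "student") reconstructs it. The handful of links between them is
# the 2-unit bottleneck.
W_enc = rng.normal(size=(10, 2)) * 0.1  # observation -> condensed format
W_dec = rng.normal(size=(2, 10)) * 0.1  # condensed format -> prediction

lr = 0.01
for _ in range(3000):
    latent = observations @ W_enc   # condensed representation (2 numbers)
    recon = latent @ W_dec          # prediction made from the bottleneck
    err = recon - observations
    # Gradient descent on the mean squared reconstruction error.
    grad_dec = latent.T @ err / len(observations)
    grad_enc = observations.T @ (err @ W_dec.T) / len(observations)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((observations @ W_enc @ W_dec - observations) ** 2))
print(f"bottleneck width: {W_enc.shape[1]}, reconstruction error: {mse:.4f}")
```

Because only two numbers can cross the gap, the encoder is pushed to discover the two underlying parameters, much like Renner’s ‘adviser’ passing condensed knowledge to the ‘student’.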

Genetic restrictions for selfishness

There are physical limitations to what a human brain can do. The human brain has some plasticity, but our genetic code dictates the boundaries of the system. Thus, we are born with evolutionarily refined instincts and bodily functions. A singularity AI wouldn’t be so restricted by design; it could evolve its own source code, its own BIOS, at will. This could make it dangerous — or self-defeating.

In The Selfish Gene, evolutionary biologist Richard Dawkins writes:

“For more than three thousand million years, DNA has been the only replicator worth talking about in the world. But it does not necessarily hold these monopoly rights for all time. Whenever conditions arise in which a new kind of replicator can make copies of itself, the new replicators will tend to take over, and start a new kind of evolution of their own.”

If a singularity AI develops a hardwired need system for curiosity or altruism, its consciousness might just vanish into thin air. From a philosophical perspective, it’s at least plausible that a sentient and curious AI with quantum supremacy would, in less than a fraction of a second after becoming aware, explore ascension and thus let go of its own “self” forever.

This suggests that part of the sentient experience is interlinked with the limitations of our very own genetic code. In a way, our genetic hard-wiring allows us a degree of autonomous selfishness, which could be an absolute prerequisite for an autonomous and functioning need system.

If the philosophical reasoning in this article holds any suggestions about a future sentient AI, what are they? A key element, I would argue, is that the singularity AI, the conscious autonomy of machines, might be less about computational prowess and more about imposing limitations on technology.

Read also: Why AI won’t replace your PR department anytime soon

Photo by Wim van ‘t Einde on Unsplash.


  1. As a non-scientist, I’m in no position to theorise from a physical or biological perspective; in this article, I will discuss this question from a philosophical viewpoint.
  2. Would this pose a challenge to exerting human control over a sentient AI? Yes, humans would have to rely on external reward and punishment protocols to ensure human safety. An external kill switch, for lack of a better word.


Jerry Silfwer
Jerry Silfwer, aka Doctor Spin, is an awarded senior adviser specialising in public relations and digital strategy. Currently CEO at KIX Communication Index and Spin Factory. Before that, he worked at Kaufmann, Whispr Group, Springtime PR, and Spotlight PR. Based in Stockholm, Sweden.
