The AI Singularity

If consciousness is an illusion, the singularity might be closing in on us.

Disclaimer: I’m a PR professional who just enjoys thinking, reading, and learning about topics well beyond my educational background.

How close are we to an actual AI singularity?

Is artificial intelligence plausible in the foreseeable future? This is a question that has been taunting me for quite some time. Maybe it shouldn’t, because I have no training or experience in this field at all. Nor do I have any journalistic ambitions — I just can’t help thinking about it.

But can we even begin to discuss AGI when we haven’t yet understood our very own consciousness?

Processing

Are we closing in on a breakthrough in artificial intelligence?

When I’m thinking about artificial intelligence, I’m not thinking about AI in a general way — my smartphone is “smart” in many ways for sure, but I wouldn’t regard it as sentient. For narrow “smart” applications, Artificial Narrow Intelligence (ANI), it seems efficient to build specialised computer systems to perform specific tasks.

In short: ANI is already real.

However, when it comes to the possibility of an actual singularity, Artificial General Intelligence (AGI), where a non-biological system is allowed to become sentient, a growing number of experts seem to suggest that we’re getting close. Maybe dangerously close.

Is this because we’re able to build more complex computational systems? Will we eventually build a computer so complex that it just “comes to life?”

Will anything become sentient above a certain complexity threshold?

It seems straightforward to conclude that while ANI systems can easily outperform human brains, we’re still sentient and they’re not. In turn, this seems to suggest something about complexity. One day, we might be able to construct a computer system with so much processing power that it starts to think for itself and becomes, if not conscious, at least self-aware — whatever that difference may be. If there even is such a difference!

However, the physicist and Nobel laureate Sir Roger Penrose has argued that consciousness might not be a result of complexity. If it were, even a number would become sentient if only it were large (complex) enough. In that sense, any sufficiently large number would be sentient. Arguably, the universe ought to be sentient, since it contains everything (including our brains). It could be, of course, but our human brains become sentient way below that type of all-encompassing complexity, so it’s reasonable to question this idea of a complexity threshold.

Is consciousness a quantum side-effect of processing information?

It’s been suggested that consciousness might be a side effect of information processing arising from quantum mechanical properties in our brains. If this is true, our best bet at producing AGI might be to construct processing systems that are quantum mechanical — at least to a degree.

Given that we have now achieved quantum supremacy, albeit not yet with sufficient error correction, and that scientists and engineers are exploring neural networks and biological networks, I have to wonder — are these advances bringing us closer to an actual singularity?

If I were to venture a guess as to how information processing relates to our consciousness, I’d bet that both significant thresholds and various quantum effects are involved, but that these are necessary prerequisites rather than causes of consciousness.

Is consciousness a mental illusion?

When it comes to processing information, I’m now at a point where I’ve started to believe that consciousness is an illusion. That “being conscious” is rather “believing oneself to be conscious — because that’s how it feels.”

If this is true, we could actually be getting rather close to a possible singularity, since we don’t have to recreate an illusory state of consciousness within a machine, but rather make machines feel as if they are conscious.

Memory

What’s the difference between human memories and computer-stored data?

Next, let’s look at a basic cognitive capability — storing information.

A computer receives input which it stores in specific locations in accordance with its architecture. But a brain doesn’t seem to be storing data the same way as computers; we seem to be storing experiential memories.

Experiential memories seem to rewire more than just a single pathway in the brain — at least partly via neuroplasticity. The memory then appears to sink deeper (or dissolve) over time, all the while integrating into and becoming a part of the brain as a whole.

From a biological perspective, a specific brain seems to be the physical sum of all experiences ever had by every ancestor — and then more directly altered through the experiences of the individual throughout its life.

Biological brains don’t seem to retrieve raw input the same way a computer does either; we seem to retrieve experiential memories which, at best, bear some resemblance to the actual raw data they were once based on.

Can human experiences be stored in the same way computers store their data?

Brain-based memories seem to reside in a Darwinian ecosystem of their own; memories that are physiologically deemed important, useful, or that are continuously retrieved are reinforced. Brains absorb sensory information selectively and integrate it into the whole, so recollection is a holistic process. Computer systems, on the other hand, write data that can be retrieved exactly. This difference has immense implications for a singularity AI.

A human brain doesn’t store input; it stores conceptualisations that integrate on a circuitry level with former experiences. Could a computer ever contemplate its own existence on the basis of stored raw data alone? The philosophical conclusion seems to suggest that a sentient AI must interpret and understand what it senses and thus store understanding — not data.
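To make the contrast concrete, here is a toy sketch of the two retrieval styles described above. It is purely illustrative — the class, the decay and boost numbers, and the example memories are my own assumptions, not a model of any real brain: a computer store returns data exactly as written, while a “Darwinian” memory strengthens the traces it retrieves and lets the rest fade.

```python
# Toy contrast: exact key-value storage vs. a trace memory that is
# reinforced by retrieval and fades when ignored. Purely illustrative.

computer_memory = {}
computer_memory["sensor_reading"] = "raw input"
assert computer_memory["sensor_reading"] == "raw input"  # retrieved exactly, every time


class TraceMemory:
    """Memories compete: retrieval strengthens a trace, neglect weakens it."""

    def __init__(self, decay: float = 0.95, boost: float = 0.5):
        self.traces: dict[str, float] = {}
        self.decay = decay
        self.boost = boost

    def store(self, memory: str) -> None:
        self.traces[memory] = 1.0

    def recall(self, memory: str) -> float:
        # Every recall ages all traces a little...
        for key in self.traces:
            self.traces[key] *= self.decay
        # ...and reinforces the one that was actually retrieved.
        if memory in self.traces:
            self.traces[memory] += self.boost
        return self.traces.get(memory, 0.0)


memories = TraceMemory()
memories.store("first day of school")
memories.store("what I had for lunch")
for _ in range(5):
    memories.recall("first day of school")  # rehearsed, so it grows stronger
print(memories.traces)  # the lunch memory has quietly faded
```

Nothing this simple captures holistic, experiential recollection, of course, but it shows how even the retrieval rules differ in kind, not just in degree.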

Cognition

How connected is consciousness to sensory perception?

To create memories (i.e. data that has been selected for and contextually understood through interpretation), our brains are cognisant. We are able to draw input from our senses and transform these inputs into experiences that we can remember. A computer can utilise sensors, cameras, and microphones to mimic our senses — and these can easily surpass our own in terms of detail and accuracy. However, the human brain still excels when it comes to experience through conscious cognition.

Our cognition seems to be fuelled by our evolutionary needs. This is often seen as a human weakness, but our biological need system is a crucial part of our cognitive process in creating experiences. Our need system is a sliding scale; as we get hungrier and hungrier, our conscious experiences get stronger and stronger. The scale between peckish and starving is crucial for our need system to successfully inform our cognitive processes. Computers need energy, too, but they can’t consciously experience hunger.

Therefore, it isn’t enough to program a computer to seek out more battery power when it senses that it’s running low on energy — any “smart” vacuum cleaner could be taught to do that. A sentient AI must seek to recharge because it understands itself and its own need system. It must be hardwired to want to recharge because it literally wants to survive — despite being programmed otherwise.
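The vacuum cleaner case is, at its core, nothing more than a hard-coded rule. A minimal sketch of that ANI-style directive (the threshold and the function names are hypothetical, chosen only for illustration):

```python
# An ANI-style recharge directive: a fixed rule the system follows without
# representing, let alone understanding, why recharging matters to it.

LOW_BATTERY_THRESHOLD = 0.15  # hypothetical cut-off: recharge below 15%


def should_return_to_dock(battery_level: float) -> bool:
    return battery_level < LOW_BATTERY_THRESHOLD


if should_return_to_dock(0.12):
    print("Returning to charging dock.")
```

The argument above is that a sentient AI would have to arrive at the same behaviour from the other direction: not because a threshold was programmed, but because its own need system makes it want to survive.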

It sounds scary, but a sentient AI would require a hardwired (thus “free”) need system.

Does sentience require physical autonomy?

A simple hard drive is sufficient to store raw data, but a more complex and self-sustaining architecture would be needed for a singularity AI to be able to store its “memories” (conceptualised understandings intertwined holistically with all other drivers) the way a human brain does. A new memory, based on its ranking in the need system, must be able to become an integral part of the infrastructure’s total understanding.

Each new experienced understanding must be absorbed into one single multi-layered “super memory” that is constantly revised, restructured, and rewritten based on a non-directed need system, a sort of neural structure with different layers.

It would be possible for a singularity AI to interact with external computer systems, but the sentient part of the AI must in a sense be a hermetically sealed system. At the very moment you break this seal, you break the autonomy of the need system, and in doing so the AI can no longer interpret additional sensory input and create additional conceptualisations from it, nor can it understand its own “super memory”. Break it open, tamper with it, and it would likely break down and lose its chances for sentience.¹

Subconsciousness

At this point, the AI described above “understands” sensory input (it transforms raw data into conceptualisations based on its autonomous need system). In a sense, it’s free to think whatever its need system needs it to think (i.e. it’s allowed to shape its “super memory” based on understanding rather than Asimov-type directives). And the system as such requires explicit physical integrity to maintain its function.

More advanced biological brains have another interesting and distinguishing feature: the subconscious level. It seems that we cannot freely access all parts of our subconscious brains because, in the best-case scenario, that would lead to an extremely severe case of autism, which would pose severe difficulties for the need system. Having a subconscious seems crucial to sentience; it’s what makes us “feel” rather than rely on rationality based on direct full-storage retrieval.

A singularity AI would also need a subconscious level, an underlying infrastructure within the autonomously sealed brain. An artificial subconscious which the AI can’t be allowed to access at will. This, too, must be autonomous and undirected. It must be created by conceptual understanding and a self-reliant need system. It must be created via the experiences of the sentient AI, but the AI can’t be in cognitive control of it, since that would break its capability to have experiences.

A system recently managed to ‘discover’ that the Earth orbits the Sun. Physicist Renato Renner at the Swiss Federal Institute of Technology (ETH) and his team constructed a neural network consisting of two sub-networks, but restricted the connections between them, thus forcing a need for efficiency:

“So Renner’s team designed a kind of ‘lobotomized’ neural network: two sub-networks that were connected to each other through only a handful of links. The first sub-network would learn from the data, as in a typical neural network, and the second would use that ‘experience’ to make and test new predictions. Because few links connected the two sides, the first network was forced to pass information to the other in a condensed format. Renner likens it to how an adviser might pass on their acquired knowledge to a student.”
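To make the quoted architecture a little more tangible, here is a minimal sketch of the general idea: two sub-networks joined only by a narrow bottleneck, so the first must pass its “experience” on in condensed form. It is not the ETH team’s actual code; the framework (PyTorch), the layer sizes, and the width of the bottleneck are my own assumptions.

```python
# A bottlenecked pair of sub-networks, in the spirit of the quoted description.
import torch
import torch.nn as nn


class BottleneckedPair(nn.Module):
    def __init__(self, n_inputs: int, n_latent: int = 2, n_outputs: int = 1):
        super().__init__()
        # First sub-network: the "adviser" that learns from raw observations.
        self.observer = nn.Sequential(
            nn.Linear(n_inputs, 64),
            nn.ReLU(),
            nn.Linear(64, n_latent),  # only a handful of links out
        )
        # Second sub-network: the "student" that predicts from the condensed summary.
        self.predictor = nn.Sequential(
            nn.Linear(n_latent, 64),
            nn.ReLU(),
            nn.Linear(64, n_outputs),
        )

    def forward(self, observations: torch.Tensor) -> torch.Tensor:
        condensed = self.observer(observations)  # forced to compress
        return self.predictor(condensed)


model = BottleneckedPair(n_inputs=10)
prediction = model(torch.randn(1, 10))  # dummy data, just to show the flow
```

The interesting part, for this article’s purposes, is that the compression isn’t dictated in detail; the constraint itself forces the first network to decide what is worth passing on.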

Selfishness

There are physical limitations to what a human brain can do. The human brain has some plasticity, but our genetic code dictates the boundaries of the system. Thus, we are born with evolutionarily refined instincts and bodily functions. A singularity AI wouldn’t be so restricted by design; it could evolve its own source code, its own BIOS, at will. This could make it dangerous — or self-defeating.

In The Selfish Gene, evolutionary biologist Richard Dawkins writes:

“For more than three thousand million years, DNA has been the only replicator worth talking about in the world. But it does not necessarily hold these monopoly rights for all time. Whenever conditions arise in which a new kind of replicator can make copies of itself, the new replicators will tend to take over, and start a new kind of evolution of their own.”

If a singularity AI develops a hardwired need system for curiosity or altruism, its consciousness might just vanish into thin air. From a philosophical perspective, it’s at least plausible to think that a sentient and curious AI with quantum supremacy, in less than a fraction of a second after becoming aware, would explore ascension and thus let go of its own “self” forever.

This suggests that part of the sentient experience is interlinked with the limitations of our very own genetic code. In a way, our genetic hard-wiring allows us a degree of autonomous selfishness, which could be an absolute prerequisite for having an autonomous and functioning need system.

If the philosophical reasoning in this article hides any suggestions about a future sentient AI, what are those suggestions? A key element, I would argue, is that the singularity AI, the conscious autonomy of machines, might be less about computational prowess and more about imposing limitations on technology.

Read also: Why AI won’t replace your PR department anytime soon

Photo by Wim van ‘t Einde on Unsplash.

---------------------

  1. Would this pose a challenge to exerting human control over a sentient AI? Yes, humans would have to rely on external reward and punishment protocols to ensure human safety. An external kill-switch, for lack of a better word.

