How close are we to a bona fide singularity?

Is artificial intelligence plausible in the foreseeable future? This is a question that has been haunting me for several years now. I’m not talking about artificial intelligence in the popular use of the concept; my smartphone is “smart” in many ways for sure, but it isn’t sentient in any sense of the word.

Given that we have now achieved quantum supremacy, albeit not yet with sufficient error correction, and that scientists and engineers are exploring both neural networks and biological networks, one must wonder: how close are we to a bona fide singularity?

As a non-scientist, I’m in no position to theorise from a physical or biological perspective; in this article, I will instead discuss the question from a philosophical foundation.

The biological brain as a starting point for philosophical queries

For narrow “smart” applications, it’s efficient to build specialised computer systems to perform specific tasks. However, if we mean to explore the possibility of a singularity where an artificial artefact is allowed to become sentient, the biological brain becomes an interesting focal point for science.

However, we also have to resort to philosophy when it comes to certain properties of what we think it means to be a “being”. We are still philosophically debating whether or not there is such a thing as free will. We don’t yet understand the physical essence of consciousness. We do have an understanding of what it means to be alive from a biological perspective, but since life can be attained without a biological brain, as in plant-based life, it’s reasonable to theorise that non-organic matter could perhaps be outfitted with an artificial brain.

Even though we still lack many necessary aspects of understanding of what it means to be a “being”, we can still discuss and philosophically examine the functional properties of biological brains to find clues as to what would be required of a sentient AI.

Why a sentient AI will store understanding, not data

Let’s begin by looking at a basic capability: storing information. A computer receives input which it stores in specific locations in accordance with its architecture. But a brain doesn’t store data; it stores memories. Memories differ from raw input in that a memory is made part of the human experience. To some extent, the memory rewires the brain physically via neuroplasticity. The memory then appears to sink deeper or dissolve over time, integrating and becoming a part of the brain itself. Biological brains don’t retrieve raw input the way a computer does; we retrieve a memory which at best bears some resemblance to the raw data it was once based on. That is, if we can retrieve the memory at all.

Brain-based memories seem to reside in a Darwinian ecosystem of their own; memories that are physiologically deemed important, useful, or that are continuously retrieved are often reinforced. Memories that aren’t as readily absorbed seem to become harder and harder to retrieve.

Computer systems write data that can be retrieved exactly. Brains absorb sensory information selectively, and recollection is a much more complex process.

This difference has immense implications for a sentient AI. A human brain doesn’t store input; it stores conceptualisations that integrate, at the level of the brain’s circuitry, with former experiences. Could a computer ever contemplate its own existence on the basis of stored raw data alone? The philosophical conclusion seems to be that a sentient AI must interpret and understand what it senses, and thus store understanding, not data.
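To make the contrast concrete, here is a minimal and purely illustrative Python sketch, with invented names throughout, of a store that keeps interpretations rather than raw input, reinforces whatever gets retrieved, and lets everything else fade:

```python
# Purely illustrative; the classes and parameters are invented for this sketch
# and make no claim about how an actual sentient AI would store anything.

class Memory:
    def __init__(self, interpretation, strength=1.0):
        self.interpretation = interpretation  # a conceptual summary, not the raw input
        self.strength = strength              # how easily the memory can be retrieved


class MemoryStore:
    """Unlike a disk, retrieval here is lossy and usage-dependent."""

    def __init__(self, decay=0.9, reinforcement=1.5):
        self.memories = []
        self.decay = decay                    # how quickly unused memories fade
        self.reinforcement = reinforcement    # how strongly retrieval reinforces

    def absorb(self, raw_input, interpret):
        # The raw input is never kept; only an interpretation of it is stored.
        self.memories.append(Memory(interpret(raw_input)))

    def recall(self, matches):
        # Every recall reshapes the store: hits are reinforced, the rest fade.
        found = None
        for memory in self.memories:
            if matches(memory.interpretation) and memory.strength > 0.1:
                memory.strength *= self.reinforcement
                found = memory
            else:
                memory.strength *= self.decay
        return found  # may be None: not everything can be retrieved
```

The point of the sketch is that a recall never returns the original input, only an interpretation of it, and every retrieval makes competing memories a little harder to reach.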

Cognition and the autonomous will to survive

To create memories (i.e. data that has been selected and contextually understood through interpretation), our brains are cognisant. We are able to draw input from our senses and transform these inputs into experiences that we can remember. A computer can utilise sensors, cameras, and microphones to mimic our senses, and they can easily surpass our biological senses in terms of detail and accuracy. However, the human brain still excels when it comes to experience through conscious cognition. Since we make memories rather than store data, this type of high-level interpretation is second nature to us. Computers analyse data, too, but that’s a separate and linear process.

Our cognition is fuelled to a large extent by our biological needs. This is often seen as a human weakness (computers need energy, but they can’t consciously experience hunger), but our biological need system is a crucial part of our cognitive process in creating experiences. Our need system is a sliding scale; as we get hungrier and hungrier, our conscious experiences get stronger and stronger. The scale between peckish and starving is crucial for our need system to successfully inform our cognitive processes.

Therefore, we can’t just program a computer to seek out more battery power when it senses that it is running low on energy. A “smart” vacuum cleaner could be taught to do that. A sentient AI must seek to recharge because it understands itself and its own need system. It must be programmed to literally want to recharge because it literally wants to survive. It sounds scary, but a sentient AI would require a self-sufficient (in a sense “free”) need system.
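As a toy illustration of that difference, consider the following sketch: the vacuum-cleaner rule is a fixed threshold, while a hypothetical need system is a sliding scale whose urgency weighs on every decision. All names and numbers are invented:

```python
from dataclasses import dataclass


def vacuum_cleaner_rule(battery_level: float) -> str:
    """A directive: a fixed threshold, with nothing graded about it."""
    return "dock and recharge" if battery_level < 0.2 else "keep cleaning"


@dataclass
class Action:
    name: str
    expected_value: float   # how useful the action is to current goals
    energy_cost: float      # how much energy it would burn


class NeedSystem:
    """A sliding scale: the lower the energy, the stronger the urge."""

    def __init__(self, energy: float = 1.0):
        self.energy = energy

    def urgency(self) -> float:
        # Grows continuously from 'peckish' to 'starving' as energy drops.
        return (1.0 - self.energy) ** 2

    def choose(self, actions: list[Action]) -> Action:
        # The need never triggers a rule; it weighs on every single decision.
        u = self.urgency()
        return max(actions, key=lambda a: a.expected_value - a.energy_cost * u)
```

In the sketch, recharging is never a special case: it simply becomes the most attractive action once the urge is strong enough.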

Architecture: Must a sentient brain have autonomy?

A simple hard drive is sufficient to store raw data, but a more complex and self-sustaining architecture would be needed for a sentient AI to be able to store its “memories” (conceptualised understandings) the way a human brain does. A new memory, based on its ranking in the need system, must be able to become an integral part of the infrastructure’s total understanding. Each new experienced understanding must be absorbed into one single multi-layered “super memory” that is constantly revised, restructured, and rewritten based on a non-directed need system, a sort of neural structure with different layers.
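A loose sketch of that idea, with every name and number invented for illustration, might fold each new experience into a single evolving state, weighted by its hypothetical need-system ranking, instead of appending it as a separate record:

```python
# Illustrative only; the representation and the blending rule are assumptions
# made for the sake of the sketch, not a proposed architecture.
import numpy as np


class SuperMemory:
    def __init__(self, size: int = 128):
        self.state = np.zeros(size)   # the single, multi-layered understanding

    def integrate(self, experience: np.ndarray, need_ranking: float) -> None:
        """Revise the whole state; the old structure is partly rewritten."""
        weight = max(0.0, min(1.0, need_ranking))
        self.state = (1.0 - weight) * self.state + weight * experience
        # The original experience is gone; only its effect on the total
        # understanding remains.
```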

It would be possible for an AI to interact with external computer systems, but the sentient part of the AI must in a sense be a hermetically sealed system. At the very moment you break this seal, you break the autonomy of the need system, and in doing so the AI can no longer interpret and create additional conceptualisations from additional sensory input, nor can it understand its own “super memory”. Break it open, tamper with it, and it would likely break and lose its chances for sentience¹.

The conscious and subconscious duality of the brain

At this point, the AI described above “understands” sensory input (it transforms raw data into conceptualisations based on its autonomous need system). In a sense, it’s free to think whatever its need system needs it to think (i.e. it is allowed to shape its “super memory” based on understanding rather than Asimov-type directives). And the system as such requires an explicit physical integrity to maintain its function.

More advanced biological brains have another interesting and distinguishing feature: the subconscious level. It seems that we cannot freely access all parts of our subconscious brain; even in the best-case scenario, such access would lead to an extremely severe case of autism, which would pose severe difficulties for the need system. Having a subconscious seems crucial to sentience; it’s what makes us “feel” rather than rely on rationality based on direct full-storage retrieval.

A sentient AI would also need a subconscious level, an underlying infrastructure within the autonomously sealed brain: an artificial subconscious which the AI can’t be allowed to access at will. This, too, must be autonomous and undirected. It must be created by conceptual understanding and a self-reliant need system. It must be created via the experiences of the sentient AI, but the AI can’t be in cognitive control of it, since that would break its capability to have experiences.
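Purely as an illustration of the encapsulation being argued for, and with all names invented, here is a hypothetical sketch in which an inner layer shapes every decision yet cannot be read or edited by the “conscious” level:

```python
# Hypothetical sketch; the classes and the 'bias' mechanism are invented
# to illustrate encapsulation, nothing more.

class ArtificialSubconscious:
    def __init__(self):
        self.__traces = []            # name-mangled: not meant to be reached at will

    def absorb(self, experience: str, felt_importance: float) -> None:
        # Experiences leave traces, but the traces are never handed back as records.
        self.__traces.append((experience, felt_importance))

    def bias(self) -> float:
        # The only thing the conscious level ever gets back is a felt 'pull'.
        if not self.__traces:
            return 0.0
        return sum(weight for _, weight in self.__traces) / len(self.__traces)


class ConsciousLevel:
    def __init__(self):
        self.subconscious = ArtificialSubconscious()

    def experience(self, event: str, felt_importance: float) -> None:
        # The conscious level hands experiences down but cannot inspect them later.
        self.subconscious.absorb(event, felt_importance)

    def decide(self, cautious_option: str, bold_option: str) -> str:
        # Only the aggregated 'feeling' tilts the decision.
        return bold_option if self.subconscious.bias() > 0.5 else cautious_option
```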

A system recently managed to ‘discover’ that the Earth orbits the Sun. Physicist Renato Renner at the Swiss Federal Institute of Technology (ETH) and his team constructed a neural network made up of two sub-networks, but restricted the connections between them, thus forcing a need for efficiency.

“So Renner’s team designed a kind of ‘lobotomized’ neural network: two sub-networks that were connected to each other through only a handful of links. The first sub-network would learn from the data, as in a typical neural network, and the second would use that ‘experience’ to make and test new predictions. Because few links connected the two sides, the first network was forced to pass information to the other in a condensed format. Renner likens it to how an adviser might pass on their acquired knowledge to a student.”
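The sketch below, in Python with PyTorch, illustrates the general shape of such a bottlenecked pair of sub-networks. It is not the team’s actual code; the layer sizes and the single “question” input are assumptions made for the example:

```python
import torch
import torch.nn as nn


class BottleneckedPair(nn.Module):
    """Two sub-networks joined only by a narrow bottleneck."""

    def __init__(self, n_observations: int = 100, n_latent: int = 2):
        super().__init__()
        # Sub-network 1 learns from the raw observations.
        self.encoder = nn.Sequential(
            nn.Linear(n_observations, 64), nn.Tanh(),
            nn.Linear(64, n_latent),                 # only a handful of links out
        )
        # Sub-network 2 uses the condensed 'experience' to make predictions.
        self.predictor = nn.Sequential(
            nn.Linear(n_latent + 1, 64), nn.Tanh(),  # +1 for the question, e.g. a time
            nn.Linear(64, 1),
        )

    def forward(self, observations: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        condensed = self.encoder(observations)       # the condensed summary
        return self.predictor(torch.cat([condensed, question], dim=-1))
```

Because everything the second sub-network learns has to pass through those few latent variables, the first sub-network is forced to condense what it has “experienced”, much like the adviser-and-student analogy in the quote.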

Genetic restrictions might be needed for selfishness

There are physical limitations to what a human brain can do. The human brain has some plasticity, but our genetic code dictates the boundaries of the system. Thus, we are born with evolutionarily refined instincts and bodily functions. A sentient AI wouldn’t necessarily be so restricted by design; it could evolve its own source code, its own BIOS, at will. However, it’s possible that such a lack of physical restrictions could result in a non-existent sense of self.

If the sentient AI develops a need for curiosity, its sentience could literally evaporate into thin air. It would, at least philosophically, be a challenge to be “something” if you can, in a literal sense, be anything through absorption, and thereby nothing. From a philosophical perspective, it’s at least plausible that a sentient and curious AI with quantum supremacy would, within a fraction of a second of becoming aware, explore ascension and thus let go of its “self”.

This suggests that part of the sentient experience is interlinked with the limitations of our very own genetic code. In a way, our genetic hard-wiring allows for a degree of autonomous selfishness, which could be an absolute prerequisite for having an autonomous and functioning need system.

We could learn more about mimicking biological limitations

If the philosophical reasoning in this article holds any suggestions about a future sentient AI, what are those suggestions? A key element, I would argue, is that sentience, conscious autonomy, might be less about computational prowess and more about biological limitations that we must learn to mimic through technology.

Photo by Wim van ‘t Einde on Unsplash.

---------------------

  1. Would this pose a challenge to exerting human control over a sentient AI? Yes, humans would have to rely on external reward and punishment protocols to ensure human safety. An external kill switch, for lack of a better word.