The Philosophy of the Singularity
Will there be any room left for humans when machines reach superintelligence? I look at some of the big philosophical questions raised by one seminal essay on the singularity.
This week I want to walk through a seminal paper in the philosophy of technology: a paper by Australian philosopher David Chalmers entitled "The Singularity: A Philosophical Analysis".
Before going into the deep philosophical (or metaphysical) questions, we should define what the idea of the singularity is. Chalmers sees the singularity as an "intelligence explosion", a point where machines or systems come to outdo human intelligence.
If we think about it, machines and systems are getting smarter and smarter. However, for all the wonders of current AI, tools like ChatGPT are limited in that they (from what we can tell) cannot produce new knowledge. We cannot ask ChatGPT to write code to make a better version of ChatGPT, for example. But if we take the development of technology to its logical conclusion, it seems plausible that there will come a point where a machine or system is more intelligent than humans. At that point it seems inevitable that it will be better than us at designing new systems. The new system it designs will be better still, and that system will be able to create an even better one, and so on. This point, where machine intelligence surpasses our own, is called the singularity.
The Questions of the Singularity
So why should we care about this? Well, Chalmers writes that we have to determine whether we, as humans, will play a significant role in this post-singularity world. If there comes a time when we can enhance our minds through artificial augmentation, even uploading our brains to a computer system, can human identity survive? This, according to Chalmers, is a life-and-death question.
Faced with the possibility of an intelligence explosion, he asks, how can we increase the odds of a desirable outcome? And how can we maximise the value in this post-singularity world? Furthermore, we should ask two questions: first, in a subjective sense, how good will a post-singularity world be for me and those I care about? And second, from an objective or relatively neutral standpoint, how good is it that such a world comes to exist?
I won't run through all the philosophical reasoning behind the argument for the singularity, but it runs roughly as follows: if we create systems that are of equal intelligence to us, it stands to reason that soon after, a system will be created which is more intelligent than us. If this happens, that system can be used to create a superintelligent system, and so on (a bare-bones version of the argument is sketched below). What I will focus on are some of the interesting implications of superintelligence, and the deep philosophical questions they reveal about consciousness and personal identity.
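For those curious about the shape of the argument, Chalmers lays it out roughly like this, using his own labels (AI for human-level machine intelligence, AI+ for greater-than-human intelligence, and AI++ for superintelligence):

1. There will (absent defeaters) be AI.
2. If there is AI, there will (soon after, absent defeaters) be AI+.
3. If there is AI+, there will (soon after, absent defeaters) be AI++.
4. Therefore, there will (absent defeaters) be AI++.

A "defeater" here is anything that blocks the process along the way, such as a catastrophe, resource limits, or a deliberate decision not to build such systems.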
Human Augmentation and Brain Enhancements
Suppose that there is a path to superintelligence which doesn't rely on creating a new AI system outright. Suppose that, before creating an independent superintelligent system, we start by giving our brains cognitive enhancements to achieve better-than-human intelligence. This could be done biologically, chemically or through some kind of implant. Suppose there were a chip to extend our memory beyond its natural limits, like a built-in hard drive; this could open up a world of opportunity for improving efficiency.
Are we almost there? In the past few weeks Google showed a version of its AI called Project Astra which used vision to look around a room. The demonstrator walked past a desk, and then asked the AI if it had seen her glasses. The AI responded that they were sitting on the table. Unlike humans, who can only focus on one thing at a time and who can miss obvious things, an AI system can take in everything all at once. Suppose this AI were built into glasses (as Google seems likely to do). It would be constantly taking in information, and could tell us exactly where we left our keys or the TV remote. It would also come to the rescue when we bump into that person in the supermarket who we went to school with but whose name we can't remember in the moment. In this form of "enhancement" we are still us, only better.
Suppose then that this sort of technology, instead of being in glasses, could be embedded in our brains; we would then be "extending" our minds. We can imagine this sort of bio-tech brain being more intelligent than a purely biological one. Now, because we are still relying on the brain's "biological core", there might be some cognitive or speed limitations with this method, imposed by the brain's architecture. We could, though, reach a point where the brain is "enhanced" to such a degree that the biological core is gone altogether (more on that in a minute).
Mind Uploading
If we as humans want to survive in the post-singularity, superintelligent world, then we might need to consider enhancing our brains. Chalmers writes that if we really want to compete, we might have to dispense with our biological brains altogether. The process of moving from a biological brain to a computer is often called uploading. This could happen gradually, by slowly replacing each neuron in our brain over time (gradual uploading), or all at once, by scanning our brains and loading them into a computer (instant uploading). It can involve destroying the original brain (destructive uploading), preserving the original brain (non-destructive uploading), or reconstructing cognitive structures from records (reconstructive uploading).
The idea of uploading raises some huge philosophical questions. First among them: will I survive uploading? Chalmers writes that this divides into two of the hardest questions in philosophy: consciousness and personal identity.
Uploading and Consciousness
This is a notoriously difficult topic, and one which a blog post like this cannot give a proper account of. I will just point out a few of the things philosophers puzzle over when it comes to consciousness. We take it for granted that we are conscious; that is, there is something that it is like to be us. There are things we experience, such as tastes, smells and sounds. These are things which make us who we are; they are the ways we experience and interact with the world. Chalmers writes that if we didn't have these experiences, then in an important sense we would no longer exist.
One question, then, is: if I try to upload my mind to an AI system, would I still be conscious? The problem, for Chalmers, is that we do not know much about consciousness at all. Neuroscience is getting good at showing correlations: how systems in the brain interrelate, which part of the brain controls which action, or which part lights up in reaction to certain stimuli. What we don't really know is why we have consciousness at all (this is known as the hard problem of consciousness). The question is whether consciousness is just the sum of our biological parts, that is, all the neurons, cortices and bits of grey matter put together. Or is there something more to consciousness beyond just these bits of material?
Philosophers are divided (as often happens in philosophy). Some say that consciousness is essentially biological, and that no non-biological thing can be conscious. Others think that consciousness can arise from causal structures or relationships. On this view (known as functionalism), consciousness can occur in non-biological systems as long as they are organised correctly; this is also the view which Chalmers takes.
The question is whether consciousness relies only on a kind of material system which can be broken down into its parts, or whether there is something more to consciousness, some further fact which isn't explained by knowledge of physical structures. A complete mapping of the brain and its physical structures can tell us about objective behaviour and objective functioning; it can tell us if something is alive. But, Chalmers argues, it can't really tell us about a system's subjective experience. Someone in a coma can have their brain activity measured, but this tells us nothing of the patient's subjective experience.
What would a conscious AI be like?
So, what would it be like for an AI to be conscious? We say that there is something that it is like to be a human, or something that it is like to be a bat. But is there something that it is like to be an AI?
Chalmers argues that if we take the functionalist view, then by replacing all of our biological brain matter with exact synthetic copies, we might very well survive being uploaded and keep our consciousness. On this view, all that matters is the organisation of the system.
Consider current experiments with visual prostheses, in which implants are designed to bypass the retina and optic nerve and stimulate the brain's visual cortex directly. The hope is that this will restore partial vision to those who have lost their sight. The person with the implant is still themselves; they have only been augmented. Now, imagine that technology progresses to the point where it can do similar things for other parts of the brain, replacing damaged areas with brain prostheses. This could happen chunk by chunk, or one neuron at a time. If it were done slowly, then each new component would interact just as the part it replaced had done. The brain would continue to function from one point to the next the same way it had all along. This would continue until we had a completely non-biological version.
But what happens to consciousness along the way? It might suddenly disappear, it might gradually fade, or it might stay throughout. We could ask the person as the process happens: "Are you conscious?" "Are you still you?" The person (or system) might not believe anything had changed. Chalmers thinks that, most plausibly, consciousness will stay throughout. He argues that, as long as things are set up in the "same patterns of causal organisation" (one thing causing another), they have the same states of consciousness. It doesn't matter whether this organisation comes in the form of neurons, silicon or any other matter.
On this functionalist view of consciousness, if we were capable of uploading our minds, then as long as they were organised in the same way, it seems our consciousness would be able to survive in this new state. But consciousness isn't the only issue with uploading. Suppose I am optimistic that consciousness survives uploading. Would the result still be me? Next week we will look at some of the sticky questions around uploading and personal identity.