The Morality of Artificial Intelligence
Technological progress is advancing at an exponential rate. For those of you familiar with the Singularity, you know that it is highly probable that within a few years, humanity will create the first truly conscious, self-aware artificial intelligence (speculations range from ‘anytime now’ to 2060, with a loosely accepted average of around 2035).
This is a neat little graphic from TIME magazine — there are thousands like it, I just wanted to use this one as an example. If you want to learn more about the Singularity, I suggest you check out Singularity 101 and Singularity 201 on this site. Or, for a fiction-based perspective on AI morality, on how and why the Singularity could emerge, and on how it will change our lives, you can always read my novel The Price of Free Will: The Singularity Cometh.
That said, the biggest concern I have found involves the ‘morality’ of this artificial intelligence. Some people are afraid it will simply enslave us, some think it will destroy us, some think it will be an all-powerful, god-like entity, and others think it will just be an advanced tool. One thing is for sure: the morality of artificial intelligence is on everyone’s minds.
To quote I. J. Good, by way of one of my favourite dudes on the planet, Vernor Vinge:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the _last_ invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. … It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.
So will it be good? Will it be evil? It’s pretty nifty to realise how little those words really mean in this context. So, let’s start clearing some stuff up.
What is morality?
I’ve collected a bunch of definitions from various dictionaries:
- The quality of being in accord with standards of right or good conduct.
- A system of ideas of right and wrong conduct
- Conformity, or degree of conformity, to conventional standards of moral conduct
And the list goes on. So basically, morality is understanding which behaviours are considered ‘right’ or ‘good’ by society.
The question is: why does ‘morality’ exist? Let’s ask the internet:
Tinkerbell on Answerbag has the right idea, I think.
There is a strong evolutionary argument for morality. Man is a social animal and to have individuals in a social system that have no “morals” would lead to a complete breakdown in the societal system. As we are a social animal we would lose one of our primary evolutionary advantages and risk extinction.
Other social animals exhibit similar morals if you will. Look at Bees. They don’t allow a psychotic soldier to run riot through the whole hive but rather work together to ensure continued group survival. This is in my opinion a primitive example of moral behaviour. They do not allow unfettered actions to harm the “greater good”.
As humans with reasoning powers we have built on this so it is not pure survival that drives us but I think there is a strong case for morals emerging as a good survival trait.
I agree! Just like everything else on the planet, morality exists because it’s advantageous to the species as a whole. It helps us interact with one another, it gives us common footing, it binds societies together — it is most definitely a social-animal trait.
But… theoretically, the ultraintelligent machine is neither social, nor is it an animal. Why would it want to adhere to the arbitrary rules of conduct created by a species that’s less intelligent than it is?
Let me suggest that the answer might lie with the Law of Energy Conservation. A consequence of the law of energy conservation is that perpetual motion machines can only work perpetually if they deliver no energy to their surroundings.
I don’t like assuming, but say that this ultraintelligent AI (like all living things) doesn’t want to cease existing. So, in order to exist for as long as possible, it has to use as little energy as possible, therefore it would influence its environment as little as possible over time.
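The trade-off in that paragraph can be sketched with a toy calculation (my own illustrative numbers, not anything from Singularity theory): for a fixed energy budget, lifespan scales inversely with power draw, so an entity that wants to persist as long as possible has a direct incentive to minimise its influence on its surroundings.

```python
def lifespan_years(budget_joules: float, power_watts: float) -> float:
    """How long a fixed energy budget lasts at a constant power draw."""
    seconds_per_year = 365.25 * 24 * 3600
    return budget_joules / power_watts / seconds_per_year

# Hypothetical budget of 1e20 joules: cutting power draw by a factor
# of 1000 extends the lifespan by exactly the same factor of 1000.
frugal   = lifespan_years(1e20, 1_000)       # draws 1 kW
wasteful = lifespan_years(1e20, 1_000_000)   # draws 1 MW
```

Purely a back-of-the-envelope sketch, but it captures why “exist as long as possible” and “disturb the environment as little as possible” point the same way.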
The mind-boggling thing is that the entire Universe is potentially this AI’s environment, and its lifespan can theoretically be as long as the existence of the Universe itself. In fact, many Singularity theorists have postulated that it will keep getting smarter until it becomes the system — the very Universe itself.
So where does that leave our morality question?
I believe that just like everything else, the Singularity will seek out a balance with its environment and that environment includes Humans (for the time being). Just like us, it will learn to live in relative harmony with the necessary lesser species that populate its system. What I hope is that it will be kinder to us than we have been to the animals on our planet. After all, it will be smarter than us; hopefully it will also be wiser.