The dangers of artificial intelligence research have become an increasingly popular topic in scientific and philosophical circles, and it seems like everyone I know who has studied the issue in depth is convinced it's something major to worry about. Personally, I have doubts – both about the likelihood of this being a genuine potential catastrophe, and about the idea of AI safety research being the reasonable response to it. So I decided to detail my doubts here, hoping that people in the AI safety community (I know you're reading this) will respond and join in a debate to convince me it's worth worrying about.
In the first part I'll discuss whether the intelligence explosion can really happen. In the second part I'll ask a more basic question: whether superintelligence even exists as a coherent concept. The third part will ask, assuming I'm wrong in the first two parts and AI really is going to advance dramatically in the future, what can be done about it other than AI safety research. I'm going to include a lot of explanation to keep the post accessible to non-AI-researchers, so if you're an AI researcher in a hurry, feel free to skim and focus on the (literal) bottom line of each of the three parts.
Part I: The superintelligence explosion
The main concept on which the AI warnings are built is the intelligence explosion – the idea that at some point our AI research will reach the level of human intelligence (researchers like to call that AGI, Artificial General Intelligence), and from that point it will be able to improve itself and therefore reach, in a very short time, levels of intelligence vastly superior to ours. Considering the number of debates all over the Internet on whether AI can be evil, harmful, or just naively destructive, I see remarkably little debate on whether superintelligence is possible at all. And in fact, there are two questions to ask here: whether superintelligence can be reached by an AGI significantly more quickly than by humans, and, even more basically, whether we can really be sure that "superintelligence" actually exists, in the way AI safety researchers present it. Let me elaborate on these issues.
The main argument for an AGI being able to reach superintelligence at a worrying speed, as far as I can find, is the physical advantage in calculation and thinking that electronic machines enjoy over biological brains; see Nick Bostrom's description of it here, for example. According to him, the superior speed and efficiency of computation in an electronic machine will vastly surpass those of a human brain; therefore, once an AGI is created, it will be able to do what humans do, including researching AI, significantly faster and better. It will then find ways to improve itself further and further, until it becomes so vastly superior to humans that we will be completely irrelevant to its world.
The problem I see with this argument, which I have not seen addressed anywhere else, is that it puts humans on a needlessly disadvantaged playing field. Yes, it's certainly possible that supercomputers in the near future will have more computing power than human brains, but that's no different from gorillas having greater muscle power than humans, which does not stop humans from dominating gorillas; that is because humans do not depend on their biological assets alone. Humans use tools, whether it's a rifle to defend against an attacking animal or a computer to outthink an attacking intelligence. Whatever hardware the AGI has access to, we probably have access to more.
Think about the classic examples of AI defeating humans in various games. A common prelude to talking about the dangers of AI is how intelligent computers are now defeating humans in chess, checkers, Go, and so on. But humans are playing these games with a deliberate handicap – they are only allowed to use their brains. The AI can use computers to help it.
For the sake of any non-computer-scientist readers, I want to pause for a little clarification – there is a significant difference between non-AI algorithms and AI. The definition is not completely universal, and different people understand the word AI in different ways, so let me define it for the purpose of this post:
Definition: An AI algorithm is an algorithm that its creator does not understand well enough to modify in a way that produces predictable results.
Think, for example, about machine translation: an algorithm that takes a text in one language and looks up every word in a dictionary to replace it with a word in the target language would be a non-AI translator. Of course, it would also not be very good, but we can develop it further and build complex linguistic rules into it: we can design algorithms to determine which words are nouns and which are verbs, and translate conjugations and declensions in a way better suited to the target language; we can maintain a database of idioms the algorithm can search through to recognize them in the source text; and so on. With all these additions and complexities, it's still not AI by my definition, because at every stage the algorithm does what the programmer told it to do, and the programmer understands it perfectly well. The programmer could do the same things by hand – it would just take an absurd amount of time.
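To make the distinction concrete, here is a minimal sketch of such a non-AI, word-by-word translator. The tiny English-to-French dictionary and the example sentence are made up purely for illustration:

```python
# A made-up, toy dictionary; a real system would add the grammatical
# rules and idiom database described above.
DICTIONARY = {"the": "le", "cat": "chat", "eats": "mange", "fish": "poisson"}

def translate(sentence):
    # Replace each word by its dictionary entry, leaving unknown words as-is.
    # Every step is explicitly specified, so the programmer understands it fully.
    return " ".join(DICTIONARY.get(word, word) for word in sentence.split())

print(translate("the cat eats the fish"))  # -> "le chat mange le poisson"
```

Every behavior of this program, including its mistakes, can be traced directly to a line the programmer wrote – which is exactly what makes it non-AI under the definition above.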
On the other hand, an algorithm that reads large numbers of texts in the source language along with their (human-made) translations to the target language, and tries to figure out the rules of translation by itself through some sort of machine learning process, would be actual AI. The programmer does not really understand how the algorithm translates a text; all they know is how it's built and how it learns. They would not be able to change anything in a reliable, predictable way – if they find that for some reason the translation has a problem with some particular grammatical structure, they cannot easily fix it, because they have no idea where and how the algorithm represents that structure. So that algorithm would be true AI.
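As a toy illustration of the learning version, here is a sketch that "learns" a word-for-word dictionary from translation pairs by simple co-occurrence counting. The corpus is made up, and real machine translation is enormously more complex – the point is only that the resulting dictionary is produced by the data, not written by the programmer:

```python
from collections import Counter, defaultdict

# A made-up parallel corpus of (source sentence, human translation) pairs.
corpus = [
    ("the cat", "le chat"),
    ("the fish", "le poisson"),
    ("cat eats fish", "chat mange poisson"),
    ("eats fish", "mange poisson"),
]

# Count how often each source word appears alongside each target word.
cooccurrence = defaultdict(Counter)
for source, target in corpus:
    for s in source.split():
        for t in target.split():
            cooccurrence[s][t] += 1

# The "learned" dictionary: map each source word to its most frequent companion.
learned = {s: counts.most_common(1)[0][0] for s, counts in cooccurrence.items()}
print(learned["cat"], learned["fish"])  # mappings that emerged from the data
```

The programmer wrote only the counting procedure; the mapping itself comes from the data. If one entry comes out wrong, there is no line of code to fix – you have to change the data or the learning rule and hope for the best, which is the opacity the definition above is pointing at.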
I argue that this definition is useful, because algorithms that don't count as AI under it are not only unable to turn into superintelligent dangers by themselves – they are also "on our side": they are tools we use in our own thinking. Deep Blue, the famous computer that made history by defeating the world chess champion, was a non-AI algorithm. It worked by using its large computational resources to try millions upon millions of possible continuations, checking which ones were beneficial according to rules explicitly defined by its programmers. The programmers understand how it works; they can't defeat it with their own brains, but only because their biological brains can't calculate so many things so quickly. So if we think about the level of AI versus humans in chess right now, it would be unfair to ask whether the best AI player can defeat the best human player – we should ask whether the best AI player can defeat the best human player aided by a supercomputer running a non-AI algorithm of their own design. Because if the AI apocalyptic scenario happens, and a malicious AI tries to destroy humans for whatever reason, we are going to have supercomputers on our side, and we are definitely going to use them. So if you let Garry Kasparov join forces with Deep Blue – or, more interestingly, with software Kasparov himself would design as the perfect assistant to a chess player – would he still be defeated by the best AI player? I'm not sure at all.
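For readers who want to see what "rules explicitly defined by its programmers" look like in code, here is a minimal minimax search of the Deep Blue variety, applied to a made-up toy game rather than chess. The game and the evaluation function are hypothetical, purely for illustration:

```python
def minimax(position, depth, maximizing, moves, evaluate):
    """Exhaustively search the game tree, scoring leaf positions with a
    hand-written evaluation function. Every rule is explicit, so the
    programmer understands exactly what the program is doing."""
    children = moves(position)
    if depth == 0 or not children:
        return evaluate(position)
    scores = [minimax(child, depth - 1, not maximizing, moves, evaluate)
              for child in children]
    return max(scores) if maximizing else min(scores)

# Toy game: a pile of n stones, each move removes 1 or 2 stones.
moves = lambda n: [n - d for d in (1, 2) if n - d >= 0]
evaluate = lambda n: n  # hypothetical evaluation: a larger pile scores higher
print(minimax(4, 3, True, moves, evaluate))
```

Deep Blue's actual search was vastly more sophisticated (alpha-beta pruning, specialized hardware, a chess-specific evaluation function), but it was structurally of this kind: brute-force search over rules a human wrote down and fully understood.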
The difference between humans and an AGI that makes us worry the AGI will advance toward superintelligence significantly more quickly than humans is said to be the AGI's superior hardware. But humans have access to the same hardware; we can calculate and think at the exact same speed, and the only difference is that one small (though important) part of that calculation is done in a slower, biological computer. So how is that a big enough difference to justify the worry about superintelligence?
(Move on to part II and part III)
I offer this as a thought experiment, but I did hear Kasparov say he's interested in the idea of human-computer teams playing chess together; I don't know exactly what he meant by that, and could not find any information online.