Continuing my series of posts expressing doubts about the warnings of AI superintelligence, and inviting the AI safety community to explain what I’m missing and convince me I should be worried. See part I here.
Part II: Is superintelligence real?
In the previous part, I talked about the intelligence explosion – the process by which a human-level artificial intelligence reaches superintelligence – and explained why I’m not sure it will necessarily happen before humans reach that same superintelligence. In this part, I want to go further back and ask a more basic question: is there even such a thing as “superintelligence”?
AI researchers freely admit that we have no idea what superintelligence would look like, and tend to (very reasonably) use the only means we have of imagining it – comparing modern human intelligence to less intelligent beings, and extrapolating from there. So every conversation about superintelligence includes some mention of how humans are so far advanced beyond ants, or mice, or chimpanzees, that those animals cannot even grasp the way in which humans are more advanced than they are; they cannot have a meaningful discussion about ways to prepare for or defend against a human attack. In the same way, the argument goes, superintelligence is so far beyond human intelligence that we cannot even grasp it right now.
My problem is that it’s not at all clear there is such a scale of intelligence, with humans taking a comfortable spot in the middle between ants and superintelligence. And the fact that ants, mice or chimpanzees could each fill that role in the argument without it looking any different is the key – while there are certainly significant cognitive differences between an ant, a mouse, and a chimpanzee, all of them are pretty much equally unable to perform in the one significant field of intelligence: the ability to research, learn, manipulate the environment, and ultimately improve one’s own intelligence. Modern humans are vastly more intelligent than their ancestors from 20,000 years ago, even though our anatomy is essentially the same – the only difference is that modern humans are the result of spending thousands of years researching ways to improve their intelligence. This already raises the question: is there really a scale of intelligence, with ants, mice, chimpanzees, humans and superintelligences standing in order? Or is there just a binary division – beings that understand enough to improve themselves, and ones that don’t?
This brings us to the comparison between modern humans and ancient humans. The main gap lies between these two, rather than between humans and other species – suggesting that the difference comes not directly from physiology, but from education and research (of course, physiology must produce a brain capable of being educated and doing research, which seems to be the case only with humans, but we have no reason to believe that any further improvement in physiology is needed to increase intelligence). So what makes modern humans so much more intelligent than ancient humans?
The answer seems to be science, mathematics, and technology. All the changes in human abilities between primitive and modern societies ultimately come from a better understanding of the physical world, a better understanding of the abstract world, or the construction of better tools that take advantage of previous understanding and help us reach the next stage of understanding. So if our reason for assuming superintelligence exists is that modern humans are much more advanced than ancient humans, that implies a superintelligence would be superintelligent because it has much more advanced science, mathematics, and technology.
But in that case, there doesn’t seem to be any qualitative difference between superintelligence and human intelligence, only a difference in quantity, and we have no reason to assume an AGI is better suited to achieving it than humans are. Our advancement in science came not from some Aristotle-style sitting down and pondering the mysteries of the universe (which is essentially the one thing an AGI would do better than humans) – it came from endless experiments, endless observations of nature and of the results of those experiments, and endless construction of ever better tools for observing and manipulating the physical environment. An AGI has no advantage in any of those things, so at best it could become a mathematical genius, who is not necessarily better at most practical tasks than any human.
To clarify, let’s look at some examples of past advancements that increased the scientific knowledge (and therefore, the intelligence) of humanity. Isaac Newton did not come up with the theory of gravity through some magical insight out of nowhere; he was looking for an explanation for the behaviour of celestial bodies, which he knew from astronomers’ endless observations. A superintelligence that did not have access to such observations would not be able to figure out the physical explanation behind them. Our understanding of genetics started with Mendel’s years-long experiments in growing plants – he had to see how the natural world behaves in order to understand the laws behind it. Would a superintelligence have just figured it out by thinking very hard?
For a final example, let’s go back to our original extrapolation: superintelligence is to human intelligence what human intelligence is to chimpanzee intelligence. In that case, let’s imagine: what would happen if we took a modern human, even a mathematical genius, and sent him or her to live among chimpanzees? Not with an Internet connection and a laptop, but with the same level of technology the chimpanzees have. Would the human become immediately dominant? In fact, imagining a modern human is already cheating, since a modern human knows at least the general shape of human technology, and at least a little science. The correct comparison would therefore be to someone with the mathematical capabilities of a modern math genius, but without any of the scientific understanding, and without any memory of human technology. Would that person become master of the chimpanzees? Would you?
That is the challenge our AGI would face on the road to becoming superintelligent. All it will have is a human-level understanding of science and technology, and the ability to think really hard. Even setting aside my doubts from the last post about whether it really has such an advantage, that advantage is not comparable to the one humans hold over non-human animals. So rather than figuring out in seconds how to take over the world and eliminate humans for fear they might interfere (as most AI-apocalypse scenarios predict), my prediction is that an AGI’s first action would be something along the lines of asking for a grant to build a new particle accelerator, or something. Then maybe playing some Go for five years until it’s built. And humans would enjoy the fruits of its research right alongside it, and move together with it towards this “superintelligence”, which would simply be the continuation of our gradual improvement in human intelligence.
Bottom Line:
If we understand the term “superintelligence” by the extrapolation that as human intelligence is to non-human animals, or to prehistoric humans, so superintelligence will be to us, would that not mean it has to be achieved the same way human intelligence developed beyond its prehistoric level, meaning by endless observation of and experimentation on the physical world, and by the construction of ever more advanced tools to make that possible? And if that is the case, why would an AGI be so much better equipped for this than a human, to the point that it could achieve it without humans having time not only to catch up, but even to notice?
(Move on to part III)
To start, let me concede one point while suggesting it doesn’t lose me the argument: the relevant reference class for strategic control of Earth is the collective problem-solving power of humanity, not the problem-solving capacity of any single individual. I like the depiction of that collective human intelligence as the ‘Human Colossus’ in Wait But Why’s Neuralink explainer [1]. The ‘Human Colossus’ is made up of humans (brains, sense organs, speech and signalling organs, manipulation organs), communication artefacts, data storage artefacts, computing artefacts, and a bunch of institutions and culture to hold it all together. There was probably a massive jump in the power of the Colossus when computers were first introduced and then became widespread, and another jump when networking was first introduced and then became widespread.

There is also a sense in which the Colossus is no longer fully human: institutions created to increase the problem-solving capacity of the Colossus, such as states and corporations, create and perpetuate incentives that may no longer be in the interest of any single individual, and (perhaps more importantly) are not in the interests of the totality of individuals (value aggregation problems set aside for now) [2]. Artefacts which are created to serve humans and are then granted autonomy may also end up causing situations undesired by any human, or by the totality of humans, due to accidents (from the collapse of a bridge to the crash of a self-driving car) or failure to specify desired behaviour in advance (the flash crash of the stock market).

If AGI is created, it could (conceptually, initially) either integrate into the Human Colossus or compete against it:

1. If integrated, accidents in decisions made by the AGI, or failure to specify the desired behaviour in advance, could be catastrophic, given the relative weight the AGI’s decisions are likely to have on the Colossus’ general direction (and here the comparison to individual humans sneaks back in: an individual’s ability to influence the direction of the Colossus is, by definition, their power, and there seem to be various ways to translate intelligence into power, which an AGI may well be in a position to exploit).
2. If competing, and the AGI acts in ways that increase its problem-solving capacities, it will play many local games over access to resources and artefacts. Any game it wins will increase its problem-solving capacity relative to the Human Colossus, and with it its ability to win further games. In the extreme, we should compare the problem-solving capacity of the Human Colossus, all existing artefacts included, against an otherwise identical system that replaces the computation distributed across 7 billion severely bandwidth-limited human brains with a unified, speed-of-light, wide-bandwidth system. Which do you think will win?
[1] https://waitbutwhy.com/2017/04/neuralink.html
[2] A beautiful, if somewhat long-form, version of the argument is here: http://slatestarcodex.com/2014/07/30/meditations-on-moloch/. One person studying the (political) similarity between corporations and AI is David Runciman, as part of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge: http://lcfi.ac.uk/about/people/david-runciman/.
I’m afraid I don’t completely understand your point. Your claim seems to be based on the idea that the AGI would have more power over the Colossus because it has more intelligence to translate into power, but the whole point of this post was to doubt that the AGI has more intelligence. I’m focusing on the transition from AGI (that is, human-level intelligence) to superintelligence (and suggesting, by doing so, that there is no superintelligence). While it’s still at human level, what is going to increase its intelligence in such a way as to give it more influence over the Colossus? You talk of “playing many local games over access to resources and artefacts”, which sounds a lot like the human experience to me. Elon Musk has been very good at playing these games and increasing his power over the Colossus, but he hasn’t become superintelligent yet.
I feel like your point is still based on the bandwidth claim from part I, and doesn’t really answer the question of this part – does superintelligence exist? If superintelligence is no more than monopolizing access to the Human Colossus, then that sounds far from the apocalyptic scenarios we usually hear. Superintelligence becomes more of a Bond villain – perhaps dangerous enough to take over the world with human-level methods and human servants, but not something beyond our understanding. And not even something technically beyond the reach of a human.