4 comments on “Doubts about AI risk – Part II”

  1. Pingback: Doubts about AI risk – Part III | Shai Shapira

  2. Pingback: Doubts about AI risk – Part I | Shai Shapira

  3. To start, let me concede one point while suggesting it doesn’t lose the argument: the relevant reference class for strategic control of Earth is the collective problem-solving power of humanity, not the problem-solving capacity of any single individual. I like the depiction of that collective human intelligence as the ‘Human Colossus’ in Wait But Why’s Neuralink explainer [1]. The Human Colossus is made up of humans (brains, sense organs, speech and signalling organs, manipulation organs), communication artefacts, data-storage artefacts, computing artefacts, and a bunch of institutions and culture holding it all together. There was probably a massive jump in the power of the Colossus when computers were first introduced and then became widespread, and another jump when networking was first introduced and then became widespread.

     There is also a sense in which the Colossus is no longer fully human: institutions created to increase its problem-solving capacity, such as states and corporations, create and perpetuate incentives that may no longer be in the interest of any single individual and (perhaps more importantly) are not in the interests of the totality of individuals (value-aggregation problems set aside for now) [2]. Artefacts created to serve humans and then granted autonomy may also end up causing situations undesired by any human, or by the totality of humans, through accidents (the collapse of a bridge, the crash of a self-driving car) or failure to specify desired behaviour in advance (a stock-market flash crash).

     If AGI is created, it could (conceptually, initially) either integrate into the Human Colossus or compete against it:

     1. If integrated, accidents in decisions made by the AGI, or failure to specify the desired behaviour in advance, could be catastrophic, given the relative weight the AGI’s decisions are likely to have on the Colossus’ general direction. (And here the comparison to individual humans sneaks back: an individual’s ability to influence the direction of the Colossus is, by definition, their power, and there seem to be various ways to translate intelligence into power, which an AGI may well be in a position to undertake.)

     2. If competing, and the AGI acts in ways that increase its problem-solving capacities, it will play many local games over access to resources and artefacts. Any game it wins will increase its problem-solving capacity relative to the Human Colossus, and its ability to win further games. In the extreme, we should compare the problem-solving capacity of the Human Colossus, all existing artefacts included, against an otherwise identical system that replaces computation distributed across 7 billion severely bandwidth-limited human brains with a unified, wide-bandwidth, speed-of-light system. Which do you think will win?

    [1] https://waitbutwhy.com/2017/04/neuralink.html
    [2] A beautiful, if somewhat long-form, version of this argument is here: http://slatestarcodex.com/2014/07/30/meditations-on-moloch/. One person studying the (political) similarity between corporations and AI is David Runciman, as part of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge: http://lcfi.ac.uk/about/people/david-runciman/.

    • I’m afraid I don’t completely understand your point. Your claim seems to be based on the idea that the AGI would have more power over the Colossus because it has more intelligence to translate into power, but the whole point of this post was to doubt that the AGI has more intelligence. I’m focusing on the transition from AGI (that is, human-level intelligence) to superintelligence (and, by that, suggesting that there is no superintelligence). While it’s still at human level, what is going to increase its intelligence in such a way as to give it more influence over the Colossus? You talk of “playing many local games over access to resources and artefacts”, which sounds a lot like the human experience to me. Elon Musk has been very good at playing these games and increasing his power over the Colossus, but he hasn’t become superintelligent yet.

      I feel like your point is still based on the bandwidth claim from Part I, and does not really answer the question this part raises: does superintelligence exist? If it amounts to no more than monopolizing access to the Human Colossus, then that sounds far from the apocalyptic scenarios we usually hear. Superintelligence becomes more of a Bond villain: perhaps dangerous enough to take over the world with human-level methods and human servants, but not something beyond our understanding, and not even something technically beyond the reach of a human.
