10 comments on “Doubts about AI risk – Part I”

  1. Pingback: Doubts about AI risk – Part II | Shai Shapira

  2. Pingback: Doubts about AI risk – Part III | Shai Shapira

  3. Quick comment: the second-to-last paragraph uses the term “malicious AI”. We don’t really think AI systems will be malicious (most likely they won’t be conscious at all), but we do think it will be possible to create dangerously unsafe systems: systems that follow design specifications, yet through unexpected side effects cause tremendous harm, including, possibly, human extinction.

  4. Response: This is a good point. While I’m not sure I’d define AI in exactly the same way, I agree that a showdown between an agent-like silicon-based system and a human (or all humans) is not really the relevant risk scenario. The closest this comes to the scenarios of concern is a system performing some unexpected behaviour (e.g. starting to turn schools into paperclip factories) while a human wants to shut it down. The AI risk argument says the paperclip maker will avoid being shut down because:
    1) being shut down will lead to fewer paperclips being made and
    2) the paperclip maker is more intelligent than humans and will figure out in advance ways to prevent it from being shut down.

    Your counter move is to give the humans advanced computing powers.

    Does this work?

    1. Elon Musk seems to think so, as long as your bandwidth to the advanced computing powers is wide enough, which is why he wants to develop a neural lace [1]. Without the increased bandwidth the AI seems to be at a great advantage: it can pass megabits or gigabits per second between its policy-deciding subsystem(s) and the various computing systems it is connected to, while the humans are stuck typing or speaking to their systems at a few kilobits per second.
    2. But even this might not be enough, because the decision processing speeds in silicon seem to be much higher than in brain wetware, so a silicon-based decision-making system will be able to explore plans that require shifting actions or policies at speeds not accessible by humans. Depending on the scenario, this may matter a great deal.
    3. The main point of AI safety is to make sure we build systems such that we are never in an adversarial situation against them. If we do end up in one, and both sides are utilising advanced non-AI computing sources, these sources themselves become targets for the adversaries’ policies. In other words, if some futuristic agent-like AlphaGo is playing a future human Go master, and both have access to vast computing powers, and the most important thing in the world is to win the game, then the future AlphaGo will consider hacking the Go master’s computers as part of its plan for winning the game. A small difference in intelligence can be used to gain a resource advantage, which leads to greater decision-making ability and a bigger resource advantage, and so on.

    * Usual caveats: we don’t know enough about how the brain works, we don’t know how AI technologies will develop, and we definitely don’t know how future technologies will be used.

    [1] https://waitbutwhy.com/2017/04/neuralink.html
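    The bandwidth gap described in point 1 can be made concrete with a back-of-envelope calculation. A minimal sketch follows; the rates are illustrative assumptions picked to match the orders of magnitude mentioned above, not measurements:

    ```python
    # Back-of-envelope comparison of the I/O bandwidth gap described above.
    # Both figures are illustrative assumptions, not measurements.

    human_io_bps = 5_000            # assumed: "a few kilobits per second" via typing/speech
    machine_io_bps = 1_000_000_000  # assumed: a gigabit-per-second internal link

    ratio = machine_io_bps / human_io_bps
    print(f"Internal-bandwidth advantage of the AI: ~{ratio:,.0f}x")
    ```

    Under these assumed rates the AI moves information between its decision-making subsystem and its computing resources roughly five orders of magnitude faster than an unaugmented human, which is the gap a neural lace is meant to close.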

    • Well, first of all I’d say, this “showdown” is mostly a dramatic device I used for illustration. More fundamentally, I think the issue is not whether humans have advanced computing powers during a showdown against the super AI, but rather that those powers are a reason why a super AI will not exist to begin with, so no showdown will happen: we have no reason to expect an AGI to advance faster than a human. The AI argument rests on the concept of the intelligence explosion, the idea that a human-level intelligence armed with advanced computing powers will advance at an unprecedented speed. I argue that while this is true, it is already happening right now, and we are that human-level intelligence with advanced computing powers. The intelligence explosion is exactly what has been happening ever since the information revolution started and computers were created.

      1-2. Has anyone actually studied these assumptions scientifically? What “seems to be at a great advantage” to you does not necessarily seem so to me. Good decision making depends on many things, and it’s not at all obvious that basic-level processing speed is such a big factor in it. Data availability and general processing speed seem much more significant to me. As I said in part II, we did not discover the physics required to change the world by sitting and thinking. We had to gather data, and the speed of gathering data did not depend on how fast our internal processor was.

      3. First of all, I think you’re diving into “Hollywood hacking” here. Do you really think it’s impossible to guard your (presumably offline) supercomputer from being hacked by someone with “a small difference in intelligence”, if your life depended on it? But anyway, as I said in my clarification, my argument is less about what happens during a showdown and more about whether there is actually a reason to expect an AGI to advance faster than humanity. If the AI starts working towards disabling human computing powers while it’s still only slightly more intelligent than us, it’s not likely to succeed; if it doesn’t, then we have no reason to assume it will advance much faster than us, since we are both human-level intelligences with advanced computing powers.

  5. Look at the recent results from OpenAI on playing DOTA2 [1], especially the amount of progress from Aug 7th to Aug 11th. Would a human augmented with a non-learning automation assistant be able to increase in performance at this rate? We should both look up the relevant data on human learning rates, but I’m sceptical, unless a significant amount of the DOTA playing is done by the automated system, in which case loss of control becomes a problem again.

    [1] https://blog.openai.com/more-on-dota-2/

    • Would a human augmented with a non-learning automation assistant be able to increase in performance at this rate? Absolutely. You talk about human learning rates, but I don’t think that’s relevant – I have no doubt that we’re already very close to reaching the full potential of the human part of the “human + non-machine-learning algorithm” team, so the main addition will come from the algorithm. In that case, the progress graphs would not even be continuous – in all likelihood, they would include some work done behind the scenes, and then immediately jump to an extremely high level. How high? I have no idea, because I’ve never tried to design a DOTA-playing bot. But what I’d like to know is how much time they spent behind the scenes designing this bot, and what I would be able to produce if I spent the same amount of time and resources designing a non-machine-learning bot.

      Again I say – this AI is playing against humans with a huge handicap. Unless you have data on the performance of non-machine-learning bots, and certainly ones that were created with as much investment as the OpenAI bot, we have no basis for discussion.

      Just to clarify – your last comment seems to imply that if my system is indeed a bot, it will have a loss of control problem, but the whole point of our discussion until now was that a non-machine-learning algorithm cannot have a loss of control problem. So did I misunderstand you?

    • Still, he’s talking about the speed at which you’d see one digit or type one digit into a calculator, compared to the speed at which an AGI would do the same. Indeed, the AGI is faster. But how quickly would the human input a billion digits into a calculator? How quickly would the human read a billion digits? It definitely won’t be “speed of reading/writing one digit times a billion”. If we worry about our speed of doing nontrivial things, we cannot just extrapolate from our speed of doing trivial things. The calculation is not important by itself; it’s important as part of some bigger process we do. And a human would do that process on a computer properly built for it, just like the AGI would, and each calculation in it would take the same time. The question is just how long it would take to make the decision to start that process, and I would not expect that to be extremely long.
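      The scaling point above can be illustrated with a rough calculation. This is a sketch under assumed rates (the hand-entry speed and decision time are hypothetical figures for illustration only):

      ```python
      # Illustrative: entering a billion digits by hand does not happen at
      # "time per digit times a billion" - the human delegates the bulk work
      # to a machine, just as an AGI would. All rates below are assumed.

      digits = 1_000_000_000
      manual_digits_per_second = 2                 # assumed hand-entry rate
      naive_seconds = digits / manual_digits_per_second
      naive_years = naive_seconds / (3600 * 24 * 365)
      print(f"Naive hand-entry estimate: ~{naive_years:.1f} years")

      # Delegated: the human-side cost is roughly constant - decide, then
      # let a computer stream the data at machine speed.
      decision_seconds = 10                        # assumed time to launch the process
      print(f"Delegated human-side cost: ~{decision_seconds} seconds")
      ```

      The naive extrapolation gives roughly sixteen years, while the delegated path costs the human only the fixed time needed to start the process, which is the comparison the comment is drawing.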
