6 comments on “Doubts about AI risk – Part I”

  1. Pingback: Doubts about AI risk – Part II | Shai Shapira

  2. Pingback: Doubts about AI risk – Part III | Shai Shapira

  3. Quick comment: the second-to-last paragraph uses the term “malicious AI”. We don’t really think AI systems will be malicious (most likely they won’t be conscious at all), but we do think it will be possible to create dangerously unsafe systems: systems that follow their design specifications, yet through unexpected side effects cause tremendous harm, including, possibly, human extinction.

  4. Response: This is a good point. While I’m not sure I’d define AI in exactly the same way, the risk scenario that matters is not really a showdown between an agent-like silicon-based system and a human (or all humans). The closest this framing comes to scenarios of concern is if we have a system that is performing some unexpected behaviour (e.g. starting to turn schools into paperclip factories) and a human wants to shut it down. The AI risk argument says the paperclip maker will try to avoid being shut down because:
    1) being shut down will lead to fewer paperclips being made, and
    2) the paperclip maker is more intelligent than humans and will figure out in advance ways to prevent itself from being shut down.

    Your counter move is to give the humans advanced computing powers.

    Does this work?

    1. Elon Musk seems to think so, as long as your bandwidth to the advanced computing powers is wide enough, which is why he wants to develop a neural lace [1]. Without the increased bandwidth, the AI seems to be at a great advantage: it can pass megabits or gigabits per second between its policy-deciding subsystem(s) and the various computing systems it is connected to, while the humans are stuck typing or speaking to their systems at a few kilobits per second.
    2. But even this might not be enough, because decision-processing speeds in silicon seem to be much higher than in brain wetware, so a silicon-based decision-making system will be able to explore plans that require shifting actions or policies at speeds not accessible to humans. Depending on the scenario, this may matter a great deal.
    3. The main point of AI safety is to make sure we build systems such that we are never in an adversarial situation against them. If we do end up in one, and both sides are utilising advanced non-AI computing sources, those sources themselves become targets for the adversaries’ policies. In other words, if some futuristic agent-like AlphaGo is playing a future human Go master, and both have access to vast computing powers, and the most important thing in the world is to win the game, then the future AlphaGo will consider hacking the Go master’s computers as part of its plan for winning the game. A small difference in intelligence can be used to gain a resource advantage, which leads to greater decision-making ability and a bigger resource advantage, and so on (a toy sketch of this compounding loop appears after the footnote below).

    * Usual caveats: we don’t know enough about how the brain works, we don’t know how AI technologies will develop, and we definitely don’t know how future technologies will be used.

    [1] https://waitbutwhy.com/2017/04/neuralink.html
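    Point 3 above describes a feedback loop: a small capability edge buys resources, and resources buy further capability. The toy model below is only a sketch of that claimed dynamic, not an empirical result; the starting gap, the baseline growth rate, and the edge_to_growth coupling are all invented illustrative parameters. It just shows that if the coupling is positive, the gap compounds round over round.

        # Toy sketch of the "compounding advantage" loop described in point 3 above.
        # Every number here is an arbitrary illustrative assumption, not an empirical
        # claim: both sides improve at some baseline rate, and the AI additionally
        # converts whatever capability edge it has into extra growth next round.

        def simulate(rounds=10, baseline_growth=0.02, edge_to_growth=0.5,
                     human_start=1.00, ai_start=1.05):
            """Return (human, ai) capability levels after each round."""
            human, ai = human_start, ai_start
            trajectory = []
            for _ in range(rounds):
                edge = ai - human                  # current capability gap
                human *= 1 + baseline_growth       # humans improve at the baseline rate
                ai *= 1 + baseline_growth + edge_to_growth * edge  # the gap feeds back into AI growth
                trajectory.append((human, ai))
            return trajectory

        if __name__ == "__main__":
            for i, (h, a) in enumerate(simulate(), start=1):
                print(f"round {i:2d}: human = {h:.3f}   ai = {a:.3f}")

    The reply below essentially disputes that the coupling term is positive at all once humans also wield advanced computing power, in which case the two trajectories grow together rather than diverging.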

    • Well, first of all I’d say this “showdown” is mostly a dramatic device I used for illustration. More fundamentally, the point is not that the humans would have advanced computing powers during a showdown against the super AI, but rather that this is a reason why a super AI will not exist to begin with, and therefore why no showdown will happen: we have no reason to expect an AGI to advance faster than a human. The AI argument rests on the concept of the intelligence explosion, the idea that a human-level intelligence armed with advanced computing powers will advance at an unprecedented speed. I argue that while this is true, it is already happening right now, and we are that human-level intelligence with advanced computing powers; the intelligence explosion is exactly what has been happening ever since the information revolution started and computers were created.

      1-2. Has anyone actually studied these assumptions scientifically? Because this “seems to be at a great advantage” does not necessarily seem so to me. Good decision-making depends on many things, and it’s not at all obvious that basic-level processing speed is such a big factor in it. Data availability and general processing speed seem much more significant to me. As I said in part II, we did not discover the physics required to change the world by sitting and thinking. We had to gather data, and the speed of gathering data did not depend on how fast our internal processor was.

      3. First of all, I think you’re diving into “Hollywood hacking” here. Do you really think it’s impossible to guard your (presumably offline) supercomputer from being hacked by someone with “a small difference in intelligence”, if your life depended on it? But anyway, as I said in my clarification, my argument is less about what happens during a showdown and more about whether there is actually a reason to expect an AGI to advance faster than humanity. If the AI starts working towards disabling human computing powers while it’s still only slightly more intelligent than us, it’s not likely to succeed; if it doesn’t, then we have no reason to assume it will advance much faster than us, since we are both human-level intelligences with advanced computing powers.
