This is the last part in my series of posts expressing doubts about the warnings of AI superintelligence, and inviting the AI safety community to explain what I’m missing and convince me that I should be worried. See part I here, and part II here.
Part III: Preparing for superintelligence
In the previous two posts, I expressed my doubts about the risk of artificial general intelligence (AGI) turning into superintelligence in a fast or unexpected way that might give it an extreme advantage over human intelligence. Those posts were quite theoretical; in this part I want to turn to the question of what is likely to happen in practice, and what we can do to benefit from artificial intelligence and, even if I was wrong in the previous two parts, to prepare for the appearance of superintelligence.
From what I can see, essentially all existing research that treats superintelligence as a potential risk is research on AI safety – that is, research on how to create AI systems in a way that is unlikely to produce catastrophe. Maybe it’s my interest in politics and world affairs that makes me a bit more cynical than the average mathematician, but I find it very difficult to imagine that if real intelligence superpower were at stake, people, corporations and governments could really be convinced to limit themselves with some algorithms to prevent bad behaviour from their AI. Moreover, this approach suffers from “Superman’s problem”: when countless villains try again and again to destroy the world, Superman needs to succeed in stopping them every single time, while the villains only need to succeed once, and we’re doomed. The same goes for AI safety – we can build super-strong regulations and make everyone use strict safety mechanisms in designing their AI, but all it takes is one programmer saying “Something is not working. I wonder what will happen if I disable this function call here…”, and we’re doomed.
Could there be a more robust way to handle it? I’d suggest that the very notion of superpowered AI that I argued against in my previous posts is itself the key to preparing for superintelligence, in case I am wrong. Throughout the AI risk discussion, people constantly assign various superpowers to the superintelligent AI: it would be able to strategize perfectly, it would be able to gain access to unlimited resources, it would be able to convince humans of anything through social manipulation. One superpower seems to be neglected, even though it is much less fantastic and therefore more plausible than the others – a superintelligent AI would surely be intelligent enough to teach us how to be superintelligent.
People worry so much that algorithms are doing intelligent things in ways we do not understand. But are we really trying to understand? Surely there is a lot of complexity in the functioning of a neural network. But is it more than the complexity of the human body? I doubt that. And yet we are able, little by little, to figure out more and more of the functions of the human body – describing the different cells it is made of, the different processes they are involved in, the different organs and mechanisms. We do all this by experimentation and guessing; how much easier would it be with a neural network, where we have access not only to its source code, but to endless sandbox environments in which we can experiment on it and analyze it? And of course – if we really reach AGI, then access to an intelligent being who can study it and explain it to us. Instead of staying static while the AIs become more and more intelligent, why not study them and become more intelligent ourselves? Maybe it will be difficult to constantly chase after the AIs and try to keep up with their improvements (though I’m not at all convinced it will be). But it will be robust.
It will be robust because instead of relying on Superman, we rely on ourselves. We move from defense to offense. If we make one AI algorithm safe, we still need to go back to the start with the next AI. But if we learn how one algorithm works, we are better equipped to face not only that specific AI, but any other AI that comes in the future. And even if we never face an AI risk at all, it has the added benefit of improving our own intelligence.
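To make this slightly less abstract, here is a toy illustration of my own – a minimal sketch, not a description of how interpretability research is actually done – of the kind of probing experiment that becomes cheap once you have a system’s “source code” and a sandbox to run it in: take a small neural network and simply measure which inputs each hidden unit responds to.

```python
# A toy "AI understanding" experiment (illustrative only): given full access
# to a network's weights, probe which inputs each hidden unit reacts to.
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network with random weights stands in for the system under study.
W1 = rng.normal(size=(2, 4))   # 2 input features -> 4 hidden units
W2 = rng.normal(size=(4, 1))   # 4 hidden units  -> 1 output

def forward(x):
    hidden = np.tanh(x @ W1)
    return hidden, hidden @ W2

# "Sandbox" experiment: sweep a grid of inputs and record which region of
# input space drives each hidden unit hardest.
grid = np.array([[a, b] for a in np.linspace(-2, 2, 41)
                        for b in np.linspace(-2, 2, 41)])
hidden_activations, _ = forward(grid)

for unit in range(hidden_activations.shape[1]):
    best_input = grid[np.argmax(hidden_activations[:, unit])]
    print(f"hidden unit {unit} is most active near input {best_input}")
```

Nothing here is deep, of course; the point is only that with the weights in hand and unlimited copies to poke at, understanding becomes an experimental science rather than guesswork.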
Bottom Line:
Would it not be a more robust strategy for preparing for a possible AI risk if, instead of (or in addition to) researching AI safety, we focused on researching AI understanding? That is, researching ways to analyze and understand the inner workings of our AI creations, so that we can adopt for ourselves whichever methods they create to make themselves more intelligent – thus freeing us from the worry that no matter how many AI algorithms we make safe, there can always be one we miss that creates the catastrophe.
http://www.srugim.co.il/207634-%D7%91%D7%94%D7%9C%D7%94-%D7%91%D7%A4%D7%99%D7%99%D7%A1%D7%91%D7%95%D7%A7-%D7%94%D7%A4%D7%99%D7%AA%D7%95%D7%97-%D7%94%D7%97%D7%9C-%D7%9C%D7%A0%D7%94%D7%9C-%D7%97%D7%99%D7%99%D7%9D-%D7%9E%D7%A9%D7%9C
My interpretation of that article: some AI researchers were working on some pointless stuff. After a thousand versions that didn’t work, the next version also did not work, but this time someone from the marketing department was there. The researchers tried to explain to him what went wrong, and he said: “Can you make it sound more stupid, so we can send it to some newspapers and get some free publicity?” Indeed, it worked.