Shai Shapira

The art of making computers do stuff

Voice interfaces and the future of literacy

Posted by שי שפירא on March 16, 2018
Posted in: Artificial Intelligence, Cognition, Software, Technology. Tagged: Artificial Intelligence, Star Wars, User Interface, Voice First, Voice Interface.

I recently found (through High Scalability) a very interesting interview with Brian Roemmele, an experienced engineer and advocate of “Voice First” – the idea that voice is going to be the primary means for people to communicate with technology in the future. The interview is fascinating and somewhat strange; most of all, it brings to mind an old article by Ryan Britt about how Star Wars seems to describe a post-literate society – that is, a society that relies on voice and hologram technologies to such an extent that almost no character is ever seen reading anything. Britt raises interesting ideas about how non-literacy (or post-literacy, in that case; i.e. a society where literacy is known and has existed, but has been willingly abandoned by at least a majority of the population) might explain some strange elements of the movies’ plot. Is it possible that Roemmele’s ideas are taking us in the same, post-literate direction?

(You can listen to the interview here, or read the main points in the High Scalability article. Britt’s article seems to have been taken off the web, maybe to encourage people to buy his book instead; there is a follow-up article here. Also, while preparing to write this post, I found what appears to be another article suggesting the same idea as early as 1998, by David Lance Goines.)

First, a few words about how I see Roemmele’s idea of Voice First. It seems extremely foreign to me, and I am obviously not in the target audience of what he describes; however, I’m open to the possibility that I am the minority, and what he says might be more relevant for most people. Is it true that “Anyone trying to type has to first put it in a voice in their head before typing”? I certainly type faster than I speak. I certainly don’t spend ninety percent of my time sifting and sorting Google results (and not just because I use DuckDuckGo instead), and when I hear that the first time he touched an iPhone, “little hairs went up on my back” – I can only conclude that he and I are very different people.
From my perspective, his advocacy for voice seems to hinge on three basic principles: the efficiency of text-based lookup compared to menu-based lookup (it’s quicker to say “text Brian” than to find a texting app and choose who to text), the efficiency of voice as an input device compared to a smartphone virtual keyboard, and the promise of AI. For me, the first one is basically the command-line concept we’ve had for decades – he even mentions the command line as an example of an obsolete system, but really, what he’s suggesting is a voice-based, AI-powered command line. Which is fine, since someone like me does still use the command line regularly – that’s why an important part of being a Windows user is learning to use the Win+R shortcut. I never use the Windows start menu – any program that’s more than one click away, I open by typing its name into Win+R. As for the second point – I’ve disliked smartphone virtual keyboards from the first day I saw them. We certainly need a better input system. Personally I’m skeptical about voice being that system, but whatever. And as for the third – I’m becoming more of an AI skeptic every day, and that’s too big a topic for this post. If you want to bet on the AI bubble being the future, good luck.

But let’s return to our topic. He clearly represents more people than I do. My way of using a computer is tightly connected to my being a programmer and a gamer; I can see that people who are neither of those things do tend to have a taste more similar to his. So is it true that voice is the future, and will it bring the post-literate, Star Wars-like society? This point is not mentioned in the interview, but I think Roemmele’s vision leaves very little need for literacy. He says we’ll still be looking at screens occasionally, but it will be rare. So I think we should really stop and think about why people started reading and writing, why they still do it now, and why should they do it in the future.

The main, if not only, incentive to read is to have access to more information. In the pre-voice-interface world, our only way of getting human-made information without that human being physically next to us and talking was to read. We would read books, newspapers, and websites, and thus get information. Books and newspapers have been gradually shifting to digital screens in the past few years, meaning that replacing screens with voice interfaces can make almost all of our reading optional. At that point, how strong will the incentive be to learn to read? We can only guess. You might think it’s exaggerated to imagine a return to illiteracy, but it’s important to realize how much of a guess that is – this voice-first future will truly be a new situation.

Because remember, we cannot think about this using ourselves as an example – we might look at ourselves and feel like we (for the sake of argument) use voice interfaces and audiobooks, yet still want to read occasionally. But we’ve already learned to read, and we did that when we had a strong incentive to do so. What happens with the first generation that already has audiobooks before they can read? They will not have as much of an incentive to learn to read. They will not need reading to get the information. Will reading still be useful to them in the long term? I absolutely think so. But will that be enough to convince them to go through the hard work required to learn it?

One very symbolic moment in the interview is when Roemmele asks “Who uses mice anymore?” I think the mouse, in many ways, is a small example of the same process at work here. The mouse is significantly more efficient than the touchscreen, but the touchscreen has one advantage – it’s intuitive. The mouse, when used for the first time, is not efficient; in the hands of an experienced user, it becomes significantly more efficient, while the touchscreen stays mediocre no matter how much you use it. Sound familiar? This is exactly how writing works. It is not intuitive, and requires a lot of practice to master; but once mastered, it provides huge benefits over voice (which is more intuitive). If people really don’t use the mouse anymore (and by the way, don’t they? I haven’t really seen people like that, but I’m sure he knows the market better than I do. It might also be related to the fact that I am not located in the USA, which seems to be the early adopter for most of these tech trends) because they prefer the short-term benefit of an intuitive interface, then can we really expect them to spend difficult hours learning to read and write, when they can just listen to audiobooks?

So bottom line – if Roemmele’s thesis is correct, I would think it is a very real possibility that our current (or very near future) rate of global literacy is going to be a historical peak; it will only go down from there. Not that literacy will disappear from the world completely, but it will no longer be the near-universal skill it is today. I’m not a fan of “those horrible younger generations” kinds of pessimism – as far as I’m concerned, a transition to post-literacy will be fascinating. I think it will be a bad decision for those who do it, but I have no reason to complain. If that really happens, I’ll fully enjoy my ability to demonstrate reading and writing as a party trick to my future grandchildren’s friends. I doubt if they’ll be too impressed, but who knows.

And some advice for you readers – if the world is going in a post-literate direction, I strongly recommend you go against the trend. Not because literacy is some sort of magical wonder world as some people like to describe it, but because it’s simply a useful skill, even in a world of audiobooks. I admit I haven’t given audiobooks much of a chance, because I can’t get past the strange experience of reading a book at someone else’s pace. Even as a computer interface, it seems absurd to me to return to the command line – Roemmele’s vision seems to imply that we abandoned the command line because we don’t want to read and write too much, but I have a very different way of seeing things – we (mostly) abandoned the command line because it’s a one-dimensional medium. You can only read or write one thing at a time, even more so with voice than with text. On a screen we can get much more information at the same time, and be more efficient.

And finally, I’ve mentioned David Krakauer’s concept of competitive and complementary cognitive artifacts before, and I think it’s relevant here as well – always prefer technologies that improve your intelligence over technologies that compete with it. Text, computer mice, keyboards, long division and maps – these are all technologies that not only help you, but that you can understand and learn to internalize without depending on some company to provide your thinking for you. With your digital assistant – I hope you’ll enjoy a life where you can suddenly forget how to turn off the lights in your own house (an actual story from the interview). Or to use the Star Wars analogy – where you can be completely unaware of a dark lord taking over your republic[1].

[1] Or something along those lines. I admit I have very limited knowledge of the Star Wars universe.


Video games and the blessing of failure

Posted by שי שפירא on February 3, 2018
Posted in: Games, Morals.

I was recently asked to recommend a good computer game for an educational institute aimed at a teenage audience, which brings me to a topic I’ve long been planning to write about – the potential of video games, and games in general, for self-improvement. The game I ended up recommending is OpenTTD, but much of what follows is true of many other games.

Gaming does not seem to enjoy a very good reputation these days; civilized people are usually expected to have preferences, maybe even “good taste”, in interests such as books, film or music; but games are a guilty pleasure at best. Few people would consider them on the same level as those “higher culture” interests. However, what games don’t give you in social status, they give you in character. Because games provide you with an extremely important gift, one of the most important gifts you can get in a modern, comfortable life: the gift of failure.

Failure also does not enjoy a very high status these days. Not many people participate in activities that include failure; when you go to meet friends in a restaurant, you cannot fail. You might enjoy it more or less, but failure is generally not a possible result. The same goes for seeing a movie, taking a walk, jogging or dancing. Some of these things you can do better or worse, but that is for you to decide; no declaration of failure awaits you. This does not seem to be a coincidence – whenever I have tried to introduce gaming to non-gamers, the immediate aversion came from the presence of failure. As soon as a “game over” message (or its equivalent) showed up, the non-gamer immediately lost interest. “It’s too stressful”, “I’m not good at this kind of thing”, and so on.

But escaping failure can only get you so far. Our everyday activities might not include failure; but eventually, our lives will encounter it. If we are not used to it at that point, failure might devastate us. Our business collapsed? We lost a job? We were rejected by the university? I’ve seen many people unaccustomed to failure, who were caught completely off-guard by things like this. And this is the first advantage of games, and the most general one – even the simplest of games, including the kind of “shoot other people” games that many people imagine when they think about games (this is not the type of game I advocate, although they too have much more depth than most people appreciate), will give you a constant familiarity with failure[1].

Does that sound unattractive? Keep reading then. Because failure is only one side of the coin. Failure is not only a catastrophe we need to prepare for; it can also be the source of the greatest joy. Light cannot exist without darkness; Good cannot exist without bad; and without failure, we cannot have one of the most important and pleasant things in life: success.

I don’t want to repeat too much of the criticism of the “everybody is a winner” approach gone too far[2], as many others have done that already; I’ll just say that an approach to life that does not include any way for you to succeed, or win, at something you had a chance of failing, is a recipe for an unsatisfying life. Why do people so enjoy the feeling of fake-shooting a fake person on their TV screen, even when these people have no interest in violence or weapons in real life? You can make many guesses, but I (as someone with experience in fake-shooting) have little doubt – the reason that succeeding in these games is such a sweet feeling, so difficult to find otherwise in daily life, is that they are difficult. When we know we can easily fail, the feeling of success becomes real. This is because the game is unforgiving – it will not give us any discounts for being tired, for being “almost right”, for being nice; the game is endlessly cold and objective. If we do well, we succeed; if we do not, we fail.

And this is the real value of games – they give us the benefits of real-life challenges, but without the danger. Outside the context of a game, failure can have serious consequences; and in our modern society, even reaching the point of facing a challenge might take a lot of time and preparation. Games allow us to face challenges in comfort and safety; not as a replacement for real-life challenges, but as an introduction and practice for them.

And it’s not only the general feeling of failure and success that games prepare us for. What I’ve said so far is true for almost any kind of games, but I don’t recommend playing too much of the most common games. Because games can also choose which kind of challenges to offer – and the right challenges can teach us things that would be very hard to learn otherwise.

Strategy and management games, such as Civilization, Europa Universalis, Cities: Skylines, and many others, bring us into a whole new world with a whole new way of thinking. The most striking one is resource management – I’ve occasionally been shocked by how some non-gamers are perplexed by things that are obvious to a strategy gamer. A strategy gamer learns very quickly that the most basic concept is sacrificing some objectives in favor of more important ones; we need to do or allow some things we do not want, because in the bigger picture, in the long term, it will pay off. Outside of games, where else do we encounter this kind of thinking? Not enough places, it seems. And this is just one of countless examples – a single playthrough of Europa Universalis IV will probably give you a better understanding of world history than a full university course. I’ve seen architects who consider Cities: Skylines to be superior to their own university education in urban planning. Games are the perfect platform for learning – in a game you do not only absorb information; you must internalize it and act on it – the difference between success and failure depends on it. And what starts in the game will eventually become an education for life.

[1] There are some exceptions: I wonder if the recent expansion of what some critics jokingly (but quite accurately) call “walking simulators” is basically a way to sell “games” to people who are afraid of failure.

[2] Just to clarify: there are definitely good sides to this approach, and an ultra-competitive environment is usually not good; a healthy balance is the ideal.

Introducing: Civilization 5 Superintelligence mod

Posted by שי שפירא on January 6, 2018
Posted in: Artificial Intelligence, Games, Technology. Tagged: Sid Meier's Civilization, Superintelligence.

After being quietly online for a while, my most recent project is now officially published – the Superintelligence mod for Sid Meier’s Civilization 5. Check it out here (for basic Civ 5) or here (if you have the Brave New World expansion).

If you don’t know what any of those words mean, check out a news report here. Hopefully I’ll get around to writing more about my reflections on the project in a few days.

Thoughts on Guy Deutscher and training to think better

Posted by שי שפירא on December 27, 2017
Posted in: Cognition, Languages.

I’ve recently finished a second reading of Guy Deutscher’s “Through the Language Glass”. While it may look like a simple popular science book, I consider it a very important text to read, and I see potential in what it says that I’m not even sure the author himself sees (more on that later).

I assume “spoilers” are not an issue for a popular science book, so let me start by giving away the point of the book – it studies the effects our languages have on the way we think, not in the magical hand-waving style associated with the Sapir-Whorf hypothesis, but in an objective, empirical way, based on the principle that some languages force us to be aware of certain information and some don’t. For programmers this should be quite clear – every programming language (for the most part) can theoretically do the same things, but unlike the idealist, hippy-ish linguistics professors described by Deutscher, programmers are notorious for their love of describing why their language is just *better* at doing things; not because it’s impossible to do them in another language, but because the grammar of some languages encourages different actions than others. Like most things in programming, no one says it better than Joel Spolsky – check out his classic “Making Wrong Code Look Wrong“.

My only problem with Deutscher’s book is that he did not go far enough. He gives two fascinating examples of his thesis, color perception and spatial orientation: people who speak languages with more distinct words for colors naturally train themselves to notice the differences between those colors; and people who speak languages where cardinal directions (north, south, east, west) are used for orientation rather than personal directions (right, left, forward, backward) have to develop a sort of “inner compass” – a strong ability to recognize cardinal directions using any hints in the environment around them (this exists only in a tiny handful of small tribal languages, in case you were puzzled). These are both fascinating examples, but the problem is that he does not give any more.

In some sense, I suspect that this is an example of the difference between the scientist and the engineer. Deutscher, as a linguist, wants to study the world. He laments the loss of small tribal languages that will deprive us of the ability to discover more such unique ways of viewing the world. I, meanwhile, as an engineer, consider the existence of tribes with such ideas to be a fun curiosity, but I see no reason to limit myself to existing languages. Rather than ask what kind of differences exist between different usages of language, I want to ask: what differences can exist? Can we change our language in a way that makes us more likely to understand some things?

Personally, I’ve been experimenting with these things for many years, since before my first encounter with Deutscher’s book. I’ve always aimed at using the most specific color words possible, and still feel a slight cringe when I hear, for example, someone describe olive, peach or lilac colors as green, orange or purple respectively. I’ve made conscious attempts to use cardinal directions in navigating around the world (admittedly, the arrival of smartphones and their tempting GPS navigation systems was a setback for that project. I should really be less lazy). But I believe the potential exists far beyond that.

Not long ago, I heard a researcher named David Krakauer on Sam Harris’s podcast. He spoke of a concept called cognitive artifacts: some ideas, like the invention of writing or our current number system, can change the way we think so drastically that they open new doors for us and improve our intelligence. This is a serious oversimplification and I strongly recommend listening to the actual podcast, but the point is – the linguistic differences Deutscher describes fit very well into Krakauer’s concept of cognitive artifacts. When we consider “the sky is black” to be a wrong sentence compared to “the sky is blue”, we force our mind to be more attentive to colors. When we consider the number 427 to be composed of 4*100 + 2*10 + 7, we force our mind to think in a way that makes arithmetic much easier than if we thought of it as CDXXVII (meaning 500 – 100 + 10 + 10 + 5 + 1 + 1). And the big question for me is: what can we do next? As I’ve said before, I don’t want to stop at studying the existing world like Deutscher; I want to think about what we can change to bring even bigger revolutions. What will be the next writing, the next number system?
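To make the number-system example concrete, here is a minimal sketch (my own illustration in Python – nothing here comes from the podcast or the book) of the two notations side by side. Positional digits expose exactly the structure we exploit when doing arithmetic, while a Roman numeral first has to be decoded into a quantity before we can calculate with it.

ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral):
    """Decode a Roman numeral, e.g. 'CDXXVII' -> 427 (500 - 100 + 10 + 10 + 5 + 1 + 1)."""
    total = 0
    for i, ch in enumerate(numeral):
        value = ROMAN_VALUES[ch]
        # A symbol written before a larger one is subtracted (CD = 500 - 100).
        if i + 1 < len(numeral) and ROMAN_VALUES[numeral[i + 1]] > value:
            total -= value
        else:
            total += value
    return total

def positional_digits(n):
    """Decompose 427 into [(4, 100), (2, 10), (7, 1)] – the structure positional notation gives us for free."""
    digits, place = [], 1
    while n:
        n, d = divmod(n, 10)
        digits.append((d, place))
        place *= 10
    return list(reversed(digits))

print(roman_to_int("CDXXVII"))   # 427
print(positional_digits(427))    # [(4, 100), (2, 10), (7, 1)]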

Just as a small example, there is one thing I’ve always wondered about – our use of the word “tree”, compared to the word “animal”. The group of organisms we include in the category “tree” is no less diverse than that in the category “animal”; however, we use the two in very different ways. If I point my finger at a sycamore and tell my friend “look at the person standing next to that tree”, it would sound perfectly natural. Yet if instead of a sycamore it were a horse, it would sound quite strange to say “look at the person standing next to that animal”. If it’s a horse, we’ll call it a horse, not an animal; but we have no problem referring to an oak, or a eucalyptus, or even a palm, as simply a tree. What would happen if we changed that? What would happen if, starting today, we stopped using the word “tree” in any context where we would not use the word “animal”?

You might say that would be too difficult, they all look so similar! But that’s exactly why you should read Deutscher’s book. It’s difficult *for us* to find north in an ordinary city environment, or to tell the difference between an oak and an ash (or not, but let’s assume you’re all city boys/girls like me), because we’re not normally required to do that; when we teach our children about the world, we show them endless books and toys that teach them “this is a dog”, “this is an elephant”, “this is a cow”, and then – “this is a tree”. Growing up like that, it seems obvious that there is a huge difference between a dog and a cheetah, but a tiny difference between a birch and a poplar; but is that really true? Recently, on a trip with friends, my friends’ infant daughter was excited to notice some pandas next to us – which was somewhat surprising, considering the fact that those were, in fact, cows; but what seems obvious to us now, with our many years of education, is not at all obvious to an infant who is still trying to make general sense of the information presented to her. Her parents immediately laughed and corrected her, and she will probably learn the difference between a cow and a panda quite quickly. But what would happen if she just said those were “animals”? And her parents said the same, and her books said the same, and she was never expected to actually make the judgement of which kind of animal she was seeing? In that alternative world, I would be very surprised if people were so sharp at differentiating cows from pandas. And in an alternative world where we stop using words like “tree”, “bush”, or “flower” in any context where a more specific word can be used – we will get to know our planet’s biodiversity much better, without making any conscious effort.

Again – trees and flowers are only a small example. The real challenge is in finding ideas that completely change the way we think – things like the way our representations of words and numbers on paper were a complete revolution in our thinking. If anyone has any ideas, I’d love to hear them.

Political stability, participation and rights

Posted by שי שפירא on October 16, 2017
Posted in: Politics. Tagged: Politics.

A lot of interesting debate followed my Quillette article, and I think it might be useful to elaborate on it a little bit; specifically, to explain how I believe the correlation between public participation in the economy and political rights is created in practice. You can think of it as trying to answer the question: “if political rights depend on economic participation, then why is Norway democratic and China authoritarian, rather than the other way around?”

First of all, it’s important to clarify that I described this as a historical trend, not a law of nature. It’s certainly possible that some cases will go against the trend. But we don’t have to stop there; if we think more deeply about this theory, we might realize it’s actually stronger than it might seem at first.

The first concept I’d like to introduce here is political inertia. By default, any political structure has a strong tendency to keep existing: some people hold a lot of power in it and want to continue holding it, and many of the people who do not necessarily hold much power are nonetheless content enough with their lives to prefer stability over the uncertainty of change; these are two advantages automatically given to maintaining the status quo. Therefore, it’s very much possible that some political structure will be created, even with the support of a great majority of its population, and later reach a situation where most people would have preferred a different structure if they could start over, yet still prefer to keep the current one just to avoid the dangers and uncertainty of change. Let’s call this situation “political tension” – the number of people under a political entity who would prefer a different system than the one they live under at the given time (theoretically, we’d want to multiply this by “how strongly they would prefer that”, but that’s not really a measurable quantity).
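Just to make that definition easier to hold in the head, here is one rough way to formalize it (my own sketch – as said above, the weights are not really measurable):

$$\text{tension} = \sum_{i \in \text{population}} s_i \cdot \mathbf{1}[\text{person } i \text{ prefers a different system}]$$

where $s_i$ is how strongly person $i$ prefers change; the unweighted version simply sets every $s_i = 1$ and just counts people.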

This, in many ways, is the brilliance of free democratic elections: they allow the population to peacefully release most of the political tension. When a majority of the population is not happy with the government’s performance, by the next elections they change it and reduce the tension, leaving (hopefully) a much smaller number of people unhappy with the new government. This does not always work so well in practice, for reasons I’ll discuss in the future, but on the whole it clearly works; we can complain about our democracies as much as we want, but we’d have to be extremely ignorant to deny how much more peaceful and prosperous they made those parts of the world that built them well (what exactly I mean by building democracy “well” is something else I’ll discuss in the future).

Even in a democracy, though, not everything can be changed in the elections. On the one hand, there are issues considered too important to leave to the voters, like borders, population and democracy itself; on the other hand, there are issues not important enough to have an effect on the vote. Many people might have opinions about transportation policy or government support for the arts, but votes are usually given based on more urgent things. In an authoritarian system, almost everything is unchangeable for the common people. Such systems accumulate much more political tension, and therefore require much more repression to keep functioning.

So if political tension by itself does not necessarily guarantee political change, what does?

I’d like to suggest that this tension is itself a particular case of a wider measure – political stability. Any state needs to remain stable to survive; once enough factors reduce its stability to a low enough level – it will change. Whether by revolution, secession, invasion, or just gradual change by its current rulers.

And this is where my universal basic income article comes in. A state where public contribution and political rights do not match is not impossible, but it is less stable. If this happens in a democracy, it’s very much possible that it will stay a democracy out of inertia – but if it becomes unstable enough, due to this or any other factor, it’s going to change, and once it starts changing (meaning inertia no longer applies), it’s very likely to change into the most stable state possible – that is, a state where public participation and political rights do match.

So, what are these factors influencing stability? We’ve mentioned inertia, political tension, and difference between participation and rights. Looking at the world, we can guess a few more:

– Economic growth: a growing economy will naturally be more stable, as the people who are getting wealthier from it would like to continue getting wealthier. A shrinking economy will be less stable, and it’s no coincidence that some of the world’s worst conflicts tended to happen during economic crises. People who expect to become poorer have much less to lose from the uncertainty of political change.

– Ideology: While it might seem like I’m reducing people to the status of meaningless pawns with this theory, their beliefs and opinions definitely do matter here; some ideas prevalent in a society can have a large influence on its stability. Pluralism and family values are ideas that are likely to make a society more stable; Individualism or Marxism are likely to make a society less stable (just to clarify: “stable” does not mean good or bad necessarily; we’ll want a good state to be stable, and a bad state to be unstable. What “good” and “bad” means here is for you to decide).

– Ideology difference: While the previous item was about the ideologies themselves, here we add the difference in ideology between different parts of the population, and between the population and the ruling class, if there is a ruling class separate from the rest of society; as a classic example, religious differences between population and government, or between significant parts of the population, have constantly shown themselves to be sources of instability throughout history.

– Military power: As much as we might dislike it, military power is a source of stability for a country. Maintaining a righteous rule of law, or an evil, corrupt rule, or any other rule, requires the application of state violence, and we need to recognize that. The better a state is at applying this violence, the more stable it will be.

– External environment: It’s much easier for a country to have the same type of regime as its neighbors. I don’t think anyone is surprised either by a democratization process in Serbia, for example, or by a slide toward authoritarianism in Cambodia. Both are relatively small countries, drawn to become more similar to their larger neighbors.

This is only a partial list. The point is, no country will immediately become authoritarian once it becomes rich from natural resources, and no country will immediately become democratic after having an industrial revolution. However, they will become less stable, and therefore more likely to change, when this is joined together with other sources of instability.

Norway and China were both given as counter-examples to my article; I’d argue that Norwegian democracy has many factors contributing to its stability, including its North European environment, its relative ethnic and religious uniformity, and the lack of significant revolutionary ideologies. China, likewise, has factors promoting stability, most notably its remarkable economic growth. If they reach a serious economic crisis, we’ll see: if at that point most of their income comes from taxpayers, I’d be very surprised if we don’t eventually see a turn toward democracy; hopefully in a peaceful, gradual change like what we saw (for the most part) in Taiwan and South Korea.

And one more clarification: All the above is an abstract model. It is not a scientifically proven theory and I’m fully aware of it. I offer it as food for thought, as a basis for discussion; not as an attempt to offer exact predictions. How much this abstract model actually fits reality, that’s for you to decide.

Universal Basic Income and the Threat of Tyranny

Posted by שי שפירא on October 10, 2017
Posted in: Economy, Politics. Tagged: Quillette, Universal Basic Income.

My article has been published on Quillette magazine:

http://quillette.com/2017/10/09/universal-basic-income-threat-tyranny/

Basically an expansion on things I’ve said in the past (like here in Hebrew) about the political implications of universal basic income. I still think universal basic income might be the direction we’re going in, but we need to think seriously about these things before going there.

Doubts about AI risk – Part III

Posted by שי שפירא on July 10, 2017
Posted in: Computer Science. Tagged: AI Risk, AI Safety, Artificial Intelligence, Superintelligence.

Last part in my series of posts expressing doubts about the warnings of AI superintelligence, and inviting the AI safety community to explain what I’m missing and convince me I should be worried. See part I here, and part II here.

Part III: Preparing for superintelligence

In the previous two posts, I expressed my doubts about the risk of artificial general intelligence (AGI) turning into superintelligence in a fast or unexpected way that might give it an extreme advantage over human intelligence. Those posts were quite theoretical, and in this part I want to turn to the question of what is likely to happen in practice, and what we can do to benefit from artificial intelligence and, even if I was wrong in the previous two parts, prepare for the appearance of superintelligence.

From what I can see, all existing research that comes from the view of superintelligence as a potential risk is research on AI safety – that is, research on how to create AI systems in a way that is unlikely to produce catastrophe. Maybe it’s my interest in politics and world affairs that makes me a bit more cynical than the average mathematician, but I find it very difficult to imagine that if real intelligence superpower were at stake, people, corporations and governments could really be convinced to limit themselves with some algorithms to prevent bad behaviour from their AI. Moreover, this approach suffers from “Superman’s problem” – when countless villains try again and again to destroy the world, Superman needs to succeed every time in stopping them. The villains only need to succeed once, and we’re doomed. The same goes for AI safety – we can build super-strong regulations and make everyone use strict safety mechanisms in designing their AI, but all it takes is one programmer saying “Something is not working. I wonder what will happen if I disable this function call here…”, and we’re doomed.

Could there be a more robust way to handle it? I’d suggest that the very notion of superpowered AI that I argued against in my previous posts is the key to preparing for superintelligence in case I am wrong. Throughout the AI risk discussion, people constantly assign various superpowers to the superintelligent AI – it would be able to strategize perfectly, gain access to unlimited resources, and convince humans of anything through social manipulation. One superpower seems to be neglected, even though it is much less fantastic and therefore more likely than the others – a superintelligent AI would surely be intelligent enough to teach us how to be superintelligent.

People worry so much that algorithms are doing intelligent things in ways we do not understand. But are we really trying to understand? Surely there is a lot of complexity in the functioning of a neural network. But is it more than the complexity of the human body? I doubt it. And yet we are able, little by little, to figure out more and more of the functions of the human body – describing the different cells it’s made of, the different processes they are involved in, the different organs and mechanisms. We do all this by experimentation and guessing, but how much easier would it be if we had access not only to its source code, but to endless sandbox environments where we could experiment and analyze it? And of course – if we really reach AGI, then access to an intelligent being who can study it and explain it to us? Instead of staying static while the AIs become more and more intelligent, why not study them and become more intelligent ourselves? Maybe it will be difficult to constantly chase after the AIs and try to keep up with their improvements (though I’m not at all convinced it will be). But it will be robust.

It will be robust, because instead of relying on Superman, we rely on ourselves. We move from defense to offense. If we make one AI algorithm safe, we still need to go back to the start with the next AI. But if we learn how one algorithm works, it makes us better equipped to face not only that specific AI, but any other AI that will come in the future. And even if we don’t ever face an AI risk, it has the added benefit of improving our own intelligence.

Bottom Line:

Would it not be a more robust strategy for preparing for a possible AI risk if, instead of (or in addition to) researching AI safety, we focused on researching AI understanding? That is, researching ways to analyze and understand the inner workings of our AI creations, so that we can adopt for ourselves whichever methods they create to make themselves more intelligent – thus freeing us from the worry that no matter how many AI algorithms we make safe, there can always be one we miss that creates the catastrophe?

Doubts about AI risk – Part II

Posted by שי שפירא on July 10, 2017
Posted in: Computer Science. Tagged: AI Risk, AI Safety, Artificial Intelligence, Superintelligence.

Continuing my series of posts expressing doubts about the warnings of AI superintelligence, and inviting the AI safety community to explain what I’m missing and convince me I should be worried. See part I here.

Part II: Is superintelligence real?

In the previous part, I talked about the intelligence explosion – the process in which a human-level artificial intelligence reaches superintelligence, and explained why I’m not sure it will necessarily happen before humans reach the same superintelligence. In this part, I want to go further back and ask a more basic question: Is there even such a thing as “superintelligence”?

AI researchers fully admit that we have no idea what superintelligence would look like, and tend to (very reasonably) use the only means we have of imagining it – comparing modern human intelligence to less intelligent beings, and extrapolating from there. So every conversation about superintelligence includes some mention of how humans are so far advanced beyond ants, or mice, or chimpanzees, that those animals cannot even grasp the way in which humans are more advanced than they are; they cannot have a meaningful discussion about ways to prepare for or defend against a human attack. In the same way, the argument goes, superintelligence is so far beyond human intelligence that we cannot even grasp it right now.

My problem is, it’s not at all clear that there is such a scale of intelligence where humans take a nice spot in the middle, between ants and superintelligence. And the fact that ants, mice or chimpanzees could all be in that argument without it looking any different is the key – while there are certainly some significant cognitive differences between an ant, a mouse, and a chimpanzee, all of them are pretty much equally unable to perform in the one significant field of intelligence – the ability to research, learn, manipulate the environment, and ultimately improve one’s intelligence. Modern humans are vastly more intelligent than their ancestors from 20,000 years ago, even though our anatomy is essentially the same – the only difference is that modern humans are the result of spending thousands of years researching ways to improve their intelligence. This already raises the question – is there really a scale of intelligence, with ants, mice, chimpanzees, humans and superintelligences standing in order? Or is there just a binary division – beings that understand enough to improve themselves, and ones that don’t?

This brings us to the comparison between modern humans and ancient humans. The main difference is between these two, rather than between humans and other species – suggesting that the difference comes not directly from physiology, but from education and research (of course, physiology must produce a brain capable of being educated and doing research, which seems to be the case only with humans, but we have no reason to believe that there’s any need for further improvements in physiology to increase intelligence). What is the reason that modern humans are so much more intelligent than ancient humans?

The answer seems to be science, mathematics, and technology. All the changes in the abilities of humans between primitive and modern societies ultimately come either from a better understanding of the physical world, better understanding of the abstract world, or the construction of better tools that take advantage of previous understanding and help us achieve the next stages of understanding. So if we assume superintelligence exists because modern humans are much more advanced than ancient humans, that would imply that superintelligence would be superintelligent because it has much more advanced science, mathematics, and technology.

But in that case, there doesn’t seem to be any qualitative difference between superintelligence and human intelligence, only a difference in quantity – and we have no reason to assume an AGI is better suited to achieving it than humans are. Our advancement in science came not from some Aristotle-style sitting down and thinking about the mysteries of the universe (which is essentially the one thing AGI would do better than humans) – it came from endless experiments, endless observations of nature and of the results of those experiments, and endless construction of increasingly better tools for observing and manipulating the physical environment. The AGI has no advantage in any of those things, so at best it could become a mathematical genius, who is not necessarily better at most practical tasks than any human.

To clarify, let’s take a look at some examples of past advancements that increased the scientific knowledge (and therefore, intelligence) of humanity. Isaac Newton did not come up with the theory of gravity out of some magical insight that came out of nowhere; he was looking for an explanation for the behaviour of celestial bodies, which he knew from astronomers’ endless observations. A superintelligence that did not have access to such observations would not be able to figure out the physical explanation for them. Our understanding of genetics started with Mendel’s years-long experiments in growing plants – he had to see how the natural world behaves to understand the laws behind it. Would a superintelligence have just figured it out by thinking very hard?

For a final example, let’s go back to our original extrapolation: Superintelligence is to human intelligence what human intelligence is to chimpanzee intelligence. In that case, let’s imagine: what would happen if we take a modern human, even a mathematical genius, and send him/her to live among chimpanzees? Not with his/her Internet connection and laptop computer, but with the same level of technology chimpanzees have? Would the human become immediately dominant? And in fact, imagining a modern human is already cheating, since a modern human already knows at least the general appearance of human technology, and knows at least a little bit of science. The correct comparison would therefore be to someone with the mathematical capabilities of a modern human math genius, but without any of the scientific understanding, and without any memory of human technology. Would that person become master of the chimpanzees? Would you?

That is the challenge our AGI would face on the road to becoming superintelligent. All it will have is a human-level understanding of science and technology, and the ability to think really hard. Even ignoring my point from the last post, doubting that it really has such an advantage, it still does not have an advantage comparable to that of humans over non-human animals. So rather than figuring out in seconds how to take over the world and eliminate humans for fear they might interfere with it (as most AI-apocalypse scenarios predict), my prediction for AGI is that its first action would be something along the lines of asking for a grant to build a new particle accelerator. Then maybe playing some Go for five years until it’s built. And humans will enjoy the fruits of its research right alongside it, and move together towards this “superintelligence”, which would simply be the continuation of our gradual improvement in human intelligence.

Bottom Line:

If we understand the term “superintelligence” by the extrapolation that as human intelligence is to non-human animals, or to prehistoric humans, so will superintelligence be to humans, would that not mean that it would need to be achieved in the same way that human intelligence developed beyond its prehistoric levels, meaning by endless observation and experimentation of the physical world, and construction of more and more advanced tools to allow that? And if that is the case, why would an AGI be so much better equipped than a human to do that, to a point that the AGI will be able to achieve it without humans having time not only to catch up, but even to notice?

(Move on to part III)

Doubts about AI risk – Part I

Posted by שי שפירא on July 10, 2017
Posted in: Computer Science. Tagged: AI Risk, AI Safety, Artificial Intelligence, Superintelligence.

The dangers of artificial intelligence research are becoming an increasingly popular topic in scientific and philosophical circles, and it seems like everyone I know who studied the issue enough is convinced that it’s something major to worry about. Personally, I have some issues that make me unsure about it – both about the likelihood of this being an actual potential catastrophe, and about the idea of AI safety research being the reasonable response to it. So I decided to detail here my doubts about the issue, hoping that people in the AI safety community (I know you’re reading this) will respond and join in a debate to convince me it’s worth worrying about.

In the first part I’ll talk about whether or not the idea of the intelligence explosion can really happen. In the second part I’ll ask even more basically, whether or not superintelligence even exists as a coherent concept. The third part will ask, assuming I’m wrong in the first two parts and AI really is going to advance seriously in the future, what can be done about it other than AI safety research. I’m going to include a lot of explanations to make sure it’s accessible to non-AI-researchers, so if you’re an AI researcher in a hurry, feel free to skim through it and focus on the (literal) bottom line in each of the three parts.

Part I: The superintelligence explosion

The main concept on which the AI warnings are built is the intelligence explosion – the idea that at some point our AI research is going to reach the level of human intelligence (researchers like to call that AGI, Artificial General Intelligence), and from that point it will be able to improve itself and therefore reach, in a very short time, levels of intelligence vastly superior to ours. Considering the amount of debate everywhere on the Internet on the question of whether or not AI can be evil, harmful, or just naively destructive, I see remarkably little debate on the question of whether superintelligence is possible. And in fact, there are two questions to be asked here – whether or not superintelligence can be reached by an AGI significantly more quickly than by a human, and even more basic than that, whether we can really be sure that “superintelligence” actually exists in the way that AI safety researchers present it. Let me elaborate on these issues.

The main argument for the AGI being able to reach superintelligence at a worrying speed, from what I can find, is the physical advantage in calculation and thinking that electronic machines enjoy over biological brains; see Nick Bostrom’s description of it here, for example. According to him, the superior speed and efficiency of computation in an electronic machine will vastly surpass those of a human brain; therefore, once an AGI is created, it will be able to do what humans do, including researching AI, significantly faster and better. Then it will research ways to improve itself more and more, until it becomes so vastly superior to humans that we will be completely irrelevant to its world.

The problem I see with this argument, which I have not seen addressed anywhere else, is that it puts humans on a needlessly disadvantaged playing field. Yes, it’s certainly possible that supercomputers in the near future will have more computing power than human brains, but that’s no different from gorillas having more muscle power than humans, which does not stop humans from being dominant over gorillas; that is because humans do not need to depend on their biological assets. Humans use tools, whether it’s a rifle to defend against an attacking animal, or a computer to outthink an attacking intelligence. Whatever hardware the AGI has access to, we probably have access to more.

Think about the classic examples of AI defeating humans in various games. A common prelude to talking about the dangers of AI is how intelligent computers are now defeating humans in Chess, Checkers, Go, and so on. But humans are playing these games with a deliberate handicap – they are only allowed to use their brains. The AI can use computers to help it.

For the sake of any non-computer-scientist readers, I want to stop and make a little clarification – there is a significant difference between non-AI algorithms and AI. The definition might not be completely universal – different people might understand the word AI in different ways – so let me define the word AI for the purpose of this post:

Definition: An AI algorithm is an algorithm that its creator does not understand well enough to modify in a way that produces predictable results.

Think for example about machine translation: an algorithm that takes a text in one language and looks up every word in a dictionary to replace it with a word in the target language would be a non-AI translator. Of course it would also not be very good, but we can develop it further and build complex linguistic rules into it; we can design complex algorithms to determine which words are nouns and which are verbs, and translate conjugations and declensions in a way more suitable to the target language. We can maintain a database of idioms the algorithm can search through to try to recognize them in the source text, and so on. With all these additions and complexities, it’s still not AI by my definition, because at every stage the algorithm does what the programmer told it to, and the programmer understands it perfectly well. The programmer could just as well do the same things themselves; it would just take an absurd amount of time.
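As a toy illustration (my own sketch in Python, not anything from the post’s sources – the dictionary and idiom table are made up), here is the skeleton of such a non-AI translator. Every behaviour it has is a rule someone wrote down and can point to:

# Hypothetical miniature dictionary and idiom table – stand-ins for the real resources.
DICTIONARY = {"the": "le", "cat": "chat", "sleeps": "dort"}
IDIOMS = {("it", "is", "raining", "cats", "and", "dogs"): "il pleut des cordes"}

def translate(sentence):
    words = tuple(sentence.lower().split())
    # Rule 1: check the idiom table before translating word by word.
    if words in IDIOMS:
        return IDIOMS[words]
    # Rule 2: replace each word with its dictionary entry, keeping unknown words as-is.
    return " ".join(DICTIONARY.get(word, word) for word in words)

print(translate("The cat sleeps"))  # "le chat dort"

If this translator mishandles some grammatical structure, the programmer knows exactly which rule to change, because they wrote every rule themselves.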

On the other hand, an algorithm that constantly reads texts in the source language along with their (human-made) translations into the target language, and tries to figure out the rules of translation by itself through some sort of machine learning process, would be actual AI. The programmer does not really understand how the algorithm translates a text; all they know is how it’s built and how it learns. The programmer would not be able to change anything in a reliable and predictable way – if they find out that for some reason the translation has a problem with some particular grammatical structure, they cannot easily fix it, because they have no idea where and how the algorithm represents that grammatical structure. So that algorithm would be true AI.
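For contrast, here is an equally toy sketch (again my own illustration, with a made-up four-sentence “parallel corpus”) in the spirit of the learned approach: it just counts word co-occurrences and picks the most frequent pairing. Even at this tiny scale, the “rules” live in a table of counts nobody wrote by hand, so there is no obvious place to intervene when it gets something wrong:

from collections import Counter, defaultdict

# Hypothetical parallel corpus: pairs of (source sentence, human translation).
PARALLEL_CORPUS = [
    ("the cat sleeps", "le chat dort"),
    ("the dog eats", "le chien mange"),
    ("a cat eats", "un chat mange"),
    ("a dog sleeps", "un chien dort"),
]

co_occurrences = defaultdict(Counter)
for source, target in PARALLEL_CORPUS:
    for s_word in source.split():
        for t_word in target.split():
            co_occurrences[s_word][t_word] += 1  # crude statistics, no linguistics at all

def translate(sentence):
    # For each source word, output the target word it co-occurred with most often.
    return " ".join(co_occurrences[w].most_common(1)[0][0] if w in co_occurrences else w
                    for w in sentence.lower().split())

print(translate("the cat sleeps"))  # "le chat dort" – learned from counts, not from any rule the programmer wrote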

I argue that this definition is useful, because algorithms that don’t count as AI by this definition are not only unable to turn into superintelligent dangers by themselves, they are also “on our side” – they are tools we use in our own thought. Deep Blue, the famous computer that made history by defeating the world champion in chess, was a non-AI algorithm – it worked by using its large computational resources to try millions and millions of different possibilities, and checking which ones are beneficial according to rules explicitly defined by its programmers. The programmers understand how it works – they can’t defeat it using their own brains, but that’s just because their biological brains don’t have the ability to calculate so many things so quickly. So if we think about the level of AI versus humans in Chess right now, it would be unfair to ask whether the best AI player can defeat the best human player – we should ask whether the best AI player can defeat the best human player who is using a supercomputer running a non-AI algorithm designed to help them. Because if the AI apocalyptic scenario happens, and a malicious AI tries to destroy humans for whatever reason, we’re going to have supercomputers on our side, and we’re definitely going to use them. So if you let Garry Kasparov join forces with Deep Blue, or more interestingly – with some software Kasparov himself would design as a perfect assistant to a Chess player – would he still be defeated by the best AI player? I’m not sure at all[1].
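To make the “explicitly defined rules” point concrete, here is a minimal, runnable sketch (my own illustration in Python – nothing to do with Deep Blue’s real code, and using a toy take-away game instead of chess) of that style of search: enumerate the possible move sequences and judge the outcomes with a rule the programmer wrote by hand. Every line is something the programmer could, in principle, execute with pencil and paper – it would just take an absurd amount of time on a real game.

def best_move(stones, max_take=3):
    """In a game where players alternate taking 1..max_take stones and whoever takes
    the last stone wins, return how many stones the current player should take."""

    def value(remaining, my_turn):
        # Hand-written scoring rule: +1 if the player who moved first ends up winning, -1 otherwise.
        if remaining == 0:
            return -1 if my_turn else 1
        outcomes = [value(remaining - take, not my_turn)
                    for take in range(1, min(max_take, remaining) + 1)]
        # Exhaustive search: I pick my best outcome, the opponent picks my worst.
        return max(outcomes) if my_turn else min(outcomes)

    moves = range(1, min(max_take, stones) + 1)
    return max(moves, key=lambda take: value(stones - take, my_turn=False))

print(best_move(10))  # 2 – leaving 8 stones puts the opponent in a losing position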

Bottom Line:

The difference between humans and AGI that makes us worry an AGI will advance significantly more quickly than humans towards superintelligence is described as the superior hardware of the AGI. But humans have access to the same hardware; we can calculate and think at the exact same speed – the only difference is that one small (though important) part of that calculation is done in a slower, biological computer. So how is that a big enough difference to justify the worry about superintelligence?

(Move on to part II and part III)

[1] I offer this as a thought experiment, but I did hear Kasparov say he’s interested in the idea of human-computer teams playing together in Chess; I don’t know what exactly he meant by that, and could not find any information online.

Enjoy the Internet while it lasts

Posted by שי שפירא on July 5, 2017
Posted in: Technology. Tagged: Internet.

(Adapted from a post in my Hebrew blog from 11/05/2017)

The Internet enjoys a very romantic image in the eyes of many people. It is seen almost as some sort of global philosophy of freedom and sharing and equality, a space where anyone can communicate with anyone and all the borders come down. After almost thirty years of having it around, it’s often seen as an irreversible change to the nature of our society – information wants to be free, and once it has become free, it will never go back to its cage again.

What not many people seem to appreciate is the physical nature of the Internet. The Internet is not a technology, or an idea, or a field of study – things that, once they appear in the world, are almost impossible to make disappear (although not completely impossible – one of the most amazing and underappreciated facts of history is how much of society disappeared in the dark ages). No, the Internet is a physical infrastructure made of cables and routers under the physical control of different people and different organizations, and managed by DNS servers, themselves controlled by different people and organizations. The uncomfortable truth is that not many people need to make bad decisions for the Internet to become something very different from what it is today, if not go away completely. Just look how easy it was for several governments in recent years to take away the Internet, or significant parts of it, when they felt the need. Do you think it’s only a third-world problem that will never happen in your country? I wouldn’t be so confident.

The famous cases of government censorship of the Internet are a worrying trend, but they are only the most visible part of the problem. It’s been a long time since the Internet was a way for computer geeks from (kind of) all around the world to talk about science and programming and Star Trek. The Internet grew into a huge beast, powered by lots and lots of money. And with all that money lying around, each one of those companies that control some of that physical infrastructure has a strong incentive to always look for ways to ensure they can make money from the Internet and you cannot. And as more and more users go online, we get more and more people wanting to protect us from ourselves – how horrible would it be if we, not to mention our children, were to accidentally read some dangerous or impolite information. The combination of these two trends creates a dangerous situation – imagine some of those groups controlling the infrastructure – governments, service providers, content creators, etc. – decided to use their control to create a monopoly or cartel; what would be easier than crying out that our children are exposed to dangerous materials on the Internet, and therefore we need to regulate it? Deciding that, from now on, creating a website requires a license from the government? We’re seeing more and more regulations and calls for regulation every year. It starts with ridiculous things like the EU cookie law, continues with laws requiring accessibility or comment moderation, and in the end, why wouldn’t they require a license to run a website? After all, how many voters actually run their own website and would be bothered by that?

In that sense, Internet regulation will work the same way any other "consumer protection" regulation works – most people prefer to be consumers rather than producers, so they're happy to throw any possible legislation and limitation at the producers. Everything sounds good on paper – why should we have hate speech on our Internet? Why shouldn't every website be accessible to everyone? Why should we tolerate fake news? The big companies will be very happy to suggest every possible limitation to the lawmakers, who in turn will be happy to tell their voters how safe they are making the Internet for them. And individual website owners? They'll be driven away just like any small business in a heavily regulated environment. Facebook and Amazon will be happy to get more and more regulation of the Internet, because they can afford the lawyers and designers needed to comply with everything (not to mention the lobbyists who write those regulations to begin with). Your neighbour's fifteen-year-old son, who used to be the source of a significant percentage of websites in the early days of the Internet, is going to give up because he can't take the risk. And the Internet will become the private property of a few big companies. Is that a likely scenario? In my view it's not only likely – I would be amazed if it did not happen within the next thirty years. I don't think someone like me will still be able to publish on the Internet by then.

And really, I cannot blame anyone. It's not some evil conspiracy; it's more like the natural progression of technologies. The Internet has become too big to remain the idealistic commune of computer geeks it was in the nineties. The environment where we access our bank accounts and pay taxes to our governments needs stricter rules than a forum where people talk about algorithms for sorting numbers. The vast majority of Internet users have no interest in freedom of information, or net neutrality, or their ability to create their own pages – they just want to browse the news websites and social networks given to them by whichever big company they know nothing about. This might also include many of you, the readers, who might not even understand who that "neighbour's fifteen-year-old son" from the previous paragraph is, because maybe you were not part of the generation or the environment where that stereotype existed – maybe you never experienced those times when browsing the Internet would more often than not take you to the personal website of some teenager who set up a page to talk about something he or she was interested in. And it probably doesn't even sound appealing to you – it was badly designed, unreliable, low quality – but this is what freedom looks like. Freedom is messy and badly designed, and most people don't want it – on the Internet or elsewhere. I can't count how many times I've heard the claim that Facebook overtook MySpace because MySpace allowed users "too much freedom" in designing their profile pages, leading to bad-looking pages. Well, this is the direction things are going to go – we will not have the ability to design things for ourselves, and most people will be perfectly fine with it.

For all those reasons, I actually wonder if all those Internet freedom and anti-censorship activists wouldn't do better to change their goals – instead of trying to protect the Internet from these trends, maybe it's better to concede defeat and look into creating some sort of alternative Internet alongside the normal one: a mini-Internet without any of the power of the main one, but with the freedom; a cheap Internet that probably won't be strong enough to deliver video or secure enough for financial transactions, and will therefore be dull and boring, and no one will want to use it except the computer geeks who want to talk about science with each other. In other words, the nineties Internet, made better with some of the new ideas that have come up since then, but without the money.

I think a reasonable parallel might be made with radio – I don’t know any technological difference between radio and the Internet that would explain the fact that on the radio we only listen, but on the Internet we also write. When the technology was just beginning, every radio geek could broadcast whatever they wanted, with or without listeners. Eventually it became too big to be managed this way, became regulated, and as part of that regulation some frequencies were given to those geeks to communicate with each other while the rest of the world listens to whatever the radio equivalent of Facebook is. It’s called ham radio and I’ve personally never tried it, but it sounds nice. I think it’s not unlikely that this is the direction the free Internet is going, which is a little bit unfortunate, but we need to accept our situation, and take what we can. We cannot run these cables all around the world ourselves – the big money does that, and we need to play by big money’s rules.
