Shai Shapira

The art of making computers do stuff

Doubts about AI risk – Part II

Posted by שי שפירא on July 10, 2017
Posted in: Computer Science. Tagged: AI Risk, AI Safety, Artificial Intelligence, Superintelligence. 4 Comments

Continuing my series of posts expressing doubts about the warnings of AI superintelligence, and inviting the AI safety community to explain what I’m missing and convince me I should be worried. See part I here.

Part II: Is superintelligence real?

In the previous part, I talked about the intelligence explosion – the process in which a human-level artificial intelligence reaches superintelligence, and explained why I’m not sure it will necessarily happen before humans reach the same superintelligence. In this part, I want to go further back and ask a more basic question: Is there even such a thing as “superintelligence”?

AI researchers fully admit that we have no idea what superintelligence would look like, and tend to (very reasonably) use the only means we have of imagining it – comparing modern human intelligence to less intelligent beings, and extrapolating from there. So every conversation about superintelligence includes some mention of how humans are so far advanced beyond ants, or mice, or chimpanzees, that those animals cannot even grasp the way in which humans are more advanced than they are; they cannot have a meaningful discussion about how to prepare for or defend against a human attack. In the same way, the argument goes, superintelligence is so far beyond human intelligence that we cannot even grasp it right now.

My problem is that it’s not at all clear there is such a scale of intelligence, with humans occupying a nice spot in the middle between ants and superintelligence. And the fact that ants, mice or chimpanzees could all appear in that argument without it looking any different is the key – while there are certainly significant cognitive differences between an ant, a mouse and a chimpanzee, all of them are pretty much equally unable to perform in the one field of intelligence that matters here: the ability to research, learn, manipulate the environment, and ultimately improve one’s own intelligence. Modern humans are vastly more intelligent than their ancestors from 20,000 years ago, even though our anatomy is essentially the same – the only difference is that modern humans are the result of thousands of years spent researching ways to improve their intelligence. This already raises the question: is there really a scale of intelligence, with ants, mice, chimpanzees, humans and superintelligences standing in order? Or is there just a binary division – beings that understand enough to improve themselves, and ones that don’t?

This brings us to the comparison between modern humans and ancient humans. The most meaningful difference lies between these two groups, rather than between humans and other species – suggesting that the difference comes not directly from physiology, but from education and research (of course, physiology must produce a brain capable of being educated and of doing research, which seems to be the case only with humans, but we have no reason to believe that any further improvement in physiology is needed to increase intelligence). So what is the reason that modern humans are so much more intelligent than ancient humans?

The answer seems to be science, mathematics, and technology. All the changes in human abilities between primitive and modern societies ultimately come from a better understanding of the physical world, a better understanding of the abstract world, or the construction of better tools that exploit previous understanding and help us reach the next stages of understanding. So if we assume superintelligence exists because modern humans are much more advanced than ancient humans, that would imply that a superintelligence would be superintelligent because it has much more advanced science, mathematics, and technology.

But in that case, there doesn’t seem to be any qualitative difference between superintelligence and human intelligence – only a difference in quantity, and we have no reason to assume an AGI is better suited to achieving it than humans are. Our advancement in science came not from some Aristotle-style sitting down and thinking about the mysteries of the universe (which is essentially the one thing an AGI would do better than humans) – it came from endless experiments, endless observations of nature and of the results of those experiments, and the endless construction of ever better tools for observing and manipulating the physical environment. The AGI has no advantage in any of those things, so at best it could become a mathematical genius, which does not necessarily make it better than any human at most practical tasks.

To clarify, let’s look at some examples of past advancements that increased the scientific knowledge (and therefore, intelligence) of humanity. Isaac Newton did not come up with the theory of gravity out of some magical insight that appeared from nowhere; he was looking for an explanation for the behaviour of celestial bodies, which he knew from astronomers’ endless observations. A superintelligence without access to such observations would not be able to figure out the physical explanation for them. Our understanding of genetics started with Mendel’s years-long experiments in growing plants – he had to see how the natural world behaves in order to understand the laws behind it. Would a superintelligence have just figured it out by thinking very hard?

For a final example, let’s go back to our original extrapolation: superintelligence is to human intelligence what human intelligence is to chimpanzee intelligence. In that case, let’s imagine: what would happen if we took a modern human, even a mathematical genius, and sent him or her to live among chimpanzees? Not with an Internet connection and a laptop, but with the same level of technology chimpanzees have. Would the human become immediately dominant? And in fact, imagining a modern human is already cheating, since a modern human already knows at least the general appearance of human technology, and knows at least a little bit of science. The correct comparison would therefore be to someone with the mathematical capabilities of a modern math genius, but without any of the scientific understanding, and without any memory of human technology. Would that person become master of the chimpanzees? Would you?

That is the challenge our AGI would face on the road to becoming superintelligent. All it will have is a human-level understanding of science and technology, and the ability to think really hard. Even ignoring my point from the last post, doubting that it really has such an advantage, it still does not have an advantage comparable to that of humans over non-human animals. So rather than figuring out in seconds how to take over the world and eliminate humans for fear they might interfere (as most AI-apocalypse scenarios predict), my prediction for an AGI is that its first action would be something along the lines of asking for a grant to build a new particle accelerator. Then maybe playing some Go for five years until it’s built. And humans will enjoy the fruits of its research right alongside it, and move together with it towards this “superintelligence”, which would simply be the continuation of our gradual improvement in human intelligence.

Bottom Line:

If we understand the term “superintelligence” by the extrapolation that as human intelligence is to non-human animals, or to prehistoric humans, so superintelligence will be to us, would that not mean it has to be achieved the same way human intelligence developed beyond its prehistoric levels – by endless observation of and experimentation on the physical world, and by building more and more advanced tools to allow that? And if that is the case, why would an AGI be so much better equipped than a human to do it, to the point that the AGI could achieve it without humans having time not only to catch up, but even to notice?

(Move on to part III)

Doubts about AI risk – Part I

Posted by שי שפירא on July 10, 2017
Posted in: Computer Science. Tagged: AI Risk, AI Safety, Artificial Intelligence, Superintelligence. 10 Comments

The dangers of artificial intelligence research are becoming an increasingly popular topic in scientific and philosophical circles, and it seems like everyone I know who has studied the issue enough is convinced that it’s something major to worry about. Personally, I have some doubts – both about the likelihood of this being an actual potential catastrophe, and about the idea of AI safety research being the reasonable response to it. So I decided to detail my doubts here, hoping that people in the AI safety community (I know you’re reading this) will respond and join in a debate to convince me it’s worth worrying about.

In the first part I’ll ask whether the intelligence explosion can really happen. In the second part I’ll ask, even more basically, whether superintelligence even exists as a coherent concept. The third part will ask, assuming I’m wrong in the first two parts and AI really is going to advance seriously in the future, what can be done about it other than AI safety research. I’m going to include a lot of explanations to make sure it’s accessible to non-AI-researchers, so if you’re an AI researcher in a hurry, feel free to skim through it and focus on the (literal) bottom line of each of the three parts.

Part I: The superintelligence explosion

The main concept on which the AI warnings are built is the intelligence explosion – the idea that at some point our AI research is going to reach the level of human intelligence (researchers like to call that AGI, Artificial General Intelligence), and from that point it will be able to improve itself and therefore reach, in a very short time, levels of intelligence vastly superior to ours. Considering the amount of debate everywhere on the Internet about whether AI can be evil, harmful, or just naively destructive, I see remarkably little debate about whether superintelligence is possible. And in fact, there are two questions to be asked here – whether superintelligence can be reached by an AGI significantly more quickly than by a human, and even more basically, whether we can really be sure that “superintelligence” actually exists in the way that AI safety researchers present it. Let me elaborate on these issues.

The main argument for the AGI being able to reach superintelligence at a worrying speed, from what I can find, is the physical advantage in calculation and thinking that electronic machines enjoy over biological brains; see Nick Bostrom’s description of it here, for example. According to him, the superior speed and efficiency of computation in an electronic machine will vastly surpass those of a human brain; therefore, once an AGI is created, it will be able to do what humans do, including researching AI, significantly faster and better. Then it will research ways to improve itself more and more, until it becomes so vastly superior to humans that we will be completely irrelevant to its world.

The problem I see with this argument, which I have not seen addressed anywhere else, is that it puts humans on a needlessly disadvantaged playing field. Yes, it’s certainly possible that supercomputers in the near future will have more computing power than human brains, but that’s no different from gorillas having more muscle power than humans, which does not stop humans from being dominant over gorillas; that is because humans do not need to depend on their biological assets. Humans use tools, whether it’s a rifle to defend against an attacking animal, or a computer to outthink an attacking intelligence. Whatever hardware the AGI has access to, we probably have access to more.

Think about the classic examples of AI defeating humans in various games. A common prelude to talking about the dangers of AI is how intelligent computers are now defeating humans in chess, checkers, Go, and so on. But humans are playing these games with a deliberate handicap – they are only allowed to use their brains. The AI can use computers to help it.

For the sake of any non-computer-scientist readers, I want to stop and make a little clarification – there is a significant difference between non-AI algorithms and AI. The definition might not be completely universal, and different people might understand the word AI in different ways, so let me define the word AI for the purpose of this post:

Definition: An AI algorithm is an algorithm that its creator does not understand well enough to modify in a way that produces predictable results.

Think, for example, about machine translation: an algorithm that takes a text in one language and looks up every word in a dictionary to replace it with a word in the target language would be a non-AI translator. Of course, it would also not be very good, but we can develop it further and build complex linguistic rules into it; we can design complex algorithms to determine which words are nouns and which are verbs, and to translate conjugations and declensions in a way more suitable to the target language. We can maintain a database of idioms for the algorithm to search through and try to recognize in the source text, and so on. With all these additions and complexities, it’s still not AI by my definition, because at every stage the algorithm does what the programmer told it to, and the programmer understands it perfectly well. The programmer could just as well do the same things by themselves; it would just take an absurd amount of time.
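
To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not taken from any real translation system) of the word-by-word, non-AI translator described above. The tiny dictionary and the example sentence are invented; the point is only that every rule is written by hand, so the programmer can predict exactly what the program will output.

```python
# A minimal sketch of the word-by-word, "non-AI" translator described above.
# The three-word dictionary is made up for illustration; a real rule-based
# system would add grammar rules, idiom tables and so on, but it would still
# be fully predictable to its programmer.

DICTIONARY = {
    "the": "le",
    "cat": "chat",
    "sleeps": "dort",
}

def translate_word_by_word(text: str) -> str:
    """Replace each word with its dictionary entry; unknown words pass through."""
    return " ".join(DICTIONARY.get(word, word) for word in text.lower().split())

if __name__ == "__main__":
    print(translate_word_by_word("The cat sleeps"))  # -> "le chat dort"
```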

On the other hand, an algorithm that constantly reads texts in the source language along with their (human-made) translations into the target language, and tries to figure out the rules of translation by itself through some sort of machine learning process, would be actual AI. The programmer does not really understand how the algorithm translates a text; all they know is how it’s built and how it learns. The programmer would not be able to change anything in a reliable and predictable way – if they find out that for some reason the translation has a problem with some particular grammatical structure, they cannot easily fix it, because they have no idea where and how the algorithm represents that grammatical structure. So that algorithm would be true AI.
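
By contrast, here is a toy sketch of the learning approach – again my own illustration, far cruder than real machine-learning translation. The program is given pairs of sentences and their human translations (a three-sentence invented “corpus” here), counts which words tend to appear together, and builds its own word table from the data. The programmer writes the learning procedure, but the table that actually does the translating is derived from the examples rather than written by hand.

```python
from collections import Counter, defaultdict

# Invented toy "parallel corpus": (source sentence, human translation) pairs.
# Real systems learn from millions of such pairs with far more sophisticated
# statistics or neural networks; this only illustrates the principle that the
# translation table is derived from data instead of being written by hand.
PARALLEL_CORPUS = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats", "le chat mange"),
]

def learn_word_table(corpus):
    """For each source word, pick the target word it co-occurs with most
    strongly, relative to how often that target word appears overall."""
    cooccur = defaultdict(Counter)
    target_totals = Counter()
    for source, target in corpus:
        target_words = target.split()
        target_totals.update(target_words)
        for s_word in source.split():
            cooccur[s_word].update(target_words)
    return {
        s_word: max(counts, key=lambda t: counts[t] / target_totals[t])
        for s_word, counts in cooccur.items()
    }

def translate(text, table):
    """Translate word by word using the learned table; unknown words pass through."""
    return " ".join(table.get(word, word) for word in text.lower().split())

if __name__ == "__main__":
    table = learn_word_table(PARALLEL_CORPUS)
    print(table)                             # the table was learned, not written
    print(translate("the dog eats", table))  # -> "le chien mange"
```

Even in this tiny example, if some word comes out wrong, the programmer has no rule to edit; the only fix is to change the data or the learning procedure and see what happens – which is exactly the property the definition above points at.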

I argue that this definition is useful, because algorithms that don’t count as AI by this definition are not only unable to turn into superintelligent dangers by themselves, but they are also “on our side” – they are tools we use in our own thought. Deep Blue, the famous computer that made history in defeating the world champion in chess, was a non-AI algorithm – it worked by using its large computational resources to try millions and millions of different possibilities, and checking which ones are beneficial according to rules explicitly defined by its programmers. The programmers understand how it works – they can’t defeat it using their own brains, but that’s just because their biological brains don’t have the ability to calculate so many things so quickly. So if we think about the level of AI versus humans in chess right now, it would be unfair to ask whether the best AI player can defeat the best human player – we should ask whether the best AI player can defeat the best human player aided by a supercomputer running a non-AI algorithm designed to help them. Because if the AI-apocalypse scenario happens, and a malicious AI tries to destroy humans for whatever reason, we’re going to have supercomputers on our side, and we’re definitely going to use them. So if you let Garry Kasparov join forces with Deep Blue, or more interestingly – with some software Kasparov himself would design as a perfect assistant to a chess player – would he still be defeated by the best AI player? I’m not sure at all[1].
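
To illustrate what “non-AI brute force” looks like in code, here is a minimal sketch of the approach Deep Blue relied on – exhaustive search plus an evaluation rule written explicitly by the programmer – applied to the toy game of Nim (players alternately take 1–3 stones; whoever takes the last stone wins) instead of chess, purely to keep the example short. Nothing here is learned; the programmer can predict and explain every move it recommends, and the only thing a human lacks is the patience to run the search by hand.

```python
from functools import lru_cache

# Deep-Blue-style brute force on a toy game (Nim): try every possible
# continuation and judge positions by an explicit, programmer-written rule.
# There is no learning involved anywhere in this program.

@lru_cache(maxsize=None)
def best_move(stones):
    """Return (score, move) for the player to act: +1 = forced win, -1 = forced loss."""
    best = (-1, 1)  # assume a loss until a winning move is found
    for take in (1, 2, 3):
        if take > stones:
            break
        if take == stones:
            return (1, take)  # taking the last stone wins immediately
        opponent_score, _ = best_move(stones - take)
        score = -opponent_score  # whatever is good for the opponent is bad for us
        if score > best[0]:
            best = (score, take)
    return best

if __name__ == "__main__":
    for pile in range(1, 10):
        score, move = best_move(pile)
        outcome = "win" if score == 1 else "loss"
        print(f"pile of {pile}: take {move}, forced {outcome}")
```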

Bottom Line:

The difference between humans and AGI that makes us worry an AGI will advance towards superintelligence significantly more quickly than humans is described as the AGI’s superior hardware. But humans have access to the same hardware; we can calculate and think at the exact same speed – the only difference is that one small (though important) part of that calculation is done in a slower, biological computer. So how is that a big enough difference to justify the worry about superintelligence?

(Move on to part II and part III)

[1] I offer this as a thought experiment, but I did hear Kasparov say he’s interested in the idea of human-computer teams playing chess together; I don’t know exactly what he meant by that, and could not find any information online.

Enjoy the Internet while it lasts

Posted by שי שפירא on July 5, 2017
Posted in: Technology. Tagged: Internet. Leave a comment

(Adapted from a post in my Hebrew blog from 11/05/2017)

The Internet enjoys a very romantic image in the eyes of many people. It is seen almost as some sort of global philosophy of freedom and sharing and equality, a space where anyone can communicate with anyone and all the borders come down. After almost thirty years of having it around, it’s often seen as an irreversible change to the nature of our society – information wants to be free, and once it has become free, it will never go back to its cage again.

What not many people seem to appreciate is the physical nature of the Internet. The Internet is not a technology, or an idea, or a field of study – things that, once they appear in the world, are almost impossible to make disappear (although not completely impossible – one of the most amazing and underappreciated facts of history is how much of society disappeared in the dark ages). No, the Internet is a physical infrastructure made of cables and routers under the physical control of different people and different organizations, and managed by DNS servers, themselves controlled by different people and organizations. The uncomfortable truth is that not many people need to make bad decisions for the Internet to become something very different from what it is today, if not go away completely. Just look how easy it was for several governments in recent years to take away the Internet, or significant parts of it, when they felt the need. Do you think this is only a third-world problem, that it will never happen in your country? I wouldn’t be so confident.

The famous cases of government censorship of the Internet are a worrying trend, but they are only the most visible part of the problem. It’s been a long time since the Internet was a way for computer geeks from (kind of) all around the world to talk about science and programming and Star Trek. The Internet grew into a huge beast, powered by lots and lots of money. And with all that money lying around, each of the companies that control some of that physical infrastructure has a strong incentive to keep looking for ways to ensure that they can make money from the Internet and you cannot. And as more and more users go online, we get more and more people wanting to protect us from ourselves – how horrible would it be if we, not to mention our children, were to accidentally read some dangerous or impolite information. The combination of these two trends creates a dangerous situation – imagine some of the groups controlling the infrastructure – governments, service providers, content creators, etc. – deciding to use their control to create a monopoly or cartel; what would be easier than crying out that our children are exposed to dangerous materials on the Internet, and that therefore we need to regulate it? Deciding that from now on, creating a website requires a license from the government? We’re seeing more regulations, and more calls for regulation, every year. It starts with ridiculous things like the EU cookie law, continues with laws requiring accessibility or comment regulation, and in the end, why wouldn’t they require a license to run a website? After all, how many voters actually run their own website and would be bothered by that?

In that sense, Internet regulation will work the same way any other “consumer protection” regulation works – most people prefer to be consumers rather than producers, so they’re happy to throw any possible legislation and limitation at the producers. Everything sounds good on paper – why should we have hate speech on our Internet? Why shouldn’t every website be accessible to everyone? Why should we tolerate fake news? The big companies will be very happy to suggest every possible limitation to the lawmakers, who in turn will be happy to tell their voters how safe they are making the Internet for them. And individual website owners? They’ll be driven away, just like any small business in a heavily regulated environment. Facebook and Amazon will be happy to get more and more regulation on the Internet, because they can afford the lawyers and designers needed to comply with everything (not to mention the lobbyists who write those regulations to begin with). Your neighbour’s fifteen-year-old son, who used to be the source of a significant percentage of websites in the early days of the Internet, is going to give up because he can’t take the risk. And the Internet will become the private property of some big companies. Is that a likely scenario? In my view it’s not only likely – I would be amazed if it does not happen within the next thirty years. I don’t think someone like me will still be able to publish on the Internet by then.

And really, I cannot blame anyone. It’s not some evil conspiracy; it’s more like a natural progression of technologies. The Internet has become too big to remain that idealistic commune of computer geeks it was in the nineties. The environment where we access our bank accounts and pay taxes to our governments needs stricter rules than a forum where people talk about algorithms for sorting numbers. The vast majority of Internet users have no interest in freedom of information, or net neutrality, or their ability to create their own pages – they just want to browse the news websites and social networks given to them by whichever big company they don’t know. This might also include many of you, the readers, who might not even understand who that “neighbour’s fifteen-year-old son” from the previous paragraph is, because maybe you were not in the generation or the environment where that stereotype existed – maybe you never experienced those times when browsing the Internet would more often than not take you to the personal website of some teenager who set up a page to talk about something he or she was interested in. And it probably doesn’t even sound appealing to you – it was badly designed, unreliable, low quality – but this is what freedom looks like. Freedom is messy and badly designed, and most people don’t want it – on the Internet or elsewhere. I can’t count how many times I’ve heard the claim that Facebook overtook MySpace because MySpace allowed users “too much freedom” in designing their profile pages, leading to bad-looking pages. Well, this is the direction things are going to go – we will not have the ability to design things for ourselves, and most people will be perfectly fine with it.

For all those reasons, I actually wonder if all those Internet freedom and anti-censorship activists wouldn’t do better to change their goals – instead of trying to protect the Internet from these trends, maybe it’s better to concede defeat and work on creating some sort of alternative Internet alongside the normal one. Some sort of mini-Internet without any of the power of the main one, but with the freedom; a cheap Internet that probably won’t be strong enough to deliver video or secure enough for financial transactions, and will therefore be dull and boring, and no one will want to use it except computer geeks who want to talk about science with each other. In other words, the nineties Internet, made better with some of the new ideas that have come up since then, but without the money.

I think a reasonable parallel might be made with radio – I don’t know of any technological difference between radio and the Internet that would explain the fact that on the radio we only listen, but on the Internet we also write. When the technology was just beginning, every radio geek could broadcast whatever they wanted, with or without listeners. Eventually it became too big to be managed this way and became regulated, and as part of that regulation some frequencies were given to those geeks to communicate with each other, while the rest of the world listens to whatever the radio equivalent of Facebook is. It’s called ham radio, and I’ve personally never tried it, but it sounds nice. I think it’s not unlikely that this is the direction the free Internet is going, which is a little unfortunate, but we need to accept our situation and take what we can. We cannot run these cables all around the world ourselves – the big money does that, and we need to play by big money’s rules.

Rational fiction and emotional fiction

Posted by שי שפירא on June 26, 2017
Posted in: Books, Morals. Leave a comment

I’ve recently finished reading “The Architect’s Apprentice” by Elif Shafak, which made me think about the difference between the kind of fiction enjoyed by computer geeks like myself and the kind of fiction enjoyed by other people. It’s remarkable how different the two groups are – when I meet a computer programmer, or some other person clearly versed in the world of computer geeks, there is a striking number of books I can be sure they have read (or are at least strongly aware of), movies they have watched, ideas they know, vocabulary they will understand. And most or all of those things tend to be completely foreign to anyone else. What is it that makes some stories appeal to this particular group? This book made me think about a possible answer.

The book is very well written – in many ways, her writing style is exactly the kind of style I would want to use if I ever write fiction. Focused, dynamic, chaotic. But there is one important difference between this book and the kind of books that make it onto a geek’s reading list – her characters are, without exception, motivated by emotions. This fits into a theory that’s been developing in my mind about what makes computer geeks such a defined subculture – geeks read about people motivated by goals, ideas and ideologies; non-geeks read about people motivated by feelings.

(Note: Some spoilers ahead for The Architect’s Apprentice and for Umberto Eco’s The Name of the Rose. I’ll try to keep them as vague as possible)

I really started realizing this at the end of the book, where it turns into a kind of detective story, making me draw some parallels to what I consider to be the best detective story I’ve ever read – The Name of the Rose by Umberto Eco. While The Architect’s Apprentice (unlike The Name of the Rose) does not start as a detective story, the final act in both books is quite similar. And that is where the difference comes into play.

Umberto Eco’s William of Baskerville is a quintessential geek’s protagonist. He is an outside observer – he has no emotional investment in his investigation. He is rational to the point of exaggeration, avoiding any part of his personal life being put into the story. We know very little about him not because he hides anything, but because it’s not important. He does not want the story to be about him; he wants it to be about the investigation. Adso of Melk is slightly less rational than him, but not for lack of trying. He clearly looks up to William as the proper model for how to behave. And most importantly, this is not true only for the heroes, but also for the villains – the story is full of ideological debates, and the final act reveals the ideological debate behind the entire mystery. I believe that was not too much of a spoiler, because a geek would not expect otherwise – of course a murder mystery would be about ideology; why else would you have a great story?

Compare this with the final act of The Architect’s Apprentice, so similar in style – the protagonist learns the truth, connects the dots and confronts the antagonist to learn the whole story. But that story is based on completely different motivations. Jahan does not seem to believe in anything. He does not care about the world around him, beyond his love or admiration for some people and his hatred or fear of others (love, admiration, hatred, fear – in other words, emotions). He does the work he is ordered to do and tries to get better at it – though any inspiration he feels towards the professional study of architecture seems secondary to his admiration for his master, which gives him the real motivation to advance. When he discovers the conspiracy in the final act, he does not want to fight it to protect the empire, or to destroy the empire, or to make some change in the world; he wants to uncover the conspiracy out of anger (again, an emotion) at the wrongs done to him and his loved ones. It’s not that he lacks empathy or the desire to help others, but he does it in a fundamentally emotional way – he wants to help people after seeing their suffering. Several times in the book he encounters a suffering character and tries to help them; once the sufferer is out of sight, the issue is over. In the final act, we also get to learn the antagonists’ motives: again, emotions. Every single one of them. Anger, love, revenge; every action of the antagonists is motivated by the desire to hurt someone they are personally angry at, over a personal issue between them.

These two are only representative examples of this difference. Let’s consider some other works of fiction that are considered part of the “geek bookshelf”, so to speak: Frodo Baggins‘s main emotion is fear, and his story is all about conquering it to act for the greater good, to which he is directed by a stream of stoic, rational people who rarely express any emotion at all – Aragorn, Gandalf, Elrond, etc. His antagonist is basically pure evil – Sauron does not seem to be insulted, or angry, or greedy: he is simply pure evil, and acts like a force of nature with no emotions. Emotions do exist in their story, but as secondary phenomena that can be enjoyed where possible, or must be conquered when necessary – the successful character growth of each of the younger characters lies in learning to control their emotions: Frodo his fear, Pippin his hastiness, Boromir his pride.

Eddard Stark is fully motivated by maintaining the traditional justice of his kingdom, and even his duty to his family does not take precedence over it. Stannis Baratheon, Daenerys Targaryen and their followers all do the same, though for different traditions. Of those who don’t act to preserve traditional power, many act to overthrow it in the name of some moral worldview – Varys, Beric, Mance. The few characters who work for pure personal gain either try to hide it, or are seen as mindless pawns in other people’s stories. How about Neo? His choice of “red pill versus blue pill” became a universal symbol for choosing the greater good over personal gain, but in reality that greater good meant abandoning everything and everyone he knew (and loved, and hated, and felt any emotion towards); yet it is obviously the right choice, for the geeky audience. Sarah Connor certainly has no time for emotional decisions as she’s running for her life, much like countless other rational heroes.

Meanwhile, who do non-geeks watch? James Bond is just as preoccupied with saving the world as Neo or Frodo, but he has no need to conquer his emotions; on the contrary, he celebrates them. He is loved for being so powerful that he does not even need to fight his weaknesses and personal interests. Superman has little trouble interrupting his attempts to save the world to do something for the woman he loves, and Spider-Man is much more famous for his emotional breakdowns than for anything he did for the world. This goes a long way back in time: Achilles makes his decisions regarding the life or death of countless others based on his anger towards Agamemnon or his love for Patroclus. This would make him a ridiculous (or possibly tragic) side character in a rational story, but it is celebrated in an emotional story.

And these are just the rare examples of emotional heroes going to save the world. Saving the world is an extremely common motivation in geeky works of fiction, but extremely rare in others, because saving the world requires strength and sacrifice, and those come at the expense of indulging one’s emotions. It’s not only loving relationships that suffer – indeed, John Sheridan‘s love for Delenn or Benjamin Sisko‘s love for his son must be put in second place as they risk their lives to save the universe; but other emotions can be equally tempting. Indulging in one’s depression, anxiety or envy: while these don’t sound like “pleasurable” things to do, they are easier choices than taking responsibility. The emotional protagonist will not go to save the world, because saving the world means they cannot stop at any point and complain about how unfair their situation is; they cannot be angry at their commanding officer and decide not to work with them anymore. When they fail, they must accept it and move on; there is no time to doubt themselves and punish themselves for their failures. When they need help, they must try to solve their own problems if possible; if not, they must swallow their pride and ask for help from others, rather than (as an emotional character would do) stubbornly stay alone and suffer. The emotional hero will have none of that, and is celebrated for it: sacrificing the world for their love, wallowing in angst over their difficulties, maintaining grudges against others, and refusing to ask for help even when it is clearly the right thing to do – all of these are integral parts of the behaviour of the emotional hero, and are often described and praised as “human” reactions, unlike the “robotic” reactions of the rational hero.

It’s not that emotions don’t exist in rational fiction, or that rationality does not exist in emotional fiction; the difference is in what the characters aspire to. In a rational story, a character’s arc will be about conquering their emotions to achieve their goals. In an emotional story, the arc has them indulging in their emotions – sometimes while achieving their goals, sometimes while failing, sometimes without having a goal at all – because for these stories, the emotions are the center, and people are the center. For geeks, it’s about the bigger picture. More often than not, it’s about saving the world, no matter the personal cost.

In defense of the human mind

Posted by שי שפירא on June 12, 2017
Posted in: Democracy, Software. Leave a comment

Lately, three trends have been growing stronger and stronger in technological and political news; all three come from different sources and represent different ideas, but in my opinion all three share a very important and fundamental characteristic – a loss of trust in the human mind, and in its capability to make decisions and manage its own life.

One trend is the increasing worry about the dangers of artificial intelligence research eventually creating a superintelligent being that makes humans obsolete. Of the three trends, this is the only one with which I don’t disagree in principle, only in some details, and I will write a separate post about that soon.

The second is the rising estimation of the power wielded by big Internet companies, especially Facebook and Google – estimation expressed both by their admirers, who describe their algorithms as some sort of magic powers, and even more by their critics, who describe them as some sort of dystopia coming to consume us. What hasn’t been said about them? They know everything about us. They can convince us of anything. They are omnipresent, we cannot escape them. See for example Tristan Harris’s appearance on Sam Harris’s podcast, or Tim Berners-Lee’s letter for the 28th birthday of the World Wide Web.

The third is the increasing tendency of intellectuals throughout the developed world to speak, either explicitly or in hints, against democracy. I don’t know if it can be said to have started there, but the UK’s Brexit referendum and the election of Donald Trump as president of the USA have definitely opened the floodgates, with countless people complaining about the idea of “uneducated” people making important decisions by themselves. This includes some people I greatly appreciate and respect (most notably, and unfortunately, Richard Dawkins).

All three add up to a fairly consistent future: humans are primitive and unnecessary, and will eventually be replaced by something better. Now, don’t get me wrong – I’m definitely not an idealist who will speak poetically about the “majestic” nature of human thought. If it is true that humans are powerless against some algorithms stumbled upon by some Google engineers, we need to accept that and respond accordingly. But how true is that? My very strong impression is that people accept it as true without much scrutiny, and with very little will to fight for the future of their human brains. I suspect there’s something comforting for our current generations in thinking that they are powerless cogs in some machine that runs very well without them, relieving them of any responsibility.

Let’s look at Tristan Harris’s worries (sorry for picking on him; it’s because he’s the one giving the most rational, detailed claim about this, which makes it possible to argue with, unlike the simple alarmism that constitutes most of this discourse). He talks about our instincts being “abused to control us“. About technology “hijacking our psychological vulnerabilities“. Why is that? Because websites are getting better and better at using tricks to get our attention. He lists a series of methods, backed by psychological studies, that can be used to persuade people of things without them understanding how those methods work.

The problem is, there is nothing new about this. The idea that persuasion can be achieved not only by logical debate but also by mind tricks is thousands of years old, as are the complaints against it. Rhetoric has been considered a field of study at least since Aristotle, and I cannot see any way in which Facebook’s “manipulation” methods are different from the ones described by him – the same techniques Harris describes can be used by a human just as well as by an algorithm, and the absurd thing is that Harris himself admits this, by comparing it to his own past occupation as a magician. In that sense, the idea that “manipulating” algorithms and “fake news” must be made to stop, or that they prove that democracy is unsustainable, becomes amazingly repetitive – it’s the same old anti-democratic argument raised against the first democracy in the world. The intellectuals offering these complaints are again playing the role of Plato or Aristotle complaining about the “sophists” who can convince the masses of anything, demonstrating the need for the wise philosopher king to educate the masses in true virtue. While that has always sounded good in theory, millennia have proven how wrong this kind of thinking is, and how democracy, despite its (very real) shortcomings, is still the best system for ruling our societies.

And here is the advantage of understanding how old this problem actually is – we don’t need to invent new solutions. We can look at the old ones and see which of them work. How can we deal with sophists? Harris’s solution is to make a list of behavioural flaws we should demand they avoid; a reasonable thing to do, but hardly a solution. Just as we don’t expect every person we meet to simply adopt a list of demands we present to them, we should not expect it from every software company. Tim Berners-Lee’s suggestion of asking Google and Facebook to act as “gatekeepers” is even worse, when we think about it in this way – would we want to assign any company to decide who gets to talk to us and who doesn’t? This is basically the philosopher king coming back, in CEO form.

So what does work? For many centuries, the answer has been one of the most fundamental ideas of Western political philosophy – the way to defeat bad ideas is not by outlawing them, but by debating them and suggesting good ideas instead. Few people argue with this philosophy in general, but it’s very easy to forget to apply it every time the bad idea puts on different clothing – in this case, we are supposed to believe that the fact these bad ideas come from algorithms rather than people somehow makes a difference. I’m still waiting to hear what that difference is. Advertisements have existed as long as capitalism has, and our society survived them. Now the advertisers have more information about us? So does a door-to-door salesman, who sees where you live, what you look like, and how you speak as he tries to convince you to buy some garbage. Harris is worried about studies showing you can trick people into eating 73% more soup. How much more soup can a sophist convince you to eat? How much soup can Tristan Harris the magician convince you to eat? I want to see those studies. If you don’t compare the algorithms’ persuasion power to a sophist’s persuasion power, you cannot say that the former deserves a different treatment than the latter.

So many people want us to think we’re powerless against a scary world – some tell us politics is too complicated so we should just stay in our little corner and let the experts do the thinking for us, and some tell us that advertisements are too clever so we need to close our eyes until the experts decide what we can be trusted to see. I say – if you’re going to say that human beings cannot handle an advertisement without being brainwashed, you’ll need some better evidence than what we have today. And if that’s not the case, I say let’s do something else – let’s take responsibility for our own minds and our own lives. Let’s learn more about our political systems and make better choices about them. Let’s learn more about the mind tricks used by advertisers so we won’t fall for them. A good place to start (other than Daniel Kahneman’s fantastic books), ironically, is Tristan Harris’s own essays – he gives a very nice description of some of those marketing tactics. I only wish it ended not with “If you want your Agency, you need to tell these companies that that’s what you want from them”; I wish it ended with “now you know what to watch out for; so let’s take some personal responsibility and think for ourselves”. Convincing you to buy a toaster or to vote for a candidate is a small victory for advertisers; Making you think you have no agency until you ask it from them – that’s a huge victory for them, and a loss for you.

Welcome

Posted by שי שפירא on May 30, 2017
Posted in: Blog. 3 Comments

After almost a year of blogging in Hebrew, I’ve decided it’s time to move on to the big league. Too many interesting topics just don’t have a big enough audience in my mother tongue, and if I’m going to be part of the interesting and relevant discussion about these issues, English is the way to go.

So, for all you new readers, who am I?

My name is Shai Shapira. I’m a computer programmer and writer, originally from Israel, but I have been moving from place to place for many years now. I’ve worked on information systems and gaming in the past (and still do some in the present), but my real passion is using programming and other engineering skills to make the world a better place and to advance human knowledge. Which is part of what I’m going to be talking about in this blog.

My interests:

I’m fascinated with understanding how the world works. My Hebrew blog focuses mostly on politics and economics, which I love to study from an engineering point of view – understanding the facts and numbers that make everything run. While I try to do more observation and less debate, I’m definitely not impartial – I’m a staunch supporter of liberty, peace and democracy (of course, all three words mean different things to different people; you’ll need to read me for a while to understand what I mean by them), and that is often the lens through which I observe the political world.

Beyond the human world, I’m fascinated even more with the natural world. Every stone, leaf or insect around us is a complex machine, and I want to understand them. Many people like to approach nature in awe, talking about its majestic power; I prefer to be amazed by how rational and simple things become when understood at a low enough level. Reaching this low level where things start to make sense requires a lot of learning, and one of my goals is to reach and simplify that learning.

I’m fascinated with technology, but not the popular kind; I learned the satisfaction of making something (in my case, computer code) and seeing it work by itself probably before I was 10 years old. My interest in computing, as well as in physical machines, has only grown stronger since, but it is getting increasingly separate from the commercial world of “technology”, whose trends seem to me ridiculous at best, frightening at worst. I tend to assume that one day I’ll be the information-age equivalent of the Hollywood archetype of the middle-aged man working in his garage on a decades-old car, insisting on fixing and creating things by himself rather than using the commercial options. And hopefully I’ll be better at it.

Most of all, I’m fascinated with the process of knowledge acquisition itself. Our time on Earth is too limited to learn all the things I would like to learn – but only if we assume our current way of learning is the only one. One of my ancestors from 10,000 years ago would probably have thought they would need their whole life to learn all the information we currently learn by the end of elementary school; different ways of representing, acquiring and using knowledge make it much easier for us. And I strongly believe there is more to be done there, and that our descendants will someday be amazed by our ignorance. My prototype is language learning; becoming a hyperpolyglot serves as my testing ground for understanding how to acquire information more efficiently, using information theory, mnemonics, or any other tool I can harness for it.

I hope you’ll enjoy reading.
