Lately, three trends have been growing stronger in technological and political news. All three come from different sources and represent different ideas, but in my opinion they share a fundamental characteristic – a loss of trust in the human mind, and in its capability to make decisions and manage its own life.
One trend is the increasing worry that artificial intelligence research will eventually create a superintelligent being that makes humans obsolete. Of the three trends, this is the only one I don’t disagree with in principle, only in some details, and I will write a separate post about it soon.
The second is the rising estimation of the power wielded by big Internet companies, especially Facebook and Google – an estimation expressed both by their admirers, who describe their algorithms as a kind of magic, and even more by their critics, who describe them as a dystopia coming to consume us. What hasn’t been said about them? They know everything about us. They can convince us of anything. They are omnipresent; we cannot escape them. See, for example, Tristan Harris’s appearance on Sam Harris’s podcast, or Tim Berners-Lee’s letter for the 28th birthday of the World Wide Web.
The third is the increasing tendency of intellectuals throughout the developed world to speak, explicitly or in hints, against democracy. I don’t know if it can be said to have started there, but the UK’s Brexit referendum and the election of Donald Trump as president of the USA have definitely opened the floodgates, with countless people complaining about the idea of “uneducated” people making important decisions for themselves. This includes some people I greatly appreciate and respect (most notably and unfortunately Richard Dawkins).
All three add up to a fairly consistent picture of the future: humans are primitive and unnecessary, and will eventually be replaced by something better. Now, don’t get me wrong – I’m definitely not an idealist who will wax poetic about the “majestic” nature of human thought. If it is true that humans are powerless against some algorithms stumbled upon by Google engineers, we need to accept that and respond accordingly. But how true is it? My strong impression is that people accept it as true without much scrutiny, and with very little will to fight for the future of their human brains. I suspect there’s something comforting for our current generations in believing they are powerless cogs in a machine that runs very well without them, absolving them of any responsibility.
Let’s look at Tristan Harris’s worries (sorry for picking on him; it’s because he makes the most rational, detailed claim about this, one that is actually possible to argue with, unlike the simple alarmism that constitutes most of this discourse). He talks about our instincts being “abused to control us”. About technology “hijacking our psychological vulnerabilities”. Why is that? Because websites are getting better and better at using tricks to capture our attention. He lists a series of methods, backed by psychological studies, that can be used to persuade people without their understanding how those methods work.
The problem is, there is nothing new about this. The idea that persuasion can be achieved not only by logical debate but also by mind tricks is thousands of years old, as are the complaints against it. Rhetoric has been a field of study at least since Aristotle, and I cannot see any way in which Facebook’s “manipulation” methods differ from the ones he described – the same techniques Harris describes can be used by a human just as well as by an algorithm, and the absurd thing is that Harris himself admits this, by comparing it to his own past occupation as a magician. In that sense, the idea that “manipulating” algorithms and “fake news” must be stopped, or that they prove democracy is unsustainable, becomes amazingly repetitive – it’s the same old anti-democratic argument raised against the first democracy in the world. The intellectuals offering these complaints are again playing the role of Plato or Aristotle complaining about the “sophists” who can convince the masses of anything, thereby demonstrating the need for a wise philosopher king to educate the masses in true virtue. While that has always sounded good in theory, millennia have shown how wrong this kind of thinking is, and how democracy, despite its (very real) shortcomings, remains the best system for ruling our societies.
And here is the advantage of understanding how old this problem actually is – we don’t need to invent new solutions. We can look at the old ones and see which ones work. How do we deal with sophists? Harris’s solution is to compile a list of behavioural flaws we should demand these companies avoid; a reasonable thing to do, but hardly a solution. Just as we don’t expect every person we meet to simply adopt a list of demands we hand them, we should not expect it from every software company. Tim Berners-Lee’s suggestion of asking Google and Facebook to act as “gatekeepers” is even worse, when we think about it this way – would we want to assign any company the power to decide who gets to talk to us and who doesn’t? This is basically the philosopher king coming back, in CEO form.
So what does work? For many centuries, the answer has been one of the most fundamental ideas of Western political philosophy – the way to defeat bad ideas is not to outlaw them, but to debate them and offer good ideas instead. Few people argue with this philosophy in general, but it’s very easy to forget to apply it whenever the bad idea puts on different clothing – in this case, we are supposed to believe that the fact that these bad ideas come from algorithms rather than people somehow makes a difference. I’m still waiting to hear what that difference is. Advertisements have existed as long as capitalism has, and our society survived them. Now the advertisers have more information about us? So does a door-to-door salesman, who sees where you live, what you look like, and how you speak as he tries to convince you to buy some garbage. Harris is worried about studies showing you can trick people into eating 73% more soup. How much more soup can a sophist convince you to eat? How much soup can Tristan Harris the magician convince you to eat? I want to see those studies. If you don’t compare the algorithms’ persuasive power to a sophist’s, you cannot claim that the former deserves different treatment from the latter.
So many people want us to believe we’re powerless against a scary world – some tell us politics is too complicated, so we should just stay in our little corner and let the experts do the thinking for us, and some tell us that advertisements are too clever, so we need to close our eyes until the experts decide what we can be trusted to see. I say: if you’re going to claim that human beings cannot handle an advertisement without being brainwashed, you’ll need better evidence than what we have today. And until that evidence arrives, I say let’s do something else – let’s take responsibility for our own minds and our own lives. Let’s learn more about our political systems and make better choices about them. Let’s learn more about the mind tricks used by advertisers so we won’t fall for them. A good place to start (other than Daniel Kahneman’s fantastic books), ironically, is Tristan Harris’s own essays – he gives a very nice description of some of those marketing tactics. I only wish they ended not with “If you want your Agency, you need to tell these companies that that’s what you want from them”, but with “now you know what to watch out for; so let’s take some personal responsibility and think for ourselves”. Convincing you to buy a toaster or vote for a candidate is a small victory for advertisers; making you think you have no agency until you ask them for it – that’s a huge victory for them, and a loss for you.