It has been a staple of common wisdom that artificial intelligence (AI) will be something like Data, the android from Star Trek–rational, largely incapable of emotion, and even questioning the utility of emotions. The Vulcans, also of Star Trek, are a non-AI example of the same rationalist ideal. In general, the idea that reason is superior to emotion has been our legacy since the Age of Reason, and Star Trek is hardly the only example of it in fiction. Science fiction abounds with super-rational AIs that are “programmed” to do this or that. If they are allowed to have emotions at all, it’s treated as something exceptional, along the lines of “but they can, after all, feel emotions–oh, how adorably human-like!”
In contrast, I claim that when AIs finally reach our level of sophistication, they will be at least as emotional as they are rational, if not more so. Moreover, I claim that a purely rational mind of any decent level of complexity is not just unlikely but patently impossible. But to begin with, what does a “decent level of complexity” entail?
In modern thermodynamics, there is an important distinction between systems near equilibrium and systems far from equilibrium. An example of the former is hot water mixed with cold water and left to stand for a while: very soon, the water reaches a uniformly lukewarm state–an equilibrium. An example of the latter is Earth’s weather, which keeps churning and is never still–so much so that it exhibits the butterfly effect, whereby an infinitesimally tiny deviation can make an enormous difference in the long run.
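To make the butterfly effect concrete, here is a minimal Python sketch of my own–not anything from the thermodynamics literature–using the classic Lorenz toy model of atmospheric convection, with an arbitrary one-part-in-a-billion nudge and a crude integration scheme:

```python
# Two runs of the Lorenz system that start one part in a billion apart.
# The parameter values, step size, and size of the nudge are arbitrary
# choices for the illustration; only the divergence itself is the point.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one (crude) Euler step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)          # one trajectory
b = (1.0 + 1e-9, 1.0, 1.0)   # the same trajectory, nudged ever so slightly

for step in range(12001):
    if step % 3000 == 0:
        print(f"t = {step * 0.005:5.1f}: |x_a - x_b| = {abs(a[0] - b[0]):.2e}")
    a = lorenz_step(a)
    b = lorenz_step(b)

# The gap starts at 1e-09 and ends up of order 10: the imperceptible nudge
# eventually dominates the entire outcome.
```

Nothing of the sort happens with the lukewarm water: perturb it a little, and it settles right back into the same bland state.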
When the science of thermodynamics was still young, its pioneers–Ludwig Boltzmann among them–started with systems near equilibrium. That’s what scientists do: start with simple cases to discover the basic principles, because it is easier to abstract away the (seemingly) irrelevant details. Among other things, Boltzmann gave a statistical explanation of the second law of thermodynamics: in a closed system (or, more generally, in a system near equilibrium), entropy grows with time.
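In symbols–a standard textbook formulation, added here for reference rather than anything specific to this argument–the second law and Boltzmann’s statistical reading of entropy look like this:

```latex
\frac{dS}{dt} \ge 0 \quad \text{(isolated system)},
\qquad
S = k_B \ln W
```

Here \(W\) is the number of microscopic arrangements compatible with the macroscopic state and \(k_B\) is Boltzmann’s constant: entropy is, roughly, a count of the ways to be disordered, and for an isolated system that count only grows.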
What he didn’t know is that open systems far from equilibrium spontaneously self-organize, exchanging entropy with their environment in such a way as to decrease their own entropy on average. That is the discovery for which Ilya Prigogine received his Nobel Prize–something so fundamental that it ought to have been called the fourth law of thermodynamics. This is what weather systems do, what living organisms do, what societies of self-aware sentients do. Equilibrium is death: a society that stagnates, a living organism that dies, a climate that settles into a uniform state, like the water we mixed earlier. Boltzmann couldn’t explain why the universe, with entropy ever increasing, hasn’t reached the state of “heat death”–but Prigogine told us why.
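Prigogine’s point can be put as one line of entropy bookkeeping–again a standard formulation, added for reference: the entropy change of an open system splits into what is produced inside it and what is exchanged with the surroundings.

```latex
dS = d_iS + d_eS, \qquad d_iS \ge 0
```

The production term \(d_iS\) can never be negative–that is the second law–but the exchange term \(d_eS\) can be, and when the system exports enough entropy to its environment, its own total entropy goes down. Self-organization in one formula.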
When is a system far from equilibrium? Without going into too much scientific detail, here is a rule of thumb: it is when the butterfly effect is possible. With standing water, hot mixed with cold, no tiny deviation makes any difference. In our brains, billions of metaphorical needles stand on end, deciding every microsecond which way to fall–and even the tiniest deviation can often (though not always) make a great deal of difference.
Now, back to AI. When we first began building computing systems, we started, as Boltzmann did, with something simple. We created computers that are far better than us at arithmetic, and we still use them for simple stuff–just look at what most software development is about (hint: primarily managing simple workflows and building websites). So it is no wonder that the computers we know are all dead–in the sense of being near equilibrium. The impression that computers are more powerful than our brains is deceptive; it holds only because we haven’t yet applied them to problems that are truly difficult. Even our AI research remains rudimentary: we are inordinately proud of building a computing system that can recognize our handwriting–something we do without breaking a sweat (doctors’ prescriptions notwithstanding), and something we only achieved with neural nets (a timid imitation of our brains) at that.
As soon as we get into more complex territory–problems that require our brain’s level of complexity, problems that exhibit the butterfly effect–we will find that our AIs (the true AIs, not the dead stuff that goes by that name today) will have to be emotional as well as rational. This is because emotions evolved to make snap decisions possible in unpredictable environments with the butterfly effect and incomplete information–whereas the problems we are solving with computers today are fairly well circumscribed. In systems far from equilibrium, where the butterfly effect reigns, even infinite computing power won’t help: the results diverge wildly if you merely round all computations at a different decimal place, so they are unpredictable in principle. Never mind the lack of information–what’s unknown plus unknown, eh?
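Here is what I mean, as a toy Python sketch of my own (using the logistic map as a stand-in for any chaotic computation–the precisions and starting point are arbitrary):

```python
# The same deterministic rule (the logistic map at r = 4, a standard chaotic toy),
# started from the same point, but with every intermediate result rounded to a
# different number of decimal places.

def run(decimals, x=0.4, r=4.0, steps=60):
    """Iterate x -> r * x * (1 - x), rounding after every step."""
    history = []
    for _ in range(steps):
        x = round(r * x * (1.0 - x), decimals)
        history.append(x)
    return history

seven = run(decimals=7)
eight = run(decimals=8)

for n in (5, 15, 30, 60):
    print(f"after {n:2d} steps: 7-digit run = {seven[n - 1]:.6f}, "
          f"8-digit run = {eight[n - 1]:.6f}")

# Within a few dozen iterations the two runs disagree completely, even though
# no information was missing and the rule itself was perfectly known.
```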
Emotions have evolved to let us make decisions fast in situations of that sort. They are imperfect and can backfire horribly–but they work adequately more often than not. Once we have true AIs solving problems of that sort–computing systems that are far from equilibrium–they will need a mechanism like that as well. Pure reason is the domain of systems near equilibrium–dead and dumb. We will need systems like that too–systems that run on longer time scales, systems that don’t need to make fast decisions in unpredictable environments. But that cannot be true AI.
A true AI will be at least as emotional as rational–if not more so. In my own science fiction you will not see purely rational AIs at all, not even for comic relief, with the possible exception of a system like the one just described, which runs on a longer time scale and doesn’t have to move–which is to say, it is more like a plant than an animal (and this, by the way, is why animals have brains and plants do not: brains evolved to predict the outcomes of movement). The old but thoroughly wrong stereotype of the super-rational AI must be demolished utterly.
I’m calling on all science fiction authors to fight it hard, with everything they’ve got. And I don’t mean writing more novels where robots prove they have emotions “after all”–but novels where they cannot not have emotions in principle, where no one even assumes at the outset that they could be anything but emotional.
More generally, it is time to put the old and equally wrong dichotomy between human and machine to rest as well. Just as our first computers are dead and dumb because we began with simple problems, technology in general began with dead and dumb things because it had to start simple. But the next stage in the progression from hand crafts to manufacturing to 3D printing is… growth. Only systems that evolve can be complex enough to be far from equilibrium; complete reproducibility–the hallmark of today’s technology–is near equilibrium by definition.
Ilya Prigogine’s law–the unofficial fourth law of thermodynamics, the law about systems far from equilibrium–is the generalized evolutionary principle, after all.