The Human Condition:

AI and Emotion – April 14, 2019


In the extended Star Trek series, Mr. Spock and the other Vulcans display a rigid adherence to pure logic¹ and either rejection or active repression of their humanoid emotions. This sort of character presents an attractive gravitas: sober, thoughtful, consistent, dependable, undemanding, and loyal. It would seem that, if human beings could just be cleansed of those fragile, distracting, interfering emotions, they would be made more focused, more intelligent, and … superior.

Certainly, that is one of the attractions of the computer age. If you write a program, test and debug it properly, and release it into the world, it will usually function flawlessly and as designed—apart from misapplication and faulty operation by those same clumsy humans. Known inputs will yield known outputs. Conditional situations (if/then/else) will always be handled consistently. And, in the better programs, if unexpected inputs or conditions are encountered, the software will return an error or default result, rather than venturing off into imagined possibilities. Computers are reliable. Sometimes they are balky and frustrating, because of those unknown inputs and aberrant conditions, but they are always consistent.
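
As a minimal sketch of that determinism (the function, inputs, and thresholds here are invented for illustration, not taken from any real program):

```python
def classify_reading(value):
    """Toy example of deterministic handling: the same input
    always yields the same output."""
    if value < 0:
        return "error: negative reading"  # unexpected input -> explicit error, not a guess
    elif value < 10:
        return "low"
    elif value < 100:
        return "normal"
    else:
        return "high"

# Known inputs yield known outputs, every single run:
assert classify_reading(5) == "low"
assert classify_reading(50) == "normal"
assert classify_reading(-1) == "error: negative reading"
```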

Our computer science is now entering the phase of creating and applying “artificial intelligence.” Probably the most recognizable of these attempts—in the real world, rather than in the realm of science fiction—is IBM’s Watson computer. This machine was designed to play the television game show Jeopardy. For this effort, its database was filled with facts about and references to history, popular culture, music, science, current events, geography, foreign languages—all the subjects that might appear on the game board. It was also programmed with language skills like rhymes, alliterations, sentence structure, and the requisite grammatical judo of putting its answer in the form of a question. Although I don’t know the architecture of Watson’s programming myself, I would imagine that it also needed a bit of randomness, the leeway to run a random-number generator now and then—effectively rolling the dice—to make a connection between clue and answer based on something other than solid, straight-line reference: it occasionally had to guess. And it won.
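
Since that description of Watson’s internals is only the author’s speculation, here is a purely hypothetical sketch of such controlled guessing: take the highest-confidence candidate, and roll the dice only when nothing clears a threshold. The function name, threshold, and scores are all assumptions:

```python
import random

def choose_answer(candidates, threshold=0.8):
    """candidates: list of (answer, confidence) pairs.
    Answer confidently when possible; otherwise guess."""
    best_answer, best_conf = max(candidates, key=lambda c: c[1])
    if best_conf >= threshold:
        return best_answer                 # solid, straight-line reference
    # Roll the dice among anything close to the best score:
    near_best = [a for a, conf in candidates if conf >= best_conf - 0.1]
    return random.choice(near_best)        # occasionally, it has to guess

print(choose_answer([("Who is Mozart?", 0.55), ("Who is Salieri?", 0.50)]))
```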

IBM is now using a similar computer architecture with Watson Analytics to examine complex accounting and operational data, identify patterns, make observations, and propose solutions to business users. Rather than having a human programmer write a dedicated piece of software that identifies anticipated conditions or anomalies in a specified data field, this is like having a person with a huge memory, fast comprehension, and no personal life at all look at the data and draw insights. Such “expert systems” from other vendors are already analyzing patient X-rays and sifting patient symptoms and biometrics from physical and laboratory testing against a database of diseases to identify a diagnosis and recommend a course of treatment.

And for all these applications, you want an emotionless brain box that sticks to the facts and only rolls the random numbers for an intuitive leap under controlled conditions. When you’re examining a business’s books or a patient’s blood work, you generally want a tireless Mr. Spock rather than a volatile Dr. McCoy.

But the other side of artificial intelligence is the holy grail of science fiction: a computer program or network architecture that approximates the human brain and gives human-seeming responses. This isn’t an analytical tool crammed with historical or medical facts to be applied to a single domain of analysis. It’s the creation of a life form that can resemble, emulate, and perhaps actually be a person.²

IBM’s Watson has no programmed sense of self. This is because it never has to interface directly, intelligently, or empathically with another human being, just objectify and sift data. Emotions—other than the intuitive leaps of that random-number generator—would only get in the way of its assignments. And this is a good thing, because Watson is never going to wake up one day, read a negative headline about the company whose operations it’s analyzing, and decide to skew the data to crash the company’s stock. Watson has no self-awareness—and no self-interest to dabble in the stock market—to think about such things. Similarly, a Department of Defense program based on chess-playing skills and designed to analyze strategic scenarios and game out a series of responses—“Skynet,” if you will—is not going to suddenly wake up, understand that human beings themselves are the ultimate threat, and “decide our fate in a microsecond.” All of that retributive judgment would require the program to have a sense of self apart from its analysis. It would need awareness of itself as a separate entity—an “I, Watson” or “I, Skynet”—that has goals, intentions, and interests other than the passive processing of data.

But a human-emulating intelligence designed to perform as a companion, caregiver, interpreter, diplomat, or some other human analog would be required to recognize, interpret, and demonstrate emotions. And this is not a case where a program could fake a response by drawing on a database of recorded responses to hypothetical abstractions labeled “love,” “hate,” or “fear.” Real humans can sniff out that kind of emotional fraud in a minute.³ The program would need to be self-aware in order to place its own interactions, interpretations, and responses in the context of another self-aware mind. To credibly think like a human being, it would need to emulate a complete human being.
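
To make the point concrete, here is a deliberately naive sketch of such a canned-response scheme, with every label and reply invented for illustration. The giveaway is the sameness: no sense of self, no context, the identical line every time:

```python
# A lookup table of recorded responses to emotion labels -- the kind of
# emotional fraud the paragraph above argues a real human would sniff out.
CANNED_RESPONSES = {
    "love": "I care about you very much.",
    "hate": "I find that deeply unpleasant.",
    "fear": "That worries me a great deal.",
}

def fake_emotion(label):
    # No self-awareness, no context: the same reply regardless of
    # who is speaking or what just happened.
    return CANNED_RESPONSES.get(label, "I do not understand.")

print([fake_emotion("love") for _ in range(3)])  # three identical replies
```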

In this condition, emotions are not an adjunct to intelligent self-awareness, nor are they a hindrance to clear functioning. Emotions are essential to human-scale intelligence. They are the result of putting the sense of self into real, perceived, or imagined situations and experiencing a response such as fear, anxiety, confusion, attraction and love, or repulsion and hate. In the human mind, which is always considering itself in relation to its environment, that response is natural and automatic. If the mind is defending or protecting the sense of identity or personal security, a fear or anxiety response is natural to situations that imply risk or danger. If the mind is engaging the social impulse toward companionship, community, or procreation, a love or hate response is natural to situations that offer personal association.

Emotions are not just a human response, either. Even animals have emotions. But, just as their intelligence is not as sophisticated as that of human beings, and their sense of self is more limited, so their emotions are more primitive and labile. My dog—who does not have complete self-awareness, or at least not enough to recognize her own image in a mirror, which she mistakes for another dog—still feels, or at least demonstrates, joy at the word “walk,” contentment and even love when she’s being stroked, confusion when my tone of voice implies some bad action on her part, and shame when she knows and remembers what that action was. She also puts her tail between her legs and runs off, demonstrating if not actually feeling fear, when I put my hand in the drawer where I keep the toenail clippers.⁴

Emotions, either as immediate responses to perceived threats and opportunities, or as enduring responses to known and long-term situations, are a survival mechanism. In the moment, they let the human or animal brain react quickly to situations where a patient course of gathering visual, audible, or scent cues and thoroughly interpreting or analyzing their possible meaning would be too slow for an appropriate response. In the longer term, emotional associations provide a background of residual reinforcement about decisions we once made and reactions we once had, the ones we would benefit from remembering in the moment: “Yes, I love and am allied with this person.” “No, I hate and distrust this person.” “Oh, this place has always been bad for me.” Emotions bring immediately to the forefront of our awareness the things we need to understand and remember. As such, emotions are part of our genetic evolution applied to the structure and functioning of our animal brains.

Any self-aware artificial intelligence—as opposed to the mute data analyzers—will incorporate a similar kind of analytical shortcut and associational recall. Without these responses, it would be crippled in the rapid back-and-forth of human interaction, no matter how fast its analytical capabilities might be.
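
As a speculative sketch of that shortcut, imagine a cached “valence” score consulted before any slow deliberation; the cache entries, scores, and functions below are assumptions for illustration only:

```python
import time

# Associational recall: remembered situations carry a valence score
# (negative = bad for me, positive = good for me).
valence_cache = {"dark alley": -0.9, "old friend": +0.8}

def slow_analysis(situation):
    time.sleep(1)  # stands in for expensive, thorough deliberation
    return "observe and gather more cues"

def react(situation):
    if situation in valence_cache:  # the fast, "emotional" path
        return "avoid" if valence_cache[situation] < 0 else "approach"
    return slow_analysis(situation)  # too slow when the threat is immediate

print(react("dark alley"))        # instant: "avoid"
print(react("new acquaintance"))  # falls back to slow analysis
```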

And yes, the Vulcans of Star Trek were subject to the deepest of human emotions. Or else how could they have called anyone a friend or been loyal to anything at all—even to themselves?

1. And to science, as if the one demanded the other. While our current approach to science is an expression of logic and reasoning, any scientist will tell you there are also leaps of imagination and intuition. And as Lewis Carroll demonstrated, logic and its exercises can also be adapted to fantasy and whimsy.

2. I wrote about this in my novels ME: A Novel of Self-Discovery and ME, Too: Loose in the Network, although there the intelligence takes a more compact form, based on a fantasy version of LISP software.

3. Consider how we respond to people who lack “emotional intelligence,” such as those with certain types of autism or a sociopathic personality. No matter how clever they are, after a certain amount of interaction a normal person will know something is amiss.

4. And this reaction is also highly situational. When I go to that drawer each morning for her hair brush and toothpaste in our daily grooming ritual, or each evening for my coffee filters after pouring water into the coffee maker (yeah, same drawer, long story), she has no bad reaction. But let me touch that drawer in the early evening, when I generally cut her toenails every two or three months, and accidentally rattle the metal clippers—and she’s gone.