The Most Human Human

October 2, 2011

Trying to make computers pass as human can tell us a good bit about what it means to be human in the first place.

I’ve recently read What Technology Wants, which imagines technology as a symbiotic ally to humanity, and You Are Not a Gadget, which sees certain trends in technology as a threat to our humanity. Brian Christian’s The Most Human Human takes a surprising third path:

We can think of computers…as nemeses… But I prefer, for a number of reasons, the notion of rivals–who only ostensibly want to win, and who know that competition’s main purpose is to raise the level of the game. All rivals are symbiotes. They need each other. They keep each other honest. They make each other better. The story of the progression of technology doesn’t have to be a dehumanizing or dispiriting one. Quite, as you will see, the contrary. (14-15)

The book’s backbone is the story of Christian’s participation in the Loebner Prize, a Turing Test competition. Alan Turing, grandfather of the modern computer, proposed an interesting answer to the question of whether or not computers can be as intelligent as humans. Since we can never really know what’s going on inside a machine’s “brain” (or, for that matter, another person’s), we can’t detect intelligence objectively. But, he argued, if we get judges to have blind conversations with both humans and machines and the judges can’t tell the difference, then we can say that computers have effectively matched our human intelligence.1

This is the format of the Loebner Prize. No computer program has yet completely fooled the judges, but each year the one that fools the greatest number wins the “Most Human Computer” award. As for the human beings “competing” with the computer programs to convince the judges of their humanity, the one among them whom the judges most consistently identify as human wins the half-ironic “Most Human Human” award. The Most Human Human is about Christian’s quest to win this prize, and he is apparently the first person in the competition’s history to take it seriously. But while Christian pursues the prize itself with a sense of humor, he argues ardently that it touches on issues of the utmost importance:

How do we connect meaningfully with each other, as meaningfully as possible, within the limits of language and time? How does empathy work? What is the process by which someone comes into our life and comes to mean something to us? These, to me, are the test’s most central questions–the most central questions of being human. (13)

Their Weakness Is Our Strength

In the lead-up to the competition, the author interviews a number of programmers and computer scientists about their machines’ shortcomings. One of the hardest things for computers to fake is a sense of what a conversation implies: where it is going and why. They can be remarkably good at forging human-sounding responses, but a conversation is more than a series of statements and responses.

Computation theorist Hava Siegelmann offhandedly described intelligence as “a kind of sensitivity to things,” and all of a sudden it clicked–that’s it! These Turing test programs that hold forth, these prefabricated poem templates, may produce interesting output, but they’re static, they don’t react. They are, in other words, insensitive. (205)

(Incidentally, human arguments tend to go the same way, with each arguer responding only to the last thing the other said. The next time you’re in an argument you want to end, resist blurting out your knee-jerk reaction to what the other person just said and instead say something mindful of what’s really going on, why you’re having the argument in the first place. Computers can’t do this.)
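To make the charge of “insensitivity” concrete, here is a small illustrative sketch of the kind of template-driven chatbot Christian has in mind; it is my own example, not the book’s, and every rule in it is invented for illustration. The bot pattern-matches only the most recent message and keeps no memory of the exchange, so it can produce plausible replies without ever reacting to what the conversation as a whole is about.

    # An illustrative, ELIZA-style bot (my sketch, not from the book): it reacts
    # only to the last message and keeps no conversational state, which is the
    # "insensitivity" Christian describes.
    import re

    RULES = [
        (r"\bi feel (.+)", "Why do you feel {0}?"),
        (r"\bbecause\b", "Is that the real reason?"),
        (r".*", "Tell me more."),
    ]

    def reply(last_message: str) -> str:
        """Respond to the most recent message only; earlier turns are ignored."""
        for pattern, template in RULES:
            match = re.search(pattern, last_message, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Tell me more."

    # Identical inputs always produce identical outputs, no matter what has
    # been said before -- the program never notices why the argument is happening.
    print(reply("I feel ignored"))   # -> Why do you feel ignored?
    print(reply("I feel ignored"))   # -> Why do you feel ignored?

Feed it the same line twice, at any point in an argument, and it answers the same way twice; nothing it has heard before makes any difference.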

Computers May Bring Us, Literally, to Our Senses

Since antiquity, Western civilization has had a left-brain bias, focusing on reason and rationality as the defining characteristics of our humanity. (“Man is the rational animal,” said Aristotle, even though “Man is the poetic animal” would be equally defining.) But computers are putting our powers of logic to shame, which may force us to redefine “human” as something other than the animal who reasons.

Indeed, it’s entirely possible that we’ve seen the high-water mark of the left-hemisphere bias. I think the return of a more balanced view of the brain and mind–and of human identity–is a good thing, one that brings with it a changing perspective on the sophistication of various tasks.

It’s my belief that experiencing and understanding truly disembodied cognition, only seeing the coldness and deadness and disconnectedness of something that truly does deal in pure abstraction, divorced from sensory reality, only this can snap us out of it. Only this can, quite literally, bring us back to our senses. (72)

Not surprisingly, computers suck at the humanities–things like art, poetry, and criticism. Perhaps in the near future when machines replace us functionally and the unemployment rate approaches 100%, we’ll all just write and critique poetry.

Human Obsolescence: Don’t Blame the Machines

In one of his most insightful digressions, Christian takes on the fear that robots will replace us in our work. He argues that long before machines can take our jobs, something else has to first make us do our jobs mechanically:

This “draining” of the job to “robotic” behavior happens in many cases long before the technology to automate those jobs exists. Ergo, it must be due to capitalist rather than technological pressures. Once the jobs have been “mechanized” in this way, the much later process by which jobs get taken over by machines (or, soon, AIs) seems like a perfectly sensible response, and, by that point, perhaps a relief. To my mind, the troubling and tragic part of the equation is the first half–the reduction of a “human” job to a “mechanical” one–and less so the second. So fears over AI would seem to miss the point. (86)

Consider call centers. Even when you’re speaking to a real person on the other end of the line, he often might as well be a robot, since he’s trained to follow a script and give stock responses to questions. Of course this kind of work can be done by machines, because we first drain the human soul out of it. We shouldn’t blame the machines for taking our jobs. We should blame the circumstances that require us to do our jobs so mechanically, so soullessly, that we can be replaced by a machine in the first place.

This process has nothing to do with computers or technology as such, but rather with the codification of behavior into a method. In a chapter on chess, Christian writes at some length about The Game, a book by Neil Strauss about pickup artistry, or the human-imposed methodization of getting girls to sleep with you. In the same vein, “‘international dating coach’ Vin DiCarlo is compiling a database of text messages, and cataloging their success (even down to the punctuation) at prompting dates and replies” (123). If this is the kind of thing humans are doing–turning potentially meaningful encounters into mere tactics and scripts to follow–then we shouldn’t be surprised when programmers teach computers to do the same.

In Defeat

When a computer finally does win the gold at the Loebner Prize, the competition will be discontinued. After Deep Blue beat Garry Kasparov in their epic 1997 man-versus-machine rematch, IBM’s engineers moved on to other projects. But Christian sees moments like these as just the beginning:

No, I think that, while certainly the first year that computers pass the Turing test will be a historic, epochal one, it does not mark the end of the story. No, I think, indeed, that the next year’s Turing test will truly be the one to watch–the one where we humans, knocked to the proverbial canvas, must pull ourselves up; the one where we learn how to be better friends, artists, teachers, parents, lovers; the one where we come back. More human than ever. I want to be there for that. (264)

  1. The equation of conversational prowess with intelligence is a fascinating, controversial assumption in its own right. Christian supports it convincingly, arguing that both depend on a “sensitivity to things,” explained below.