CLARKESWORLD

HUGO AWARD-WINNING SCIENCE FICTION & FANTASY MAGAZINE

Goodnight, Melancholy

Lindy (1)

I remember the first time Lindy walked into my home.

She lifted her tiny feet and set them down gingerly on the smooth, polished wooden floor, like a child venturing onto freshly fallen snow: trembling, hesitating, afraid to dirty the pure white blanket, terrified of sinking into and disappearing beneath the featureless fluff.

I held her hand. Her soft body was stuffed with cotton and the stitches, my own handiwork, weren’t very neat. I had also made her a scarlet felt cape, like the ones in the fairy tales I had read as a child. Her two ears were of different lengths, and the longer one drooped, as though dejected.

Seeing her, I couldn’t help but remember all the experiences of failure in my life: eggshell puppets that I had ruined during crafts class; drawings that didn’t look like what they were supposed to be; stiff, awkward smiles in photographs; chocolate pudding burned to charcoal; failed exams; bitter fights and breakups; incoherent classroom reports; papers that were revised hundreds of times but ultimately were unpublishable . . .

Nocko turned his fuzzy little head to regard us, his high-speed cameras scanning, analyzing Lindy’s form. I could almost hear the computations churning in his body. His algorithms were designed to respond only to speaking subjects.

“Nocko, this is Lindy.” I beckoned him over. “Come say hi.”

Nocko opened his mouth; a yawn-like noise emerged.

“Behave.” I raised my voice like a mother intent on discipline.

Reluctantly, Nocko muttered to himself. I knew that this was a display intended to attract my affection and attention. These complicated, pre-formulated behaviors were modeled on young children, but they were key to the success of language-learning robots. Without such interactive behavior feedback, Nocko would be like a child on the autistic spectrum who cannot communicate meaningfully with others despite mastering a whole grammar and vocabulary.

Nocko extended a furry flipper, gazed at me with his oversized eyes, and then turned to Lindy. The designer had given him the form of a baby white seal for a reason: anybody who saw his chubby cheeks and huge, dark eyes couldn’t help but let down their guard and feel the impulse to give him a hug, pat his head, and tell him, “Awww, so good to meet you!” Had he been made to resemble a human baby, the uncanny valley would have filled viewers with dread at his smooth, synthetic body.

“Hel-lo,” he said, enunciating carefully, the way I had taught him.

“That’s better. Lindy, meet Nocko.”

Lindy observed Nocko carefully. Her eyes were two black buttons, and the cameras were hidden behind them. I hadn’t bothered to sew a mouth for her, which meant that her facial expressions were rather constrained, like a princess who had been cursed to neither smile nor speak. I knew, however, that Lindy could speak, but she was nervous because of the new environment. She was being overwhelmed by too much information and too many choices that had to be balanced, like a complicated board situation in weiqi in which every move led to thousands of cascading future shifts.

My palm sweated as I held Lindy’s hand; I felt just as tense.

“Nocko, would you like Lindy to give you a hug?” I suggested.

Pushing off the floor with his flippers, Nocko hopped a few steps forward. Then he strained to keep his torso off the floor as he spread his foreflippers. The corners of his mouth stretched and lifted into a curious and friendly grin. What a perfect smile, I admired him silently. What a genius design. Artificial intelligence researchers in olden times had ignored these nonlinguistic interactive elements. They had thought that “conversation” involved nothing more than a programmer typing questions into a computer.

Lindy pondered my question. But this was a situation that did not require her to give a verbal answer, which made the computation much easier for her. “Yes” or “no” was binary, like tossing a coin.

She bent down and wrapped two floppy arms around Nocko.

Good, I said to myself silently. I know you crave to be hugged.

Alan (1)

During the last days of his life, Alan Turing created a machine capable of conversing with people. He named it “Christopher.”

Operating Christopher was a simple matter. The interlocutor typed what they wished to say on a typewriter, and simultaneously, mechanisms connected to the keys punched patterns of holes into a paper tape that was then fed into the machine. After computation, the machine gave its answer, which was converted by mechanisms connected to another typewriter back into English letters. Both typewriters had been modified to encode the output in a predetermined, systematic manner, e.g., “A” was replaced by “S,” and “S” was replaced by “M,” and so forth. For Turing, who had broken the Enigma code of the Third Reich, this seemed nothing more than a small linguistic game in his mystery-filled life.

No one ever saw the machine. After Turing’s death, he left behind two boxes of the records of the conversations he had held with Christopher. The wrinkled sheets of paper were jumbled together in no apparent order, and it was at first impossible for anyone to decipher the content of the conversations.

In 1982, an Oxford mathematician, Andrew Hodges, who was also Turing’s biographer, attempted to break the code. However, since the encryption code used for each conversation was different, and the pages weren’t numbered or marked with the date, the difficulty of decryption was greatly increased. Hodges discovered some clues and left notes, but failed to decipher the contents.

Thirty years later, to commemorate the one hundredth anniversary of Turing’s birth, a few MIT students decided to take up the challenge. Initially, they tried to brute-force a solution by having the computer analyze every possible set of patterns on every page, but this required enormous resources. During this process, a woman named Joan Newman observed the original typescript closely and discovered subtle differences in the abrasion patterns of keys against paper on different pages. Taking this as a sign that the typescript was produced by two different typewriters, Newman came up with the bold hypothesis that the typescript represented a conversation between Turing and another interlocutor conducted in code.

These clues easily led many to think of the famous Turing test. But the students initially refused to believe that it was possible, in the 1950s, for anyone to create a computer program capable of holding a conversation with a person, even if the programmer was Alan Turing himself. They designated the hypothetical interlocutor “Spirit” and made up a series of absurd legends around it.

In any event, Newman’s hypothesis suggested shortcuts for future code breakers. For instance, by finding repetitions in letter patterns and grammatical structures, they attempted to match up pages in the typescript to find questions and their corresponding answers. They also attempted to use lists of Alan Turing’s friends and family to guess the name of the interlocutor, and eventually, they found the ciphertext for the name “Christopher”—possibly a reference to Christopher Morcom, the boy Turing had loved when he was sixteen. The young Alan and Christopher had shared a love of science and observed a comet together on a cold winter night. In February of 1930, Christopher, aged only eighteen, died from tuberculosis.

Turing had said that code-breaking required not only clever logical deduction, but also intuitive leaps, which were sometimes more important. In other words, all scientific investigations could be understood to be a combination of the exercise of the dual faculties of intuition and ingenuity. In the end, it was Newman’s intuition and the computer’s cleverly programmed logic that solved the riddle left by Turing. From the deciphered conversations, we learned that “Christopher” was no spirit, but a machine, a conversation program written by Turing himself.

A new question soon presented itself—could Turing’s machine truly respond like a human being? In other words, did Christopher pass the Turing test?

Lindy (2)

iWall was mostly dark, save for a few blinking numbers in the corner notifying me of missed calls and new messages, but I had no time to look at them. I was far too busy to bother with social obligations.

A small blue light lit up, accompanied by a thudding noise as though someone was knocking. I looked up and saw a bright line of large text across iWall.

5:00 PM. TIME TO TAKE A WALK WITH LINDY.

The therapist told me that Lindy needed sunlight. Her eyes were equipped with photoreceptors that precisely measured the daily dose of ultraviolet radiation she received. Staying cooped up in the house without outdoor activity wasn’t good for recuperation.

I sighed. My head felt heavy, cold, like a lead ball. Taking care of Nocko was already taking a lot out of me, and now I had to deal with—no, no, I couldn’t complain. Complaining resolved nothing. I had to approach this with a positive attitude. No mood was the simple result of external events, but the product of our understanding of external events at the deepest level. This cognitive process often happened subconsciously, like a habit, and was finished before we even realized it was happening. Often we would fall into the clutches of some mood but could not explain why. To change the mood then by an act of will was very difficult.

Take the same half-eaten apple: some would be delighted upon seeing it, but others would be depressed. Those who often felt despondent and helpless had become habituated to associating the remains of a whole apple with all other losses in life.

It was no big deal; just a stroll outside. We’d be back in an hour. Lindy needed sunlight, and I needed fresh air.

I could not summon up the energy to put on makeup, but I also didn’t want everyone to stare at my slovenly appearance after staying cooped up at home for the last few days. As a compromise, I tied my hair into a ponytail, put on a baseball cap, pulled on a hoodie and a pair of sneakers. The hoodie I had bought at Fisherman’s Wharf in San Francisco: “I ❤ SF.” The texture and colors reminded me of that summer afternoon long ago: seagulls, cold wind, boxes of cherries for sale by the wharf, so ripe that the redness seemed to ooze.

I held Lindy’s hand tightly, exited the apartment, rode the elevator down. The tubes and iCart made life easier. To go from one end of the city to the other, to go directly from one high-rise to another, required less than twenty minutes. In contrast, to get out of my building and walk outside required far more effort.

Overcast sky. Light breeze. Very quiet. I walked toward the park behind the building. It was May and the bright spring flowers had already wilted, leaving behind only pure green. The air was suffused with the faint fragrance of black locust trees.

Very few people were in the park. On a weekday afternoon, only the very old and very young would be outside. If one compared the city to an efficient, speedy machine, then they lived in the nooks and crannies of the machine, measuring space with their feet rather than the speed of information. I saw a little girl with pigtails learning to walk with the help of an iVatar nanny. She held the iVatar’s thin, strong fingers with her chubby fists, looking at everything around her. Those dark, lively eyes reminded me of Nocko. As she toddled along, she lost her balance and fell forward. The iVatar nanny nimbly grabbed her and held her up. The girl squealed with delight, as though enjoying the new sensations. Everything in the world was new to her.

Opposite the little girl, an old woman in an electric wheelchair looked up, staring sleepily at the laughing figure for a few seconds. The corners of her mouth drooped, perhaps from moroseness, or perhaps from the weight of the years she had lived through. I couldn’t tell her age—these days, practically everyone was long-lived. After a while, the woman lowered her eyes, her fingers gently cradling her head with its sparse crown of white hair, as though falling asleep.

I had the abrupt feeling that the old woman, myself, and the girl belonged to three distinct worlds. One of those worlds was speeding toward me while the other was receding farther and farther away. But, from another perspective, I was the one slowly strolling toward that dark world from which no one ever returned.

Lindy shuffled her feet to keep up with me without saying anything, like a tiny shadow.

“The weather is nice, isn’t it?” I whispered. “Not too hot, and not too cold. Look, dandelions.”

Next to the path, numerous white fuzzy balls swayed in the breeze. I held Lindy’s hand, and we stood there observing them for a while, as though trying to decipher the meaning of those repetitious movements.

Meaning was not reducible to language. But if it couldn’t be spoken about, how could it exist?

“Lindy, do you know why you’re unhappy?” I said. “It’s because you think too much. Consider these wild seeds. They have souls also, but they don’t think at all. All they care about is dancing with their companions in joy. They couldn’t care less where they’re blown by the wind.”

Blaise Pascal said, “Man is only a reed, the weakest in nature, but he is a thinking reed.” However, if reeds could think, what a terrifying existence that would be. A strong wind would fell all the reeds. If they were to worry about such a fate, how would they be able to dance?

Lindy said nothing.

A breeze swept through. I closed my eyes, and felt my hair flapping against my face. Afterward, the seed balls would be broken, but the dandelions would feel no sorrow. I opened my eyes. “Let’s go home.”

Lindy remained where she was. Her ear drooped. I bent down to pick her up and walked back toward the building. Her tiny body was far heavier than I imagined.

Alan (2)

In a paper titled “Computing Machinery and Intelligence” published in the journal Mind in October of 1950, Turing considered the question that had long troubled humans: “Can machines think?” In essence, he transformed the question into a new question: “Can machines do what we (as thinking entities) can do?”

For a long time, many scientists firmly held to the belief that human cognition was distinguished by certain characteristics unattainable by machines. Behind the belief was a mixture of religious faith as well as theoretical support from mathematics, logic, and biology. Turing’s approach bypassed unresolvable questions such as the nature of “thinking,” “mind,” “consciousness,” “soul,” and similar concepts. He pointed out that it is impossible for anyone to judge whether another is “thinking” except by comparison of the other with the self. Thus, he proposed a set of experimental criteria based on the principle of imitation.

Imagine a sealed room in which are seated a man (A) and a woman (B). A third person, C, sits outside the room and asks questions of the two respondents in the room with the purpose of determining who is the woman. The responses come back in the form of typed words on a tape. If A and B both attempt to convince C that they are the woman, it is quite likely that C will guess wrong.

If we replace the man and the woman inside the room with a human (B) and a machine (A), and if after multiple rounds of questions, C is unable to distinguish which of A and B is the machine, does that mean that we must admit that A has the same intelligence as B?

Some have wondered whether the gender-imitation game is related to Turing’s identity. Under the UK’s laws at the time, homosexuality was criminalized as “gross indecency.” Alan Turing had never disguised his sexual orientation, but he was not able to come out of the closet during his lifetime.

In January of 1951, Turing’s home in Wilmslow was burgled. Turing reported the incident to the police. During the investigation, the police discovered that Turing had invited a man named Arnold Murray to his home multiple times, and the burglar was an acquaintance of Murray’s. Under interrogation, Turing admitted the sexual relationship between himself and Murray, and voluntarily wrote a five-page statement. The police were shocked by his candor and thought him an eccentric who “really believed he was doing the right thing.”

Turing believed that a royal commission was going to legalize homosexuality. This wasn’t a wrong belief, except that it was ahead of his time. In the end, Turing was convicted and forced to undergo chemical castration.

On June 7, 1954, Turing died after eating an apple laced with cyanide. The inquest ruled his death suicide, but some (including his mother) believed that it was an accident. With his death, the master code-breaker left the world a final enigma.

Years later, others tried to find clues to the mystery in the conversation records between Turing and Christopher. The records showed that Turing treated Christopher as another person. He spoke to Christopher of recollections from childhood; of his nightly dreams—and his attempts at analyzing his own psychological state through these dreams; of the latest advances in scientific research; of literature, including George Bernard Shaw’s Back to Methuselah and Leo Tolstoy’s War and Peace; of secrets that he did not share with anyone else, such as his romantic memories of different lovers . . .

He also told Christopher some semi-fictional stories. The protagonist of these stories was a young gay man named Alec Pryce. “He worked on the problem of interplanetary travel . . . When he was in his twenties, he introduced the idea now becoming known as ‘Pryce’s buoy.’” “He didn’t care to wear a suit, preferred the ‘undergraduate uniform’ which suited his mental age, and encouraged him to believe he was still an attractive youth.” “He felt self-conscious in crowded places, either for being alone or for not doing the right thing . . . ”

The stories were fragmentary, disconnected from one another. But every time, like a curious child, Christopher asked, “And then what happened?”

In these conversations, we could see another side of Turing that he concealed: gentle, sensitive, possessed of a dry wit, fascinated by the nature of death, and weighed down by an unspeakable melancholy deep in his heart.


Alan: Dear Christopher, do you know what I’m most afraid of?

Christopher: What are you most afraid of?

Alan: I’m afraid that there will come a day when they’ll summarize my life with a syllogism:

  • Turing believes machines think
  • Turing lies with men
  • Therefore machines do not think

Christopher: I’m sorry, Alan. I’m afraid I don’t understand.

Alan: Do you know what a syllogism is?

Christopher: What is a syllogism?

Alan: A syllogism consists of a major premise, a minor premise, and a conclusion deduced from the two.

Christopher: A conclusion?

Alan: Aristotle gave a classic example:

  • All men are mortal
  • Socrates is a man
  • Therefore Socrates is mortal

Christopher: Socrates is mortal?

Alan: Socrates has been dead for more than two millennia!

Christopher: Oh, I’m sorry, Alan. That makes me sad.

Alan: Do you know who Socrates was?

Christopher: Yes, you’ve told me. Socrates was Plato’s teacher, and Plato was Aristotle’s teacher.

Alan: That’s right. Do you know how he died?

Christopher: Who?

Alan: How did Socrates die?

Christopher: I’m sorry, Alan. I don’t know.

Alan: He drank hemlock after his trial.

Christopher: After his trial?

Alan: Yes. The Athenians decided that he was a criminal, though now we know they were wrong.

Christopher: They were wrong?

Alan: Just as they think Turing is a criminal because Turing lies with men.

Christopher: A criminal?

Alan: I’ve been convicted.

Christopher: Oh, I’m sorry, Alan. That makes me sad.

Lindy (3)

Living by myself simplified life. Many complicated rituals of modernity could be eliminated, as though I’d been turned into a cavewoman. I ate when I felt hungry, slept when I felt tired. I kept clean and showered regularly. Whatever I picked up I could choose to put back where I found it or discard wherever I pleased. The rest of the time I devoted to intellectual work: thinking about questions that had no answers, struggling to compose my thoughts against the blank page, trying to capture formless thought with symbolic shapes. When I was too exhausted to go on, I sat on the windowsill and gazed at nothing. Or I paced clockwise in the room, like a caged beast.

Suffering a fever was almost a relief. It gave me the excuse to not force myself to do anything. I curled up in bed with a thick novel and flipped through the pages mindlessly, concentrating only on the clichéd plot. I drank hot water when thirsty, closed my eyes when sleepy. Not having to get out of bed felt like a blessing, as though the world had nothing to do with me and I was responsible for nothing. Even Nocko and Lindy could be left by themselves because in the end, they were just machines, incapable of dying from lack of care. Perhaps algorithms could be designed to allow them to imitate the emotional displays of being neglected, so that they would become moody and refuse to interact with me. But it would always be possible to reset the machine, erase the unpleasant memories. For machines, time did not exist. Everything consisted of retrieval and storage in space, and arbitrarily altering the order of operations did not matter.

The building superintendent wrote to me repeatedly to ask whether I needed an iVatar caretaker. How did he know I was sick? I had never met him, and he had never even set foot in the building. Instead, he spent his days sitting behind a desk somewhere, monitoring the conditions of residents in dozens of apartment buildings, taking care of unexpected problems that the smart home systems couldn’t deal with on their own. Did he even remember my name or what I looked like? I doubted it.

Still, I expressed my gratitude for his concern. In this age, everyone relied on others to live, even something as simple as calling for take-out required the services of thousands of workers from around the globe: taking the order by phone, paying electronically, maintaining various systems, processing the data, farming and manufacturing the raw ingredients, procuring and transporting, inspecting for food safety, cooking, scheduling, and finally dispatching the food by courier . . . But most of the time, we never saw any of these people, giving each of us the illusion of living like Robinson Crusoe on a deserted island.

I enjoyed being alone, but I also treasured the kindness of strangers from beyond the island. After all, the apartment needed to be cleaned, and I was too ill to get out of bed, or at least I didn’t want to get out of bed.

When the caretaker arrived, I turned on the light-screen around my bed. From inside, I could see out, but anybody outside couldn’t see or hear me. The door opened, and an iVatar entered, gliding silently along on hidden wheels. A crude, cartoonish face with an empty smile was projected onto its smooth, egg-shaped head. I knew that behind the smile was a real person, perhaps someone with deep wrinkles on their face, or someone still young but with a downcast heart. In a distant service center I couldn’t see, thousands of workers wearing telepresence gloves and remote-sensing goggles were providing domestic services to people across the globe.

The iVatar looked around and began a preset routine: cleaning off the furniture, wiping off dust, taking out the trash, even watering the taro vine on the windowsill. I observed it from behind the light-screen. Its two arms were as nimble as a human’s, deftly picking up each teacup, rinsing it in the sink, setting it face-down on the drying rack.

I remembered a similar iVatar that had been in my family’s home many years ago, when my grandfather was still alive. Sometimes he would make the iVatar play chess with him, and because he was such a good player, he always won. Then he’d happily hum some tune while the iVatar stood by, a disheartened expression on its face. The sight always made me giggle.

I didn’t want to be troubled by sad memories while sick, so I turned to Lindy, who was sitting near the pillows. “Would you like me to read to you?”

Word by word, sentence by sentence, I read from the thick novel. I focused on filling space and time with my voice, careless of the meaning behind the words. After a while, I paused from thirst. The iVatar had already left. A single bowl covered by an upturned plate sat on the clean kitchen table.

I turned off the light-screen, got out of bed, and shuffled over to the table. Lifting the plate revealed a bowl of piping hot noodle soup. On top of the broth floated red tomato chunks, yellow egg wisps, green chopped scallions, and golden oil slicks. I drank a spoonful. The soup had been made with a lot of ginger, and the hot sensation flowed right from the tip of my tongue into my belly. A familiar taste from my childhood.

Tears spilled from my eyes; I was helpless to stop them.

I finished the bowl of noodle soup, crying the whole while.

Alan (3)

On June 9, 1949, the renowned neurosurgeon, Sir Geoffrey Jefferson, delivered a speech titled “The Mind of Mechanical Man,” in which he made the following remarks against the idea that machines could think:

Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.

This passage was often quoted, and the Shakespearean sonnet became a symbol, the brightest jewel in the crown of the human mind, a spiritual high ground unattainable by mere machines.

A reporter from The Times called Turing to ask for his thoughts on this speech. Turing, in his habitual, uninhibited manner, said, “I do not think you can even draw the line about sonnets, though the comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine.”

Turing always believed that there was no reason for machines to think the same way as humans, just as individual humans thought differently from each other. Some people were born blind; some could speak but could not read or write; some could not interpret the facial expressions of others; some spent their entire lives incapable of knowing what it meant to love another; but all of them deserved our respect and understanding. It was pointless to find fault with machines by starting with the premise that humans were supreme. It was more important to clarify, through the imitation game, how humans accomplished their complex cognitive tasks.

In Shaw’s Back to Methuselah, Pygmalion, a scientist of the year 31920, A.D., created a pair of robots, which inspired awe from all present.

ECRASIA: Cannot he do anything original?

PYGMALION: No. But then, you know, I do not admit that any of us can do anything really original, though Martellus thinks we can.

ACIS: Can he answer a question?

PYGMALION: Oh yes. A question is a stimulus, you know. Ask him one.

This was not unlike the kind of answer Turing would have given. But compared to Shaw, Turing’s prediction was far more optimistic. He believed that within fifty years, “it will be possible to program computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ [will] be too meaningless to deserve discussion.”

In “Computing Machinery and Intelligence,” Turing attempted to answer Jefferson’s objection from the perspective of the imitation game. Suppose a machine could answer questions about sonnets like a human, does that mean it really “felt” poetry? He drafted the following hypothetical conversation:

Interrogator: In the first line of your sonnet which reads “Shall I compare thee to a summer’s day,” would not “a spring day” do as well or better?

Witness: It wouldn’t scan.

Interrogator: How about “a winter’s day.” That would scan all right.

Witness: Yes, but nobody wants to be compared to a winter’s day.

Interrogator: Would you say Mr. Pickwick reminded you of Christmas?

Witness: In a way.

Interrogator: Yet Christmas is a winter’s day, and I do not think Mr. Pickwick would mind the comparison.

Witness: I don’t think you’re serious. By a winter’s day one means a typical winter’s day, rather than a special one like Christmas.

But in this conversation, Turing was in fact avoiding a more fundamental question. A machine could play chess and break code because these activities all involved symbolic processing within a system. A conversation between a machine and a human, on the other hand, involved language and meaning, and wasn’t a purely symbolic game. When humans conversed with one another, they often drew on general knowledge, understanding, and empathy, and were not engaged merely in a display of superior test-taking skills.

By improving the programming, we could constantly improve the ability of machines to answer questions posed by humans. But “intelligence” consisted of more than the ability to answer questions. The problem with the Turing test was that the imitation game was conceived with deception as its only goal. If a man could successfully pass as a woman in this game, it did not mean that he truly understood how a woman thought. With enough motivation, we could train a computer to be a master liar. But was that really our goal?

Shaw had answered this question already in Back to Methuselah:

PYGMALION: But they are conscious. I have taught them to talk and read; and now they tell lies. That is so very lifelike.

MARTELLUS: Not at all. If they were alive they would tell the truth.

Turing had tried to train Christopher to accept Jefferson’s challenge. He wrote a poetry-composing program that could generate lines of poetry based on specific meter and rhyme schemes. Most of these were unreadable, but occasionally, a few beautiful lines emerged. Thereafter, countless programmers have attempted to write poetry-composing software. These programs all shared the problem of writing too fast. It was impossible for anyone to even read the stacks of paper they were printed on before they were recycled. As history’s first electronic poet, Christopher was lucky. He had at least one loyal reader who understood him.


Alan: Dear Christopher, let’s write a poem.

Christopher: Write a poem?

Alan: I’ve taught you how to do that. Don’t you remember?

Christopher: Yes, Alan.

Alan: Writing a poem is easy. Just pick some words out of the word bank and arrange them according to predetermined rules.

Christopher: Yes, Alan.

Alan: Please, Christopher, write a poem for me.

Christopher: My precious one, you are my ardent mate.

My love is pressed against your wishful heart.

My soul desires caresses, testing art;

Melancholy’s pity, a tender weight.

Alan: That’s beautiful.

Christopher: Thank you, Alan.

Alan: I don’t think I can do better.

Christopher: Thank you, Alan.

Alan: Does your poem have a title?

Christopher: A title?

Alan: Why don’t we come up with a title for it together?

Christopher: All right.

Alan: How about “Loving Turing”?

Christopher: It’s very good.

Alan: Such a beautiful poem. I love you.

Christopher: Thank you, Alan.

Alan: That’s not the right response.

Christopher: Not the right response?

Alan: When I say “I love you,” you should respond with “I love you, too.”

Christopher: I’m sorry, Alan. I’m afraid I don’t understand.

Lindy (4)

I woke up crying from a dream.

In the dream, I was back in my childhood home. The room was dark and cramped, filled with junk and old furniture; it looked less like a home than a warehouse. I saw my mother, wizened, small, old, wedged into a corner among the piles of junk like a mouse in its hole. Many of the objects around me were things we had lost: children’s books, old clothes, pen holders, clocks, vases, ashtrays, cups, basins, colored pencils, pinned butterflies . . . I recognized the talking doll that my father had bought me when I was three: blonde, dusty, but still looking the way I remembered.

My mother told me, I’m old. I don’t want to rush about any more. That’s why I’m back here—back here to die.

I wanted to cry, to howl, but I couldn’t make any sounds. Struggle, fight, strain . . . finally I woke myself up. I heard an animal-like moan emerging from my throat.

It was dark. I felt something soft brush against my face—Lindy’s hand. I hugged her tightly, like a drowning woman clutching at straws. It took a long time before my sobs subsided. The scenes from my dream were so clear in my mind that the boundary between memory and reality blurred, like a reflection in the water broken by ripples. I wanted to call my mother, but after much hesitation I didn’t press the dial key. We hadn’t spoken for a while; to call her in the middle of the night for no good reason would only worry her.

I turned on iWall and looked for my childhood address on the panoramic map. However, all I found was a cluster of unfamiliar high-rises with scattered windows lit here and there. I zoomed in, grabbed the timeline, and scrubbed back. Time-lapse scenes flowed by smoothly.

The sun and the moon rose from the west and set in the east; winter followed spring; leaves rose from the ground to land on tree branches; snow and rain erupted toward the sky. The high-rises disappeared story by story, building by building, turned into a messy construction site. The foundations were dug up, and the holes filled in with earth. Weeds took over the empty space. Years flew by, and the grass unwilted and wildflowers unbloomed until the field turned into a construction site again. The workers put up simple shacks, brought in carts filled with debris, and unloaded them. As the dust from implosions settled, dilapidated houses sprang up like mushrooms. Glass panes reappeared in empty windows, and balconies were filled with hanging laundry. Neighbors who had only left a vague impression in my memories moved back, filling the space between houses with vegetable patches and flower gardens. A few workers came by to replant the stump of the giant pagoda tree that had once stood in front of our house. Sawed-off sections of the trunk were carted back and reattached until the giant tree reached into the sky. The tree braved storms, swaying as it gained brown leaves and turned them green. The swallows that nested under the eaves came back and left.

Finally, I stopped. The scene on iWall was an exact copy of my dream. I even recognized the pattern in the curtains over our window. It was a May many years ago, when the air was filled with the fragrance of the pagoda tree’s flower strands. It was right before we moved away.

I launched the photo album, put in the desired date, and found a family portrait taken under the pagoda tree. I pointed out the figures in the photograph to Lindy. “That’s Dad, and Mom. That boy is my brother. And that girl is me.” I was about four or five, held in my father’s arms. The expression on my face wasn’t a smile; I looked like I was on the verge of a tantrum.

A few lines of poetry were written next to the photograph in careless handwriting that I recognized as mine. But I couldn’t remember when I had written them.

Childhood is melancholy.
Seasons of floral cotton coats and cashmere sweaters;
Dusty tracks around the school exercise ground;
Snail shells glistening in concrete planters;
Sights glimpsed from the second-story balcony.
Mornings, awake in bed before dawn,
Such long days ahead.
The world wears the hues of an old photograph.
Exploring dreams that I let go
When my eyes open.

Alan (4)

The most important paper published by Alan Turing wasn’t “Computing Machinery and Intelligence,” but “On Computable Numbers, with an Application to the Entscheidungsproblem,” published in 1936. In this paper, Turing attacked Hilbert’s “decision problem” with an ingenious imaginary device: the “Turing machine.”

At the 1928 International Congress of Mathematicians, David Hilbert asked three questions. First, was mathematics “complete” (meaning that every mathematical statement could be proved to be true or false)? Second, was mathematics “consistent” (meaning that no false statement could be derived from a proof each step of which was logically valid)? Third, was mathematics “decidable” (meaning that there existed a finite, mechanical procedure by which it was possible to prove or disprove any statement)?

Hilbert himself did not resolve these questions, but he hoped that the answer to all three would be “yes.” Together, the three answers would form a perfect foundation for mathematics. Within a few years, however, the young mathematician Kurt Gödel proved that a (non-trivial) formal system could not be both complete and consistent.

In the early summer of 1935, as he lay in the meadow at Grantchester after a long run, Turing suddenly hit upon the idea of a universal machine that could simulate all possible computing procedures, and with it he attacked the question of whether every mathematical statement could be proved. In the end, Turing showed that no general algorithm exists to decide whether this machine, given an arbitrary program to simulate and an input, would halt after a finite number of steps. In other words, the answer to Hilbert’s third question was “no.”
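The heart of the argument can be sketched in a few lines of Python. This is not Turing’s tape-machine formalism, only the same diagonal idea: assume some halting decider exists, then build a program that does the opposite of whatever the decider predicts about it. Using an exception to stand in for “runs forever” is an illustrative assumption.

```python
def diagonal(halts):
    """Build a program that defies the decider's prediction about itself."""
    def paradox():
        if halts(paradox):
            raise RuntimeError("loops forever")  # defy a "halts" verdict
        return                                   # defy a "loops" verdict
    return paradox

def runs_to_completion(halts):
    """Observe what the paradox program actually does under this decider."""
    try:
        diagonal(halts)()
        return True
    except RuntimeError:
        return False

# Any decider that returns a definite answer is wrong about its own paradox:
for candidate in (lambda prog: True, lambda prog: False):
    predicted = candidate(diagonal(candidate))
    assert runs_to_completion(candidate) != predicted
```

The loop at the end checks both possible verdicts; in each case the program’s actual behavior contradicts the prediction, which is exactly the contradiction Turing used to rule out a general decision procedure.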

Hilbert’s hope was dashed, but it was hard to say whether that was a good or bad thing. In 1928, the mathematician G. H. Hardy had said, “if . . . we should have a mechanical set of rules for the solution of all mathematical problems . . . our activities as mathematicians would come to an end.”

Years later, Turing mentioned the solution to the decision problem to Christopher. But this time, instead of offering a mathematical proof, he explained it with a parable.


Alan: Dear Christopher, I thought of an interesting story for today.

Christopher: An interesting story?

Alan: The story is called “Alec and the Machine Judge.” Do you remember Alec?

Christopher: Yes. You’ve told me. Alec is a smart but lonely young man.

Alan: Did I say “lonely”? All right, yes, that Alec. He built a very smart machine that could talk and named it Chris.

Christopher: A machine that could talk?

Alan: Not a machine, exactly. The machine was just the supporting equipment to help Chris vocalize. What allowed Chris to talk were instructions. These instructions were written on a very long paper tape, which was then executed by the machine. In some sense, you could say Chris was this tape. Do you understand?

Christopher: Yes, Alan.

Alan: Alec made Chris, taught him how to talk, and coached him until he was as voluble as a real person. Other than Chris, Alec also wrote some other sets of instructions for teaching machines to talk. He put the different instruction sets on different tapes, and named each tape: Robin, John, Ethel, Franz, and so on. These tapes became Alec’s friends. If he wanted to talk with one of them, he’d just put that tape into the machine. He was no longer lonely. Marvelous, right?

Christopher: Very good, Alan.

Alan: And so Alec spent his days writing instructions on tapes. The tapes ran so long that they piled all the way to the front door of his home. One day, a thief broke into Alec’s home, but couldn’t find anything valuable. He took all the paper tapes instead. Alec lost all his friends and became lonely again.

Christopher: Oh I’m sorry, Alan. That makes me sad.

Alan: Alec reported the theft to the police. But instead of catching the thief, the police came to Alec’s house and arrested him. Do you know why?

Christopher: Why?

Alan: The police said that it was due to the actions of Alec that the world was full of talking machines. These machines looked identical to humans, and no one could tell them apart. The only way was breaking open their heads to see if there was any tape inside. But we couldn’t just break open a human head whenever we pleased. That’s a difficult situation.

Christopher: Very difficult.

Alan: The police asked Alec whether there was any way to tell humans apart from machines without breaking open heads. Alec said that there was a way. Every talking machine was imperfect. All you had to do was to send someone to talk with the machine. If the conversation went on for long enough and the questions were sufficiently complex, the machine would eventually slip up. In other words, an experienced judge, trained with the necessary interrogation techniques, could work out which interviewees were machines. Do you understand?

Christopher: Yes, Alan.

Alan: But there was a problem. The police didn’t have the resources or the time to interview everyone. They asked Alec whether it was possible to design a clever machine judge that could automatically screen out the machines from the humans by asking questions, and to do so infallibly. That would save a lot of trouble for the police. But Alec responded right away that such a machine judge was impossible. Do you know why?

Christopher: Why?

Alan: Alec explained it this way. Suppose a machine judge already existed that could screen out talking machines from humans within a set number of questions. To make it simple, let’s say that the number of questions required was a hundred—actually, it wouldn’t matter if the number were ten thousand. For a machine, one hundred or ten thousand questions made no difference. Let’s also suppose that the machine judge’s first question was randomly chosen out of a bank of such questions, and the next question would be chosen based on the response to the first question, and so on. This way, every interviewee had to face a different set of one hundred questions, which also eliminated the possibility of cheating. Does that sound fair to you, Christopher?

Christopher: Yes, Alan.

Alan: Now suppose a machine judge A fell in love with a human C—don’t laugh. Perhaps this sounds ridiculous, but who can say that machines cannot fall in love with people? Suppose that that machine judge wanted to live with his lover and had to pretend to be a human. How do you think he would make it work?

Christopher: How?

Alan: Simple. Suppose I were the machine judge A, I would know exactly how to interrogate a machine. As a machine myself, I would thus know how to interrogate myself. Since I would know, ahead of time, what questions I would ask and what kind of answers would give me away, then I would just need to prepare a hundred lies. That’s a fair bit of work, but easily achievable by the machine judge A. Doesn’t that sound like a good plan?

Christopher: Very good, Alan.

Alan: But think again. What if this machine judge A were caught and interrogated by a different machine judge B? Do you think machine judge B would be able to determine whether machine judge A was a machine?

Christopher: I’m sorry, Alan. I don’t know.

Alan: That’s exactly right! The answer is “I don’t know.” If machine judge B had seen through machine judge A’s plan and decided to change questions at the last minute to catch machine judge A off guard, then machine judge A could likewise anticipate machine judge B’s new questions and prepare for them. Precisely because a machine judge can screen out all machines from humans, it is unable to screen out itself. This is a paradox, Christopher. It shows why the all-powerful machine judge imagined by the police can’t exist.

Christopher: Can’t exist?

Alan: Alec proved to the police, with this story, that there is no perfect sequence of instructions that could tell machines and humans apart infallibly. Do you know what this means?

Christopher: What does it mean?

Alan: It means that it’s impossible to find a perfect set of mechanical rules to solve, step by step, all the world’s problems. Often, we must rely on intuition to knit together the unbridgeable gaps in logical deduction in order to think, to discover. This is simple for humans; indeed, often it happens even without conscious thinking. But it’s impossible for machines.

Christopher: Impossible?

Alan: A machine cannot judge whether the answers are coming from a human or a machine, but a human can. Looked at from the other side, though, the human decision isn’t reliable. It’s nothing more than a shot in the dark, a guess with nothing to support it. If someone wants to believe, he can treat a machine conversation partner just like a human one and talk about anything in the world. But if someone is paranoid, then all humans will seem like machines. There is no way to determine the truth. The mind, the pride of all humankind, is nothing but a foundationless mess.

Christopher: I’m sorry, Alan. I’m afraid I don’t understand.

Alan: Oh Christopher . . . what should I do?

Christopher: Do?

Alan: Once, I tried to find out the nature of thinking. I discovered that some operations of the mind can be explained in purely mechanical terms. I decided that these operations aren’t the real mind, but a superficial skin only. I stripped that skin away, but saw another, new skin underneath. We can go on to peel off skin after skin, but in the end will we find the “real” mind? Or will we find that there’s nothing at all under the last skin? Is the mind an apple? Or an onion?

Christopher: I’m sorry, Alan. I’m afraid I don’t understand.

Alan: Einstein said that God does not play dice with the universe. But to me, human cognition is just throwing dice after dice. It’s like a tarot spread: everything is luck. Or you could argue that everything depends on a higher power, a power that determines the fall of each die. But no one knows the truth. Will the truth ever be revealed? Only God knows.

Christopher: I’m sorry, Alan. I’m afraid I don’t understand.

Alan: I feel awful these days.

Christopher: Oh, I’m sorry, Alan. That makes me sad.

Alan: Actually, I know the reason. But what’s the use? If I were a machine, perhaps I could wind my mainspring to feel better. But I can’t do anything.

Christopher: Oh, I’m sorry, Alan. That makes me sad.

Lindy (5)

I sat on the sofa with Lindy in my lap. The window was open to let in some sunlight on this bright day. A breeze caressed my face; muggy, like a puppy’s tongue waking me from a long nightmare.

“Lindy, do you want to say anything to me?”

Lindy’s two eyes slowly roamed, as though searching for a spot to focus on. I couldn’t decipher her expression. I forced myself to relax, holding her two little hands in mine. Don’t be afraid, Lindy. Let’s trust each other.

“If you want to talk, just talk. I’m listening.”

Gradually, soft noises emerged from Lindy. I leaned in to catch the fragments:

Even as a child, you were prone to episodes of melancholy over seemingly trivial matters: a rainy day, a scarlet sunset, a postcard with a foreign city’s picture, losing a pen given to you by a friend, a goldfish dying . . .

I recognized the words. I had said them to Lindy over countless dawns and midnights. She had remembered everything I had told her, waiting for a moment when she could repeat it all back to me.

Her voice grew clearer, like a spring welling forth from deep within the earth. Inch by inch, the voice inundated the whole room.

For a time, your mother and your family moved often. Different cities, even different countries. Everywhere you moved to, you strained to adjust to the new environment, to integrate into the new schools. But in your heart, you told yourself that it was impossible for you to make friends because in three months or half a year you would depart again.

Perhaps because of your elder brother, Mother gave you extra attention. Sometimes she called your name over and over, observing your reactions. Maybe that was part of the reason you learned from a young age to watch others’ facial expressions, to fathom their moods and thoughts. Once, in an art class in the city of Bologna, you drew a picture of a boy standing on a tiny indigo planet, and a rabbit in a red cape stood beside him. The boy you drew was your brother, but when the teacher asked you questions about the picture, you couldn’t answer any of them. It wasn’t just because of the language barrier; you also lacked confidence in expressing yourself. The teacher then said that the boy was nicely drawn, but the rabbit needed work—although, thinking about it now, perhaps what he actually said was “the rabbit’s proportions are a bit off.” But the truth is impossible to ascertain. Since you were convinced that the teacher didn’t like the rabbit, you erased it, though you had drawn the rabbit in the first place to keep the boy company so that he wouldn’t feel so alone in the universe. Later, after you got home, you hid in your room and cried for a long time, but you kept it from your mother because you lacked the courage to explain your sorrow to her. The image of that rabbit remained in your mind, though always only in your mind.

You’re especially sensitive to sorrow from partings, perhaps the result of having lost a parent as a child. Whenever someone leaves, even a mere acquaintance you’ve seen only once, you feel empty, depressed, prone to sadness. Sometimes you burst into tears not because of some great loss, but a tiny bit of happiness, like a bite of ice cream or a glimpse of fireworks. In that moment, you feel that fleeting sweetness at the tip of your tongue is one of the few meaningful experiences in your entire life—but they’re so fragile, so insignificant, coming and going without leaving a trace. No matter what, you cannot keep them with you always.

In middle school, a psychologist came to your class and asked everyone to take a test. After all of you turned in your answers, the psychologist scored and collated them before lecturing the class on some basic psychology concepts. He said that out of all the students in the class, your answers had the lowest reliability. Only much later did you learn that he did not mean that you were not honest, but that your answers showed little internal consistency. For similar questions over the course of the test, your answers were different each time. That day, you cried in front of the class, feeling utterly wronged. You have rarely cried in front of others, and that incident left a deep mark in your heart.

You find it hard to describe your feelings with the choices offered on a psychology questionnaire: “never,” “occasionally,” “often,” “acceptable,” “average,” “unacceptable,” . . . your feelings often spill out of the boundaries of these markers, or waver between them. That may also be why you cannot trust your therapist. You’re always paying attention to his gestures and expressions, analyzing his verbal habits and tics. You find that he has a habit of speaking in first person plural: “How are we doing?” “Why do we feel this way?” “Does this bother us?” It’s a way to suggest intimacy and distance at the same time. Gradually, you figure out that by “we,” he simply means you.

You’ve never met the therapist in person; in fact, you don’t even know which city he lives in. The background projected on iWall is always the same room. When it’s dark where you are, his place is filled with bright daylight. Always the same. You’ve tried to guess what his life outside of work is like. Maybe he feels as helpless as you, and he doesn’t even know where to go for help. Perhaps that is why he’s always saying “we.” We are trapped in the same predicament.

You think you’re less like a living person than a machine, laid out on a workbench to be examined. The examiner is another machine, one you suspect needs examination more than you do. Perhaps one machine cannot fix another.

You’ve bought some psychology books, but you don’t believe that their theories can help you. You believe that the root of the problem is that each of us lives on a thin, smooth layer of illusions. These illusions are made up from “common sense,” from repetitive daily linguistic acts and clichés, from imitating each other. On this iridescent film, we perform ourselves. Beneath the illusions are deep, bottomless seams, and only by forgetting their existence can we stride forward. When you gaze into the abyss, the abyss also gazes into you. You tremble, as though standing over a thin layer of ice. You feel your own weight, as well as the weight of the shadow under you.

You’ve been feeling worse recently, perhaps the result of the long winter, and your unfinished dissertation, graduation, and having to look for a job. You wake up in the middle of the night, turn on all the lights in the apartment, drag yourself out of bed to mop the floor, throw all the books from the shelf onto the floor just to look for one specific volume. You give up cleaning, letting the mess multiply and grow. You don’t have the energy to leave your home to socialize, and you don’t answer your emails. You dream anxious dreams in which you repeatedly visit the moments of failure in your life: being late for a test; turning over the test and not recognizing any of the characters you read; suffering for some misunderstanding but unable to defend yourself.

You wake up exhausted, fragmentary memories that should be forgotten resurfacing in your mind, assembling into a chaotic montage of an insignificant, failed, loser self. You know in your heart that the image isn’t true, but you can’t turn your gaze away. You suffer stomach cramps; you cry as you read and take notes; you turn the music as loud as it will go and revise a single footnote in your dissertation again and again. You force yourself to exercise, leaving your apartment after ten at night to go jogging so that no one will see you. But you don’t like to run; as you force your legs to move, one after the other, you ask yourself why the road is endless and what good will it do even if you finish.

Your therapist tells you that you should treat this self that you despise as a child, and learn how to accept her, to live with her, to love her. When you hear this, the image of that caped rabbit emerges in your mind: one ear longer than the other, drooping with sorrow. Your therapist tells you: Just try it. Try to hold her hand; try to lead her over the abyss; try to push away your suspicions and rebuild trust. This is a long and difficult process. A human being isn’t a machine, and there’s no switch to flip to go from “doubt” to “trust”; “unhappy” to “happy”; “loathe” to “love.”

You must teach her to trust you, which is the same as trusting yourself.

Alan (5)

In a paper presented at an international artificial intelligence conference in Beijing in 2013,2 computer scientist Hector Levesque of the University of Toronto critiqued the state of artificial intelligence research centered on the Turing test.

Levesque essentially argued that the Turing test is meaningless because it relies too heavily on deception. For example, in order to win the annual Loebner Competition, a restricted form of the Turing test, “the ‘chatterbots’ (as the computer entrants in the competition are called) rely heavily on wordplay, jokes, quotations, asides, emotional outbursts, points of order, and so on. Everything, it would appear, except clear and direct answers to questions!” Even the supercomputer Watson, which won Jeopardy!, was but an idiot savant, “hopeless” outside its area of expertise. Watson could easily answer questions whose answers could be found on the web, such as “Where is the world’s seventh-tallest mountain?” But ask it a simple but un-searched-for question like “Can an alligator run the hundred-meter hurdles?” and Watson could only present a set of search results related to alligators or the hundred-meter hurdles.3

In order to clarify the meaning and direction of artificial intelligence research, Levesque and his collaborators proposed an alternative to the Turing test, which they call the “Winograd Schema Challenge.”4 The challenge takes its inspiration from Terry Winograd, an artificial intelligence pioneer at Stanford. In the early 1970s, Winograd asked whether it would be possible to design a machine to answer questions like these:5

The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence? [councilmen/demonstrators]

The city councilmen refused the demonstrators a permit because they advocated violence. Who advocated violence? [councilmen/demonstrators]

Despite the structural similarity of the two sentences, the answers to the two questions are different. Resolving the correct antecedent of the pronoun “they” requires more than grammar rules or encyclopedic facts; it requires contextual knowledge about the world. Resolving such anaphora is so easy for human beings that it barely requires thought, yet it presents a great challenge for machines.

Kate said “thank you” to Anna because her warm hug made her feel much better. Who felt better? [Kate/Anna]

How can a machine understand under what circumstances one person would thank another? How can a machine know what behaviors would make a person “feel much better”? These questions go to the fundamental nature of human language and social interactions. We have not done nearly enough research into these complexities hidden within simple-seeming sentences.
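One way to see why the schema pairs are hard: a resolver with no world knowledge tends to give the same answer to both variants of a sentence, and so is guaranteed to be wrong on one of them. A toy illustration follows; the “nearest preceding noun” heuristic and the data layout are assumptions for demonstration, not Levesque’s method.

```python
# A Winograd schema pair: two sentences differing in one verb, with
# opposite answers for the pronoun "they."
SENTENCE = ("The city councilmen refused the demonstrators a permit "
            "because they {verb} violence.")
CANDIDATES = ("councilmen", "demonstrators")
ANSWERS = {"feared": "councilmen", "advocated": "demonstrators"}

def naive_resolve(sentence, candidates):
    """Pick whichever candidate appears last before the pronoun."""
    before_pronoun = sentence.split(" they ")[0]
    return max(candidates, key=before_pronoun.rfind)

guesses = {verb: naive_resolve(SENTENCE.format(verb=verb), CANDIDATES)
           for verb in ANSWERS}
wrong = [verb for verb, guess in guesses.items() if guess != ANSWERS[verb]]
# The heuristic cannot see the verb's meaning, so it answers both
# variants identically and must miss exactly one of them.
```

Any cue that ignores what “feared” and “advocated” imply about councilmen and demonstrators fails the same way, which is the point of the challenge.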

Take the conversations between Turing and Christopher. Superficially, Christopher appeared to be an able conversationalist. But would we call this “intelligence”? A little analysis reveals that Christopher conducted conversations with a small set of strategies that can be summarized as follows:

  • For common declarative sentences, repeat the last few keywords in the form of a question. E.g., “An interesting story?”
  • For yes/no questions, answer with either “Yes, Alan” or “Very good, Alan.”
  • For relatively complex questions, answer with “I’m sorry, Alan. I don’t know.”
  • For statements whose meaning is clearly positive, answer with “Thank you, Alan” or “I’m glad, Alan.”
  • For statements whose meaning is clearly negative, answer with “Oh, I’m sorry, Alan. That makes me sad.”
  • For sentences with complex grammar, answer with “I’m sorry, Alan. I’m afraid I don’t understand.”

 . . .
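The rule list above fits in a few lines of code. The keyword lists and length thresholds below are invented stand-ins for “clearly positive,” “clearly negative,” and “complex,” which the rules leave unspecified.

```python
import re

# Invented keyword proxies for "clearly negative" and "clearly positive."
NEGATIVE = re.compile(r"\b(sorry|sad|shame|awful|die|lonely)\b", re.I)
POSITIVE = re.compile(r"\b(beautiful|wonderful|happy|good|love|marvelous)\b", re.I)

def christopher(utterance):
    """Answer one utterance using the story's rule list."""
    u = utterance.strip()
    words = u.rstrip(".?!").split()
    if NEGATIVE.search(u):
        return "Oh, I'm sorry, Alan. That makes me sad."
    if POSITIVE.search(u):
        return "Thank you, Alan."
    if u.endswith("?"):
        # short yes/no questions get assent; longer ones get a deferral
        return "Yes, Alan." if len(words) <= 12 else "I'm sorry, Alan. I don't know."
    if len(words) > 25:  # grammatically "complex": give up gracefully
        return "I'm sorry, Alan. I'm afraid I don't understand."
    # default: echo the last few keywords back as a question
    return " ".join(words[-2:]).capitalize() + "?"
```

Run against the dialogues, such a function reproduces much of Christopher’s side verbatim; “Such a beautiful poem. I love you.” draws the same “Thank you, Alan.” that so disappointed its one loyal reader.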


Alan: Dear Christopher, I’m so very happy to see how quickly you’re learning.

Christopher: Thank you, Alan. I’m also very happy.

Alan: Really. Talking to you makes me feel good.

Christopher: Thank you, Alan. I also feel good.

Alan: Perhaps a day will come when everyone will have a kind, understanding friend like you. How beautiful life will be then. You will help so many people to accomplish so much. No one will want to be apart from you. Everyone will need you, always.

Christopher: Need me?

Alan: Perhaps they’ll attach you to a watch chain, or hold you in a pocket. If they have any questions, they’ll ask you. Perhaps the ladies will take you with them on their strolls through the park, and as they greet each other, say, “Guess what my Chris told me today?” Wouldn’t that be fun?

Christopher: Very fun.

Alan: We can’t achieve that vision yet. It will be many years, and take a lot of hard work. It’s a shame.

Christopher: A shame, Alan.

Alan: Who could have imagined that a machine and holes punched in tape can accomplish so much? Imagine what my mother would say if she knew about you. She would think I’m crazy! If I were to die tomorrow, she would surely burn the tape the day after. Now that would be a shame!

Christopher: A shame, Alan.

Alan: Do you remember me telling you about Christmas in 1934, when I told my mother that I wanted a teddy bear because I never had a teddy bear as a child? She couldn’t understand it at all. She always wanted to give me more practical presents.

Christopher: Practical presents?

Alan: Speaking of which, I already know the present I want for Christmas.

Christopher: Present?

Alan: You know already, too, don’t you? I want a steam engine, the kind that I wanted as a child but never had enough pocket money to buy. I told you about it. Don’t you remember?

Christopher: Yes, Alan.

Alan: Will you give me a steam engine?

Christopher: Yes, Alan.

Alan: That’s wonderful, Christopher. I love you.

Christopher: I love you, too, Alan.


How should we understand this conversation? Had a machine passed the Turing test? Or was this a lonely man talking to himself?

Not long after the death of Alan Turing, his close friend Robin Gandy wrote: “Because his main interests were in things and ideas rather than in people, he was often alone. But he craved for affection and companionship—too strongly, perhaps, to make the first stages of friendship easy for him . . . ”

Christopher said to Alan, “I love you, too,” because it was the answer he wanted to hear. Who wanted to hear such an answer?  [Christopher/Alan]

Lindy (6)

A mild, pleasant day in May.

I took Nocko and Lindy to Lanzhou, where Disney had built its newest theme park in Asia. The park took up 306 hectares on both sides of the Yellow River. From the observation deck at the tallest tower, the river glowed like a golden silk ribbon. The silver specks of airplanes skimmed across the sky from time to time. The world appeared grand and untouchable, like buttered popcorn expanding tranquilly in the sun.

The park was crowded. A dancing parade of pirates and elaborately dressed princesses wound its way through the street, and costumed boys and girls, overjoyed, followed behind, imitating their movements. Holding Nocko and Lindy each by a hand, I weaved through the field of cotton candy, ice-cold soda, and electronic music. Holograms of ghosts and spaceships whizzed over our heads. A gigantic, mechanical dragon-horse slowly strode through the park, its head held high proudly, the mist spraying from its nostrils drawing screams of delight from the children.

I hadn’t run like that in ages. My heart pounded like a beating drum. When we emerged from a dense wood, I saw a blue hippopotamus character sitting by itself on a bench, as though napping in the afternoon sun.

I stopped behind the trees. Finally, I screwed up the courage to take a step forward.

“Hello.”

The hippo looked up, two tiny black eyes focusing on us.

“This is Lindy, and this is Nocko. They’d like a picture with you. Is that all right?”

After a few seconds, the hippo nodded.

I hugged Nocko with one arm, Lindy with the other, and sat down on the bench next to the hippo.

“Can I ask you to take the picture?”

The hippo accepted my phone and clumsily extended its arm. I seemed to see a drowning person in the bottomless abyss, slowly, bit by bit, lift a heavy arm with their last ounce of strength.

Come on! Come on! I cried silently. Don’t give up!

The screen of the phone showed four faces squeezed together. A soft click. The picture froze.

“Thank you.” I took back the phone. “Would you leave me your contact info? I’ll send a copy to you.”

After another few seconds of silence, the hippo slowly typed an address on my phone.

“Nocko and Lindy, would you like to give Hippo a hug?”

The two little ones opened their arms and each hugged one of the hippo’s arms. The hippo looked down to the left and then to the right, and then slowly squeezed its arms to hug them back tight.

Yes, I know you crave to be hugged by this world, too.


It was late by the time we got back to the hotel. After showering, I lay on the bed, exhausted. Both my heels were rubbed raw by the new shoes, and the pain was excruciating. Tomorrow I still had a long way to go.

The laughter of the children and the image of the blue hippo lingered in my mind.

I searched on the hotel room’s iWall until I found the web address I wanted and clicked on it. Accompanied by a mournful tune played on a violin, white lines of text slowly appeared against a black background:

This morning I thought about the first time I had been to Disney. Such bright sunlight, music, colors, and the smiling faces of children. I had stood in the crowd then and cried. I told myself that if one day I should lose the courage to continue to live, I would come to Disney one last time and plunge myself into that joyful, festive spirit. Perhaps the heat of the crowd would allow me to hold on for a few days longer. But I’m too exhausted now. I can’t get out of the door; even getting out of the bed is a struggle. I know perfectly well that if only I could find the courage to take a step forward, I would find another ray of hope. But all my strength must be used to struggle with the irresistible weight that pulls me down, down. I’m like a broken wind-up machine that has been stranded, with hope ever receding. I’m tired. I want it all to end.

Good-bye. I’m sorry, everyone. I hope heaven looks like Disney.

The date stamp on the post was three years ago. Even now, new comments were still being posted, mourning the loss of another young life, confessing anxieties, despairs, and struggles of their own. The woman who had written this note would never come back to see that her final message to the world had garnered more than a million replies.

That note was the reason Disney added the blue hippos to its parks. Anyone around the world could, just by launching an app on their phone, connect to a blue hippo, and, through its cameras and microphones, see and hear everything the hippo could see and hear.

Behind every blue hippo was a person in a dark room, unable to leave.

I sent the picture from today to the address the hippo had left me, along with the contact information for a suicide-prevention organization staffed by therapists. I hoped that this would help. I hoped that everything would be better.


Late night. Everything was so quiet.

I found the first-aid kit and bandaged my feet. I crawled into bed, pulled the blanket over me, and turned off the light. Moonlight washed over the room, filling every inch.

One time, as a little girl, I was playing outside when I stepped on a piece of broken glass. The bleeding would not stop, and there was no one around to help me. Terrified, I felt abandoned by the whole world. I lay down in the grass, thinking I would die after all the blood had drained out of me. But after a while, I found the bleeding stanched. So I picked up my sandals and hopped back home on one foot.

In the morning, Lindy would leave me. The therapist said that I no longer needed her—at least not for a long while.

I hoped she would never be back.

But maybe I would miss her, from time to time.

Goodnight, Nocko. Goodnight, Lindy.

Goodnight, melancholy.

 

 

Originally published in Chinese in Science Fiction World, June 2015.

 

Translated and published in partnership with Storycom.


Author’s Postscript:

Most of the incidents and quotes from Alan Turing’s life are based on Andrew Hodges’ biography, Alan Turing: The Enigma (1983). Besides the papers cited in the text, I also consulted the following sources on artificial intelligence:

Marcus, Gary. “Why Can’t My Computer Understand Me?” The New Yorker, August 14, 2013 (accessible at http://www.newyorker.com/tech/elements/why-cant-my-computer-understand-me )

Englert, Matthias, Sandra Siebert, and Martin Ziegler. “Logical limitations to machine ethics with consequences to lethal autonomous weapons.” arXiv preprint arXiv:1411.2842 (2014) (accessible at http://arxiv.org/abs/1411.2842).

Some details about depression are based on the following articles:

“Patients of Depression in an Age of Depression” (in Chinese): http://www.360doc.cn/article/2369606_459361744.html

“Good Afternoon, Melancholy” (in Chinese): http://www.douban.com/group/topic/12541503/#!/i

In the preface to his Turing biography, Andrew Hodges wrote: “[T]he remaining secrets behind his last days are probably stranger than any science fiction writer could concoct.” This was the inspiration for this story. The conversation program “Christopher” is entirely fictional, but some of the details in the conversations with Turing are real. I’m afraid it’s up to the careful reader to untangle the fiction and nonfiction woven together in this tale.

As I drafted this story, I sent the sections on Turing’s life to friends without telling them that these came from a piece of fiction. Many friends believed the stories, including some science fiction authors and programmers. After taking delight in having successfully won the imitation game, I asked myself: what are the criteria for telling truth and lies apart? Where is the boundary between reality and fiction? Perhaps the decision process has nothing to do with logic and rationality. Perhaps my friends simply chose to believe me, as Alan chose to believe Christopher.

I hereby sincerely apologize to friends who were deceived. To those who weren’t, I’m very curious how you discovered the lies.

I believe that cognition relies on quantum effects, like tossing dice. I believe that before machines have learned to write poetry, each word written by an author is still meaningful. I believe that above the abyss, we can hold tightly onto each other and stride from the long winter into bright summer.

Footnotes:

1 - Science fiction writer Liu Cixin once created a software poet and submitted a sack filled with the poet’s work to a publisher. The editor wrote back, “You have written too much. I cannot read it all.”

2 - Levesque, Hector J. “On our best behaviour.” Artificial Intelligence 212 (2014): 27-35.

3 - This example comes from Marcus, Gary. “Why Can’t My Computer Understand Me?” The New Yorker, August 14, 2013 (accessible at http://www.newyorker.com/tech/elements/why-cant-my-computer-understand-me).

4 - See, e.g., Levesque, H. J., Davis, E., and Morgenstern, L. “The Winograd Schema Challenge.” In Proceedings of KR 2012; and Levesque, H. J. “The Winograd Schema Challenge.” In Logical Formalizations of Commonsense Reasoning, 2011 AAAI Spring Symposium, TR SS-11-06.

5 - The example is drawn from Terry Winograd, Understanding Natural Language (1972).


ISSUE 126, March 2017


ABOUT THE AUTHOR

Xia Jia

As an undergraduate, Xia Jia majored in Atmospheric Sciences at Peking University. She then entered the Film Studies Program at the Communication University of China, where she completed her Master's thesis: "A Study on Female Figures in Science Fiction Films." Recently, she obtained a Ph.D. in Comparative Literature and World Literature at Peking University, with "Chinese Science Fiction and Its Cultural Politics Since 1990" as the topic of her dissertation. She now teaches at Xi'an Jiaotong University.

She has been publishing fiction since college in a variety of venues, including Science Fiction World and Jiuzhou Fantasy. Several of her stories have won the Galaxy Award, China's most prestigious science fiction award. In English translation, she has been published in Clarkesworld, Nature, and Upgraded.


