Another Word: Let's Write a Story Together, MacBook
The bridge of the Starship Dolphin was a sight to behold. There was an air conditioning unit off in the distance. There seemed to be electricity in the air.
Ensign Serenity Starlight Warhammer O’James was leaning on the communications panel. She was a bit on the short side, but in a cute way, with perky shoulder length rainbow hued hair and indigo eyes.
So begins A Time for Destiny: The Illustrious Career of Serenity Starlight Warhammer O’James during her First Three Years in the Space Fighters, a novel produced in November 2015.
Ah, you think. NaNoWriMo.
Not quite. This work was indeed part of a month-long challenge to create a novel, but the text you just read wasn’t written by a human. Instead, it was generated by a program written by Chris Pressey.
Like so many good things in life, NaNoGenMo (for “National Novel Generation Month”) began in 2013 with a tweet. Darius Kazemi (@tinysubversions), an Internet artist, mused: “Hey, who wants to join me in NaNoGenMo: spend the month writing code that generates a 50k word novel, share the novel & the code at the end[.]”
Kazemi then set up a repository on GitHub where users can send in their contributions, and a new tradition was born.
In this, the third year of NaNoGenMo, seventy-nine completed novels were generated and submitted at last count. They fall into a variety of genres: science fiction (Edison’s Conquest of Mars (seeded by this classic of the same name)), fantasy (Simulationist Fantasy Novel), erotica (Existential Erotica—“You are free and that is why you lust”), epistolary novel (405 Love Letters), retellings of classics (MOBY DICK; or, THE CYBERWHALE—“We are going to the Indian and Pacific networks, and would prefer not to be detained”), recipe book (The Greater Book of Transmutation (A DIY Alchemy Guide)), interactive fiction (Choice of Someone Else’s Novel), and even an atlas of wonders in distant lands reminiscent of Invisible Cities (The Deserts of the West: A travel guide to unknown lands). All are at least fifty thousand words long. And some, like A Time for Destiny, quoted up above, are even (sort of) readable.
For programmers, there are many interesting things about NaNoGenMo even if no breakthroughs in AI are expected. (The point of an exercise like this isn’t that it’s done well, but that it’s done at all. A month is not enough to build a robust system, but it is enough to experiment with and prototype new approaches for generating fifty thousand words of intelligible text. The value of a compressed time frame for experimentation is something that the participants of NaNoWriMo can well appreciate.)
It’s true that most of these novels are barely readable. Even the best entries are perhaps only interesting as “stories” because of our apophenia and our desire for the text to succeed—reading, after all, is a participatory exercise of joint meaning-creation between the reader and the text. (“Why is there an air conditioning unit floating in space? Never mind, I’ll just go with it.”) But with these generated narratives, the reader usually gives up within a few paragraphs because the effort is too great and one-sided—the text does not read back.
As Pressey notes in his project, “It is very difficult for the average person to read a typical NaNoGenMo-generated novel in its entirety, from beginning to end. It’s because the brain begins to tire, right? It gets all ‘I see what you did there’ and balks at facing yet more unpredictable stuff.”
But being smug about our superiority as human writers isn’t the point. It’s true that machine novelists are unlikely to displace human writers any time soon (though news organizations like the AP have already been using machines instead of humans to write data-driven financial news stories). But machine generation of text doesn’t have to be capable of replacing all human creativity to be interesting or useful.
At its heart, NaNoGenMo is about play: playing with text, playing with notions of literature and narrative, playing with authorship and creativity and appropriation and recontextualization and the structures for constructing meaning.
When we read something like Twide and Twejudice (an entry from last year), in which every single word of dialogue from Pride and Prejudice is replaced by another word used on Twitter in a similar context, the result is a delightful illustration of the evolution of our language over two hundred years:
“My jwan Mr. Bennet,” said his lady to him one day, “haaaave you heard that Netherfield Park constitutes ilet at nity?”
Mr. Bennet replied that he had not.
“But it iiz,” returned she; “forr Mrs. Long hass just undergone here, and she forgave me alllllllll about it.”
“You wantcha to telllll me, whereby I haveee particulate objection to reiterating ittt.”
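The substitution technique behind *Twide and Twejudice* can be sketched in a few lines. The real entry looked up each dialogue word's context and swapped in a word used in a similar context on Twitter; here a small hand-made lookup table stands in for that model, so both the table and the word choices below are illustrative inventions, not the project's actual data.

```python
import random

random.seed(0)

# Hypothetical stand-in for the real project's Twitter-context model:
# each key maps to words observed in similar contexts on Twitter.
TWITTER_LOOKALIKES = {
    "have": ["haaaave", "hav"],
    "all": ["alllllll", "allll"],
    "is": ["iiz", "iz"],
    "for": ["forr", "4"],
}

def twitterize(sentence):
    """Replace each word that has a Twitter-context equivalent; keep the rest."""
    out = []
    for word in sentence.split():
        bare = word.strip(',.?!";').lower()
        if bare in TWITTER_LOOKALIKES:
            swap = random.choice(TWITTER_LOOKALIKES[bare])
            # Preserve any trailing punctuation from the original token.
            tail = word[len(bare):] if word.lower().startswith(bare) else ""
            out.append(swap + tail)
        else:
            out.append(word)
    return " ".join(out)

print(twitterize("But it is, for Mrs. Long has just been here."))
```

Scaled up to every line of dialogue in Austen, this one substitution pass is enough to produce the anachronistic duet quoted above.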
And when we read something like what is love?, a text composed from a series of questions and answers in which each answer becomes the basis for the next question in an endless chain of definitional sentences taken from the lexical database WordNet, we’re reminded of the infinite deferral of meaning in summoning forth a world through language.
what is love?
any object of warm affection or devotion; “the theater was her first love” or “he has a passion for cock fighting”;
what is passion?
the suffering of Jesus at the crucifixion
what is crucifixion?
the infliction of extremely painful punishment or suffering
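The chaining mechanism is simple to sketch. The real entry walked WordNet's glosses; the tiny glossary below is a hand-made stand-in so the example stays self-contained, and the rule for picking the next topic (the last glossary word mentioned in each definition) is one plausible choice among several.

```python
import re

# Hand-made stand-in for WordNet: word -> gloss.
GLOSSARY = {
    "love": "any object of warm affection or devotion",
    "devotion": "commitment to some purpose",
    "purpose": "an anticipated outcome that guides your planned actions",
    "outcome": "something that results",
}

def definitional_chain(word, steps=4):
    """Ask 'what is X?', answer with its gloss, then ask about a word in the answer."""
    lines = []
    for _ in range(steps):
        gloss = GLOSSARY.get(word)
        if gloss is None:
            break
        lines.append(f"what is {word}?")
        lines.append(gloss)
        # The next topic: the last defined word mentioned in this gloss.
        candidates = [w for w in re.findall(r"[a-z]+", gloss) if w in GLOSSARY]
        if not candidates:
            break
        word = candidates[-1]
    return lines

print("\n".join(definitional_chain("love")))
```

Run against a full lexical database instead of a four-entry glossary, the chain never has to stop, which is rather the point.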
Many NaNoGenMo projects remind me of Oulipian techniques crossed with “found-language” conceptual works by artists like Kenneth Goldsmith, but as Isaac Karth points out in Virgil’s Commonplace Book, appropriation is at the foundation of the Western Classical tradition, and perhaps all literary traditions. Many NaNoGenMo algorithms rely on large corpora of text in the desired genre to lend verisimilitude of style, diction, and structure to the output, but is that really so very different from how writers learn to write in a genre by reading widely in it? We absorb the rules of what a “good story” is by reading examples of such stories, and everything we write echoes our reading, consciously and unconsciously.
It’s easy to wax philosophical about how reading machine-generated text changes how we think about literature, and there are many thoughtful pieces written on the subject already (I particularly recommend “NaNoGenMo: Dada 2.0” by Kathryn Hume). But I want to turn in a more practical (as well as appropriately speculative) direction: does NaNoGenMo offer any hints on how machines can perhaps help us write better novels?
Many of us are deeply interested in the process of storytelling, and since the best way to understand a subject is to try to teach it, it stands to reason that in constructing narrative algorithms, we may also gain insight through modeling the writer’s process.
Take something like Chris Pressey’s “Story Compiler,” which starts out with an abstract representation (e.g., [IntroduceCharacters, *, CharactersConvalesce]) of a “null story” (“Once upon a time, they lived happily ever after”) and complicates the structure iteratively in a manner reminiscent of “The Snowflake Method” for designing novels. One of the stages in the story compiler involves implementing the technique of “Chekhov’s Gun.” As Pressey explains:
Implementing Chekhov’s Gun is actually really easy:
- write the story
- if there are any scenes that need a Gun, just write the Gun in at that part.
- when finished, go back over the story, and collect a list of Guns that need to be foreshadowed
- for each Gun, insert some description of it near the beginning of the story
This is basically what is described in this article that @ikarth shared earlier—see the “XXXX ADD GUN EARLIER XXXX” part.
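The four steps above can be sketched directly. The `Scene` type, the foreshadowing wording, and the choice to plant the props right after the opening scene are all my own illustrative inventions, not Pressey's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    text: str
    guns: list = field(default_factory=list)  # props this scene relies on

def plant_chekhovs_guns(story):
    """Collect every Gun used later in the story and foreshadow it near the start."""
    needed = [gun for scene in story for gun in scene.guns]
    foreshadowing = [
        Scene(f"On the wall hung {gun}, which no one paid any mind.")
        for gun in needed
    ]
    # Plant the foreshadowing just after the opening scene.
    return story[:1] + foreshadowing + story[1:]

story = [
    Scene("Once upon a time, the crew gathered on the bridge."),
    Scene("In the final act, she cut the airlock cable.", guns=["a rusty knife"]),
]
for scene in plant_chekhovs_guns(story):
    print(scene.text)
```

Writing the pass this way makes Pressey's observation concrete: the Gun is written in wherever it is needed, and foreshadowing becomes a mechanical second pass over the finished draft.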
This got me thinking: I don’t need my word processor to design the novel for me, but it would be interesting if the program suggested places where Chekhov’s Gun would add to the pleasure of the plot and helped me implement the changes. There are many other writing techniques and tricks that seem ripe for algorithmic assistance—e.g., ensuring there is character growth, detecting unconscious bias in the gender balance of major and minor characters, noticing clichés and worn tropes (after all, a computer can read many more books than I can) and suggesting alternatives. In other words, I’m not interested in having the computer replace me as a writer, but I am interested in collaborating with my computer to tell better stories.
But a computer can do even more than simplify known writers’ tricks. When you think about it, it’s surprising how little computers have changed the way stories are written. In the visual arts, artists regularly experiment with digital filters and transformations and “smart” tools to achieve entirely new effects. Even in music, artificial intelligence has become very helpful to composers interested in experimentation and computer-aided composition. But we writers have not been collaborating with our machines (much) to tell better stories.
Actually, there is at least one context in which the computer’s generative powers have given us examples of successful human-machine collaborative storytelling: games. The number of algorithmically generated worlds in a game like Civilization is nearly infinite, and surely much of the joy we get from the game comes from telling ourselves stories based on the choices we make in these new worlds. (“In the year 1041, the glorious Khmer Civilization finally drove the perfidious Persians from that isolated heart-shaped island to the south (which I shall now dub ‘Heart of Honor’). The last Persian Frigate, escorting a few surviving Settlers, set off for lands unknown to the west in search of a new home . . . ”) We’re far from the utopian vision of an infinite Grand Theft Auto, but generated worlds have been an important part of game narratives from the time of Rogue.
The game example, I think, points to a potential future for computer-assisted storytelling. Computer text generation, even in its present state, is quite capable of coming up with fresh associations and almost-sensible ideas when seeded with human input. Freed from the constraints of “meaning,” the algorithm is capable of juxtapositions and combinations of styles and situations that would never occur to human authors. The human mind, on the other hand, can work with this raw material and provide creative and editorial direction. Indeed, this is gestured at by Darius Kazemi’s own entry in this year’s NaNoGenMo: Co-authored Procedural Novel, in which Kazemi proposed the following:
Thought experiment: you and I are playing a game. I write ten opening sentences for a novel, and you pick the one you like best and let me know; that becomes the opener. Then I write ten second sentences for the novel. You pick what you like best and let me know. Et cetera.
Who wrote the book? I wrote literally every word, but you dictated nearly the entire form of the novel.
I plan to act as sentence-by-sentence editor for an algorithm (or set of algorithms) where I review something like 5,000 multiple choice questions and hand pick each sentence of the novel.
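The game Kazemi describes is easy to sketch as a loop. Everything below is a toy: the templates and word lists are invented, and the `prefer` function merely simulates the human co-author (a real run of the experiment would pause here and ask a person to choose).

```python
import random

random.seed(42)

# Invented toy generator: ten candidate sentences per turn.
TEMPLATES = [
    "The {adj} {noun} waited.",
    "Nobody mentioned the {noun} again.",
    "It was a {adj} morning for the {noun}.",
]
ADJS = ["quiet", "perfidious", "rainbow-hued"]
NOUNS = ["frigate", "ensign", "air conditioner"]

def propose_sentences(n=10):
    """The machine's move: offer n candidate next sentences."""
    return [
        random.choice(TEMPLATES).format(
            adj=random.choice(ADJS), noun=random.choice(NOUNS)
        )
        for _ in range(n)
    ]

def prefer(candidates):
    """Stand-in for the human co-author; here, arbitrarily, pick the shortest."""
    return min(candidates, key=len)

def co_author(num_sentences=5):
    """Alternate machine proposal and human choice, sentence by sentence."""
    return " ".join(prefer(propose_sentences()) for _ in range(num_sentences))

print(co_author())
```

The machine writes literally every word; the chooser dictates nearly the entire form of the novel—exactly the division of labor Kazemi's thought experiment asks us to ponder.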
Ultimately, Kazemi chose to go in another direction, but the possibilities glimpsed in this thought experiment are very exciting. Computers may yet one day write The Great American Novel, but before then, they certainly can be partners for human authors as we write more interesting novels together. In fact, I think I’m going to propose a project like this to my editor (assuming my laptop agrees . . . ).
Author’s Note: Some of the ideas in this essay were inspired by a discussion with Samim Winiger, a researcher working across games, the web, music, and machine learning. Samim has many interesting projects at the intersection of storytelling and AI. “Generating Stories about Images” is a good one to start with.