Non-Fiction
The Future, One Thing at a Time
It’s conventional wisdom in science fiction that the future doesn’t come one thing at a time. This idea, which is sometimes called Campbell’s Rule (or Campbell’s Exception), is explained in these terms in the book On Writing Science Fiction by George Scithers et al.: “You can never do merely one thing. Our world a century hence could be almost as alien as another planet; each change triggers countless others.” As fondly as the book is remembered, history shows that this rule simply isn’t accurate. Different fields of science and technology do not march in lock-step; the world may change more rapidly or slowly for different places and people, and sometimes it really does change one thing at a time.
As literary advice, the rule was probably meant to prevent the sort of shallow stories that focus on a single change or technology in isolation. These are particularly common among beginners and among non-genre writers dipping their toes into the genre, which may be why SF writers and editors hold to this rule so fiercely. The problem is that it embodies an attitude towards technology that forms a big part of why so much older SF feels dated today. It may also lead to a lot of today’s most cutting-edge SF seeming out-of-step in just a few years: just as we laugh at the giant computers of old SF novels and the voice-call-only communicators in the original Star Trek, many novels published just a decade ago are already beginning to look left behind. (Nearly all books that depict online social spaces, for instance, portray environments that are nearer to Second Life than to Facebook, while hackers are shown as outlaw programmers rather than basement-dwelling script kiddies.)
Why do we believe that the future can’t happen one thing at a time? In my opinion, it’s based on three fundamental fallacies about technological progress.
First, the fallacy that technological progress is inevitable. This is the idea that the future must be different from the past, but in many ways our lives are fundamentally the same as they were 20, 50, or 100 years ago. Consider a typical morning: we are likely awakened by an alarm clock (first adjustable mechanical alarm clock patented 1847); eat a breakfast of eggs, toast or, if we are feeling particularly futuristic, corn flakes (patented 1896); shower in water heated by electricity (first electric water heater patented 1889) or gas (1868); and put on clothing made of cotton (made affordable by the cotton gin in 1793) mixed with, perhaps, some synthetics such as nylon (1935), Lycra (1959) or Gore-Tex (1976).
Many technologies basically stop developing, in some cases surprisingly quickly: the development of canned and frozen foods led SF writers to imagine endless variations on technologies for synthesizing and preserving foods, but the food in your pantry and freezer was made in ways that have scarcely changed since the days of Louis Pasteur or Clarence Birdseye. Technologies can even disappear: Although digital watches were once a powerful symbol of futurism, if you’re under 30 you’re unlikely to put one on before leaving the house, preferring to get the time from your phone instead. (Some of these abandoned technologies eventually make a comeback: If you stop for a bathroom break you’re using a toilet that was pioneered in Minoan Crete and then forgotten for more than two millennia.)
When technologies do continue to develop, most of the changes that occur after the initial flurry of innovation are basically incremental, things that improve the experience of using a technology without altering its basic function. The alarm clock, for instance, has undergone numerous changes—electrification, digital readout, the ability to play music and radio broadcasts—but what it does, and the role it plays in our lives, is unchanged. There are some genuinely disruptive technologies which change the world on a large scale: The classic example is the automobile, which, as Scithers notes, shaped the 20th century in dozens of ways, from hollowing out cities to causing wars. But even these technologies soon settle into a state of slow, incremental change. When we drive to work we are using a machine that, while safer, more fuel-efficient, and easier to drive, is fundamentally little different from the Model T.
Moreover, the effects of things like the automobile create their own inertia; they often act as a brake on the changes other technologies might cause. Look at personal computers and the Internet. Not so long ago we imagined that, because of these advances, we would all be working from home, with employers hiring from a global pool of remote employees. Instead we still drive to work in offices (where we may use computers less powerful than the ones we have at home), and our children sit in rows in classrooms, paper and pens in hand, all because cities have been planned for cars and our educational system has been built to meet the needs of 19th-century industrial technology.
Truly disruptive technologies are the exception, not the rule. Few have anywhere near the impact of the automobile or even the Internet. A more typical example would be the microwave, a device that is uncannily similar to the “instant cookers” found in so many imagined futures but which has nevertheless utterly failed to replace the oven. When technology does change society, it’s much more common that only some sectors are affected—and it’s rarely possible to predict which ones and in which ways. Despite being funded by the military, the Internet has had little effect on the life of the average soldier. Nor has it much changed academia, despite the fact that academics were the first to use it. What it changed instead was the music industry. The unexpected collision of the Internet with cheap memory and digital compact discs (which the industry itself introduced, goosing profits in the short term but inadvertently digging its own grave) made it possible for consumers to rip and share songs, leading an entire generation to think of music as something they only have to pay for if they feel like it. Even within that industry, change has been uneven: Classical and jazz have been much less affected, since listeners tend to be older and sound quality is more highly valued, while the sheet-music business is nearly dead because files are smaller and no quality is lost in copying.
The second fallacy about technological progress is that all technology progresses at the same rate. This likely has its origin in the roots of SF in the 19th century, in which a small number of basic technologies—primarily efficient motors, mass production, and electric power—led to fairly uniform technological development across much of society. The last 80-odd years, however, have seen a much more varied rate of progress, as industrial technology has largely stalled and other fields, in particular computer and communications technology, have developed in ways that were almost unimaginable based on their starting points. This is why these fields rarely appear in older SF, or appear only in forms that bear little resemblance to how they actually evolved. For all the giant computers that were hell-bent on controlling the world (or were already in control of it), it’s hard to find an example as late as the 1970s of a personal computer on a par with an Apple II, much less an iPad.
Technological progress is also often dependent on factors other than technology itself. One way in which our world would be almost unrecognizable to anyone visiting from more than 25 years in the past is that it is nearly cashless. While bills and coins are still in circulation, it’s rare that we have to pay cash for anything. In that case, though, the technology had been around for years before it had much of an effect. Credit cards were first introduced in 1959, while the magnetic stripe that made them less of a hassle to use appeared in 1970. It was only after a 1970s Supreme Court ruling allowed banks to operate under the usury laws of their home state—effectively allowing them to set their own interest rates—that credit cards became profitable, rather than the loss-leader service they had previously been. Today it’s hard to imagine our world without credit cards. To name just one example, without them the Internet would be much more like the “world mind” imagined by SF writers than the shopping mall/arcade/peepshow (with a small library attached) that it is today.
This isn’t to say that technology doesn’t change our lives, but the ways in which it does so are subtle, unpredictable, and different for various places and people. This leads to the third fallacy of technological progress in SF: Everyone uses technology in the same way. Most early SF was written by (or from the point of view of) scientists and engineers, and the genre still bears the stamp of that viewpoint today; also, SF writers tend to be enthusiastic early adopters of new technologies, which colors their view. But widespread adoption of a technology (and therefore its continued development) can often depend on the unexpected uses found for it by unexpected audiences. These audiences may also choose not to adopt technologies until they reach a point where they are useful to them.
A good example is the landline telephone, something which we consider to be a fundamental feature of modern life but which never caught on in much of Africa due to a number of factors that prevented the necessary investment in infrastructure. That didn’t mean that there was no desire for the technology: When cell phones, which have a much smaller infrastructure footprint, became affordable, they spread across Africa, leapfrogging generations of communications technology and leading to a level of adoption that for some years was higher than in most industrialized nations.
The same phenomenon may happen on a smaller scale, and even in industrialized countries. While it was the middle class that led adoption of the personal computer and the Internet in North America, smartphone use rose first and fastest among people with lower incomes, for whom it was not an additional computer but a first one, cheaper and more practical than any PC. In rare cases groups may be so selective as to adopt only a handful of technologies, something to which anyone who’s ever seen an Amish man or woman gliding by on rollerblades can testify.
Different groups may also innovate according to their own needs. While we use Facebook to share ironic motivational posters and get in touch with old classmates, Kenyan herders use iCow to connect themselves (and their cows) to vets, breeders, and buyers of meat and milk. As a result, the look of the future may vary depending on where you are and whom you’re with. Similarly, teenagers have often been leaders in determining how new technologies will be used: sharing music files over the Internet, for example, rather than simply ripping the CDs they already own onto their iPods, or using their cell phones for everything but making voice calls.
What’s more significant than how technology develops is how it changes our lives. That, too, is variable and unpredictable. For example, while digital technologies have changed how we do our jobs, what we do remains largely the same; we might do it more quickly, more easily, or remotely, but few of us do jobs that our grandparents wouldn’t recognize. (More people are “the person who fixes the machine when it breaks,” but the job description itself goes back to Daedalus.) In fact, the fastest-growing segment of the economy is the service sector, which is probably the most resistant to technological change.
What has changed, though, is our leisure time. Not only do we engage in entirely new pursuits (such as video games) and pursue old ones in new ways (listening to music and watching video, once inherently collective activities, are now mostly private ones), our whole attitude towards leisure is different. Where we once defined ourselves through our work, we now do so largely through our play—something that was made possible by the development of digital and communications technologies, but which only a few SF writers imagined. The role-playing game featured in Larry Niven and Steven Barnes’s Dream Park, published in 1981, is played live with actors, props, and holograms and is available only to the wealthy, a far cry from the millions of players of today’s MMOs. Arthur C. Clarke, on the other hand, not only imagined communications satellites in 1956 but also predicted that these satellites would be used to broadcast porn—though his notion that it would be in the form of documentaries about tantric carvings on the walls of Hindu temples is rather quaint.
The emphasis on play over work has had a significant effect on business and technology. While RIM, the maker of the BlackBerry, bet heavily on business needs continuing to drive the mobile device industry, Apple bet on consumers, and we all know how that worked out. What’s interesting is that consumers have become the main influence on how technology develops at a time when our purchasing power has actually declined. This, too, is an unexpected result of technological change, as microprocessors, miniaturization, fiber optics, and other developments have worked together to make media and electronics a cheaper treat and an ever-larger part of our non-working lives. (Teens now spend an average of 10 hours per day consuming screen media.)
What all this means is that we need to re-evaluate just what SF is supposed to do. One of the fundamental premises of classic SF—If this goes on—ignores the fact that many technologies do not keep going on. Technological, cultural, and economic forces eventually move them into long periods of equilibrium or, at best, slow and incremental change. This is true of most industrial technologies; one of the most popular models of airplane in use today, the 747, first flew in 1969. We’d be foolish to think that it’s not true of computers, genetic engineering, or any other technology you could name. Moreover, when things do go on they do so unpredictably. The complex interaction between culture, economics, politics, and any number of other factors means that it’s impossible to guess more than two steps ahead with any accuracy. If SF isn’t able to predict the future, it can take on a more important role, one closer to that currently played by fantasy: to imagine how new technologies, as well as social, cultural and political changes, can enable us to tell stories that we couldn’t tell before, and to shed new light on what it means to be human.
Matthew Johnson lives in Ottawa with his wife Megan and their sons Leo and Miles, where he works as a media educator and writes fantasy and SF when time and circumstances permit. His novel Fall From Earth was published in 2009, and a collection of his short fiction will be published in 2014 by ChiZine Publications. His most recently published stories are "The Afflicted" (Fantasy & Science Fiction, July-August 2012) and "The Last Islander" (Asimov's Science Fiction, September 2012).