Another Word: The Singularity is Dead. Long Live the Singularity!
The Singularity has become an essential trope of science fiction: we shall one day cross a threshold of technological progress, when artificial superintelligences are created, and these in turn will be capable of creating other artificial minds of even greater intelligence, resulting in an accelerated acceleration of innovation. New technologies and scientific discoveries will follow with unpredictable, incomprehensible speed.
Some see a frightening future in this prospect, where we unaugmented humans are left dizzily watching the world rush off without us. Others, like Ray Kurzweil, put their every dream and hope into this narrative, proclaiming that immortality is just on the other side of the next tech bubble. Take your vitamins, don’t smoke, and you’ll live long enough to be immortal. But both narrative poles, and all the gradations between them, overlook a fundamental problem: the Singularity has come and gone. It was born in 1900 and it died in 1936.
To see this, we need to reflect for a moment on the fact that the concept of the Singularity assumes a very special, very optimistic view of mathematical reasoning. A view that is provably false. A view that is known among mathematicians and logicians as “Hilbert’s Dream.”
In 1900, the mathematician David Hilbert gave an address at the Second International Congress of Mathematicians. The auspicious year moved Hilbert to aim for an inspirational talk, one that looked ahead to the new century and called upon mathematicians to accomplish great things. Hilbert presented ten unsolved problems in his talk, and later published these along with thirteen others. Hilbert’s Twenty-Three Problems were wildly successful. As Hilbert hoped, they inspired generations of mathematicians, and—best of all—many of these problems were solved in the Twentieth Century.
Implicit in two of the Problems were very general demands for all mathematical reasoning. The Second Problem called for a proof of the consistency of arithmetic. We call reasoning consistent if it is not possible with that reasoning to prove something and also prove its denial. Hilbert was asking that we prove that arithmetic could never yield a contradiction. If we had such a proof, then we could know that our reasoning could never go wrong. For example, we could know that our reasoning would never give us a proof that 2+2=5, along with our everyday proofs that 2+2=4.
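In the shorthand of modern logic (my reconstruction in today’s notation, nothing from Hilbert’s own address), the consistency of Peano arithmetic, PA, is the claim that no sentence and its denial are both provable:

\[
\mathrm{Con}(\mathsf{PA}):\quad \text{there is no sentence } \varphi \text{ with } \mathsf{PA} \vdash \varphi \text{ and } \mathsf{PA} \vdash \neg\varphi.
\]

Hilbert’s Second Problem asks for a proof of Con(PA), and he hoped the proof could be carried out by elementary, uncontroversial means.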
Hilbert’s Tenth Problem poses a technical question about Diophantine equations: to find a procedure that determines whether any given such equation has a solution in whole numbers. But implicit in this question, and explicit in some of his later lectures, is another, very general demand: to construct an effective method that, given any statement of logic or arithmetic, will tell us whether that statement can be proved. This is sometimes called decidability, because what he was asking is whether we can have a mechanical procedure or algorithm to decide every such question.
A dynamic, brilliant, and optimistic man, Hilbert was fed up with the popular strain of mysticism sweeping Europe at the time. “We cannot know” had become a catchphrase, a rallying cry of lazy obfuscators. Hilbert’s reply: we can know, we must know, we will know. And this is captured in his dream: together, decidability and consistency would mean that all the theorems of arithmetic would be available to us, and we could be confident that all those theorems are true. Only time and memory would constrain our search for the answers to all our arithmetic problems. Extend these results to other branches of mathematics, and all the fruits of reason would be ours to command.
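To see concretely what Hilbert was asking for, it helps to look at a small domain where his dream actually holds. Propositional logic, the logic of “and,” “or,” and “not,” is decidable: brute-force truth tables settle every question, and the check always terminates. Here is a minimal sketch in Python (my illustration, nothing of Hilbert’s); what he wanted was the analogue of this loop for all of arithmetic:

    from itertools import product

    def is_tautology(formula, variables):
        """Decide a propositional formula by checking every truth
        assignment. The loop always terminates, so propositional
        logic is decidable."""
        for values in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if not formula(assignment):
                return False      # found a falsifying assignment
        return True               # true under every assignment

    # "(p and q) implies p" is a tautology:
    print(is_tautology(lambda a: not (a["p"] and a["q"]) or a["p"], ["p", "q"]))

For arithmetic, with its quantifiers ranging over infinitely many numbers, no such terminating check was known, and Hilbert’s demand was to find one.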
Why does this matter to the Singularity? Consider what the proponents of the Singularity imagine. A computer of immense speed and memory is created. Some breakthrough in software gives it intelligence. So what does it do? If the Singularity fantasy is correct, the supercomputer reasons out the next steps in science and mathematics, enabling it to solve new scientific problems, including the problem of creating more powerful artificial intelligences. Such reasoning must be consistent and effective, if it is to accomplish this end. After all, we are supposing that from reasoning alone, the supercomputer solves the problem of how to create ever more intelligent offspring, and other technological marvels. Hilbert’s Dream must be realized, if our superintelligence is going to be capable of igniting the Singularity.
Leap now from 1900 to the year 1930. A young, little-known logician named Kurt Gödel gave a talk at a conference on the theory of knowledge. Gödel was arguably the greatest logician who ever drew breath. He had a wily, frighteningly creative mind, prone to paranoia, that always led him to the most extreme questions: questions about the nature of truth, of infinity, of time.
The talk was sparsely attended, and most of the audience had no idea what was happening before them. For the young Gödel demonstrated the most audacious trick of logic ever performed: he showed how to turn mathematical formulas into numbers, on which one can then perform further mathematics. The result is that Gödel developed a technique to use mathematics to describe itself.
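A toy version of the trick (my simplification; Gödel’s actual coding was more elaborate) fits in a few lines of Python. Treat a formula as a string of symbols, and fold the whole string into a single integer using prime factorization. Because factorization is unique, the formula can always be recovered from its number, and so statements about formulas become statements about numbers:

    SYMBOLS = "0S+*=()~&v>x"      # a small alphabet for arithmetic formulas

    def primes():
        """Yield 2, 3, 5, 7, ... by trial division (fine for short formulas)."""
        n = 2
        while True:
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                yield n
            n += 1

    def godel_number(formula):
        """Encode a formula: the i-th symbol s contributes prime_i ** code(s)."""
        g = 1
        for p, symbol in zip(primes(), formula):
            g *= p ** (SYMBOLS.index(symbol) + 1)
        return g

    print(godel_number("0=0"))    # 2**1 * 3**5 * 5**1 = 2430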
Gödel then revealed what is today called a Gödel Sentence. In arithmetic it is possible to construct a sentence that means, “This sentence is not provable.” The consequences are stunning: if this sentence can be proved, arithmetic is inconsistent (because it would have proved a falsehood). If this sentence cannot be proved, then it is true, and thus there are truths of arithmetic that cannot be proved—a property we call incompleteness.
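In modern notation (a standard textbook formulation, not the one Gödel wrote down in 1930), the Gödel Sentence G is constructed so that arithmetic itself proves the equivalence

\[
\mathsf{PA} \vdash\; G \leftrightarrow \neg\,\mathrm{Prov}_{\mathsf{PA}}(\ulcorner G \urcorner),
\]

where ⌜G⌝ is the Gödel number of G, and Prov is the arithmetic formula saying “the sentence coded by this number has a proof.”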
Most of us hope that arithmetic is consistent. After all, we’ve been doing arithmetic for a long time, eons, and no one has ever proved that 2+2=5. So we conclude, optimistically, that Gödel proved there are truths of arithmetic that cannot be proved. That is why the result is now called Gödel’s First Incompleteness Theorem.
The theorem generalizes to everything the rest of us would consider mathematics. Furthermore, it has been shown that not just the Gödel Sentence but other mathematical statements are unprovable; Goodstein’s theorem, a simple-sounding claim about sequences of natural numbers, is one example: it is true but cannot be proved in ordinary arithmetic. This is a shocking result: for every consistent mathematical system of significant power, there are truths of that system that we cannot reach.
Gödel later extended his result, in a way that answers Hilbert’s Second Problem. He proved that no consistent mathematical system of sufficient power can prove its own consistency. This is now known as Gödel’s Second Incompleteness Theorem.
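In the same shorthand as before, writing Con(PA) for the arithmetic sentence asserting consistency, the Second Incompleteness Theorem reads:

\[
\text{if } \mathsf{PA} \text{ is consistent, then } \mathsf{PA} \nvdash \mathrm{Con}(\mathsf{PA}).
\]

The very statement Hilbert’s Second Problem asked us to prove is itself one of the unprovable truths.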
An answer to the decidability question came shortly thereafter. In a paper written in 1936 and published in 1937, a young English mathematician named Alan Turing gave us the first formal definition of an algorithm, and in so doing gave us the first formal description of computation.
This is a marvelous accomplishment when you consider that Turing’s definition has proved exact: his description of computation captures precisely what any computer can do. And in this 1936 paper, Turing also proved a result as stunning as Gödel’s: there is no effective procedure that can determine whether an arbitrary procedure will ever finish its work. What this means is that there can be no computer program to tell you whether some arbitrary computer program is going to halt or run on forever.
Think of your own experience with software. Sometimes you turn on your computer, open a piece of software, set it to work on some problem, and promptly get the wait symbol—be it a spinning daisy wheel or turning hourglass. You wait, at first patiently, but then with growing annoyance, as the symbol spins and spins. You are confronted with a conundrum: is this program stuck in an infinite loop, or is it working correctly but just taking a long time? Turing proved that you cannot have an effective procedure to answer this question for any arbitrary program. You simply can’t reliably know which situation you are in. Your program may be stuck. Or, it may be working correctly, but working on a hard problem that will take a long while. (My advice, however, is to reboot.)
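The heart of Turing’s argument can be sketched in a few lines of Python (a modern paraphrase, not Turing’s machine-based original). Suppose someone claims to have written the impossible program:

    def halts(program, data):
        """Hypothetical perfect halting tester: returns True if
        program(data) would eventually finish, False if it would run
        forever. Turing proved no correct implementation can exist;
        this stub only marks where it would go."""
        raise NotImplementedError

    def contrary(program):
        """Do the opposite of whatever halts() predicts about
        `program` run on its own source."""
        if halts(program, program):
            while True:           # predicted to halt? Then loop forever.
                pass
        return                    # predicted to loop? Then halt at once.

    # Feed contrary to itself: does contrary(contrary) halt? If halts()
    # answers True, contrary loops forever; if it answers False, contrary
    # halts. Either answer is wrong, so no correct halts() can exist.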
Some jokingly call Turing’s discovery the Computer Scientist Employment Act, since it means that we cannot replace human programmers with a computer that generates and tests programs. But the result is more general than this. It means that there is no algorithm to find the good algorithms, no one test to test all our reasoning.
Together, these three results topple Hilbert’s Dream. The consequence of Gödel’s First Incompleteness Theorem is that we cannot know whether the information we need to solve some problem, even if true, is provable. We might find that we simply have to assume some answer to the problem before us, if we are going to reason about it. But how do we know if such an assumption is correct? The consequence of Gödel’s Second Incompleteness Theorem is that we cannot know beforehand whether such assumptions are consistent. We must simply develop our theories and work with them; if they fail spectacularly, we will back up and start over, but until they do fail, we cannot prove from within that our reasoning is consistent. And the consequence of Turing’s undecidability result is that we have no brute-force method to find the answers to our problems; furthermore, when we cannot find the answer to some problem, we cannot know whether this is because of our own failure of imagination, or rather a consequence of our theory being too weak. And there is no escape route: no matter how smart you are, these theorems still stand in the way.
To return to the Singularity: consider now what these results mean for our lonely supercomputer, as it sits in its humming server farm, planning world domination. If it is going to make faster and smarter versions of itself, what must it do? It must reason out the next steps in scientific development, draw the correct inferences from these, and use these results to develop its ever-more-intelligent children. And how shall it do this? Turing’s result guarantees that there is no effective procedure to just hammer out the conclusion. Gödel’s results guarantee that the machine cannot be sure its reasoning is sound or sufficient for the problem before it. And so the conniving machine must make some hypotheses, test those hypotheses through use, and go back to make more hypotheses when it fails. It must, in other words, go about applying the scientific method and scientific experimentation. And this takes time. Snail time. It takes laboratories, and other resources, down here in the crawling now.
The dream of the Singularity is really an extension of the most persistent, but most unrealistic, of science fiction tropes. This is the trope of the lonely super-productive scientist.
There are endless stories of the sole scientific genius, developing a host of brilliant advances, working alone in his laboratory. We can find examples from every era. H. G. Wells’ Time Traveler builds a time machine alone in his Victorian shop. The aptly named Lone of Theodore Sturgeon’s More Than Human orders some parts from the local electronics shop and builds an anti-gravity device in his barn. Ayn Rand’s John Galt has a room in his New York apartment that is filled with world-transforming inventions that he tinkered together when not speechifying. Ted Chiang’s Leon Greco, former computer graphics artist, gets an intelligence boost and spontaneously discerns how to hack into FDA databases and understand the scientific papers stored there, write viruses that can penetrate government databases and wipe and rewrite selected information, discover new pattern-matching algorithms, quickly comprehend all the physics he reads and identify easy extensions to it, and so on.
There is a reason that nothing like this ever happens in real life. Yes, sometimes an inventor hits on an innovation, maybe even two, on her own. But our reasoning capabilities are constrained in many ways that ultimately require endless hours of hard slogging through the scientific method, something that can be effectively done only by armies of scientists and engineers openly sharing results. In contrast, these fantasies of the lone scientist producing wild new inventions are just as realistic as would be a fantasy in which Wells’ Time Traveler singlehandedly builds the Eiffel Tower, or John Galt singlehandedly erects the Brooklyn Bridge. Work—be it physical or mental—has important costs and constraints. There just are no free lunches. Even for supercomputers.
Scientific progress appears to have accelerated, and perhaps it will continue to do so. But we are the ones driving this progress, working together in large numbers, and using our computers as valuable time-saving tools. If computers ever become our equals, they will be no better off than we are, laboring with us on the hard work of scientific and mathematical discovery.
I don’t mean to scold. Or, well, maybe I do. I confess that I’m not very fond of the Singularity as SF trope; I fear it counsels passivity, asking us to await the technoreligious rapture. Why exercise or compost, when soon this is all going to be software? But, then, it’s easy to criticize any of our tropes. I should know: I’m one of those writers who throws in faster-than-light travel because I want to have aliens in my space operas; and I’m one of those writers who has a common galactic language so my aliens can make witty remarks to each other. There are readers who groan and pull their hair when they encounter this kind of thing.
Sometimes a writer gives up realism in one domain in order to realistically explore certain consequences. If a writer wants to explore and exaggerate a radical technology’s effect on a single human being, she might adopt the trope in which that single human being is a genius who, working alone, develops that technology. And if she wants to explore the vertiginous effects of scientific progress, she might introduce the idea of runaway scientific progress, driven by superintelligences, to allow for many encounters with such progress. So let the Singularity thrive as a narrative tool.
But we should all be wary of those selling the message that the Singularity is near, here in the actual world. They are hawking a false dream, one long ago denied us. Artificial intelligences are not alone going to make us live longer or organize our economy or restore our environment or build us spaceships. We’ll have to do all that ourselves.
Let’s get to work.
Craig DeLancey is a writer and philosopher. He has published short stories in magazines like Analog, Lightspeed, Cosmos, Shimmer, and Nature Physics. His novels include the Predator Space Chronicles and Gods of Earth. Born in Pittsburgh, PA, he lives now in upstate New York and, in addition to writing, teaches philosophy at Oswego State, part of the State University of New York (SUNY).