Another Word:
Will Aliens be Alien?
It’s a common complaint against science fiction: the aliens aren’t really alien. They’re humans in disguise.
Often enough it’s a fair observation, especially in film. No one finds the motives of Klingons or Yoda or ET inscrutable. But in our novels too, extraterrestrial intelligences often appear very human-like; indeed, they are often humanoid in form. We typically encounter them not as beings with motives that are completely new to us, but as beings exhibiting extremes of familiar traits (being, for example, more warlike, more intelligent, or more peaceful than your average human).
This familiarity is often a product of the demands of telling a tale. If an extraterrestrial intelligence is going to be a character in your novel, then the reader must understand it for the narrative to be compelling. If your alien is incomprehensible, then its mysteriousness will come to be a theme, which may not serve your narrative goals. It is for this reason that our best SF stories about inscrutable aliens are specifically about this inscrutability: it stands out as an impediment to action, demanding attention.
But set aside the demands of storytelling, and implicit in the humans-in-disguise criticism is a further claim: that extraterrestrial intelligences would be very strange to us, perhaps incomprehensibly so. Which raises the question: is this right? Should we expect the distance between our home worlds to be mirrored by a distance between our conceptions of the universe? Will we have so little in common that we cannot find shared semantic ground?
Perhaps not. For all (or nearly all) the organisms of the universe will share this common history: they will have evolved. Evolution results in boundless complexity, expressed in wild varieties of forms and behaviors, but its basic principles are simple and universal. A population has variation in it, augmented periodically by mutation. Populations grow to carrying capacity, and the result is fierce competition for survival. Some individuals succeed better than others in this competition, and have more offspring. These offspring will live to carry on some of the beneficial traits of their progenitors.
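Expressed as an algorithm, the recipe is remarkably spare. The sketch below, in Python, is only an illustration: the single numeric "trait," the fitness function, and the population sizes are arbitrary placeholders invented for the example, not claims about any real biology. But the loop itself is the whole recipe: variation, mutation, competition at carrying capacity, and differential reproduction.

import random

CARRYING_CAPACITY = 100   # the environment supports only this many individuals
MUTATION_RATE = 0.1       # chance that an offspring's trait drifts from its parent's
GENERATIONS = 50

def fitness(trait, optimum=0.8):
    """Survival odds fall off as the trait departs from what the environment favors."""
    return max(0.0, 1.0 - abs(trait - optimum))

# A population with variation in it.
population = [random.random() for _ in range(CARRYING_CAPACITY)]

for generation in range(GENERATIONS):
    # More offspring are produced than the environment can support...
    offspring = []
    for trait in population:
        for _ in range(3):                      # each survivor leaves several offspring
            child = trait
            if random.random() < MUTATION_RATE:
                child += random.gauss(0, 0.05)  # mutation adds fresh variation
            offspring.append(child)
    # ...so there is fierce competition, and the fitter individuals
    # survive to carry their traits into the next generation.
    offspring.sort(key=fitness, reverse=True)
    population = offspring[:CARRYING_CAPACITY]

print("mean trait after selection:", sum(population) / len(population))

In this toy model the mean trait settles near the environmental optimum after a few dozen generations, whatever the starting distribution; that insensitivity to starting conditions is part of what makes the recipe a plausible universal.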
Consider: few of us expect that unintelligent extraterrestrial organisms will be incomprehensible. We arrive on planet X, and certain organisms are building hives, others are eating the hive makers, others are rapidly moving away from the organisms that eat the hive makers.
We interpret these organisms readily, without hesitation: that organism is cooperating with kin; that organism is hunting; that organism is fleeing. But such an expectation is no small matter, because intelligent aliens will be organisms too. They will have an evolutionary history shared with the other organisms of their planet. And this will provide the foundation for their skills, their motivations, and ultimately their intelligence.
For all the complexity that arises in the details, the universals of evolution mean that all organisms will share certain features. They will have evolved in competition. And they will have evolved alongside kin.
Helping kin increases the likelihood of the helpful trait spreading in the population. Since intelligence is likely to result in control of the environment, it is likely to result in the adoption of a K-strategy, in which organisms have fewer offspring and invest more time and resources in the survival of those offspring; and if a species adopts a K-strategy, its members will have a deep interest in the success of their offspring. These constraints, and thousands of others, will lead naturally to certain dispositions and motivations, including what we call emotions.
These include a readiness to cooperate but an eagerness to find and punish cheaters; a love for kin; and a host of motives to protect one’s own offspring. Surely, these could provide a foundation for mutual understanding.
There is an interesting parallel here with a debate in paleontology. The question concerns what Stephen Jay Gould called “replaying the tape” of life. He argued that if we could repeat the history of Earth over and over, allowing variations where they naturally occur, we would see wildly different outcomes across these different histories.
The alternative theory, championed most notably by Simon Conway Morris, is that evolution is more rigorously optimizing than this. On Conway Morris’s account, a replayed history of life on Earth would still produce bipedal, bilaterally symmetric intelligent beings after about as much time as it took for us to show up (allowing for events like asteroid impacts resetting the clock). Evolution, on this view, is highly constrained. It will reliably produce similar outcomes under similar conditions.
This debate about replaying the tape suggests two extremes of how difficult it might be to understand an extraterrestrial intelligence. I suspect Gould would have thought that extraterrestrial intelligences would be quite understandable, if we could just learn something about their evolutionary history. But for him, the greater disparity of possible evolutionary outcomes would mean that more work would be required to discern the evolutionary constraints relevant to a given species.
On the other hand, the more optimizing evolution turns out to be, the more readily identifiable we can expect the foundation for mutual understanding to be. If an organism’s strategy is the best one for its environment, and nearly any starting point will get its lineage there, then an extraterrestrial in a similar environment should hit upon a similar strategy.
We might expect the alien to have eyes, recognizable as like our eyes, because eyes like our own are a relatively effective way to seize the benefits of visual perception. We might expect the alien to have an analog of fear, since a general motivation to avoid predators and other dangers, and to remember them as threatening, appears to be very effective. And so on.
The heritage of a single organism is one thing. Cultures are another. We are well familiar with failures of human beings to understand each other. Surely the situation will be worse with respect to an extraterrestrial culture. Culture adds something new, something that changes quickly and varies widely across individuals of a single genotype. Won’t this make our extraterrestrials incomprehensible?
The case of human cultural variation is easily exaggerated. We tend to focus upon differences, but the fact remains that no matter how alien a human culture, it remains possible to understand much of it. We read the Iliad or the Mahabharata or the Popol Vuh, and though the writers of those works are far from some of us in time and space, we find nothing incomprehensible in the motives and actions of their protagonists.
Culture builds upon what evolution provides. In language and customs we find explosive variety. But these varieties are less successful, and more difficult to maintain, precisely to the degree that they oppose what evolution has instilled in the species.
Culture adds complexity, but it cannot (at first, anyway) extinguish the goals and motives we inherit. You can ask that your warriors not fear death, but we can predict that they normally will.
This allows us to make a modest prediction. Extraterrestrial intelligences will resist understanding to the degree that their culture is complex. Nothing about their evolutionary history would be incomprehensibly strange to us, and thus nothing about the motives and abilities that they inherit would be incomprehensibly strange to us. Rather, what will allow for strangeness are the ways in which intelligence and culture take those basic motives and combine and reformulate them into surprising new forms. Aliens will have evolutionary histories like those we find here on Earth, and surprising cultural complexity that alters, reinterprets, and redirects the abilities and motives those histories gave them. That means a biological understanding can serve as the basis for cultural understanding. We have a Rosetta Stone: it’s called Darwinism.
So extraterrestrial intelligences won’t be humans in disguise. But they’ll be something quite similar to that: they’ll be an evolutionary history, dressed up in culture.
Now, if only they’d call us.
ABOUT THE AUTHOR

Craig DeLancey is a writer and philosopher. He has published short stories in magazines like Analog, Lightspeed, Cosmos, Shimmer, and Nature Physics. His novels include the Predator Space Chronicles and Gods of Earth. Born in Pittsburgh, PA, he now lives in upstate New York and, in addition to writing, teaches philosophy at Oswego State, part of the State University of New York (SUNY).
Ken wrote on January 17th, 2014 at 6:27 pm:
So a culture that has gone where we have not yet been would be less likely to have common elements. A culture that has been spacefaring for thousands of years has evolved in ways that we have not yet seen. We can hardly predict what we will be doing in a hundred years.
Craig DeLancey wrote on January 22nd, 2014 at 10:59 am:
Ken,
Thanks for the comment.
There are two things that I did not have space to address in this short essay. One is language, which obviously relies upon arbitrary connections. The other is auto-evolution.
In the case you raise, I'm not sure I agree. If a species evolves over a long period of time as a space-faring race, I would still expect that evolution would provide us a key to understanding them. They still need to reproduce, with limited resources, and so this constrains the strategies a species can evolve. But perhaps you and I could kick around a particular imaginary case to see if we still disagree.
But if auto-evolution occurs -- if the species begins to knowingly alter its genome to a very extensive degree -- then I do think that could prove to be a very mysterious case. I fear we don't really know what will come out of a species modifying itself with intent. In a way, such a case is where evolution as we understand it has stopped. (Something similar could be said if individuals of a species ever achieve practical near-immortality.)
Best,
Craig
Seth wrote on January 28th, 2014 at 1:04 am:
Craig, it's interesting that Peter Watts went through the same chain of logic as you but came to a radically different (and much less optimistic) conclusion. Have you read Blindsight?
Craig DeLancey wrote on January 28th, 2014 at 9:13 am:
Seth,
I'm ashamed to admit I've not, but I just ordered BLINDSIGHT and will read it as soon as it arrives. I'll post my thoughts as soon as I read it.
Thanks for the tip,
Craig
Seth wrote on January 28th, 2014 at 1:44 pm:
I hope you'll enjoy it! I don't think it's a rock-solid incontrovertible argument for how all alien life MUST think, but like a lot of Stanislaw Lem's work, it challenges implicit preconceptions we have about alien cognition and the centrality of our own mental setup.
Mike wrote on February 19th, 2014 at 9:10 pm:
I understand that there will be certain natural similarities between us and ET life, but I don't feel as confident about there being any similarities beyond this.
Whales and dolphins are incredibly intelligent (there's even video on YouTube of dolphins blowing bubble rings and playing with them), yet we haven't established any sort of meaningful communication with them. They don't seem to have any sort of culture, either. (Maybe this can be blamed on their lack of hands, if lack of hands means no opportunity to discover tools.)
One more thing:
It's a bit off-topic, but why does SF sometimes feature humans eating food from alien worlds? I doubt an ET burger would have any nutritional value for us.
Craig DeLancey wrote on March 15th, 2014 at 10:14 am:
Seth,
I finished BLINDSIGHT. I see why you suggested it: it has a radical suggestion, that consciousness (understood as self-awareness and self-modelling) is a mistake (a non-beneficial trait), and that some aliens won't be conscious (in this sense of conscious). I think that would be inconsistent with my claims.
Here's my response:
* I believe self-awareness and self-modelling are beneficial and so would be selected for (in the novel, they are wasteful accidents). Intelligence is going to be a consequence of social structure; a consequence of being social is that you need to run simulations of the other members of your social group; and these simulations can in turn be run on yourself, creating a self-model. That's consciousness (in the relevant sense).
* How do the un-self-aware and non-social organisms make technology? The only mechanism is trial and error (they can't coordinate: that requires modelling the states of the other actors, and again this can be turned on yourself to constitute consciousness). The eusocial and social organisms will outcompete, by huge leaps and bounds, any organism just trying random technological advances all by itself.
* And consider: could you have a technological intelligence without language? It seems unlikely to me. And yet, you surely can't have a language without a society. Once you have that, self-awareness is inevitable.
That's a bunch of empirical claims, so it could all be wrong, but it is corroborated by our terrestrial evidence.
Seth wrote on March 15th, 2014 at 11:22 am:
I think you've hit on some really important questions, and as I go forward with this comment, bear in mind I'm not aggressively disagreeing with you.
Are you sure that consciousness is intrinsically tied to running simulations of other members of your social group? I agree this is a vital task - but is simulating the behavior of another, or even of yourself, the same as being AWARE that you're simulating such a model? (Probably, but still, worth asking. This is a difficult question to answer rigorously.)
Watts' Scramblers make technology by cognition and modeling, not trial and error. They're ferociously intelligent, and they do coordinate. Again, the difference seems to be that they build the necessary models, there's just no metacognitive model for them to think about what they're thinking. Here, at least, I think we have firm ground to say that Watts might have a point. Over the last few years, advances in machine learning have started to suggest that very complex problems can be solved across a wide range of cases by algorithms that aren't remotely conscious.
I believe the Scramblers have language, and that Watts argues that language and society are not bound to consciousness. A computer has 'language' and a 'social structure' but no self-awareness.
Again, I generally agree with you - I don't think Blindsight is a watertight case, and I don't think Watts believes it is either. But it's a striking work because it questions the tacitly assumed link between consciousness and other high-level adaptive behaviors like language and technology, and that link rarely gets challenged.
Even as a monist physicalist convinced that consciousness is purely a function of the brain, I wish we had better answers to a lot of the hard questions. It would make these discussions more precise. (If non-human animal species on Earth start showing evidence of consciousness, that suggests to me that it is, as you say, an inevitable product of these modeling functions. But without any way to access qualia, how do we know if their modes of consciousness are similar? What range of systems might be available? Lots of tricky stuff to figure out.)
Craig DeLancey wrote on March 16th, 2014 at 3:02 pm:
Seth,
Here's my reasoning. Imagine two scramblers, Alpha and Beta. Alpha watches Beta put an object it needs into Box #1. Alpha watches Beta leave the vicinity. Alpha watches Jukka Sarasti take the object from Box #1 and put it into Box #2. Next, for some reason, Alpha needs to predict what Beta will do when it returns to get the object. To accomplish this, Alpha must run a simulation of Beta, based on the different knowledge that Beta has, in order to predict that Beta will look in Box #1 and needs to be informed that the object is now in Box #2. So, you need to run these simulations. I think there is no getting around it. And, once you have that ability, why not run it on yourself also?
It's only a hypothesis that self-awareness (of the kind that Watts is attacking) is (in part?) mind simulation turned on yourself; the hypothesis could be wrong; but I think it's not a bad theory. But, of course, I agree with you that we don't know a lot about these things yet and all is still up for grabs.
Thanks!
David wrote on May 3rd, 2015 at 9:48 pm:
The point about evolution is valuable. Insofar as it's likely that life in the universe will evolve in broadly similar environments, I think it's pretty plausible that alien biospheres will have creatures that fill familiar roles. Even on Earth, similarity of role leads to similarity of form, like dolphins and sharks. But despite surface similarities, these creatures are very different cognitively. Large oceanic predators are likely to look shark-like, but are space-faring creatures likely to think human-like? Sadly, we only have an n=1 from which to extrapolate.
It's quite plausible that mind simulation is the most sophisticated task that human brains can do, because it's so important to our survival. The human way of doing it involves consciousness, maybe necessarily so. But I'm not sure that mind simulation in principle has to be conscious, even if those tools are applied on the simulating subject.
First, consider a future version of Apple's Siri, an AI whose usefulness will depend on the accuracy of its human mind simulation. To do its job, it must start with ambiguous input and correctly reconstruct our intentions and goals. Only then can it provide satisfactory responses. I'm confident that Siri's reading of human minds will get pretty accurate - maybe as accurate as people's, maybe more - but even when it does, it will be done without consciousness. This is relevant to first-contact fiction, because I think it's quite likely that if we make contact with a space-faring intelligence, it will not be biological.
Second, I'm wondering whether there are limits to how sophisticated a hive of non-conscious drones can get, without the hive itself becoming conscious. Hives are in some sense familiar to us, but a hive that gets sophisticated enough to travel through interstellar space and make contact with us could prove very difficult to talk to. We would be able to reconstruct some of its motives based on evolutionary universals (to survive, to extend its range, to manage its resources), but how would we carry on a conversation? The hive in Blindsight was chillingly effective in predicting human behavior, but not by imagining itself in our place. It (they?) deciphered and hacked our "plumbing and wiring" (neuron latency, etc.), and insofar as it modeled humans on a cognitive level, it was analogous to how a poker bot models human players. These are getting pretty good, btw:
http://www.riverscasino.com/pittsburgh/BrainsVsAI/
I think these are two plausible examples of how something indisputably intelligent could be hopelessly alien. Of course we would anthropomorphize it; we did that even to ELIZA. But it would be a mistake. It's nothing like us.
Craig DeLancey wrote on May 4th, 2015 at 11:36 pm:
David,
Thanks for the note.
Regarding super-Siri, since it is not a biological entity and so doesn't evolve in the way biological organisms evolve, it would fall outside my claims above. I think all bets are off for how weird AI could get; but there is the constraint that presumably at least the first iterations will be made by, and will be servants of, biological entities.
With respect to consciousness: it does seem to me that the notion of consciousness we are talking about here is self-awareness and mind-simulation (and not phenomenal experience). Super-Siri should have those. Phenomenal experience is presumably not what Watts is claiming his scramblers (primarily) lack -- or am I missing something?
I like your idea about the hive mind. It would seem Watts has described a superorganism, in E. O. Wilson's sense, and Wilson thinks such a thing is a single organism. Maybe we could learn about such a hive mind by looking at our own terrestrial examples?
Thanks!