Issue 112 – January 2016


Our Future is Artificial

Artificial intelligence, or AI, is a growing technological field closely tied to robotics. It is also a common plot device in modern movies and books, but the concept of AI, the real AI that is currently growing into infancy in laboratories around the world, is a bigger mystery. To understand artificial intelligence, it's important to understand natural intelligence. What are the basic building blocks of intelligence, and what is the threshold that moves a creature from simply existing to being intelligent? What makes us, humans, intelligent? For those who study AI, the key factors that determine intelligence are knowledge, planning, learning, natural language, perception, the ability to reason, and the ability to manipulate the physical world.

The thing that separates humans from animals on an intellectual level is the concept of self. Understanding that one exists is a huge jump on the intelligence scale. Some animals can recognize themselves in mirrors, but the only animal to date that has asked a human an existential question is the parrot Alex, who asked its keeper, “What color am I?” This simple question is an excellent example of intelligence. In AI, the threshold that a computer program has to pass to be labeled as true artificial intelligence is the sense of self, the ability to ask existential questions that the AI has formulated without a human’s guiding hand.

Aspects of artificial intelligence have been present in science and literature for well over a century, starting with Ada Lovelace's vision of the modern computer in the 1830s and 1840s, and Samuel Butler's 1872 novel Erewhon, which imagines a society so wary of machines evolving consciousness that it has banned them. Since these two pioneers, there have been many novels and stories depicting machines and AI that have slowly started to crawl toward reality.

Some current research has taken real steps toward achieving true AI, notably IBM Watson, Rensselaer Polytechnic Institute's Nao bots, and Google's Deep Dream. Of the three, IBM Watson is the oldest and best known, having beaten two human champions at Jeopardy! in 2011. It was a huge success in showing how a machine could outperform people at question-and-answer tasks. But surely the human contestants could also have won had they memorized all of Wikipedia before the contest.

What IBM Watson truly consists of is a huge amount of data, sorted and filtered in a way that allows it to return an array of candidate answers ranked by confidence. It is basically a powerful search engine that understands natural language. This isn't a huge feat today, when most search engines do the same thing. Each answer is built out of thousands of queries run against databases full of information; imagine memorizing all the dictionaries and wikis in the world.
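The idea of returning candidate answers ranked by confidence can be sketched in a few lines. This is only a toy illustration, nothing like Watson's actual pipeline: the corpus, the keyword-overlap score, and the function names are all invented for the example.

```python
# Toy illustration (not IBM's actual method): rank candidate answers
# by how much their supporting text overlaps with the question's words.

def rank_answers(question, corpus):
    """Return candidate answers sorted by a crude confidence score."""
    q_words = set(question.lower().split())
    scored = []
    for answer, text in corpus.items():
        overlap = q_words & set(text.lower().split())
        confidence = len(overlap) / len(q_words)  # fraction of question words matched
        scored.append((answer, round(confidence, 2)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# A tiny made-up "database" of facts supporting each candidate answer.
corpus = {
    "Toronto": "toronto is a city in canada",
    "Chicago": "chicago is a large city in the united states",
}
print(rank_answers("what large city is in the united states", corpus))
# → [('Chicago', 0.88), ('Toronto', 0.38)]
```

A real system would run thousands of such queries in parallel and combine many kinds of evidence, but the shape of the output (an array of answers, each with a confidence score) is the same.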

IBM Watson is a fancy natural-language search engine, but not really intelligent, even though it does touch on the knowledge and natural-language criteria in the definition of AI. Today, IBM Watson is marketed to the health care industry as a diagnostic tool for doctors: not to replace them, but as a search tool to support continued education and to help in areas outside their expertise.

Nao bots are a much newer enterprise. Built at Rensselaer Polytechnic Institute's RAIR Lab in New York, they fascinated the world for a few days in July 2015. In the experiment, three Aldebaran Nao humanoid robots were given a puzzle in which two of them were rendered mute with a fictitious "dumbing pill." When the robots were asked whether they could speak, the one that hadn't been silenced answered, "I don't know," then quickly amended its answer: "Sorry, I know now. I was able to prove that I was not given a dumbing pill."

The robot was able to understand the question through natural language processing, and it was able to amend its answer after it heard itself speak. It thus demonstrated a base of knowledge, natural language processing, and perception, with a hint of reasoning. Being robots, they can also touch the physical world in a small way. They comprehend a spoken question, much like Siri or OK Google, but they take it a step further: the AI amends its own knowledge base with new information as it receives it. This technology is still at a very early stage, but it has improved tremendously in the last five years.

Google's Deep Dream was also revealed in July 2015, and it is something on a completely different level. It isn't artificial intelligence as such, but a learning algorithm (an algorithm being, loosely, a recipe written in code); in essence, it is a piece of image-recognition software. That, however, is a huge step forward for AI.

A deeply ingrained source of human intelligence is pattern and image recognition. Programming something that can even begin to recognize images plausibly had not been properly done before, simply because it is very difficult to accomplish. Imagine you are a machine, a simple algorithm, trying to understand an image. You have to break it down into pieces you can understand, from pixels to machine language.

Deep Dream is built on a deep neural network: a computer program that isn't explicitly programmed, but is taught to learn from examples. It is fed thousands of images of, say, dogs and told, "this is a picture of a dog." It starts to pick out the shapes and patterns that, within its network, indicate the presence of a dog, and it can then make a plausible guess about whether other pictures contain a dog. That is why so many of the images that come out of Deep Dream are filled with dogs or eyes or other common objects that were fed through its learning process.
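Learning from labeled examples can be shown with the simplest possible learner, a perceptron, which is far simpler than Deep Dream's deep network. Everything here is invented for illustration: toy feature vectors stand in for images, and the label says whether a "dog" is present.

```python
# Minimal sketch of learning from labeled examples: a single perceptron.
# Features stand in for images; label 1 means "dog," 0 means "not a dog."

def train(examples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - pred  # adjust only when the guess is wrong
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Toy features: [fur, floppy_ears, whiskers]
examples = [[1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 1]]
labels = [1, 1, 0, 0]
w, b = train(examples, labels)
print(predict(w, b, [1, 1, 0]))  # a furry, floppy-eared animal → 1
```

A deep network stacks thousands of such learned units in layers, so it can pick out shapes and textures rather than hand-chosen features, but the training principle (guess, compare with the label, adjust) is the same.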

What is extraordinary about these algorithms is how they teach scientists to understand how something or someone learns. As humans, we learn from a non-stop video feed to our brains from infancy onward, which takes in images and associates them with the correct terms and uses, alongside the information our other senses provide. That amounts to an enormous quantity of high-definition information every day. If Deep Dream were given that quantity of information, it could learn to associate patterns and images on a completely different level.

These three examples of the current level of AI show how far we have come and how far we still have to go in creating a strong AI, but the seed is there. IBM Watson was a start: a huge collection of information that could be filtered and sorted to win a game of Jeopardy!. The Nao bots showed how far we still have to go to achieve true natural language processing and robot decision-making. And Deep Dream shows how a mechanical, programmed neural network filters and understands information.

The next step in AI is better understanding, be it of images or of natural language. Both sources of information give an AI new knowledge of its surroundings and a way to touch the real world. Deep Dream and the Nao bots have come a long way, but there is still much to do to reach such an AI. In the future, this technology could boost machines' ability to understand their surroundings: with smart technology we could have traffic lights where you don't have to push a button to cross. It could serve as tools in the house (your robot vacuum can skirt a fallen object) and as companions (a lonely person could have a pet that responds to its surroundings, or a nurse that delivers the right pills for the day).

There have also been many concerns about AI. In 2014, Stephen Hawking warned that "the development of full artificial intelligence could spell the end of the human race." Quite a few notable people in the computer industry agree, most notably Bill Gates (co-founder of Microsoft) and Elon Musk (SpaceX and Tesla Motors).

A technological singularity is a theoretical point in time when a true "strong AI" is created that can recursively improve itself. AI could then usher in a new era in which smart machines design technical improvements to themselves in a never-ending cycle. This raises major concerns in areas such as military robotics, where machines are constantly made more sophisticated.

War is a great motivator for technological advancement. Unmanned aerial vehicles, or UAVs, have become commonplace in the last decade. They began as remotely piloted aircraft used to bomb war zones, but they are now sophisticated enough to be programmed with a target and sent on their way. Imagine a hypothetical UAV given the command to "go destroy target X." To ensure that the machine isn't hacked on the way to its target, it is kept in a closed loop: no one can modify the command after it has been given. What if the target turns out to be the wrong one? Or imagine a world where the machines themselves gather evidence and select their targets.

In literature and movies we have already explored the possibilities of strong AI. There haven't been many stories about AI's gradual progression; most stories sit at the cusp of AI sentience or dominance, because that makes for a better story. Most AI science fiction revolves around an AI questioning the very essence of humanity. There is HAL from 2001: A Space Odyssey, driven mad by conflicting orders, but there is also Baymax, the health care robot from Big Hero 6, who sacrifices himself for his friend. One stands for the destruction of humanity, the other for preserving it.

There is a clear divide in science fiction in which an AI symbolizes either the good or the bad in humanity. The novel Do Androids Dream of Electric Sheep? by Philip K. Dick, better known through its film adaptation Blade Runner, offers an interesting thought experiment about the real difference between humans and androids.

If we manage to build this technology, are we, as its human creators, now superior? Do we have the right to use these beings as property forever? At what point does an AI reach the same level as a human? If we do achieve the singularity, does that make them the same as us? And will there always be a fear of an AI uprising, an AI-controlled world where the roles are flipped and we become the slaves, as in the 1999 film The Matrix or Dan Simmons' Hyperion Cantos novels?

In recent years there have been multiple movies, such as Chappie and Ex Machina, that explore the moral dilemma of creating a sentient being. How far can we go in creating another creature that is like us? Can we impose Asimov's Three Laws of Robotics on a sentient being when we cannot uphold them ourselves?

Current AI research is done to aid humans, whether in military applications or as helpers for the elderly. These systems need to be smart enough to be useful, but as the technology evolves, debates about ethics and a possible singularity will continue. In the 1981 novel Golem XIV by Stanislaw Lem, an AI gains an intelligence beyond human capacity and transcends to a level of thought unattainable by humans.

Should we aspire to a robot maid from The Jetsons, or a self-learning robot that will be good or evil based on its upbringing? Or will we create a being that transcends beyond our reach and builds a world of logic and reason that our primitive brains are incapable of understanding?

It is impossible to say when, or if, anything like the singularity will happen, but the question raises the moral stakes of what the future of artificial intelligence will hold and how humans will navigate this new era.

Author profile

Sofia Siren is a world traveler, having lived in Finland, the USA, and Japan, and currently residing in Canada. She has a master's degree in Software Engineering. This Clarkesworld article is her first non-academic publication. You can contact her on all social networks @bluphacelia.
