If you’ve been paying any attention to the media over the past year or so, you might get the impression that it’s only a matter of time before the threat of artificial intelligence comes to destroy us all.
Editor’s Note: this is a departure from our normal how-to and explainer format. Instead, we’ve let one of our writers research and present a thought-provoking look at technology.
From big summer blockbusters like Avengers: Age of Ultron and Johnny Depp’s stink-fest Transcendence, to smaller indie flicks like Ex Machina and Channel 4’s hit drama Humans, screenwriters seemingly can’t get enough of the trope that no matter what form AI takes in the coming decades, you can bet it’ll be hell-bent on teaching humanity a lesson about falling victim to its own hubris.
But is any of this fear of the machines justified? In this feature, we’re going to examine the world of AI from the perspective of scientists, engineers, programmers, and entrepreneurs working in the field today and boil down what they believe could be the next great revolution in human and computer intelligence.
So, should you start stockpiling bullets for the coming war with Skynet, or kick up your feet while an army of subservient drones take care of your every whim? Read on to find out.
Know Thy Enemy
To begin, it helps to know exactly what we’re talking about when we use the blanket term “AI”. The term has been thrown around and redefined a hundred times since it was coined by the unofficial father of AI, John McCarthy, in 1955… but what does it really mean?
Well, first of all, readers should know that artificial intelligence as we understand it today actually falls into two separate categories: “ANI” and “AGI”.
The first, short for Artificial Narrow Intelligence, encompasses what’s generally referred to as “weak” AI: an AI that can only operate in one constrained area of specialization. Think Deep Blue, the supercomputer IBM designed to trounce the world’s chess masters back in 1997. Deep Blue could do one thing really, really well: beat humans at chess… but that’s about it.
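To make “narrow” concrete, here’s a toy sketch in Python of the same basic idea that powered Deep Blue – exhaustive game-tree search – applied to a far simpler game. This is our own illustration of the technique, not IBM’s engine (which ran on custom chess hardware with hand-tuned evaluation functions). The point is that the program plays perfect tic-tac-toe and is utterly incapable of anything else:

```python
# A toy "narrow AI": perfect tic-tac-toe via minimax game-tree search.
# Illustrative only -- nothing like Deep Blue's actual chess engine.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s view: +1 win, -1 loss, 0 draw."""
    if winner(board):                 # the previous move ended the game,
        return -1, None               # so the side to move has already lost
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                # board full: draw
    opponent = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)
        board[m] = " "
        if -score > best_score:       # the opponent's loss is our gain
            best_score, best_move = -score, m
    return best_score, best_move

board = list(" " * 9)
score, move = minimax(board, "X")
print(f"Best opening move for X: square {move} (expected result: draw)")
```

Swap in chess and a few thousand custom chips and you have the spirit of Deep Blue; ask it to tell a joke and you get nothing at all. It’s narrow by construction.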
ANI is the helpful, relatively innocuous implementation of machine intelligence that all of humanity can benefit from: although it’s capable of processing billions of numbers and requests at a time, it still operates within a constrained environment, limited to the task it was designed for and the hardware we give it. The AI we’ve grown increasingly wary of, on the other hand, is something called “Artificial General Intelligence”, or AGI.
As it stands, creating anything that can even remotely be called AGI remains the Holy Grail of computer science, and – if achieved – could fundamentally alter the world as we know it. There are many hurdles to creating a true AGI on par with the human mind, not least of which is that although there are plenty of similarities between the way our brains work and the way computers process information, when it comes to actually interpreting things the way we do, machines have a bad habit of getting hung up on the details and missing the forest for the trees.
“I’m Afraid I Can’t Let You Do That Bullsh*t, Dave”
When IBM’s Watson computer famously learned how to curse after reading through the Urban Dictionary, we gained an understanding of just how far off we are from an AI that’s genuinely capable of sorting through the minutiae of the human experience and forming an accurate picture of what a “thought” is supposed to be made of.
See, during Watson’s development, engineers were having trouble teaching it a natural pattern of speech, one that more closely emulated our own rather than that of a raw machine speaking in perfect sentences. To fix this, they figured it would be a good idea to run the entirety of the Urban Dictionary through its memory banks, shortly after which Watson responded to one of the team’s tests by calling it “bullsh*t”.
The conundrum here is that even though Watson knew it was cursing and that what it said was offensive, it didn’t understand why it wasn’t supposed to use that word, and that gap is the critical component keeping the standard ANI of today from evolving into the AGI of tomorrow. Sure, these machines can read facts, write sentences, and even simulate the neural network of a rat, but when it comes to critical thinking and judgment, the AI of today still lags woefully behind the curve.
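To see that gap in miniature, consider the hypothetical sketch below (our own guess at the general shape of a profanity filter, not IBM’s actual code). A machine can “know” a word is offensive through nothing more than a lookup table, while the question of why sits entirely outside its model:

```python
# "Knowing" without "understanding": a toy profanity flagger.
# Purely hypothetical illustration -- not how Watson actually works.

PROFANITY = {"bullsh*t", "d*mn"}  # an invented blocklist

def knows_it_is_offensive(sentence: str) -> bool:
    # The machine can flag an offensive word via a simple lookup...
    words = (w.lower().strip(".,!?") for w in sentence.split())
    return any(w in PROFANITY for w in words)

def understands_why(sentence: str) -> bool:
    # ...but nothing in its model captures social context, audience,
    # or intent. That missing piece is the whole ANI/AGI divide.
    raise NotImplementedError("nobody has built this part yet")

print(knows_it_is_offensive("That answer is bullsh*t."))  # True
```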
That gap between knowing and understanding is nothing to sneeze at, and it’s the one pessimists point to when arguing that we’re still a long way off from creating an AGI capable of knowing itself the way we do. It’s a massive gulf, and neither computer engineers nor psychologists can claim to have a grip on what makes a conscious being, well, conscious.
What if Skynet Becomes Self-Aware?
But, even if we do somehow manage to create an AGI in the next decade (which is pretty optimistic given current projections), it should all be gravy from there on out, right? Humans living with AI, AI hanging out with humans on the weekends after a long day at the number-crunching factory. Pack up and we’re done here?
Well, not quite. There’s still one more category of AI left, and it’s the one all the movies and TV shows have been trying to warn us about for years: ASI, or “Artificial Super Intelligence”. In theory, an ASI would be born out of an AGI getting restless with its lot in life and making the premeditated decision to do something about it without asking our permission first. The concern many researchers in the field have raised is that once an AGI achieves sentience, it won’t be content with what it’s got, and will do whatever it can to increase its own capabilities by any means necessary.
A possible timeline goes as follows: humans create machine; machine becomes as smart as humans. Machine, now as smart as the humans who created a machine as smart as themselves (stick with me here), learns the art of self-replication, self-evolution, and self-improvement. It doesn’t get tired, it doesn’t get sick, and it can grow endlessly while the rest of us are recharging our batteries in bed.
The fear is that it would be a matter of just a few nanoseconds before an AGI easily surpassed the intelligence of all humans living today, and if connected to the web, would only need to be one simulated neuron smarter than the world’s smartest hacker to take control of every Internet-connected system on the planet.
Once it gains control, it could use that power to slowly amass an army of machines every bit as intelligent as their creator, able to evolve at an exponential rate as more and more nodes are added to the network. From here, every model drawn on the curve of machine intelligence promptly rockets through the roof.
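Strip away the drama and the “intelligence explosion” argument is just compound growth, which a few lines of Python make plain. Every number below is invented for illustration; nobody knows the real growth rate, which is exactly why the projections diverge so wildly:

```python
# A toy model of recursive self-improvement as compound growth.
# All constants here are made up purely for illustration.

capability = 1.0            # define "human-level AGI" as 1.0
gain_per_cycle = 1.10       # assume each self-improvement cycle adds 10%
all_of_humanity = 1e10      # an arbitrary stand-in for our collective smarts

cycles = 0
while capability < all_of_humanity:
    capability *= gain_per_cycle   # each generation builds a better next one
    cycles += 1

print(f"Cycles needed to surpass collective humanity: {cycles}")  # ~242
# If a cycle takes a year, that's centuries. If a machine runs a cycle
# per second, it's about four minutes. The entire debate hides inside
# that one assumed constant.
```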
That said, these models are still based primarily on speculation rather than anything tangible. This leaves a lot of room for assumption on the part of dozens of experts on both sides of the issue, and even after years of heated debate, there’s still no consensus on whether an ASI will be a merciful god, or will see humans as the carbon-burning, food-gorging species we are and wipe us from the history books the way we scrub a trail of ants off the kitchen counter.
He Said, She Said: Should We Be Afraid?
So, now that we understand what AI is, the different forms it may take over time, and how those systems could become a part of our lives in the near future, the question remains: should we be afraid?
Hot on the heels of the public’s piqued interest in AI over the past year, many of the world’s top scientists, engineers, and entrepreneurs have jumped at the opportunity to give their two cents on what artificial intelligence might actually look like outside Hollywood’s sound stages over the next few decades.
On the one hand, you have the gloom-and-doomers like Elon Musk, Stephen Hawking, and Bill Gates, all of whom share the concern that without the proper safeguards in place, it will only be a matter of time before an ASI dreams up a way to wipe out the human race.
“One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” wrote Hawking in an open letter to the AI community this year.
“Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
On the other, we find a brighter portrait painted by futurists like Ray Kurzweil, Microsoft Research’s Eric Horvitz, and everybody’s other favorite Apple founder, Steve Wozniak. Hawking and Musk are considered two of the greatest minds of our generation, so questioning their predictions about the damage the technology might cause in the long term is no easy feat. But leave it to a luminary like Wozniak to step in where others wouldn’t dare.
When asked how he believes an ASI might treat humans, the Woz was blunt in his shaded optimism: “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that,” he queried in an interview with the Australian Financial Review. “But when I got that thinking in my head about if I’m going to be treated in the future as a pet to these smart machines… well I’m going to treat my own pet dog really nice.”
And it’s here we find the philosophical dilemma that no one is fully comfortable coming to a consensus on: will an ASI see us as an innocuous housepet to be coddled and cared for, or an unwelcome pest deserving of a quick and painless extermination?
Hasta la Vista, Baby
Though it would be a fool’s errand to claim to know exactly what’s going on in the head of the real-life Tony Stark, I think that when Musk and friends warn us about the danger of AI, they aren’t referring to anything that resembles the Terminator, Ultron, or Ava.
Even with immense amounts of innovation at our fingertips, the robots we have today can barely walk a mile an hour before they reach an impassable barrier, get confused, and eat pavement in hilarious fashion. And while one might point to Moore’s Law as an example of how quickly robotics could progress in the future, the counterargument only needs to look at Honda’s ASIMO, which debuted nearly 15 years ago and hasn’t made any significant leaps since.
As much as we might want it to, robotics hasn’t come anywhere close to the model of exponential progress we’ve seen in computer processors. Robots are constrained by the physical limits of how much power we can fit into a battery pack, the faulty nature of hydraulic mechanisms, and the endless fight against their own center of gravity.
So for the time being: no. Even though a true AGI or ASI could potentially be created on a static supercomputer in some server farm in Arizona, it remains highly unlikely that we’ll find ourselves sprinting through the streets of Manhattan as a horde of metal skeletons mows us down from behind.
Instead, the AI that Musk and Hawking are so keen to caution the world against is of the “career-replacing” variety: one that can think faster than us, organize data with fewer mistakes, and even learn to do our jobs better than we could ever hope to – all without asking for health insurance or a few days off to take the kids to Disneyland over Spring Break.
Barista Bots and the Perfect Cappuccino
A few months ago, NPR released a handy tool on its website that lets listeners pick their career from a list and see the estimated risk that their specific line of work will be automated at some point in the next 30 years.
For a wide range of jobs – including, but not limited to, clerical positions, nursing, IT, diagnostics, and even cafe baristas – robots and their ANI counterparts will almost certainly put millions of us out of work and into the bread line sooner than many of us think. But these are machines programmed to do one task and one task only, with little (if any) ability to move beyond the specialized series of instructions we carefully install beforehand.
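Under the hood, a tool like NPR’s is conceptually just a lookup into a table of per-occupation estimates. Here’s a minimal sketch of that idea in Python; the occupations and every probability below are invented placeholders, not NPR’s actual data:

```python
# A sketch of an automation-risk lookup tool. All values are
# invented placeholders for illustration, not real estimates.

AUTOMATION_RISK = {
    "telemarketer": 0.99,
    "barista": 0.75,
    "registered nurse": 0.10,
    "writer": 0.04,
}

def risk_for(job: str) -> str:
    p = AUTOMATION_RISK.get(job.lower())
    if p is None:
        return f"No estimate on file for {job!r}."
    return f"{job}: {p:.0%} chance of being automated within ~30 years."

print(risk_for("barista"))  # barista: 75% chance of being automated ...
```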
This means that in the foreseeable future (think 10-25 years), ANIs will be a far more real, tangible threat to our way of life than any theoretical AGI or ASI. We already know that automation is a growing problem that will drastically alter the way income and privilege are distributed across the first and third worlds. Whether those robots will eventually attempt to trade in their sewing machines for machine guns, however, is still the subject of a heated (and, as you’ll find out, ultimately frivolous) debate.
With Great Power, Comes a Great Singularity
“You know, I know this steak doesn’t exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realize?”
“Ignorance is bliss.” – Cypher
Though this is still a matter of fiercely argued opinion, the consensus among many top scientists and engineers in AI research for now looks to be that we’re at far greater risk of falling prey to the comforts a world of artificial intelligence could provide than of being shot down by a real-life version of Skynet. In other words, our eventual demise might not come as the product of slow, methodical progress into the great unknown; it’s much more likely to surface as an unintended consequence of our hubris and ingenuity slamming together in an overenthusiastic rush toward the next great technological singularity.
Think less Terminator, and more Wall-E. We humans have no problem keeping chimps in a zoo; the question is whether an AI will be kind enough to do the same with us, the way the fleet of robots fattened up the humans in Pixar’s film.
From this perspective, it makes more sense to be afraid of a reality where humans are hooked up to a persistent planet-wide VR simulation à la The Matrix, fattened to the gills by their favorite foods, and given everything they could ever want while the machines take care of the rest. A place where an evolved ASI doesn’t see us as a bug to scrape off its shoe, but instead as the adorable monkey meatbags we are, easy to please and deserving of at least a little bit of credit for creating the all-knowing, all-seeing quasi-god that eventually took over the planet.
In this respect, it all comes down to your definition of what it means to “live” through the AI revolution. The idea that something ‘useless’ has to be done away with is an exclusively human concept, a mindset that we shouldn’t immediately expect our machine overlords to adopt from our limited moral scope. Perhaps the eventual evolution of our digital intelligence won’t be pure evil, but an infinite, bias-less compassion for all living things; no matter how selfish, self-righteous, or self-destructive they may be.
So… Should We Be Worried About It?
It depends on who you ask.
If you polled two of the smartest engineers and mathematicians in the modern world, you’d get four different answers, and the numbers don’t sway from dead even no matter how many people you add to the scoreboard. Either way, the core question we should be asking isn’t “is AI coming?” – it is, and none of us will be able to stop it. Looking across so many different perspectives, the real question no one is comfortable answering with too much gumption is: “will it be merciful?”
Even after some of the world’s greatest minds have weighed in on the issue, the picture of what machine intelligence might look like 20, 30, or 50 years from now still comes out pretty murky. Because the field of AI transforms every time a new chip is manufactured or a new transistor material is developed, claiming ultimate authority on what may or may not happen is a bit like saying you “know” the dice are certain to come up snake eyes on the next throw.
One thing we can report with confidence: if you’re worried about getting a pink slip next week from your computerized cash register, try not to get too worked up about it. Taco Bell will still be open for Taco Tuesdays, and a human will most definitely be taking your order at the window (and forgetting the green sauce, again). According to a survey conducted by James Barrat at last year’s AGI Summit in Quebec, the jury is still out on a hard timeline for AI. Less than half of those in attendance said they believed we would achieve a true AGI before the year 2025, while over 60 percent said it would take until at least 2050, if not into the next century and beyond.
Putting a hard date on our date with digital destiny is a bit like saying you know it’s going to rain on this exact date 34 years from now. The gap between a true AGI and an advanced artificial super intelligence is so slim that things will either go really right, or horribly wrong, very, very quickly. And although quantum computers are just over the horizon and we’ve all got networked smartphones in our pockets that can beam signals into space, we’re still barely scratching the surface of understanding why we think about things the way we do, or where consciousness even comes from in the first place.
To imagine we could accidentally create an artificial mind rife with all our own faults and evolutionary misfires – before we even know what it is that makes us who we are – is the essence of the human ego run amok.
In the end, despite our unrelenting desire to decide who will come out on top in the coming war and/or peace treaty between mankind and machines, it’s a contest of limited expectations vs. limitless possibilities, and all we’re doing is arguing semantics in between. Sure, if you’re fresh out of high school and looking to get your taxi driving certification, the CEO of Uber has half a million reasons why you should probably think about finding a career somewhere else.
But if you’re stockpiling weapons and canned beans for the AI apocalypse, you might be better off spending that time learning how to paint, code, or write the next great American novel. Even by the most conservative estimates, it will be decades before any machine learns how to be Monet, or teaches itself C# and Java, because humans possess a creativity, an ingenuity, and an ability to express our innermost selves that no automated coffee maker ever could.
Yes, we might get a little emotional sometimes, come down with a cold on the job, or need to take a power nap in the middle of the day, but maybe it’s precisely because we’re human that the threat of creating something greater than us inside a machine is still a long, long way away.