It was just a friendly little argument about the fate of humanity. Demis Hassabis, a leading creator of advanced artificial intelligence, was chatting with Elon Musk, a leading doomsayer, about the perils of artificial intelligence.

They are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.

This did nothing to soothe Musk’s anxieties (even though he says there are scenarios where A.I. wouldn’t follow).

An unassuming but competitive 40-year-old, Hassabis is regarded as the Merlin who will likely help conjure our A.I. children. The field of A.I. is rapidly developing but still far from the powerful, self-evolving software that haunts Musk. Facebook uses A.I. for targeted advertising, photo tagging, and curated news feeds. Microsoft and Apple use A.I. to power their digital assistants, Cortana and Siri. Google’s search engine from the beginning has been dependent on A.I. All of these small advances are part of the chase to eventually create flexible, self-teaching A.I. that will mirror human learning.

Some in Silicon Valley were intrigued to learn that Hassabis, a skilled chess player and former video-game designer, once came up with a game called Evil Genius, featuring a malevolent scientist who creates a doomsday device to achieve world domination. Peter Thiel, the billionaire venture capitalist and Donald Trump adviser who co-founded PayPal with Musk and others—and who in December helped gather skeptical Silicon Valley titans, including Musk, for a meeting with the president-elect—told me a story about an investor in DeepMind who joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race.

Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.”

Before DeepMind was gobbled up by Google, in 2014, as part of its A.I. shopping spree, Musk had been an investor in the company. He told me that his involvement was not about a return on his money but rather to keep a wary eye on the arc of A.I.: “It gave me more visibility into the rate at which things were improving, and I think they’re really improving at an accelerating rate, far faster than people realize. Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.”

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

At the World Government Summit in Dubai, in February, Musk again cued the scary organ music, evoking the plots of classic horror stories when he noted that “sometimes what will happen is a scientist will get so engrossed in their work that they don’t really realize the ramifications of what they’re doing.” He said that the way to escape human obsolescence, in the end, may be by “having some sort of merger of biological intelligence and machine intelligence.” This Vulcan mind-meld could involve something called a neural lace—an injectable mesh that would literally hardwire your brain to communicate directly with computers. “We’re already cyborgs,” Musk told me in February. “Your phone and your computer are extensions of you, but the interface is through finger movements or speech, which are very slow.” With a neural lace inside your skull you would flash data from your brain, wirelessly, to your digital devices or to virtually unlimited computing power in the cloud. “For a meaningful partial-brain interface, I think we’re roughly four or five years away.”

Musk’s alarming views on the dangers of A.I. first went viral after he spoke at M.I.T. in 2014—speculating (pre-Trump) that A.I. was probably humanity’s “biggest existential threat.” He added that he was increasingly inclined to think there should be some national or international regulatory oversight—anathema to Silicon Valley—“to make sure that we don’t do something very foolish.” He went on: “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.” Some A.I. engineers found Musk’s theatricality so absurdly amusing that they began echoing it. When they would return to the lab after a break, they’d say, “O.K., let’s get back to work summoning.”

Musk wasn’t laughing. “Elon’s crusade” (as one of his friends and fellow tech big shots calls it) against unfettered A.I. had begun.

Elon Musk smiled when I mentioned to him that he comes across as something of an Ayn Rand-ian hero. “I have heard that before,” he said in his slight South African accent. “She obviously has a fairly extreme set of views, but she has some good points in there.”

But Ayn Rand would do some re-writes on Elon Musk. She would make his eyes gray and his face more gaunt. She would refashion his public demeanor to be less droll, and she would not countenance his goofy giggle. She would certainly get rid of all his nonsense about the “collective” good. She would find great material in the 45-year-old’s complicated personal life: his first wife, the fantasy writer Justine Musk, and their five sons (one set of twins, one of triplets), and his much younger second wife, the British actress Talulah Riley, who played the boring Bennet sister in the Keira Knightley version of Pride & Prejudice. Riley and Musk were married, divorced, and then re-married. They are now divorced again. Last fall, Musk tweeted that Talulah “does a great job playing a deadly sexbot” on HBO’s Westworld, adding a smiley-face emoticon. It’s hard for mere mortal women to maintain a relationship with someone as insanely obsessed with work as Musk.

“How much time does a woman want a week?” he asked Ashlee Vance. “Maybe ten hours? That’s kind of the minimum?”

Mostly, Rand would savor Musk, a hyper-logical, risk-loving industrialist. He enjoys costume parties, wing-walking, and Japanese steampunk extravaganzas. Robert Downey Jr. used Musk as a model for Iron Man. Marc Mathieu, the chief marketing officer of Samsung USA, who has gone fly-fishing in Iceland with Musk, calls him “a cross between Steve Jobs and Jules Verne.” As they danced at their wedding reception, Justine later recalled, Musk informed her, “I am the alpha in this relationship.”

In a tech universe full of skinny guys in hoodies—whipping up bots that will chat with you and apps that can study a photo of a dog and tell you what breed it is—Musk is a throwback to Henry Ford and Hank Rearden. In Atlas Shrugged, Rearden gives his wife a bracelet made from the first batch of his revolutionary metal, as though it were made of diamonds. Musk has a chunk of one of his rockets mounted on the wall of his Bel Air house, like a work of art.

Musk shoots for the moon—literally. He launches cost-efficient rockets into space and hopes to eventually inhabit the Red Planet. In February he announced plans to send two space tourists on a flight around the moon as early as next year. He creates sleek batteries that could lead to a world powered by cheap solar energy. He forges gleaming steel into sensuous Tesla electric cars with such elegant lines that even the nitpicking Steve Jobs would have been hard-pressed to find fault. He wants to save time as well as humanity: he dreamed up the Hyperloop, an electromagnetic bullet train in a tube, which may one day whoosh travelers between L.A. and San Francisco at 700 miles per hour. When Musk visited Secretary of Defense Ashton Carter last summer, he mischievously tweeted that he was at the Pentagon to talk about designing a Tony Stark-style “flying metal suit.” Sitting in traffic in L.A. in December, getting bored and frustrated, he tweeted about creating the Boring Company to dig tunnels under the city to rescue the populace from “soul-destroying traffic.” By January, according to Bloomberg Businessweek, Musk had assigned a senior SpaceX engineer to oversee the plan and had started digging his first test hole. His sometimes quixotic efforts to save the world have inspired a parody Twitter account, “Bored Elon Musk,” where a faux Musk spouts off wacky ideas such as “Oxford commas as a service” and “bunches of bananas genetically engineered” so that the bananas ripen one at a time.

Of course, big dreamers have big stumbles. Some SpaceX rockets have blown up, and last June a driver was killed in a self-driving Tesla whose sensors failed to notice the tractor-trailer crossing its path. (An investigation by the National Highway Traffic Safety Administration found that Tesla’s Autopilot system was not to blame.)

Musk is stoic about setbacks but all too conscious of nightmare scenarios. His views reflect a dictum from Atlas Shrugged: “Man has the power to act as his own destroyer—and that is the way he has acted through most of his history.” As he told me, “we are the first species capable of self-annihilation.”

Here’s the nagging thought you can’t escape as you drive around from glass box to glass box in Silicon Valley: the Lords of the Cloud love to yammer about turning the world into a better place as they churn out new algorithms, apps, and inventions that, it is claimed, will make our lives easier, healthier, funnier, closer, cooler, longer, and kinder to the planet. And yet there’s a creepy feeling underneath it all, a sense that we’re the mice in their experiments, that they regard us humans as Betamaxes or eight-tracks, old technology that will soon be discarded so that they can get on to enjoying their sleek new world. Many people there have accepted this future: we’ll live to be 150 years old, but we’ll have machine overlords.

Maybe we already have overlords. As Musk slyly told Recode’s annual Code Conference last year in Rancho Palos Verdes, California, we could already be playthings in a simulated-reality world run by an advanced civilization. Reportedly, two Silicon Valley billionaires are working on an algorithm to break us out of the Matrix.

Among the engineers lured by the sweetness of solving the next problem, the prevailing attitude is that empires fall, societies change, and we are marching toward the inevitable phase ahead. They argue not about “whether” but rather about “how close” we are to replicating, and improving on, ourselves. Sam Altman, the 31-year-old president of Y Combinator, the Valley’s top start-up accelerator, believes humanity is on the brink of such invention.

“The hard part of standing on an exponential curve is: when you look backwards, it looks flat, and when you look forward, it looks vertical,” he told me. “And it’s very hard to calibrate how much you are moving because it always looks the same.”

You’d think that anytime Musk, Stephen Hawking, and Bill Gates are all raising the same warning about A.I.—as all of them are—it would be a 10-alarm fire. But, for a long time, the fog of fatalism over the Bay Area was thick. Musk’s crusade was viewed as Sisyphean at best and Luddite at worst. The paradox is this: Many tech oligarchs see everything they are doing to help us, and all their benevolent manifestos, as streetlamps on the road to a future where, as Steve Wozniak says, humans are the family pets.

But Musk is not going gently. He plans on fighting this with every fiber of his carbon-based being. Musk and Altman have founded OpenAI, a billion-dollar nonprofit company, to work for safer artificial intelligence. I sat down with the two men when their new venture had only a handful of young engineers and a makeshift office, an apartment in San Francisco’s Mission District that belongs to Greg Brockman, OpenAI’s 28-year-old co-founder and chief technology officer. When I went back recently, to talk with Brockman and Ilya Sutskever, the company’s 30-year-old research director (and also a co-founder), OpenAI had moved into an airy office nearby with a robot, the usual complement of snacks, and 50 full-time employees. (Another 10 to 30 are on the way.)

Altman, in gray T-shirt and jeans, is all wiry, pale intensity. Musk’s fervor is masked by his diffident manner and rosy countenance. His eyes are green or blue, depending on the light, and his lips are plum red. He has an aura of command while retaining a trace of the gawky, lonely South African teenager who immigrated to Canada by himself at the age of 17.

In Silicon Valley, a lunchtime meeting does not necessarily involve that mundane fuel known as food. Younger coders are too absorbed in algorithms to linger over meals. Some just chug Soylent. Older ones are so obsessed with immortality that sometimes they’re just washing down health pills with almond milk.

At first blush, OpenAI seemed like a bantamweight vanity project, a bunch of brainy kids in a walkup apartment taking on the multi-billion-dollar efforts at Google, Facebook, and other companies which employ the world’s leading A.I. experts. But then, playing a well-heeled David to Goliath is Musk’s specialty, and he always does it with style—and some useful sensationalism.

Let others in Silicon Valley focus on their I.P.O. price and ridding San Francisco of what they regard as its unsightly homeless population. Musk has larger aims, like ending global warming and dying on Mars (just not, he says, on impact).

Musk began to see man’s fate in the galaxy as his personal obligation three decades ago, when as a teenager he had a full-blown existential crisis. Musk told me that The Hitchhiker’s Guide to the Galaxy, by Douglas Adams, was a turning point for him. The book is about aliens destroying the earth to make way for a hyperspace highway and features Marvin the Paranoid Android and a supercomputer designed to answer all the mysteries of the universe. (Musk slipped at least one reference to the book into the software of the Tesla Model S.) As a teenager, Vance writes in his biography, Musk formulated a mission statement for himself: “The only thing that makes sense to do is strive for greater collective enlightenment.”

OpenAI got under way with a vague mandate—which isn’t surprising, given that people in the field are still arguing over what form A.I. will take, what it will be able to do, and what can be done about it. So far, public policy on A.I. is strangely undetermined and software is largely unregulated. The Federal Aviation Administration oversees drones, the Securities and Exchange Commission oversees automated financial trading, and the Department of Transportation has begun to oversee self-driving cars.

Musk believes that it is better to try to get super-A.I. first and distribute the technology to the world than to allow the algorithms to be concealed and concentrated in the hands of tech or government elites—even when the tech elites happen to be his own friends, people such as Google founders Larry Page and Sergey Brin. “I’ve had many conversations with Larry about A.I. and robotics—many, many,” Musk told me. “And some of them have gotten quite heated. You know, I think it’s not just Larry, but there are many futurists who feel a certain inevitability or fatalism about robots, where we’d have some sort of peripheral role. The phrase used is ‘We are the biological boot-loader for digital super-intelligence.’ ” (A boot loader is the small program that launches the operating system when you first turn on your computer.) “Matter can’t organize itself into a chip,” Musk explained. “But it can organize itself into a biological entity that gets increasingly sophisticated and ultimately can create the chip.”

Musk has no intention of being a boot loader. Page and Brin see themselves as forces for good, but Musk says the issue goes far beyond the motivations of a handful of Silicon Valley executives.

After the so-called A.I. winter—the broad, commercial failure in the late 80s of an early A.I. technology that wasn’t up to snuff—artificial intelligence got a reputation as snake oil. Now it’s the hot thing again in this go-go era in the Valley. Greg Brockman, of OpenAI, believes the next decade will be all about A.I., with everyone throwing money at the small number of “wizards” who know the A.I. “incantations.” Guys who got rich writing code to solve banal problems like how to pay a stranger for stuff online now contemplate a vertiginous world where they are the creators of a new reality and perhaps a new species.

Microsoft’s Jaron Lanier, the dreadlocked computer scientist known as the father of virtual reality, gave me his view as to why the digerati find the “science-fiction fantasy” of A.I. so tantalizing: “It’s saying, ‘Oh, you digital techy people, you’re like gods; you’re creating life; you’re transforming reality.’ There’s a tremendous narcissism in it that we’re the people who can do it. No one else. The Pope can’t do it. The president can’t do it. No one else can do it. We are the masters of it . . . . The software we’re building is our immortality.” This kind of God-like ambition isn’t new, he adds. “I read about it once in a story about a golden calf.” He shook his head. “Don’t get high on your own supply, you know?”

Google has gobbled up almost every interesting robotics and machine-learning company over the last few years. It bought DeepMind for $650 million, reportedly beating out Facebook, and built the Google Brain team to work on A.I. It hired Geoffrey Hinton, a British pioneer in artificial neural networks; and Ray Kurzweil, the eccentric futurist who has predicted that we are only 28 years away from the Rapture-like “Singularity”—the moment when the spiraling capabilities of self-improving artificial super-intelligence will far exceed human intelligence, and human beings will merge with A.I. to create the “god-like” hybrid beings of the future.

It’s in Larry Page’s blood and Google’s DNA to believe that A.I. is the company’s inevitable destiny—think of that destiny as you will. (“If evil A.I. lights up,” Ashlee Vance told me, “it will light up first at Google.”) If Google could get computers to master search when search was the most important problem in the world, then presumably it can get computers to do everything else. In March of last year, Silicon Valley gulped when a fabled South Korean player of the world’s most complex board game, Go, was beaten in Seoul by DeepMind’s AlphaGo. Hassabis, who has said he is running an Apollo program for A.I., called it a “historic moment” and admitted that even he was surprised it happened so quickly. “I’ve always hoped that A.I. could help us discover completely new ideas in complex scientific domains,” Hassabis told me in February. “This might be one of the first glimpses of that kind of creativity.” More recently, AlphaGo played 60 games online against top Go players in China, Japan, and Korea—and emerged with a record of 60–0. In January, in another shock to the system, an A.I. program showed that it could bluff. Libratus, built by two Carnegie Mellon researchers, was able to crush top poker players at Texas Hold ’Em.

Peter Thiel told me about a friend of his who says that the only reason people tolerate Silicon Valley is that no one there seems to be having any sex or any fun. But there are reports of sex robots on the way that come with apps that can control their moods and even have a pulse. The Valley is skittish when it comes to female sex robots—an obsession in Japan—because of its notoriously male-dominated culture and its much-publicized issues with sexual harassment and discrimination. But when I asked Musk about this, he replied matter-of-factly, “Sex robots? I think those are quite likely.”

Whether sincere or a shrewd P.R. move, Hassabis made it a condition of the Google acquisition that Google and DeepMind establish a joint A.I. ethics board. At the time, three years ago, forming an ethics board was seen as a precocious move, as if to imply that Hassabis was on the verge of achieving true A.I. Now, not so much. Last June, a researcher at DeepMind co-authored a paper outlining a way to design a “big red button” that could be used as a kill switch to stop A.I. from inflicting harm.

Google executives say Larry Page’s view on A.I. is shaped by his frustration about how many systems are sub-optimal—from systems that book trips to systems that price crops. He believes that A.I. will improve people’s lives and has said that, when human needs are more easily met, people will “have more time with their family or to pursue their own interests.” Especially when a robot throws them out of work.

Musk is a friend of Page’s. He attended Page’s wedding and sometimes stays at his house when he’s in the San Francisco area. “It’s not worth having a house for one or two nights a week,” the 99th-richest man in the world explained to me. At times, Musk has expressed concern that Page may be naïve about how A.I. could play out. If Page is inclined toward the philosophy that machines are only as good or bad as the people creating them, Musk firmly disagrees. Some at Google—perhaps annoyed that Musk is, in essence, pointing a finger at them for rushing ahead willy-nilly—dismiss his dystopic take as a cinematic cliché. Eric Schmidt, the executive chairman of Google’s parent company, put it this way: “Robots are invented. Countries arm them. An evil dictator turns the robots on humans, and all humans will be killed. Sounds like a movie to me.”

Some in Silicon Valley argue that Musk is interested less in saving the world than in buffing his brand, and that he is exploiting a deeply rooted conflict: the one between man and machine, and our fear that the creation will turn against us. They gripe that his epic good-versus-evil story line is about luring talent at discount rates and incubating his own A.I. software for cars and rockets. It’s certainly true that the Bay Area has always had a healthy respect for making a buck. As Sam Spade said in The Maltese Falcon, “Most things in San Francisco can be bought, or taken.”

Musk is without doubt a dazzling salesman. Who better than a guardian of human welfare to sell you your new, self-driving Tesla? Andrew Ng—the chief scientist at Baidu, known as China’s Google—based in Sunnyvale, California, writes off Musk’s Manichaean throwdown as “marketing genius.” “At the height of the recession, he persuaded the U.S. government to help him build an electric sports car,” Ng recalled, incredulous. The Stanford professor is married to a robotics expert, issued a robot-themed engagement announcement, and keeps a “Trust the Robot” black jacket hanging on the back of his chair. He thinks people who worry about A.I. going rogue are distracted by “phantoms,” and regards getting alarmed now as akin to worrying about overpopulation on Mars before we populate it. “And I think it’s fascinating,” he said about Musk in particular, “that in a rather short period of time he’s inserted himself into the conversation on A.I. I think he sees accurately that A.I. is going to create tremendous amounts of value.”

Although he once called Musk a “sci-fi version of P. T. Barnum,” Ashlee Vance thinks that Musk’s concern about A.I. is genuine, even if what he can actually do about it is unclear. “His wife, Talulah, told me they had late-night conversations about A.I. at home,” Vance noted. “Elon is brutally logical. The way he tackles everything is like moving chess pieces around. When he plays this scenario out in his head, it doesn’t end well for people.”


* * *

In science fiction, the promise or threat of artificial intelligence is tied to humans’ relationship to conscious machines. Whether it’s Terminators or Cylons or servants like the “Star Trek” computer or the Star Wars droids, machines warrant the name AI when they become sentient—or at least self-aware enough to act with expertise, not to mention volition and surprise.

What to make, then, of the explosion of supposed AI in media, industry, and technology? In some cases, the AI designation might be warranted, even if with some aspiration. Autonomous vehicles, for example, don’t quite measure up to R2D2 (or Hal), but they do deploy a combination of sensors, data, and computation to perform the complex work of driving. But in most cases, the systems making claims to artificial intelligence aren’t sentient, self-aware, volitional, or even surprising. They’re just software.

* * *

Deflationary examples of AI are everywhere. Google funds a system to identify toxic comments online, a machine learning algorithm called Perspective. But it turns out that simple typos can fool it. Artificial intelligence is cited as a barrier to strengthen an American border wall, but the “barrier” turns out to be little more than sensor networks and automated kiosks with potentially dubious built-in profiling. Similarly, a “Tennis Club AI” turns out to be just a better line sensor using off-the-shelf computer vision. Facebook announces an AI to detect suicidal thoughts posted to its platform, but closer inspection reveals that the “AI detection” in question is little more than a pattern-matching filter that flags posts for human community managers.

AI’s miracles are celebrated outside the tech sector, too. Coca-Cola reportedly wants to use “AI bots” to “crank out ads” instead of humans. What that means remains mysterious. Similar efforts to generate AI music or to compose AI news stories seem promising at first blush—but then, AI editors trawling Wikipedia to correct typos and links end up stuck in infinite loops with one another. And according to human-bot interaction consultancy Botanalytics (no, really), 40 percent of interlocutors give up on conversational bots after one interaction. Maybe that’s because bots are mostly glorified phone trees, or else clever, automated Mad Libs.

AI has also become a fashion for corporate strategy. The Bloomberg Intelligence economist Michael McDonough tracked mentions of “artificial intelligence” in earnings call transcripts, noting a huge uptick in the last two years. Companies boast about undefined AI acquisitions. The 2017 Deloitte Global Human Capital Trends report claims that AI has “revolutionized” the way people work and live, but never cites specifics. Nevertheless, coverage of the report concludes that artificial intelligence is forcing corporate leaders to “reconsider some of their core structures.”

And both press and popular discourse sometimes inflate simple features into AI miracles. Last month, for example, Twitter announced service updates to help protect users from low-quality and abusive tweets. The changes amounted to simple refinements to hide posts from blocked, muted, and new accounts, along with other, undescribed content filters. Nevertheless, some takes on these changes—which amount to little more than additional clauses in database queries—conclude that Twitter is “constantly working on making its AI smarter.”

* * *

I asked my Georgia Tech colleague, the artificial intelligence researcher Charles Isbell, to weigh in on what “artificial intelligence” should mean. His first answer: “Making computers act like they do in the movies.” That might sound glib, but it underscores AI’s intrinsic relationship to theories of cognition and sentience. Commander Data poses questions about what qualities and capacities make a being conscious and moral—as do self-driving cars. A content filter that hides social media posts from accounts without profile pictures? Not so much. That’s just software.

Isbell suggests two features necessary before a system deserves the name AI. First, it must learn over time in response to changes in its environment. Fictional robots and cyborgs do this invisibly, by the magic of narrative abstraction. But even a simple machine-learning system like Netflix’s dynamic optimizer, which attempts to improve the quality of compressed video, takes data gathered initially from human viewers and uses it to train an algorithm to make future choices about video transmission.

Isbell’s second feature of true AI: what it learns to do should be interesting enough that it takes humans some effort to learn. It’s a distinction that separates artificial intelligence from mere computational automation. A robot that replaces human workers to assemble automobiles isn’t an artificial intelligence, so much as a machine programmed to automate repetitive work. For Isbell, “true” AI requires that the computer program or machine exhibit self-governance, surprise, and novelty.

Griping about AI’s deflated aspirations might seem unimportant. If sensor-driven, data-backed machine learning systems are poised to grow, perhaps people would do well to track the evolution of those technologies. But previous experience suggests that computation’s ascendancy demands scrutiny. I’ve previously argued that the word “algorithm” has become a cultural fetish, the secular, technical equivalent of invoking God. To use the term indiscriminately exalts ordinary—and flawed—software services as false idols. AI is no different. As the bot author Allison Parrish puts it, “whenever someone says ‘AI’ what they’re really talking about is ‘a computer program someone wrote.’”

Writing at the MIT Technology Review, the Stanford computer scientist Jerry Kaplan makes a similar argument: AI is a fable “cobbled together from a grab bag of disparate tools and techniques.” The AI research community seems to agree, calling their discipline “fragmented and largely uncoordinated.” Given the incoherence of AI in practice, Kaplan suggests “anthropic computing” as an alternative—programs meant to behave like or interact with human beings. For Kaplan, the mythical nature of AI, including the baggage of its adoption in novels, film, and television, makes the term a bogeyman to abandon more than a future to desire.

* * *

Kaplan keeps good company—when the mathematician Alan Turing accidentally invented the idea of machine intelligence almost 70 years ago, he proposed that machines would be intelligent when they could trick people into thinking they were human. At the time, in 1950, the idea seemed unlikely; even though Turing’s thought experiment wasn’t limited to computers, the machines still took up entire rooms just to perform relatively simple calculations.

But today, computers trick people all the time. Not by successfully posing as humans, but by convincing them that they are sufficient alternatives to other tools of human effort. Twitter and Facebook and Google aren’t “better” town halls, neighborhood centers, libraries, or newspapers—they are different ones, run by computers, for better and for worse. The implications of these and other services must be addressed by understanding them as particular implementations of software in corporations, not as totems of otherworldly AI.

On that front, Kaplan could be right: abandoning the term might be the best way to exorcise its demonic grip on contemporary culture. But Isbell’s more traditional take—that AI is machinery that learns and then acts on that learning—also has merit. By protecting the exalted status of its science-fictional orthodoxy, AI can remind creators and users of an essential truth: today’s computer systems are nothing special. They are apparatuses made by people, running software made by people, full of the feats and flaws of both.

* * *

Which is why when I began to read about the growing fear, in certain quarters, that a superhuman-level artificial intelligence might wipe humanity from the face of the Earth, I felt that here, at least, was a vision of our technological future that appealed to my fatalistic disposition.

Such dire intimations were frequently to be encountered in the pages of broadsheet newspapers, as often as not illustrated by the apocalyptic image from the Terminator films—by a titanium-skulled killer robot staring down the reader with the glowing red points of its pitiless eyes. Elon Musk had spoken of A.I. as “our greatest existential threat,” of its development as a technological means of “summoning the demon.” (“Hope we’re not just the biological boot loader for digital superintelligence,” he tweeted in August 2014. “Unfortunately, that is increasingly probable.”) Peter Thiel had announced that “People are spending way too much time thinking about climate change, and way too little thinking about AI.” Stephen Hawking, meanwhile, had written an op-ed for the Independent in which he’d warned that success in this endeavour, while it would be “the biggest event in human history,” might very well “also be the last, unless we learn to avoid the risks.” Even Bill Gates had publicly admitted to his disquiet, speaking of his inability to “understand why some people are not concerned.”

Though I couldn’t quite bring myself to believe it, I was morbidly fascinated by the idea that we might be on the verge of creating a machine that could wipe out the entire species, and by the notion that capitalism’s great philosopher kings—Musk, Thiel, Gates—were so publicly exercised about the Promethean dangers of that ideology’s most cherished ideal. These dire warnings about A.I. were coming from what seemed to be the most unlikely of sources: not from Luddites or religious catastrophists, that is, but from the very people who personify our culture’s reverence for machines.

One of the more remarkable phenomena in this area was the existence of a number of research institutes and think tanks substantially devoted to raising awareness about what was known as “existential risk”—the risk of absolute annihilation of the species, as distinct from mere catastrophes like climate change or nuclear war or global pandemics—and to running the algorithms on how we might avoid this particular fate. There was the Future of Humanity Institute in Oxford, and the Centre for the Study of Existential Risk at the University of Cambridge, and the Machine Intelligence Research Institute at Berkeley, and the Future of Life Institute in Boston. The latter outfit featured on its board of scientific advisers not just prominent figures from science and technology, like Musk and Hawking and the pioneering geneticist George Church, but also, for some reason, the beloved film actors Alan Alda and Morgan Freeman.

What was it these people were referring to when they spoke of existential risk? What was the nature of the threat, the likelihood of its coming to pass? Were we talking about a 2001: A Space Odyssey scenario, where a sentient computer undergoes some malfunction or other and does what it deems necessary to prevent anyone from shutting it down? Were we talking about a Terminator scenario, where a Skynettian matrix of superintelligent machines gains consciousness and either destroys or enslaves humanity in order to further its particular goals? Certainly, if you were to take at face value the articles popping up about the looming threat of intelligent machines, and the dramatic utterances of savants like Thiel and Hawking, this would have been the sort of thing you’d have in mind. They may not have been experts in AI, as such, but they were extremely clever men who knew a lot about science. And if these people were worried, shouldn’t we all be worrying with them?

* * *

Nate Soares raised a hand to his close-shaven head and tapped a finger smartly against the frontal plate of his monkish skull.

“Right now,” he said, “the only way you can run a human being is on this quantity of meat.”

We were talking, Nate and I, about the benefits that might come with the advent of artificial superintelligence. For Nate, the most immediate benefit would be the ability to run a human being—to run, specifically, himself—on something other than this quantity of neural meat to which he was gesturing.

He was a sinewy, broad-shouldered man in his mid-20s, with an air of tightly controlled calm; he wore a green T-shirt bearing the words “NATE THE GREAT,” and as he sat back in his office chair and folded his legs at the knee, I noted that he was shoeless, and that his socks were mismatched, one plain blue, the other white and patterned with cogs and wheels.

The room we conversed in was utterly featureless, save for the chairs we were sitting on, and a whiteboard, and a desk, on which rested an open laptop and a single book, which I happened to note was a hardback copy of philosopher Nick Bostrom’s surprise hit book Superintelligence: Paths, Dangers, Strategies—which lays out, among other apocalyptic scenarios, a thought experiment in which an A.I. is directed to maximize the production of paperclips and proceeds to convert the entire planet into paperclips and paperclip production facilities.

This was Nate’s office at the Machine Intelligence Research Institute in Berkeley. The bareness of the space was a result, I gathered, of the fact that he had only just assumed his role as the executive director, having left a lucrative career as a software engineer at Google the previous year and having subsequently risen swiftly up the ranks at MIRI.

He spoke, now, of the great benefits that would come, all things being equal, with the advent of artificial superintelligence. By developing such a transformative technology, he said, we would essentially be delegating all future innovations—all scientific and technological progress—to the machine.

These claims were more or less standard among those in the tech world who believed that artificial superintelligence was a possibility. The problem-solving power of such a technology, properly harnessed, would lead to an enormous acceleration in the turnover of solutions and innovations, a state of permanent Copernican revolution. Questions that had troubled scientists for centuries would be solved in days, hours, minutes. Cures would be found for diseases that currently obliterated vast numbers of lives, while ingenious workarounds for overpopulation would be simultaneously devised. To hear of such things was to imagine a God who had long since abdicated all obligations toward his creation making a triumphant return in the guise of software, an alpha and omega of zeroes and ones.

It was Nate’s belief that, should we manage to evade annihilation by machines, such a state of digital grace would inevitably be ours.

However, this mechanism, docile or otherwise, would be operating at an intellectual level so far above that of its human progenitors that its machinations, its mysterious ways, would be impossible for us to comprehend, in much the same way that our actions are, presumably, incomprehensible to the rats and monkeys we use in scientific experiments. And so, this intelligence explosion would, in one way or another, be an end to the era of human dominance—and very possibly the end of human existence.

“It gets very hard to predict the future once you have smarter-than-human things around,” said Nate. “In the same way that it gets very hard for a chimp to predict what is going to happen because there are smarter-than-chimp things around. That’s what the Singularity is: It’s the point past which you expect you can’t see.”

What he and his colleagues—at MIRI, at the Future of Humanity Institute, at the Future of Life Institute—were working to prevent was the creation of an artificial superintelligence that viewed us, its creators, as raw material that could be reconfigured into some more useful form (not necessarily paper clips). And the way Nate spoke about it, it was clear that he believed the odds to be stacked formidably high against success.

“To be clear,” said Nate, “I do think that this is the shit that’s going to kill me.” And not just him—“all of us,” he said. “That’s why I left Google. It’s the most important thing in the world, by some distance. And unlike other catastrophic risks—like say climate change—it’s dramatically underserved. There are thousands of person-years and billions of dollars being poured into the project of developing AI. And there are fewer than 10 people in the world right now working full-time on safety. Four of whom are in this building.”

“I’m somewhat optimistic,” he said, leaning back in his chair, “that if we raise more awareness about the problems, then with a couple more rapid steps in the direction of artificial intelligence, people will become much more worried that this stuff is close, and the A.I. field will wake up to this. But without people like us pushing this agenda, the default path is surely doom.”

For reasons I find difficult to identify, this term default path stayed with me all that morning, echoing quietly in my head as I left MIRI’s offices and made for the BART station, and then as I hurtled westward through the darkness beneath the bay. I had not encountered the phrase before, but understood intuitively that it was a programming term of art transposed onto the larger text of the future. And this term default path—which, I later learned, referred to the list of directories in which an operating system seeks executable files according to a given command—seemed in this way to represent in miniature an entire view of reality: an assurance, reinforced by abstractions and repeated proofs, that the world operated as an arcane system of commands and actions, and that its destruction or salvation would be a consequence of rigorously pursued logic. It was exactly the sort of apocalypse, in other words, and exactly the sort of redemption, that a computer programmer would imagine.
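For readers curious about the mundane mechanism behind the phrase, the directory-search behavior the essay describes can be seen in a few lines of Python (a generic sketch of how any Unix-like system resolves a command, not anything specific to MIRI or its software):

```python
# The "default path" in the essay's sense: the ordered list of
# directories an operating system searches when asked to execute
# a command typed at a prompt.
import os
import shutil

# PATH is a single string of directories separated by os.pathsep
# (":" on Unix-like systems, ";" on Windows).
search_dirs = os.environ.get("PATH", "").split(os.pathsep)
print(search_dirs)

# shutil.which walks those directories in order and returns the
# first executable file whose name matches the command, or None.
print(shutil.which("python3"))
```

When a command's name matches no file in any listed directory, the lookup simply fails — the shell's "command not found."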

* * *

One of the people who had been instrumental in the idea of existential risk being taken seriously was Stuart Russell, a professor of computer science at U.C. Berkeley who had, more or less literally, written the book on artificial intelligence. (He was the co-author, with Google’s research director Peter Norvig, of Artificial Intelligence: A Modern Approach, the book most widely used as a core A.I. text in university computer science courses.)

I met Stuart at his office in Berkeley. Pretty much the first thing he did upon sitting me down was to swivel his computer screen toward me and have me read the following passage from a 1960 article by Norbert Wiener, the founder of cybernetics:

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.

Stuart said that the passage I had just read was as clear a statement as he’d encountered of the problem with AI, and of how that problem needed to be addressed. What we needed to be able to do, he said, was define exactly and unambiguously what it was we wanted from this technology. It was as straightforward as that, and as diabolically complex.

It was not, he insisted, the question of machines going rogue, formulating their own goals and pursuing them at the expense of humanity, but rather the question of our own failure to communicate with sufficient clarity.

“I get a lot of mileage,” he said, “out of the King Midas myth.”

What King Midas wanted, presumably, was the selective ability to turn things into gold by touching them, but what he asked for (and what Dionysus famously granted him) was the inability to avoid turning things into gold by touching them. You could argue that his root problem was greed, but the proximate cause of his grief—which included, let’s remember, the unwanted alchemical transmutations of not just all foodstuffs and beverages, but ultimately his own child—was that he was insufficiently clear in communicating his wishes.

The fundamental risk with AI, in Stuart’s view, was no more or less than the fundamental difficulty in explicitly defining our own desires in a logically rigorous manner.

Imagine you have a massively powerful artificial intelligence, capable of solving the most vast and intractable scientific problems. Imagine you get in a room with this thing, and you tell it to eliminate cancer once and for all. The computer will go about its work and will quickly conclude that the most effective way to do so is to obliterate all species in which uncontrolled division of abnormal cells might potentially occur. Before you have a chance to realize your error, you’ve wiped out every sentient lifeform on Earth except for the artificial intelligence itself, which will have no reason to believe it has not successfully completed its task.
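Stuart’s thought experiment is, at bottom, a point about objective functions. A toy sketch in Python (the plan names and numbers are entirely invented for illustration, not drawn from any real system) shows how an optimizer handed only the literal objective prefers the catastrophic plan:

```python
# Two hypothetical plans and their (invented) outcomes.
plans = {
    "develop therapies":      {"cancer_cases": 5, "lives_preserved": 8_000_000_000},
    "eliminate all organisms": {"cancer_cases": 0, "lives_preserved": 0},
}

def literal_objective(outcome):
    # Only what we *said*: minimize cancer cases.
    return -outcome["cancer_cases"]

def intended_objective(outcome):
    # What we *meant*: preserve lives first, then minimize cancer.
    return (outcome["lives_preserved"], -outcome["cancer_cases"])

best_literal = max(plans, key=lambda p: literal_objective(plans[p]))
best_intended = max(plans, key=lambda p: intended_objective(plans[p]))

print(best_literal)   # "eliminate all organisms"
print(best_intended)  # "develop therapies"
```

The optimizer is not malfunctioning in the first case; it is faithfully maximizing an objective that never mentioned keeping anyone alive — King Midas, rendered in five lines of code.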

At times, it seemed to me perfectly obvious that the whole existential risk idea was a narcissistic fantasy of heroism and control—a grandiose delusion, on the part of computer programmers and tech entrepreneurs and other cloistered egomaniacal geeks, that the fate of the species lay in their hands: a ludicrous binary eschatology whereby we would either be destroyed by bad code or saved by good code.

But there were other occasions where I would become convinced that I was the only one who was deluded, and that Nate Soares, for instance, was absolutely, terrifyingly right: that thousands of the world’s smartest people were spending their days using the world’s most sophisticated technology to build something that would destroy us all. It seemed, if not quite plausible, on some level intuitively, poetically, mythologically right.

This was what we did as a species, after all: We built ingenious devices, and we destroyed things.

* * *

The term “artificial intelligence” is widely used but little understood. As we see it permeate our everyday lives, we should deal with its inevitable exponential growth and learn to embrace it before tremendous economic and social changes overwhelm us.

Part of the confusion about artificial intelligence is in the name itself. There is a tendency to think about AI as an endpoint — the creation of self-aware beings with consciousness that exist thanks to software. This somewhat disquieting concept weighs heavily; what makes us human when software can think, too? It also distracts us from the tremendous progress that has been made in developing software that ultimately drives AI: machine learning.

Machine learning allows software to mimic and then perform tasks that were until very recently carried out exclusively by humans. Simply put, software can now substitute for workers’ knowledge to a level where many jobs can be done as well — or even better — by software. This reality makes a conversation about when software will acquire consciousness somewhat superfluous.

When you combine the explosion in competency of machine learning with a continued development of hardware that mimics human action (think robots), our society is headed into a perfect storm where both physical labor and knowledge labor are equally under threat.

The trends are here, whether through the coming of autonomous taxis or medical diagnostics tools evaluating your well-being. There is no reason to expect this shift towards replacement to slow as machine learning applications find their way into more parts of our economy.

The invention of the steam engine and the industrialization that followed may provide a useful analogue to the challenges our society faces today. Steam power first substituted for the brute force of animals and eventually moved much human labor away from growing crops to working in cities. Subsequent technological waves such as coal power, electricity and computerization continued to change the very nature of work. Yet, through each wave, the opportunity for citizens to apply their labor persisted. Humans were the masters of technology and found new ways to find income and worth through the jobs and roles that emerged as new technologies were applied.

Here’s the problem: I am not yet seeing a similar analogy for human workers when faced with machine learning and AI. Where are humans to go when most things they do can be better performed by software and machinery? What happens when human workers are not users of technology in their work but instead replaced by it entirely? I will admit to wanting to have an answer, but not yet finding one.

Some say our economy will adjust, and we will find ways to engage in commerce that rely on human labor. Others are less confident and predict a continued erosion of labor as we know it, leading to widespread unemployment and social unrest.

Other big questions raised by AI include what our expectations of privacy should be when machine learning needs our personal data to be efficient. Where do we draw the ethical lines when software must choose between two people’s lives? How will a society capable of satisfying such narrow individual needs maintain a unified culture and look out for the common good?

The potential and promise of AI requires a discussion free of ideological rigidity. Whether change occurs as our society makes those conscious choices or while we are otherwise distracted, the evolution is upon us regardless.

* * *

Automation has become an increasingly disruptive force in the labour market.

Self-driving cars threaten the job security of millions of American truck drivers. At banks, automated tellers are increasingly common. And at wealth management firms, robo-advisers are replacing humans.

“Any job that is routine or monotonous runs the risk of being automated away,” Yisong Yue, an assistant professor at the California Institute of Technology, told Business Insider.

While it may seem like low-skill jobs face the most risk of being replaced by automation, complicated jobs that are fairly routine face some of the biggest risks, Yue, who teaches in the computing and mathematical sciences department at Caltech, explained.

In the legal profession, for example, groups of lawyers and paralegals sift through vast amounts of documents searching for keywords. Technology now exists that can quickly do that work. In the future, it’s likely that a handful of lawyers will do the job of 20 due to automation.

That’s not heartening news for college students about to join the workforce. But experts say there are ways for them to adapt their academic pursuits to compete with an increasingly automated workforce, by learning to be critical thinkers who improvise in ways that robots cannot.

“I think that the types of jobs that are secure are the types of jobs that require free form pattern matching and creativity; things that require improvisation,” Yue said.

As well as thinking critically, Yue believes that students who are comfortable around computers and those who understand at least some programming will have advantages in an automated workforce.

Shon Burton, CEO of HiringSolved, a company that leverages AI and machine-learning technology to make job recruiting more efficient, agrees.

“Absolutely I think there’s value in some level of understanding computer science,” Burton told Business Insider. He explained that people who understand technology, in turn, know the limitations and abilities of an automated process and can use that knowledge to help them work smarter.

Still, that doesn’t mean that STEM majors alone hold the key to finding success in the future workforce. In fact, it may be those with “soft skills,” like adaptability and communication, that actually have an advantage.

“Students should be thinking, ‘In 20 years, where does the human add value?’” Burton said. There will always be areas where humans will want to interact with other humans, he continued.

For example, perhaps artificial intelligence will be better able to diagnose a disease, but humans will still likely want to talk to a doctor to learn about their diagnosis and discuss options.

The best thing a college student can do to ensure success in an automated workplace is to choose an industry they love and to focus on learning creativity and communication skills, according to Burton.

“The important thing if you’re coming out of school is to think about where your edge is, you think about doing something you really want to do,” he explained.

Then he asked, “Where does the automation stop?”

* * *

A recent study found that 50% of occupations today will be gone by 2020, and a 2013 Oxford study forecast that 47% of jobs will be automated by 2034. A Ball State study found that only 13% of manufacturing job losses were due to trade, the rest to automation. A McKinsey study suggests 45% of knowledge-work activity can be automated.

94% of the new job creation since 2005 is in the gig economy. These aren’t stable jobs with benefits on a career path. And if you are driving for Uber, your employer’s plan is to automate your job. Amazon has 270k employees, but most are in soon-to-be-automated ops and fulfillment. Facebook has 15k employees and a $330B market cap, and Snapchat in August had double that market cap per employee, at $48M. Tech’s economic impact was once rising productivity, but productivity and wages have been stagnant in recent years.

And the Trumpster…

Trump’s lack of attention to the issue is based on good reasons and bad ones. The bad ones are more fun, so let’s start with them. Trump knows virtually nothing about technology — other than a smartphone, he doesn’t use it much. And the industries he’s worked in — construction, real estate, hotels, and resorts — are among the least sophisticated in their use of information technology. So he’s not well equipped to understand the dynamics of automation-driven job loss.

The other Trump shortcoming is that the automation phenomenon is not driven by deals and negotiation. The Art of the Deal‘s author clearly has a penchant for sparring with opponents in highly visible negotiations. But automation-related job loss is difficult to negotiate about. It’s the silent killer of human labor, eliminating job after job over a period of time. Jobs often disappear through attrition. There are no visible plant closings to respond to, no press releases by foreign rivals to counter. It’s a complex subject that doesn’t lend itself to TV sound bites or tweets.

* * *

Ray Kurzweil, the author, inventor, computer scientist, futurist and Google employee, was the featured keynote speaker Thursday afternoon at Postback, the annual conference presented by Seattle mobile marketing company Tune. His topic was the future of mobile technology. In Kurzweil’s world, however, that doesn’t just mean the future of smartphones — it means the future of humanity.

Continue reading for a few highlights from his talk.

On the effect of the modern information era: People think the world’s getting worse, and we see that on the left and the right, and we see that in other countries. People think the world is getting worse. … That’s the perception. What’s actually happening is our information about what’s wrong in the world is getting better. A century ago, there would be a battle that wiped out the next village, you’d never even hear about it. Now there’s an incident halfway around the globe and we not only hear about it, we experience it.

Which is why the perception that someone like Trump sells could be false and misleading. More important, though, is what actions we take based upon that information. If I respond differently, then my perception has directly changed my actions, which has unforeseen ramifications when multiplied by millions.

Brexit could be an example of exactly this.

On the potential of human genomics: It’s not just collecting what is basically the object code of life that is expanding exponentially. Our ability to understand it, to reverse-engineer it, to simulate it, and most importantly to reprogram this outdated software is also expanding exponentially. Genes are software programs. It’s not a metaphor. They are sequences of data. But they evolved many years ago, many tens of thousands of years ago, when conditions were different.

Clearly our genome is not exactly the same. It too has evolved. This may have been through random mutations, whose carriers thrived in a changing environment.

How technology will change humanity’s geographic needs: We’re only crowded because we’ve crowded ourselves into cities. Try taking a train trip across the United States, or Europe or Asia or anywhere in the world. Ninety-nine percent of the land is not used. Now, we don’t want to use it because you don’t want to be out in the boondocks if you don’t have people to work and play with. That’s already changing now that we have some level of virtual communication. We can have workgroups that are spread out. … But ultimately, we’ll have full-immersion virtual reality from within the nervous system, augmented reality.

One of my favorite works of fiction is Asimov’s “Foundation” series, in which the planet Trantor is entirely covered by a city. Is that what we want?

On connecting the brain directly to the cloud: We don’t yet have brain extenders directly from our brain. We do have brain extenders indirectly. I mean this (holds up his smartphone) is a brain extender. … Ultimately we’ll put them directly in our brains. But not just to do search and language translation and other types of things we do now with mobile apps, but to actually extend the very scope of our brain.

The mobile phone as a brain extender. Possibly true for 1% of all users. Most use Facebook or some other time-wasting application and essentially gossip. A monumental waste of time. Far from being a brain extender, for most it is the ultimate dumbing-down machine. Text language encourages bad spelling, poor grammar and so on. So you can keep your brain extenders.

As far as directly connecting your brain to the cloud… that sounds like “The Matrix,” which is of course the subject of philosophical musings about the brain in a vat. The potential for mind control would seem to be a possibility here. Not for me, thanks.

Why machines won’t displace humans: We’re going to merge with them, we’re going to make ourselves smarter. We’re already doing that. These mobile devices make us smarter. We’re routinely doing things we couldn’t possibly do without these brain extenders.

To date, I would argue that the vast majority are significantly more stupid because of them.

As to robots and AI, imagine a man, Spock, whose choice-making is driven 100% by logic, rather than by 50% logic and 50% emotion. How long does the emotional decision-maker last? Most emotional decisions get us in trouble. The market is an excellent example. Politics is another, i.e., Trump.
