robots



There’s a decent chance that Facebook CEO Mark Zuckerberg will see this story. It’s relevant to his interests and nominally about him and the media and advertising industries his company has managed to upend and dominate. So the odds that it will appear in his Facebook News Feed are reasonably good. And should that happen, Zuckerberg might wince at this story’s headline or roll his eyes in frustration at its thesis. He might even cringe at the idea that others might see it on Facebook as well. And some almost certainly will. Because if Facebook works as designed, there’s a chance this article will also be routed or shared to their News Feeds. And there’s little the Facebook CEO can do to stop it, because he’s not really in charge of his platform — the algorithms are.

This has been true for some time now, but it’s been spotlit in recent months following a steady drumbeat of reports about Facebook as a channel for fake news and propaganda and, more recently, the company’s admission that it sold roughly $100,000 worth of ads to a Russian troll farm in 2016. The gist of the coverage follows a familiar narrative for Facebook since Trump’s surprise presidential win: that social networks as vast and pervasive as Facebook are among the most important engines of social power, with unprecedented and unchecked influence. It’s part of a Big Tech political backlash that’s gained considerable currency in recent months — enough that the big platforms like Facebook are scrambling to avoid regulation and bracing themselves for congressional testimony.

Should Zuckerberg or Twitter CEO Jack Dorsey be summoned to Congress and peppered with questions about the inner workings of their companies, they may well be ill-equipped to answer them. Because while they might be in control of the broader operations of their respective companies, they do not appear to be fully in control of the automated algorithmic systems calibrated to drive engagement on Facebook and Twitter. And they have demonstrated that they lacked the foresight to imagine and understand the now-clear real-world repercussions of those systems — fake news, propaganda, and dark targeted advertising linked to foreign interference in a US presidential election.

Among tech industry critics, every advancement from Alexa to AlphaGo to autonomous vehicles is winkingly dubbed a harbinger of a dystopian future powered by artificial intelligence. Tech moguls like Tesla and SpaceX founder Elon Musk and futurists like Stephen Hawking warn against nightmarish scenarios that vary from the destruction of the human race to the more likely threat that our lives will be subject to the whims of advanced algorithms that we’ve been happily feeding with our increasingly personal data. In 2014, Musk remarked that artificial intelligence is “potentially more dangerous than nukes” and warned that humanity might someday become a “biological boot loader for digital superintelligence.”

But if you look around, some of that dystopian algorithmic future has already arrived. Complex technological systems orchestrate many — if not most — of the consequential decisions in your life. We entrust our romantic lives to apps and algorithms — chances are you know somebody who’s swiped right or matched with a stranger and then slept with, dated, or married them. A portion of our daily contact with our friends and families is moderated via automated feeds painstakingly tailored to our interests. To navigate our cities, we’re jumping into cars with strangers assigned to us via robot dispatchers and sent down the quickest route to our destination based on algorithmic analysis of traffic patterns. Our fortunes are won and lost as the result of financial markets largely dictated by networks of high-frequency trading algorithms. Meanwhile, the always-learning AI-powered technology behind our search engines and our newsfeeds quietly shapes and reshapes the information we discover and even how we perceive it. And there’s mounting evidence that suggests it might even be capable of influencing the outcome of our elections.

Put another way, the algorithms increasingly appear to have more power to shape lives than the people who designed and maintain them. This shouldn’t come as a surprise, if only because Big Tech’s founders have been saying it for years now — in fact, it’s their favorite excuse — “we’re just a technology company” or “we’re only the platform.” And though it’s a convenient cop-out for the unintended consequences of their own creations, it’s also — from the perspectives of technological complexity and scale — kind of true. Facebook and Google and Twitter designed their systems, and they tweak them rigorously. But because the platforms themselves — the technological processes that inform decisions for billions of people every second of the day — are largely automated, they’re enormously difficult to monitor.

Facebook acknowledged this in its response to a ProPublica report this month that showed the company allowed advertisers to target users with anti-Semitic keywords. According to the report, Facebook’s anti-Semitic categories “were created by an algorithm rather than by people.”

And Zuckerberg acknowledged similar monitoring difficulties just this week while addressing Facebook’s role in protecting elections. “Now, I’m not going to sit here and tell you we’re going to catch all bad content in our system,” he explained during a Facebook Live session last Thursday. “I wish I could tell you we’re going to be able to stop all interference, but that wouldn’t be realistic.” Beneath Zuckerberg’s video, a steady stream of commenters remarked on his speech. Some offered heart emojis of support. Others mocked his demeanor and delivery. Some accused him of treason. He was powerless to stop it.


Facebook’s response to accusations about its role in the 2016 election since Nov. 9 bears this out, most notably Zuckerberg’s public comments immediately following the election that the claim that fake news influenced the US presidential election was “a pretty crazy idea.” In April, when Facebook released a white paper detailing the results of its investigation into fake news on its platform during the election, the company insisted it did not know the identity of the malicious actors using its network. And after recent revelations that Facebook had discovered Russian ads on its platform, the company maintained that as of April 2017, it was unaware of any Russian involvement. “When asked we said there was no evidence of Russian ads. That was true at the time,” Facebook told Mashable earlier this month.

Some critics of Facebook speak about the company’s leadership almost like an authoritarian government — a sovereign entity with virtually unchecked power and domineering ambition. So much so, in fact, that Zuckerberg is now frequently mentioned as a possible presidential candidate despite his public denials. But perhaps a better comparison might be the United Nations — a group of individuals endowed with the almost impossible responsibility of policing a network of interconnected autonomous powers. Just take Zuckerberg’s statement this week, in which he sounded strikingly like an embattled secretary-general: “It is a new challenge for internet communities to deal with nation-states attempting to subvert elections. But if that’s what we must do, we are committed to rising to the occasion,” he said.

“I wish I could tell you we’re going to be able to stop all interference, but that wouldn’t be realistic” isn’t just a carefully hedged pledge to do better; it’s a tacit admission that the effort to do better may well be undermined by a system of algorithms and processes that the company doesn’t fully understand or control at scale. Add to this Facebook’s mission as a business — drive user growth; drive user engagement; monetize that growth and engagement; innovate in a ferociously competitive industry; oh, and uphold ideals of community and free speech — and you have a balance that’s seemingly impossible to maintain.

Facebook’s power and influence are vast, and the past year has shown that true understanding of the company’s reach and application is difficult; as CJR’s Pete Vernon wrote this week, “What other CEO can claim, with a straight face, the power to ‘proactively…strengthen the democratic process?’” But perhaps “power” is the wrong word to describe Zuckerberg’s — and other tech moguls’ — position. In reality, it feels more like a responsibility. At the New York Times, Kevin Roose described it as Facebook’s Frankenstein problem — the company created a monster it can’t control. And in terms of responsibility, the metaphor is almost too perfect. After all, people always forget that Dr. Frankenstein was the creator, not the monster.


* * *

This will form the foundation:

(Phys.org)—Researchers have built a new type of “neuron transistor”—a transistor that behaves like a neuron in a living brain. These devices could form the building blocks of neuromorphic hardware that may offer unprecedented computational capabilities, such as learning and adaptation.

The researchers, S. G. Hu and coauthors at the University of Electronic Science and Technology of China and Nanyang Technological University in Singapore, have published a paper on the neuron transistor in a recent issue of Nanotechnology.

In order for a transistor to behave like a biological neuron, it must be capable of implementing neuron-like functions—in particular, weighted summation and threshold functions. These refer to a biological neuron’s ability to receive weighted input signals from many other neurons, and then to sum the input values and compare them to a threshold value to determine whether or not to fire. The human brain has tens of billions of neurons, and they are constantly performing weighted summation and threshold functions many times per second that together control all of our thoughts and actions.
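In code, the behavior described above is simple to state. Here is a minimal Python sketch of a weighted-summation-and-threshold neuron; the inputs, weights, and threshold values are illustrative, not parameters measured from any device or from the brain.

```python
# Minimal sketch of the two neuron-like functions described above:
# weighted summation of inputs, followed by a threshold comparison.
# All numbers are illustrative.

def neuron_fires(inputs, weights, threshold):
    """Sum the weighted inputs and fire only if the sum clears the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum >= threshold

# Three input signals from upstream neurons, each with its own weight.
inputs = [1.0, 0.0, 1.0]
weights = [0.6, 0.9, 0.3]

print(neuron_fires(inputs, weights, threshold=0.8))  # True: 0.6 + 0.3 = 0.9
print(neuron_fires(inputs, weights, threshold=1.0))  # False: 0.9 < 1.0
```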

In the new study, the researchers constructed a neuron transistor that acts like a single neuron, capable of weighted summation and threshold functions. Instead of being made of silicon like conventional transistors, the neuron transistor is made of a two-dimensional flake of molybdenum disulfide (MoS2), which belongs to a new class of semiconductors called transition metal dichalcogenides.

To demonstrate the neuron transistor’s neuron-like behavior, the researchers showed that it can be controlled by either one gate or two gates simultaneously. In the latter case, the neuron transistor implements a summation function. As a demonstration, the researchers showed that the neuron transistor can perform a counting task analogous to moving the beads in a two-bead abacus, along with other logic functions.

One of the advantages of the neuron transistor is its operating speed. Although other neuron transistors have already been built, they typically operate at frequencies of less than or equal to 0.05 Hz, which is much lower than the average firing rate of biological neurons of about 5 Hz. The new neuron transistor works in a wide frequency range of 0.01 to 15 Hz, which the researchers expect will offer advantages for developing neuromorphic hardware.

In the future, the researchers hope to add more control gates to the neuron transistor, creating a more realistic model of a biological neuron with its many inputs. In addition, the researchers hope to integrate neuron transistors with memristors (which are considered to be the most suitable device for implementing synapses) to construct neuromorphic systems that can work in a similar way to the brain.

 

* * *

BEIJING — What worries you about the coming world of artificial intelligence?

Too often the answer to this question resembles the plot of a sci-fi thriller. People worry that developments in A.I. will bring about the “singularity” — that point in history when A.I. surpasses human intelligence, leading to an unimaginable revolution in human affairs. Or they wonder whether instead of our controlling artificial intelligence, it will control us, turning us, in effect, into cyborgs.

These are interesting issues to contemplate, but they are not pressing. They concern situations that may not arise for hundreds of years, if ever. At the moment, there is no known path from our best A.I. tools (like the Google computer program that recently beat the world’s best player of the game of Go) to “general” A.I. — self-aware computer programs that can engage in common-sense reasoning, attain knowledge in multiple domains, feel, express and understand emotions and so on.

This doesn’t mean we have nothing to worry about. On the contrary, the A.I. products that now exist are improving faster than most people realize and promise to radically transform our world, not always for the better. They are only tools, not a competing form of intelligence. But they will reshape what work means and how wealth is created, leading to unprecedented economic inequalities and even altering the global balance of power.

It is imperative that we turn our attention to these imminent challenges.

What is artificial intelligence today? Roughly speaking, it’s technology that takes in huge amounts of information from a specific domain (say, loan repayment histories) and uses it to make a decision in a specific case (whether to give an individual a loan) in the service of a specified goal (maximizing profits for the lender). Think of a spreadsheet on steroids, trained on big data. These tools can outperform human beings at a given task.
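To make that description concrete, here is a toy sketch in Python of such a system; the repayment-history data, feature names, and approval cutoff are all invented for illustration, and real lending models are vastly larger.

```python
# Toy version of the pattern described above: learn from a specific domain
# (loan repayment histories) to decide a specific case (approve or decline)
# in service of a specified goal. All data is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Features per past borrower: [income ($1000s), late payments, debt-to-income ratio]
X = [
    [45, 0, 0.20], [30, 4, 0.70], [80, 1, 0.35],
    [25, 6, 0.90], [60, 0, 0.10], [38, 3, 0.65],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = repaid, 0 = defaulted

model = LogisticRegression().fit(X, y)

applicant = [[50, 1, 0.30]]  # a new case, described by the same features
prob_repay = model.predict_proba(applicant)[0][1]
print("Approve" if prob_repay > 0.5 else "Decline", f"(p = {prob_repay:.2f})")
```

A spreadsheet on steroids, in other words: nothing in the program understands loans, but trained on enough data it can outperform a human at this one narrow task.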

This kind of A.I. is spreading to thousands of domains (not just loans), and as it does, it will eliminate many jobs. Bank tellers, customer service representatives, telemarketers, stock and bond traders, even paralegals and radiologists will gradually be replaced by such software. Over time this technology will come to control semiautonomous and autonomous hardware like self-driving cars and robots, displacing factory workers, construction workers, drivers, delivery workers and many others.

Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too.

This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)

We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?

Part of the answer will involve educating or retraining people in tasks A.I. tools aren’t good at. Artificial intelligence is poorly suited for jobs involving creativity, planning and “cross-domain” thinking — for example, the work of a trial lawyer. But these skills are typically required by high-paying jobs that may be hard to retrain displaced workers to do. More promising are lower-paying jobs involving the “people skills” that A.I. lacks: social workers, bartenders, concierges — professions requiring nuanced human interaction. But here, too, there is a problem: How many bartenders does a society really need?

The solution to the problem of mass unemployment, I suspect, will involve “service jobs of love.” These are jobs that A.I. cannot do, that society needs and that give people a sense of purpose. Examples include accompanying an older person to visit a doctor, mentoring at an orphanage and serving as a sponsor at Alcoholics Anonymous — or, potentially soon, Virtual Reality Anonymous (for those addicted to their parallel lives in computer-generated simulations). The volunteer service jobs of today, in other words, may turn into the real jobs of the future.

Other volunteer jobs may be higher-paying and professional, such as compassionate medical service providers who serve as the “human interface” for A.I. programs that diagnose cancer. In all cases, people will be able to choose to work fewer hours than they do now.

Who will pay for these jobs? Here is where the enormous wealth concentrated in relatively few hands comes in. It strikes me as unavoidable that large chunks of the money created by A.I. will have to be transferred to those whose jobs have been displaced. This seems feasible only through Keynesian policies of increased government spending, presumably raised through taxation on wealthy companies.

As for what form that social welfare would take, I would argue for a conditional universal basic income: welfare offered to those who have a financial need, on the condition they either show an effort to receive training that would make them employable or commit to a certain number of hours of “service of love” voluntarism.

To fund this, tax rates will have to be high. The government will not only have to subsidize most people’s lives and work; it will also have to compensate for the loss of individual tax revenue previously collected from employed individuals.

This leads to the final and perhaps most consequential challenge of A.I. The Keynesian approach I have sketched out may be feasible in the United States and China, which will have enough successful A.I. businesses to fund welfare initiatives via taxes. But what about other countries?

They face two insurmountable problems. First, most of the money being made from artificial intelligence will go to the United States and China. A.I. is an industry in which strength begets strength: The more data you have, the better your product; the better your product, the more data you can collect; the more data you can collect, the more talent you can attract; the more talent you can attract, the better your product. It’s a virtuous circle, and the United States and China have already amassed the talent, market share and data to set it in motion.

For example, the Chinese speech-recognition company iFlytek and several Chinese face-recognition companies such as Megvii and SenseTime have become industry leaders, as measured by market capitalization. The United States is spearheading the development of autonomous vehicles, led by companies like Google, Tesla and Uber. As for the consumer internet market, seven American or Chinese companies — Google, Facebook, Microsoft, Amazon, Baidu, Alibaba and Tencent — are making extensive use of A.I. and expanding operations to other countries, essentially owning those A.I. markets. It seems American businesses will dominate in developed markets and some developing markets, while Chinese companies will win in most developing markets.

The other challenge for many countries that are not China or the United States is that their populations are increasing, especially in the developing world. While a large, growing population can be an economic asset (as in China and India in recent decades), in the age of A.I. it will be an economic liability because it will comprise mostly displaced workers, not productive ones.

So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances.

One way or another, we are going to have to start thinking about how to minimize the looming A.I.-fueled gap between the haves and the have-nots, both within and between nations. Or to put the matter more optimistically: A.I. is presenting us with an opportunity to rethink economic inequality on a global scale. These challenges are too far-ranging in their effects for any nation to isolate itself from the rest of the world.

* * *

A disruption leading to better things? Or the beginning of the end?

Predicting the future is always fraught with problems and, mostly, turns out to be wrong.

Science fiction, which I have always enjoyed reading, has a pretty good track record, as good as anything else, in identifying the future. A number of writers have dealt with robots: Asimov and Herbert. The first is quite positive; the latter is not. Film has seen the ‘Terminator’ series, again dystopian.

However, I think an examination of history demonstrates that, in the ‘short run’, any new disruptive technology is painful to a group [the displaced/disrupted group] whose livelihoods are shattered.

Over time, the new technology is a positive.

Looking at the list of jobs that will be disrupted/displaced/replaced, that pain is going to be widespread. That in and of itself is a little different. Is it enough of a ‘difference’ to be material? Not sure.

In the ‘long run’, the important question is whether robots are run on a software program or whether they run on a true artificial intelligence (“AI”). The difference is material. An AI robot will have 150 times your intelligence. Is something that much more intelligent than you going to be subservient to you? I doubt it… when was the last time you took orders from your goldfish?

* * *

A few weeks ago, I wrote a column that outlined the worries of big thinkers such as Stephen Hawking and Andrew Yang who are predicting a wave of job destruction caused by automation, robots and artificial intelligence.

Michael Mandel begs to differ. Mandel is chief economic strategist at the Progressive Policy Institute. He and Bret Swanson, president of Entropy Economics LLC, just completed a study for the Tech CEO Council that foresees a rather bright economic future brought about by technological innovation.

I recently interviewed Mandel and he made a compelling argument that the application of technology to the physical economy will, in time, produce more jobs, higher wages, greater productivity and all kinds of as-yet-unimagined business activity. The two doomsday narratives that are currently circulating — that robots will steal jobs and that productivity will lag more or less permanently — are as wrong as the 19th century fears that electrification would put people out of work, Mandel said.

 

His examples:

Mandel pointed out that this is already happening in two areas. The first is fracking. Technological innovations have enabled extraction companies to access heretofore unreachable energy reserves and, though this progress comes with a controversial environmental cost, there is no question fracking has created good-paying jobs and enhanced economic activity.

The second is e-commerce. Beyond the digital component, e-commerce is about getting physical products shipped and delivered and the result is jobs for a lot more folks than just those who write computer code. Mandel points to Kentucky, where the big rise in e-commerce employment is transforming the state’s economy. It is an early example, he said, that the blessings of technology are “breaking out of the digital ghetto of the coastal states.”

These are hardly the examples that would really address the issues raised by the naysayers. The trouble with the pro-lobby is that they don’t really know where, what, or how any improvements may take place.

Second, AI, and the robotic utilisation of AI, is different from electricity and other past technologies. As such, the potential downside of AI and robots cannot be fully foreseen either.

* * *

It was just a friendly little argument about the fate of humanity. Demis Hassabis, a leading creator of advanced artificial intelligence, was chatting with Elon Musk, a leading doomsayer, about the perils of artificial intelligence.

They are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.

This did nothing to soothe Musk’s anxieties (even though he says there are scenarios where A.I. wouldn’t follow).

An unassuming but competitive 40-year-old, Hassabis is regarded as the Merlin who will likely help conjure our A.I. children. The field of A.I. is rapidly developing but still far from the powerful, self-evolving software that haunts Musk. Facebook uses A.I. for targeted advertising, photo tagging, and curated news feeds. Microsoft and Apple use A.I. to power their digital assistants, Cortana and Siri. Google’s search engine from the beginning has been dependent on A.I. All of these small advances are part of the chase to eventually create flexible, self-teaching A.I. that will mirror human learning.

Some in Silicon Valley were intrigued to learn that Hassabis, a skilled chess player and former video-game designer, once came up with a game called Evil Genius, featuring a malevolent scientist who creates a doomsday device to achieve world domination. Peter Thiel, the billionaire venture capitalist and Donald Trump adviser who co-founded PayPal with Musk and others—and who in December helped gather skeptical Silicon Valley titans, including Musk, for a meeting with the president-elect—told me a story about an investor in DeepMind who joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race.

Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.”

Before DeepMind was gobbled up by Google, in 2014, as part of its A.I. shopping spree, Musk had been an investor in the company. He told me that his involvement was not about a return on his money but rather to keep a wary eye on the arc of A.I.: “It gave me more visibility into the rate at which things were improving, and I think they’re really improving at an accelerating rate, far faster than people realize. Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.”

In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

At the World Government Summit in Dubai, in February, Musk again cued the scary organ music, evoking the plots of classic horror stories when he noted that “sometimes what will happen is a scientist will get so engrossed in their work that they don’t really realize the ramifications of what they’re doing.” He said that the way to escape human obsolescence, in the end, may be by “having some sort of merger of biological intelligence and machine intelligence.” This Vulcan mind-meld could involve something called a neural lace—an injectable mesh that would literally hardwire your brain to communicate directly with computers. “We’re already cyborgs,” Musk told me in February. “Your phone and your computer are extensions of you, but the interface is through finger movements or speech, which are very slow.” With a neural lace inside your skull you would flash data from your brain, wirelessly, to your digital devices or to virtually unlimited computing power in the cloud. “For a meaningful partial-brain interface, I think we’re roughly four or five years away.”

Musk’s alarming views on the dangers of A.I. first went viral after he spoke at M.I.T. in 2014—speculating (pre-Trump) that A.I. was probably humanity’s “biggest existential threat.” He added that he was increasingly inclined to think there should be some national or international regulatory oversight—anathema to Silicon Valley—“to make sure that we don’t do something very foolish.” He went on: “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.” Some A.I. engineers found Musk’s theatricality so absurdly amusing that they began echoing it. When they would return to the lab after a break, they’d say, “O.K., let’s get back to work summoning.”

Musk wasn’t laughing. “Elon’s crusade” (as one of his friends and fellow tech big shots calls it) against unfettered A.I. had begun.

Elon Musk smiled when I mentioned to him that he comes across as something of an Ayn Rand-ian hero. “I have heard that before,” he said in his slight South African accent. “She obviously has a fairly extreme set of views, but she has some good points in there.”

But Ayn Rand would do some re-writes on Elon Musk. She would make his eyes gray and his face more gaunt. She would refashion his public demeanor to be less droll, and she would not countenance his goofy giggle. She would certainly get rid of all his nonsense about the “collective” good. She would find great material in the 45-year-old’s complicated personal life: his first wife, the fantasy writer Justine Musk, and their five sons (one set of twins, one of triplets), and his much younger second wife, the British actress Talulah Riley, who played the boring Bennet sister in the Keira Knightley version of Pride & Prejudice. Riley and Musk were married, divorced, and then re-married. They are now divorced again. Last fall, Musk tweeted that Talulah “does a great job playing a deadly sexbot” on HBO’s Westworld, adding a smiley-face emoticon. It’s hard for mere mortal women to maintain a relationship with someone as insanely obsessed with work as Musk.

“How much time does a woman want a week?” he asked Ashlee Vance. “Maybe ten hours? That’s kind of the minimum?”

Mostly, Rand would savor Musk, a hyper-logical, risk-loving industrialist. He enjoys costume parties, wing-walking, and Japanese steampunk extravaganzas. Robert Downey Jr. used Musk as a model for Iron Man. Marc Mathieu, the chief marketing officer of Samsung USA, who has gone fly-fishing in Iceland with Musk, calls him “a cross between Steve Jobs and Jules Verne.” As they danced at their wedding reception, Justine later recalled, Musk informed her, “I am the alpha in this relationship.”

In a tech universe full of skinny guys in hoodies—whipping up bots that will chat with you and apps that can study a photo of a dog and tell you what breed it is—Musk is a throwback to Henry Ford and Hank Rearden. In Atlas Shrugged, Rearden gives his wife a bracelet made from the first batch of his revolutionary metal, as though it were made of diamonds. Musk has a chunk of one of his rockets mounted on the wall of his Bel Air house, like a work of art.

Musk shoots for the moon — literally. He launches cost-efficient rockets into space and hopes to eventually inhabit the Red Planet. In February he announced plans to send two space tourists on a flight around the moon as early as next year. He creates sleek batteries that could lead to a world powered by cheap solar energy. He forges gleaming steel into sensuous Tesla electric cars with such elegant lines that even the nitpicking Steve Jobs would have been hard-pressed to find fault. He wants to save time as well as humanity: he dreamed up the Hyperloop, an electromagnetic bullet train in a tube, which may one day whoosh travelers between L.A. and San Francisco at 700 miles per hour. When Musk visited Secretary of Defense Ashton Carter last summer, he mischievously tweeted that he was at the Pentagon to talk about designing a Tony Stark-style “flying metal suit.” Sitting in traffic in L.A. in December, getting bored and frustrated, he tweeted about creating the Boring Company to dig tunnels under the city to rescue the populace from “soul-destroying traffic.” By January, according to Bloomberg Businessweek, Musk had assigned a senior SpaceX engineer to oversee the plan and had started digging his first test hole. His sometimes quixotic efforts to save the world have inspired a parody Twitter account, “Bored Elon Musk,” where a faux Musk spouts off wacky ideas such as “Oxford commas as a service” and “bunches of bananas genetically engineered” so that the bananas ripen one at a time.

Of course, big dreamers have big stumbles. Some SpaceX rockets have blown up, and last June a driver was killed in a self-driving Tesla whose sensors failed to notice the tractor-trailer crossing its path. (An investigation by the National Highway Traffic Safety Administration found that Tesla’s Autopilot system was not to blame.)

Musk is stoic about setbacks but all too conscious of nightmare scenarios. His views reflect a dictum from Atlas Shrugged: “Man has the power to act as his own destroyer—and that is the way he has acted through most of his history.” As he told me, “we are the first species capable of self-annihilation.”

Here’s the nagging thought you can’t escape as you drive around from glass box to glass box in Silicon Valley: the Lords of the Cloud love to yammer about turning the world into a better place as they churn out new algorithms, apps, and inventions that, it is claimed, will make our lives easier, healthier, funnier, closer, cooler, longer, and kinder to the planet. And yet there’s a creepy feeling underneath it all, a sense that we’re the mice in their experiments, that they regard us humans as Betamaxes or eight-tracks, old technology that will soon be discarded so that they can get on to enjoying their sleek new world. Many people there have accepted this future: we’ll live to be 150 years old, but we’ll have machine overlords.

Maybe we already have overlords. As Musk slyly told Recode’s annual Code Conference last year in Rancho Palos Verdes, California, we could already be playthings in a simulated-reality world run by an advanced civilization. Reportedly, two Silicon Valley billionaires are working on an algorithm to break us out of the Matrix.

Among the engineers lured by the sweetness of solving the next problem, the prevailing attitude is that empires fall, societies change, and we are marching toward the inevitable phase ahead. They argue not about “whether” but rather about “how close” we are to replicating, and improving on, ourselves. Sam Altman, the 31-year-old president of Y Combinator, the Valley’s top start-up accelerator, believes humanity is on the brink of such invention.

“The hard part of standing on an exponential curve is: when you look backwards, it looks flat, and when you look forward, it looks vertical,” he told me. “And it’s very hard to calibrate how much you are moving because it always looks the same.”

You’d think that anytime Musk, Stephen Hawking, and Bill Gates are all raising the same warning about A.I.—as all of them are—it would be a 10-alarm fire. But, for a long time, the fog of fatalism over the Bay Area was thick. Musk’s crusade was viewed as Sisyphean at best and Luddite at worst. The paradox is this: Many tech oligarchs see everything they are doing to help us, and all their benevolent manifestos, as streetlamps on the road to a future where, as Steve Wozniak says, humans are the family pets.

But Musk is not going gently. He plans on fighting this with every fiber of his carbon-based being. Musk and Altman have founded OpenAI, a billion-dollar nonprofit company, to work for safer artificial intelligence. I sat down with the two men when their new venture had only a handful of young engineers and a makeshift office, an apartment in San Francisco’s Mission District that belongs to Greg Brockman, OpenAI’s 28-year-old co-founder and chief technology officer. When I went back recently, to talk with Brockman and Ilya Sutskever, the company’s 30-year-old research director (and also a co-founder), OpenAI had moved into an airy office nearby with a robot, the usual complement of snacks, and 50 full-time employees. (Another 10 to 30 are on the way.)

Altman, in gray T-shirt and jeans, is all wiry, pale intensity. Musk’s fervor is masked by his diffident manner and rosy countenance. His eyes are green or blue, depending on the light, and his lips are plum red. He has an aura of command while retaining a trace of the gawky, lonely South African teenager who immigrated to Canada by himself at the age of 17.

In Silicon Valley, a lunchtime meeting does not necessarily involve that mundane fuel known as food. Younger coders are too absorbed in algorithms to linger over meals. Some just chug Soylent. Older ones are so obsessed with immortality that sometimes they’re just washing down health pills with almond milk.

At first blush, OpenAI seemed like a bantamweight vanity project, a bunch of brainy kids in a walkup apartment taking on the multi-billion-dollar efforts at Google, Facebook, and other companies which employ the world’s leading A.I. experts. But then, playing a well-heeled David to Goliath is Musk’s specialty, and he always does it with style—and some useful sensationalism.

Let others in Silicon Valley focus on their I.P.O. price and ridding San Francisco of what they regard as its unsightly homeless population. Musk has larger aims, like ending global warming and dying on Mars (just not, he says, on impact).

Musk began to see man’s fate in the galaxy as his personal obligation three decades ago, when as a teenager he had a full-blown existential crisis. Musk told me that The Hitchhiker’s Guide to the Galaxy, by Douglas Adams, was a turning point for him. The book is about aliens destroying the earth to make way for a hyperspace highway and features Marvin the Paranoid Android and a supercomputer designed to answer all the mysteries of the universe. (Musk slipped at least one reference to the book into the software of the Tesla Model S.) As a teenager, Vance writes in his biography, Musk formulated a mission statement for himself: “The only thing that makes sense to do is strive for greater collective enlightenment.”

OpenAI got under way with a vague mandate—which isn’t surprising, given that people in the field are still arguing over what form A.I. will take, what it will be able to do, and what can be done about it. So far, public policy on A.I. is strangely undetermined and software is largely unregulated. The Federal Aviation Administration oversees drones, the Securities and Exchange Commission oversees automated financial trading, and the Department of Transportation has begun to oversee self-driving cars.

Musk believes that it is better to try to get super-A.I. first and distribute the technology to the world than to allow the algorithms to be concealed and concentrated in the hands of tech or government elites—even when the tech elites happen to be his own friends, people such as Google founders Larry Page and Sergey Brin. “I’ve had many conversations with Larry about A.I. and robotics—many, many,” Musk told me. “And some of them have gotten quite heated. You know, I think it’s not just Larry, but there are many futurists who feel a certain inevitability or fatalism about robots, where we’d have some sort of peripheral role. The phrase used is ‘We are the biological boot-loader for digital super-intelligence.’ ” (A boot loader is the small program that launches the operating system when you first turn on your computer.) “Matter can’t organize itself into a chip,” Musk explained. “But it can organize itself into a biological entity that gets increasingly sophisticated and ultimately can create the chip.”

Musk has no intention of being a boot loader. Page and Brin see themselves as forces for good, but Musk says the issue goes far beyond the motivations of a handful of Silicon Valley executives.

After the so-called A.I. winter—the broad, commercial failure in the late 80s of an early A.I. technology that wasn’t up to snuff—artificial intelligence got a reputation as snake oil. Now it’s the hot thing again in this go-go era in the Valley. Greg Brockman, of OpenAI, believes the next decade will be all about A.I., with everyone throwing money at the small number of “wizards” who know the A.I. “incantations.” Guys who got rich writing code to solve banal problems like how to pay a stranger for stuff online now contemplate a vertiginous world where they are the creators of a new reality and perhaps a new species.

Microsoft’s Jaron Lanier, the dreadlocked computer scientist known as the father of virtual reality, gave me his view as to why the digerati find the “science-fiction fantasy” of A.I. so tantalizing: “It’s saying, ‘Oh, you digital techy people, you’re like gods; you’re creating life; you’re transforming reality.’ There’s a tremendous narcissism in it that we’re the people who can do it. No one else. The Pope can’t do it. The president can’t do it. No one else can do it. We are the masters of it . . . . The software we’re building is our immortality.” This kind of God-like ambition isn’t new, he adds. “I read about it once in a story about a golden calf.” He shook his head. “Don’t get high on your own supply, you know?”

Google has gobbled up almost every interesting robotics and machine-learning company over the last few years. It bought DeepMind for $650 million, reportedly beating out Facebook, and built the Google Brain team to work on A.I. It hired Geoffrey Hinton, a British pioneer in artificial neural networks; and Ray Kurzweil, the eccentric futurist who has predicted that we are only 28 years away from the Rapture-like “Singularity”—the moment when the spiraling capabilities of self-improving artificial super-intelligence will far exceed human intelligence, and human beings will merge with A.I. to create the “god-like” hybrid beings of the future.

It’s in Larry Page’s blood and Google’s DNA to believe that A.I. is the company’s inevitable destiny—think of that destiny as you will. (“If evil A.I. lights up,” Ashlee Vance told me, “it will light up first at Google.”) If Google could get computers to master search when search was the most important problem in the world, then presumably it can get computers to do everything else. In March of last year, Silicon Valley gulped when a fabled South Korean player of the world’s most complex board game, Go, was beaten in Seoul by DeepMind’s AlphaGo. Hassabis, who has said he is running an Apollo program for A.I., called it a “historic moment” and admitted that even he was surprised it happened so quickly. “I’ve always hoped that A.I. could help us discover completely new ideas in complex scientific domains,” Hassabis told me in February. “This might be one of the first glimpses of that kind of creativity.” More recently, AlphaGo played 60 games online against top Go players in China, Japan, and Korea—and emerged with a record of 60–0. In January, in another shock to the system, an A.I. program showed that it could bluff. Libratus, built by two Carnegie Mellon researchers, was able to crush top poker players at Texas Hold ‘Em.

Peter Thiel told me about a friend of his who says that the only reason people tolerate Silicon Valley is that no one there seems to be having any sex or any fun. But there are reports of sex robots on the way that come with apps that can control their moods and even have a pulse. The Valley is skittish when it comes to female sex robots—an obsession in Japan—because of its notoriously male-dominated culture and its much-publicized issues with sexual harassment and discrimination. But when I asked Musk about this, he replied matter-of-factly, “Sex robots? I think those are quite likely.”

Whether sincere or a shrewd P.R. move, Hassabis made it a condition of the Google acquisition that Google and DeepMind establish a joint A.I. ethics board. At the time, three years ago, forming an ethics board was seen as a precocious move, as if to imply that Hassabis was on the verge of achieving true A.I. Now, not so much. Last June, a researcher at DeepMind co-authored a paper outlining a way to design a “big red button” that could be used as a kill switch to stop A.I. from inflicting harm.

Google executives say Larry Page’s view on A.I. is shaped by his frustration about how many systems are sub-optimal—from systems that book trips to systems that price crops. He believes that A.I. will improve people’s lives and has said that, when human needs are more easily met, people will “have more time with their family or to pursue their own interests.” Especially when a robot throws them out of work.

Musk is a friend of Page’s. He attended Page’s wedding and sometimes stays at his house when he’s in the San Francisco area. “It’s not worth having a house for one or two nights a week,” the 99th-richest man in the world explained to me. At times, Musk has expressed concern that Page may be naïve about how A.I. could play out. If Page is inclined toward the philosophy that machines are only as good or bad as the people creating them, Musk firmly disagrees. Some at Google—perhaps annoyed that Musk is, in essence, pointing a finger at them for rushing ahead willy-nilly—dismiss his dystopic take as a cinematic cliché. Eric Schmidt, the executive chairman of Google’s parent company, put it this way: “Robots are invented. Countries arm them. An evil dictator turns the robots on humans, and all humans will be killed. Sounds like a movie to me.”

Some in Silicon Valley argue that Musk is interested less in saving the world than in buffing his brand, and that he is exploiting a deeply rooted conflict: the one between man and machine, and our fear that the creation will turn against us. They gripe that his epic good-versus-evil story line is about luring talent at discount rates and incubating his own A.I. software for cars and rockets. It’s certainly true that the Bay Area has always had a healthy respect for making a buck. As Sam Spade said in The Maltese Falcon, “Most things in San Francisco can be bought, or taken.”

Musk is without doubt a dazzling salesman. Who better than a guardian of human welfare to sell you your new, self-driving Tesla? Andrew Ng—the chief scientist at Baidu, known as China’s Google—based in Sunnyvale, California, writes off Musk’s Manichaean throwdown as “marketing genius.” “At the height of the recession, he persuaded the U.S. government to help him build an electric sports car,” Ng recalled, incredulous. The Stanford professor is married to a robotics expert, issued a robot-themed engagement announcement, and keeps a “Trust the Robot” black jacket hanging on the back of his chair. He thinks people who worry about A.I. going rogue are distracted by “phantoms,” and regards getting alarmed now as akin to worrying about overpopulation on Mars before we populate it. “And I think it’s fascinating,” he said about Musk in particular, “that in a rather short period of time he’s inserted himself into the conversation on A.I. I think he sees accurately that A.I. is going to create tremendous amounts of value.”

Although he once called Musk a “sci-fi version of P. T. Barnum,” Ashlee Vance thinks that Musk’s concern about A.I. is genuine, even if what he can actually do about it is unclear. “His wife, Talulah, told me they had late-night conversations about A.I. at home,” Vance noted. “Elon is brutally logical. The way he tackles everything is like moving chess pieces around. When he plays this scenario out in his head, it doesn’t end well for people.”

Continued…

So ‘lawyers’ look safe for the moment.

 

* * *

In science fiction, the promise or threat of artificial intelligence is tied to humans’ relationship to conscious machines. Whether it’s Terminators or Cylons or servants like the “Star Trek” computer or the Star Wars droids, machines warrant the name AI when they become sentient—or at least self-aware enough to act with expertise, not to mention volition and surprise.

What to make, then, of the explosion of supposed-AI in media, industry, and technology? In some cases, the AI designation might be warranted, even if with some aspiration. Autonomous vehicles, for example, don’t quite measure up to R2D2 (or Hal), but they do deploy a combination of sensors, data, and computation to perform the complex work of driving. But in most cases, the systems making claims to artificial intelligence aren’t sentient, self-aware, volitional, or even surprising. They’re just software.

* * *

Deflationary examples of AI are everywhere. Google funds a system to identify toxic comments online, a machine learning algorithm called Perspective. But it turns out that simple typos can fool it. Artificial intelligence is cited as a barrier to strengthen an American border wall, but the “barrier” turns out to be little more than sensor networks and automated kiosks with potentially dubious built-in profiling. Similarly, a “Tennis Club AI” turns out to be just a better line sensor using off-the-shelf computer vision. Facebook announces an AI to detect suicidal thoughts posted to its platform, but closer inspection reveals that the “AI detection” in question is little more than a pattern-matching filter that flags posts for human community managers.
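For a sense of how little “AI” such a filter may involve, here is a hypothetical sketch: a few hand-written patterns and a routing rule, with no learning anywhere. The phrases and function names are invented, not drawn from any real system.

```python
# Hypothetical sketch of a pattern-matching filter of the kind described
# above: it flags posts for human reviewers by keyword matching alone.
# The patterns and names are invented.
import re

FLAG_PATTERNS = [
    re.compile(r"no reason to go on", re.IGNORECASE),
    re.compile(r"want to disappear", re.IGNORECASE),
]

def needs_human_review(post: str) -> bool:
    """Route the post to community managers if any pattern matches."""
    return any(p.search(post) for p in FLAG_PATTERNS)

print(needs_human_review("Some days there's no reason to go on."))  # True
print(needs_human_review("Great game last night!"))                 # False
```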

AI’s miracles are celebrated outside the tech sector, too. Coca-Cola reportedly wants to use “AI bots” to “crank out ads” instead of humans. What that means remains mysterious. Similar efforts to generate AI music or to compose AI news stories seem promising at first blush—but then, AI editors trawling Wikipedia to correct typos and links end up stuck in infinite loops with one another. And according to human-bot interaction consultancy Botanalytics (no, really), 40 percent of interlocutors give up on conversational bots after one interaction. Maybe that’s because bots are mostly glorified phone trees, or else clever, automated Mad Libs.

AI has also become a fashion for corporate strategy. The Bloomberg Intelligence economist Michael McDonough tracked mentions of “artificial intelligence” in earnings call transcripts, noting a huge uptick in the last two years. Companies boast about undefined AI acquisitions. The 2017 Deloitte Global Human Capital Trends report claims that AI has “revolutionized” the way people work and live, but never cites specifics. Nevertheless, coverage of the report concludes that artificial intelligence is forcing corporate leaders to “reconsider some of their core structures.”

And both press and popular discourse sometimes inflate simple features into AI miracles. Last month, for example, Twitter announced service updates to help protect users from low-quality and abusive tweets. The changes amounted to simple refinements to hide posts from blocked, muted, and new accounts, along with other, undescribed content filters. Nevertheless, some takes on these changes—which amount to little more than additional clauses in database queries—conclude that Twitter is “constantly working on making its AI smarter.”

* * *

I asked my Georgia Tech colleague, the artificial intelligence researcher Charles Isbell, to weigh in on what “artificial intelligence” should mean. His first answer: “Making computers act like they do in the movies.” That might sound glib, but it underscores AI’s intrinsic relationship to theories of cognition and sentience. Commander Data poses questions about what qualities and capacities make a being conscious and moral—as do self-driving cars. A content filter that hides social media posts from accounts without profile pictures? Not so much. That’s just software.

Isbell suggests two features necessary before a system deserves the name AI. First, it must learn over time in response to changes in its environment. Fictional robots and cyborgs do this invisibly, by the magic of narrative abstraction. But even a simple machine-learning system like Netflix’s dynamic optimizer, which attempts to improve the quality of compressed video, takes data gathered initially from human viewers and uses it to train an algorithm to make future choices about video transmission.
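A minimal stand-in for that first criterion, assuming nothing about Netflix’s actual system: a program that picks among encoding settings and updates its estimates from (simulated) viewer ratings, so its choices improve as feedback accumulates.

```python
# Minimal sketch of Isbell's first criterion: learning over time from the
# environment. An epsilon-greedy chooser picks a video encoding setting and
# refines its estimate of each setting from simulated viewer ratings.
# This is an invented stand-in, not Netflix's optimizer.
import random

settings = ["low_bitrate", "medium_bitrate", "high_bitrate"]
estimates = {s: 0.0 for s in settings}  # running average rating per setting
counts = {s: 0 for s in settings}

def simulated_viewer_rating(setting):
    """Stand-in for real viewer feedback; reward values are invented."""
    base = {"low_bitrate": 2.0, "medium_bitrate": 3.5, "high_bitrate": 3.0}
    return base[setting] + random.uniform(-0.5, 0.5)

for _ in range(1000):
    if random.random() < 0.1:   # occasionally explore a random setting
        choice = random.choice(settings)
    else:                       # otherwise exploit the best current estimate
        choice = max(settings, key=estimates.get)
    rating = simulated_viewer_rating(choice)
    counts[choice] += 1
    estimates[choice] += (rating - estimates[choice]) / counts[choice]

print(max(settings, key=estimates.get))  # usually "medium_bitrate"
```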

Isbell’s second feature of true AI: what it learns to do should be interesting enough that it takes humans some effort to learn. It’s a distinction that separates artificial intelligence from mere computational automation. A robot that replaces human workers to assemble automobiles isn’t an artificial intelligence so much as a machine programmed to automate repetitive work. For Isbell, “true” AI requires that the computer program or machine exhibit self-governance, surprise, and novelty.

Griping about AI’s deflated aspirations might seem unimportant. If sensor-driven, data-backed machine learning systems are poised to grow, perhaps people would do well to track the evolution of those technologies. But previous experience suggests that computation’s ascendency demands scrutiny. I’ve previously argued that the word “algorithm” has become a cultural fetish, the secular, technical equivalent of invoking God. To use the term indiscriminately exalts ordinary—and flawed—software services as false idols. AI is no different. As the bot author Allison Parrish puts it, “whenever someone says ‘AI’ what they’re really talking about is ‘a computer program someone wrote.’”

Writing at the MIT Technology Review, the Stanford computer scientist Jerry Kaplan makes a similar argument: AI is a fable “cobbled together from a grab bag of disparate tools and techniques.” The AI research community seems to agree, calling their discipline “fragmented and largely uncoordinated.” Given the incoherence of AI in practice, Kaplan suggests “anthropic computing” as an alternative—programs meant to behave like or interact with human beings. For Kaplan, the mythical nature of AI, including the baggage of its adoption in novels, film, and television, makes the term a bogeyman to abandon more than a future to desire.

* * *

Kaplan keeps good company—when the mathematician Alan Turing accidentally invented the idea of machine intelligence almost 70 years ago, he proposed that machines would be intelligent when they could trick people into thinking they were human. At the time, in 1950, the idea seemed unlikely; even though Turing’s thought experiment wasn’t limited to computers, the machines of the day still took up entire rooms just to perform relatively simple calculations.

But today, computers trick people all the time. Not by successfully posing as humans, but by convincing them that they are sufficient alternatives to other tools of human effort. Twitter and Facebook and Google aren’t “better” town halls, neighborhood centers, libraries, or newspapers—they are different ones, run by computers, for better and for worse. The implications of these and other services must be addressed by understanding them as particular implementations of software in corporations, not as totems of otherworldly AI.

On that front, Kaplan could be right: abandoning the term might be the best way to exorcise its demonic grip on contemporary culture. But Isbell’s more traditional take—that AI is machinery that learns and then acts on that learning—also has merit. By protecting the exalted status of its science-fictional orthodoxy, AI can remind creators and users of an essential truth: today’s computer systems are nothing special. They are apparatuses made by people, running software made by people, full of the feats and flaws of both.
