robots



In science fiction, the promise or threat of artificial intelligence is tied to humans’ relationship to conscious machines. Whether it’s Terminators or Cylons or servants like the “Star Trek” computer or the Star Wars droids, machines warrant the name AI when they become sentient—or at least self-aware enough to act with expertise, not to mention volition and surprise.

What to make, then, of the explosion of supposed AI in media, industry, and technology? In some cases, the AI designation might be warranted, even if aspirationally. Autonomous vehicles, for example, don’t quite measure up to R2-D2 (or HAL), but they do deploy a combination of sensors, data, and computation to perform the complex work of driving. But in most cases, the systems making claims to artificial intelligence aren’t sentient, self-aware, volitional, or even surprising. They’re just software.

* * *

Deflationary examples of AI are everywhere. Google funds a system to identify toxic comments online, a machine learning algorithm called Perspective. But it turns out that simple typos can fool it. Artificial intelligence is cited as a barrier to strengthen an American border wall, but the “barrier” turns out to be little more than sensor networks and automated kiosks with potentially dubious built-in profiling. Similarly, a “Tennis Club AI” turns out to be just a better line sensor using off-the-shelf computer vision. Facebook announces an AI to detect suicidal thoughts posted to its platform, but closer inspection reveals that the “AI detection” in question is little more than a pattern-matching filter that flags posts for human community managers.
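
To make the deflation concrete, here is a minimal sketch of the kind of pattern-matching filter described above. It is purely illustrative, with an invented keyword list and function names; it is not Facebook’s actual system:

```python
import re

# Purely illustrative keyword patterns -- not Facebook's actual lexicon.
CONCERNING_PATTERNS = [
    r"\bwant to disappear\b",
    r"\bno reason to go on\b",
    r"\bend it all\b",
]

def flag_for_review(post_text: str) -> bool:
    """Flag a post for a human community manager if it matches any pattern.

    Nothing here learns, reasons, or understands context -- it is just
    pattern matching, which is the point.
    """
    return any(re.search(p, post_text, re.IGNORECASE) for p in CONCERNING_PATTERNS)

for post in ["Great game last night!", "Honestly I just want to disappear."]:
    print(flag_for_review(post), "-", post)
```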

AI’s miracles are celebrated outside the tech sector, too. Coca-Cola reportedly wants to use “AI bots” to “crank out ads” instead of humans. What that means remains mysterious. Similar efforts to generate AI music or to compose AI news stories seem promising at first blush—but then, AI editors trawling Wikipedia to correct typos and links end up stuck in infinite loops with one another. And according to human-bot interaction consultancy Botanalytics (no, really), 40 percent of interlocutors give up on conversational bots after one interaction. Maybe that’s because bots are mostly glorified phone trees, or else clever, automated Mad Libs.

AI has also become a fashion for corporate strategy. The Bloomberg Intelligence economist Michael McDonough tracked mentions of “artificial intelligence” in earnings call transcripts, noting a huge uptick in the last two years. Companies boast about undefined AI acquisitions. The 2017 Deloitte Global Human Capital Trends report claims that AI has “revolutionized” the way people work and live, but never cites specifics. Nevertheless, coverage of the report concludes that artificial intelligence is forcing corporate leaders to “reconsider some of their core structures.”

And both press and popular discourse sometimes inflate simple features into AI miracles. Last month, for example, Twitter announced service updates to help protect users from low-quality and abusive tweets. The changes amounted to simple refinements to hide posts from blocked, muted, and new accounts, along with other, undescribed content filters. Nevertheless, some takes on these changes—which amount to little more than additional clauses in database queries—conclude that Twitter is “constantly working on making its AI smarter.”
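
A rough sketch of what “additional clauses in database queries” can look like in practice. The schema, thresholds, and block list below are invented for illustration and are not Twitter’s actual implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, age_days INTEGER, has_avatar INTEGER);
CREATE TABLE tweets   (id INTEGER PRIMARY KEY, author_id INTEGER, body TEXT);
INSERT INTO accounts VALUES (1, 900, 1), (2, 2, 0);
INSERT INTO tweets   VALUES (10, 1, 'hello world'), (11, 2, 'spammy reply');
""")

blocked_or_muted = (2,)  # hypothetical per-user block/mute list

# The "smarter AI": a few extra WHERE clauses that hide posts from
# blocked, muted, and brand-new accounts.
rows = conn.execute(
    """
    SELECT t.body
    FROM tweets t
    JOIN accounts a ON a.id = t.author_id
    WHERE t.author_id NOT IN (?)
      AND a.age_days > 7
      AND a.has_avatar = 1
    """,
    blocked_or_muted,
).fetchall()

print(rows)  # [('hello world',)]
```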

* * *

I asked my Georgia Tech colleague, the artificial intelligence researcher Charles Isbell, to weigh in on what “artificial intelligence” should mean. His first answer: “Making computers act like they do in the movies.” That might sound glib, but it underscores AI’s intrinsic relationship to theories of cognition and sentience. Commander Data poses questions about what qualities and capacities make a being conscious and moral—as do self-driving cars. A content filter that hides social media posts from accounts without profile pictures? Not so much. That’s just software.

Isbell suggests two features necessary before a system deserves the name AI. First, it must learn over time in response to changes in its environment. Fictional robots and cyborgs do this invisibly, by the magic of narrative abstraction. But even a simple machine-learning system like Netflix’s dynamic optimizer, which attempts to improve the quality of compressed video, takes data gathered initially from human viewers and uses it to train an algorithm to make future choices about video transmission.
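
As a toy illustration of that first criterion, a system that updates its behavior from environmental feedback, consider a minimal epsilon-greedy learner choosing video bitrates from simulated viewer ratings. This is emphatically not Netflix’s optimizer; every number in it is invented:

```python
import random

bitrates = [1500, 3000, 6000]            # candidate bitrates in kbps (invented)
estimates = {b: 0.0 for b in bitrates}   # running estimate of viewer-rated quality
counts = {b: 0 for b in bitrates}

def simulated_viewer_rating(bitrate: int) -> float:
    """Pretend feedback: higher bitrate looks better, minus a rebuffering penalty."""
    quality = bitrate / 6000
    rebuffer_penalty = 0.4 if bitrate == 6000 and random.random() < 0.5 else 0.0
    return quality - rebuffer_penalty + random.gauss(0, 0.05)

for step in range(2000):
    if random.random() < 0.1:                          # explore occasionally
        choice = random.choice(bitrates)
    else:                                              # otherwise exploit what was learned
        choice = max(bitrates, key=lambda b: estimates[b])
    reward = simulated_viewer_rating(choice)
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]  # incremental mean

print({b: round(v, 3) for b, v in estimates.items()})
```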

Isbell’s second feature of true AI: what it learns to do should be interesting enough that it takes humans some effort to learn. It’s a distinction that separates artificial intelligence from mere computational automation. A robot that replaces human workers to assemble automobiles isn’t an artificial intelligence so much as a machine programmed to automate repetitive work. For Isbell, “true” AI requires that the computer program or machine exhibit self-governance, surprise, and novelty.

Griping about AI’s deflated aspirations might seem unimportant. If sensor-driven, data-backed machine learning systems are poised to grow, perhaps people would do well to track the evolution of those technologies. But previous experience suggests that computation’s ascendancy demands scrutiny. I’ve previously argued that the word “algorithm” has become a cultural fetish, the secular, technical equivalent of invoking God. To use the term indiscriminately exalts ordinary—and flawed—software services as false idols. AI is no different. As the bot author Allison Parrish puts it, “whenever someone says ‘AI’ what they’re really talking about is ‘a computer program someone wrote.’”

Writing at the MIT Technology Review, the Stanford computer scientist Jerry Kaplan makes a similar argument: AI is a fable “cobbled together from a grab bag of disparate tools and techniques.” The AI research community seems to agree, calling their discipline “fragmented and largely uncoordinated.” Given the incoherence of AI in practice, Kaplan suggests “anthropic computing” as an alternative—programs meant to behave like or interact with human beings. For Kaplan, the mythical nature of AI, including the baggage of its adoption in novels, film, and television, makes the term a bogeyman to abandon more than a future to desire.

* * *

Kaplan keeps good company—when the mathematician Alan Turing accidentally invented the idea of machine intelligence almost 70 years ago, he proposed that machines would be intelligent when they could trick people into thinking they were human. At the time, in 1950, the idea seemed unlikely; even though Turing’s thought experiment wasn’t limited to computers, the machines still took up entire rooms just to perform relatively simple calculations.

But today, computers trick people all the time. Not by successfully posing as humans, but by convincing them that they are sufficient alternatives to other tools of human effort. Twitter and Facebook and Google aren’t “better” town halls, neighborhood centers, libraries, or newspapers—they are different ones, run by computers, for better and for worse. The implications of these and other services must be addressed by understanding them as particular implementations of software in corporations, not as totems of otherworldly AI.

On that front, Kaplan could be right: abandoning the term might be the best way to exorcise its demonic grip on contemporary culture. But Isbell’s more traditional take—that AI is machinery that learns and then acts on that learning—also has merit. By protecting the exalted status of its science-fictional orthodoxy, AI can remind creators and users of an essential truth: today’s computer systems are nothing special. They are apparatuses made by people, running software made by people, full of the feats and flaws of both.

* * *

Which is why when I began to read about the growing fear, in certain quarters, that a superhuman-level artificial intelligence might wipe humanity from the face of the Earth, I felt that here, at least, was a vision of our technological future that appealed to my fatalistic disposition.

Such dire intimations were frequently to be encountered in the pages of broadsheet newspapers, as often as not illustrated by the apocalyptic image from the Terminator films—by a titanium-skulled killer robot staring down the reader with the glowing red points of its pitiless eyes. Elon Musk had spoken of A.I. as “our greatest existential threat,” of its development as a technological means of “summoning the demon.” (“Hope we’re not just the biological boot loader for digital superintelligence,” he tweeted in August 2014. “Unfortunately, that is increasingly probable.”) Peter Thiel had announced that “People are spending way too much time thinking about climate change, and way too little thinking about AI.” Stephen Hawking, meanwhile, had written an op-ed for the Independent in which he’d warned that success in this endeavour, while it would be “the biggest event in human history,” might very well “also be the last, unless we learn to avoid the risks.” Even Bill Gates had publicly admitted of his disquiet, speaking of his inability to “understand why some people are not concerned.”

Though I couldn’t quite bring myself to believe it, I was morbidly fascinated by the idea that we might be on the verge of creating a machine that could wipe out the entire species, and by the notion that capitalism’s great philosopher kings—Musk, Thiel, Gates—were so publicly exercised about the Promethean dangers of that ideology’s most cherished ideal. These dire warnings about A.I. were coming from what seemed to be the most unlikely of sources: not from Luddites or religious catastrophists, that is, but from the very people who personify our culture’s reverence for machines.

One of the more remarkable phenomena in this area was the existence of a number of research institutes and think tanks substantially devoted to raising awareness about what was known as “existential risk”—the risk of absolute annihilation of the species, as distinct from mere catastrophes like climate change or nuclear war or global pandemics—and to running the algorithms on how we might avoid this particular fate. There was the Future of Humanity Institute in Oxford, and the Centre for the Study of Existential Risk at the University of Cambridge, and the Machine Intelligence Research Institute at Berkeley, and the Future of Life Institute in Boston. The latter outfit featured on its board of scientific advisers not just prominent figures from science and technology, like Musk and Hawking and the pioneering geneticist George Church, but also, for some reason, the beloved film actors Alan Alda and Morgan Freeman.

What was it these people were referring to when they spoke of existential risk? What was the nature of the threat, the likelihood of its coming to pass? Were we talking about a 2001: A Space Odyssey scenario, where a sentient computer undergoes some malfunction or other and does what it deems necessary to prevent anyone from shutting it down? Were we talking about a Terminator scenario, where a Skynettian matrix of superintelligent machines gains consciousness and either destroys or enslaves humanity in order to further its particular goals? Certainly, if you were to take at face value the articles popping up about the looming threat of intelligent machines, and the dramatic utterances of savants like Thiel and Hawking, this would have been the sort of thing you’d have in mind. They may not have been experts in AI, as such, but they were extremely clever men who knew a lot about science. And if these people were worried, shouldn’t we all be worrying with them?

* * *

Nate Soares raised a hand to his close-shaven head and tapped a finger smartly against the frontal plate of his monkish skull.

“Right now,” he said, “the only way you can run a human being is on this quantity of meat.”

We were talking, Nate and I, about the benefits that might come with the advent of artificial superintelligence. For Nate, the most immediate benefit would be the ability to run a human being—to run, specifically, himself—on something other than this quantity of neural meat to which he was gesturing.

He was a sinewy, broad-shouldered man in his mid-20s, with an air of tightly controlled calm; he wore a green T-shirt bearing the words “NATE THE GREAT,” and as he sat back in his office chair and folded his legs at the knee, I noted that he was shoeless, and that his socks were mismatched, one plain blue, the other white and patterned with cogs and wheels.

The room we conversed in was utterly featureless, save for the chairs we were sitting on, and a whiteboard, and a desk, on which rested an open laptop and a single book, which I happened to note was a hardback copy of philosopher Nick Bostrom’s surprise hit book Superintelligence: Paths, Dangers, Strategies—which lays out, among other apocalyptic scenarios, a thought experiment in which an A.I. is directed to maximize the production of paperclips and proceeds to convert the entire planet into paperclips and paperclip production facilities.

This was Nate’s office at the Machine Intelligence Research Institute in Berkeley. The bareness of the space was a result, I gathered, of the fact that he had only just assumed his role as the executive director, having left a lucrative career as a software engineer at Google the previous year and having subsequently risen swiftly up the ranks at MIRI.

He spoke, now, of the great benefits that would come, all things being equal, with the advent of artificial superintelligence. By developing such a transformative technology, he said, we would essentially be delegating all future innovations—all scientific and technological progress—to the machine.

These claims were more or less standard among those in the tech world who believed that artificial superintelligence was a possibility. The problem-solving power of such a technology, properly harnessed, would lead to an enormous acceleration in the turnover of solutions and innovations, a state of permanent Copernican revolution. Questions that had troubled scientists for centuries would be solved in days, hours, minutes. Cures would be found for diseases that currently obliterated vast numbers of lives, while ingenious workarounds for overpopulation would be simultaneously devised. To hear of such things was to imagine a God who had long since abdicated all obligations toward his creation making a triumphant return in the guise of software, an alpha and omega of zeroes and ones.

It was Nate’s belief that, should we manage to evade annihilation by machines, such a state of digital grace would inevitably be ours.

However, this mechanism, docile or otherwise, would be operating at an intellectual level so far above that of its human progenitors that its machinations, its mysterious ways, would be impossible for us to comprehend, in much the same way that our actions are, presumably, incomprehensible to the rats and monkeys we use in scientific experiments. And so, this intelligence explosion would, in one way or another, be an end to the era of human dominance—and very possibly the end of human existence.

“It gets very hard to predict the future once you have smarter-than-human things around,” said Nate. “In the same way that it gets very hard for a chimp to predict what is going to happen because there are smarter-than-chimp things around. That’s what the Singularity is: It’s the point past which you expect you can’t see.”

What he and his colleagues—at MIRI, at the Future of Humanity Institute, at the Future of Life Institute—were working to prevent was the creation of an artificial superintelligence that viewed us, its creators, as raw material that could be reconfigured into some more useful form (not necessarily paper clips). And the way Nate spoke about it, it was clear that he believed the odds to be stacked formidably high against success.

“To be clear,” said Nate, “I do think that this is the shit that’s going to kill me.” And not just him—“all of us,” he said. “That’s why I left Google. It’s the most important thing in the world, by some distance. And unlike other catastrophic risks—like say climate change—it’s dramatically underserved. There are thousands of person-years and billions of dollars being poured into the project of developing AI. And there are fewer than 10 people in the world right now working full-time on safety. Four of whom are in this building.”

“I’m somewhat optimistic,” he said, leaning back in his chair, “that if we raise more awareness about the problems, then with a couple more rapid steps in the direction of artificial intelligence, people will become much more worried that this stuff is close, and the A.I. field will wake up to this. But without people like us pushing this agenda, the default path is surely doom.”

For reasons I find difficult to identify, this term default path stayed with me all that morning, echoing quietly in my head as I left MIRI’s offices and made for the BART station, and then as I hurtled westward through the darkness beneath the bay. I had not encountered the phrase before, but understood intuitively that it was a programming term of art transposed onto the larger text of the future. And this term default path—which, I later learned, referred to the list of directories in which an operating system seeks executable files according to a given command—seemed in this way to represent in miniature an entire view of reality: an assurance, reinforced by abstractions and repeated proofs, that the world operated as an arcane system of commands and actions, and that its destruction or salvation would be a consequence of rigorously pursued logic. It was exactly the sort of apocalypse, in other words, and exactly the sort of redemption, that a computer programmer would imagine.
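
For readers outside programming, the term of art is easy to make concrete: the search path is just an ordered list of directories, and a few lines of standard-library Python will print it. This is a small illustration, nothing more:

```python
import os
import shutil

# The "path" the narrator describes: the ordered list of directories the
# operating system searches when you type a command.
for directory in os.environ.get("PATH", "").split(os.pathsep):
    print(directory)

# shutil.which walks that list and reports the first matching executable,
# or None if the command is not found anywhere on the path.
print(shutil.which("python3"))
```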

* * *

One of the people who had been instrumental in the idea of existential risk being taken seriously was Stuart Russell, a professor of computer science at U.C. Berkeley who had, more or less literally, written the book on artificial intelligence. (He was the co-author, with Google’s research director Peter Norvig, of Artificial Intelligence: A Modern Approach, the book most widely used as a core A.I. text in university computer science courses.)

I met Stuart at his office in Berkeley. Pretty much the first thing he did upon sitting me down was to swivel his computer screen toward me and have me read the following passage from a 1960 article by Norbert Wiener, the founder of cybernetics:

If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.

Stuart said that the passage I had just read was as clear a statement as he’d encountered of the problem with AI, and of how that problem needed to be addressed. What we needed to be able to do, he said, was define exactly and unambiguously what it was we wanted from this technology. It was as straightforward as that, and as diabolically complex.

It was not, he insisted, the question of machines going rogue, formulating their own goals and pursuing them at the expense of humanity, but rather the question of our own failure to communicate with sufficient clarity.

“I get a lot of mileage,” he said, “out of the King Midas myth.”

What King Midas wanted, presumably, was the selective ability to turn things into gold by touching them, but what he asked for (and what Dionysus famously granted him) was the inability to avoid turning things into gold by touching them. You could argue that his root problem was greed, but the proximate cause of his grief—which included, let’s remember, the unwanted alchemical transmutations of not just all foodstuffs and beverages, but ultimately his own child—was that he was insufficiently clear in communicating his wishes.

The fundamental risk with AI, in Stuart’s view, was no more or less than the fundamental difficulty in explicitly defining our own desires in a logically rigorous manner.

Imagine you have a massively powerful artificial intelligence, capable of solving the most vast and intractable scientific problems. Imagine you get in a room with this thing, and you tell it to eliminate cancer once and for all. The computer will go about its work and will quickly conclude that the most effective way to do so is to obliterate all species in which uncontrolled division of abnormal cells might potentially occur. Before you have a chance to realize your error, you’ve wiped out every sentient lifeform on Earth except for the artificial intelligence itself, which will have no reason to believe it has not successfully completed its task.
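
The cancer scenario is, at bottom, an objective-misspecification problem, and it can be caricatured in a few lines of code. Everything below (the states, actions, and weights) is invented purely to illustrate how an optimizer follows the letter of its objective rather than its intent:

```python
# A toy version of misspecification: the optimizer does exactly what the
# objective says, not what we meant. All numbers are invented.
world = {"humans": 7.5e9, "cancer_cases": 1.8e7}

actions = {
    "fund_research":   {"cancer_cases": -5e6,   "humans": 0},
    "eliminate_hosts": {"cancer_cases": -1.8e7, "humans": -7.5e9},
}

def naive_objective(state):
    # "Eliminate cancer": fewer cases is strictly better; nothing else matters.
    return -state["cancer_cases"]

def safer_objective(state):
    # One attempt at what we actually meant: cancer matters, but so do people.
    return -state["cancer_cases"] + 1e3 * state["humans"]

def best_action(objective):
    def outcome(effects):
        return {k: world[k] + effects.get(k, 0) for k in world}
    return max(actions, key=lambda a: objective(outcome(actions[a])))

print(best_action(naive_objective))   # -> eliminate_hosts
print(best_action(safer_objective))   # -> fund_research
```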

At times, it seemed to me perfectly obvious that the whole existential risk idea was a narcissistic fantasy of heroism and control—a grandiose delusion, on the part of computer programmers and tech entrepreneurs and other cloistered egomaniacal geeks, that the fate of the species lay in their hands: a ludicrous binary eschatology whereby we would either be destroyed by bad code or saved by good code.

But there were other occasions where I would become convinced that I was the only one who was deluded, and that Nate Soares, for instance, was absolutely, terrifyingly right: that thousands of the world’s smartest people were spending their days using the world’s most sophisticated technology to build something that would destroy us all. It seemed, if not quite plausible, on some level intuitively, poetically, mythologically right.

This was what we did as a species, after all: We built ingenious devices, and we destroyed things.

* * *

The term “artificial intelligence” is widely used but little understood. As we see it permeate our everyday lives, we should deal with its inevitable exponential growth and learn to embrace it before tremendous economic and social changes overwhelm us.

Part of the confusion about artificial intelligence is in the name itself. There is a tendency to think about AI as an endpoint — the creation of self-aware beings with consciousness that exist thanks to software. This somewhat disquieting concept weighs heavily; what makes us human when software can think, too? It also distracts us from the tremendous progress that has been made in developing software that ultimately drives AI: machine learning.

Machine learning allows software to mimic and then perform tasks that were until very recently carried out exclusively by humans. Simply put, software can now substitute for workers’ knowledge to a level where many jobs can be done as well — or even better — by software. This reality makes a conversation about when software will acquire consciousness somewhat superfluous.

When you combine the explosion in competency of machine learning with a continued development of hardware that mimics human action (think robots), our society is headed into a perfect storm where both physical labor and knowledge labor are equally under threat.

The trends are here, whether through the coming of autonomous taxis or medical diagnostics tools evaluating your well-being. There is no reason to expect this shift towards replacement to slow as machine learning applications find their way into more parts of our economy.

The invention of the steam engine and the industrialization that followed may provide a useful analogue to the challenges our society faces today. Steam power first substituted the brute force of animals and eventually moved much human labor away from growing crops to working in cities. Subsequent technological waves such as coal power, electricity and computerization continued to change the very nature of work. Yet, through each wave, the opportunity for citizens to apply their labor persisted. Humans were the masters of technology and found new ways to find income and worth through the jobs and roles that emerged as new technologies were applied.

Here’s the problem: I am not yet seeing a similar analogy for human workers when faced with machine learning and AI. Where are humans to go when most things they do can be better performed by software and machinery? What happens when human workers are not users of technology in their work but instead replaced by it entirely? I will admit to wanting to have an answer, but not yet finding one.

Some say our economy will adjust, and we will find ways to engage in commerce that relies on human labor. Others are less confident and predict a continued erosion of labor as we know it, leading to widespread unemployment and social unrest.

Other big questions raised by AI include what our expectations of privacy should be when machine learning needs our personal data to be efficient. Where do we draw the ethical lines when software must choose between two people’s lives? How will a society capable of satisfying such narrow individual needs maintain a unified culture and look out for the common good?

The potential and promise of AI requires a discussion free of ideological rigidity. Whether change occurs as our society makes those conscious choices or while we are otherwise distracted, the evolution is upon us regardless.

* * *

Automation has become an increasingly disruptive force in the labour market.

Self-driving cars threaten the job security of millions of American truck drivers. At banks, automated tellers are increasingly common. And at wealth management firms, robo-advisers are replacing humans.

“Any job that is routine or monotonous runs the risk of being automated away,” Yisong Yue, an assistant professor at the California Institute of Technology, told Business Insider.

While it may seem like low-skill jobs face the most risk of being replaced by automation, complicated jobs that are fairly routine face some of the biggest risks, Yue, who teaches in the computing and mathematical sciences department at Caltech, explained.

In the legal profession, for example, groups of lawyers and paralegals sift through vast amounts of documents searching for keywords. Technology now exists that can quickly do that work. In the future, it’s likely that a handful of lawyers will do the job of 20, thanks to automation.
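
A crude sketch of that document-review automation might look like the following. The keywords and documents are invented, and real e-discovery tools are far more sophisticated, but the principle of ranking documents for human review is the same:

```python
from collections import Counter

# Rank documents by keyword hits so humans review the most relevant first.
KEYWORDS = {"indemnify", "breach", "termination"}

documents = {
    "contract_a.txt": "Either party may terminate on breach of this agreement.",
    "memo_b.txt": "Lunch schedule for the quarter.",
    "contract_c.txt": "The supplier shall indemnify the buyer against any breach.",
}

def keyword_hits(text: str) -> int:
    words = Counter(w.strip(".,").lower() for w in text.split())
    return sum(words[k] for k in KEYWORDS)

ranked = sorted(documents, key=lambda name: keyword_hits(documents[name]), reverse=True)
for name in ranked:
    print(keyword_hits(documents[name]), name)
```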

That’s not heartening news for college students about to join the workforce. But experts say there are ways for them to adapt their academic pursuits to compete with an increasingly automated workforce, by learning to be critical thinkers who improvise in ways that robots cannot.

“I think that the types of jobs that are secure are the types of jobs that require free form pattern matching and creativity; things that require improvisation,” Yue said.

As well as thinking critically, Yue believes that students who are comfortable around computers and those who understand at least some programming will have advantages in an automated workforce.

Shon Burton, CEO of HiringSolved, a company that leverages AI and machine-learning technology to make job recruiting more efficient, agrees.

“Absolutely I think there’s value in some level of understanding computer science,” Burton told Business Insider. He explained that people who understand technology know the limitations and capabilities of an automated process, and can use that knowledge to work smarter.

Still, that doesn’t mean that STEM majors alone hold the key to finding success in the future workforce. In fact, it may be those with “soft skills,” like adaptability and communication, that actually have an advantage.

“Students should be thinking, ‘in 20 years, where does the human add value?,’” Burton said. There will always be areas where humans will want to interact with other humans, he continued.

For example, perhaps artificial intelligence will be better able to diagnose a disease, but humans will still likely want to talk to a doctor to learn about their diagnosis and discuss options.

The best thing a college student can do to ensure they will succeed in an automated workplace is to choose an industry they love and to focus on learning creativity and communication skills, according to Burton.

“The important thing if you’re coming out of school is to think about where your edge is, you think about doing something you really want to do,” he explained.

Then he asked, “where does the automation stop?”

* * *

A recent study found that 50% of today’s occupations will be gone by 2020, and a 2013 Oxford study forecast that 47% of jobs will be automated by 2034. A Ball State study found that only 13% of manufacturing job losses were due to trade; the rest came from automation. A McKinsey study suggests 45% of knowledge-work activity can be automated.

94% of the new job creation since 2005 is in the gig economy. These aren’t stable jobs with benefits on a career path. And if you are driving for Uber, your employer’s plan is to automate your job. Amazon has 270k employees, but most are in soon-to-be-automated ops and fulfillment roles. Facebook has 15k employees and a $330B market cap, and Snapchat in August had double Facebook’s market cap per employee, at $48M. The economic impact of tech was supposed to be rising productivity, but productivity and wages have been stagnant in recent years.

And the Trumpster…

Trump’s lack of attention to the issue is based on good reasons and bad ones. The bad ones are more fun, so let’s start with them. Trump knows virtually nothing about technology — other than a smartphone, he doesn’t use it much. And the industries he’s worked in — construction, real estate, hotels, and resorts — are among the least sophisticated in their use of information technology. So he’s not well equipped to understand the dynamics of automation-driven job loss.

The other Trump shortcoming is that the automation phenomenon is not driven by deals and negotiation. The Art of the Deal’s author clearly has a penchant for sparring with opponents in highly visible negotiations. But automation-related job loss is difficult to negotiate about. It’s the silent killer of human labor, eliminating job after job over a period of time. Jobs often disappear through attrition. There are no visible plant closings to respond to, no press releases by foreign rivals to counter. It’s a complex subject that doesn’t lend itself to TV sound bites or tweets.

* * *

Ray Kurzweil, the author, inventor, computer scientist, futurist and Google employee, was the featured keynote speaker Thursday afternoon at Postback, the annual conference presented by Seattle mobile marketing company Tune. His topic was the future of mobile technology. In Kurzweil’s world, however, that doesn’t just mean the future of smartphones — it means the future of humanity.

Continue reading for a few highlights from his talk.

On the effect of the modern information era: People think the world’s getting worse, and we see that on the left and the right, and we see that in other countries. People think the world is getting worse. … That’s the perception. What’s actually happening is our information about what’s wrong in the world is getting better. A century ago, there would be a battle that wiped out the next village, you’d never even hear about it. Now there’s an incident halfway around the globe and we not only hear about it, we experience it.

Which is why the perception that someone like Trump sells could be false and misleading. More important, though, is what actions we take based upon that information. If I respond differently, then my perception has directly changed my actions, which has unforeseen ramifications when multiplied by millions.

Brexit could be an example of exactly this.

On the potential of human genomics: It’s not just collecting what is basically the object code of life that is expanding exponentially. Our ability to understand it, to reverse-engineer it, to simulate it, and most importantly to reprogram this outdated software is also expanding exponentially. Genes are software programs. It’s not a metaphor. They are sequences of data. But they evolved many years ago, many tens of thousands of years ago, when conditions were different.

Clearly our genome is not exactly the same. It too has evolved. This may have been through random mutations, with certain recipients thriving in a changing environment.

How technology will change humanity’s geographic needs: We’re only crowded because we’ve crowded ourselves into cities. Try taking a train trip across the United States, or Europe or Asia or anywhere in the world. Ninety-nine percent of the land is not used. Now, we don’t want to use it because you don’t want to be out in the boondocks if you don’t have people to work and play with. That’s already changing now that we have some level of virtual communication. We can have workgroups that are spread out. … But ultimately, we’ll have full-immersion virtual reality from within the nervous system, augmented reality.

One of my favorite works of fiction is Asimov’s “Foundation” series. The planet Trantor… entirely covered by a city. Is that what we want?

On connecting the brain directly to the cloud: We don’t yet have brain extenders directly from our brain. We do have brain extenders indirectly. I mean this (holds up his smartphone) is a brain extender. … Ultimately we’ll put them directly in our brains. But not just to do search and language translation and other types of things we do now with mobile apps, but to actually extend the very scope of our brain.

The mobile phone as a brain extender. Possibly true for 1% of all users. Most use Facebook or some other time-wasting application, and essentially gossip. A monumental waste of time. Far from being a brain extender, for most it is the ultimate dumbing-down machine. Text language encourages bad spelling, poor grammar, etc. So you can keep your brain extenders.

As far as directly connecting your brain to the cloud… that sounds like “The Matrix,” which is of course the subject of philosophical musings about the brain in a vat. The potential for mind control would seem to be a real possibility here. Not for me, thanks.

Why machines won’t displace humans: We’re going to merge with them, we’re going to make ourselves smarter. We’re already doing that. These mobile devices make us smarter. We’re routinely doing things we couldn’t possibly do without these brain extenders.

To date, I would argue that the vast majority are significantly more stupid because of them.

As to robots and AI, imagine a man, Spock, whose choice-making is driven 100% by logic, rather than by 50% logic and 50% emotion. How long does the emotional decision-maker last? Most emotional decisions get us into trouble. The market is an excellent example. Politics is another, i.e., Trump.

 

* * *

EXPERTS warn that “the substitution of machinery for human labour” may “render the population redundant”. They worry that “the discovery of this mighty power” has come “before we knew how to employ it rightly”. Such fears are expressed today by those who worry that advances in artificial intelligence (AI) could destroy millions of jobs and pose a “Terminator”-style threat to humanity. But these are in fact the words of commentators discussing mechanisation and steam power two centuries ago. Back then the controversy over the dangers posed by machines was known as the “machinery question”. Now a very similar debate is under way.

After many false dawns, AI has made extraordinary progress in the past few years, thanks to a versatile technique called “deep learning”. Given enough data, large (or “deep”) neural networks, modelled on the brain’s architecture, can be trained to do all kinds of things. They power Google’s search engine, Facebook’s automatic photo tagging, Apple’s voice assistant, Amazon’s shopping recommendations and Tesla’s self-driving cars. But this rapid progress has also led to concerns about safety and job losses. Stephen Hawking, Elon Musk and others wonder whether AI could get out of control, precipitating a sci-fi conflict between people and machines. Others worry that AI will cause widespread unemployment, by automating cognitive tasks that could previously be done only by people. After 200 years, the machinery question is back. It needs to be answered.
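
For readers who want a sense of what “training a neural network” actually involves, here is a deliberately tiny sketch: a two-layer network learning XOR by gradient descent. Production deep-learning systems differ enormously in scale and architecture, but the underlying loop (predict, measure error, nudge the weights) is the same:

```python
import numpy as np

# A deliberately tiny illustration of "training a neural network": a
# two-layer net with sigmoid units learning XOR by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros((1, 8))   # input  -> hidden
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros((1, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backward pass (squared-error loss)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out              # gradient-descent updates
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2).ravel())  # should approach [0, 1, 1, 0]
```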

Machinery questions and answers

The most alarming scenario is of rogue AI turning evil, as seen in countless sci-fi films. It is the modern expression of an old fear, going back to “Frankenstein” (1818) and beyond. But although AI systems are impressive, they can perform only very specific tasks: a general AI capable of outwitting its human creators remains a distant and uncertain prospect. Worrying about it is like worrying about overpopulation on Mars before colonists have even set foot there, says Andrew Ng, an AI researcher. The more pressing aspect of the machinery question is what impact AI might have on people’s jobs and way of life.

This fear also has a long history. Panics about “technological unemployment” struck in the 1960s (when firms first installed computers and robots) and the 1980s (when PCs landed on desks). Each time, it seemed that widespread automation of skilled workers’ jobs was just around the corner.

Each time, in fact, technology ultimately created more jobs than it destroyed, as the automation of one chore increased demand for people to do the related tasks that were still beyond machines. Replacing some bank tellers with ATMs, for example, made it cheaper to open new branches, creating many more new jobs in sales and customer service. Similarly, e-commerce has increased overall employment in retailing. As with the introduction of computing into offices, AI will not so much replace workers directly as require them to gain new skills to complement it (see our special report in this issue). Although a much-cited paper suggests that up to 47% of American jobs face potential automation in the next decade or two, other studies estimate that less than 10% will actually go.

Even if job losses in the short term are likely to be more than offset by the creation of new jobs in the long term, the experience of the 19th century shows that the transition can be traumatic. Economic growth took off after centuries of stagnant living standards, but decades passed before this was fully reflected in higher wages. The rapid shift of growing populations from farms to urban factories contributed to unrest across Europe. Governments took a century to respond with new education and welfare systems.

This time the transition is likely to be faster, as technologies diffuse more quickly than they did 200 years ago. Income inequality is already growing, because high-skill workers benefit disproportionately when technology complements their jobs. This poses two challenges for employers and policymakers: how to help existing workers acquire new skills; and how to prepare future generations for a workplace stuffed full of AI.

An intelligent response

As technology changes the skills needed for each profession, workers will have to adjust. That will mean making education and training flexible enough to teach new skills quickly and efficiently. It will require a greater emphasis on lifelong learning and on-the-job training, and wider use of online learning and video-game-style simulation. AI may itself help, by personalising computer-based learning and by identifying workers’ skills gaps and opportunities for retraining.

Social and character skills will matter more, too. When jobs are perishable, technologies come and go and people’s working lives are longer, social skills are a foundation. They can give humans an edge, helping them do work that calls for empathy and human interaction—traits that are beyond machines.

And welfare systems will have to be updated, to smooth the transitions between jobs and to support workers while they pick up new skills. One scheme widely touted as a panacea is a “basic income”, paid to everybody regardless of their situation. But that would not make sense without strong evidence that this technological revolution, unlike previous ones, is eroding the demand for labour. Instead countries should learn from Denmark’s “flexicurity” system, which lets firms hire and fire easily, while supporting unemployed workers as they retrain and look for new jobs. Benefits, pensions and health care should follow individual workers, rather than being tied (as often today) to employers.

Despite the march of technology, there is little sign that industrial-era education and welfare systems are yet being modernised and made flexible. Policymakers need to get going now because, the longer they delay, the greater the burden on the welfare state. John Stuart Mill wrote in the 1840s that “there cannot be a more legitimate object of the legislator’s care” than looking after those whose livelihoods are disrupted by technology. That was true in the era of the steam engine, and it remains true in the era of artificial intelligence.
