
The title of the book—Conserving America?—tells us much of what we need to know about Deneen’s thesis. For much of conservative intellectual history in the United States, the question of whether America should or could be conserved was beyond dispute, and, for the most part, it remains so. This conservation typically takes the form of conservative intellectuals fighting for the preservation of the principles of the Declaration and Constitution against those who seek, to borrow a phrase from Pope Benedict XVI, to read those documents through a hermeneutic of rupture. Interestingly, Deneen argues in the book that if America is to be conserved, it will not be through the promotion of conservative principles but through a rejection of those principles. Deneen believes that the philosophy upon which the United States was founded, a Hobbesian and Lockean form of liberalism, contains the seeds of its own destruction.

Deneen has become, over the course of his long career, one of the nation’s sharpest critics, arguing that America’s founding principles are at their root progressive and at odds with the national self-image preferred by conservatives. Conserving America? is composed of twelve chapters taken mainly from speeches and lectures on subjects ranging from whether in fact America has a conservative tradition (according to Deneen, no) to what will happen when American liberalism possibly falls apart.

Deneen argues that the “Enlightenment and liberal philosophies that informed the American founding posited the existence of radically autonomous human beings in the state of nature, rights bearing creatures who consent to the creation of a government which exists to secure those rights.” But the truth, according to Deneen, is just the opposite. This radical autonomy and the state of nature exist only in theory and it is government that is put to the task of making that theory come into existence.

Deneen further argues that our electoral choices are in fact false choices and that there is a “consistent and ongoing continuity in the basic trajectory of modern liberal democracy both at home and abroad” regardless of whether the political left or the political right is in power. This trajectory results in the concentration of power in the hands of a global aristocracy that controls more and more wealth—an aristocracy that moves “continuously back and forth between public and private positions, controlling the major institutions of modern society.”

Deneen argues that the political left is the greatest beneficiary of this arrangement, though it remains silent about, or at times seems blissfully unaware of, liberalism’s beneficence, focusing its attention instead on identity and sexual politics. The political right, on the other hand, “promises to shore up traditional family values while supporting a borderless and dislocating economic system that destabilizes family life especially among those who do not ascend to the global elite, those outside the elite circles who exhibit devastating levels of familial and community disintegration.” For Deneen, conservatism cannot do both; indeed, its support of a limitless globalism destroys any real basis for a conservative polity.

While Deneen discusses the problems and shortcomings of the political left in the book, it is his critique of the political right that is most interesting. American conservatives would do well to take Deneen’s critique seriously even if they find they cannot fully embrace it. His critique of the destabilizing nature of capitalism is important, and it should find a receptive hearing after an election cycle that in many ways hinged on the “forgotten man,” an election cycle during which a major conservative magazine published an article arguing that communities ravaged by capitalist creative destruction in some sense deserve to die, and that the poor people living in them need, more than anything, a U-Haul.

Deneen’s critique of capitalism goes further in that he recognizes that capitalism makes it difficult to have a common good toward which a society can work. Because capitalism is based on the idea that each person working toward his or her own self-interest will benefit society as a whole, it results only in something that resembles cooperation rather than actual cooperation, much less working toward a common good that benefits the whole. This might be okay, argues Deneen following Tocqueville, if not for the fact that the “language of self-interest would [over time] exert a formative influence upon democratic man’s self-understanding” causing him to lose sight of communal responsibility and the common good. The language of self-interest is also deceptive in that “in thinking solely of our own advancement and accumulation, we deceive ourselves in thinking we are wholly self-sufficient and that our success has come solely through our own efforts.” Both of these effects of the reliance on self-interest to direct society result in the breakdown of conscious cooperation and therefore become corrosive to society.

Deneen’s critique of capitalism is instructive, but at times it falls prey to what might be called a boutique mentality. For Deneen, the ideal community looks something like Bedford Falls from the movie It’s a Wonderful Life. In one of his better-known essays, Deneen sees George Bailey as almost as much a villain as Mr. Potter, for it is Bailey who brings urban sprawl to Bedford Falls in the form of affordable housing. Deneen sees in Bailey not a hero but an agent of destruction: “George represents the vision of post-war America: the ambition to alter the landscape so as to accommodate modern life, to uproot nature and replace it with monuments of human accomplishment. To re-engineer life for mobility and swiftness, one unencumbered by permanence, one no longer limited to a moderate and comprehensible human scale.”

Deneen argues that a community like Bailey Park cannot sustain trust and community in the same way as Bedford Falls. Deneen sees Bailey Park as the gateway to an America “wounded first by Woolworth, then K-Mart, then Wal-Mart; mercilessly bled by the automobile; drained of life by subdivisions, interstates, and the suburbs.” It’s a long list of wrongs to load on the back of George Bailey. Where should the people who moved to Bailey Park live? They can’t afford to live in Bedford Falls unless Deneen’s critique of capitalism goes further than it seems. Deneen is not altogether wrong about suburban life or urban sprawl, but it’s not clear what a realistic alternative would be without a massive re-appropriation of wealth. Moreover, conservatives have long admired this movie for its opposition, in the form of George Bailey, to a rapacious capitalism: the movie’s alternative vision of Bedford Falls, had the rapacious banker Potter prevailed, is even less attractive. The challenge for Deneen is to demonstrate that there is some path that avoids both the consumerism and the community-erasure that the movie presents.

Another theme worth grappling with is Deneen’s argument that American conservatism is not at all conservative. Deneen argues that the two main commitments of mainstream American conservatism have been the “strenuous defense of a relatively unregulated market and the insistence upon a strong military posture that extended American power into every corner of the world, often explicitly in defense of promoting universalized liberal democracy …” Neither, he says, supports the local and humane scale of community necessary for the common good and political liberty.

This amounts to arguing that conservatism’s main goal has been to promote liberalism. In appropriating the tools of liberalism, Deneen argues, conservatism was wildly successful, but its success did away with conservative ways of life like family farming and family-owned businesses. Deneen further argues that conservative promotion of economic and cultural globalization, and conservative commitments to the “abstractions of the markets and the abstractions of national allegiance,” destroyed the local forms of American life that had sustained distinctive communities across America.

So what is the way forward? Deneen sees the collapse of liberalism as possible even though, in his estimation, nearly every human institution has been formed to enact and perpetuate liberalism. What reason is there for hope in the face of these odds? Deneen believes that as liberalism becomes more fully itself, it will become harder to explain its “endemic failures [massive income inequality, the breakdown of community, etc.] as merely accidental or unintended.” Deneen is not overly optimistic on this score. He explains that as liberalism’s failures become apparent, many of the proposed alternatives will be even worse, and so it is our responsibility to defeat those alternatives and propose something better in their place.

The book’s concluding chapter is titled “After Liberalism,” a deliberate homage to Alasdair MacIntyre’s After Virtue. Deneen acknowledges that many of the alternatives to liberalism on the world stage are not comforting, but he urges us to actively hope for the end of liberalism and that it might be a “fourth sailing—after antiquity, after Christianity, after liberalism into a post-liberal and hopeful future.”

If we are to have any hope for this future after liberalism, conservatives will need to take seriously the challenges thinkers like Deneen put forth. The effects of liberalism and the free market on community must not be dismissed as intractable, nor their possible alternatives as unrealistic. Conservatives would also be remiss if they did not adequately address Deneen’s critique of the deleterious effects of the philosophy of Hobbes and Locke in particular, as applied through constitutionalism. If conservatives fail to address these problems seriously, it will be due to a failure of imagination. In fact it can be said, based on Deneen’s argument, that the only way to achieve the stated goals of conservatism is to stop being “conservative.”

* * *

You can learn a lot about a culture from its drug use. Robert McAlmon, an American author living in Berlin during the tumultuous Weimar years, marveled that “dope, mostly cocaine, was to be had in profusion” at “dreary night clubs” where “poverty-stricken boys and girls of good German families sold it, and took it.” Cocaine was banned in 1924, though few people noticed—use peaked three years later. For those who preferred downers, morphine was just as easily accessible. Pharmacists legally prescribed the opioid for non-serious ailments, and morphine addiction was common among World War One veterans. The market was bolstered by low prices—for Americans, McAlmon noted that enough cocaine for “quite too much excitement” cost about ten cents—and by the fact that production was more or less local. In the 1920s, German companies generated 40 percent of the world’s morphine, and controlled 80 percent of the global cocaine market.

When the Nazis rose to power, illegal drug consumption fell. Suddenly, drugs were regarded as “toxic” to the German body, and folded into the escalating discourse of anti-Semitism. Users were penalized with prison sentences, and addicts were classed—along with Jews, gypsies and homosexuals—as undesirable social elements. By the end of the 1930s, pharmaceutical production had pivoted away from opioids and cocaine and towards synthetic stimulants that could be produced entirely within Germany, per Nazi directive. The transition from cabaret cocaine to over-the-counter meth helped fuel what German journalist Norman Ohler in his new book Blitzed: Drugs in the Third Reich calls the “developing performance society” of the early Nazi era, and primed Germany for the war to come.

The breakthrough moment came in 1937, when the Temmler-Werke company introduced Pervitin, a methamphetamine-based stimulant. (The doctor who developed it, Fritz Hauschild, would go on to pioneer East Germany’s sports doping program.) Within months, this variant of crystal meth was available without a prescription—even sold in boxed chocolates—and was widely adopted by all sectors of society to elevate mood, control weight gain, and increase productivity. It’s impossible to untangle Pervitin’s success from Germany’s rapidly changing economic fortunes under the Third Reich. As the country rebounded from economic depression to nearly full employment, marketing for Pervitin claimed it would help “integrate shirkers, malingerers, defeatists and whiners” into the rapidly expanding workforce. Students took it to cram for exams; housewives took it to stave off depression. Pervitin use was so common as to be unremarkable, a feature of life in the early Third Reich.

Meanwhile, in the military, Pervitin was enthusiastically embraced as the vanguard of the so-called “war on exhaustion.” As Hitler’s troops began annexing territory in the spring of 1939, Wehrmacht soldiers started relying on “tank chocolate” to keep them alert for days on end. Though Nazi medical officials were increasingly aware of Pervitin’s risks—tests found that soldiers’ critical thinking skills declined the longer they stayed awake—the short-term gains were appealing enough. Even after drug sales to the general public were restricted in April 1940, the German Army High Command issued the so-called “stimulant decree,” ordering Temmler to produce 35 million tablets for military use. Ohler describes the moment the tablets took hold as the Wehrmacht began its push west:

Twenty minutes later, the nerve cells in their brains started releasing the neurotransmitters. All of a sudden, dopamine and noradrenalin intensified perception and put the soldiers in a state of absolute alertness. The night brightened: no one would sleep, lights were turned on, and the ‘Lindworm’ of the Wehrmacht started eating its way tirelessly towards Belgium…

Three sleepless days later, the Nazis were in France. The stunned Allies were closer to defeat than they would be at any other point during the war. As Ohler writes in Blitzed, the Germans had claimed more land in less than a hundred hours than they had over the entire course of World War One.


Blitzed, which was a bestseller in Germany, comprises two main parts: a look at the effects of drugs on the German military, and a look at their effects on the Führer himself. While Hitler’s medical records have been scrutinized for decades—first by wartime American intelligence agencies, and more recently by scholars Hans-Joachim Neumann and Henrik Eberle in Was Hitler Ill?—Ohler spent five years in international archives mounting a case that the dictator was suffering not merely from stress or madness but from drug-induced psychosis, which fueled his lethal tendencies.

This is Ohler’s first nonfiction book (he’s written three novels) and the first popular book of its kind, filling a gap between specialist academic literature and sensationalist TV documentaries. There’s a contemporary Berlin sensibility to Blitzed: Ohler came upon the idea after a local DJ told him that the Nazis “took loads of drugs,” and his archival research is interspersed with accounts of urban exploring at the former Temmler factory. The hipster-as-historian persona occasionally feels forced—Ohler characterizes Hitler as a junkie and his doctors as dealers a few too many times—but the book is an impressive work of scholarship, with more than two dozen pages of footnotes and the blessing of esteemed World War Two historians. From Hitler’s irregular hours and unusual dietary preferences—his staff would leave out apple raisin cakes for him to eat in the middle of the night—to his increasingly monomaniacal demands, Ohler offers a compelling explanation for Hitler’s erratic behavior in the final years of the war, and how the biomedical landscape of the time affected the way history unfolded.

Over the past half-century, discussions about Hitler’s health have touched lightly on Dr. Theodor Morell, a private practitioner who specialized in dermatology and venereal diseases before becoming the Führer’s personal physician in 1936. According to Ohler, Morell’s role was far greater than previously acknowledged. Despite being widely regarded as a fraud, Morell was granted more access to Hitler than anybody other than Eva Braun. During the nine years the doctor treated Hitler, he is believed to have given the Führer between 28 and 90 different drugs, including Pervitin, laxatives, anti-gas pills with strychnine in them, morphine derivatives, seminal extract from bulls, body-building supplements, digestives, sedatives, hormones, and many vitamins of mysterious provenance, mostly administered via injection. This all happened quietly, as the myth of Hitler-as-teetotaler was central to Nazi ideology: “Hitler allegedly didn’t even allow himself coffee and legend had it that after the First World War he threw his last pack of cigarettes into the Danube,” Ohler writes. Luckily, Morell left detailed accounts of Hitler’s medical records, likely believing that if anything happened to “Patient A,” he would be held responsible.

Hitler did not have any serious medical conditions at the start of the war—he suffered from painful gas, believed to be the result of his vegetarianism—but over the years, he came to rely more and more on Morell’s injections. After the fall of 1941, when the war began to turn in the Allies’ favor—and, Ohler observes, the “dip in Hitler’s performance became obvious”—they took on greater potency. One night in the summer of 1943, Hitler awoke with violent stomach cramps. Knowing he was scheduled to meet Mussolini the next day, Morell gave the Führer his first dose of Eukodal, an oxycodone-based drug twice as strong as morphine. It had the desired effect: Hitler ranted through the meeting, preventing Mussolini from pulling Italy out of the war. The Führer was delighted; Morell remained firmly in Hitler’s good graces.

Records show that Eukodal was administered only 24 more times between that night and the end of 1944, but Ohler suspects that the coded reports disguise a much higher number. “This approach to the dictator’s health,” he writes of the injections, “could be compared to using a sledgehammer to crack a walnut.”

Aside from power and prestige, there were lucrative reasons for men unconcerned with morality to work for the Third Reich. Before becoming Hitler’s physician, Morell made a name for himself in the emerging field of vitamins, becoming one of the first doctors in Germany to promote them as medicinal. Following his appointment, Morell used his role to develop his vanity vitamin business, enlisting Hitler as his “gold standard” patient.

To this end, Morell developed a preparation called Vitamultin that he marketed across Europe, with special packaging for Hitler’s personal vitamins and another for those of high-ranking Nazi officials. Though the tablets mostly consisted of dried lemon, milk and sugar, the SS ordered hundreds of millions of them; the Nazi trade unions requested nearly a billion. Pleased with his returns, Morell went on to take over an “Aryanized” cooking-oil company in Czechoslovakia, which he converted into a factory for vitamin production. Soon after, he advanced plans to construct an “organotherapeutic factory” that would manufacture hormone preparations from slaughterhouse leftovers.

By 1943, Morell was a one-man empire, and not even a ban on introducing new medication to the German market could stop him. “The Führer has authorized me to do the following,” he wrote in a letter to the Reich Health Office. “If I bring out and test a remedy and then apply it in the Führer’s headquarters, and apply it successfully, then it can be applied elsewhere in Germany and no longer needs authorization.”

A number of books have covered the same material as Ohler, but none have focused as strongly on how pharmaceuticals ran in the blood of the Third Reich. Pervitin, Eukodal, and other “wonder drugs” of the time were seen as the magic bullets that would allow German productivity to reach new heights, German soldiers to march farther, stay awake longer, and, ironically, cleanse the country of its toxic elements. It was only later, Ohler notes, that the effects would become clear:

Studies show that two thirds of those who take crystal meth excessively suffer from psychosis after three years. Since Pervitin and crystal meth have the same active ingredient, and countless soldiers had been taking it more or less regularly since the invasion of Poland, the Blitzkrieg on France, or the attack on the Soviet Union, we must assume psychotic side-effects, as well as the need to keep increasing the dosage to achieve a noticeable effect.

As it became obvious the Nazis were going to lose, military efforts became increasingly desperate, and life was cheaply traded for grasping attempts at victory. Teenage recruits were dosed with amphetamines and shipped to the front; Navy pharmacologists tested dangerous mixes of high-grade pharmaceuticals on pilots. At a wine bar in Munich, Ohler meets with a Navy official who tells him how in the final months of the war, members of the Hitler Youth were loaded into mini-submarines and sent to sea with not much more than packets of cocaine chewing gum.


The last days of the Third Reich were marked by a combination of delirium and exhaustion. In January 1945, with the Soviets and the Western Allies closing in, Hitler was transferred to an underground bunker beneath the Reich Chancellery. By that point, his addiction and physical deterioration were apparent: he was barely able to sleep or focus, and was regularly receiving Eukodal injections for painful constipation and seizures. Or at least he had been—in the weeks leading up to the new year, the British bombed the pharmaceutical plants that manufactured Eukodal and cocaine, threatening Hitler’s supply. In the months that followed, as his reserve of drugs dwindled, he likely went through a brutal withdrawal. He sacked Morell on April 17, and two weeks later shot himself.

Ohler’s book makes a powerful case for the centrality of drugs to the Nazi war effort, and had he wanted to, he could have easily made it two or three times as long. He only briefly touches on drug experimentation in concentration camps, and doesn’t explore the structural ties that existed between the Third Reich and German pharmaceutical companies, which would come to light during the Nuremberg Trials. Without these relationships, it’s unlikely the German war machine would have run for as long as it did. After supporting the Nazis’ rise to power, pharmaceutical conglomerate I.G. Farben developed the nerve gas used in the camps, and produced oil and synthetic rubber for war efforts. In 1942, Farben set up Auschwitz-Monowitz, a smaller concentration camp within Auschwitz, to provide slave labor for the company’s nearby industrial complex. Tens of thousands of inmates died as a result of experimentation and forced labor, and the development of thalidomide, notorious for causing deformities in fetuses, has been linked to Monowitz.

Ohler also doesn’t mention that the amphetamine craze wasn’t confined to Germany. While Germans were dosing with Pervitin, British and American troops were doing the same with Benzedrine, an amphetamine developed in the ‘30s as the first prescription anti-depressant. (Benzedrine was the ancestor of medications now prescribed for attention deficit disorder.) The Germans eventually decided to drop Pervitin as the war dragged on, but the American military stuck with “bennies,” and by the end of 1945, production was up to a million tablets a day.

The notion that substances play an outsized role in shaping society—especially during wartime—does not belong only to history. Amphetamines still factor heavily in conflicts in Syria (Captagon), Afghanistan (Dexedrine), and West Africa (cocaine mixed with gunpowder). Thanks to pharmacological advances, we’re now better able than ever to grasp how drugs may have altered behaviors and influenced certain moments. Knowing that Eukodal and Pervitin contributed to the grotesqueries of Nazi military strategy, for instance, offers a new lens on a disturbing chapter of history.

* * *

In science fiction, the promise or threat of artificial intelligence is tied to humans’ relationship to conscious machines. Whether it’s Terminators or Cylons or servants like the “Star Trek” computer or the Star Wars droids, machines warrant the name AI when they become sentient—or at least self-aware enough to act with expertise, not to mention volition and surprise.

What to make, then, of the explosion of supposed AI in media, industry, and technology? In some cases, the AI designation might be warranted, if only aspirationally. Autonomous vehicles, for example, don’t quite measure up to R2-D2 (or HAL), but they do deploy a combination of sensors, data, and computation to perform the complex work of driving. But in most cases, the systems making claims to artificial intelligence aren’t sentient, self-aware, volitional, or even surprising. They’re just software.

* * *

Deflationary examples of AI are everywhere. Google funds a system to identify toxic comments online, a machine learning algorithm called Perspective. But it turns out that simple typos can fool it. Artificial intelligence is invoked as a way to strengthen an American border wall, but the “barrier” turns out to be little more than sensor networks and automated kiosks with potentially dubious built-in profiling. Similarly, a “Tennis Club AI” turns out to be just a better line sensor using off-the-shelf computer vision. Facebook announces an AI to detect suicidal thoughts posted to its platform, but closer inspection reveals that the “AI detection” in question is little more than a pattern-matching filter that flags posts for human community managers.
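
To see how little machinery such a feature can involve, here is a deliberately naive sketch of a pattern-matching filter of the sort described above. The phrases and function name are invented for illustration; this is not Facebook’s system, only an indication of what “flagging posts for human review” can amount to.

```python
# A deliberately naive illustration of a pattern-matching filter of the kind
# described above. The phrases and function name are hypothetical, not
# Facebook's actual code.

FLAG_PHRASES = {
    "i want to give up",
    "no reason to go on",
    "can't do this anymore",
}

def flag_for_review(post_text: str) -> bool:
    """Return True if the post should be routed to a human reviewer."""
    text = post_text.lower()
    return any(phrase in text for phrase in FLAG_PHRASES)

if __name__ == "__main__":
    sample = "Some days I feel like I can't do this anymore."
    print(flag_for_review(sample))  # True: flagged for a human, no learning involved
```

Nothing in that snippet learns or adapts; it simply matches strings, which is the point: it is just software.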

AI’s miracles are celebrated outside the tech sector, too. Coca-Cola reportedly wants to use “AI bots” to “crank out ads” instead of humans. What that means remains mysterious. Similar efforts to generate AI music or to compose AI news stories seem promising at first blush—but then, AI editors trawling Wikipedia to correct typos and links end up stuck in infinite loops with one another. And according to human-bot interaction consultancy Botanalytics (no, really), 40 percent of interlocutors give up on conversational bots after one interaction. Maybe that’s because bots are mostly glorified phone trees, or else clever, automated Mad Libs.

AI has also become a fashion for corporate strategy. The Bloomberg Intelligence economist Michael McDonough tracked mentions of “artificial intelligence” in earnings call transcripts, noting a huge uptick in the last two years. Companies boast about undefined AI acquisitions. The 2017 Deloitte Global Human Capital Trends report claims that AI has “revolutionized” the way people work and live, but never cites specifics. Nevertheless, coverage of the report concludes that artificial intelligence is forcing corporate leaders to “reconsider some of their core structures.”

And both press and popular discourse sometimes inflate simple features into AI miracles. Last month, for example, Twitter announced service updates to help protect users from low-quality and abusive tweets. The changes amounted to simple refinements to hide posts from blocked, muted, and new accounts, along with other, undescribed content filters. Nevertheless, some takes on these changes—which amount to little more than additional clauses in database queries—conclude that Twitter is “constantly working on making its AI smarter.”
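
For a sense of what “additional clauses in database queries” can look like in practice, here is a hypothetical sketch. The field names and the seven-day cutoff are invented, and this is not Twitter’s schema or code, just a toy version of the kind of filtering described.

```python
# Toy illustration of hiding posts from blocked, muted, and new accounts.
# Field names and the account-age cutoff are invented for this sketch.

from datetime import datetime, timedelta, timezone

def visible_tweets(tweets, blocked_ids, muted_ids, min_account_age_days=7):
    """Filter out tweets from blocked, muted, or very new accounts."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=min_account_age_days)
    return [
        t for t in tweets
        if t["author_id"] not in blocked_ids
        and t["author_id"] not in muted_ids
        and t["account_created_at"] <= cutoff  # hide brand-new accounts
    ]

now = datetime.now(timezone.utc)
tweets = [
    {"author_id": 1, "account_created_at": now - timedelta(days=1)},    # too new: hidden
    {"author_id": 2, "account_created_at": now - timedelta(days=400)},  # shown
]
print(visible_tweets(tweets, blocked_ids=set(), muted_ids=set()))
```

Each condition is just another filter clause, a refinement rather than a leap in machine intelligence.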

* * *

I asked my Georgia Tech colleague, the artificial intelligence researcher Charles Isbell, to weigh in on what “artificial intelligence” should mean. His first answer: “Making computers act like they do in the movies.” That might sound glib, but it underscores AI’s intrinsic relationship to theories of cognition and sentience. Commander Data poses questions about what qualities and capacities make a being conscious and moral—as do self-driving cars. A content filter that hides social media posts from accounts without profile pictures? Not so much. That’s just software.

Isbell suggests two features necessary before a system deserves the name AI. First, it must learn over time in response to changes in its environment. Fictional robots and cyborgs do this invisibly, by the magic of narrative abstraction. But even a simple machine-learning system like Netflix’s dynamic optimizer, which attempts to improve the quality of compressed video, takes data gathered initially from human viewers and uses it to train an algorithm to make future choices about video transmission.
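
As a concrete, if toy, picture of what “learns over time in response to its environment” means in Isbell’s sense, consider the sketch below. It is loosely inspired by the bitrate example but is not Netflix’s optimizer; the class, bitrates, quality scores, and update rule are invented for illustration.

```python
# A toy learner in the first sense Isbell describes: it updates its behavior
# as new observations arrive. The numbers and class are hypothetical.

class BitrateChooser:
    """Keeps a running quality estimate per bitrate and picks the current best."""

    def __init__(self, bitrates, learning_rate=0.2):
        self.estimates = {b: 0.0 for b in bitrates}
        self.lr = learning_rate

    def choose(self):
        # Exploit what has worked so far (a real system would also explore).
        return max(self.estimates, key=self.estimates.get)

    def update(self, bitrate, observed_quality):
        # Nudge the estimate toward the newly observed viewer-quality score.
        old = self.estimates[bitrate]
        self.estimates[bitrate] = old + self.lr * (observed_quality - old)

chooser = BitrateChooser([1000, 3000, 6000])
for quality, bitrate in [(0.4, 1000), (0.7, 3000), (0.9, 6000), (0.6, 6000)]:
    chooser.update(bitrate, quality)
print(chooser.choose())  # the choice shifts as feedback accumulates
```

A content filter with fixed rules, by contrast, never changes its behavior no matter what it sees, which is what keeps it on the “just software” side of the line.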

Isbell’s second feature of true AI: what it learns to do should be interesting enough that it takes humans some effort to learn. It’s a distinction that separates artificial intelligence from mere computational automation. A robot that replaces human workers to assemble automobiles isn’t an artificial intelligence, so much as a machine programmed to automate repetitive work. For Isbell, “true” AI requires that the computer program or machine exhibit self-governance, surprise, and novelty.

Griping about AI’s deflated aspirations might seem unimportant. If sensor-driven, data-backed machine learning systems are poised to grow, perhaps people would do well to track the evolution of those technologies. But previous experience suggests that computation’s ascendancy demands scrutiny. I’ve previously argued that the word “algorithm” has become a cultural fetish, the secular, technical equivalent of invoking God. To use the term indiscriminately exalts ordinary—and flawed—software services as false idols. AI is no different. As the bot author Allison Parrish puts it, “whenever someone says ‘AI’ what they’re really talking about is ‘a computer program someone wrote.’”

Writing at the MIT Technology Review, the Stanford computer scientist Jerry Kaplan makes a similar argument: AI is a fable “cobbled together from a grab bag of disparate tools and techniques.” The AI research community seems to agree, calling their discipline “fragmented and largely uncoordinated.” Given the incoherence of AI in practice, Kaplan suggests “anthropic computing” as an alternative—programs meant to behave like or interact with human beings. For Kaplan, the mythical nature of AI, including the baggage of its adoption in novels, film, and television, makes the term a bogeyman to abandon more than a future to desire.

* * *

Kaplan keeps good company—when the mathematician Alan Turing accidentally invented the idea of machine intelligence almost 70 years ago, he proposed that machines would be intelligent when they could trick people into thinking they were human. At the time, in 1950, the idea seemed unlikely; even though Turing’s thought experiment wasn’t limited to computers, the machines of the day still took up entire rooms just to perform relatively simple calculations.

But today, computers trick people all the time. Not by successfully posing as humans, but by convincing them that they are sufficient alternatives to other tools of human effort. Twitter and Facebook and Google aren’t “better” town halls, neighborhood centers, libraries, or newspapers—they are different ones, run by computers, for better and for worse. The implications of these and other services must be addressed by understanding them as particular implementations of software in corporations, not as totems of otherworldly AI.

On that front, Kaplan could be right: abandoning the term might be the best way to exorcise its demonic grip on contemporary culture. But Isbell’s more traditional take—that AI is machinery that learns and then acts on that learning—also has merit. By protecting the exalted status of its science-fictional orthodoxy, AI can remind creators and users of an essential truth: today’s computer systems are nothing special. They are apparatuses made by people, running software made by people, full of the feats and flaws of both.

* * *

I’m a big fan of Murray Rothbard and have read pretty much everything that he wrote, which was a lot, as he was a prodigious author. I came across this article, which for the most part was very flattering but contained these two criticisms.

Unfortunately, Rothbard also sidesteps some difficult problems. The primary argument for having a state at all is that the state can overcome the public goods/free rider problem, while private entrepreneurs cannot. Rather than addressing this argument, Rothbard effectively denies the problem exists, which is no answer at all and certainly does nothing to assuage the doubts of critics. Similarly, in response to the challenge that his proposed private protective agencies would fight among themselves and oppress people, he simply asserts this would be too costly for them and they’d realize peaceful cooperation and trade are more profitable.

Well, no. One could use this logic to “prove” that Al Capone would never order the St. Valentine’s Day massacre of the North Side gang, or that Hitler would never invade Poland. There’s nothing special about whether we call an organization a “state” or not that changes the benefit-cost analyses of the leaders in these matters. Perhaps it’s possible that under certain circumstances an anarchic society could be peaceful and stable, but Rothbard simply ignored the most difficult problems for his theory.

That, to me, illustrates Rothbard’s primary flaw. It seems to me that for him, no argument is too shallow so long as it leads him to a libertarian conclusion. His dedication to liberty is admirable, but as the 19th century French economist Frederic Bastiat warned, “The worst thing that can happen to a good cause is, not to be skillfully attacked, but to be ineptly defended.” In my view, by not taking arguments for a minimal state sufficiently seriously, Rothbard ends up deceiving himself and supposing that the case for his anarcho-capitalism is airtight. I think it is not, and there are other examples of this sort of error in Rothbard’s economic, political, and historical writing.[3]

* * *

I have just, 5 mins ago, finished the ‘Professionals’ course for lawyers in NZ. It has taken 3 months and has drained me of energy, coming as it did immediately after completing the law degree.

* * *

Well, this comes as no surprise. With Republicans now controlling the Senate, House and White House, they have decided that they didn’t really mean what they said about states’ rights. And they didn’t really mean what they said about personal responsibility.

Out of the House of Representatives, courtesy of Rep. Steve King of Iowa, comes a bill (H.R. 1215) to grant immunity to doctors and hospitals if they negligently injure someone.

Given that 210,000 to 440,000 people are estimated to die each year from medical malpractice — a number that dwarfs the 30,000+ killed by guns — you should care about the subject.

Cynically named as a bill to “improve patient access to health care services” by “reducing the excessive burden the liability system places on the health care delivery system,” the King bill slaps an artificial $250,000 cap on awards for pain and suffering in both federal and state cases, among many other things.

Did the hospital negligently operate on the good leg instead of the bad one? 250K.

Did you lose the good leg? The same 250K.

Did you also lose your previously bad leg because they operated on the wrong  one? The same 250K.

And it comes as no surprise to anyone that lawyers won’t actively jump at the chance to spend hundreds of hours and tens of thousands of dollars on a suit that is so artificially limited. Thus, de facto immunity for most pain and suffering causes of action from medical malpractice.

How does King go all federal on this, going deep into what is most often a state cause of action? By stating that it will apply to anyone that receives health care through a “federal program, subsidy, or tax benefit.” [Copy Of Bill] That means anyone who uses Medicaid, Medicare, veterans health plans or Obamacare.

And by “tax benefit,” it may mean anyone who has a deduction for healthcare of any kind.  Essentially, the idea is to make sure that no one, anywhere in the country, can ever bring a meaningful action for medical malpractice.

The losers in this, of course, are the patients and their families who have already been injured once. And the taxpayers, who are now forced to pick up the tab for the rest of the loss.

King’s bill is based on a faulty premise, that doctors and hospitals order unnecessary tests to protect against malpractice claims. This is the “defensive medicine” theory of why medical costs go up.

But that theory was tested in Texas, and found to fail. As I noted in 2011, the $250,000 Texas cap didn’t stop medical increases. In fact, costs went up faster in Texas than in states that didn’t have a cap.

While doctors may have saved money with fewer suits, and insurance companies may have made buckets more money, it didn’t stop health care costs from rising.

The Texas Experiment was also supposed to bring more doctors to Texas and more to rural counties. It didn’t work. Even noted tort reformer Ted Frank wrote, in 2012, that the data from Texas “substantially undermines the empirical case for the conventional wisdom that Texas’s 2003 reforms against medical malpractice lawsuits attracted more doctors to Texas.” Ouch.

Frank went on to conclude:

I, for one, am going to stop claiming that Texas tort reform increased doctor supply without better data demonstrating that.

The real kicker to the artificial caps, of course, is that the taxpayers then get saddled with the costs of the injured person instead of the ones that negligently caused the injury. That’s right, saddling the taxpayers with the costs is a form of socialism. And it is being promoted by alleged conservatives.

The myth that tort “reform” reduces costs was debunked a while ago. As Steven Cohen noted in Forbes two years ago regarding additional studies, there was no reduction in expensive tests in states with caps:

That myth was dispatched by the recent publication of a major study in the New England Journal of Medicine. A team of five doctors and public health experts found that tort reform measures passed in three states – specifically designed to insulate emergency room doctors from lawsuits — did nothing to reduce the number of expensive tests and procedures those ER doctors prescribed.

Cohen went on to summarize that none of the “expected” reductions in health care costs came to fruition:

This latest study follows numerous others that deflated other tort reform myths: that making it harder for victims to file medical malpractice lawsuits would reduce the number of “frivolous” suits that “clog the courts;” that imposing caps on the damages victims could receive would rein in “out of control” juries that were awarding lottery-size sums to plaintiffs; and that malpractice insurance premiums would fall, thereby reversing a doctor shortage caused by specialists “fleeing the profession.”

Trump is now on the bandwagon also, or at least whoever wrote this portion of his speech last night:

“Fourthly, we should implement legal reforms that protect patients and doctors from unnecessary costs that drive up the price of insurance — and work to bring down the artificially high price of drugs, and bring them down immediately.”

This oblique reference — Trump never deals in details — was presumably put there by his staff, as I know of no other Trump comment on the subject of medical malpractice.

But wait, there’s more! Tort “reform,” you see, has never saved a life. But has it ever killed anyone? Answer: yes!

I addressed that subject a few years back by pointing to plunging payouts at Columbia Presbyterian Hospital / Weill Cornell Medical Center. A study found that “instituting a comprehensive obstetric patient safety program decreased compensation payments and sentinel events resulting in immediate and significant savings.”

How much did they save by instituting new safety procedures — in pure dollars and cents, leaving aside the human misery of injury? “The 2009 compensation payment total constituted a 99.1% drop from the average 2003-2006 payments (from $27,591,610 to $250,000).”

You read that right: 99.1% drop. Based on a safety program, not tort “reform.”
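
For anyone who wants to check the arithmetic behind that figure, the quoted numbers work out as advertised:

```python
# Quick check of the 99.1% figure quoted above.
before = 27_591_610  # average 2003-2006 compensation payments
after = 250_000      # 2009 compensation payment total
print(round((before - after) / before * 100, 1))  # 99.1
```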

Now if Congress wants to take away the incentive for safety, and just give immunity, you can expect continued deaths. The results should have been screamed from the rooftops:

Safety improvements = fewer malpractice payments and healthier patients.

Tort reform = more patient deaths.

Now let’s return to politics, shall we? I just want to close by asking conservatives a few questions, and do so with the knowledge that medical protectionism has already been a proven failure in reducing health care costs:

1. Do you believe in limited government?

2.  Is giving immunity your idea of limited government?

3.  Do you believe in states’ rights? Would federal tort “reform” legislation that limits the state-run civil justice systems run contrary to that concept?

4.  Do you believe in personal responsibility?

5.  Do you want to limit the responsibility of negligent parties and shift the burden to taxpayers?

6.  If you believe in having the taxpayers pay for injuries inflicted by others, how much extra in taxes are you willing to authorize to cover those costs?

7.  Is shifting the cost of injuries away from those responsible, and on to the general public, a form of socialism?

* * *

The European Union, which only 17 years ago set a goal to “leapfrog” the U.S. in economic growth and innovation, is today on the verge of dissolution. Don’t take our word for it — they’re the ones saying it.

In a recent White Paper, the Euro-Poobahs, led by European Commission President Jean-Claude Juncker, sketched out five scenarios for the future of the EU. One proposal would essentially dissolve the current bureaucratic structure of the EU, and replace it with what once was the sole reason for its existence: A European single market. It’s probably the only hope.

That this is being discussed now tells you everything you need to know about the EU’s dire condition today.

The truth is, none of the 500 million people in the 27 European countries that belong to the massively indebted EU like being ruled by an unaccountable bureaucracy. It has become not merely oppressive, but actively dangerous, advising countries to do economically foolish things and letting masses of “refugees” from the Mideast and Northern Africa migrate to Europe — destroying communities, disrupting law and order, and creating a massive welfare state that requires ever-higher taxes to support.

The truth is, the EU’s top-heavy bureaucrats mandate everything from the ingredients in Parma ham and fruit jam to the size of vacuum cleaners and how bent a banana or a cucumber can be. Other absurd examples number in the thousands, far too many to list.

Even an exasperated Pope Francis has weighed in, saying “bureaucracy is crushing Europe.” Yes, it’s that bad.

Worst of all, the EU is not even a democracy in any meaningful sense of the word. This is what happens when bureaucracy, not people, rules.

Take the elected European Parliament. Its official seat is in Strasbourg, France, but the bureaucracy, and most of the Parliament’s committee work, is in Brussels. So about once a month, the whole Parliament — all 751 members and 9,000 or so others, including staff, lobbyists and journalists — pulls up stakes and decamps to Strasbourg. And the lawmakers “make” no laws at all. They only vote on laws from the nonelected European Commission — a virtual dictatorship of bureaucrats.

Is it any wonder that dismantling the whole mess is now viewed as a real possibility? Britain is thriving after voting to leave the EU. Maybe the rest of the EU can, too.

And, yes, this is relevant to Americans today. For one, the EU has not “leapfrogged” the U.S. The average American today produces about $52,000 in real GDP. The average EU citizen produces about $35,000. And the gap is growing wider.

Even so, the stagnant, dysfunctional EU embodies the same vision American progressives have for the U.S. — bureaucratized, undemocratic, heavy-handed and inefficient, soulless socialism-lite.

The lesson is, Europe would be wise to dismantle the EU while it still has the chance, and the U.S. would be wise not to repeat the EU’s failures.