

Back when Ronald Reagan was president, he pointed to many pages worth of “Help Wanted” ads in the Washington Post as one of many pieces of evidence that his economic policies were working.  His enemies on the left predictably responded that Reaganomics was merely producing jobs of the “burger flipper” variety.  The market that was the voting booth revealed Reagan’s critics as hopelessly deluded: with elections always and everywhere about the economy, he won re-election in 1984 in landslide fashion, 49 states to 1.

But to show readers just how similar and tribal left and right in the U.S. have become, consider the rollout of Flippy.  Flippy is a robot “employee” of the Caliburger chain, and he is apparently able to cook as many as 2,000 hamburgers per day.  What’s interesting about all this is that while Reagan’s backers once defended him against the assertion that his policies were only creating fast-food jobs, modern members of the right are now criticizing lefty policies that are allegedly – drumroll please – destroying those same jobs.  It would be funny if it weren’t so sad.

In response to attempts to automate cooking, the right has happened upon what it naively presumes is political gold.  It promotes the illusion that the minimum wage mindlessly supported by the left is behind the rise of Flippy.  One conservative op-ed claimed that “raising the minimum wage will accelerate this [automation] trend by making even costly robots a better deal than increasingly expensive, minimally skilled workers.” The jobs the right decried in the ‘80s are apparently now really good ones that would be preserved if it weren’t for the evil left and its endless desire to foist wage floors on us.  In truth, Flippy reminds us how ridiculous both sides are.

Up front, the minimum wage should be abolished.  People should be free to transact with whomever they want, and at any wage.  They should even be free to pay companies and individuals for the right to work for them.  Many would do just that if they could apprentice under Jeff Bezos, Anna Wintour, Nick Saban, or Jose Andres.

Still, what’s missed by the right is that Flippy and others like him would be even more common if the minimum wage were zero.  As for the loony left, Flippy is a reminder that companies don’t seek out low-wage workers as much as they go to great expense to avoid them.  Caliburger’s experimentation with Flippy vivifies each of the previous assertions.

Regarding the silly notion that excessive wage floors brought on Flippy, let’s be serious.  What led to Flippy is the basic truth revealed by Caliburger executives that human burger flippers were tough to keep employed.  Simply stated, they were quitting too often.  Well, of course they were.  Low-wage jobs have long been marked by high turnover.  That’s why they’re low-wage and entry level.  People don’t stay in them long.  They attain skills, then trade them for better pay.  The turnover is very expensive for businesses given the time-wasting cost of training new employees.  Automation is a logical response to turnover; turnover that – if anything – would be exacerbated if the minimum wage were zero.  Think about it.  Are people more likely to leave low wage, or high-salaried jobs?

Furthermore, if conservatives had been able to read about Flippy free of a desire to score points against their surely mindless lefty adversaries, they would know that one Flippy costs $100,000.  Think about the previous number for a minute.  The cost of Flippy loudly reveals how little the wrongheaded minimum wage hikes have to do with Caliburger’s purchase of same.  $100,000 amounts to much more than a forced wage increase.  Flippy is a reminder of just how expensive employee turnover is.  To reduce that cost, Caliburger is spending in the six figures.

All of which brings us to the popular view on the left that investors and corporations migrate to the lowest wages possible.  What a laugh, except that such economic illiteracy is disturbing.  How could even the ignorant believe what is so obviously incorrect? Back to reality, much as Caliburger has found that low-wage workers are incredibly costly thanks to endless turnover, so do businesses of all stripes strive to pay wages and salaries that keep workers around.

Turnover is once again very expensive.  It’s so expensive that a hamburger chain is willing to spend $100,000 on a robot in order to avoid what is a costly headache.  All this is a reminder that cheap labor is the opposite of cheap, but also a reminder that handsomely rewarding entry-level work would be the path to bankruptcy for businesses.  It would be simply because entry-level workers don’t want to do entry-level work.  They’re once again looking to move up in the world. They want their work skills to evolve.  Walmart isn’t greedily paying low hourly wages; rather, even high pay for entry-level workers at the retail giant would still come paired with costly turnover. To be clear, low-wage workers are paid low wages precisely because their inevitable departures are very expensive.

All of this helps explain why low-wage workers should lovingly embrace the robot.  That robots are job-destroyers speaks to their genius.  Robots have the potential to destroy the entry-level jobs that workers plainly don’t want.  If robots can erase what’s not desired, they won’t erase work as much as the definition of “entry level” will change.  And it will change for the better as first-time jobs involve the exhibition of more in the way of skills, all at higher pay.

Until then, left and right need to try to be serious.  Fun as it is to expose either side, the wage/robot debate reveals each tribe as clueless.  Flippy isn’t taking our “burger flipper” jobs, and the fact that he isn’t reminds us how much businesses of all stripes would love to compensate exponentially more fulfilled workers at exponentially higher pay.




All day long, we’re inundated by interruptions and alerts from our devices. Smartphones buzz to wake us up, emails stream into our inboxes, notifications from coworkers and far away friends bubble up on our screens, and “assistants” chime in with their own soulless voices.

Such interruptions seem logical to our minds: we want technology to help with our busy lives, ensuring we don’t miss important appointments and communications.

But our bodies have a different view: These constant alerts jolt our stress hormones into action, igniting our fight or flight response; our heartbeats quicken, our breathing tightens, our sweat glands burst open, and our muscles contract. That response is intended to help us outrun danger, not answer a call or text from a colleague.

We are simply not built to live like this.

Our apps are taking advantage of our hard-wired needs for security and social interaction, and researchers are starting to see how terrible this is for us. A full 89% of college students now report feeling “phantom” phone vibrations, imagining their phone is summoning them to attention when it hasn’t actually buzzed. Another 86% of Americans say they check their email and social media accounts “constantly,” and that it’s really stressing them out.

Endocrinologist Robert Lustig tells Business Insider that notifications from our phones are training our brains to be in a nearly constant state of stress and fear by establishing a stress-fear memory pathway. And such a state means that the prefrontal cortex, the part of our brains that normally deals with some of our highest-order cognitive functioning, goes completely haywire, and basically shuts down.

“You end up doing stupid things,” Lustig says. “And those stupid things tend to get you in trouble.”

Your brain can only do one thing at a time

Scientists have known for years what people often won’t admit to themselves: humans can’t really multi-task. This is true for almost all of us: about 97.5% of the population. The other 2.5% have freakish abilities; scientists call them “super taskers,” because they can actually successfully do more than one thing at once. They can drive while talking on the phone, without compromising their ability to gab or shift gears.


But since only about 1 in 50 people are super taskers, the rest of us mere mortals are really only focusing on just one thing at a time. That means every time we pause to answer a new notification or get an alert from a different app on our phone, we’re being interrupted, and with that interruption we pay a price: something called a “switch cost.”

Sometimes the switch from one task to another costs us only a few tenths of a second, but in a day of flip-flopping between ideas, conversations, and transactions on a phone or computer, our switch costs can really add up, and make us more error-prone, too. Psychologist David Meyer who’s studied this effect estimates that shifting between tasks can use up as much as 40% of our otherwise productive brain time.
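The compounding arithmetic is easy to sketch. The interruption rate and per-switch cost below are hypothetical illustrations, not figures from Meyer’s research:

```python
# Illustrative only: the interruption rate and per-switch cost are hypothetical,
# chosen to show how small "switch costs" compound over a day.
focused_hours = 8.0               # length of a working day
switches_per_hour = 30            # hypothetical: one interruption every two minutes
cost_per_switch_min = 0.5         # hypothetical: 30 seconds to refocus each time

lost_hours = focused_hours * switches_per_hour * cost_per_switch_min / 60
share_lost = lost_hours / focused_hours
print(f"{lost_hours:.1f} hours lost per day ({share_lost:.0%} of focused time)")
```

Even these modest made-up numbers eat a quarter of the working day, which is in the same neighborhood as Meyer’s 40% estimate.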

Every time we switch tasks, we’re also shooting ourselves up with a dose of the stress hormone cortisol, Lustig says. The switching puts our thoughtful, reasoning prefrontal cortex to sleep, and kicks up dopamine, our brain’s addiction chemical.

In other words, the stress that we build up by trying to do many things at once when we really can’t is making us sick, and causing us to crave even more interruptions, spiking dopamine, which perpetuates the cycle.

More phone time, lazier brain

Our brains can only process so much information at a time, about 60 bits per second.

The more tasks we have to do, the more we have to choose how we want to use our precious brain power. So it’s understandable that we might want to pass some of our extra workload to our phones or digital assistants.

But there is some evidence that delegating thinking tasks to our devices could not only be making our brains sicker, but lazier too.


Researchers have found smarter, more analytical thinkers are less active on their smartphone search engines than other people. That doesn’t mean that using your phone for searching causes you to be “dumber,” it could just be that these smarties are searching less because they know more. But the link between less analytical thinking and more smartphone scrolling is there.

We also know that reading up on new information on your phone can be a terrible way to learn. Researchers have shown that people who take in complex information from a book, instead of on a screen, develop deeper comprehension, and engage in more conceptual thinking, too.

Brand new research on dozens of smartphone users in Switzerland also suggests that staring at our screens could be making both our brains and our fingers more jittery.

In research published this month, psychologists and computer scientists have found an unusual and potentially troubling connection: the more tapping, clicking and social media posting and scrolling people do, the “noisier” their brain signals become. That finding took the researchers by surprise. Usually, when we do something more often, we get better, faster and more efficient at the task.

But the researchers think there’s something different going on when we engage in social media: the combination of socializing and using our smartphones could be putting a huge tax on our brains.

Social behavior, “may require more resources at the same time,” study author Arko Ghosh said, from our brains to our fingers. And that’s scary stuff.

Should being on your phone in public be taboo?

Despite these troubling findings, scientists aren’t saying that enjoying your favorite apps is automatically destructive. But we do know that certain types of usage seem especially damaging.

Checking Facebook has been proven to make young adults depressed. Researchers who’ve studied college students’ emotional well-being find a direct link: the more often people check Facebook, the more miserable they are. But the incessant, misery-inducing phone checking doesn’t just stop there. Games like Pokemon GO or apps like Twitter can be addictive, and will leave your brain craving another hit.

Addictive apps are built to give your brain rewards, a spike of pleasure when someone likes your photo or comments on your post. Like gambling, they do it on an unpredictable schedule. That’s called a “variable ratio schedule,” and it’s something the human brain goes crazy for. This technique isn’t just used by social media; it’s all over the internet. Airline fares that drop at the click of a mouse. Overstocked sofas that are there one minute and gone the next. Facebook notifications that change based on where our friends are and what they’re talking about. We’ve gotta have it all, we’ve gotta have more, and we’ve gotta have it now. We’re scratching addictive itches all over our screens.
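A toy simulation makes the mechanism concrete. The payoff probability below is a hypothetical stand-in for whatever a real app uses; the point is that rewards arrive at a steady average rate but at unpredictable moments:

```python
import random

# Toy sketch of a variable-ratio reward schedule: each "check" of the app
# pays off with a fixed probability, so the reward timing is unpredictable
# even though the average rate is constant.
random.seed(1)                  # fixed seed so the sketch is reproducible
p_reward = 0.25                 # hypothetical payoff probability per check
checks = 20
rewards = [i for i in range(1, checks + 1) if random.random() < p_reward]
print("rewarded on checks:", rewards)
```

The irregular gaps between rewarded checks are what make the schedule so compelling: the next check always might be the one that pays off.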

Lustig says that even these kinds of apps aren’t inherently evil. They only become a problem when they are given free rein to interrupt us, tugging at our brains’ desire for tempting treats, tricking our brains into always wanting more.

“I’m not anti technology per se,” he counters. “I’m anti variable-reward technology. Because that’s designed very specifically to make you keep looking.”

Lustig says he wants to change this by drawing boundaries around socially acceptable smartphone use. If we can make smartphone addiction taboo (like smoking inside buildings, for example), people will at least have to confine their phone time to designated places and times, giving their brains a break.

“My hope is that we will come to a point where you can’t pull your cell phone out in public,” Lustig says.


U.S. oil and natural gas are on the verge of transforming the world’s energy markets for a second time, further undercutting Saudi Arabia and Russia.

The widespread adoption of fracking in the U.S. opened billions of barrels of oil and trillions of cubic feet of natural gas to production and transformed the global energy sector in a matter of a few years. Now, a leading global energy agency says U.S. natural gas is about to do it again.

The International Energy Agency (IEA) said in a new forecast this week that growth in U.S. oil production will cover 80% of new global demand for oil in the next three years. U.S. oil production is expected to increase nearly 30% to 17 million barrels a day by 2023 with much of that growth coming from oil produced through fracking in West Texas.

“Non-OPEC supply growth is very, very strong, which will change a lot of parameters of the oil market in the next years to come,” Fatih Birol, the head of the International Energy Agency, told reporters at the CERAWeek energy conference hosted by IHS Markit. “We are going to see a major second wave of U.S. shale production coming.”

Republican politicians and policymakers celebrated the news and sought to take credit for the development. Trump has sought to portray himself as a savior of the U.S. oil and gas industry, opening up federal lands to oil and gas development at a breakneck pace and undoing Obama-era climate regulations.

But analysts attributed the growth in U.S. production to market factors rather than Republican policy. In the report, the IEA forecast that higher oil prices and increased demand from China and India will trigger increased U.S. output to make up the gap. The IEA also predicts that demand for petrochemicals used in plastic will grow overall demand for oil.

Still, the White House sent out a press release highlighting the report on Monday. Republican Sen. Dan Sullivan of Alaska told reporters at CERAWeek that Republican-dominated Washington has transformed the federal government from being “basically hostile” to oil and gas under President Obama to actively supporting the industry’s growth. (In reality, Obama promoted natural gas as part of an “all of the above” energy strategy and his signature climate change regulation would have benefited the fossil fuel.)

“There’s never been a more exciting time in the American energy sector,” Sullivan told oil and gas industry insiders. “The American energy renaissance that so many of you in this room are responsible for is now in full swing.”

A second rise in U.S. oil production comes with significant implications for both the global energy markets and geopolitics more broadly. The U.S. supply of oil and natural gas has contributed to political upheaval in the Middle East, creating new competition for oil exports, and in Russia, a leading supplier of natural gas to Europe.

Alexei Texler, Russia’s first deputy energy minister, acknowledged Tuesday that U.S. shale “poses certain risk” but said his country would continue collaborating with partners in Saudi Arabia and elsewhere in response.

“In a shale revolution world, no country is an island,” said Birol. “Everyone will be affected.”


PAYING for pensions is like one of those never-ending historical wars: a confusing series of small battles and skirmishes that can obscure the long-term trend. The latest conflict is in Britain, where university lecturers are indulging in strike action over changes to their future benefits.

Let us start by making the long-term trends clear.

1. People are living longer and retirement ages have not kept pace. This increases the cost of paying pensions.

2. Interest rates and bond yields have fallen. This increases the cost of generating an income from a given pension pot.

3. Private sector employers have reacted to this cost by closing their defined-benefit (DB) schemes (which link pensions to salaries) and switching to defined-contribution (DC) schemes (which simply generate a savings pot).

British universities have reacted in a similar way; they are proposing switching future benefits to a DC basis. To avoid confusion, this means that past benefits will be unaltered; if you are 50, and have worked for 25 years, you will still have 25 years of DB benefits. But since pensions are deferred pay, it does mean that the total benefits of academics are being cut so one can see why they are upset.

But there is still plenty of confusion, as this piece in the Independent illustrates all too well (to cite just one example, in a piece about workplace benefits, it quotes OECD numbers on state-pension replacement rates). There are three big areas where the debate gets muddled.

1. Investment risk. If there is a pension fund, then there is investment risk regardless of whether this is a DB or a DC scheme. The difference is on whom the risk falls. In a DC scheme, it does fall on the employee. In a DB scheme, it rests largely on the employer. But in a sector heavily funded by the public sector that could mean the taxpayer.

2. Accounting. The real cost of pensions can’t be measured in cash flow terms: how much is being paid out this year, as opposed to the contributions being put in. They are a long-term commitment in which one must work out the cost of future benefits, allowing for longevity, inflation etc. These future payments must then be discounted at some rate to get to a present value.

This column has always argued for the use of a bond yield as the discount rate. That is because pensions are a debt which must be paid. The problem is that low bond yields have forced up the present value of future benefits and widened deficits. The unions in the university case argue this is too conservative and that one can reasonably expect higher investment returns. But this rather contradicts another element of their case. On the one hand, they are saying that DC pensions are too risky for employees because the markets might not deliver; on the other hand, they are saying the markets will be fine so the employer should keep promising DB.
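The discounting mechanics can be sketched in a few lines. The benefit, payout period, and the two yields below are hypothetical; real valuations also model longevity and inflation:

```python
# Sketch: present value of a fixed pension promise under two discount rates.
# All figures are hypothetical illustrations.
def present_value(payment, years, rate):
    """PV of a level annual payment for `years` years, discounted at `rate`."""
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

pension = 20_000   # hypothetical annual benefit
years = 25         # hypothetical payout period

pv_high = present_value(pension, years, 0.05)  # e.g. a 5% bond-yield era
pv_low = present_value(pension, years, 0.02)   # e.g. a 2% bond-yield era
print(f"PV at 5%: {pv_high:,.0f}   PV at 2%: {pv_low:,.0f}")
```

The same promise is nearly 40% more expensive at the lower yield, which is why falling bond yields have widened deficits.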

In the US, public pension schemes do assume a high rate of return on their investments and they are in a mess, with a $4trn deficit. In one school district I visited, the entire budget increase was eaten up by higher pension payments.

The true test of a pension cost is “what would it cost to get rid of it”. Insurance companies will take over pension schemes but when they do, they use a bond yield as their discount rate. This buyout basis makes deficits look bigger.

3. With public pensions, the rich tend to subsidise the poor. They are also run on a pay-as-you-go basis, with today’s workers paying the pensions of current retirees. What you put in is not what you get out. But in a DC scheme, contributions are very important. Yes, returns matter a lot. But the real reason that DC pensions are lower is that total contributions are smaller; that is why employers are switching, after all. In the US, some employers make no contribution at all. In Britain, matching is fairly common. Still, the ONS reckons that total contributions averaged 21% of payroll in British DB schemes and just 4% in DC.
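Compounding those ONS contribution rates over a hypothetical career makes the gap vivid. The salary, horizon, and return below are illustrative assumptions:

```python
# Sketch: why contribution rates dominate DC outcomes. The salary, career
# length, and return are hypothetical; 21% and 4% are the ONS averages
# for British DB and DC schemes cited above.
def pot_at_retirement(salary, contrib_rate, years, annual_return):
    """Future value of a constant annual contribution, compounded yearly."""
    pot = 0.0
    for _ in range(years):
        pot = pot * (1 + annual_return) + salary * contrib_rate
    return pot

salary, years, ret = 30_000, 40, 0.04
db_style = pot_at_retirement(salary, 0.21, years, ret)  # 21% of payroll
dc_style = pot_at_retirement(salary, 0.04, years, ret)  # 4% of payroll
print(f"21% pot: {db_style:,.0f}   4% pot: {dc_style:,.0f}")
```

Identical returns, identical careers: the 21% pot still ends up 5.25 times larger, purely because more money went in.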

That is the big issue; not investment risk and not management costs. As it happens, the university scheme is offering a fairly generous 13.25% from the employers. But that is still a lot less than they might be expected to contribute to bring the DB scheme back into balance.

So the real issue for workers is this: how much is the employer contributing? The same question applies in a sector heavily funded by taxpayers. If the scheme requires more money, where will it come from? Higher taxpayer grants? Higher student fees (which will lead to more taxpayer support if the fees are ultimately unpaid)? Or worse services?


A month ago, I noted that prevailing valuation extremes implied negative total returns for the S&P 500 on a 10-12 year horizon, and losses on the order of two-thirds of the market’s value over the completion of the current market cycle.  With our measures of market internals constructive, on balance, we had maintained a rather neutral near-term outlook for months, despite the most extreme “overvalued, overbought, overbullish” syndromes in U.S. history.  Still, I noted, “I believe that it’s essential to carry a significant safety net at present, and I’m also partial to tail-risk hedges that kick-in automatically as the market declines, rather than requiring the execution of sell orders.  My impression is that the first leg down will be extremely steep, and that a subsequent bounce will encourage investors to believe the worst is over.”

On February 2nd, our measures of market internals clearly deteriorated, shifting market conditions to a combination of extreme valuations and unfavorable market internals, coming off of the most extremely overextended conditions we’ve ever observed in the historical data. At present, I view the market as a “broken parabola” – much the same as we observed for the Nikkei in 1990, the Nasdaq in 2000, or for those wishing a more recent example, Bitcoin since January.

Two features of the initial break from speculative bubbles are worth noting. First, the collapse of major bubbles is often preceded by the collapse of smaller bubbles representing “fringe” speculations. Those early wipeouts are canaries in the coalmine. For example, in July 2000, the Wall Street Journal ran an article titled (in the print version) “What were we THINKING?” – reflecting on the “arrogance, greed, and optimism” that had already been followed by the collapse of dot-com stocks. My favorite line: “Now we know better. Why didn’t they see it coming?” Unfortunately, that article was published at a point where the Nasdaq still had an 80% loss (not a typo) ahead of it.

Similarly, in July 2007, two Bear Stearns hedge funds heavily invested in sub-prime loans suddenly became nearly worthless. Yet that was nearly three months before the S&P 500 peaked in October, followed by a collapse that would take it down by more than 55%.

Observing the sudden collapses of fringe bubbles today, including inverse volatility funds and Bitcoin, my impression is that we’re actually seeing the early signs of risk-aversion and selectivity among investors. The speculation in Bitcoin, despite issues of scalability and breathtaking inefficiency, was striking enough. But the willingness of investors to short market volatility even at 9% was mathematically disturbing.

See, volatility is measured by the “standard deviation” of returns, which describes the spread of a bell curve, and can never become negative.  Moreover, standard deviation is annualized by multiplying by the square root of time.  An annual volatility of 9% implies a daily volatility of about 0.6%, which is like saying that a 2% market decline should occur in fewer than 1 in 2000 trading sessions, when in fact they’ve historically occurred more often than 1 in 50.  The spectacle of investors eagerly shorting a volatility index (VIX) of 9, in expectation that it would go lower, wasn’t just a sideshow in some esoteric security.  It was the sign of a market that had come to believe that stock prices could do nothing but advance in an upward parabolic trend, with virtually no risk of loss.
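That arithmetic can be checked directly under a normal-distribution assumption (252 trading days a year); the fat tails of real returns are exactly the point being made here:

```python
import math

# Check the annualization arithmetic under a normal-returns assumption.
# Real return distributions have fatter tails, so the true frequency of
# 2% declines is far higher than this calculation implies.
annual_vol = 0.09
daily_vol = annual_vol / math.sqrt(252)        # scale by sqrt of time

z = 0.02 / daily_vol                           # a 2% decline, in sigmas
p_decline = 0.5 * math.erfc(z / math.sqrt(2))  # normal tail: P(return < -2%)

print(f"daily vol = {daily_vol:.2%}, a 2% drop = {z:.1f} sigma, "
      f"about 1 in {1 / p_decline:,.0f} sessions")
```

The normal model puts a 2% daily decline at roughly a 3.5-sigma event, i.e. rarer than 1 in 2000 sessions, while history delivers them more often than 1 in 50.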

As I’ve emphasized in prior market comments, valuations are the primary driver of investment returns over a 10-12 year horizon, and of prospective losses over the completion of any market cycle, but they are rather useless indications of near-term returns. What drives near-term outcomes is the psychological inclination of investors toward speculation or risk-aversion. We infer that preference from the uniformity or divergence of market internals across a broad range of securities, sectors, industries, and security-types, because when investors are inclined to speculate, they tend to be indiscriminate about it. This has been true even in the advancing half-cycle since 2009.
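The 10-12 year arithmetic behind such estimates can be sketched as follows; the growth rate, dividend yield, and valuation multiples are hypothetical inputs for illustration, not the author’s actual estimates:

```python
# Sketch of valuation-based return arithmetic: if a valuation multiple
# mean-reverts over T years while fundamentals grow at rate g, the implied
# annual total return is roughly (1+g) * (v_norm/v_now)**(1/T) - 1 + yield.
# All inputs below are hypothetical.
g = 0.04                    # nominal growth of fundamentals
T = 12                      # horizon in years
v_now, v_norm = 3.0, 1.0    # current vs. historically normal multiple
div_yield = 0.02            # dividend yield

implied_annual = (1 + g) * (v_norm / v_now) ** (1 / T) - 1 + div_yield
print(f"implied annual total return ~ {implied_annual:.1%}")
```

With a multiple three times its norm, even healthy growth and dividends leave the implied 12-year return negative, which is how extreme valuations translate into poor long-horizon prospects.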

The only difference in recent years was that, unlike other cycles where extreme “overvalued, overbought, overbullish” features of market action reliably warned that speculation had gone too far, these syndromes proved useless in the face of zero interest rates. Evidently, once interest rates hit zero, so did the collective IQ of Wall Street. We adapted incrementally, by placing priority on the condition of market internals, over and above those overextended syndromes. Ultimately, we allowed no exceptions.

The proper valuation of long-term discounted cash flows requires the understanding that if interest rates are low because growth rates are also low, no valuation premium is “justified” by the low interest rates at all. It requires consideration of how the structural drivers of GDP growth (labor force growth and productivity) have changed over time.

Careful, value-conscious, historically-informed analysis can serve investors well over the complete market cycle, but that analysis must also include investor psychology (which we infer from market internals). In a speculative market, it’s not the understanding of valuation, or economics, or a century of market cycles that gets you into trouble. It’s the assumption that anyone cares.

The important point is this: Extreme valuations are born not of careful calculation, thoughtful estimation of long-term discounted cash flows, or evidence-based reasoning. They are born of investor psychology, self-reinforcing speculation, and verbal arguments that need not, and often do not, hold up under the weight of historical data. Once investor preferences shift from speculation toward risk-aversion, extreme valuations should not be ignored, and can suddenly matter to their full extent. It appears that the financial markets may have reached that point.

A second feature of the initial break from a speculative bubble, which I observed last month, is that the first leg down tends to be extremely steep, and a subsequent bounce encourages investors to believe that the worst is over. That feature is clearly evident when we examine prior financial bubbles across history. Dr. Jean-Paul Rodrigue describes an idealized bubble as a series of phases, including that sort of recovery from the initial break, which he describes as a “bull trap.”

I continue to expect the S&P 500 to lose about two-thirds of its value over the completion of the current market cycle. With market internals now unfavorable, following the most offensive “overvalued, overbought, overbullish” combination of market conditions on record, our market outlook has shifted to hard-negative. Rather than forecasting how long present conditions may persist, I believe it’s enough to align ourselves with prevailing market conditions, and shift our outlook as those conditions shift. That leaves us open to the possibility that market action will again recruit the kind of uniformity that would signal that investors have adopted a fresh willingness to speculate. We’ll respond to those changes as they arrive (ideally following a material retreat in valuations). For now, buckle up.


Somewhat unintuitively, American corporations today enjoy many of the same rights as American citizens. Both, for instance, are entitled to the freedom of speech and the freedom of religion. How exactly did corporations come to be understood as “people” bestowed with the most fundamental constitutional rights? The answer can be found in a bizarre—even farcical—series of lawsuits over 130 years ago involving a lawyer who lied to the Supreme Court, an ethically challenged justice, and one of the most powerful corporations of the day.

That corporation was the Southern Pacific Railroad Company, owned by the robber baron Leland Stanford. In 1881, after California lawmakers imposed a special tax on railroad property, Southern Pacific pushed back, making the bold argument that the law was an act of unconstitutional discrimination under the Fourteenth Amendment. Adopted after the Civil War to protect the rights of the freed slaves, that amendment guarantees to every “person” the “equal protection of the laws.” Stanford’s railroad argued that it was a person too, reasoning that just as the Constitution prohibited discrimination on the basis of racial identity, so did it bar discrimination against Southern Pacific on the basis of its corporate identity.

The head lawyer representing Southern Pacific was a man named Roscoe Conkling. A leader of the Republican Party for more than a decade, Conkling had even been nominated to the Supreme Court twice. He begged off both times, the second time after the Senate had confirmed him. (He remains the last person to turn down a Supreme Court seat after winning confirmation). More than most lawyers, Conkling was seen by the justices as a peer.

It was a trust Conkling would betray. As he spoke before the Court on Southern Pacific’s behalf, Conkling recounted an astonishing tale. In the 1860s, when he was a young congressman, Conkling had served on the drafting committee that was responsible for writing the Fourteenth Amendment. Then the last member of the committee still living, Conkling told the justices that the drafters had changed the wording of the amendment, replacing “citizens” with “persons” in order to cover corporations too. Laws referring to “persons,” he said, have “by long and constant acceptance … been held to embrace artificial persons as well as natural persons.” Conkling buttressed his account with a surprising piece of evidence: a musty old journal he claimed was a previously unpublished record of the deliberations of the drafting committee.

Years later, historians would discover that Conkling’s journal was real but his story was a fraud. The journal was in fact a record of the congressional committee’s deliberations but, upon close examination, it offered no evidence that the drafters intended to protect corporations. It showed, in fact, that the language of the equal-protection clause was never changed from “citizen” to “person.” So far as anyone can tell, the rights of corporations were not raised in the public debates over the ratification of the Fourteenth Amendment or in any of the states’ ratifying conventions. And, prior to Conkling’s appearance on behalf of Southern Pacific, no member of the drafting committee had ever suggested that corporations were covered.

There’s reason to suspect Conkling’s deception was uncovered back in his time too. The justices held onto the case for three years without ever issuing a decision, until Southern Pacific unexpectedly settled the case. Then, shortly after, another case from Southern Pacific reached the Supreme Court, raising the exact same legal question. The company had the same team of lawyers, with the exception of Conkling. Tellingly, Southern Pacific’s lawyers omitted any mention of Conkling’s drafting history or his journal. Had those lawyers believed Conkling, it would have been malpractice to leave out his story.

When the Court issued its decision on this second case, the justices expressly declined to decide if corporations were people. The dispute could be, and was, resolved on other grounds, prompting an angry rebuke from one justice, Stephen J. Field, who castigated his colleagues for failing to address “the important constitutional questions involved.” “At the present day, nearly all great enterprises are conducted by corporations,” he wrote, and they deserved to know if they had equal rights too.

Rumored to carry a gun with him at all times, the colorful Field was the only sitting justice ever arrested—and the charge was murder. He was innocent, but nonetheless guilty of serious ethical violations in the Southern Pacific cases, at least by modern standards: A confidant of Leland Stanford, Field had advised the company on which lawyers to hire for this very series of cases and thus should have recused himself from them. He refused to—and, even worse, while the first case was pending, covertly shared internal memoranda of the justices with Southern Pacific’s legal team.

The rules of judicial ethics were not well developed in the Gilded Age, however, and the self-assured Field, who feared the forces of socialism, did not hesitate to weigh in. Taxing the property of railroads differently, he said, was like allowing deductions for property “owned by white men or by old men, and not deducted if owned by black men or young men.”

So, with Field on the Court, still more twists were yet to come. The Supreme Court’s opinions are officially published in volumes edited by an administrator called the reporter of decisions. By tradition, the reporter writes up a summary of the Court’s opinion and includes it at the beginning of the opinion. The reporter in the 1880s was J.C. Bancroft Davis, whose wildly inaccurate summary of the Southern Pacific case said that the Court had ruled that “corporations are persons within … the Fourteenth Amendment.” Whether his summary was an error or something more nefarious—Davis had once been the president of the Newburgh and New York Railway Company—will likely never be known.

Field nonetheless saw Davis’s erroneous summary as an opportunity. A few years later, in an opinion in an unrelated case, Field wrote that “corporations are persons within the meaning” of the Fourteenth Amendment. “It was so held in Santa Clara County v. Southern Pacific Railroad,” explained Field, who knew very well that the Court had done no such thing.

His gambit worked. In the following years, the case would be cited over and over by courts across the nation, including the Supreme Court, as having decided that corporations had rights under the Fourteenth Amendment.

Indeed, the faux precedent in the Southern Pacific case would go on to be used by a Supreme Court that in the early 20th century became famous for striking down numerous economic regulations, including federal child-labor laws, zoning laws, and wage-and-hour laws. Meanwhile, in cases like the notorious Plessy v. Ferguson (1896), those same justices refused to read the Constitution as protecting the rights of African Americans, the real intended beneficiaries of the Fourteenth Amendment. Between 1868, when the amendment was ratified, and 1912, the Supreme Court would rule on 28 cases involving the rights of African Americans and an astonishing 312 cases on the rights of corporations.

The day back in 1882 when the Supreme Court first heard Roscoe Conkling’s argument, the New-York Daily Tribune featured a story on the case with a headline that would turn out to be prophetic: “Civil Rights of Corporations.” Indeed, in a feat of deceitful legal alchemy, Southern Pacific and its wily legal team had, with the help of an audacious Supreme Court justice, set up the Fourteenth Amendment to be more of a bulwark for the rights of businesses than the rights of minorities.

Even after a sharp correction earlier this year, the price of Bitcoin and other cryptocurrencies has remained unsustainably high, and techno-libertarians have continued to insist that blockchain technologies will revolutionize the way business is done. In fact, blockchain might just be the most over-hyped technology of all time.

NEW YORK – Predictions that Bitcoin and other cryptocurrencies will fail typically elicit a broader defense of the underlying blockchain technology. Yes, the argument goes, over half of all “initial coin offerings” to date have already failed, and most of the 1,500-plus cryptocurrencies also will fail, but “blockchain” will nonetheless revolutionize finance and human interactions generally.

In reality, blockchain is one of the most overhyped technologies ever. For starters, blockchains are less efficient than existing databases. When someone says they are running something “on a blockchain,” what they usually mean is that they are running one instance of a software application that is replicated across many other devices.
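
The replication overhead described above can be made concrete with a minimal sketch (illustrative only, not modeled on any real blockchain client): every node keeps a full copy of the ledger and re-executes every transaction, so total storage and computation scale with the number of replicas.

```python
# Illustrative sketch: in a replicated system, each node stores and
# re-executes every transaction. A centralized database would do this
# work once; here it is done once per replica.

class Replica:
    def __init__(self):
        self.ledger = []    # full copy of every transaction
        self.balances = {}  # state independently rebuilt by every node

    def apply(self, tx):
        sender, receiver, amount = tx
        self.balances[sender] = self.balances.get(sender, 0) - amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        self.ledger.append(tx)

nodes = [Replica() for _ in range(1000)]  # a 1,000-node network
tx = ("alice", "bob", 5)
for node in nodes:                        # the same work, 1,000 times over
    node.apply(tx)

# One transaction, but 1,000 stored copies and 1,000 executions.
total_copies = sum(len(n.ledger) for n in nodes)
print(total_copies)  # 1000
```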

The required storage space and computational power are substantially greater, and the latency is higher, than in the case of a centralized application. Blockchains that incorporate “proof-of-stake” or “zero-knowledge” technologies require that all transactions be verified cryptographically, which slows them down. Blockchains that use “proof-of-work,” as many popular cryptocurrencies do, raise yet another problem: they require enormous amounts of energy to secure. This explains why Bitcoin “mining” operations in Iceland are on track to consume more energy this year than all Icelandic households combined.
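
The proof-of-work idea behind that energy bill can be sketched in a few lines (a toy, not Bitcoin's actual consensus code): a miner repeats hashes until one falls below a target, so security is purchased with deliberately wasted computation, and each added bit of difficulty doubles the expected work.

```python
# Toy proof-of-work: find a nonce whose SHA-256 digest falls below a
# target. Expected attempts are about 2**difficulty_bits, which is why
# real mining at high difficulty consumes so much energy.
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Return a nonce whose hash has `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Low difficulty so the toy finishes quickly (~2**12 hash attempts expected).
nonce = mine(b"block header", difficulty_bits=12)
```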

Blockchains can make sense in cases where the speed/verifiability tradeoff is actually worth it, but this is rarely how the technology is marketed. Blockchain investment propositions routinely make wild promises to overthrow entire industries, such as cloud computing, without acknowledging the technology’s obvious limitations.

Consider the many schemes that rest on the claim that blockchains are a distributed, universal “world computer.” That claim assumes that banks, which already use efficient systems to process millions of transactions per day, have reason to migrate to a markedly slower and less efficient single cryptocurrency. This contradicts everything we know about the financial industry’s use of software. Financial institutions, particularly those engaged in algorithmic trading, need fast and efficient transaction processing. For their purposes, a single globally distributed blockchain such as Ethereum would never be useful.

Another false assumption is that blockchain represents something akin to a new universal protocol, as TCP/IP and HTML were for the Internet. Such claims imply that this or that blockchain will serve as the basis for most of the world’s transactions and communications in the future. Again, this makes little sense when one considers how blockchains actually work. For one thing, blockchains themselves rely on protocols like TCP/IP, so it isn’t clear how they would ever serve as a replacement.

Furthermore, unlike base-level protocols, blockchains are “stateful,” meaning they store every valid communication that has ever been sent to them. As a result, well-designed blockchains need to consider the limitations of their users’ hardware and guard against spamming. This explains why Bitcoin Core, the Bitcoin software client, processes only 5-7 transactions per second, compared to Visa, which reliably processes 25,000 transactions per second.
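
The single-digit throughput cited above follows directly from Bitcoin's design parameters. A back-of-envelope calculation, using assumed round numbers (roughly 1 MB blocks every ten minutes, with an average transaction of about 250 bytes), lands in the same 5-7 transactions-per-second range:

```python
# Back-of-envelope Bitcoin throughput estimate. All figures are assumed
# round numbers, not protocol constants.
BLOCK_SIZE_BYTES = 1_000_000  # ~1 MB block
BLOCK_INTERVAL_S = 600        # ~10-minute block interval
AVG_TX_BYTES = 250            # assumed average transaction size

tx_per_block = BLOCK_SIZE_BYTES / AVG_TX_BYTES  # ~4,000 transactions per block
tps = tx_per_block / BLOCK_INTERVAL_S           # transactions per second
print(round(tps, 1))  # 6.7
```

Under these assumptions, throughput is capped near 7 transactions per second no matter how much hardware individual nodes have, because every node must store and relay every transaction.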

Just as all of the world’s transactions cannot be recorded in a single centralized database, they will not be recorded in a single distributed database either. Indeed, the problem of “blockchain scaling” remains more or less unsolved, and is likely to stay that way for a long time.

Although we can be fairly sure that blockchain will not unseat TCP/IP, a particular blockchain component – such as Tezos or Ethereum’s smart-contract languages – could eventually set a standard for specific applications, just as Enterprise Linux and Windows did for PC operating systems. But betting on a particular “coin,” as many investors currently are, is not the same thing as betting on adoption of a larger “protocol.” Given what we know about how open-source software is used, there is little reason to think that the value to enterprises of specific blockchain applications will capitalize directly into only one or a few coins.

A third false claim concerns the “trustless” utopia that blockchain will supposedly create by eliminating the need for financial or other reliable intermediaries. This is absurd for a simple reason: every financial contract in existence today can either be modified or deliberately breached by the participating parties. Automating away these possibilities with rigid “trustless” terms is commercially non-viable, not least because it would require all financial agreements to be cash collateralized at 100%, which is insane from a cost-of-capital perspective.
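
The cost-of-capital point can be illustrated with simple arithmetic (all figures below are assumed for illustration, not drawn from any actual contract): locking up the full notional of a trade in cash, rather than posting a typical margin fraction, multiplies the funding cost many times over.

```python
# Illustrative funding-cost comparison: a fully cash-collateralized
# "trustless" contract versus a conventional margined one. All numbers
# are assumptions chosen for illustration.
notional = 100_000_000   # $100m notional exposure
funding_rate = 0.05      # assumed 5% annual cost of capital
margin_fraction = 0.05   # assumed 5% initial margin on the margined contract

full_collateral_cost = notional * 1.00 * funding_rate      # lock up 100% in cash
margined_cost = notional * margin_fraction * funding_rate  # post 5% margin

print(round(full_collateral_cost / margined_cost))  # 20
```

On these assumptions, removing the intermediary makes the same exposure roughly twenty times more expensive to fund, which is the commercial non-viability the paragraph above describes.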

Moreover, it turns out that many of the most plausible applications of blockchain in finance – such as securitization or supply-chain monitoring – will require intermediaries after all, because circumstances will inevitably arise in which unforeseen contingencies demand the exercise of discretion. The most important thing blockchain can do in such situations is ensure that all parties to a transaction agree with one another about its status and their obligations.

It is high time to end the hype. Bitcoin is a slow, energy-inefficient dinosaur that will never be able to process transactions as quickly or inexpensively as an Excel spreadsheet. Ethereum’s plans for an insecure proof-of-stake authentication system will render it vulnerable to manipulation by influential insiders. And Ripple’s technology for cross-border interbank financial transfers will soon be left in the dust by SWIFT, a non-blockchain consortium that all of the world’s major financial institutions already use. Similarly, centralized e-payment systems with almost no transaction costs – Faster Payments, AliPay, WeChat Pay, Venmo, Paypal, Square – are already being used by billions of people around the world.

Today’s “coin mania” is not unlike the railway mania at the dawn of the industrial revolution in the mid-nineteenth century. On its own, blockchain is hardly revolutionary. In conjunction with the secure, remote automation of financial and machine processes, however, it can have potentially far-reaching implications.

Ultimately, blockchain’s uses will be limited to specific, well-defined, and complex applications that require transparency and tamper-resistance more than they require speed – for example, communication with self-driving cars or drones. As for most of the coins, they are little different from railway stocks in the 1840s, which went bust when that bubble – like most bubbles – burst.