September 2016



So the Clinton campaign, out to conceal the candidate’s illness, forced the Secret Service to break protocol and avoid the emergency room after her near-collapse Sunday.

This is one determined bunch of liars.

First, her team says nothing as it sneaks her from the 9/11 ceremony — after she nearly collapses. It later claims she was “overheated” and parades her outside her daughter’s place, where she claims she feels “much better.”

And has the candidate — a possibly infectious pneumonia case — hug a child to make the lie seem cute.

It adds up to hours of effort to deceive — even enlisting federal officers in the effort. That’s the bottom line of The Post’s scoop, thanks to sources who revealed that she was headed to the ER, as Secret Service protocols demand — until her staff insisted otherwise, for fear hospital staff might leak word of her illness.

It seems Team Hillary saw the risk of disclosure as worse than the risk to her health.

Only when their lies didn’t stop the questions did the Clinton camp announce that she’d been diagnosed with pneumonia — last Friday.

Heck, sending her to the 9/11 ceremonies in the first place — rather than admitting she had a common-enough ailment — was a bid to deceive. Her doctor had ordered rest.

So what if Clinton now promises “additional medical information” in the coming days? At this point, it’s impossible to believe she’ll ever provide a full and honest picture of her health.

If not for the video of her near-fall Sunday, her folks would’ve kept lying. Even then, it took them hours to tell the truth.

Or part of it. Clinton’s odd stiffness in the video, among other things, suggests more than pneumonia may be at play. The concussion and cranial blood clot she suffered in late 2012 sidelined her for six months. Are those issues truly resolved?

Dishonesty is her first instinct. After all, she answered federal orders to preserve her records by having her e-mails wiped clean with BleachBit and her hard drives smashed with a hammer.

And much of the press helped her hide her health woes after her recent coughing fits: CNN called those asking questions “the new birthers” and claimed to “debunk” the “conspiracy” theories. The Washington Post’s Chris Cillizza raged at the “conspiracy theorists.” Jimmy Kimmel and Stephen Colbert mocked concerns.

Funny: Those asking questions, from Donald Trump to Matt Drudge to our own Michael Goodwin, turned out to be right.

Like Trump, we wish Clinton a speedy recovery from this illness. Too bad she’ll never get over her Pinocchio syndrome.


A quick update on near-term market action. Last week’s market retreat was a very minor example of the “unpleasant skew” I’ve discussed in recent months. Under present market conditions, the single most probable outcome in a given week remains a small advance, but with a smaller probability of a steep loss that can wipe out weeks or months of gains in one fell swoop. So the mode is positive, but the mean return is quite negative (see Impermanence and Full Cycle Thinking for a chart of what this distribution looks like). Again, last week was a very minor example. Prospective 10-12 year returns increased by only a few basis points.
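The shape of that “unpleasant skew” can be sketched with a toy two-outcome distribution (the numbers below are purely illustrative, not estimates from this commentary): the most probable weekly outcome is a small gain, but a low-probability steep loss drags the mean below zero.

```python
# Toy illustration of "unpleasant skew" (numbers are purely illustrative):
# the most likely weekly outcome (the mode) is a small advance, but a small
# probability of a steep loss makes the expected (mean) return negative.
outcomes = [0.005, -0.15]   # +0.5% in a typical week, -15% in a bad one
probs = [0.90, 0.10]        # the steep loss is the less probable outcome

mode = outcomes[probs.index(max(probs))]            # most probable outcome
mean = sum(p * r for p, r in zip(outcomes, probs))  # probability-weighted mean

print(mode)             # 0.005 -> the single most probable outcome is a gain
print(round(mean, 4))   # -0.0105 -> yet the mean return is negative
```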

In recent months, the compression of volatility has encouraged speculative put option writing by pension funds (see here for example), coupled with increased market exposure by volatility-targeting strategies that buy as volatility falls and sell as volatility rises. Last week, JP Morgan’s quantitative derivatives analyst Marko Kolanovic observed: “Given the low levels of volatility, leverage in systematic strategies such as Volatility Targeting and Risk Parity is now near all-time highs. The same is true for CTA funds who run near-record levels of equity exposure.” This setup is reminiscent of the “portfolio insurance” schemes that were popular before the 1987 crash, and relies on the same mechanism of risk-control – the necessity of executing sales as prices fall – that contributed to that collapse.

I continue to expect market risk to become decidedly more hostile in the event that various widely-followed moving-average thresholds are violated, as those breakdowns are likely to provoke concerted efforts by trend-following market participants to exit, at price levels nowhere near the levels where value-conscious investors would be interested in buying. For reference, the 100-day average of the S&P 500 is about 2120. The 200-day average is about 2057. For now, keep “unpleasant skew” in mind, to avoid becoming too complacent in the event of further marginal advances. I see the most preferable safety nets as those that don’t rely on the execution of stop-loss orders.

Looking out on a longer-term horizon, the following chart shows the ratio of nonfinancial market capitalization to nominal GDP, where we can reasonably proxy pre-war data back to the mid-1920’s. Our preferred measure is actually corporate gross-value added, including estimated foreign revenues (see The New Era is an Old Story), but the longer historical perspective we get from nominal GDP is also valuable. The chart shows this ratio on a log scale. To understand why, see the mathematical note at the end of this comment. The recent speculative episode has brought this ratio beyond every extreme in history with the exception of the 1929 and 2000 market peaks.

As I’ve often emphasized, one of the most important questions to ask about any indicator is: how strongly is this measure related to actual subsequent market returns? Investors could save themselves a great deal of confusion by asking that question. Hardly a week goes by that we don’t receive a note asking, for example, “so-and-so says that earnings are going to strengthen in the second half – doesn’t that make stocks a buy?” Well, it might, if year-over-year earnings growth had any correlation at all with year-over-year market returns (it doesn’t), or if there were evidence that earnings tend to strengthen in an environment where unit labor costs are rising faster than the GDP deflator (they don’t). That’s not to say that we can be certain that earnings or stock prices won’t bounce, but we can already conclude that so-and-so hasn’t convincingly made their case. When investors ignore the correlation between indicators and outcomes, they make themselves the victims of anyone with an opinion.

The chart below shows the same data as above – the ratio of nonfinancial market capitalization to nominal GDP – but uses an inverted log scale. The red line is the actual subsequent total return of the S&P 500 over the following 12-year horizon. We use that horizon because that’s the point where the autocorrelation profile of valuations hits zero (see Valuations Not Only Mean-Revert, They Mean-Invert). The chart offers some sense of why Warren Buffett, in a 2001 Fortune interview, called this ratio “probably the best single measure of where valuations stand at any given moment.” Again, we prefer corporate gross value-added, but at that point, we’re quibbling over a 91% correlation and a 93% correlation with subsequent 12-year market returns. Buffett hasn’t mentioned this measure in quite some time, but that’s certainly not because it has been any less valuable in recent market cycles. As I’ve frequently observed, it’s fine to assert that stocks are “fairly valued” relative to interest rates here, but only provided that “fair” is defined as an expected nominal total return for the S&P 500 averaging about 1.5% annually over the coming 12-year period.

It’s not a theory, it’s just arithmetic

Understand that valuation levels similar to the present have never been observed without the stock market losing half of its value, or more, over the completion of the market cycle. We’ve periodically heard analysts talking about stocks being in a “secular” bull market that presumably has years and years to go. These analysts evidently have no sense of what drives such secular market phases.

The period from 1949 to about 1965 represented a secular bull market, comprising a series of complete bull-bear market cycles, characterized by progressively richer valuations at the peak of each cyclical bull market. Likewise, the period from 1982 to 2000 represented another long secular bull market, again characterized by progressively richer valuations at each cyclical bull market peak.

By contrast, the periods from 1929 to 1949, and again from 1965 to 1982 both represented secular bear markets, comprising a series of complete bull-bear market cycles, but with a somewhat less progressive profile. Still, each period was characterized by a move from extremely rich valuations at the beginning of the secular bear market to extremely depressed valuations (and extremely high expected future returns) nearly two decades later. I’m not at all convinced that these secular phases have reliable periods like 18 years, but suffice it to say that secular movements between durable extremes of overvaluation and undervaluation can span a whole series of cyclical bull-bear cycles.

From the chart above, it should be clear that the defining feature of a secular bear market low (and the beginning of a long secular bull market) is deep undervaluation. Indeed, the 1949 and 1982 market troughs each brought the ratio of market capitalization to nominal GDP below 0.33. By contrast, the defining feature of a secular bull market peak (and the beginning of a long secular bear market) is extreme overvaluation. Indeed, the 1929 and 2000 market peaks each brought the ratio of market capitalization to nominal GDP to levels similar to what we observe today (the 2015 peak slightly exceeded 1.30).

Let’s do some quick arithmetic. Suppose that real GDP growth accelerates to 2% and inflation picks up to 2%, producing 4% annual nominal GDP growth for the next 25 years. Now allow for the possibility that the stock market hits a secular bear market low similar to 1949 and 1982, not two or three years from now, but fully 25 years from now. On those assumptions, what would happen to the S&P 500 Index over the coming 25 years?

The answer is simple. The ratio of the future S&P 500 Index to the current S&P 500 Index would be:

(1.04)^25 * (0.33/1.30) = 0.677. Put differently, the S&P 500 Index would be 32.3% lower, 25 years from today, than it is at present. Even including the income from a growing stream of dividends, we estimate that in the event of a secular low 25 years from today, the average annual total return of the S&P 500 between now and then would come to less than 3% annually. It’s not a theory, it’s just arithmetic.
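That arithmetic is easy to check. The sketch below simply restates the identity from the text with the stated assumptions (4% nominal growth for 25 years, the valuation ratio moving from 1.30 to a 1949/1982-style low of 0.33):

```python
# Sketch of the secular-low arithmetic from the text: 4% annual nominal GDP
# growth compounded for 25 years, with the market cap/GDP ratio falling from
# today's 1.30 to the 0.33 seen at the 1949 and 1982 secular lows.
def price_ratio(growth, years, v_future, v_today):
    """Future price / current price implied by the valuation identity."""
    return (1 + growth) ** years * (v_future / v_today)

ratio = price_ratio(growth=0.04, years=25, v_future=0.33, v_today=1.30)
print(round(ratio, 3))   # 0.677 -> the index ends about 32% below today's level
```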

Am I suggesting that investors should avoid stocks until the next secular low? Certainly not. Current valuations are more consistent with the start of a secular bear than with a secular bull, and my impression is that we’ll eventually look back and see that the 2000 bubble peak was the beginning of what is currently still a secular bear with quite a long time ahead of it. But regardless of where valuations head in the long-term, we expect to observe regular and substantial investment opportunities in stocks over coming market cycles, with the most favorable opportunities emerging at points where a material retreat in valuations is joined by an early improvement in market action.

Am I suggesting that the long-term tradeoff between expected return and risk is unfavorable at current valuations, and that near-term and intermediate-term market outcomes could become steeply negative in response to a moderate further deterioration in market action? Absolutely – a century of market evidence offers little to support any other expectation.

Elaborate fallacies

The danger of the current iteration of “this time it’s different,” I think, is in how elaborate and far-reaching the underlying fallacies have become. By equating the delay of consequences with the absence of consequences, investors have now set up the most extreme episode of equity market speculation in U.S. history next to the 1929 and 2000 market peaks, and the broadest episode of general financial market speculation outside of the 11-month period from November 1928 to September 1929 (as measured by the estimated prospective 12-year total return on a conventional portfolio mix of 60% stocks, 30% bonds and 10% Treasury bills).

It’s not just that investors have oversimplified a complex interaction, which they are certainly doing here in assuming that “easy money makes risky assets go up.” This simplification fails to explain, for example, how the U.S. stock market could lose more than half of its value on two separate occasions since 2000, during periods when the Federal Reserve was persistently and aggressively easing. It also overlooks that the Japanese stock market shed more than 60% of its value on two separate occasions since 2000, despite short-term interest rates that were regularly pegged at zero and never breached even 1%.

The historically supported, but more complex statement recognizes that the relationship between monetary policy and the financial markets is not reliably mechanical but wholly psychological. Easy money operates by creating safe but low-interest liquidity that someone in the economy must hold at every moment in time until it is retired. Investors often treat that liquidity as an inferior and uncomfortable “hot potato,” but only if they don’t see safety as desirable. So the accurate statement is that “easy money can encourage speculation, but only does so reliably when investors are already inclined to speculate.” As I’ve often noted, we infer the preference of investors toward speculation or risk-aversion by the uniformity or divergence of market internals across a broad range of individual securities, industries, sectors, and security-types.

The fallacies underlying today’s “this time is different” mantra go even further, assuming not only that central bank behavior has permanently changed, but that we can also abandon everything we’ve learned from centuries of economic dynamics, human behavior, and even basic arithmetic.

Having repeatedly borrowed enough short-lived bursts of consumption from the future to keep U.S. real GDP growth barely above 1% over the past year (and indeed, over the past decade), monetary authorities have convinced investors of a cause-effect relationship between activist monetary policy and economic outcomes that is entirely absent in actual data (see Failed Transmission – Evidence on the Futility of Activist Fed Policy). Worse, central bankers have convinced investors that the progressive overpricing of financial securities can substitute for actual growth. Unfortunately, with every increase in price, what was “prospective future return” a moment earlier is suddenly converted into “realized past return,” leaving nothing but lower expected returns and greater risk on the table for investors who continue to hold those securities. The essence of a Ponzi scheme is to reward investors who leave early, out of the capital of investors who arrive later, thereby ensuring losses for anyone who stays. What else is current central bank policy but a massive greater-fool Ponzi scheme?

The recent speculative episode has even convinced investors that human nature itself has changed. Centuries of financial market history demonstrate that periodic cycles of greed and fear are an inherent part of market dynamics. Yet investors have abandoned that lesson, believing that central banks have discovered the ability to do “whatever it takes” to keep markets higher (without realizing that the effectiveness of easy money is entirely dependent on the absence of risk-aversion among investors).

The thing that allows this is imagination. In every market cycle, imagination is what gives greed and fear their impetus. In a financial or economic crisis, imagination is what leads investors to question whether the economic system itself can survive. In a bubble, imagination is what leads investors to invent endless reasons why the carnival can continue indefinitely. For example, despite the fact that Japan’s real GDP has grown at just one-half of 1% annually over the past two decades, while the Nikkei stock index has taken an extraordinarily volatile trip to nowhere over that period, imagination leads investors to ask why the Federal Reserve won’t suddenly begin buying stocks, as the Bank of Japan and the Swiss National Bank have done. Well, one answer is that Sections 14 and 15 of the Federal Reserve Act prohibit it. Another is that even if the Fed could emulate the Bank of Japan, the Nikkei Index is still below where it was in 2000, 2007 and 2014 (not to mention 1986), so it’s not at all clear that such purchases exert any sustained effect on stock prices. In addition, one needs to examine the situation of each government to understand why certain central banks, and not others, have purchased equities in the first place.

As a fairly insular economy, Japan’s encouragement of overlapping and often centrally-planned relationships between government, business and the banking system has been the dominant economic model for decades, which has allowed more tolerance for the actions of the Bank of Japan. That said, buying corporate securities is actually quite a hostile act toward the public, compared with buying government debt. The reason is that when government bonds are issued for the purpose of public expenditure, or ideally, productive investment, central bank purchases of those bonds are a form of public finance. By contrast, when a central bank purchases corporate securities, and if they subsequently lose value, the creation of base money acts as a public subsidy to private investors who would otherwise have borne that loss. Since central bank purchases of stock are the last resort of a central bank that has already pushed other forms of speculation to the limit, the likelihood of loss is quite high. Those losses will involve a large opportunity cost to the public, as well as a transfer of public wealth to private individuals. From a contrarian perspective, I suspect that the worst time for a central bank to buy stocks is when the public itself is too bullish to oppose it.

Meanwhile in Switzerland, the desire to peg the Swiss franc to the value of the euro can only be achieved by following Mario Draghi down the primrose path of asset purchases, and the already bloated balance sheet of the Swiss National Bank leaves stocks among the few assets available to buy. My expectation is that this too, will turn out in hindsight to have imposed a huge opportunity cost on the Swiss public.

With regard to the basics of yield arithmetic, investors have equated raw yield with total return, in a way that leaves them with no meaningful prospect for investment returns over the coming 10-12 years, and the likelihood of deep interim losses over the completion of the current market cycle. Understand that the “current yield” of a stock or a bond (the annual dividend or interest payment divided by the current price) is quite a misleading indicator of likely total return. Consider, for example, a 30-year bond with a coupon yield (annual interest payment/face value) of 3%. By the time the price advances enough to bring the current yield (annual interest payment/current price) down to 1.58%, the yield-to-maturity on that bond has already hit zero; investors in that bond will then earn nothing for 30 years. Moreover, an increase in the yield-to-maturity from zero to just 1% will generate a -20% capital loss. Indeed, German 30-year bonds, which hit a record low yield-to-maturity of 0.34% at the end of July (think about that), have already lost about -8% as yields have increased by just a few tenths of 1%.
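The bond arithmetic in that paragraph can be verified directly. The sketch below prices an annual-pay 30-year bond with a 3% coupon (annual payments are a simplifying assumption; the text doesn’t specify a payment frequency):

```python
# Pricing an annual-pay bond to check the yield arithmetic in the text:
# a 30-year, 3% coupon bond priced so that its yield-to-maturity is zero
# has a current yield of about 1.58%, and a move in YTM from 0% to 1%
# produces roughly a -20% capital loss.
def bond_price(coupon, ytm, years, face=100.0):
    """Price of an annual-pay bond at a given yield-to-maturity."""
    if ytm == 0:
        return face + coupon * years  # no discounting at a zero yield
    annuity = (1 - (1 + ytm) ** -years) / ytm
    return coupon * annuity + face * (1 + ytm) ** -years

p0 = bond_price(coupon=3.0, ytm=0.00, years=30)   # price at 0% YTM (190.0)
print(round(3.0 / p0, 4))      # 0.0158 -> current yield ~1.58% at zero YTM
p1 = bond_price(coupon=3.0, ytm=0.01, years=30)   # price after YTM rises to 1%
print(round(p1 / p0 - 1, 3))   # -0.202 -> about a -20% capital loss
```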

Investors seem to forget that the lower the yield and the longer the maturity of a financial asset, the greater its vulnerability to capital losses in response to even minor changes in yields or risk premiums. This is particularly true for equities. The language of a market top is “well, even if it goes down, it will eventually come back up.” To some extent, that’s true. Over the 16 years since the 2000 market peak, the S&P 500 has posted an average total return of 4% annually, though it’s taken the third most extreme equity market bubble in U.S. history to do it. Unfortunately, a century of market history suggests that all of that return is likely to be wiped away over the completion of the current market cycle; an outcome that would only be run-of-the-mill given current valuations. By that point, investors may be quite right that they didn’t lose anything by purchasing stocks at the 2000 highs. I doubt that it will be much solace.

The belief in “TINA” – the notion that “there is no alternative” but to own stocks – ignores that stocks are already so overvalued that the S&P 500 is likely to underperform even the 1.6% yield on Treasury bonds over the coming decade. Frankly, we expect even the average return on Treasury bills to be higher over that horizon. So, yes, I very much believe that safe, low-interest cash is presently a better investment option, both in terms of prospective return and potential risk, than equities, corporate bonds, junk debt, or even long-term Treasury bonds.

My view is that investors should presently make room in their portfolios for safe, low-duration assets, hedged equities, and alternative strategies that have a modest or even negative correlation with conventional securities. I expect that there will be substantial opportunities to alter that mix over the completion of the current market cycle. The time to focus on higher beta and longer duration assets is when those assets are priced at levels that offer potential compensation for their prospective risk. Currently, investors in conventional assets face a combination of weak expected returns and spectacular downside potential. I expect that this will soon enough be as obvious as it was in 2002 and 2009, when investors looked back on their insistence that “This time is different” and replaced that thought with “What the hell were we thinking?”

In the interim, as value investor Howard Marks observed in The Most Important Thing, “Since many of the best investors stick most strongly to their approach – and since no approach will work all the time – the best investors can have some of the greatest periods of underperformance. Specifically, in crazy times, disciplined investors willingly accept the risk of not taking enough risk to keep up… Investment risk comes primarily from too-high prices, and too-high prices often come from excessive optimism and inadequate skepticism and risk aversion.”

A mathematical note on valuations and subsequent market returns

I’ve introduced a lot of analytical methods and indicators over the years, and various graphics demonstrating the relationship between reliable valuation measures and subsequent market returns have come to be known as the “Hussman valuation chart.” Since we continually do research and learn from those efforts, we’ve identified increasingly accurate measures over time. For example, while Shiller’s cyclically adjusted P/E (CAPE) is preferable to, say, price/forward earnings, the CAPE becomes much more reliable when one corrects for variations in the embedded profit margin (see Two Point Three Sigmas Above the Norm). Some observers seem keen to characterize learning from research as some sort of nefarious “evolution,” or to dismiss a century of evidence on valuations as “data mining” or “curve fitting,” so it’s important to understand how strongly the relationships between valuations, growth rates, and investment returns are rooted in identities and basic arithmetic.

As a side note, I also use logarithms quite a bit. If you’re serious about investing, learning how to work with logs is time well-spent, because returns tend to be linear in log valuations (see, for example, The Coming Fed-Induced Pension Bust).

Let’s review this arithmetic (see Rarefied Air: Valuations and Subsequent Market Returns for details and data). Below, P is price, F is some reasonably reliable fundamental, V is the valuation ratio P/F, and g is the nominal growth rate of that fundamental over the following T years. We can then write the future capital gain in the form of an arithmetic identity:

P_future / P_today = (F_future / F_today) * (V_future / V_today)

P_future / P_today = (1 + g)^T * (V_future / V_today)

Or in log terms:

log(P_future / P_today) = T * log(1 + g) + log(V_future) - log(V_today)

All this says is that your future investment return is driven by: the holding period T, the growth rate of fundamentals g over that horizon, and the change in valuations over the holding period. Because departures of valuations and nominal growth from their historical norms tend to mean-revert over time, one can obtain reliable estimates of prospective 10-12 year market returns by using historical norms for g and V_future.
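Rearranged into an annualized T-year estimate, the identity gives R = (1 + g) * (V_future/V_today)^(1/T) - 1 for the capital gain, plus average dividend income. A minimal sketch follows; the 0.63 valuation norm, 4% nominal growth, and 2% average dividend yield used as defaults are illustrative assumptions, not figures taken from the text.

```python
# Minimal sketch of the valuation identity rearranged into an annualized
# return estimate. The norm v_future = 0.63, growth g = 4%, and the 2%
# average dividend yield are illustrative assumptions, not figures from
# the text.
def estimated_annual_return(v_today, v_future=0.63, g=0.04, years=12,
                            div_yield=0.02):
    # Annualized capital gain implied by fundamental growth plus valuation
    # mean-reversion, with average dividend income added to approximate
    # total return.
    capital_gain = (1 + g) * (v_future / v_today) ** (1 / years) - 1
    return capital_gain + div_yield

# Rich starting valuations imply weak prospective returns; depressed
# starting valuations imply strong ones.
print(round(estimated_annual_return(1.30), 3))   # near zero from today's extreme
print(round(estimated_annual_return(0.33), 3))   # high from a 1949/1982-style low
```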

But we can actually go further. The estimates turn out to be accurate even in periods where g and V_future depart from their historical norms. The reason is that variations in g over a 10-year period tend to systematically offset variations in terminal valuations log(V_future), largely because of how investors respond to inflation. Put simply, market valuations tend to be negatively correlated with the growth rate of nominal GDP over the preceding decade. It’s a systematic relationship. Meanwhile, average dividend income over a given holding period has a high inverse correlation with starting valuations.

The consequence is that annual nominal total returns in the S&P 500 over a 10-12 year horizon have a robust and inverse correlation with the log of starting valuations, particularly as measured by market capitalization/GDP or market capitalization/corporate gross value-added. That’s not data mining or curve-fitting. It’s not a theory, it’s just arithmetic.


I just read a ‘research essay’ purporting to be a criticism of the NZ Court of Appeal decision in Jackson Mews.

So when I read ‘criticism’, I expect to see original reasoning pertaining to the decision. Add in words like ‘misinterpretation’ and ‘blurred reasoning’ and I am very interested.

Why?

Because for a student to have the gumption and balls in a ‘to be published’ critique of the Court of Appeal, well that’s worth reading.

Only it wasn’t. Put aside the spelling errors, grammatical errors and really strange sentence structures and assess the actual substantive work…and there is none.

This was not a critique. This was a summary of everyone else’s critiques. In addition, the ‘click bait’ of, ‘misinterpretation’ and ‘blurred reasoning’ was so hedged and watered down, that those words should never have appeared in the title of the essay in the first place.

Was this a total waste of time? No, as I am now aware of an issue that prior to reading, I was not aware of. So I have gained.

So how did I come by this essay from another student?

It was emailed to everyone by the lecturer. Presumably, this lecturer thought, or felt, that this piece of work was worthy of our reading.

The author, despite any criticisms of her work…is a babe.


Forget about the headphone jack for a second.

Sure, it’s pretty annoying that Apple’s newest iPhones — the 7 and 7 Plus, which were unveiled in San Francisco on Wednesday and will start shipping to customers on Sept. 16 — will not include a port for plugging in standard earbuds. But you’ll get used to it.

The absence of a jack is far from the worst shortcoming in Apple’s latest product launch. Instead, it’s a symptom of a deeper issue with the new iPhones, part of a problem that afflicts much of the company’s product lineup: Apple’s aesthetics have grown stale.

Apple has squandered its once-commanding lead in hardware and software design. Though the new iPhones include several new features, including water resistance and upgraded cameras, they look pretty much the same as the old ones. The new Apple Watch does too. And as competitors have borrowed and even begun to surpass Apple’s best designs, what was iconic about the company’s phones, computers, tablets and other products has come to seem generic.

This is a subjective assessment, and it’s one that Apple rebuts. The company says it does not change its designs just for the sake of change; the current iPhone design, which debuted in 2014, has sold hundreds of millions of units, so why mess with success? In a video accompanying the iPhone 7 unveiling on Wednesday, Jonathan Ive, Apple’s design chief, called the device the “most deliberate evolution” of its design vision for the smartphone.

Yet there are signs that my critique of Apple’s designs is shared by others. Industrial designers and tech critics used to swoon over Apple’s latest hardware; nowadays you witness less swooning and more bemusement.

Last year, Apple put out a battery case that looked comically pregnant — “a design embarrassment,” said The Verge — and a rechargeable mouse with the charging port on the bottom, meaning you have to turn it over to charge it. And the remote control for Apple TV violated the first rule of TV remote design: Don’t make it symmetrical, so people can figure out which button is which in the dark. (One tip: Put a rubber band on the bottom, so you can quickly figure out which end is up.)

Then there’s software interface design. The Apple Watch, also released last year, looked fine (and some of its wristbands were truly stunning), but its user interface was so puzzling and took so long to learn that Apple was forced to go back to the drawing board. In a new update to be introduced soon, the watch’s interface has been substantially simplified.

It’s the same story for Apple Music. After the streaming service was widely panned for its confusing array of options, Apple had to completely redesign it this year.

It’s not just that a few new Apple products have been plagued with design flaws. The bigger problem is an absence of delight. I recently checked in with several tech-pundit friends for their assessment of Apple’s aesthetic choices. “What was the last Apple design that really dazzled you?” I asked.

There was a small chorus of support for the MacBook, the beautifully tiny (if functionally flawed) laptop that Apple released last year. But most respondents were split between the iPhone 4 and the iPhone 5 — two daring smartphone designs that were instantly recognized as surpassing anything else on the market.

The iPhone 5, in particular, was a jewel; to me, its flat sides, chamfered edges and remarkable build quality suggested something miraculous, as if Mr. Ive had been divinely inspired in his locked white room. But the iPhone 4 and iPhone 5 were released in 2010 and 2012. If you have to reach back to the last presidential election to find an Apple design that really caught your eye, there’s something amiss.

Apple’s design difficulties prompt two questions: How bad is this problem? And how can Apple solve it?

To the first: It’s not acute, but it is urgent. Despite a slowdown in growth, Apple is still by far the most profitable consumer electronics company in the world. Consumer satisfaction surveys show that customers love its products. And even if the tech cognoscenti no longer rave about Apple’s designs, there’s little sign that their griping has affected sales.

Despite criticism, Apple Music also signed up 17 million subscribers in about a year. Apple doesn’t release sales numbers for the watch, but many analysts believe that sales have been brisk, and customer-satisfaction surveys are through the roof. And the iPhone has proved remarkably durable; as I argued last year, the iPhone’s continuing dominance is the closest bet in tech to a sure thing.

The real danger is in Apple’s long-term reputation. Much of Apple’s brand is built on design and on a sense that everything it delivers is a gift from the vanguard.

Two years ago, the designer Khoi Vinh, a former design director for The New York Times who now works at Adobe, summed up Apple’s design prowess this way: “If there’s a single thread that runs through nearly every piece of Apple hardware, it’s conviction, the sense that its designers believed with every fiber of their being that the form factor they delivered was the result of countless correct choices that, in totality, add up to the best and only choice for giving shape to that particular product.”

But in assessing the iPhone 6, then new, Mr. Vinh felt Apple had gone astray. Whereas the iPhone 5 had sharp, sophisticated lines that set it apart from everything else, “the iPhone 6’s form seems uninspired, harkening back to the dated-looking forms of the original iPhone, and barely managing to distinguish itself from the countless other phones that have since aped that look,” he wrote.

That was in 2014. Now, two years later, we still have the same basic iPhone design. For years, Apple has released a redesigned iPhone every other year, but now we’re going to go three years without a new iPhone look.

And while Apple has slowed its design cadence, its rivals have sped up. Last year Samsung remade its lineup of Galaxy smartphones in a new glass-and-metal design that looked practically identical to the iPhone. Then it went further. Over the course of a few months, Samsung put out several design refinements, culminating in the Note 7, a big phone that has been universally praised by critics. With its curved sides and edge-to-edge display, the Note 7 pulls off a neat trick: Though it is physically smaller than Apple’s big phone, it actually has a larger screen. So thanks to clever design, you get more from a smaller thing — exactly the sort of advance we once looked to Apple for.

An important caveat: Samsung’s software is still bloated, and its reputation for overall build quality took a hit when it announced last week that it would recall and replace the Note 7 because of a battery defect that caused spontaneous explosions. To the extent that making a device that doesn’t explode suggests design expertise, Apple is still ahead of Samsung.

But the setbacks from Apple’s rivals aren’t likely to last. Apple can’t afford to rest on its past successes for long.



Back in the days of printed newspapers, magazines, and newsletters, acquiring news and information was easier, or so it seemed.  The reason it seemed easier is that there was much less of it.  Today, with the internet, 24-hour financial media, blogs, and every conceivable method of delivery, information is overwhelming.  Once I realized that some information was actionable and most of the rest was merely observable, things became greatly simplified.  Hopefully this article will shed some light on how to separate actionable information from the much larger pool of observable information.  As you can see from the Webster definitions below, they initially do not seem that different.

Actionable – able to be used as a basis or reason for doing something or capable of being acted on.

Observable – possible to see or notice or deserving of attention; noteworthy.

However, when real money gets involved the difference can be significant.  Let me give you my definition and then follow up with some scenarios.  The world is full of observable information being dispensed as if it is actionable.  All the experts, television pundits, talking heads, economists (especially them), most newsletter writers, most blog authors, in fact most of the stuff you hear in regard to the markets is rarely actionable.  Actionable means that you, upon seeing it, can make a decision to buy, sell, or do nothing – period.

I’ll start by mentioning Japanese candle patterns, a subject I beat to death in this blog over the past few months.  I have never stated anything other than the fact that Japanese candle patterns should never be used in isolation; you should always use them in concert with other technical tools.  Hence, Japanese candle patterns for me are observable information, not actionable.  Only when backed up by Western technical tools can they become actionable.  I demonstrated this in my article Candlestick Analysis – Putting It All Together.

Too often I hear the financial media discussing economic indicators and how they affect the stock market.  They seem to forget that the stock market is one of the components of the index of LEADING indicators; in other words, the stock market is better at predicting the economy than the reverse.  Economics can never be proved right or wrong since it is an art, just like technical analysis.  Economic data is primarily monthly, often quarterly, and occasionally weekly.  It gets rebased periodically and often gets adjusted for seasonal effects and everything else you can think of.  It just cannot reliably provide any valuable information to a trader or investor.  However, boy does it sound good when someone creates a great story around it and how at one time in the past it occurred at a market top; it is truly difficult to ignore.  Ignore you should!  The beauty of the data generated by the stock market, mainly price, is that it is an instantaneous view of supply and demand.  I have said this a lot on these pages, but it needs to be fully understood.  The decisions and actions of buyers and sellers are reflected in price, and price alone.  The analysis of price is at least a first step toward obtaining actionable information.  Using technical tools that help you reduce price into information you can rely upon is where the actionable part surfaces.

I also seriously doubt anyone relies totally upon one technical tool or indicator.  If they do, then probably not for long.  I managed a lot of money using a weight of the evidence approach which means I used a bunch of indicators from price, breadth, and relative strength (called it PBR – see graphic).  Each individual indicator could be classified as observable, but when used in concert with others, THEY became actionable.

I think the point of this entire article is to alert or remind you that there is a giant amount of information out there and that most of it is not actionable; it is only observable.  Sometimes it is difficult to tell the difference so just think about putting real money into a trade based upon what you hear or read.  Real money separates a lot of people from making decisions based upon observable information, no matter how convincing it is.

I am really looking forward to speaking at ChartCon 2016.  The schedule shows me on at 10:30am PT where I’ll talk about the marketing of Wall Street disguised as research and show a couple of things about Technical Analysis that annoy me.

Dance with the Actionable Information,

Greg Morris

 


After weeks of being down in the polls, Donald Trump has vaulted into a narrow lead over Hillary Clinton, according to a new national survey released Tuesday.

The CNN/ORC poll showed the Republican presidential nominee with 45% of the vote from likely voters. His Democratic rival earned 43% in a four-way race that included Libertarian candidate Gary Johnson and Green Party candidate Jill Stein.

Trump’s 2-point lead was within the poll’s margin of error of 3.5 percentage points, meaning the race is essentially tied. The real-estate mogul led by 1 point in a head-to-head race.

But Trump seems to be doing well with independent voters, a crucial bloc in any election — 49% of those voters said they’d vote for Trump, while only 29% said they backed Clinton in the CNN/ORC poll. Johnson carries a significant proportion of independents, with 16% saying they’d vote for him.

Trump, however, is in trouble with minority voters — 71% of non-whites in the CNN/ORC poll said they preferred Clinton.

Despite Trump’s recent rise in the polls — a CNN/ORC poll from early August, for comparison, showed Clinton with an 8-point lead — most voters surveyed said they still expect Clinton to win in November.

Clinton has consistently topped Trump since general-election polling began. Trump briefly took the lead over Clinton after the Republican National Convention in July, but Clinton came out on top again after the Democrats’ convention the following week.

Another poll released Tuesday shows Clinton ahead of Trump by four points in a four-way race with Johnson and Stein. The NBC News/Survey Monkey poll has Clinton with 48% of the vote and Trump with 42%.

The race has been narrowing in recent weeks, with Clinton seeing her large leads over Trump erased in some polls.

A CNN/ORC poll released Tuesday shows a tight race coming out of Labor Day weekend. Republican nominee Donald J. Trump has a slight lead over Democrat Hillary Clinton:

  • Trump: 45%
  • Clinton: 43%

The pollster surveyed a random sample of 1,001 Americans. Trump’s lead is within the poll’s margin of error of 3.5 percentage points, so the race is essentially tied. Still, as we’ll see later, it’s better for a candidate to be ahead even within the margin of error. And it’s possible the poll is way off and that true support falls outside that margin.

So, what does this mean for Trump and Clinton? Answering that requires a clear sense of how polls work, and looking closer reveals what we can and cannot trust about them.

It depends on whom you ask

In 1936, a magazine called The Literary Digest ran one of the biggest opinion polls of all time. It asked 2.4 million people whether they planned to vote for the incumbent Democratic president, Franklin D. Roosevelt, or his Republican challenger, Alfred Landon.

[Graphic: Literary Digest covers (Dragan Radovanovic/Business Insider)]

It trumpeted this prediction:

  • Landon: 57%
  • Roosevelt: 43%

The poll must have had one of the smallest margins of error in polling. But it was dead wrong.

Error margins apply only to the population a pollster actually samples, not to the voters it misses.

This is what actually happened in the election:

  • Roosevelt: 62%
  • Landon: 38%

The Literary Digest fell prey to what is known as selection bias. That massive sample was made up of its subscribers and members of groups and organizations that tended to skew wealthier than the average American.

Today’s pollsters are savvier, but there are still many ways that bias seeps in. For instance, a poll that calls only landlines may leave out a whole demographic of younger, cellphone-only households. Some polls are opt-in, where users of a specific website answer questions. That’s less reliable than random sampling.

“Far more important than dialing down the margin of error is making sure that whatever you’re aiming at is unbiased and that you do have a representative sample,” says Andrew Bray, an assistant professor of statistics at Reed College.

Some polls have well-known biases. Rasmussen, for instance, is known to skew Republican.

Lee Miringoff, the director of the Marist Institute for Public Opinion — which produces polls for NBC News, The Wall Street Journal, and McClatchy — says polls are as much art as science.

“Scientifically, we should get the same result,” he says. In practice, different pollsters’ choices mean they often don’t.

Modern polls are not immune to these issues. Some potential voters are harder to reach, and some polls skew more educated. And polls with a high percentage of potential voters who are undecided can lead to more uncertainty.

So how much can we trust today’s results?

Margin of error


Pollsters and journalists tend to highlight the headline numbers in a poll. In July, before the Democratic convention, a Rasmussen survey showed Trump leading Clinton 43-42.

Rasmussen didn’t help matters by describing Trump as “statistically ahead.”

It’s actually not that simple.

First, you have to consider the margin of error. Rasmussen pollsters interviewed 1,000 people to represent the views of 320 million Americans. Naturally, the poll results might not perfectly match what the whole population thinks.

That Rasmussen poll has a 3-point margin of error. Here’s what that actually means.

Let’s take that Trump number: 43% is something called a “point estimate.” This is basically the polling firm’s best educated guess of what the number would be if it had asked the whole population. But it’s not guaranteed to be right.

The margin of error accounts for this:

  • Because the margin of error is 3 points, the pollsters are confident that support for Trump in the total population is between 40% and 46% — or 43% plus or minus 3 percentage points.
  • Support for Clinton is between 39% and 45%.

[Chart: Confidence intervals (Andy Kiersz / Business Insider)]

The point estimate (the dots in the chart above) is like fishing with a spear; you’re stabbing for the right answer. The margin of error is like fishing with a net; somewhere in your catch is the true figure.
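The ranges above come from the standard formula for the margin of error of a sample proportion. Here is a minimal sketch of that arithmetic; real pollsters also apply weighting and design effects, which widen the margin somewhat:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Rasmussen-style poll: 1,000 respondents, Trump at 43%
moe = margin_of_error(0.43, 1000)
print(round(moe * 100, 1))   # ~3.1 points, which gets reported as "3"
print(round((0.43 - moe) * 100, 1),
      round((0.43 + moe) * 100, 1))   # roughly 40 to 46
```

The value of z (1.96) is what encodes the 95% confidence level discussed below; a different confidence level would use a different multiplier.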

But this is not the whole story either.

Feeling confident

Before the 2016 Michigan primary, it looked as if Clinton had it made. FiveThirtyEight aggregated several polls and predicted that she had a 99% chance of winning the primary. Many polls had Clinton ahead of challenger Bernie Sanders by double digits.

The polls were wrong.

Sanders eked out a narrow victory. Of the many reasons pollsters might have been off, this may be one of them: There’s more to polling than the margin of error.

“The margin of error is a guidepost, but not a foolproof” one, Miringoff says.

Here’s what the margin of error really means.

Pollsters typically ask roughly 1,000 people a question like: Whom do you plan to vote for? Their goal is to be 95% sure that the real level of support in the whole population falls within the sample’s range, from the low end of the margin of error to the high end.

That range is called a “confidence interval.”

Let’s say a pollster like Miringoff were to run that same poll 100 times. Each time, he would randomly select different groups of 1,000 people. Miringoff would expect that the true proportion — the candidate’s actual support — would be found within the margin of error of 95 out of the 100 polls. That’s why he’d say that he’s 95% confident in the results.

Those five outliers are one reason elections don’t always turn out the way pollsters predict.
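That “95 out of 100 polls” claim can be checked directly with a small simulation. This is a sketch under idealized assumptions (pure random sampling, a hypothetical population with 43% true support):

```python
import math
import random

def simulate_coverage(true_p=0.43, n=1000, trials=2000, z=1.96, seed=42):
    """Fraction of simulated polls whose 95% interval captures the true value."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # One simulated poll: n independent yes/no respondents
        support = sum(rng.random() < true_p for _ in range(n))
        p_hat = support / n
        moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
        if p_hat - moe <= true_p <= p_hat + moe:
            hits += 1
    return hits / trials

print(simulate_coverage())  # close to 0.95, i.e. about 95 of every 100 polls
```

The handful of runs whose interval misses the true value are exactly the “outlier” polls the text describes.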

Remember that Rasmussen poll in July showing Trump with 43% support? That 43% is thought to be the most likely reflection of reality. But the pollster is still only 95% confident that Trump’s true amount of support is found between 40% and 46%.

The further you get from that point estimate, the less likely it is that you are seeing the true number. So it’s more likely to be 42% than 41% — and 40% is even less likely.

[Chart: Normal curve, annotated (Andy Kiersz / Dan Bobkoff / Business Insider)]

The chance that the true number lies outside the 95% confidence interval is, as you might expect, quite small. The further outside it is, the more minuscule the likelihood. But it’s still possible for a poll to be way off.

“If you really want to be 100% confident in your estimate, you’re either going to have to ask every American or be satisfied with a huge margin of error,” Bray, the Reed College statistics professor, says.

The whole point of polling is to extrapolate what a large group believes by asking a randomly selected subset of that group.

In the era of modern polling, most pollsters agree that being 95% confident in the margin of error is “good enough.”

“It’s a reasonably high number,” Bray says. “That means we’re going to be wrong one in 20 times, but for most people that’s acceptable.”

Many polls, such as those from the Pew Research Center, bury the margin of error in the fine print. Far fewer highlight the confidence interval. But anytime you see a poll, remember: There’s a 5% chance that the true number falls outside the reported range altogether.

Keeping it 1,000


Look closely and you’ll notice that most polls question roughly 1,000 people. That holds true whether pollsters are trying to approximate voter opinion in Rhode Island (about 1 million residents) or the entire US (nearly 320 million residents).

Why 1,000?

It’s a big enough number to be reasonably confident in the result — within the margin of error 19 out of 20 times. There’s a lot of variety in a group of 1,000 people, so it captures many of the elements in the larger group.

Asking more people than 1,000 leads to diminishing returns of accuracy.

For instance, sampling 2,000 people is not twice as precise as sampling 1,000. It might bring the margin of error from roughly 3 points to about 2.2 points.
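The diminishing returns follow from the square root in the margin-of-error formula: you have to quadruple the sample just to halve the margin. A quick illustration, using the worst case of an evenly split question (p = 0.5):

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error, in proportion terms, for a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (250, 500, 1000, 2000, 4000):
    print(n, round(moe(n) * 100, 1))
# 1,000 respondents give ~3.1 points; doubling to 2,000 only
# trims that to ~2.2, and it takes 4,000 to reach ~1.5
```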

[Chart: Margin of error vs. sample size (Andy Kiersz / Business Insider)]

In modern polling, most statisticians see sampling 1,000 people as a good compromise between a manageable sample size and acceptable confidence.

What and when

Results differ among pollsters for many reasons.

There are simple explanations, like when the polls were conducted. It can take days or weeks to conduct and analyze a poll. A lot of news can happen between the dates on which the questions were asked and the date the results are released.


This is especially a problem with polls close to Election Day. They’re generally a snapshot in the week before the election. If something happens in the final days of campaigning, those final polls may not be as predictive.

It also matters how a pollster phrases and orders questions, and whether it’s a phone interview, in-person interview, or online survey. Even the interviewer’s tone of voice can matter.

Then, pollsters have to decide how to analyze and weight the data, and those methodologies can vary.

But it’s not just pollsters analysing data — and that’s where we get another big problem.

Drilling down

[Graphic: A sample of 1,000 (Dragan Radovanovic/Business Insider)]

When Miringoff releases his Marist polls into the wild, they are quickly consumed by journalists, commentators, and a public looking for trends that create headlines. This drives him crazy.

“It’s too often to throw up your arms,” Miringoff says.

Here’s the problem: Let’s say his team interviews 1,000 people to represent the general population. In that 1,000, there are subgroups: men versus women, minorities, immigrants, young people, old people.

It’s tempting to pull out those subgroups and draw conclusions about, say, support for a candidate among Latinos or women.

But each of those subgroups is, in effect, its own sample, and those samples can be very small. That means the margin of error for each subset can be huge.

Take this poll from Pew: In the sample, there were only 146 black respondents. The margin of error for that subgroup is more than 9 points!

[Screenshot: Pew Research Center]

You can’t learn much by looking at a group with a 9-point error margin.
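The same formula explains the subgroup problem: the margin of error depends on the size of the slice, not the size of the whole poll. A sketch (the simple formula gives roughly 8 points for 146 respondents; Pew’s published figure is a bit higher because it also applies a design effect for survey weighting):

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error for a sample of size n (simple random sample)."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(moe(1000) * 100, 1))  # full sample of 1,000: ~3.1 points
print(round(moe(146) * 100, 1))   # 146-person subgroup:  ~8.1 points
```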

Why aggregating is good

If you combine results from multiple polls taken at the same time, you can think of it as one huge poll. That drives down the overall margin of error and can make you more confident in the predictive power of the polls.

In the real world, different polls are conducted in different ways, so you can’t think of an aggregated poll as truly one big sample. But this is also a virtue because it reduces the effect of pollster biases and errors. FiveThirtyEight, The New York Times, and RealClearPolitics all run averages with different weightings and methodologies.
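In the idealized case (identical methodology, polls fielded the same week), pooling behaves like running one bigger sample. A sketch of why the margin shrinks when, hypothetically, three polls of 1,000 are combined:

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error for a sample of size n (simple random sample)."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(moe(1000) * 100, 1))  # one poll of 1,000:           ~3.1 points
print(round(moe(3000) * 100, 1))  # three pooled polls of 1,000: ~1.8 points
```

Since real polls differ in methodology, aggregators can’t literally pool the raw samples; instead they weight polls by recency, sample size, and pollster track record to approximate the same effect.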

Ahead or tied?

OK, so now that you know a lot more about polls, what should you think when a race is tight? The answer is not straightforward.

Let’s say that a poll comes out showing Clinton with 51% support and Trump with 49%. The margin of error is plus or minus 3 points. Are the two candidates statistically tied, or is Clinton slightly ahead?

In purely statistical terms, most would consider this example a “statistical dead heat.” Either candidate could be ahead.

“It’s pretty significant editorially,” Miringoff says. “It’s not significant statistically.”

That said, that doesn’t mean Clinton’s lead in this hypothetical example is completely insignificant.

“If I was running for office, I’d rather have 51 than 49,” Miringoff says.

Remember point estimates? In this scenario, 51% is still the pollster’s best guess at Clinton’s true level of support. That’s higher than Trump’s. If a series of polls shows Clinton with a slight edge — even within the margin of error — then it can suggest an advantage. A series of polls is more convincing than any single poll.

Feedback loop


Finally, for as much as we want to believe that polls are a scientific reflection of reality, polls can also affect reality.

Here’s one example: Polls that show candidates falling behind can galvanize their supporters to get out to vote.

The media may also focus on polling trends, leading to changes in public opinion about which candidates are viable or worth supporting.

Polls don’t happen in a vacuum.


My own experience with Unions is that, unfortunately, they are a waste of your money. In theory they are a good idea, as the employer certainly holds more power in the employer/employee relationship.

Several of my cases this year have been Union cases where the Union simply was not interested in becoming involved.

One case in particular involved the Collective Agreement and therefore affected the entire Union. Not interested. All they are interested in is collecting the Union dues every two weeks on payday.

There is also the example of a disciplinary case where the Union recommended dismissal. It turned out there was a viable defence, and the employee is now back at work.

As a condition of my employment as a professor at George Washington University, I must pay the SEIU every month.

This Labor Day, the good news is that I have been appointed as an adjunct professor of economics at George Washington University. I’ll be teaching a seminar in labor economics and public policy. The bad news is that, as a condition of my employment, I must become a card-carrying, dues-paying member of the Service Employees International Union Local 500, or pay the SEIU an agency fee in order to get out of membership.

The letter from Provost Forrest Maltzman tells me that “failure to pay dues or agency fees may result in termination.” My hiring letter includes a form that I am required to sign. On the form, I must give the SEIU my home address, home phone, alternate phone, and e-mail address. In addition to paying dues, I have to give the union personal information such as where I live and how to contact me. Further, I need to “authorize and request my Employer, the George Washington University, and any successor Employer, to deduct from wages hereafter due me, and payable on each available pay period due me, such sums for Union dues, fees, and/or assessments to the Union at times and in a manner agreed upon between the Union and the Employer.”

Not only do I have to give George Washington University permission to deduct dues from my wages, but I also have to give successive employers — whoever they might be — the power to deduct these dues. The SEIU, with almost 2 million members, is one of the largest political players in terms of political donations, according to the Center for Responsive Politics. So far, SEIU’s PACs and committees have spent $10 million on the 2016 election cycle opposing Republicans and supporting Democrats.

The SEIU has spent $5 million against Donald Trump and $4 million for Hillary Clinton. It spent $307,000 each against Marco Rubio and Ted Cruz. Democrat Katie McGinty, who is challenging Republican senator Pat Toomey in Pennsylvania, received $400,000, and Ted Strickland, who is running against Ohio senator Rob Portman in Ohio, netted $900,000. The Local 500 branch had 8,703 members and almost $4 million in assets in 2013 — the latest data available from unionfacts.com. With me, it will have at least one more. Of course, the SEIU will say that I am not forced to join the union and pay the $36 monthly dues. Instead, I can pay a monthly agency fee of $29.38. But I have to do one or the other. The SEIU might also say that in return for the dues or agency fees, they bargain on my behalf with George Washington University.

I have no need for anyone to represent me. I can represent myself. If GW does not offer me enough to make it worthwhile for me to teach, I can look elsewhere or find other employment. Unfortunately, while the National Labor Relations Board (NLRB) is shrinking the time to vote to join a union, getting out of a union is not an easy matter. In order to decertify the SEIU Local 500, 30 percent of the part-time faculty of George Washington University (the represented group) would have to sign a petition for a decertification election.

This can only be presented to the National Labor Relations Board 60 days before the end of the contract or after the contract has expired. Should a new contract be ratified before a decertification petition is filed, then the clock is reset and no petition can be filed until the end of the new contract. As the GWU union contract expires on June 30, 2018, it means that a decertification petition cannot be considered before May 1, 2018. If the NLRB truly had workers’ interests at heart, the agency would make it as easy for workers to leave unions as it is to join them. Once in place, unions are not required to hold elections for decertification.

A union could have been chosen to represent workers in 1980 and still exist today — even though all the workers who voted for that union have died or quit. That is one reason, according to a new report by Heritage Foundation scholar James Sherk, that 94 percent of workers in union shops never voted to join the union. Sherk concluded that only 478,000 of America’s 8 million unionized private-sector workers have chosen to join their union.

Just as is the case with public-sector employees in Wisconsin, workers should be allowed to vote once a year to determine whether they want to be represented by a union — instead of being automatically signed up based on the votes of those who are no longer around. GWU students have an opportunity to learn from professors in classrooms. The SEIU adds nothing to the education of these students, but it subtracts from the compensation of teachers. It’s a bad deal for the students and faculty to enrich the SEIU. If new faculty members want to represent themselves, they should be exempt from all payments to the union.


Earlier this year, I attended a prison trade show in Louisiana, which has the nation’s highest rate of incarceration. Cheery representatives from CrossBar, a Kentucky-based company, demonstrated the bendable electronic cigarettes that are sold in prison commissaries. I chatted with employees of Wallace International, which makes the automated front gates for jails. Sentinel, which makes ankle bracelets to track parolees, distributed slick handouts. A couple hundred more exhibitors were packed into a two-hundred-and-twenty-four-thousand-square-foot space in a New Orleans convention center, a space larger than three professional football fields, including the end zones. It was an education in the scale of the industry that profits from America’s incarceration system.

A part of that industry was much discussed earlier this month, when the Department of Justice announced it would phase out its use of private prisons. Private prisons—both state and federal—represent just a small slice of the eighty billion dollars spent yearly on corrections, and they housed only about a hundred and thirty-one thousand inmates in 2014, compared with the 1.4 million inmates locked up in government-run facilities. But, because private prison companies routinely lobby Congress for lengthier prison sentences, the federal government’s announcement was seen as a modest victory for criminal-justice-reform advocates, whose long-term goal is to end mass incarceration.

But the country’s historic incarceration boom has given rise to companies that provide services and products to government prisons. Many of these provide necessary equipment and services, of course, but some do so in rather unsavory ways.

Take, for instance, the prison phone industry, a market that’s dominated by several large, privately held firms that earn an estimated $1.2 billion per year. Short phone calls from prison can cost up to fifteen dollars, largely because the companies operate as monopolies within prison walls. The private companies also offer state and local authorities a percentage of their revenue, which contributes to the surging cost of the calls and creates other perverse incentives. Some jails, for instance, have removed in-person family-visitation rooms to make way for “video visitation” terminals, provided by private firms, which can charge as much as thirty dollars for forty minutes of screen time. One prison phone company, Securus Technologies, says in its marketing materials that it has paid out $1.3 billion in these so-called commissions over the past ten years.

“In some respects, this is worse than the private prison companies,” Peter Wagner, the executive director of the Prison Policy Initiative, a nonprofit criminal-justice think tank, said. “I expect the government to waste money. But it’s totally different for the government to collude with a private company to make poor people lose money.”

Prison phone companies are hardly the only private venders that capitalize on a captive market. Corizon Health, one of the sponsors of the Louisiana prison trade show, is the country’s largest prison health-care firm. It treats more than three hundred thousand prisoners nationwide, earning about $1.4 billion in annual revenue. It is also the subject of numerous investigations and lawsuits. The company has been named as a defendant in at least six hundred and sixty malpractice lawsuits over the past five years, according to the American Civil Liberties Union.

In February, 2015, for instance, the company paid out an $8.3 million settlement to the family of Martin Harrison, an inmate at the Santa Rita Jail, in Alameda County, California, who died, the plaintiffs charged, in part because of medical neglect. The lawsuit revealed that Corizon, in a bid to cut costs, used licensed practical nurses to assess inmates at intake—a job that, under California law, only registered nurses are allowed to complete. A court-ordered investigation of Corizon in Idaho, in 2012, revealed “inhumane” conditions in a prison south of Boise, where terminally ill inmates were left for periods of time without food or water and slept in soiled linens. “How does this for-profit prison healthcare company keep its costs low and profits high?” the A.C.L.U. notes on its Web site. “By failing to provide sick prisoners with needed care.”

(Martha Harbin, a spokeswoman for Corizon, said that “malpractice lawsuits are a fact of life,” especially in the correctional environment, where “the patient population is highly litigious.” In response to the Harrison case, she said that “Corizon Health’s contract in Alameda County stipulated our staffing structure” and added that, in Idaho, many conclusions in the court-ordered report “were unsupported by facts and conflicted with the thorough audit of care performed” two years earlier. She also noted that the facility has since been reaccredited.)

The prison economy is expansive. In many prisons and jails, basic commissary items like cereal and canned soup can cost five times the retail price. As the country’s inmate population has ballooned, so has revenue. The Prison Policy Initiative estimates that commissary companies earn $1.6 billion per year.

The list of for-profit prison vendors goes on: there are companies that use technology to scour prisons to find cell phones, companies that sell prisoner-transport vans, and companies that sell radar systems to prevent drones from dropping contraband into prison yards. Even the American for-profit bail market has grown to an estimated three billion dollars annually, according to the market-research firm IBISWorld. These companies’ activities tend to get less scrutiny than those of private prisons. Wagner has a few theories for why this might be. For one thing, physical prisons are run by contractors, “which makes them easy to demonize.” And, secondly, he said, “they’re publicly traded.”

This is an important point. In reporting and researching the industry around incarceration, you quickly discover that finding accurate financial information about many of these companies is nearly impossible. Why? They’re almost all privately held—except for the private prison companies. And, because the two largest private prison operators, Corrections Corporation of America and the GEO Group, are publicly held and must therefore make more information public, reporters (myself included) find them easier to write about.

“Private prison companies have to disclose offensive-sounding things in their prospectuses, like ‘If crime goes down, it will be bad for our bottom line,’ ” Wagner said. “The privately held companies are never forced to say something so impolite. So the attention that the private-prisons industry gets is a side effect of our corporate transparency laws for publicly traded companies.”

Private prisons deserve to be investigated: they have been found to provide substandard living conditions, improper medical care, and poor training for guards. In announcing the decision to phase out private prisons in the federal system, Deputy Attorney General Sally Q. Yates noted that private prisons “compare poorly” with government-run institutions—they don’t save much money and provide worse security. But it also raises the question: Would eliminating private prisons end mass incarceration? It’s unlikely. And it certainly won’t prevent private companies from profiting on prisoners.


The world is rich and will become still richer. Quit worrying.

Not all of us are rich yet, of course. A billion or so people on the planet drag along on the equivalent of $3 a day or less. But as recently as 1800, almost everybody did.

The Great Enrichment began in 17th-century Holland. By the 18th century, it had moved to England, Scotland and the American colonies, and now it has spread to much of the rest of the world.

Economists and historians agree on its startling magnitude: By 2010, the average daily income in a wide range of countries, including Japan, the United States, Botswana and Brazil, had soared 1,000 to 3,000 percent over the levels of 1800. People moved from tents and mud huts to split-levels and city condominiums, from waterborne diseases to 80-year life spans, from ignorance to literacy.

You might think the rich have become richer and the poor even poorer. But by the standard of basic comfort in essentials, the poorest people on the planet have gained the most. In places like Ireland, Singapore, Finland and Italy, even people who are relatively poor have adequate food, education, lodging and medical care — none of which their ancestors had. Not remotely.

Inequality of financial wealth goes up and down, but over the long term it has been reduced. Financial inequality was greater in 1800 and 1900 than it is now, as even the French economist Thomas Piketty has acknowledged. By the more important standard of basic comfort in consumption, inequality within and between countries has fallen nearly continuously.

In any case, the problem is poverty, not inequality as such — not how many yachts the L’Oréal heiress Liliane Bettencourt has, but whether the average Frenchwoman has enough to eat. At the time of “Les Misérables,” she didn’t. In the last 40 years, the World Bank estimates, the proportion of the population living on an appalling $1 or $2 a day has halved. Paul Collier, an Oxford economist, urges us to help the “bottom billion” of the more than seven billion people on earth. Of course. It is our duty. But he notes that 50 years ago, four billion out of five billion people lived in such miserable conditions. In 1800, it was 95 percent of one billion.

We can improve the conditions of the working class. Raising low productivity by enabling human creativity is what has mainly worked. By contrast, taking from the rich and giving to the poor helps only a little — and anyway expropriation is a one-time trick. Enrichment from market-tested betterment will go on and on and, over the next century or so, will bring comfort in essentials to virtually everyone on the planet, and more to an expanding middle class.

Look at the astonishing improvements in China since 1978 and in India since 1991. Between them, the countries are home to about four out of every 10 humans. Even in the United States, real wages have continued to grow — if slowly — in recent decades, contrary to what you might have heard. Donald Boudreaux, an economist at George Mason University, and others who have looked beyond the superficial have shown that real wages are continuing to rise, thanks largely to major improvements in the quality of goods and services, and to nonwage benefits. Real purchasing power is double what it was in the fondly remembered 1950s — when many American children went to bed hungry.

What, then, caused this Great Enrichment?

Not exploitation of the poor, not investment, not existing institutions, but a mere idea, which the philosopher and economist Adam Smith called “the liberal plan of equality, liberty and justice.” In a word, it was liberalism, in the free-market European sense. Give masses of ordinary people equality before the law and equality of social dignity, and leave them alone, and it turns out that they become extraordinarily creative and energetic.

The liberal idea was spawned by some happy accidents in northwestern Europe from 1517 to 1789 — namely, the four R’s: the Reformation, the Dutch Revolt, the revolutions of England and France, and the proliferation of reading. The four R’s liberated ordinary people, among them the venturing bourgeoisie. The Bourgeois Deal is, briefly, this: In the first act, let me try this or that improvement. I’ll keep the profit, thank you very much, though in the second act those pesky competitors will erode it by entering and disrupting (as Uber has done to the taxi industry). By the third act, after my betterments have spread, they will make you rich.

And they did.

You may object that ideas are a dime a dozen and that to make them fruitful we must start with adequate physical and human capital and good institutions. It’s a popular idea at the World Bank, but a mistaken one. True, we eventually need capital and institutions to embody the ideas, such as a marble building with central heating and cooling to house the Supreme Court. But the intermediate and dependent causes like capital and institutions have not been the root cause.

The root cause of enrichment was and is the liberal idea, spawning the university, the railway, the high-rise, the internet and, most important, our liberties. What original accumulation of capital inflamed the minds of William Lloyd Garrison and Sojourner Truth? What institutions, except the recent liberal ones of university education and uncensored book publishing, caused feminism or the antiwar movement? Since Karl Marx, we have made a habit of seeking material causes for human progress. But the modern world came from treating more and more people with respect.

Ideas are not all sweet, of course. Fascism, racism, eugenics and nationalism are ideas with alarming recent popularity. But sweet practical ideas for profitable technologies and institutions, and the liberal idea that allowed ordinary people for the first time to have a go, caused the Great Enrichment. We need to inspirit masses of people, not the elite, who are plenty inspirited already. Equality before the law and equality of social dignity are still the root of economic, as well as spiritual, flourishing — whatever tyrants may think to the contrary.
