August 2017


In writing my latest Thoughts from the Frontline, I reached out to my contacts looking for an uber-bull—someone utterly convinced that the market is on solid ground, with good evidence for their view.

Fortunately, a good friend who must remain nameless shared with me an August 4 slide deck from Krishna Memani, Chief Investment Officer of Oppenheimer Funds.

The current bull market is the second longest and has the third-highest gain. It will be the longest stock bull market of the modern era if it can last another two years or so.

However, he thinks the present bull market will continue for another year.

Here’s Memani:

For some investors, the sheer age of this cycle is enough to cause consternation. Yet there is nothing magical about the passage of time. As we have said time and again, bull markets do not die of old age. Like people, bull markets ultimately die when the system can no longer fight off maladies. In order for the cycle to end there needs to be a catalyst—either a major policy mistake or a significant economic disruption in one of the world’s major economies. In our view, neither appears to be in the offing.

15 Events That Could Be a Catalyst for the Next Recession

He goes on to list 15 specific events he thinks would be necessary to make him abandon his bullish position. (Comments in parentheses and italics are mine.)

1. Global growth would have had to decelerate. It is not.

(European growth is actually picking up. Germany blinked on financing Italian bank debt, and the markets now have more confidence that Draghi can do whatever it takes.)

2. Wages and inflation would have had to rise. They are not.

3. The Fed would have planned to tighten monetary policy significantly. It is not.

(They should have been raising rates four years ago. It is too late in the cycle now. They may raise rates once more, but the paltry amount of “quantitative tightening” they are likely to do is not going to amount to much. In fact, if for some reason they decided to go further with rate hikes and enter a tightening cycle, that monetary policy error would probably trigger a recession and a deep bear market. I think they realize that—or at least I hope they do.)

4. The ECB would have to tighten policy substantially. It will likely not.

(Draghi will go through the motions, though he knows he is limited in what he can actually do – unless for some unexpected reason Europe takes off to the upside. And while Eastern Europe is actually doing that, “Old Europe” is not.)

5. Credit growth would have had to be surging. It is not.

(Credit growth is generally picking up but not surging. And most of the credit growth is in government debt.)

6. Corporate animal spirits would have been taking off. They are not.

(That is basically true for most public corporations. There are a number of private companies and smaller businesses that are pretty optimistic.)

7. Equities would have had to be expensive relative to bonds. They are not.

8. FAANG stocks would have had to be at extreme valuations. They are not.

(I don’t think I buy this one.)

9. Investors would have had to be euphoric about equities. They are not.

10. The current cyclical rally within the secular bull would have had to be old and stretched. It is not.

(Not buying this one either.)

11. High-yield spreads would have to be widening. They are not.

(I pay attention to high-yield spreads, a classic warning sign of a turn in market behavior. Are they at dangerous levels? Damn, Skippy, I cannot believe some of the bonds that are being sold in the marketplace. Not that I can’t believe the sellers are willing to take the money—you’d have to be an idiot not to take free money with no strings attached. I just don’t understand why major institutions are buying this nonsense.)

12. The classic signs of excess would have had to be evident. They are not.

(Kind of, sort of, but we are really beginning to stretch the point.)

13. China’s credit binge would have had to threaten the global financial system. It does not.

(Xi has somehow managed to push off the credit crisis, at least for the rest of this year, until after the five-year Congress. Rather amazing.)

14. Global trade would have had to be weakening. It is not.

15. The US dollar would have had to be strengthening. It is not.

That’s quite a list. Seeing it with the charts and Memani’s comments makes it even more compelling. To pick just one for closer scrutiny, let’s consider #7.

Are Equities Expensive Relative to Bonds?

That’s a good question because it really matters to big, long-term investors like pension funds.

Pension fund managers need to meet certain return targets, and they want to put the odds on their side. Treasury bonds offer certainty—presuming the US government doesn’t default. (Ask me about that again in October.)

Stocks may offer higher returns but more variation.

Memani explains this relationship by looking at earnings yield. That’s the inverse of the P/E ratio.

Essentially, it’s the percentage of each dollar invested in a stock that comes back as profits. Some gets distributed via dividends, buybacks, etc., and some is retained.
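As a back-of-envelope sketch (the numbers below are hypothetical, not Memani's), the relationship is simply the reciprocal of the P/E ratio:

```python
# Earnings yield is the inverse of the price-to-earnings (P/E) ratio.
# All numbers here are hypothetical, for illustration only.

def earnings_yield(price_per_share, earnings_per_share):
    """Fraction of each dollar invested that comes back as profits."""
    return earnings_per_share / price_per_share

price = 100.0  # hypothetical share price
eps = 5.0      # hypothetical earnings per share -> a P/E of 20

ey = earnings_yield(price, eps)
print(f"P/E: {price / eps:.0f}, earnings yield: {ey:.1%}")  # P/E: 20, earnings yield: 5.0%
```

A 5% earnings yield can then be compared directly with, say, the 10-year Treasury yield: when the earnings yield sits well above the bond yield, stocks look cheap relative to bonds, which is the comparison Memani is making.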

If you think there’s a stock mania today akin to the euphoria of the late 1990s, you’ll find no support in this ratio. Back then, bonds were dirt cheap compared to stock market earnings yield.

Now we have the reverse: stocks are cheap compared to bonds.

This is one of the most convincing bullish arguments I see now.

I remember the late ’90s very well. I called the top about three years early, never dreaming we could see a year like 1999. That will always be my mania benchmark—and today we are not even remotely near it. I don’t remember thinking much about bonds back then. No one else was, either.

But buying them would have turned out much better than buying stocks in 1997–99.




This rather misses the point: gold is protection against government-induced inflation and the like. If the dollar (or whatever) implodes, gold and silver are money.

Having waited patiently for the “any-minute-now” moment, gold investors are taking comfort from the recent rise in price in response to geopolitical tensions. Yet the responsiveness of gold, as well as the overall price, appears weaker than would have been expected from historically based models — and for understandable reasons. The precious metal’s status as a haven has been eroded by the influence of unconventional monetary policy and the growth of markets for cryptocurrencies.

Gold prices rose almost 1 percent on Tuesday morning as part of the risk aversion triggered by yet another brazen North Korean missile launch over Japan, together with uncertainty as to how the U.S. may respond. But with gold trading below $1,330, the overall response of gold prices to the last few months of heightened geopolitical risks has been relatively muted, particularly as the 10-year Treasury bond, another traditional haven, saw its yield trade down below 2.10 percent that same morning.

Two immediate reasons come to mind, one related to several assets and the other more specifically to gold.

First, and as I have discussed in several Bloomberg View articles, the prolonged pursuit of unconventional measures by central banks has helped meaningfully decouple asset prices from underlying fundamentals. In such circumstances, historically based models will tend to overestimate the reaction of asset prices to heightened geopolitical tensions — including the fall in risk assets such as equities, or the rise in gold.

Second, a portion of the traditional buyer interest in gold has been diverted to the growing markets for cryptocurrencies, which are also benefiting from a general increase in demand. As such, the returns to investors there have been significantly greater, sucking in even more funds.

The message for investors in both gold and multi-asset-class portfolios is clear.

While continuing to play a role in diversified market exposures, gold is less of a risk mitigator and asset-class diversifier, for now. Luckily for investors, the need has also been less pronounced, given that ample market liquidity has boosted returns, repressed volatility, and distorted correlations in their favor. But this is not to say that gold’s traditional role will not be re-established down the road. After all, central banks are in the later stages of reliance on unconventional monetary measures and, given this year’s spectacular price appreciation, cryptocurrencies are more vulnerable to unsettling air pockets.


Well, the big fight came and went, and Mayweather won. I was hoping for an upset of sorts, with McGregor winning.

It sort of went with the ‘experts’ assessment: McGregor needed to win fast, in the early rounds; otherwise, as eventuated, Mayweather would exert his superior [pure] boxing skills and carry the fight.


Rory Sutherland claims that the real function of swimming pools is allowing the middle class to sit around in bathing suits without looking ridiculous. Same with New York restaurants: you think their mission is to feed people, but that’s not what they do. They are in the business of selling you overpriced liquor or Great Tuscan wines by the glass, yet get you into the door by serving you your low-carb (or low-something) dishes at breakeven cost. (This business model, of course, fails to work in Saudi Arabia.)

So when we look at religion and, to some extent, ancestral superstitions, we should consider what purpose they serve, rather than focusing on the notion of “belief”, epistemic belief in its strict scientific definition. In science, belief is literal belief; it is right or wrong, never metaphorical. In real life, belief is an instrument to do things, not the end product. This is similar to vision: the purpose of your eyes is to orient you in the best possible way, get you out of trouble when needed, or help you find prey at a distance. Your eyes are not sensors aimed at capturing the electromagnetic spectrum of reality. Their job description is not to produce the most accurate scientific representation of reality; rather, the most useful one for survival.

Ocular Deception

Our perceptual apparatus makes mistakes –distortions — in order to lead to more precise actions on our part: ocular deception, it turns out, is a necessary thing. Greek and Roman architects misrepresented the columns of their temples, tilting them inward, in order to give us the impression that the columns are straight. As Vitruvius explains, the aim is to “counteract the visual deception by a change of proportions”[i]. A distortion is meant to bring about an enhancement of your aesthetic experience. The floor of the Parthenon is in reality curved so that we see it as straight. The columns are in truth unevenly spaced, so that we see them lined up like a marching Russian division in a parade.

Should one go lodge a complaint with the Greek Tourism Office, claiming that the columns are not vertical and that someone is taking advantage of our visual weaknesses?

Temple of Bacchus, Baalbeck, Lebanon

Ergodicity First

The same applies to distortions of beliefs. Is this visual deceit any different from leading someone to believe in Santa Claus, if it enhances his or her holiday aesthetic experience? No, unless the person engages in actions that end up harming him or her.

In that sense harboring superstitions is not irrational by any metric: nobody has managed to reinvent a metric for rationality based on process. Actions that harm you are observable.

I have shown that, unless one has an overblown (and, as with Greek columns, very unrealistic) representation of some tail risks, one cannot survive –all it takes is a single event to force an irreversible exit from among us. Is selective paranoia “irrational” if those individuals and populations who don’t have it end up dying or extinct, respectively?

A statement that will orient us for the rest of the book:

Survival comes first, truth, understanding, and science later

In other words, you do not need science to survive (we’ve done it for several hundred million years), but you need to survive to do science. As your grandmother would have said, better safe than sorry. This precedence is well understood by traders and people in the real world, as per Warren Buffett’s expression “to make money you must first survive” –skin in the game again; those of us who take risks have our priorities firmer than vague textbook notions such as “truth”. More technically, this brings us again to the ergodic property (I keep my promise to explain it in detail, but we are not ready yet): for the world to be “ergodic”, there needs to be no absorbing barrier, no substantial irreversibilities.

And what do we mean by “survival”? Survival of whom? Of you? Your family? Your tribe? Humanity? We will get into the details later but note for now that I have a finite shelf life; my survival is not as important as that of things that do not have a limited life expectancy, such as mankind or planet earth. Hence the more “systemic”, the more important such a survival becomes.

An illustration of the bias-variance tradeoff. Assume two (sober) people shooting at a target in, say, Texas. The top shooter has a bias, a systematic “error”, but on balance gets closer to the target than the bottom shooter, who has no systematic bias but a high variance. Typically, you cannot reduce one without increasing the other. When fragile, the strategy at the top is the best: maintain a distance from ruin, that is, avoid hitting a point in the periphery should the periphery be dangerous. This schema explains why, if you want to minimize the probability of the plane crashing, you may make mistakes with impunity provided you lower your dispersion.
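The two shooters can be simulated in a few lines (a toy sketch; the spreads and the number of shots are arbitrary choices of mine, not from the text):

```python
import random

random.seed(0)

def mean_squared_distance(shots, target=(0.0, 0.0)):
    """Average squared distance from the target: bias^2 + variance, in effect."""
    return sum((x - target[0]) ** 2 + (y - target[1]) ** 2 for x, y in shots) / len(shots)

# Top shooter: a systematic bias (aims one unit off) but a tight grouping.
biased = [(1.0 + random.gauss(0, 0.2), random.gauss(0, 0.2)) for _ in range(10_000)]

# Bottom shooter: no bias, but a wide scatter.
unbiased = [(random.gauss(0, 2.0), random.gauss(0, 2.0)) for _ in range(10_000)]

# The biased shooter is closer on average AND almost never strays far from
# the center -- the property that matters when the periphery is ruinous.
print(mean_squared_distance(biased))    # roughly 1.08 (= 1 + 2 * 0.2^2)
print(mean_squared_distance(unbiased))  # roughly 8 (= 2 * 2.0^2)
```

The point of the passage falls out of the numbers: accepting a small systematic "error" in exchange for low dispersion keeps every shot away from the dangerous periphery.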


Three rigorous thinkers will orient my thinking on the matter: the cognitive scientist and polymath Herb Simon, pioneer of Artificial Intelligence, and the derived school of thought led by Gerd Gigerenzer, on one hand, and the mathematician, logician and decision theorist Ken Binmore who spent his life formulating the logical foundations of rationality.

From Simon to Gigerenzer

Simon formulated the notion now known as bounded rationality: we cannot possibly measure and assess everything as if we were a computer; we therefore produce, under evolutionary pressures, some shortcuts and distortions. Our knowledge of the world is fundamentally incomplete, so we need to avoid getting into unanticipated trouble. Even if our knowledge of the world were complete, it would still be computationally near-impossible to produce a precise, unbiased understanding of reality. A fertile research program on ecological rationality came out of it, mostly organized and led by Gerd Gigerenzer, mapping how many of the things we do that appear, on the surface, illogical have deeper reasons.

Ken Binmore

As to Ken Binmore, he showed that the concept casually dubbed “rational” is ill-defined, in fact so ill-defined that many uses of the term are just gibberish. There is nothing particularly irrational in beliefs per se (given that they can be shortcuts and instrumental to something else): to him, everything lies in the notion of “revealed preferences”, which we explain next.

Binmore also saw that criticism of the “rational” man as posited by economic theory is often a strawman argument that distorts the theory in order to bring it down. He recounts that economic theory, as posited in the original texts, is not so strict in its definition of “utility”, that is, the satisfaction a consumer or decision-maker derives from a certain outcome. Satisfaction does not necessarily have to be monetary. There is nothing irrational, according to economic theory, in giving your money to a stranger, if that is what makes you tick. And don’t try to invoke Adam Smith: he was a philosopher, not an accountant; he never equated human interests and aims with narrow accounting book entries.

Revelation of Preferences

Next let us develop the following three points:

Judging people on their beliefs is not scientific

There is no such thing as “rationality” of a belief, there is rationality of action

The rationality of an action can only be judged by evolutionary considerations

The axiom of revelation of preferences states the following: you will not get an idea of what people really think, of what predicts their actions, merely by asking them –they themselves don’t know. What matters, in the end, is what they pay for goods, not what they say they “think” about them, or the reasons they give you or themselves for it. (Think about it: revelation of preferences is skin in the game.) Even psychologists get it; in their experiments, their procedures require that actual dollars be spent for the test to be “scientific”. The subjects are given a monetary amount, and the experimenters watch how they formulate choices by spending it. However, a large share of psychologists fughedabout the point when they start bloviating about rationality. They revert to judging beliefs rather than actions.

For beliefs are … cheap talk. A foundational principle of decision theory (and one that is at the basis of neoclassical economics, rational choice, and similar disciplines) is that what goes on in people’s heads isn’t the business of science. First, what they think may not be measurable enough to lend itself to scientific investigation. Second, it is not testable. Finally, there may be some type of translation mechanism too hard for us to understand, with distortions at the level of the process that are actually necessary for thinking to work.

Actually, by a mechanism (more technically called the bias-variance tradeoff), you often get better results by making some types of “errors”, as when you aim slightly away from the target when shooting. I have shown in Antifragile that making some types of errors is the most rational thing to do, as, when the errors are of little cost, they lead to gains and discoveries.

This is why I have been against the State dictating to us what we “should” be doing: only evolution knows if the “wrong” thing is really wrong, provided there is skin in the game for that.

The classical “large world vs. small world” problem. Science is currently too incomplete to provide all answers –and says so itself. We have been so much under assault by vendors using “science” to sell products that many people, in their minds, confuse science and scientism. Science is mainly rigor.

What is Religion About?

It is therefore my opinion that religion is here to enforce tail risk management across generations, as its binary and unconditional rules are easy to teach and enforce. We have survived in spite of tail risks; our survival cannot be that random.

Recall that skin in the game means that you do not pay attention to what people say, only to what they do, and how much of their neck they are putting on the line. Let survival work its wonders.

Superstitions can be vectors for risk management rules. We have, as potent information, the fact that people who hold them have survived; to repeat, never discount anything that allows you to survive. For instance, Jared Diamond discusses the “constructive paranoia” of residents of Papua New Guinea, whose superstitions prevent them from sleeping under dead trees.[1] Whether it is superstition or something else (some deep scientific understanding of probability) that is stopping you, it doesn’t matter, so long as you don’t sleep under dead trees. And if you dream of making people use probability in order to make decisions, I have some news: close to ninety percent of psychologists dealing with decision-making (which includes such regulators as Cass Sunstein) have no clue about probability, and try to disrupt our organic paranoid mechanism.

Further, I find it incoherent to criticize someone’s superstitions if these are meant to bring some benefits, yet not do so with the optical illusions in Greek temples.

The notion of “rational” bandied about by all manner of promoters of scientism isn’t defined well enough to be used for beliefs. To repeat, we do not have enough grounds to discuss “irrational beliefs”. We do for irrational actions.

Now, what people say may have a purpose –it is not just what they think it means. Let us extend the idea outside of buying and selling to the risk domain: opinions are cheap unless people take risks for them.

Extending such logic, we can show that much of what we call “belief” is some kind of background furniture for the human mind, more metaphorical than real. It may work as therapy.

“Tawk” and Cheap “Tawk”

The first principle we make:

There is a difference between beliefs that are decorative and a different sort of beliefs, those that map to action.

There is no difference between them in words, except that the true difference reveals itself in risk taking, having something at stake, something one could lose in case one is wrong.

And the lesson, by rephrasing the principle:

How much you truly “believe” in something can only be manifested through what you are willing to risk for it.

But this merits continuation. The fact that there is this decorative component to belief (these strange rules followed outside the Gemelli clinics of the world) merits a discussion. What are these rules for? Can we truly understand their function? Are we confused about their function? Do we mistake their rationality? Can we use them instead to define rationality?

What Does Lindy Say?

Let us see what Lindy has to say about “rationality”. While the notions of “reason” and “reasonable” were present in ancient thought, mostly embedded in the notion of precaution, or sophrosyne, the modern idea of “rationality” and “rational decision-making” was born in the aftermath of Max Weber, with the works of psychologists, philosophasters, and psychosophasters. The classical sophrosyne is precaution, self-control, and temperance, all in one. It was replaced with something a bit different. “Rationality” was forged in a post-Enlightenment period[2], at a time when we thought that understanding the world was around the next corner. It assumes no randomness, or a simplified random structure of our world. And, of course, no interactions with the world.

The only definition of rationality that I have found that is practically, empirically, and mathematically rigorous is that of survival –and indeed, unlike the modern theories of psychosophasters, it maps to the classics. Anything that hinders one’s survival at an individual, collective, tribal, or general level is deemed irrational.

Hence the precautionary principle and sound risk understanding.

It may be “irrational” for people to have two sinks in their kitchen, one for meat and the other for dairy, but as we saw, it led to the survival of the Jewish community, as Kashrut laws forced its members to eat and bind together.

It is also rational to see things differently from the “way they are”, for improved performance.

It is also difficult to map beliefs to reality. A decorative or instrumental belief, say, believing in Santa Claus or in the potential anger of Baal, can be rational if it leads to increased survival.

The Nondecorative in the Decorative

Now, what we called decorative is not necessarily superfluous, often to the contrary. Such beliefs may just have another function we do not know much about –and for that we can consult the grandmaster statistician, time, via a very technical tool called the survival function, known to both old people and very complex statistics –but we will resort here to the old people’s version.
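For the technically inclined, the "very complex statistics" version boils down to the survival function S(t) = P(T > t), the probability of lasting beyond time t. A toy empirical sketch, with lifetimes I made up purely for illustration:

```python
# Empirical survival function: S(t) = fraction of the sample lasting beyond time t.
# The lifetimes below are hypothetical, purely for illustration.

def empirical_survival(lifetimes, t):
    """Estimate of the probability that a randomly chosen item survives past t."""
    return sum(1 for life in lifetimes if life > t) / len(lifetimes)

lifetimes = [3, 7, 12, 18, 24, 50, 100, 2400]  # e.g., ages of institutions, in years
print(empirical_survival(lifetimes, 20))  # 0.5 -> half the sample outlived 20 years
```

The "old people version" is the same computation done by observation: what has already lasted a very long time has demonstrated a high S(t), which is the argument the next paragraph makes about religions.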

The fact to consider is not that these beliefs have survived a long time –the Catholic Church is an administration that is close to twenty-four centuries old (it is largely the continuation of the Roman Republic). The fact is, rather, that people who have religion –a certain religion –have survived.

Another principle:

When you consider beliefs, do not assess them by how they compete with other beliefs; consider instead the survival of the populations that hold them.

Consider a competitor to the Pope’s religion: Judaism. Jews have close to five hundred different dietary interdicts. These may seem irrational to an observer who sees purpose in things and defines rationality in terms of what he can explain. Actually, they will most certainly seem so. The Jewish Kashrut prescribes keeping four sets of dishes and two sinks, and the avoidance of mixing meat with dairy products or merely letting the two come in contact with each other, in addition to interdicts on some animals: shrimp, pork, etc. The good stuff.

These laws might have had an ex ante purpose. One can blame the insalubrious behavior of pigs, exacerbated by the heat in the Levant (though heat in the Levant was not markedly different from that in pig-eating areas further west). Or perhaps there is an ecological reason: pigs compete with humans in eating the same vegetables, while cows eat what we don’t eat.

But it remains that, whatever the purpose, the Kashrut survived approximately three millennia not because of its “rationality” but because the populations that followed it survived. It most certainly brought cohesion: people who eat together hang together. Simply, it aided the survival of those who followed it because it is a convex heuristic. Such group cohesion might also be responsible for trust in commercial transactions with remote members of the community.

This allows us to summarize:

Rationality is not what has conscious verbalistic explanatory factors; it is only what aids survival, avoids ruin.

Rationality is risk management, period.

[1] “Consider: If you’re a New Guinean living in the forest, and if you adopt the bad habit of sleeping under dead trees whose odds of falling on you that particular night are only 1 in 1,000, you’ll be dead within a few years. In fact, my wife was nearly killed by a falling tree last year, and I’ve survived numerous nearly fatal situations in New Guinea.”


Every further new high in the price of Bitcoin brings ever more claims that it is destined to become the preeminent safe haven investment of the modern age — the new gold.

But there’s no getting around the fact that Bitcoin is essentially a speculative investment in a new technology, specifically the blockchain. Think of the blockchain, very basically, as layers of independent electronic security that encapsulate a cryptocurrency and keep it frozen in time and space — like layers of amber around a fly. This is what makes a cryptocurrency “crypto.”
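The "layers of amber" metaphor can be made concrete with a toy hash chain (a bare-bones sketch of the idea only, not Bitcoin's actual protocol, which adds proof-of-work, Merkle trees, and much else):

```python
import hashlib

def make_block(data, prev_hash):
    """Seal `data` onto everything before it by hashing it with the previous block's hash."""
    digest = hashlib.sha256((prev_hash + data).encode()).hexdigest()
    return {"data": data, "prev_hash": prev_hash, "hash": digest}

def valid(chain):
    """Recompute every hash; tampering with any earlier 'layer' shows up immediately."""
    for i, block in enumerate(chain):
        recomputed = hashlib.sha256((block["prev_hash"] + block["data"]).encode()).hexdigest()
        if recomputed != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a tiny chain: each block's hash depends on all earlier blocks.
genesis = make_block("genesis", "0" * 64)
b1 = make_block("Alice pays Bob 1", genesis["hash"])
b2 = make_block("Bob pays Carol 1", b1["hash"])
chain = [genesis, b1, b2]

print(valid(chain))                 # True
b1["data"] = "Alice pays Bob 1000"  # tamper with history...
print(valid(chain))                 # False: the amber cracks
```

Because each block's hash depends transitively on all prior data, rewriting any earlier entry invalidates everything after it, which is the "frozen in time and space" property the passage describes.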

That’s not to say that the price of Bitcoin cannot make further (and further…) new highs. After all, that is what speculative bubbles do (until they don’t).

Bitcoin and each new initial coin offering (ICO) should be thought of as software infrastructure innovation tools, not competing currencies. It’s the amber that determines their value, not the flies. Cryptocurrencies are a very significant value-added technological innovation that calls directly into question the government monopoly over money. This insurrection against government-manipulated fiat money will only grow more pronounced as cryptocurrencies catch on as transactional fiduciary media; at that point, who will need government money? The blockchain, though still in its infancy, is a really big deal.

While governments can’t control cryptocurrencies directly, why shouldn’t we expect cryptocurrencies to face the same fate as numbered Swiss bank accounts (whose secrecy remains legally enforced by Swiss law)? All local governments had to do was make it illegal to hide such accounts, thus forcing law-abiding citizens to become criminals if they failed to disclose them. We should expect similar anti-money-laundering hygiene and taxation among the cryptocurrencies. And the more a cryptocurrency’s perceived value rests on its electronic security layers, the more vulnerable its price is to such an eventual decree.

Bitcoins should be regarded as assets, or really equities, not as currencies. They are each little business plans — each perceived to create future value. They are not stores-of-value, but rather volatile expectations on the future success of these business plans. But most ICOs probably don’t have viable business plans; they are truly castles in the sky, relying only on momentum effects among the growing herd of crypto-investors. (The Securities and Exchange Commission is correct in looking at them as equities.) Thus, we should expect their current value to be derived by the same razor-thin equity risk premiums and bubbly growth expectations that we see throughout markets today. And we should expect that value to suffer the same fate as occurs at the end of every speculative bubble.

If you wanted to create your own private country with your own currency, no matter how safe you were from outside invaders, you’d be wise to start with some pre-existing store-of-value, such as a foreign currency, gold, or land. Otherwise, why would anyone trade for your new currency? Arbitrarily assigning a store-of-value component to a cryptocurrency, no matter how secure it is, is trying to do the same thing (except much easier than starting a new country). And somehow it’s been working.

Moreover, as competing cryptocurrencies are created, whether for specific applications (such as automating contracts) or not, these ICOs seem to have the effect of driving up all cryptocurrencies. Clearly, there is the potential for additional cryptocurrencies to bolster the transactional value of each other—perhaps even adding to the fungibility of all cryptocurrencies. But as various cryptocurrencies start competing with each other, they will not be additive in value. The technology, like new innovations, can, in fact, create some value from thin air. But not so any underlying store-of-value component in the cryptocurrencies. As a new cryptocurrency is assigned units of a store-of-value, those units must, by necessity, leave other stores-of-value, whether gold or another cryptocurrency. New depositories of value must siphon off the existing depositories of value. On a global scale, it is very much a zero-sum game.

Or, as we might say, we can improve the layers of amber, but we can’t create more flies.

This competition, both in the technology and the underlying store-of-value, must, by definition, constrain each specific cryptocurrency’s price appreciation. Put simply, cryptocurrencies have an enormous scarcity problem. The constraints on any one cryptocurrency’s supply are an enormous improvement over the lack of any constraint whatsoever on governments when it comes to printing currencies. However, unlike physical assets such as gold and silver, whose unique physical attributes have endowed them with monetary importance for millennia, cryptocurrencies have no barrier to entry; as each new competing cryptocurrency finds success, it dilutes or inflates the universe of the others.

The store-of-value component of cryptocurrencies — which is, at a bare minimum, a fundamental requirement for safe haven status — is a minuscule part of their value and appreciation. After all, stores of value are just that: stable and reliable holding places of value. They do not create new value, but are finite in supply and are merely intended to hold value that has already been created through savings and productive investment. To miss this point is to perpetuate the very same fallacy that global central banks blindly follow today. You simply cannot create money, or capital, from thin air (whether it be credit or a new cool cryptocurrency). Rather, money represents resources that have been created and saved for future consumption. There is simply no way around this fundamental truth.

Viewing cryptocurrencies as having safe haven status opens investors to layering more risk on their portfolios. Holding Bitcoin and other cryptocurrencies likely constitutes a bigger bet on the very central bank-driven bubble that some hope to protect themselves against. The great irony is that the libertarian supporters of cryptocurrencies and the interventionist supporters of central bank-manipulated fiat money both fall for the same fallacy.

Cryptocurrencies are a very important development and an enormous step toward the decentralization of monetary power. This has enormously positive potential, and I am a big cheerleader for their success. But caveat emptor—thinking that we are magically creating new stores of value, and thus a new safe haven, is a profound mistake.


I would love this bike, but at $70,000 it is just a touch too much. This one is for sale in Auckland. It is new for 2018.

The final edition Panigale 1299R.




Larry Walters always wanted to fly. When he was old enough, he joined the Air Force, but he could not see well enough to become a pilot. After he was discharged from the military, he would often sit in his backyard watching jets fly overhead, dreaming about flying and scheming about how to get into the sky. On July 2, 1982, the San Pedro, California trucker finally set out to accomplish his dream. Because the story has been told in a variety of ways over a variety of media outlets, it is impossible to know precisely what happened but, as a police officer commented later, “It wasn’t a highly scientific expedition.”

Larry conceived his “act of American ingenuity” while sitting outside in his “extremely comfortable” Sears lawn chair. He purchased weather balloons from an Army-Navy surplus store, tied them to his tethered Sears chair and filled the four-foot diameter balloons with helium. Then, after packing sandwiches, Miller Lite, a CB radio, a camera, a pellet gun, and 30 one-pound jugs of water for ballast – but without a seatbelt – he climbed into his makeshift craft, dubbed “Inspiration I.” His plan, such as it was, called for him to float lazily above the rooftops at about 30 feet for a while, pounding beers, and then to use the pellet gun to explode the balloons one-by-one so he could float to the ground.

But when the last cord that tethered the craft to his Jeep snapped, Walters and his lawn chair did not rise lazily into the sky. Larry shot up to an altitude of about three miles (higher than a Cessna can go), yanked by the lift of 45 helium balloons holding 33 cubic feet of helium each. He did not dare shoot any of the balloons because he feared that he might unbalance the load and fall. So he slowly drifted along, cold and frightened, in his lawn chair, with his beer and sandwiches, for more than 14 hours. He eventually crossed the primary approach corridor of LAX. A flustered TWA pilot spotted Larry and radioed the tower that he was passing a guy in a lawn chair with a gun at 16,000 feet.

Eventually Larry conjured up the nerve to shoot several balloons before accidentally dropping his pellet gun overboard. The shooting did the trick and Larry descended toward Long Beach, until the dangling tethers got caught in a power line, causing an electrical blackout in the neighborhood below. Fortunately, Walters was able to climb to the ground safely from there.

The Long Beach Police Department and federal authorities were waiting. Regional safety inspector Neal Savoy said, “We know he broke some part of the Federal Aviation Act, and as soon as we decide which part it is, some type of charge will be filed. If he had a pilot’s license, we’d suspend that. But he doesn’t.” As he was led away in handcuffs, a reporter asked Larry why he had undertaken his mission. The answer was simple and poignant. “A man can’t just sit around,” he said.

The Inversion Principle

In one of the more glaringly obvious observations of all time, it is safe to say that Larry's decision-making process was more than a bit flawed. The Bonehead Club of Dallas awarded him its top prize, Bonehead of the Year, but he earned only an honorable mention from the Darwin Awards people, presumably because, even though things did not turn out exactly as he planned (another glaringly obvious observation), he was incredibly lucky and his flight did not end in disaster. Among his many errors, Larry did not follow the inversion principle popularized in the investment world by Charlie Munger, who borrowed this highly useful idea from the great 19th Century German mathematician Carl Jacobi.

Invert, always invert (“man muss immer umkehren”).

Jacobi believed that the solution for many difficult problems could be found if the problems were expressed in the inverse – by working or thinking backwards. As Munger has explained, “Invert. Always invert. Turn a situation or problem upside down. Look at it backward. What happens if all our plans go wrong? Where don’t we want to go, and how do you get there? Instead of looking for success, make a list of how to fail instead – through sloth, envy, resentment, self-pity, entitlement, all the mental habits of self-defeat. Avoid these qualities and you will succeed. Tell me where I’m going to die, that is, so I don’t go there.” Charlie’s partner, Warren Buffett, makes a similar point: “Charlie and I have not learned how to solve difficult business problems. What we have learned is to avoid them.”

As in most matters, we would do well to emulate Charlie. But what does that mean?

It begins with working backwards, to the extent you can, quite literally. If you have done algebra, you know that reversing an equation is the best way to check your work. Similarly, the best way to proofread is back-to-front, one painstaking sentence at a time. But it also means much more than that.
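The algebra point can be made literal with a toy check (the numbers here are my own, chosen for illustration): solve the equation forward, then invert the work by substituting the answer back in.

```python
# Forward: solve 3x + 5 = 20 by isolating x.
x = (20 - 5) / 3

# Inverse: substitute the answer back to verify the work.
assert 3 * x + 5 == 20  # the check "works backwards" from the solution
```

Running the equation in reverse costs almost nothing and catches the arithmetic slip that forward-only work tends to miss.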

Thinking in Reverse

Charlie’s inversion principle also means thinking in reverse. As Munger explains it: “In other words, if you want to help India, the question you should ask is not, ‘How can I help India?’ It’s, ‘What is doing the worst damage in India?’”

During World War II, the Allied forces sent regular bombing missions into Germany. The lumbering aircraft sent on these raids – most often B-17s – were strategically crucial to the war effort and were often lost to enemy anti-aircraft fire. That was a huge problem, obviously.

Boeing XB-17

One possible solution was to add more reinforcement to the Flying Fortresses, but armor is heavy and restricts aircraft performance, so extra plating could only go where the planes were most vulnerable. Deciding where to add armor was difficult because the data set was so limited: there was no access to the planes that had been shot down. In 1943, the English Air Ministry examined the locations of the bullet holes on the returned aircraft and proposed adding armor to the areas that showed the most damage, all at the planes' extremities.

The great mathematician Abraham Wald, who had fled Austria for the United States in 1938 to escape the Nazis, was put to work on the problem of estimating the survival probabilities of planes sustaining hits in various locations so that the added armor would be located most expeditiously. Wald came to a surprising and very different conclusion from that proposed by the Air Ministry. Since much of Wald’s analysis at the time was new – he did not have sufficient computing power to model results and did not have access to more recent statistical approaches – his work was ad hoc and his success was due to “the sheer power of his intuition” alone.

Wald began by drawing an outline of a plane and marking it where returning planes had been hit. There were shots everywhere except in a few particular (and crucial) areas, with more shots to the planes' extremities than anywhere else. By inverting the problem – considering where the planes that didn't return had been hit and what it would take to disable an aircraft, rather than examining the data he had from the returning bombers – Wald came to his unique insight, later confirmed by remarkable (for the time, and long classified) mathematical analysis. Much like Sherlock Holmes and the dog that didn't bark, Wald's remarkable intuitive leap came about due to what he didn't see. (That Wald's insight seems obvious now is a wonderful illustration of hindsight bias.)

Wald realized that the holes from flak and bullets most often seen on the bombers that returned represented the areas where planes were best able to absorb damage and survive. Since the data showed that there were similar areas on each returning B-17 showing little or no damage from enemy fire, Wald concluded that those areas (around the main cockpit and the fuel tanks) were the truly vulnerable spots and that these were the areas that should be reinforced.

From a mathematical perspective, Wald considered what might have happened to account for the data he possessed. He set the probability that a plane hit in the engine stays in the air to zero and thought about what that would mean; conceptually, he assumed that any hit to the engine would bring the plane down. Because planes returned from their missions with bullet holes everywhere but the engine, either the German gunfire had somehow hit every part of the plane but one, or the engine was a point of extreme vulnerability. Wald considered both possibilities, but the latter made much more sense.

The more useful data was in the planes that were shot down and unavailable, not the ones that survived, and had to be “gathered” by Wald via induction. This insight lies behind the related concepts we now call survivorship bias – our tendency to include only successes in statistical analysis, skewing or even invalidating the results – and selection bias – the distortions we see when the sample selection does not accurately reflect the target population. Thus, the size of the fish you observe in a pond will almost certainly correspond to the size of the holes in your net. Inverting the problem allowed Wald to come to the correct conclusion, saving many planes (and lives).
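A toy simulation makes the bias visible. Every probability below is invented for illustration; the point is only that if engine hits are usually fatal, the engine will look nearly untouched in the data from returning planes.

```python
import random

def simulate(n_planes=100_000, p_hit_engine=0.3,
             p_survive_engine_hit=0.1, p_survive_other_hit=0.9):
    """Toy survivorship-bias model: every plane takes one hit, either to
    the engine or elsewhere. Engine hits are usually fatal. We then tally
    hit locations only among the planes that make it home."""
    returned_engine = returned_other = 0
    for _ in range(n_planes):
        engine_hit = random.random() < p_hit_engine
        survives = random.random() < (p_survive_engine_hit if engine_hit
                                      else p_survive_other_hit)
        if survives and engine_hit:
            returned_engine += 1
        elif survives:
            returned_other += 1
    return returned_engine, returned_other

# In this model, 30% of all hits strike the engine, yet far fewer than
# 30% of the *returning* planes show engine damage. Armor the spots
# where the survivors look clean.
```

Counting only the survivors, as the Air Ministry proposed, points the armor at exactly the wrong places; inverting to ask about the missing planes points it at the engine.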

This idea applies to baseball too. As I have argued before, the crucial insight of Moneyball was a “Mungeresque” inversion. In baseball, a team wins by scoring more runs than its opponent. The epiphany was to invert the idea that runs and wins were achieved by hits to the radical notion that the key to winning is avoiding outs. That led the story’s protagonist, general manager of the Oakland A’s Billy Beane, to “buy” on-base percentage cheaply because the “traditional baseball men” overvalued hits but undervalued on-base percentage even though it does not matter how a batter avoids making an out and reaches base.

Therefore, the key application of the Moneyball insight was for Beane to find value via underappreciated player assets (some assets are cheap for good reason) by way of an objective, disciplined, data-driven process that values OBP more than conventional baseball wisdom. In other words, as Michael Lewis explained, “it is about using statistical analysis to shift the odds [of winning] a bit in one’s favor” via market inefficiencies. As A’s Assistant GM Paul DePodesta said, “You have to understand that for someone to become an Oakland A, he has to have something wrong with him. Because if he doesn’t have something wrong with him, he gets valued properly by the marketplace, and we can’t afford him anymore.” Accordingly, Beane sought out players that he could obtain cheaply because their actual (statistically verifiable) value was greater than their generally perceived value.

The great Howard Marks has also applied this idea to the investing world:

“If what’s obvious and what everyone knows is usually wrong, then what’s right? The answer comes from inverting the concept of obvious appeal. The truth is, the best buys are usually found in the things most people don’t understand or believe in. These might be securities, investment approaches or investing concepts, but the fact that something isn’t widely accepted usually serves as a green light to those who’re perceptive (and contrary) enough to see it.”

The key investment application of the inversion principle, therefore, is that in most cases we would be better served by looking closely at the examples of people and portfolios that failed and why they failed instead of the success stories, even though such examples are unlikely to give rise to book contracts with six-figure advances. Similarly, we would be better served by examining our personal investment failures than our successes. Instead of focusing on “why we made it,” we would be better served by careful failure analysis and fault diagnosis. That is where the best data is and where the best insight may be inferred.

The smartest people always question their assumptions to make sure they are justified. The data set available to Wald was not a good sample; by inverting his thinking, he could more readily hypothesize, and then conclude, that the sample was lacking.

Don’t Be Stupid

The inversion principle also means taking a step back (so to speak) to consider your goals in reverse. Our first goal, therefore, should not be to achieve success, even though that is highly intuitive. Note, for example, this recent list of 2017’s smartest companies, which focuses on “breakthrough technologies” and “successful” innovations. Instead, our first goal should be to avoid failure – to limit mistakes. Instead of trying so hard to be smart, we should invert that and spend more energy on not being stupid, in large measure because not being stupid is far more achievable and manageable than being brilliant. In general, we would be better off pulling the bad stuff out of our ideas and processes than trying to put more good stuff in.

As Munger has stated, “I think part of the popularity of Berkshire Hathaway is that we look like people who have found a trick. It’s not brilliance. It’s just avoiding stupidity.” Here is a variation: “we know the edge of our competency better than most. That’s a very worthwhile thing.” Buffett has a variation on this theme too: “Rule No. 1: Never lose money. Rule No. 2: Never forget rule No. 1.” Another is to be fearful when others are greedy and greedy when others are fearful. George Costanza has his own unique iteration (“If every instinct you have is wrong, then the opposite would have to be right”).

If we avoid mistakes, we will generally win. By examining failure more closely, we will have a better chance of doing precisely that. Basically, negative logic works better than positive logic. What we know not to be true is much more robust than what we know to be true. Note how Michelangelo thought about his master creation, the David. He always believed that David was within the block of marble he started with; he merely (which is not to say that it was anything like easy) had to chip away that which was not David. “In every block of marble I see a statue as plain as though it stood before me, shaped and perfect in attitude and action. I have only to hew away the rough walls that imprison the lovely apparition to reveal it to the other eyes as mine see it.” By chipping away at what “did not work,” Michelangelo uncovered a masterpiece. There are not a lot of masterpieces in life, but by avoiding failure, we give ourselves the best chance of overall success.

As Charley Ellis famously established, investing is a loser’s game much of the time (as I have also noted before) – with outcomes dominated by luck rather than skill and high transaction costs. Charley employed the work of Simon Ramo, a scientist and statistician, from Extraordinary Tennis for the Ordinary Player, who showed that professional tennis players and weekend tennis players play a fundamentally different game. The expert player, playing another expert player, needs to win points affirmatively through good shot-making to succeed. The weekend player wins by not losing – keeping the ball in play until his or her opponent makes an error, because weaker players make many more errors.

“In expert tennis, about 80 per cent of the points are won; in amateur tennis, about 80 per cent of the points are lost. In other words, professional tennis is a Winner’s Game – the final outcome is determined by the activities of the winner – and amateur tennis is a Loser’s Game – the final outcome is determined by the activities of the loser. The two games are, in their fundamental characteristic, not at all the same. They are opposites.”

As Charlie wrote in a letter to Wesco Shareholders while he was chair of the company: “Wesco continues to try more to profit from always remembering the obvious than from grasping the esoteric. … It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent. There must be some wisdom in the folk saying, ‘It’s the strong swimmers who drown.’”

Moreover, it turns out that we can quantify this idea more precisely.

As Phil Birnbaum brilliantly suggested in Slate, not being stupid matters demonstrably more than being smart when a combination of luck and skill determines success. Suppose you are the GM of a baseball team and you are preparing for the annual draft. Avoiding a mistake helps more than being smart.

Suppose you have the 15th pick in the draft. You look at a player the Major League consensus says is the 20th best player and think he is better than that – perhaps the 10th best player. By contrast, the MLB consensus on another player is that he is the 15th best player but you think he is only the 30th best. What are the rewards and consequences if you are right about each player when the draft comes?

If the underrated player is available when your pick comes, you can snap him up for an effective gain of five spots. You get the 10th best player with the 15th pick. That is great. Of course, since everybody else is scouting too, you may not be the only one who recognizes the underrated player’s true value. Anybody with a pick ahead of you can steal your thunder. If that happens, your being smart did not help a bit.

If the overrated player is available when your turn comes up (in theory, he should be because he is the consensus 15th pick and you are picking 15th), you will pass on him, because you know he is not that good. If you had not done the scouting and done it right, you would have taken him with your 15th pick and suffered an effective loss of 15 spots by getting the 30th best player with the 15th pick. In that case, then, avoiding a mistake helped.

Moreover, and crucially, it does not matter if other teams scouted him correctly. You have dodged a bullet no matter what. Recognizing the undervalued player (being smart) only helps when you are alone in your recognition. Recognizing the overrated player (avoiding a mistake) always helps. Birnbaum’s moral: “You gain more by not being stupid than you do by being smart. Smart gets neutralized by other smart people. Stupid does not.” Thus, the importance of the error quotient becomes obvious (obviously, the lower the better).
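Birnbaum's asymmetry is easy to sketch in code. The model below is my own toy version of his thought experiment: spotting the sleeper pays off only if none of the 14 GMs picking ahead of you shares your insight, while passing on the bust pays off unconditionally.

```python
import random

def expected_gains(p_rival_smart, trials=100_000):
    """Monte Carlo of the draft thought experiment (hypothetical model).

    You hold pick 15. "Smart": the consensus-#20 player is truly #10, a
    5-slot gain, but only if no GM ahead of you also spotted him.
    "Not stupid": the consensus-#15 player is truly #30; skipping him
    saves 15 slots no matter what the other GMs know.
    """
    smart_gain = not_stupid_gain = 0
    for _ in range(trials):
        # Each of the 14 GMs ahead of you independently shares your
        # insight with probability p_rival_smart and grabs the sleeper.
        sleeper_taken = any(random.random() < p_rival_smart
                            for _ in range(14))
        if not sleeper_taken:
            smart_gain += 5       # you get the #10 player at pick 15
        not_stupid_gain += 15     # avoiding the bust always pays
    return smart_gain / trials, not_stupid_gain / trials
```

As `p_rival_smart` rises, the "smart" payoff decays toward zero while the "not stupid" payoff stays fixed at 15 slots, which is exactly Birnbaum's point: smart gets neutralized by other smart people, stupid does not.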

The same principle can also be demonstrated mathematically, as Birnbaum also noted. Gather ten people and show them a jar that contains equal numbers of $1, $5, $20, and $100 bills. Pull one out, at random, so nobody can see, and auction it off. If the bidders are generally smart, the bidding should top out at just below $31.50 (how much less will depend on the extent of the group’s loss aversion), the value of the average bill {(1+5+20+100) ÷ 4}. If you repeat the process but this time let two prospective bidders see the bill you picked, what happens? If you picked a $100 bill, the insiders should be willing to pay up to $99.99 for the bill. Neither of them will benefit much from the insider knowledge. However, if it is a $1 bill, neither of the insiders will bid. Without that knowledge, each of the insiders would have had a one-in-10 chance of paying $31.50 for the bill and suffering a loss of $30.50. On an expected value basis, each gained $3.05 from being an insider. Avoiding errors matters more than being smart.
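The auction arithmetic above can be replayed directly (these are the article's numbers, not mine):

```python
# Jar auction: equal numbers of $1, $5, $20, and $100 bills.
bills = [1, 5, 20, 100]
fair_value = sum(bills) / len(bills)   # average bill: $31.50

# An insider who sees a $1 bill simply abstains. An uninformed bidder
# in a field of 10 has a 1-in-10 chance of winning at ~fair value and
# eating the loss.
loss_if_duped = fair_value - 1         # overpaying for the $1 bill: $30.50
insider_edge = loss_if_duped / 10      # expected value of abstaining: $3.05
```

Knowing the bill is worth $100 earns the insider almost nothing, because the other insider bids it up; knowing it is worth $1 earns a guaranteed $3.05 of expected value. Avoided errors compound; cleverness gets competed away.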

That investing successfully is really hard suggests to most of us that being really smart should be a big plus in investing. Yet while it can help, the existence of other smart people together with copycats and hangers-on greatly dilutes the value of being market-smart. On the other hand, the impact of bad decision-making stands alone. It is not lessened by the related stupidity of others. In fact, the more people act stupidly together, the greater the aggregate risk and the greater the potential for loss. This risk grows exponentially. Think of everyone piling on during the tech or real estate bubbles. When nearly all of us make the same kinds of poor decisions together – when the error quotient is especially high – the danger becomes enormous.


Science is perhaps the quintessential inversion. It is the most powerful tool there is for determining what is real and what is true, and yet it advances only by ascertaining what is false. In other words, it works due to disconfirmation rather than confirmation. As Munger observed about Charles Darwin: “Darwin’s result was due in large measure to his working method, which violated all my rules for misery and particularly emphasized a backward twist in that he always gave priority attention to evidence tending to disconfirm whatever cherished and hard-won theory he already had. In contrast, most people early achieve and later intensify a tendency to process new and disconfirming information so that any original conclusion remains intact. They become people of whom Philip Wylie observed: ‘You couldn’t squeeze a dime between what they already know and what they will never learn.’”

The Oxford English Dictionary defines the scientific method as “a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement and experiment, and the formulation, testing, and modification of hypotheses.” Science is about making observations and then asking pertinent questions about those observations. What it means is that we observe and investigate the world and build our knowledge base on account of what we learn and discover, but we check our work at every point and keep checking our work. It is inherently experimental. In order to be scientific, then, our inquiries and conclusions must be based upon empirical, measurable evidence. We will never just “know.”

The scientific method, broadly construed, can and should be applied not only to traditional scientific endeavors, but also, to the fullest extent possible, to any sort of inquiry into or study about the nature of reality, including investing. As I have noted before, the great physicist and Nobel laureate Richard Feynman even applied such experimentation to hitting on women. To his surprise, he learned that he (at least) was more successful by being aloof than by being polite or by buying a woman he found attractive a drink.

David Wootton’s brilliant book, The Invention of Science, makes a compelling case that modernity began with the scientific revolution in Europe, book-ended by Danish astronomer Tycho Brahe’s identification of a new star in the heavens in 1572, which proved that the heavens were not fixed, and the publication of Isaac Newton’s Opticks in 1704, which drew conclusions based upon experimentation. In Wootton’s view, this was “the most important transformation in human history” since the Neolithic era and in no small measure predicated upon a scientific mindset, which includes the unprejudiced observation of nature, careful data collection, and rigorous experimentation. In his view, the “scientific way of thinking has become so much part of our culture that it has now become difficult to think our way back into a world where people did not speak of facts, hypotheses and theories, where knowledge was not grounded in evidence, where nature did not have laws.” I think Wootton’s claim is surely true, even if honored mainly in the breach.

The scientific approach was truly a new way of thinking (despite historical antecedents). Wootton shows that when Christopher Columbus came to the New World in 1492, he did not have a word to describe what he had done (or at least appeared to have done, with apologies to the Vikings). It was the Portuguese, the first global imperial power, who introduced the term “discovery” in the early 16th Century. There were other new words and concepts that were also important when trying to understand the scientific revolution, such as “fact” (only widely used after 1663), “evidence” (incorporated into science from the legal system) and “experiment.”

As Wootton explains, knowledge, as it was espoused in medieval universities and monasteries, was dominated by the ancients, the likes of Ptolemy, Galen, and Aristotle. Accordingly, it was widely believed that all of the most important knowledge was already known. Thus, learning was predominantly a backward-facing pursuit, about returning to ancient first principles, not pushing into the unknown. Indeed, Wootton details the emergence of fact and evidence as previously unknown terms of art. The modern scientific pursuit is the “formation of a critical community capable of assessing discoveries and replicating results.”

In its broadest context, science is the careful, systematic and logical search for knowledge, obtained by examination of the best available evidence and always subject to correction and improvement upon the discovery of better or additional evidence. That is the essence of what has come to be known as the scientific method, which is the process by which we, collectively and over time, endeavor to construct an accurate (that is, reliable, consistent and non-arbitrary) representation of the world. Otherwise (per James Randi), we are doing magic, and magic simply does not work.

Aristotle, brilliant and important as he was, posited, for example, that heavy objects fall faster than lighter objects and that males and females have different numbers of teeth, based upon some careful – though flawed – reasoning. But it never seemed to have occurred to him that he ought to check. Checking and then re-checking your ideas or work offers evidence that may tend to confirm or disprove them. By collecting “a long-term data set,” per field biologist George Schaller, “you find out what actually happens.” Testing can also be reproduced by any skeptic, which means that you need not simply trust the proponent of any idea. You do not need to take anyone’s word for things — you can check it out for yourself. That is the essence of the scientific endeavor.

Science is inherently limiting, however. We want deductive proof in the manner of Aristotle, but have to settle for induction. That is because science can never fully prove anything. It analyzes the available data and, when the force of the data is strong enough, it makes tentative conclusions. Moreover, these conclusions are always subject to modification or even outright rejection based upon further evidence gathering. The great value of facts and data is not so much that they point toward the correct conclusion (even though they do), but that they allow us the ability to show that some things are conclusively wrong.

Science progresses not via verification (which can only be inferred) but by falsification (which, if established and itself verified, provides relative certainty only as to what is not true). That makes it unwieldy. Thank you, Karl Popper. In investing, as in science generally, we need to build our processes from the ground up, with hypotheses offered only after a careful analysis of all relevant facts and tentatively held only to the extent the facts and data allow.

In investing, much like science generally and as in life, if we avoid mistakes we will generally win. We all want to be Michael Burry, an investor who made a fortune because he recognized the mortgage bubble in time to act accordingly. However, becoming Michael Burry starts by not being Wing Chau, an investor of Lawn Chair Larry foolishness who got crushed when the mortgage market collapsed. In fact, we all suffered when the real estate bubble burst. When the error quotient is especially high, our risks grow exponentially. Success starts with avoiding errors and looking at problems and situations differently.

Invert. Always invert.

