August 2017


In writing my latest Thoughts from the Frontline, I reached out to my contacts looking for an uber-bull—someone utterly convinced that the market is on solid ground, with good evidence for their view.

Fortunately, a good friend who must remain nameless shared with me an August 4 slide deck from Krishna Memani, Chief Investment Officer of Oppenheimer Funds.

The current bull market is the second longest and has the third-highest gain. It will be the longest stock bull market of the modern era if it can last another two years or so.

However, he thinks the present bull market will continue for another year.

Here’s Memani:

For some investors, the sheer age of this cycle is enough to cause consternation. Yet there is nothing magical about the passage of time. As we have said time and again, bull markets do not die of old age. Like people, bull markets ultimately die when the system can no longer fight off maladies. In order for the cycle to end there needs to be a catalyst—either a major policy mistake or a significant economic disruption in one of the world’s major economies. In our view, neither appears to be in the offing.

15 Events That Could Be a Catalyst for the Next Recession

He goes on to list 15 specific events he thinks would be necessary to make him abandon his bullish position. (Comments in parentheses and italics are mine.)

1. Global growth would have had to decelerate. It is not.

(European growth is actually picking up. Germany blinked on financing Italian bank debt, and the markets now have more confidence that Draghi can do whatever it takes.)

2. Wages and inflation would have had to rise. They are not.

3. The Fed would have planned to tighten monetary policy significantly. It is not.

(They should have been raising rates four years ago. It is too late in the cycle now. They may raise rates once more, but the paltry amount of “quantitative tightening” they are likely to do is not going to amount to much. In fact, if for some reason they decided to go further with rate hikes and enter a tightening cycle, their monetary policy error would probably trigger a recession and a deep bear market. I think they realize that—or at least I hope they do.)

4. The ECB would have to tighten policy substantially. It will likely not.

(Draghi will go through the motions, though he knows he is limited in what he can actually do – unless for some unexpected reason Europe takes off to the upside. And while Eastern Europe is actually doing that, “Old Europe” is not.)

5. Credit growth would have had to be surging. It is not.

(Credit growth is generally picking up but not surging. And most of the credit growth is in government debt.)

6. Corporate animal spirits would have been taking off. They are not.

(That is basically true for most public corporations. There are a number of private companies and smaller businesses that are pretty optimistic.)

7. Equities would have had to be expensive relative to bonds. They are not.

8. FAANG stocks would have had to be at extreme valuations. They are not.

(I don’t think I buy this one.)

9. Investors would have had to be euphoric about equities. They are not.

10. The current cyclical rally within the secular bull would have had to be old and stretched. It is not.

(Not buying this one either.)

11. High-yield spreads would have to be widening. They are not.

(I pay attention to high-yield spreads, a classic warning sign of a turn in market behavior. Are they at dangerous levels? Damn, Skippy, I cannot believe some of the bonds that are being sold out in the marketplace. Not that I can’t believe the sellers are willing to take the money—you’d have to be an idiot not to take free money with no strings attached. I just don’t understand why major institutions are buying this nonsense.)

12. The classic signs of excess would have had to be evident. They are not.

(Kind of, sort of, but we are really beginning to stretch the point.)

13. China’s credit binge would have had to threaten the global financial system. It does not.

(Xi has somehow managed to push off the credit crisis, at least for the rest of this year, until after the five-year Congress. Rather amazing.)

14. Global trade would have had to be weakening. It is not.

15. The US dollar would have had to be strengthening. It is not.

That’s quite a list. Seeing it with the charts and Memani’s comments makes it even more compelling. To pick just one for closer scrutiny, let’s consider #7.

Are Equities Expensive Relative to Bonds?

That’s a good question because it really matters to big, long-term investors like pension funds.

Pension fund managers need to meet certain return targets, and they want to put the odds on their side. Treasury bonds offer certainty—presuming the US government doesn’t default. (Ask me about that again in October.)

Stocks may offer higher returns but more variation.

Memani explains this relationship by looking at earnings yield. That’s the inverse of the P/E ratio.

Essentially, it’s the percentage of each dollar invested in a stock that comes back as profits. Some gets distributed via dividends, buybacks, etc., and some is retained.
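For a concrete arithmetic sketch of the comparison Memani draws, here is a minimal example; the P/E and Treasury figures are illustrative placeholders of my own, not numbers from his deck:

```python
# Earnings yield is the inverse of the P/E ratio: profit per dollar invested.
# The P/E and Treasury-yield figures below are illustrative, not Memani's.

def earnings_yield(pe_ratio: float) -> float:
    """Return the earnings yield implied by a price-to-earnings ratio."""
    return 1.0 / pe_ratio

sp500_pe = 24.0          # hypothetical S&P 500 P/E
treasury_yield = 0.022   # hypothetical 10-year Treasury yield (2.2%)

ey = earnings_yield(sp500_pe)   # about 4.2 cents of earnings per dollar invested
spread = ey - treasury_yield    # positive spread: stocks cheap relative to bonds

print(f"Earnings yield: {ey:.2%}")
print(f"Spread over Treasuries: {spread:.2%}")
```

When the spread is positive, each dollar in stocks earns more than the bond yields; in the late-1990s mania the sign ran the other way.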

If you think there’s a stock mania today akin to the euphoria of the late 1990s, you’ll find no support in this ratio. Back then, bonds were dirt cheap compared to stock market earnings yield.

Now we have the reverse: stocks are cheap compared to bonds.

This is one of the most convincing bullish arguments I see now.

I remember the late ’90s very well. I called the top about three years early, never dreaming we could see a year like 1999. That will always be my mania benchmark—and today we are not even remotely near it. I don’t remember thinking much about bonds back then. No one else was, either.

But buying them would have turned out much better than buying stocks in 1997–99.



This rather misses the point: gold is protection against government-induced inflation and the like. If the dollar or whatever implodes, gold/silver are money.

Having waited patiently for the “any-minute-now” moment, gold investors are taking comfort from the recent rise in price in response to geopolitical tensions. Yet the responsiveness of gold, as well as the overall price, appears weaker than would have been expected from historically based models — and for understandable reasons. The precious metal’s status as a haven has been eroded by the influence of unconventional monetary policy and the growth of markets for cryptocurrencies.

Gold prices rose almost 1 percent on Tuesday morning as part of the risk aversion triggered by yet another brazen North Korean missile launch over Japan, together with uncertainty as to how the U.S. may respond. But with gold trading below $1,330, the overall response of its price to the last few months of heightened geopolitical risks has been relatively muted, particularly as the 10-year Treasury bond, another traditional haven, saw its yield trade down to below 2.10 percent that same morning.

Two immediate reasons come to mind, one related to several assets and the other more specifically to gold.

First, and as I have discussed in several Bloomberg View articles, the prolonged pursuit of unconventional measures by central banks has helped meaningfully decouple asset prices from underlying fundamentals. In such circumstances, historically based models will tend to overestimate the reaction of asset prices to heightened geopolitical tensions — including the fall in risk assets such as equities, or the rise in gold.

Second, a portion of the traditional buyer interest in gold has been diverted to the growing markets for cryptocurrencies, which are also benefiting from a general increase in demand. As such, the returns to investors there have been significantly greater, sucking in even more funds.

The message for investors in both gold and multi-asset-class portfolios is clear.

While continuing to play a role in diversified market exposures, gold is less of a risk mitigator and asset-class diversifier, for now. Luckily for investors, the need has also been less pronounced, given that ample market liquidity has boosted returns, repressed volatility, and distorted correlations in their favor. But this is not to say that gold’s traditional role will not be re-established down the road. After all, central banks are in the later stages of reliance on unconventional monetary measures and, given this year’s spectacular price appreciation, cryptocurrencies are more vulnerable to unsettling air pockets.


Well, the big fight came and went, and Mayweather won. I was hoping for an upset of sorts, with McGregor winning.

It pretty much went with the “experts’” assessment: McGregor needed to win fast, in the early rounds; otherwise, as eventuated, Mayweather would exert his superior [pure] boxing skills and carry the fight.


Rory Sutherland claims that the real function of swimming pools is to allow the middle class to sit around in bathing suits without looking ridiculous. Same with New York restaurants: you think their mission is to feed people, but that’s not what they do. They are in the business of selling you overpriced liquor or Great Tuscan wines by the glass, yet they get you in the door by serving you your low-carb (or low-something) dishes at breakeven cost. (This business model, of course, fails to work in Saudi Arabia.)

So when we look at religion and, to some extent, ancestral superstitions, we should consider what purpose they serve, rather than focusing on the notion of “belief”, epistemic belief in its strict scientific definition. In science, belief is literal belief; it is right or wrong, never metaphorical. In real life, belief is an instrument to do things, not the end product. This is similar to vision: the purpose of your eyes is to orient you in the best possible way, to get you out of trouble when needed, or to help you find prey at a distance. Your eyes are not sensors aimed at capturing the electromagnetic spectrum of reality. Their job description is not to produce the most accurate scientific representation of reality, but rather the most useful one for survival.

Ocular Deception

Our perceptual apparatus makes mistakes (distortions) in order to lead to more precise actions on our part: ocular deception, it turns out, is a necessary thing. Greek and Roman architects misrepresented the columns of their temples, tilting them inward, in order to give us the impression that the columns are straight. As Vitruvius explains, the aim is to “counteract the visual deception by a change of proportions”[i]. A distortion is meant to bring about an enhancement of your aesthetic experience. The floor of the Parthenon is in reality curved so that we see it as straight. The columns are in truth unevenly spaced, so that we see them lined up like a marching Russian division in a parade.

Should one go lodge a complaint with the Greek Tourism Office, claiming that the columns are not vertical and that someone is taking advantage of our visual weaknesses?

Temple of Bacchus, Baalbeck, Lebanon

Ergodicity First

The same applies to distortions of beliefs. Is this visual deceit any different from leading someone to believe in Santa Claus, if it enhances his or her holiday aesthetic experience? No, unless the person engages in actions that end up harming him or her.

In that sense, harboring superstitions is not irrational by any metric: nobody has managed to devise a metric for rationality based on process. Actions that harm you, on the other hand, are observable.

I have shown that, unless one has an overblown and (as with Greek columns) quite unrealistic representation of some tail risks, one cannot survive; all it takes is a single event for an irreversible exit from among us. Is selective paranoia “irrational” if those individuals and populations who don’t have it end up dying or extinct, respectively?

A statement that will orient us for the rest of the book:

Survival comes first; truth, understanding, and science later.

In other words, you do not need science to survive (we’ve done it for several hundred million years), but you need to survive to do science. As your grandmother would have said, better safe than sorry. This precedence is well understood by traders and people in the real world, as per Warren Buffett’s expression “to make money you must first survive” (skin in the game again); those of us who take risks have our priorities firmer than vague textbook notions such as “truth”. More technically, this brings us again to the ergodic property (I keep my promise to explain it in detail, but we are not ready yet): for the world to be “ergodic”, there needs to be no absorbing barrier, no substantial irreversibilities.
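As a toy illustration of the absorbing-barrier point (my own construction, not from the text): simulate a favorable repeated bet where wealth of zero is irreversible. The ensemble average looks healthy, yet a large fraction of individual paths exit permanently, which is exactly what breaks ergodicity.

```python
import random

def path(rounds: int, rng: random.Random, start: int = 5) -> int:
    """A favorable coin: +1 with probability 0.55, -1 otherwise.
    Wealth 0 is an absorbing barrier: once hit, the path stays there."""
    w = start
    for _ in range(rounds):
        if w == 0:          # irreversible exit: no recovery from ruin
            return 0
        w += 1 if rng.random() < 0.55 else -1
    return w

rng = random.Random(42)
endings = [path(200, rng) for _ in range(10_000)]
avg = sum(endings) / len(endings)
ruined = sum(e == 0 for e in endings) / len(endings)

print(f"average ending wealth: {avg:.1f}")     # the ensemble average looks fine
print(f"fraction ruined:       {ruined:.1%}")  # yet a large fraction exit for good
```

The bet is positive in expectation every round, but for any single player the only number that ultimately matters is whether their path ever touched zero.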

And what do we mean by “survival”? Survival of whom? Of you? Your family? Your tribe? Humanity? We will get into the details later but note for now that I have a finite shelf life; my survival is not as important as that of things that do not have a limited life expectancy, such as mankind or planet earth. Hence the more “systemic”, the more important such a survival becomes.

An illustration of the bias-variance tradeoff. Assume two people (sober) are shooting at a target in, say, Texas. The top shooter has a bias, a systematic “error,” but on balance gets closer to the target than the bottom shooter, who has no systematic bias but a high variance. Typically, you cannot reduce one without increasing the other. When fragile, the strategy at the top is the best: maintain a distance from ruin, that is, avoid hitting a point in the periphery should the periphery be dangerous. This schema explains why, if you want to minimize the probability of a plane crashing, you may make mistakes with impunity provided you lower your dispersion.


Three rigorous thinkers will orient my thinking on the matter: on one hand, the cognitive scientist and polymath Herb Simon, a pioneer of Artificial Intelligence, and the school of thought derived from him, led by Gerd Gigerenzer; on the other, the mathematician, logician, and decision theorist Ken Binmore, who spent his life formulating the logical foundations of rationality.

From Simon to Gigerenzer

Simon formulated the notion now known as bounded rationality: we cannot possibly measure and assess everything as if we were a computer; we therefore produce, under evolutionary pressures, shortcuts and distortions. Our knowledge of the world is fundamentally incomplete, so we need to avoid getting into unanticipated trouble. And even if our knowledge of the world were complete, it would still be computationally near-impossible to produce a precise, unbiased understanding of reality. A fertile research program on ecological rationality came out of this, mostly organized and led by Gerd Gigerenzer, mapping how many of the things we do that appear, on the surface, illogical have deeper reasons.

Ken Binmore

As for Ken Binmore, he showed that the concept casually dubbed “rational” is ill-defined, in fact so ill-defined that many uses of the term are just gibberish. There is nothing particularly irrational in beliefs per se (given that they can be shortcuts and instrumental to something else): for him, everything lies in the notion of “revealed preferences”, which we explain next.

Binmore also saw that criticism of the “rational” man as posited by economic theory is often a strawman argument that distorts the theory in order to bring it down. He recounts that economic theory, as posited in the original texts, is not so strict in its definition of “utility”, that is, the satisfaction a consumer or decision-maker derives from a certain outcome. Satisfaction does not necessarily have to be monetary. There is nothing irrational, according to economic theory, in giving your money to a stranger, if that is what makes you tick. And don’t try to invoke Adam Smith: he was a philosopher, not an accountant; he never equated human interests and aims with narrow accounting entries.

Revelation of Preferences

Next let us develop the following three points:

Judging people on their beliefs is not scientific

There is no such thing as “rationality” of a belief, there is rationality of action

The rationality of an action can only be judged by evolutionary considerations

The axiom of revelation of preferences states the following: you will not get an idea of what people really think, of what predicts people’s actions, merely by asking them; they themselves don’t know. What matters, in the end, is what they pay for goods, not what they say they “think” about them, or the reasons they give you or themselves for it. (Think about it: revelation of preferences is skin in the game.) Even psychologists get it; in their experiments, their procedures require that actual dollars be spent for the test to be “scientific”. The subjects are given a monetary amount, and the experimenters watch how they formulate choices by spending it. However, a large share of psychologists fughedabout the point when they start bloviating about rationality. They revert to judging beliefs rather than actions.

For beliefs are … cheap talk. A foundational principle of decision theory (and one at the basis of neoclassical economics, rational choice, and similar disciplines) is that what goes on in people’s heads isn’t the business of science. First, what they think may not be measurable enough to lend itself to scientific investigation. Second, it is not testable. Finally, there may be some type of translation mechanism too hard for us to understand, with distortions at the level of the process that are actually necessary for thinking to work.

Actually, by a mechanism more technically called the bias-variance tradeoff, you often get better results by making some type of “error,” as when you aim slightly away from the target when shooting. I have shown in Antifragile that making some types of errors is the most rational thing to do when the errors are of little cost, as they lead to gains and discoveries.
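The shooting analogy can be simulated directly. The numbers below are my own assumed parameters: one shooter is biased but steady, the other unbiased but wild, and “ruin” is landing far from the center.

```python
import random

def shots(bias: float, spread: float, n: int, rng: random.Random) -> list:
    """Horizontal impact points for a shooter aiming at 0,
    with a systematic bias and Gaussian dispersion (spread)."""
    return [bias + rng.gauss(0.0, spread) for _ in range(n)]

def ruin_rate(xs, danger: float = 3.0) -> float:
    """Fraction of shots landing in the dangerous periphery."""
    return sum(abs(x) > danger for x in xs) / len(xs)

rng = random.Random(0)
biased_steady = shots(bias=1.0, spread=0.3, n=10_000, rng=rng)  # top shooter
unbiased_wild = shots(bias=0.0, spread=2.0, n=10_000, rng=rng)  # bottom shooter

# The biased, low-variance shooter essentially never strays into the danger
# zone; the unbiased, high-variance shooter does so regularly.
print(f"biased+steady ruin rate: {ruin_rate(biased_steady):.1%}")
print(f"unbiased+wild ruin rate: {ruin_rate(unbiased_wild):.1%}")
```

The biased shooter never hits dead center, but under fragility that is the right trade: dispersion, not average accuracy, is what walks you into ruin.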

This is why I have been against the State dictating to us what we “should” be doing: only evolution knows if the “wrong” thing is really wrong, provided there is skin in the game for that.

The classical “large world vs. small world” problem. Science is currently too incomplete to provide all answers, and says so itself. We have been so much under assault by vendors using “science” to sell products that many people confuse science and scientism. Science is mainly rigor.

What is Religion About ?

It is therefore my opinion that religion is here to enforce tail risk management across generations, as its binary and unconditional rules are easy to teach and enforce. We have survived in spite of tail risks; our survival cannot be that random.

Recall that skin in the game means that you do not pay attention to what people say, only to what they do, and how much of their neck they are putting on the line. Let survival work its wonders.

Superstitions can be vectors for risk-management rules. We have, as potent information, the fact that people who have them have survived; to repeat: never discount anything that allows you to survive. For instance, Jared Diamond discusses the “constructive paranoia” of residents of Papua New Guinea, whose superstitions prevent them from sleeping under dead trees. [1] Whether it is superstition or something else, say some deep scientific understanding of probability, that is stopping you doesn’t matter, so long as you don’t sleep under dead trees. And if you dream of making people use probability in order to make decisions, I have some news: close to ninety percent of psychologists dealing with decision-making (a group that includes such regulators as Cass Sunstein) have no clue about probability, and try to disrupt our organic paranoid mechanism.

Further, I find it incoherent to criticize someone’s superstitions if these are meant to bring some benefits, yet not do so with the optical illusions in Greek temples.

The notion of “rational” bandied about by all manner of promoters of scientism isn’t defined well enough to be used for beliefs. To repeat, we do not have enough grounds to discuss “irrational beliefs”. We do for irrational actions.

Now, what people say may have a purpose; it is not just what they think it means. Let us extend the idea outside of buying and selling to the risk domain: opinions are cheap unless people take risks for them.

Extending such logic, we can show that much of what we call “belief” is some kind of background furniture for the human mind, more metaphorical than real. It may work as therapy.

“Tawk” and Cheap “Tawk”

The first principle we make:

There is a difference between beliefs that are decorative and a different sort of beliefs, those that map to action.

In words there is no difference between them; the true difference reveals itself in risk taking: having something at stake, something one could lose in case one is wrong.

And the lesson, by rephrasing the principle:

How much you truly “believe” in something can only be manifested through what you are willing to risk for it.

But this merits continuation. The fact that there is a decorative component to belief, these strange rules followed outside the Gemelli clinics of the world, merits a discussion. What are they for? Can we truly understand their function? Are we confused about their function? Do we mistake their rationality? Can we use them instead to define rationality?

What Does Lindy Say?

Let us see what Lindy has to say about “rationality”. While the notions of “reason” and “reasonable” were present in ancient thought, mostly embedded in the notion of precaution, or sophrosyne, the modern idea of “rationality” and “rational decision-making” was born in the aftermath of Max Weber, with the works of psychologists, philosophasters, and psychosophasters. The classical sophrosyne is precaution, self-control, and temperance, all in one. It was replaced with something a bit different. “Rationality” was forged in the post-enlightenment period[2], at a time when we thought that understanding the world was around the corner. It assumes no randomness, or at best a simplified random structure of our world, and, of course, no interactions with the world.

The only definition of rationality I have found that is practically, empirically, and mathematically rigorous is that of survival; and indeed, unlike the modern theories of psychosophasters, it maps to the classics. Anything that hinders one’s survival at an individual, collective, tribal, or general level is deemed irrational.

Hence the precautionary principle and sound risk understanding.

It may be “irrational” for people to have two sinks in their kitchen, one for meat and the other for dairy, but as we saw, it led to the survival of the Jewish community, as Kashrut laws forced them to eat together and bond together.

It is also rational to see things differently from the “way they are”, for improved performance.

It is also difficult to map beliefs to reality. A decorative or instrumental belief, say, belief in Santa Claus or in the potential anger of Baal, can be rational if it leads to increased survival.

The Nondecorative in the Decorative

Now, what we have called decorative is not necessarily superfluous, often to the contrary. Decorative beliefs may just have another function we do not know much about. For that we can consult the grandmaster statistician, time, through a very technical tool called the survival function, known to both old people and very complex statistics; we will resort here to the old people’s version.

The fact to consider is not that these beliefs have survived a long time (the Catholic Church is an administration close to twenty-four centuries old, largely the continuation of the Roman Republic). It is that people who have religion, a certain religion, have survived.

Another principle:

When you consider beliefs do not assess them in how they compete with other beliefs, but consider the survival of the populations that have them.

Consider a competitor to the Pope’s religion, Judaism. Jews have close to five hundred different dietary interdicts. These may seem irrational to an observer who sees purpose in things and defines rationality in terms of what he can explain; actually, they will most certainly seem so. The Jewish Kashrut prescribes keeping four sets of dishes, two sinks, the avoidance of mixing meat with dairy products or merely letting the two be in contact with each other, in addition to interdicts on some animals: shrimp, pork, etc. The good stuff.

These laws might have had an ex ante purpose. One can blame the insalubrious behavior of pigs, exacerbated by the heat in the Levant (though heat in the Levant was not markedly different from that in pig-eating areas further west). Or perhaps an ecological reason: pigs compete with humans in eating the same vegetables, while cows eat what we don’t eat.

But it remains that, whatever the purpose, the Kashrut has survived approximately three millennia not because of its “rationality” but because the populations that followed it survived. It most certainly brought cohesion: people who eat together hang together. Simply, it aided those who followed it because it is a convex heuristic. Such group cohesion might also be responsible for trust in commercial transactions with remote members of the community.

This allows us to summarize:

Rationality is not what has conscious verbalistic explanatory factors; it is only what aids survival, avoids ruin.

Rationality is risk management, period.

[1] “Consider: If you’re a New Guinean living in the forest, and if you adopt the bad habit of sleeping under dead trees whose odds of falling on you that particular night are only 1 in 1,000, you’ll be dead within a few years. In fact, my wife was nearly killed by a falling tree last year, and I’ve survived numerous nearly fatal situations in New Guinea.”


Every further new high in the price of Bitcoin brings ever more claims that it is destined to become the preeminent safe haven investment of the modern age — the new gold.

But there’s no getting around the fact that Bitcoin is essentially a speculative investment in a new technology, specifically the blockchain. Think of the blockchain, very basically, as layers of independent electronic security that encapsulate a cryptocurrency and keep it frozen in time and space — like layers of amber around a fly. This is what makes a cryptocurrency “crypto.”
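To make the amber metaphor slightly more concrete, here is a toy hash chain of my own devising. It is a drastic simplification of a real blockchain (no proof-of-work, no network, no consensus), but it shows the core property: each layer’s hash covers the layer beneath it, so tampering with any old block invalidates everything built on top of it.

```python
import hashlib

GENESIS = "0" * 64  # the first block points at an all-zero hash

def block_hash(data: str, prev_hash: str) -> str:
    """Hash a block's payload together with the previous block's hash."""
    return hashlib.sha256(f"{prev_hash}:{data}".encode()).hexdigest()

def build_chain(payloads):
    """Link each block to its predecessor via its hash."""
    chain, prev = [], GENESIS
    for data in payloads:
        h = block_hash(data, prev)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def valid(chain) -> bool:
    """Recompute every hash; any mismatch breaks the whole chain."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["data"], prev):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["tx1", "tx2", "tx3"])
print(valid(chain))        # True
chain[0]["data"] = "tx1b"  # tamper with the oldest "layer of amber"
print(valid(chain))        # False: every later layer no longer matches
```

The deeper a block is buried, the more layers must be recomputed to hide a change, which is what makes the record hard to rewrite.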

That’s not to say that the price of Bitcoin cannot make further (and further…) new highs. After all, that is what speculative bubbles do (until they don’t).

Bitcoin and each new initial coin offering (ICO) should be thought of as software infrastructure innovation tools, not competing currencies. It’s the amber that determines their value, not the flies. Cryptocurrencies are a very significant value-added technological innovation that calls directly into question the government monopoly over money. This insurrection against government-manipulated fiat money will only grow more pronounced as cryptocurrencies catch on as transactional fiduciary media; at that point, who will need government money? The blockchain, though still in its infancy, is a really big deal.

While governments can’t control cryptocurrencies directly, why shouldn’t we expect cryptocurrencies to face the same fate as numbered Swiss bank accounts (whose secrecy remains legally enforced by Swiss law)? All governments had to do was make it illegal to hide such accounts, thus forcing law-abiding citizens to become criminals if they failed to disclose them. We should expect similar anti-money-laundering hygiene and taxation among the cryptocurrencies. The more a cryptocurrency’s perceived value rests on its electronic security layers, the more vulnerable its price is to such an eventual decree.

Bitcoins should be regarded as assets, or really equities, not as currencies. They are each little business plans — each perceived to create future value. They are not stores-of-value, but rather volatile expectations on the future success of these business plans. But most ICOs probably don’t have viable business plans; they are truly castles in the sky, relying only on momentum effects among the growing herd of crypto-investors. (The Securities and Exchange Commission is correct in looking at them as equities.) Thus, we should expect their current value to be derived by the same razor-thin equity risk premiums and bubbly growth expectations that we see throughout markets today. And we should expect that value to suffer the same fate as occurs at the end of every speculative bubble.

If you wanted to create your own private country with your own currency, no matter how safe you were from outside invaders, you’d be wise to start with some pre-existing store-of-value, such as a foreign currency, gold, or land. Otherwise, why would anyone trade for your new currency? Arbitrarily assigning a store-of-value component to a cryptocurrency, no matter how secure it is, is trying to do the same thing (except much easier than starting a new country). And somehow it’s been working.

Moreover, as competing cryptocurrencies are created, whether for specific applications (such as automating contracts) or not, these ICOs seem to have the effect of driving up all cryptocurrencies. Clearly, there is the potential for additional cryptocurrencies to bolster the transactional value of each other—perhaps even adding to the fungibility of all cryptocurrencies. But as various cryptocurrencies start competing with each other, they will not be additive in value. The technology, like other innovations, can, in fact, create some value from thin air. But not so any underlying store-of-value component in the cryptocurrencies. As a new cryptocurrency is assigned units of a store-of-value, those units must, by necessity, leave other stores-of-value, whether gold or another cryptocurrency. New depositories of value must siphon off the existing depositories of value. On a global scale, it is very much a zero-sum game.

Or, as we might say, we can improve the layers of amber, but we can’t create more flies.

This competition, both in the technology and the underlying store-of-value, must, by definition, constrain each specific cryptocurrency’s price appreciation. Put simply, cryptocurrencies have an enormous scarcity problem. The constraints on any one cryptocurrency’s supply are an enormous improvement over the lack of any constraint whatsoever on governments when it comes to printing currencies. However, unlike physical assets such as gold and silver that have unique physical attributes endowing them with monetary importance for millennia, the problem is that there is no barrier to entry for cryptocurrencies; as each new competing cryptocurrency finds success, it dilutes or inflates the universe of the others.

The store-of-value component of cryptocurrencies — which is, at a bare-minimum, a fundamental requirement for safe haven status — is a minuscule part of its value and appreciation. After all, stores of value are just that: stable and reliable holding places of value. They do not create new value, but are finite in supply and are merely intended to hold value that has already been created through savings and productive investment. To miss this point is to perpetuate the very same fallacy that global central banks blindly follow today. You simply cannot create money, or capital, from thin air (whether it be credit or a new cool cryptocurrency). Rather, it represents resources that have been created and saved for future consumption. There is simply no way around this fundamental truth.

Viewing cryptocurrencies as having safe haven status opens investors to layering more risk on their portfolios. Holding Bitcoins and other cryptocurrencies likely constitutes a bigger bet on the same central bank-driven bubble that some hope to protect themselves against. The great irony is that both the libertarian supporters of cryptocurrencies and the interventionist supporters of central bank-manipulated fiat money both fall for this very same fallacy.

Cryptocurrencies are a very important development, and an enormous step in the direction toward the decentralization of monetary power. This has enormously positive potential, and I am a big cheerleader for their success. But caveat emptor—thinking that we are magically creating new stores-of-value and thus a new safe haven is a profound mistake.


I would love this bike, but at $70,000 it is just a touch too much. This one is for sale in Auckland. It is new for 2018.

The final edition Panigale 1299R.




Larry Walters always wanted to fly. When he was old enough, he joined the Air Force, but he could not see well enough to become a pilot. After he was discharged from the military, he would often sit in his backyard watching jets fly overhead, dreaming about flying and scheming about how to get into the sky. On July 2, 1982, the San Pedro, California trucker finally set out to accomplish his dream. Because the story has been told in a variety of ways over a variety of media outlets, it is impossible to know precisely what happened but, as a police officer commented later, “It wasn’t a highly scientific expedition.”

Larry conceived his “act of American ingenuity” while sitting outside in his “extremely comfortable” Sears lawn chair. He purchased weather balloons from an Army-Navy surplus store, tied them to his tethered Sears chair and filled the four-foot diameter balloons with helium. Then, after packing sandwiches, Miller Lite, a CB radio, a camera, a pellet gun, and 30 one-pound jugs of water for ballast – but without a seatbelt – he climbed into his makeshift craft, dubbed “Inspiration I.” His plan, such as it was, called for him to float lazily above the rooftops at about 30 feet for a while, pounding beers, and then to use the pellet gun to explode the balloons one-by-one so he could float to the ground.

But when the last cord that tethered the craft to his Jeep snapped, Walters and his lawn chair did not rise lazily into the sky. Larry shot up to an altitude of about three miles (higher than a Cessna can go), yanked by the lift of 45 helium balloons holding 33 cubic feet of helium each. He did not dare shoot any of the balloons because he feared that he might unbalance the load and fall. So he slowly drifted along, cold and frightened, in his lawn chair, with his beer and sandwiches, for more than 14 hours. He eventually crossed the primary approach corridor of LAX. A flustered TWA pilot spotted Larry and radioed the tower that he was passing a guy in a lawn chair with a gun at 16,000 feet.

Eventually Larry conjured up the nerve to shoot several balloons before accidentally dropping his pellet gun overboard. The shooting did the trick and Larry descended toward Long Beach, until the dangling tethers got caught in a power line, causing an electrical blackout in the neighborhood below. Fortunately, Walters was able to climb to the ground safely from there.

The Long Beach Police Department and federal authorities were waiting. Regional safety inspector Neal Savoy said, “We know he broke some part of the Federal Aviation Act, and as soon as we decide which part it is, some type of charge will be filed. If he had a pilot’s license, we’d suspend that. But he doesn’t.” As he was led away in handcuffs, a reporter asked Larry why he had undertaken his mission. The answer was simple and poignant. “A man can’t just sit around,” he said.

The Inversion Principle

In one of the more glaringly obvious observations of all time, it is safe to say that Larry’s decision-making process was more than a bit flawed. The Bonehead Club of Dallas awarded him its top prize – Bonehead of the Year – but he earned only an honorable mention from the Darwin Awards people, presumably because, even though things did not turn out exactly as he planned (another glaringly obvious observation), he was incredibly lucky and his flight did not end in disaster. Among his many errors, Larry did not follow the inversion principle popularized in the investment world by Charlie Munger. Charlie borrowed this highly useful idea from the great 19th-century German mathematician Carl Jacobi, who created it as an approach for improving the decision-making process.

Invert, always invert (“man muss immer umkehren”).

Jacobi believed that the solution for many difficult problems could be found if the problems were expressed in the inverse – by working or thinking backwards. As Munger has explained, “Invert. Always invert. Turn a situation or problem upside down. Look at it backward. What happens if all our plans go wrong? Where don’t we want to go, and how do you get there? Instead of looking for success, make a list of how to fail instead – through sloth, envy, resentment, self-pity, entitlement, all the mental habits of self-defeat. Avoid these qualities and you will succeed. Tell me where I’m going to die, that is, so I don’t go there.” Charlie’s partner, Warren Buffett, makes a similar point: “Charlie and I have not learned how to solve difficult business problems. What we have learned is to avoid them.”

As in most matters, we would do well to emulate Charlie. But what does that mean?

It begins with working backwards, to the extent you can, quite literally. If you have done algebra, you know that reversing an equation is the best way to check your work. Similarly, the best way to proofread is back-to-front, one painstaking sentence at a time. But it also means much more than that.

Thinking in Reverse

Charlie’s inversion principle also means thinking in reverse. As Munger explains it: “In other words, if you want to help India, the question you should ask is not, ‘How can I help India?’ It’s, ‘What is doing the worst damage in India?’”

During World War II, the Allied forces sent regular bombing missions into Germany. The lumbering aircraft sent on these raids – most often B-17s – were strategically crucial to the war effort and were often lost to enemy anti-aircraft fire. That was a huge problem, obviously.

Boeing XB-17

One possible solution was to provide more reinforcement for the Flying Fortresses, but armor is heavy and restricts aircraft performance even further. So extra plating could only go where the planes were most vulnerable. The problem of where to add armor was a difficult one because the data set was so limited: there was no access to the planes that had been shot down. In 1943, the British Air Ministry examined the locations of the bullet holes on the returned aircraft and proposed adding armor to those areas that showed the most damage, all at the planes’ extremities.

The great mathematician Abraham Wald, who had fled Austria for the United States in 1938 to escape the Nazis, was put to work on the problem of estimating the survival probabilities of planes sustaining hits in various locations so that the added armor would be allocated most effectively. Wald came to a surprising and very different conclusion from that proposed by the Air Ministry. Since much of Wald’s analysis at the time was new – he did not have sufficient computing power to model results and did not have access to more recent statistical approaches – his work was ad hoc and his success was due to “the sheer power of his intuition” alone.

Wald began by drawing an outline of a plane and marking it where returning planes had been hit. There were lots of shots everywhere except in a few particular (and crucial) areas, with more shots to the planes’ extremities than anywhere else. By inverting the problem – considering where the planes that didn’t return had been hit and what it would take to disable an aircraft, rather than examining the data he had from the returning bombers – Wald came to his unique insight, later confirmed by remarkable (for the time, and long classified) mathematical analysis. Much like Sherlock Holmes and the dog that didn’t bark, Wald’s remarkable intuitive leap came about due to what he didn’t see. (That Wald’s insight seems obvious now is a wonderful illustration of hindsight bias.)

Wald realized that the holes from flak and bullets most often seen on the bombers that returned represented the areas where planes were best able to absorb damage and survive. Since the data showed that there were similar areas on each returning B-17 showing little or no damage from enemy fire, Wald concluded that those areas (around the main cockpit and the fuel tanks) were the truly vulnerable spots and that these were the areas that should be reinforced.

From a mathematical perspective, Wald considered what might have happened to account for the data he possessed. He set the probability that a plane that took a hit to the engine managed to stay in the air to zero and thought about what that would mean. In other words, conceptually, he assumed that any hit to the engine would bring the plane down. Because planes returned from their missions with bullet holes everywhere but the engine, the only other explanation was that planes were never hit in the engine. Thus, either the German gunfire hit every part of the plane but one, or the engine was a point of extreme vulnerability. Wald considered both possibilities, but the latter made much more sense.

The more useful data was in the planes that were shot down and unavailable, not the ones that survived, and had to be “gathered” by Wald via induction. This insight lies behind the related concepts we now call survivorship bias – our tendency to include only successes in statistical analysis, skewing or even invalidating the results – and selection bias – the distortions we see when the sample selection does not accurately reflect the target population. Thus, the fish you observe in a pond will almost certainly correspond to the size of the holes in your net. Inverting the problem allowed Wald to come to the correct conclusion, saving many planes (and lives).
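Wald’s logic can be sketched in a toy Monte Carlo simulation. Everything below is invented for illustration (the airframe sections and survival probabilities are assumptions, not his data): hits land uniformly across the airframe, but planes hit in the engine rarely make it home, so the sample of survivors systematically under-reports engine hits.

```python
import random

random.seed(42)

SECTIONS = ["engine", "cockpit", "fuselage", "wings", "tail"]
# Assumed probability a plane survives a hit to each section (illustrative only)
SURVIVAL = {"engine": 0.10, "cockpit": 0.20, "fuselage": 0.90, "wings": 0.85, "tail": 0.80}

observed = {s: 0 for s in SECTIONS}   # hits counted on planes that made it back
for _ in range(10_000):               # each plane takes one hit, uniformly at random
    hit = random.choice(SECTIONS)
    if random.random() < SURVIVAL[hit]:   # plane survives and gets inspected
        observed[hit] += 1

# Survivors show few engine hits -- not because the engine is rarely hit,
# but because engine hits rarely come home.
for s in SECTIONS:
    print(f"{s:8s} hits seen on surviving planes: {observed[s]}")
```

A naive reading of the survivors’ bullet holes would armor the fuselage and wings; inverting the data says to armor the engine and cockpit, where the missing planes were hit.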

This idea applies to baseball too. As I have argued before, the crucial insight of Moneyball was a “Mungeresque” inversion. In baseball, a team wins by scoring more runs than its opponent. The epiphany was to invert the idea that runs and wins were achieved by hits to the radical notion that the key to winning is avoiding outs. That led the story’s protagonist, general manager of the Oakland A’s Billy Beane, to “buy” on-base percentage cheaply because the “traditional baseball men” overvalued hits but undervalued on-base percentage even though it does not matter how a batter avoids making an out and reaches base.

Therefore, the key application of the Moneyball insight was for Beane to find value via underappreciated player assets (some assets are cheap for good reason) by way of an objective, disciplined, data-driven process that values OBP more than conventional baseball wisdom. In other words, as Michael Lewis explained, “it is about using statistical analysis to shift the odds [of winning] a bit in one’s favor” via market inefficiencies. As A’s Assistant GM Paul DePodesta said, “You have to understand that for someone to become an Oakland A, he has to have something wrong with him. Because if he doesn’t have something wrong with him, he gets valued properly by the marketplace, and we can’t afford him anymore.” Accordingly, Beane sought out players that he could obtain cheaply because their actual (statistically verifiable) value was greater than their generally perceived value.

The great Howard Marks has also applied this idea to the investing world:

“If what’s obvious and what everyone knows is usually wrong, then what’s right? The answer comes from inverting the concept of obvious appeal. The truth is, the best buys are usually found in the things most people don’t understand or believe in. These might be securities, investment approaches or investing concepts, but the fact that something isn’t widely accepted usually serves as a green light to those who’re perceptive (and contrary) enough to see it.”

The key investment application of the inversion principle, therefore, is that in most cases we would be better served by looking closely at the examples of people and portfolios that failed and why they failed instead of the success stories, even though such examples are unlikely to give rise to book contracts with six-figure advances. Similarly, we would be better served by examining our personal investment failures than our successes. Instead of focusing on “why we made it,” we would be better served by careful failure analysis and fault diagnosis. That is where the best data is and where the best insight may be inferred.

The smartest people always question their assumptions to make sure that they are justified. The data set that was available to Wald was not a good sample. By inverting his thinking, Wald could more readily hypothesize and conclude that the sample was lacking.

Don’t Be Stupid

The inversion principle also means taking a step back (so to speak) to consider your goals in reverse. Our first goal, therefore, should not be to achieve success, even though that is highly intuitive. Note, for example, this recent list of 2017’s smartest companies, which focuses on “breakthrough technologies” and “successful” innovations. Instead, our first goal should be to avoid failure – to limit mistakes. Instead of trying so hard to be smart, we should invert that and spend more energy on not being stupid, in large measure because not being stupid is far more achievable and manageable than being brilliant. In general, we would be better off pulling the bad stuff out of our ideas and processes than trying to put more good stuff in.

As Munger has stated, “I think part of the popularity of Berkshire Hathaway is that we look like people who have found a trick. It’s not brilliance. It’s just avoiding stupidity.” Here is a variation: “we know the edge of our competency better than most. That’s a very worthwhile thing.” Buffett has a variation on this theme too: “Rule No. 1: Never lose money. Rule No. 2: Never forget rule No. 1.” Another is to be fearful when others are greedy and greedy when others are fearful. George Costanza has his own unique iteration (“If every instinct you have is wrong, then the opposite would have to be right”).

If we avoid mistakes we will generally win. By examining failure more closely, we will have a better chance of doing precisely that. Basically, negative logic works better than positive logic. What we know not to be true is much more robust than what we know to be true. Note how Michelangelo thought about his master creation, the David. He always believed that David was within the marble he started with. He merely (which is not to say that it was anything like easy) had to chip away that which was not David. “In every block of marble I see a statue as plain as though it stood before me, shaped and perfect in attitude and action. I have only to hew away the rough walls that imprison the lovely apparition to reveal it to the other eyes as mine see it.” By chipping away at what “did not work,” Michelangelo uncovered a masterpiece. There are not a lot of masterpieces in life, but by avoiding failure, we give ourselves the best chance of overall success.

As Charley Ellis famously established, investing is a loser’s game much of the time (as I have also noted before) – with outcomes dominated by luck rather than skill and high transaction costs. Charley employed the work of Simon Ramo, a scientist and statistician, from Extraordinary Tennis for the Ordinary Player, who showed that professional tennis players and weekend tennis players play a fundamentally different game. The expert player, playing another expert player, needs to win points affirmatively through good shot-making to succeed. The weekend player wins by not losing – keeping the ball in play until his or her opponent makes an error, because weaker players make many more errors.

“In expert tennis, about 80 per cent of the points are won; in amateur tennis, about 80 per cent of the points are lost. In other words, professional tennis is a Winner’s Game – the final outcome is determined by the activities of the winner – and amateur tennis is a Loser’s Game – the final outcome is determined by the activities of the loser. The two games are, in their fundamental characteristic, not at all the same. They are opposites.”

As Charlie wrote in a letter to Wesco shareholders while he was chair of the company: “Wesco continues to try more to profit from always remembering the obvious than from grasping the esoteric. … It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent. There must be some wisdom in the folk saying, ‘It’s the strong swimmers who drown.’”

Moreover, it turns out that we can quantify this idea more precisely.

As Phil Birnbaum brilliantly suggested in Slate, not being stupid matters demonstrably more than being smart when a combination of luck and skill determines success. Suppose you are the GM of a baseball team and you are preparing for the annual draft. Avoiding a mistake helps more than being smart.

Suppose you have the 15th pick in the draft. You look at a player the Major League consensus says is the 20th best player and think he is better than that – perhaps the 10th best player. By contrast, the MLB consensus on another player is that he is the 15th best player but you think he is only the 30th best. What are the rewards and consequences if you are right about each player when the draft comes?

If the underrated player is available when your pick comes, you can snap him up for an effective gain of five spots. You get the 10th best player with the 15th pick. That is great. Of course, since everybody else is scouting too, you may not be the only one who recognizes the underrated player’s true value. Anybody with a pick ahead of you can steal your thunder. If that happens, your being smart did not help a bit.

If the overrated player is available when your turn comes up (in theory, he should be because he is the consensus 15th pick and you are picking 15th), you will pass on him, because you know he is not that good. If you had not done the scouting and done it right, you would have taken him with your 15th pick and suffered an effective loss of 15 spots by getting the 30th best player with the 15th pick. In that case, then, avoiding a mistake helped.

Moreover, and crucially, it does not matter if other teams scouted him correctly. You have dodged a bullet no matter what. Recognizing the undervalued player (being smart) only helps when you are alone in your recognition. Recognizing the overrated player (avoiding a mistake) always helps. Birnbaum’s moral: “You gain more by not being stupid than you do by being smart. Smart gets neutralized by other smart people. Stupid does not.” Thus the importance of the error quotient becomes obvious (and the lower, the better).
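Birnbaum’s asymmetry can be sketched as a quick simulation. The numbers are assumptions for illustration: each of the 14 teams picking ahead of you is given a 30% chance of also spotting the sleeper, in which case your “smart” gain of five spots evaporates, while the “not stupid” gain of 15 spots pays off every time.

```python
import random

random.seed(0)

N_TRIALS = 100_000
TEAMS_AHEAD = 14        # you hold the 15th pick
P_OTHER_SMART = 0.3     # assumed chance each earlier team also spots the sleeper

smart_gain = 0.0        # spots gained by spotting the underrated player
not_stupid_gain = 0.0   # spots gained by passing on the overrated player

for _ in range(N_TRIALS):
    # "Smart" only pays off if nobody ahead of you grabs the sleeper first
    # (simplifying assumption: any earlier team that spots him takes him).
    sleeper_available = all(random.random() > P_OTHER_SMART for _ in range(TEAMS_AHEAD))
    if sleeper_available:
        smart_gain += 5          # 10th-best talent with the 15th pick
    # Avoiding the overrated player helps whether or not anyone else saw it.
    not_stupid_gain += 15        # dodge the 30th-best talent at pick 15

print(f"avg gain from being smart:      {smart_gain / N_TRIALS:.2f} draft spots")
print(f"avg gain from not being stupid: {not_stupid_gain / N_TRIALS:.2f} draft spots")
```

Under these assumptions the smart gain averages a small fraction of a draft spot, because other smart teams usually neutralize it, while the not-stupid gain is the full 15 spots every time.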

The same principle can also be demonstrated mathematically, as Birnbaum also noted. Gather ten people and show them a jar that contains equal numbers of $1, $5, $20, and $100 bills. Pull one out, at random, so nobody can see, and auction it off. If the bidders are generally smart, the bidding should top out at just below $31.50 – the value of the average bill, ($1 + $5 + $20 + $100) ÷ 4 – with how much less depending on the extent of the group’s loss aversion. If you repeat the process but this time let two prospective bidders see the bill you picked, what happens? If you picked a $100 bill, the insiders should be willing to pay up to $99.99 for the bill, bidding each other up so that neither of them benefits much from the insider knowledge. However, if it is a $1 bill, neither of the insiders will bid. Without that knowledge, each of the insiders would have had a one-in-ten chance of paying $31.50 for the bill and suffering a loss of $30.50. On an expected-value basis, each gained $3.05 from being an insider. Avoiding errors matters more than being smart.
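The arithmetic of the jar example, as a short sketch:

```python
BILLS = [1, 5, 20, 100]      # equal numbers of each bill in the jar
N_BIDDERS = 10

fair_bid = sum(BILLS) / len(BILLS)        # value of the average bill

# An insider who sees a $1 bill simply declines to bid.  Without that
# knowledge, each bidder has a one-in-ten chance of winning the auction
# at ~$31.50 and eating the loss on a $1 bill.
loss_avoided = fair_bid - 1               # $30.50 on each bad auction won
expected_gain = loss_avoided / N_BIDDERS  # spread over the 1-in-10 chance

print(f"fair bid: ${fair_bid:.2f}")                        # $31.50
print(f"expected gain per insider: ${expected_gain:.2f}")  # $3.05
```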

That investing successfully is really hard suggests to most of us that being really smart should be a big plus in investing. Yet while it can help, the existence of other smart people together with copycats and hangers-on greatly dilutes the value of being market-smart. On the other hand, the impact of bad decision-making stands alone. It is not lessened by the related stupidity of others. In fact, the more people act stupidly together, the greater the aggregate risk and the greater the potential for loss. This risk grows exponentially. Think of everyone piling on during the tech or real estate bubbles. When nearly all of us make the same kinds of poor decisions together – when the error quotient is especially high – the danger becomes enormous.


Science is perhaps the quintessential inversion. It is the most powerful tool there is for determining what is real and what is true, and yet it advances only by ascertaining what is false. In other words, it works due to disconfirmation rather than confirmation. As Munger observed about Charles Darwin: “Darwin’s result was due in large measure to his working method, which violated all my rules for misery and particularly emphasized a backward twist in that he always gave priority attention to evidence tending to disconfirm whatever cherished and hard-won theory he already had. In contrast, most people early achieve and later intensify a tendency to process new and disconfirming information so that any original conclusion remains intact. They become people of whom Philip Wylie observed: ‘You couldn’t squeeze a dime between what they already know and what they will never learn.’”

The Oxford English Dictionary defines the scientific method as “a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement and experiment, and the formulation, testing, and modification of hypotheses.” Science is about making observations and then asking pertinent questions about those observations. We observe and investigate the world and build our knowledge base from what we learn and discover, but we check our work at every point and keep checking it. Science is inherently experimental. In order to be scientific, then, our inquiries and conclusions must be based upon empirical, measurable evidence. We will never just “know.”

The scientific method, broadly construed, can and should be applied not only to traditional scientific endeavors, but also, to the fullest extent possible, to any sort of inquiry into or study about the nature of reality, including investing. As I have noted before, the great physicist and Nobel laureate Richard Feynman even applied such experimentation to hitting on women. To his surprise, he learned that he (at least) was more successful by being aloof than by being polite or by buying a woman he found attractive a drink.

David Wootton’s brilliant book, The Invention of Science, makes a compelling case that modernity began with the scientific revolution in Europe, book-ended by Danish astronomer Tycho Brahe’s identification of a new star in the heavens in 1572, which proved that the heavens were not fixed, and the publication of Isaac Newton’s Opticks in 1704, which drew conclusions based upon experimentation. In Wootton’s view, this was “the most important transformation in human history” since the Neolithic era, in no small measure predicated upon a scientific mindset, which includes the unprejudiced observation of nature, careful data collection, and rigorous experimentation. In his view, the “scientific way of thinking has become so much part of our culture that it has now become difficult to think our way back into a world where people did not speak of facts, hypotheses and theories, where knowledge was not grounded in evidence, where nature did not have laws.” I think Wootton’s claim is surely true, even if honored mainly in the breach.

The scientific approach was truly a new way of thinking (despite historical antecedents). Wootton shows that when Christopher Columbus came to the New World in 1492, he did not have a word to describe what he had done (or at least appeared to have done, with apologies to the Vikings). It was the Portuguese, the first global imperial power, who introduced the term “discovery” in the early 16th Century. There were other new words and concepts that were also important when trying to understand the scientific revolution, such as “fact” (only widely used after 1663), “evidence” (incorporated into science from the legal system) and “experiment.”

As Wootton explains, knowledge, as it was espoused in medieval universities and monasteries, was dominated by the ancients, the likes of Ptolemy, Galen, and Aristotle. Accordingly, it was widely believed that all of the most important knowledge was already known. Thus, learning was predominantly a backward-facing pursuit, about returning to ancient first principles, not pushing into the unknown. Indeed, Wootton details the emergence of fact and evidence as previously unknown terms of art. The modern scientific pursuit is the “formation of a critical community capable of assessing discoveries and replicating results.”

In its broadest context, science is the careful, systematic and logical search for knowledge, obtained by examination of the best available evidence and always subject to correction and improvement upon the discovery of better or additional evidence. That is the essence of what has come to be known as the scientific method, which is the process by which we, collectively and over time, endeavor to construct an accurate (that is, reliable, consistent and non-arbitrary) representation of the world. Otherwise (per James Randi), we are doing magic, and magic simply does not work.

Aristotle, brilliant and important as he was, posited, for example, that heavy objects fall faster than lighter objects and that males and females have different numbers of teeth, based upon some careful – though flawed – reasoning. But it never seemed to have occurred to him that he ought to check. Checking and then re-checking your ideas or work offers evidence that may tend to confirm or disprove them. By collecting “a long-term data set,” per field biologist George Schaller, “you find out what actually happens.” Testing can also be reproduced by any skeptic, which means that you need not simply trust the proponent of any idea. You do not need to take anyone’s word for things — you can check it out for yourself. That is the essence of the scientific endeavor.

Science is inherently limiting, however. We want deductive proof in the manner of Aristotle, but have to settle for induction. That is because science can never fully prove anything. It analyzes the available data and, when the force of the data is strong enough, it makes tentative conclusions. Moreover, these conclusions are always subject to modification or even outright rejection based upon further evidence gathering. The great value of facts and data is not so much that they point toward the correct conclusion (even though they do), but that they allow us the ability to show that some things are conclusively wrong.

Science progresses not via verification (which can only be inferred) but by falsification (which, if established and itself verified, provides relative certainty only as to what is not true). That makes it unwieldy. Thank you, Karl Popper. In investing, as in science generally, we need to build our processes from the ground up, with hypotheses offered only after a careful analysis of all relevant facts and tentatively held only to the extent the facts and data allow.

In investing, much like science generally and as in life, if we avoid mistakes we will generally win. We all want to be Michael Burry, an investor who made a fortune because he recognized the mortgage bubble in time to act accordingly. However, becoming Michael Burry starts by not being Wing Chau, an investor of Lawn Chair Larry foolishness who got crushed when the mortgage market collapsed. In fact, we all suffered when the real estate bubble burst. When the error quotient is especially high, our risks grow exponentially. Success starts with avoiding errors and looking at problems and situations differently.

Invert. Always invert.



Volatility is on the rise, thanks in no small part to N. Korea and Trumpster exchanging threats of plunging into nuclear war.

Who knows what will happen.


Long volatility. That is the trade that I like.


“You who caught the turtles better eat them,” goes the ancient adage: Ipsi testudines edite, qui cepistis.

The origin of the expression is as follows. It was said that a group of fishermen caught a large number of turtles. After cooking them, they found out at the communal meal that these sea animals were much less edible than they thought: not many members of the group were willing to eat them. But Mercury happened to be passing by – Mercury was the most multitasking, sort of put-together god, as he was the boss of commerce, abundance, messengers, and the underworld, as well as the patron of thieves and brigands and, not surprisingly, luck. The group invited him to join them and offered him the turtles to eat. Detecting that he was only invited to relieve them of the unwanted food, he forced them all to eat the turtles, thus establishing the principle that you need to eat what you feed others.

A Customer is Born Every Day

I have learned a lesson from my own naive experiences:

Beware of the person who gives advice, telling you that a certain action on your part is “good for you” while it is also good for him, while the harm to you doesn’t directly affect him.

Of course such advice is usually unsolicited. The asymmetry arises when the said advice applies to you but not to him – he may be selling you something, trying to get you to marry his daughter, or to hire his son-in-law.

Years ago I received a letter from a lecture agent. His letter was clear; it had about ten questions of the type “do you have the time to field requests?”, “can you handle the organization of the trip?”, the gist of it being that a lecture agent would make my life better and allow me the pursuit of knowledge or whatever else I was about (a deeper understanding of gardening, stamp collections, or Lebanese wine) while the burden of the gritty details fell on someone else. And it wasn’t just any lecture agent: only he could do all these things; he reads books and can get in the mind of intellectuals (at the time I didn’t feel insulted by being called an intellectual). As is typical with people who volunteer unsolicited advice, I smelled a rat: at no phase in the discussion did he refrain from directly apprising me, or hinting, that it was “good for me”.

As a sucker, while I didn’t buy into the argument, I ended up doing business with him, letting him handle a booking in the foreign country where he was based. Things went fine until, six years later, I received a letter from the tax authorities of that country. I immediately contacted him to ask whether other U.S. citizens who had hired him had incurred similar tax conflicts, or whether he had heard of similar situations. His reply was immediate and curt: “I am not your tax attorney” – volunteering no information as to whether other U.S. customers who hired him because it was “good for them” encountered such a problem.

Indeed, in the dozen or so cases I can pull from memory, it always turns out that what is presented as good for you is not really good for you but certainly good for the other party. As a trader, you learn to identify and deal with upright people, those who inform you that they have something to sell, explaining that the transaction arises for their own benefit, with such questions as “do you have an axe?” (an inquiry as to whether you have a certain interest). Avoid at all costs those who call you to tout a certain product disguised as advice – trying to dump inventory on you. In fact the story of the turtles is the archetype of the history of transactions between mortals.

I worked once for a U.S. investment bank, one of the prestigious variety, called "white shoe" because the partners were members of hard-to-join golf clubs where they played the game wearing white footwear. As with all such firms, an image of ethics and professionalism was cultivated, emphasized, and protected. But the job of the salespeople (actually, salesmen) on days when they wore black shoes was to "unload" inventory with which traders were "stuffed", that is, securities they had in plethora on their books and needed to get rid of to lower their risk profile. Selling to other traders was out of the question, as professional traders, typically nongolfers, would smell excess inventory and cause the price to drop. Some traders paid the sales force with (percentage) "points", a variable compensation that increased with our eagerness to part with securities. Salesmen took clients out to dinner, bought them expensive wine (often, ostensibly, the highest on the menu), and got a huge return on the thousands of dollars of restaurant bills by unloading the unwanted stuff on them. One expert salesman candidly explained to me: "If I buy the client, who works for the finance department of a municipality and buys his suits at some department store in New Jersey, a bottle of $2,000 wine, I own him for the next few months. I can get at least $100,000 of profits out of him. Nothing in the mahket gives you such a return." Given that the said customer was employed to manage some public employee pension fund, it was the New Jersey current or soon-to-be retiree who was in fact paying more than $100,000 for a $2,000 bottle of wine.

Salesmen hawked how a given security would be perfect for the client's portfolio, how they were certain it would rise in price, and how the client would suffer great regret if he missed "such an opportunity" –that type of discourse. Salespeople were experts in the art of psychological manipulation, making the client trade, often against his own interest, all the while being happy about it and loving them and their company. One of the top salesmen of the firm, a man of huge charisma who came to work in a chauffeured Rolls-Royce, was once asked whether customers didn't get upset when they got the short end of the stick. "Rip them off, don't tick them off" was his answer. He also added: "Remember that every day a new customer is born."

As the Romans were fully aware, one lauds merrily the merchandise to get rid of it. (Plenius aequo laudat venalis qui vult extrudere merces.[1])

The Price of Corn in Rhodes

So "giving advice" as a sales pitch is fundamentally unethical –selling cannot be deemed advice. We can safely settle on that. You can give advice, or you can sell (by advertising the quality of the product), and the two need to be kept separate.

But there is an associated problem in the course of the transactions: how much should the seller reveal to the buyer?

The question "is it ethical to sell something to someone knowing the price will eventually drop?" is an ancient one –and its solution is no more straightforward now than it was then. The debate goes back to a disagreement between two Stoic philosophers, Diogenes of Babylon and his student Antipater of Tarsus, the latter of whom took the higher moral ground on asymmetric information, in a position that seems to match the ethics endorsed by this author. No original piece by either author is extant, but we know quite a bit from secondary sources, or, in the case of Cicero, tertiary ones. The question, as retold by Cicero in De Officiis, was presented as follows. Assume a man brought a large shipment of corn from Alexandria to Rhodes, at a time when corn was expensive in Rhodes because of shortage and famine. Suppose that he also knew that many boats had set sail from Alexandria on their way to Rhodes with similar merchandise. Does he have to inform the Rhodians? How can one act honorably or dishonorably in these circumstances?[ii]

We traders had a straightforward answer. We called this "stuffing" –selling quantities to people without informing them that there are large inventories waiting to be sold. An upright trader will not do that to other professional traders; it was a no-no. The penalty was ostracism. But it was sort of permissible to do it to the anonymous market and the faceless nontraders, those we called "the Swiss", or some sucker far away. There were people with whom we had a relational rapport, others with whom we had a transactional one. The two were separated by an ethical wall, much as with domestic animals that could not be harmed, while rules on cruelty were lifted when it came to cockroaches.

Diogenes held that the seller ought to disclose as much as civil law would allow. As to Antipater, he believed that everything ought to be disclosed –beyond the law –so that there was nothing that the seller knew that the buyer didn’t know.

Clearly Antipater's position is more robust –robust being invariant to time, place, situation, and the color of the eyes of the participants. Take for now that:

The ethical is always more robust than the legal. Over time, it is the legal that should converge to the ethical, never the reverse.


Laws come and go; the ethics stays.

For the notion of "law" is ambiguous and highly jurisdiction-dependent: in the U.S., civil law, thanks to consumer advocates and similar movements, integrates such disclosures, while other countries have different laws. This is particularly visible with securities law, as there are "front running" regulations and rules concerning insider information that make such disclosure mandatory in the U.S., though it wasn't so for a long time in Europe.

Indeed, much of the work of investment banks in my day was to play on regulations, to find loopholes in the laws. And, counterintuitively, the more regulations, the easier it was to make money.

Equality in Uncertainty

Which brings us to asymmetry, the core concept behind skin in the game. The question becomes: to what extent can people in a transaction have an informational differential between them? The ancient Mediterranean and, to some extent, the modern world seem to be converging to Antipater's position. While we have "buyer beware" (caveat emptor) in the Anglo-Saxon West, the idea is rather new, and never general, often mitigated by lemon laws. (A "lemon" was originally a chronically defective car, say my convertible Mini, in love with the garage; the term is now generalized to apply to about anything that moves.)

So, to the question voiced by Cicero in the debate between the two ancient Stoics, "If a man knowingly offers for sale wine that is spoiling, ought he to tell his customers?", the world is getting closer to Antipater's position of transparency, not necessarily via regulations so much as thanks to tort laws, one's ability to sue for harm in the event the seller deceived him or her. Recall that tort laws put some skin in the game back into the seller –which is why they are reviled, hated by corporations. But tort laws have side effects –they should only be used in a nonnaive way, that is, in a way in which they cannot be gamed. As we will see in the discussion of the visit to the doctor, they will be gamed.

Sharia, in particular the law regulating Islamic transactions and finance, is of interest to us insofar as it preserves some of the lost Mediterranean and Babylonian methods and practices –not to prop up the ego of Saudi princes. It stands at the intersection of Greco-Roman law (as reflected through contact with the School of Law of Berytus), Phoenician trading rules, Babylonian legislation, and Arab tribal commercial customs, and, as such, it provides a repository of all ancient Mediterranean and Semitic lore. I hence view Sharia as a museum of the history of ideas on symmetry in transactions. Sharia establishes the interdict of gharar, a notion drastic enough that anything containing it is totally banned from any form of transaction. Gharar is an extremely sophisticated term in decision theory that does not exist in English; it means both uncertainty and deception –my personal take is that it means something beyond informational asymmetry between agents. It means inequality of uncertainty. Simply, as the aim is for both parties in a transaction to have the same uncertainty facing random outcomes, an asymmetry becomes equivalent to theft. Or, more robustly:

No person in a transaction should have certainty about the outcome while the other one has uncertainty.

Gharar, like every legalistic term, has its flaw; it remains weaker than the approach of Antipater. If only one party in a transaction has certainty all the way through, it is a violation of Sharia. But if there is a weak form of asymmetry, say someone has inside information that gives an edge in the markets, there is no gharar, as there remains enough uncertainty for both parties, given that the price is in the future and only God knows the future. Selling a defective product (where there is certainty as to the defect), on the other hand, is illegal. So the knowledge of the seller of corn in Rhodes in my first example does not fall under gharar, while the second case, that of the spoiling wine, would.[iii],[iv]

As we see, the problem of asymmetry is so complicated that different schools give different ethical solutions, so let us look at the Talmudic approach.

Rav Safra and the Swiss

Jewish ethics on the matter is closer to Antipater than to Diogenes; in fact it is even more extreme than Antipater in its aim at transparency. Not only should there be transparency concerning the merchandise, but perhaps there should also be transparency concerning what the seller has in mind, what he thinks deep down. The medieval rabbi Shlomo Yitzhaki (a.k.a. Salomon Isaacides), known as "Rashi", relates the following story. Rav Safra, a third-century Babylonian scholar who was also an active trader, was offering some goods for sale. A buyer came by while the rabbi was praying in silence, tried to purchase the merchandise at an initial price, and, given that the rabbi did not reply, raised the price. But Rav Safra had no intention of selling at a higher price than the initial offer, and felt that he had to honor the initial intention. Now the question: is Rav Safra obligated to sell at the initial price, or should he take the improved one?[v] [vi]

Such total transparency is not absurd, and not uncommon in what seems to be a cut-throat world of transactions, my former world of trading. I frequently faced that problem as a trader, and I side with Rav Safra's action in the debate. Let us follow the logic. Recall the rapacity of salespeople earlier in the chapter. Sometimes I would offer something for sale for, say, $5, but, communicating with the client through a salesperson, I would see the salesperson come back with an "improvement" of $5.10. Something never felt right about the extra ten cents. It was, simply, not a sustainable way of doing business. What if the customer subsequently discovered that my initial offer was $5? No compensation is worth the feeling of shame. The overcharge falls in the same category as the act of "stuffing" people with bad merchandise. Now, to apply this to Rav Safra's story: what if he sold to one client at the marked-up price, and to another the exact same item at the initial price, and the two buyers happened to know one another? What if they were agents for the same end customer?

It may not be ethically required, but the most effective, shame-free policy is maximal transparency, even transparency of intentions.

However, the story doesn't tell us whether the purchaser was a "Swiss", one of those outsiders to whom our ethical rules don't apply. I suspect that there would be a species for which our ethical rules would be relaxed or possibly lifted. Otherwise, as Elinor Ostrom has shown, the system cannot function properly.[2]

Members and Nonmembers

For the exclusion of the "Swiss" from our ethical rules is not trivial. Things don't "scale" and generalize, which is why I have trouble with intellectuals talking about abstract notions. A country is not a large city, a city is not a large family, and, sorry, the world is not a large village. There are scale transformations, which we will discuss here and in a special, more technical chapter at the end, in Section X.

When Athenians treated all opinions equally and discussed "democracy", they applied it only to their fellow citizens, not to slaves or metics (the equivalent of green card or J-1 visa holders). Effectively, Theodosius' code deprived Roman citizens who married "Barbarians" of their legal rights –hence of ethical parity with others. They lost their club membership. Jewish ethics distinguishes between thick blood and thin blood: we are all brothers, but some are more brothers than others.[3]

Individuals have traditionally been part of clubs, with rules and member behavior similar to those in today's country clubs, with an inside and an outside. As club members know, the very existence of a club rests on exclusion and size limitation. Spartans could hunt and kill helots, those noncitizens with the status of slaves, for training; but otherwise Spartans were equal to other Spartans and expected to die for their sake and the sake of Sparta. The large cities of the pre-Christian ancient world, particularly in the Levant and Asia Minor, were full of fraternities and clubs, open and (often) secret societies –there was even such a thing as funeral clubs, whose members shared the costs of, and participated in the ceremonials of, funerals.

Today's Roma people (a.k.a. gypsies) have tons of strict rules of behavior toward fellow gypsies, and different ones toward the unclean non-gypsies called payos. And, as the anthropologist David Graeber has observed, even the investment bank Goldman Sachs, known for its aggressive cupidity, acts like a communist community from within, thanks to its partnership system of governance.

So we exercise our ethical rules, but there is a limit –from scaling –beyond which the rules cease to apply. It is unfortunate, but the general kills the particular. The question we will reexamine later, after a deeper discussion of complexity theory: is it possible to be both ethical and universalist? In theory, yes, but, sadly, not in practice. For whenever the "we" becomes too large a club, things degrade, and each one starts fighting for his own interest. The abstract is way too abstract for us. This is the main reason I advocate political systems that start with the municipality and work their way up (ironically, as in Switzerland, home of those "Swiss"), rather than the reverse, which has failed with larger states. Being somewhat tribal is not a bad thing –and we have to work in a fractal way, organizing harmonious relations between tribes rather than merging all tribes into one large soup. In that sense, American-style federalism is the ideal system.

This scale transformation from the particular to the general is behind my skepticism about unfettered globalization and large, centralized, multiethnic states. My collaborator, the physicist and complexity researcher Yaneer Bar-Yam, showed that "better fences make better neighbors" –something both "policymakers" and local governments fail to get about the Near East. Scaling matters, I will keep repeating until I get hoarse. Putting Shiites, Christians, and Sunnis in one pot and asking them to sing Kumbaya around the campfire while holding hands in the name of unity and the fraternity of mankind has failed (interventionistas aren't yet aware that "should" is not a sufficiently empirically valid statement on which to "build nations"). Blaming people for being "sectarian" –instead of making the best of such a natural tendency –is one of the stupidities of interventionistas. Separate tribes administratively (as the Ottomans did), or just put some markers somewhere, and they suddenly become friendly to one another.

But we don’t have to go very far to get the importance of scaling. You know instinctively that people get along better as neighbors than roommates.

When you think about it, it is obvious, even trite, from the well-known behavior of crowds in "the anonymity" of big cities compared to groups in small villages. I spend some time in my ancestral village, where it feels like a family. People attend others' funerals (funeral clubs were mostly in large cities), help out, and care about the neighbor, even if they hate his dog. There is no way you can get the same cohesion in a larger city, where the other person is a theoretical entity, and our behavior toward him or her is governed by some general ethical rule, not by a presence in flesh and blood. We get it easily when seen that way, but we fail to generalize that ethics is something fundamentally local.

All (Literally) in the Same Boat

Greek is a language of precision; it has a word describing the opposite of risk transfer: risk sharing. Synkyndineo means “taking risks together”, which was a requirement in maritime transactions.[4]

The Acts of the Apostles[5] describes a voyage of St Paul on a cargo ship from Sidon to Crete to Malta. As they hit a storm: "When they had eaten what they wanted they lightened the ship by throwing the corn overboard into the sea."

Now, while they jettisoned particular goods, all owners were to be apportioned the costs of the lost merchandise, not just the owners of the jettisoned items. For it turns out that they were following a practice that dates to at least 800 B.C., codified in Lex Rhodia, the Rhodian Law, named after the mercantile Aegean island of Rhodes; the code is no longer extant, but it has been cited since antiquity. It stipulates that the risks and costs of contingencies are to be incurred equally, with no concern for responsibility. Justinian's code[6] summarizes it:

“It is provided by the Rhodian Law that where merchandise is thrown overboard for the purpose of lightening a ship, what has been lost for the benefit of all must be made up by the contribution of all.”

And the same mechanism for risk sharing took place in caravans along desert routes. If merchandise was stolen or lost, all merchants had to split the costs, not just the owner of the lost goods.

Synkyndineo has been translated into Latin by the maestro classicist Armand D'Angour as compericlitor; hence, if it ever makes it into English, it should become compericlity, and its opposite, the Bob Rubin risk transfer, will be incompericlity. But I guess risk sharing will do in the meanwhile.

How to Not Be a Doctor

Attempts at putting skin in the game in medicine, while important and needed, usually have a certain class of adverse effects, in shifting uncertainty from the doctor to the patient.

The legal system and the regulatory measures are likely to put the skin of the doctor in the wrong game.

How? The problem resides in the reliance on metrics. Every metric is gameable –the cholesterol lowering we mentioned in the Prologue is a metric-gaming technique taken to its limit. More realistically, say a cancer doctor or hospital is judged by the five-year survival rates of patients, and faces a choice among a variety of treatment modalities for a new patient: which treatment would they elect? There is a tradeoff between laser surgery (a surgical procedure) and radiation therapy, which is toxic to both patient and cancer. Statistically, laser surgery may have worse five-year outcomes than radiation therapy, but radiation tends to create second tumors in the longer run and offers comparatively reduced twenty-year disease-specific survival. Given that the window used for the calculation of patient survival is five years, not twenty, the incentive is to shoot for radiation therapy.

So the doctor is likely to be in the process of shifting uncertainty away from him or her by electing the second best option.

A Doctor is pushed by the system to transfer risk from himself to you, and from the present into the future.

And, in the case we just saw, from the future into a more distant future.

You need to remember that, when you visit a medical office, you will be facing someone who, in spite of his authoritative demeanor, is in a fragile situation. He is not you, not a member of your family, so he has no direct emotional loss should your health experience a degradation. His objective is, naturally, to avoid a lawsuit, something that can prove disastrous to his career.

Some metrics can actually kill you. Now, say you happen to visit a cardiologist and turn out to be in the mild-risk category, something that doesn't really raise your risk of a cardiovascular event, but precedes the stage of a possibly worrisome condition. (There is a strong nonlinearity: a person classified as prediabetic or prehypertensive is 90% closer to a normal person than to one with the condition.) But the doctor is pressured to treat you in order to protect himself. Should you drop dead immediately after the visit, a low-probability event, the doctor could be sued for negligence, for not having prescribed the medicine temporarily believed to be useful –as in the case of statins, which we now know were backed by suspicious or incomplete studies. Deep down, he may know that statins are harmful, as they lead to long-term effects. But the pharmaceutical companies have managed to convince everyone that these –unseen –consequences are harmless, when the right precautionary approach is to consider the unseen as potentially harmful. In fact, for most people except the very ill, the risks outweigh the benefits. Except that the risks are hidden; they will play out in the long run, whereas the legal risk is immediate. This is no different from the Bob Rubin risk-transfer trade: delaying risks and making them invisible.

Now, can one make medicine less asymmetric? Not directly; the solution, as I have argued in Antifragile and, more technically, elsewhere, is for the patient to avoid treatment when he or she is mildly ill, but to use medicine for the "tail events", that is, for rarely encountered severe conditions. The problem is that the mildly ill represent a much larger pool of people than the severely ill –and people who are expected to live longer and consume drugs for longer –hence pharmaceutical companies have an incentive to focus on them.

In sum, both the doctor and the patient have skin in the game, though not perfectly, but administrators don’t –and they seem to be the cause of the troubling malfunctioning of the system. Administrators everywhere on the planet and at all times in history have been the plague.


This chapter introduced us to the agency problem and to risk sharing, seen from both a commercial and an ethical viewpoint. We also introduced the problem of scale. Next we will try to get deeper into the structure of things in life by switching our approach as we look at collections of things –towns, countries, families, markets. Aggregates are strange animals.