Skin in the Game


Chapter 19 The Logic of Risk Taking

The central chapter always comes last—Always bet twice—Do you know your uncle point?—Who is “you”?—The Greeks were almost always right

Time to explain ergodicity, ruin, and (again) rationality. Recall that to do science (and other nice things) requires survival but not the other way around.

Consider the following thought experiment. First case, one hundred people go to a casino to gamble a certain set amount each over a set period of time, and have complimentary gin and tonic—as shown in the cartoon in Figure 5. Some may lose, some may win, and we can infer at the end of the day what the “edge” is, that is, calculate the returns simply by counting the money left in the wallets of the people who return. We can thus figure out if the casino is properly pricing the odds. Now assume that gambler number 28 goes bust. Will gambler number 29 be affected? No.

You can safely calculate, from your sample, that about 1 percent of the gamblers will go bust. And if you keep playing and playing, you will be expected to have about the same ratio, 1 percent of gamblers going bust, on average, over that same time window.

Now let’s compare this to the second case in the thought experiment. One person, your cousin Theodorus Ibn Warqa, goes to the casino a hundred days in a row, starting with a set amount. On day 28 cousin Theodorus Ibn Warqa is bust. Will there be day 29? No. He has hit an uncle point; there is no game no more.

No matter how good or alert your cousin Theodorus Ibn Warqa is, you can safely calculate that he has a 100 percent probability of eventually going bust.

The probabilities of success from a collection of people do not apply to cousin Theodorus Ibn Warqa. Let us call the first set ensemble probability, and the second one time probability (since the first is concerned with a collection of people and the second with a single person through time). Now, when you read material by finance professors, finance gurus, or your local bank making investment recommendations based on the long-term returns of the market, beware. Even if their forecasts were true (they aren’t), no individual can get the same returns as the market unless he has infinite pockets and no uncle points. This is conflating ensemble probability and time probability. If the investor has to eventually reduce his exposure because of losses, or because of retirement, or because he got divorced to marry his neighbor’s wife, or because he suddenly developed a heroin addiction after his hospitalization for appendicitis, or because he changed his mind about life, his returns will be divorced from those of the market, period.
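
A minimal simulation sketch of the two notions (the bankroll, bet size, odds, and horizon below are illustrative assumptions, not figures from the text):

```python
import random

def one_session(bankroll, bet=10, p_win=0.49, rounds=100):
    """Play a fixed number of rounds of a slightly unfavorable even-money bet.
    Returns the final bankroll; 0 means the gambler hit the uncle point."""
    for _ in range(rounds):
        if bankroll < bet:
            return 0                      # bust: an absorbing barrier
        bankroll += bet if random.random() < p_win else -bet
    return bankroll

# Ensemble probability: one hundred different gamblers, one session each.
ensemble = [one_session(1_000) for _ in range(100)]
print("fraction bust in the ensemble:", sum(b == 0 for b in ensemble) / 100)

# Time probability: one gambler (cousin Theodorus) carries the same bankroll
# from day to day. Once he is bust there is no day 29, no day 30, nothing.
bankroll, day = 1_000, 0
while bankroll > 0 and day < 10_000:
    day += 1
    bankroll = one_session(bankroll)
print("cousin Theodorus bust on day:", day if bankroll == 0 else "not yet")
```

With these unfavorable odds the first number stays near zero, while the second loop ends in ruin with near certainty; that gap between the collection and the individual through time is the whole point.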

Anyone who has survived in the risk-taking business more than a few years has some version of our by now familiar principle that “in order to succeed, you must first survive.” My own has been: “never cross a river if it is on average four feet deep.” I effectively organized all my life around the point that sequence matters and the presence of ruin disqualifies cost-benefit analyses; but it never hit me that the flaw in decision theory was so deep. Until out of nowhere came a paper by the physicist Ole Peters, working with the great Murray Gell-Mann. They presented a version of the difference between ensemble and time probabilities with a thought experiment similar to mine above, and showed that just about everything in social science having to do with probability is flawed. Deeply flawed. Very deeply flawed. Largely, terminally flawed. For, in the quarter millennium since an initial formulation of decision making under uncertainty by the mathematician Jacob Bernoulli, one that has since become standard, almost all people involved in the field have made the severe mistake of missing the effect of the difference between ensemble and time.

As with my “Fat Tails” project, economists may have been aware of the ensemble-time problem, but in a sterile way. Further, they keep saying “we’ve known about fat tails,” but somehow they don’t realize that taking the idea to the next step contradicts much of their work. It is the consequences that matter.

Everyone? Not quite: every economist maybe, but not everyone: the applied mathematicians Claude Shannon and Ed Thorp, and the physicist J. L. Kelly of the Kelly Criterion got it right. They also got it in a very simple way. The father of insurance mathematics, the Swedish applied mathematician Harald Cramér, also got the point. And, more than two decades ago, practitioners such as Mark Spitznagel and myself built our entire business careers around it. (I mysteriously got it right in my writings and when I traded and made decisions, and detect deep inside when ergodicity is violated, but I never explicitly got Peters and Gell-Mann’s mathematical structure—ergodicity is even discussed in Fooled by Randomness, two decades ago). Spitznagel and I even started an entire business to help investors eliminate uncle points so they could get the returns of the market. While I retired to do some flaneuring, Mark continued relentlessly (and successfully) at his Universa. Mark and I have been frustrated by economists who, not getting ergodicity, keep saying that worrying about the tails is “irrational.” The idea I just presented is very very simple. But how come nobody for 250 years quite got it? Lack of skin in the game, obviously.

For it looks like you need a lot of intelligence to figure probabilistic things out when you don’t have skin in the game. But for an overeducated nonpractitioner, these things are hard to figure out. Unless one is a genius, that is, has the clarity of mind to see through the mud, or has a sufficiently profound command of probability theory to cut through the nonsense. Now, certifiably, Murray Gell-Mann is a genius (and, likely, Peters). Gell-Mann discovered the subatomic particles he himself called quarks (which got him the Nobel). Peters said that when he presented the idea to Gell-Mann, “he got it instantly.” Claude Shannon, Ed Thorp, J. L. Kelly, and Harald Cramér are, no doubt, geniuses—I can personally vouch for Thorp, who has an unmistakable clarity of mind combined with a depth of thinking that juts out in conversation. These people could get it without skin in the game. But economists, psychologists, and decision theorists have no geniuses among them (unless one counts the polymath Herb Simon, who did some psychology on the side), and odds are they never will. Adding people without fundamental insights does not sum up to insight; looking for clarity in these fields is like looking for aesthetic harmony in the cubicle of a self-employed computer hacker or the attic of a highly disorganized electrician.

ERGODICITY

To take stock: a situation is deemed non-ergodic when observed past probabilities do not apply to future processes. There is a “stop” somewhere, an absorbing barrier that prevents people with skin in the game from emerging from it—and to which the system will invariably tend. Let us call these situations “ruin,” as there is no reversibility away from the condition. The central problem is that if there is a possibility of ruin, cost-benefit analyses are no longer possible.

Consider a more extreme example than the casino experiment. Assume a collection of people play Russian roulette a single time for a million dollars—this is the central story in Fooled by Randomness. About five out of six will make money. If someone used a standard cost-benefit analysis, he would have claimed that one has an 83.33 percent chance of gains, for an “expected” average return per shot of $833,333. But if you keep playing Russian roulette, you will end up in the cemetery. Your expected return is…not computable.
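
As a worked version of that arithmetic (a sketch of the naive calculation, with the payoff from the book and death priced, absurdly, at zero):

$$
\mathbb{E}[\text{one shot}] \;=\; \tfrac{5}{6}\times \$1{,}000{,}000 \;+\; \tfrac{1}{6}\times 0 \;\approx\; \$833{,}333,
\qquad
P(\text{alive after } n \text{ shots}) \;=\; \left(\tfrac{5}{6}\right)^{n} \;\longrightarrow\; 0 .
$$

The first expression is what the naive cost-benefit analyst computes; the second is what actually governs anyone who keeps playing.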

REPETITION OF EXPOSURES

Let us see why “statistical testing” and “scientific” statements are highly insufficient in the presence of both ruin problems and repetition of exposures. If one claimed that there is “statistical evidence that a plane is safe,” with a 98 percent confidence level (statistics are meaningless without such confidence bands), and acted on it, practically no experienced pilot would be alive today. In my war with the Monsanto machine, the advocates of genetically modified organisms (transgenics) kept countering me with benefit analyses (which were often bogus and doctored up), not tail risk analyses for repeated exposures.

Psychologists determine our “paranoia” or “risk aversion” by subjecting a person to a single experiment—then declare that humans are rationally challenged, as there is an innate tendency to “overestimate” small probabilities. They manage to believe that their subjects will never ever again take any personal tail risk! Recall from the chapter on inequality that academics in social science are…dynamically challenged. Nobody could see the grandmother-obvious inconsistency of such behavior with our ingrained daily life logic, which is remarkably more rigorous. Smoking a single cigarette is extremely benign, so a cost-benefit analysis would deem it irrational to give up so much pleasure for so little risk! But it is the act of smoking that kills, at a certain number of packs per year, or tens of thousands of cigarettes—in other words, repeated serial exposure.

But things are even worse: in real life, every single bit of risk you take adds up to reduce your life expectancy. If you climb mountains and ride a motorcycle and hang around the mob and fly your own small plane and drink absinthe, and smoke cigarettes, and play parkour on Thursday night, your life expectancy is considerably reduced, although no single action will have a meaningful effect. This idea of repetition makes paranoia about some low-probability events, even that deemed “pathological,” perfectly rational.

Further, there is a twist. If medicine is progressively improving your life expectancy, you need to be even more paranoid. Think dynamically.

If you incur a tiny probability of ruin as a “one-off” risk, survive it, then do it again (another “one-off” deal), you will eventually go bust with a probability of one hundred percent. Confusion arises because it may seem that if the “one-off” risk is reasonable, then an additional one is also reasonable. This can be quantified by recognizing that the probability of ruin approaches 1 as the number of exposures to individually small risks, say one in ten thousand, increases.
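
To make the quantification explicit (using the book’s illustrative one-in-ten-thousand risk per exposure):

$$
P(\text{ruin within } n \text{ exposures}) \;=\; 1-(1-p)^{n} \;\longrightarrow\; 1,
\qquad
p=\tfrac{1}{10{,}000}:\;\;
1-(1-p)^{10{,}000}\approx 63\%,\quad
1-(1-p)^{100{,}000}\approx 99.995\%.
$$

However “reasonable” each one-off exposure looks on its own, the product of survival probabilities decays to zero; only the rate of decay is negotiable.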

The flaw in psychology papers is to believe that the subject doesn’t take any other tail risks anywhere outside the experiment and, crucially, will never again take any risk at all. The idea in social science of “loss aversion” has not been thought through properly—it is not measurable the way it has been measured (if it is at all measurable). Say you ask a subject how much he would pay to insure a 1 percent probability of losing $100. You are trying to figure out how much he is “overpaying” for “risk aversion” or something even more foolish, “loss aversion.” But you cannot possibly ignore all the other financial risks he is taking: if he has a car parked outside that can be scratched, if he has a financial portfolio that can lose money, if he has a bakery that may risk a fine, if he has a child in college who may cost unexpectedly more, if he can be laid off, if he may be unexpectedly ill in the future. All these risks add up, and the attitude of the subject reflects them all. Ruin is indivisible and invariant to the source of randomness that may cause it.

Another common error in the psychology literature concerns what is called “mental accounting.” The Thorp, Kelly, and Shannon school of information theory requires that, for an investment strategy to be ergodic and eventually capture the return of the market, agents increase their risks as they are winning, but contract after losses, a technique called “playing with the house money.” In practice, it is done by threshold, for ease of execution, not complicated rules: you start betting aggressively whenever you have a profit, never when you have a deficit, as if a switch was turned on or off. This method is practiced by probably every single trader who has survived. Now it happens that this dynamic strategy is deemed out of line by behavioral finance econophasters such as the creepy interventionist Richard Thaler, who, very ignorant of probability, calls this “mental accounting” a mistake (and, of course, invites government to “nudge” us away from it, and prevent strategies from being ergodic).

Mental accounting refers to the tendency of people to mentally (or physically) put their funds in separate insulated accounts, focusing on the source of the money, and forgetting that as net owners the source should not matter. For instance, someone who would not buy a tie because it is expensive and appears superfluous gets excited when his wife buys him the same tie for his birthday using funds from a joint checking account. In the case under discussion, Thaler finds it a mistake to vary one’s strategy depending on whether the source of funds is gains from the casino or the original endowment. Clearly, Thaler, like other psycholophasters, is oblivious of the dynamics: social scientists are not good with things that move.
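
A minimal sketch of the threshold rule described above, with assumed bet sizes, endowment, and a fair coin standing in for the market (this illustrates the switch-like behavior only; it is not the Kelly formula or any actual trader’s rule):

```python
import random

def house_money(endowment=1_000.0, base_bet=1.0, aggressive_bet=10.0,
                p_win=0.5, rounds=10_000):
    """Threshold rule: bet aggressively only when sitting on profits
    (the 'house money'); fall back to the minimal bet otherwise, so the
    original endowment is never what is being risked aggressively."""
    wealth = endowment
    for _ in range(rounds):
        if wealth <= 0:
            break                               # absorbing barrier: the uncle point
        bet = aggressive_bet if wealth > endowment else base_bet
        bet = min(bet, wealth)                  # never stake more than you have
        wealth += bet if random.random() < p_win else -bet
    return wealth

print("final wealth:", house_money())
```

The switch between the two bet sizes is exactly what gets labeled “mental accounting”; dynamically, it is what keeps the strategy away from the absorbing barrier.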

I believe that risk aversion does not exist: what we observe is, simply, a residual of ergodicity. People are, simply, trying to avoid financial suicide and take a certain attitude to tail risks.

But we do not need to be overly paranoid about ourselves; we need to shift some of our worries to bigger things.

WHO IS “YOU”?

Let us return to the notion of “tribe.” One of the defects modern education and thinking introduces is the illusion that each one of us is a single unit. In fact, I’ve sampled ninety people in seminars and asked them: “what’s the worst thing that can happen to you?” Eighty-eight people answered “my death.” This can only be the worst-case situation for a psychopath. For after that, I asked those who deemed that their worst-case outcome was their own death: “Is your death plus that of your children, nephews, cousins, cat, dogs, parakeet, and hamster (if you have any of the above) worse than just your death?” Invariably, yes. “Is your death plus your children, nephews, cousins (…) plus all of humanity worse than just your death?” Yes, of course. Then how can your death be the worst possible outcome?

Actually, I usually joke that my death plus someone I don’t like surviving, such as the journalistic professor Steven Pinker, is worse than just my death.

Unless you are perfectly narcissistic and psychopathic—even then—your worst-case scenario is never limited to the loss of only your life.

Thus, we see the point that individual ruin is not as big a deal as collective ruin. And of course ecocide, the irreversible destruction of our environment, is the big one to worry about.

To use the ergodic framework: my death at Russian roulette is not ergodic for me but it is ergodic for the system. The precautionary principle, as I formulated with a few colleagues, is precisely about the highest layer.

About every time I discuss the precautionary principle, some overeducated pundit suggests that “we take risks by crossing the street,” so why worry so much about the system? This sophistry usually causes a bit of anger on my part. Aside from the fact that the risk of being killed as a pedestrian is less than one in 47,000 years, the point is that my death is never the worst-case scenario unless it correlates to that of others.

I have a finite shelf life, humanity should have an infinite duration.

Or, I am renewable, not humanity or the ecosystem.

Even worse, as I have shown in Antifragile, the fragility of the system’s components (provided they are renewable and replaceable) is required to ensure the solidity of the system as a whole. If humans were immortals, they would go extinct from an accident, or from a gradual buildup of misfitness. But shorter shelf life for humans allows genetic changes across generations to be in sync with the variability of the environment.

COURAGE AND PRECAUTION AREN’T OPPOSITES

How can both courage and prudence be classical virtues? Virtue, as presented in Aristotle’s Nicomachean Ethics, includes: sophrosyne (σωφροσύνη), prudence, and a form of sound judgment he called more broadly phronesis. Aren’t these inconsistent with courage?

In our framework, they are not at all. They are actually, as Fat Tony would say, the same ting. How?

I can exercise courage to save a collection of kids from drowning, at the risk of my own life, and it would also correspond to a form of prudence. Were I to die, I would be sacrificing a lower layer in Figure 6 for the sake of a higher one.

Courage, according to the Greek ideal that Aristotle inherited from Homer (and conveyed by Solon, Pericles, and Thucydides), is never a selfish action:

Courage is when you sacrifice your own well-being for the sake of the survival of a layer higher than yours.

Selfish courage is not courage. A foolish gambler is not committing an act of courage, especially if he is risking other people’s funds or has a family to feed.

To show the inanity of social science, they have to muster up the sensationalism of “mirror neurons” to explain the link between the individual and the collective. Relying on neuro-something is a form of scientism called “brain porn,” discussed in Antifragile.

RATIONALITY, AGAIN

The last chapter reframed rationality in terms of actual decisions, not what are called “beliefs,” as these may be adapted to stimulate us in the most convincing way to avoid things that threaten systemic survival. If superstition is what it takes, not only is there absolutely no violation of the axioms of rationality there, but it would be technically irrational to stand in its way. If superstition is what’s needed to satisfy ergodicity, let it be.

Let us return to Warren Buffett. He did not make his billions by cost-benefit analysis; rather, he did so simply by establishing a high filter, then picking opportunities that pass such a threshold. “The difference between successful people and really successful people is that really successful people say no to almost everything,” he said. Likewise our wiring might be adapted to “say no” to tail risk. For there are a zillion ways to make money without taking tail risk. There are a zillion ways to solve problems (say, feed the world) without complicated technologies that entail fragility and an unknown possibility of tail blowup. Whenever I hear someone saying “we need to take (tail) risks” I know it is not coming from a surviving practitioner but from a finance academic or a banker—the latter, we saw, almost always blows up, usually with other people’s money.

Indeed, it doesn’t cost us much to refuse some new shoddy technologies. It doesn’t cost me much to go with my “refined paranoia,” even if wrong. For all it takes is for my paranoia to be right once, and it saves my life.

LOVE SOME RISKS

Antifragile shows how people confuse risk of ruin with variations and fluctuations—a simplification that violates a deeper, more rigorous logic of things. I make the case for risk loving, for systematic “convex” tinkering, and for taking a lot of risks that don’t have tail risks but offer tail profits. Volatile things are not necessarily risky, and the reverse is also true. Jumping from a bench would be good for you and your bones, while falling from the twenty-second floor will never be so. Small injuries will be beneficial, never larger ones, those that have irreversible effects. Fearmongering about some classes of events is fearmongering; about others it is not. Risk and ruin are different tings.

NAIVE EMPIRICISM

All risks are not equal. We often hear that “Ebola is causing fewer deaths than people drowning in their bathtubs,” or something of the sort, based on “evidence.” This is another class of problems that your grandmother can get, but the semi-educated cannot.

Never compare a multiplicative, systemic, and fat-tailed risk to a non-multiplicative, idiosyncratic, and thin-tailed one.

Recall that I worry about the correlation between the death of one person and that of another. So we need to be concerned with systemic effects: things that can affect more than one person should they happen.

A refresher here. There are two categories in which random events fall: Mediocristan and Extremistan. Mediocristan is thin-tailed and affects the individual without correlation to the collective. Extremistan, by definition, affects many people. Hence Extremistan has a systemic effect that Mediocristan doesn’t. Multiplicative risks—such as epidemics—are always from Extremistan. They may not be lethal (say, the flu), but they remain from Extremistan.

More technically:

Mediocristan risks are subjected to the Chernoff bound.

The Chernoff bound can be explained as follows. The probability that the number of people who drown in their bathtubs in the United States doubles next year—assuming no changes in population or bathtubs—is one per several trillion lifetimes of the universe. This cannot be said about the doubling of the number of people killed by terrorism over the same period.
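
A hedged sketch of the bound behind that claim, treating the yearly count as a sum of independent, individually tiny risks (the figure of roughly 400 drownings per year is an assumed order of magnitude for illustration, not a number from the book):

$$
P\big(S \ge (1+\delta)\mu\big) \;\le\; \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\!\mu},
\qquad
\delta=1,\ \mu\approx 400 \;\Rightarrow\;
P(S \ge 2\mu) \;\le\; (e/4)^{400} \approx 10^{-67}.
$$

No such bound exists for multiplicative, fat-tailed processes such as epidemics or terrorism, which is exactly why the two counts cannot be compared.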

Journalists and social scientists are pathologically prone to such nonsense—particularly those who think that a regression and a graph are sophisticated ways to approach a problem. Simply, they have been trained with tools for Mediocristan. So we often see the headline that many more American citizens slept with Kim Kardashian than died of Ebola. Or that more people were killed by their own furniture than by terrorism. Your grandmother’s logic would debunk these claims. Just consider that: it is impossible for a billion people to sleep with Kim Kardashian (even her), but that there is a non-zero probability that a multiplicative process (a pandemic) causes such a number of Ebola deaths. Or even if such events were not multiplicative, say, terrorism, there is a probability of actions such as polluting the water supply that can cause extreme deviations. The other argument is one of feedback: if terrorism casualties are low, it is because of vigilance (we tend to search passengers before boarding planes), and the argument that such vigilance is superfluous indicates a severe flaw in reasoning. Your bathtub is not trying to kill you.

I was wondering why the point appears to be unnatural to many “scientists” (which includes policymakers), but natural to some other people, such as the probabilist Paul Embrechts. Simply, Embrechts looks at things from the tail. Embrechts studies a branch of probability called extreme value theory and is part of a group we call “extremists”—a narrow group of researchers who specialize, as I do, in extreme events. Well, Embrechts and his peers look at the difference between processes for extremes, never the ordinary. Do not confuse this with Extremistan: they study what happens for extremes, which includes both Extremistan and Mediocristan—it just happens that Mediocristan is milder than Extremistan. They classify what can happen “in the tails” according to the generalized extreme value distribution. Things are a lot—a lot—clearer in the tails. And things are a lot—a lot—clearer in probability than they are in words.
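
For reference, the distribution the “extremists” use to classify tails can be sketched in its standard parameterization (this is the textbook form, not notation from the book):

$$
G_{\xi,\mu,\sigma}(x) \;=\; \exp\!\left\{-\left[1+\xi\,\frac{x-\mu}{\sigma}\right]^{-1/\xi}\right\},
\qquad 1+\xi\,\frac{x-\mu}{\sigma}>0,
$$

with the tail index ξ doing the classification: ξ > 0 (Fréchet, fat tails, Extremistan), ξ = 0 (Gumbel, taken as the limit), ξ < 0 (Weibull, bounded tail); Mediocristan extremes fall in the latter two classes.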

SUMMARY

We close this chapter with a few summarizing lines.

One may be risk loving yet completely averse to ruin.

The central asymmetry of life is:

In a strategy that entails ruin, benefits never offset risks of ruin.

Further:

Ruin and other changes in condition are different animals.

Every single risk you take adds up to reduce your life expectancy.

Finally:

Rationality is avoidance of systemic ruin.
