Three Types of Swan

Taleb distinguishes between three types of unexpected event:

White Swans result from the ‘Normal’ randomness of a Gaussian Distribution (Bell Curve), in circumstances Taleb calls Mediocristan.  This is the land of the known unknown.  Its main features are:
  • Averages have an empirical reality
  • The likelihood of an extreme event reduces at a faster and faster rate as you move away from the average
  • This effectively puts an upper and lower limit on the scale of an event (people can only be so tall or so short)
  • Outliers are therefore extremely unlikely
  • No single event significantly affects the aggregate (you can’t get fat in one day)
Because probabilities are known, the chance or risk of an unexpected event can be calculated precisely.  Examples: weight, height (almost anything in nature that grows), coin tossing, IQ scores.
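
To make Taleb's point concrete, here is a minimal sketch in Python (my illustration, not Taleb's, assuming numpy is available): draw 10,000 hypothetical 'heights' from a Gaussian distribution and check how little the single most extreme observation contributes to the total, and how little it shifts the average.

    # Mediocristan sketch: in a Gaussian sample no single observation dominates the aggregate.
    import numpy as np

    rng = np.random.default_rng(0)
    heights = rng.normal(loc=170, scale=10, size=10_000)      # hypothetical heights in cm

    tallest = heights.max()
    print(f"tallest single value: {tallest:.1f} cm")
    print(f"share of the total:   {tallest / heights.sum():.4%}")
    print(f"average without it:   {np.delete(heights, heights.argmax()).mean():.2f} cm")
    print(f"average with it:      {heights.mean():.2f} cm")

The largest value accounts for roughly a hundredth of one per cent of the total, and removing it barely changes the average: the 'you can't get fat in one day' property.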

Grey Swans result from circumstances where Power Law (Fractal or Fat-Tailed) randomness occurs, which Taleb calls Extremistan.  This is the province of the somewhat known unknown. Its main features are:
  • Although the probability of events follows a pattern, this does not give precise predictability (think of forecasting the weather)
  • Averages are meaningless because the range of possible events is so large
  • The likelihood of an extreme event reduces more and more slowly as you move away from the most frequent event (that’s why they are said to have ‘fat’ or ‘long tails’)
  • This means there is no upper limit (Bill Gates could be worth twice, three, four ... times as much)
  • Outliers are therefore much more likely than you might think
  • One single event can change the whole picture (if Bill Gates is in your sample of wealth, no one else matters)
Examples: magnitude of earthquakes, blockbuster books and movies, stock market crashes, distribution of wealth, extinctions (of species and companies), population of cities, frequency of words in a text, epidemics, casualties in wars, pages on the WWW.
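
For contrast, here is the same check run on a fat-tailed sample (again my illustration, not Taleb's, assuming numpy): hypothetical 'wealth' figures drawn from a Pareto distribution, where the single largest observation can account for a large slice of the whole.

    # Extremistan sketch: in a fat-tailed (Pareto) sample one observation can dominate the total.
    import numpy as np

    rng = np.random.default_rng(0)
    wealth = (1 + rng.pareto(a=1.1, size=10_000)) * 50_000    # hypothetical wealth figures

    richest = wealth.max()
    print(f"largest single value accounts for {richest / wealth.sum():.1%} of the total")
    print(f"average with it:    {wealth.mean():,.0f}")
    print(f"average without it: {np.delete(wealth, wealth.argmax()).mean():,.0f}")

Re-run it with different seeds and the largest observation's share swings wildly, which is exactly the point: averages mislead, and one event can change the whole picture.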

Black Swans result from all those forms of nonlinear randomness that we do not know about (and possibly never will). We are in the realm of the unknown unknown. We have no means of calculating the likelihood of an event. Everything about Extremistan applies without any probability distributions to guide us. A Black Swan:
  • Is an outlier that lies outside the realm of regular expectations (because nothing in the past can convincingly point to its possibility)
  • Carries extreme impact (so much so that at the time, if it is a ‘negative’ Black Swan, almost nothing else matters)
  • Gets explained after the event, making it look like we could have predicted it (if only ...).
This fools us into thinking someone (an ‘expert’) might be able to predict the next Black Swan, which in itself increases the chances of another one catching us unawares in the future.

I see the ideas in Taleb’s book arranging themselves into four levels:

I.    The nature of uncertainty, unpredictability and randomness.
II.    How we get fooled by I
III.    Why we don’t seem to learn from II
IV.    And how we can.


LEVEL I — The nature of uncertainty, unpredictability and randomness

Randomness:

“Randomness is the result of incomplete information at some layer. It is functionally indistinguishable from ‘true’ or ‘physical’ randomness. Simply, what I cannot guess is random because my knowledge about causes is incomplete, not necessarily because the process has truly unpredictable properties.” (p. 308-9, Taleb)

The Bell Curve (‘Normal’ or Gaussian distribution):

“We can make good use of the Gaussian approach in variables for which there is a rational reason for the largest not to be too far away from the average. If there is a gravity pulling numbers down, or there are physical limitations preventing very large observations, [or] there are strong forces of equilibrium bringing things back rather rapidly after conditions diverge from equilibrium.
 
Note that I am not telling you that Mediocristan does not allow for some extremes. But it tells you that they are so rare that they do not play a significant role in the total. The effect of such extremes is pitifully small and decreases as your population gets larger.

In Mediocristan, as your sample size increases, the observed average will present itself with less and less dispersion — uncertainty in Mediocristan vanishes. This is the ‘law of large numbers’.” (pp. 236-8, Taleb)

The empirical rule for normal distributions is that:

        68% of the population lie between -1 and +1 standard deviations
        95% of the population lie between -2 and +2 standard deviations
        99.7% of the population lie between -3 and +3 standard deviations
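
If you want to check those percentages yourself, a few lines of Python (my addition, assuming scipy is installed) reproduce them, and the figures hold for any mean and standard deviation:

    # A quick check of the 68-95-99.7 rule using scipy's normal distribution.
    from scipy.stats import norm

    for k in (1, 2, 3):
        p = norm.cdf(k) - norm.cdf(-k)   # probability of falling within k standard deviations
        print(f"within ±{k} standard deviations: {p:.1%}")
    # within ±1 standard deviations: 68.3%
    # within ±2 standard deviations: 95.4%
    # within ±3 standard deviations: 99.7%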

Power Law (Fractal or Fat-Tailed) distributions:

“Dramatic, large-scale events are [always] far less frequent than small ones. In systems characterized by power-law behaviour, however, they can occur at any time and for no particular reason.  Sometimes, a very small event can have profound consequences, and occasionally a big shock can be contained and be of little import. It is not the power law itself which gives rise to these unexpected features of causality; rather it is the fact that we observe a power law in a system which tells us that causality within it behaves in this way. The conventional way of thinking, which postulates a link between the size of an event and its consequences, is broken.” (p. 170, Ormerod)

Problem of Induction:   

“How can we logically go from specific instances to reach general conclusions?” (p. 40, Taleb)

This is not a ‘chicken and egg’ problem, it is a ‘chicken and death’ problem: every day of a chicken’s life it has more and more evidence that, because humans feed, house and protect it, they are benign and look out for its best interests — until the day it gets its neck wrung.

Circularity of statistics:

“We need data to discover a probability distribution. How do we know if we have enough? From the probability distribution. If it is Gaussian, then a few points of data will suffice. How do we know it is Gaussian? From the data. So we need data to tell us what probability distribution to assume, and we need a probability distribution to tell us how much data we need. This causes a severe regress argument, which is somewhat shamelessly circumvented by resorting to the Gaussian and its kin.” (p. 310, Taleb)

Non-repeatability:   

Black Swan events are always unique and therefore not subject to testing in repeatable experiments.

Complex systems:

“If individuals interact with each other, if their behaviour can be altered by the actions of others, we are in a [complex] system. No matter how much information we gather, no matter how carefully we analyze it, a strategy of predict, plan and control will in general fail.” (p. 76 & p. 68, Ormerod)

LEVEL II — How we get fooled by uncertainty, unpredictability and randomness

Black Swan blindness:

“The underestimation of the role of the Black Swan, and occasional overestimation of a specific one.” (p. 307, Taleb).

This means that the effects of extreme events are even greater because we think they are more unexpected than they actually are. As an example, we do not distinguish between ‘risk’, which technically can be calculated, and ‘uncertainty’, which cannot. Taleb maintains that the ‘high’, ‘medium’ and ‘low’ risk options offered to you by your financial adviser are a fiction because they are all based on ‘normal’ distributions that ignore the effect of Black Swans.

Confirmation bias:

“You look for instances that confirm your beliefs, your construction (or model) — and find them.” (p. 308, Taleb)

Part of the bias comes from our overvaluing what we find that confirms our model and undervaluing or ignoring anything that challenges our model.

Fallacy of silent evidence:

“Looking at history, we do not see the full story, only the rosier parts of the process.” (p. 308, Taleb) Ignoring silent evidence is complementary to the Confirmation Bias. “Our perceptual system may not react to what does not lie in front of our eyes. The unconscious part of our inferential mechanism will ignore the cemetery, even if we are intellectually aware of the need to take it into account. Out of sight, out of mind: we harbor a natural, even physical, scorn of the abstract.” [and randomness and uncertainty certainly are abstractions] (p. 121, Taleb).

By “the cemetery” Taleb means: the losers of wars don’t get to write history; only survivors tell their story; prevented problems are ignored; only fossils that are found contribute to theories of evolution; undiscovered places do not get onto the map; unread books do not form part of our knowledge.

Apparently 80 per cent of epidemiological studies fail to replicate, but most of these failures don’t get published, so we think causal relationships are more common than they are.

Fooled by Randomness:

“The general confusion between luck and determinism, which leads to a variety of superstitions with practical consequences, such as the belief that higher earnings in some professions are generated by skills when there is a significant component of luck in them.” (p. 308, Taleb)

We have a hard time seeing animals as randomly selected, but even when we can accept that, we are even less likely to accept that the design of a car or a computer could be the product of a random process.  Also, with so many people in the financial markets, there will be lots of “spurious winners.” Trying to become a winner in the stock market is hard, because even if you are smart, you are competing with all the spurious winners.
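
A quick way to see how easily spurious winners arise is to simulate them. This sketch (my illustration, not from Taleb's book, assuming numpy) gives 10,000 hypothetical traders a pure 50/50 coin toss each year and counts how many string together five winning years on luck alone.

    # Spurious winners sketch: five-year 'track records' generated by pure chance.
    import numpy as np

    rng = np.random.default_rng(0)
    n_traders, n_years = 10_000, 5
    wins = rng.random((n_traders, n_years)) < 0.5             # True = a winning year, 50/50
    perfect_records = wins.all(axis=1).sum()

    print(f"{perfect_records} of {n_traders} coin-toss traders won {n_years} years in a row")
    # expect roughly 10_000 * 0.5**5, i.e. about 312 such traders

Roughly three hundred of the ten thousand will look like consistently skilled investors, with no skill involved at all.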

Future Blindness:

“Our natural inability to take into account the properties of the future” (p. 308, Taleb)

Our inability to conceive of a future that contains events that have never happened in the past — just think how difficult it can be to help some people associate into a desired outcome of their own choosing!

We can use Robert Dilts’ Jungle Gym model to postulate that Future Blindness is equivalent to Perceptual Position Blindness (e.g. the inability to take into account the mind of someone else — one of the features of autism)  and Logical Level Blindness — the inability to notice the qualitative differences occurring at different levels (e.g. using sub-atomic theory to ‘explain’ human behaviour). Gregory Bateson frequently griped about Logical Level Blindness, and it is a characteristic of people Ken Wilber calls “flatlanders”.

Lottery-ticket fallacy:

“The naive analogy equating an investment in collecting positive Black Swans to the accumulation of lottery tickets.” (p. 308, Taleb)

The probability of winning the lottery is known precisely and is therefore not a Black Swan — just unlikely.

Locke’s madman:

“Someone who makes impeccable and rigorous reasoning from faulty premises thus producing phony models of uncertainty that make us vulnerable to Black Swans.” (p. 308, Taleb). 

As examples, both Taleb and Ormerod pick out a couple of Nobel prize-winning economists (Robert Merton Jr. and Myron Scholes) who did much to promote Modern Portfolio Theory. In the summer of 1998, as a result of having applied their theories, their company, LTCM, could not cope with the Black Swan of the Russian financial crisis and “one of the largest trading losses ever in history took place in almost the blink of an eye ... LTCM went bust and almost took down the entire financial system with it.” (p. 44 & p. 282, Taleb) However, that’s not the real story. The more important question is: why, ten years after the failure of LTCM, is Modern Portfolio Theory still the dominant model taught at universities and business schools?

Ludic fallacy:

Using “the narrow world of games and dice” to predict real life. In real life, “randomness has an additional layer of uncertainty concerning the rules of the game.” (p. 309, Taleb)

Narrative fallacy:

 “Our need to fit a story or pattern to a series of connected or disconnected facts.” (p. 309, Taleb)

That is, to explain past events. Note that the plausibility of the story is not a factor. The fallacy is forgetting that any narrative is only one of many possible narratives that could be made to fit ‘the facts’, and therefore believing we understand ‘the cause’.

Nonlinearity:

“Our emotional apparatus is designed for linear causality. With linearities, relationships between variables are clear, crisp, and constant, therefore Platonically easy to grasp in a single sentence. In a primitive environment, the relevant is the sensational. This applies to our knowledge. When we try to collect information about the world around us, we tend to be guided by our biology, and our attention flows effortlessly toward the sensational — not the relevant.  Our intuitions are not cut out for nonlinearities. Nonlinear relationships can vary; they cannot be expressed verbally in a way that does justice to them. These nonlinear relationships are ubiquitous in [modern] life. Linear relationships are truly the exception; we only focus on them in classrooms and textbooks because they are easier to understand.” (extracts from pp. 87-89, Taleb)

Platonic fallacy:

Has several manifestations:
  1. “the focus on those pure, well-defined, and easily discernible objects like triangles” and believing those models apply to “more social notions like friendship or love”;
  2. focusing on categories “like friendship and love, at the cost of ignoring those objects of seemingly messier and less tractable structures”; and
  3. the belief that “what cannot be Platonized [put into well-defined ‘forms’] and studied does not exist at all, or is not worth considering.” (p. 309, Taleb)
Good examples are the importance we attach to numbers (did you know that 76.27% of all statistics are made up?), and how much better we feel when we have a label for an illness.

Retrospective distortion (Hindsight bias):

“Examining past events without adjusting for the forward passage of time leads to the illusion of posterior predictability.” (p. 310, Taleb)

This is commonly known as ‘hindsight’: we think we could have predicted events, if only we had known then what we know now.  Most of what people look for, they do not find; most of what they find, they did not look for. But hindsight bias means discoveries and inventions appear to be more planned and systematic than they really are. As a result we underestimate serendipity and the accidental.

Reverse-engineering problem:

“It is easier to predict how an ice cube would melt into a puddle than, looking at a puddle, to guess the shape of the ice cube that may have caused it. This ‘inverse problem’ makes narrative disciplines and accounts (such as histories) suspicious.” (p. 310, Taleb)

Round-trip fallacy:

 “The confusion of absence of evidence of Black Swans (or something else) for evidence of absence of Black Swans (or something else).” (p. 310, Taleb) Just because you have never seen a Black Swan doesn’t mean they don’t exist.

Toxic knowledge:

“Additional knowledge of the minutiae of daily business can be useless, even actually toxic. The more information you give someone, the more hypotheses they will formulate along the way, and the worse off they will be.  They see more random noise and mistake it for information. The problem is that our ideas are sticky: once we produce a theory, we are not likely to change our minds — so those that delay developing their theories are better off. When you develop your opinions on the basis of weak evidence, you will have difficulty interpreting subsequent information that contradicts these opinions, even if this new information is obviously more accurate. Two mechanisms are at play here: Confirmation bias and belief perseverance. Remember, we treat ideas like possessions, and it will be hard for us to part with them.” (p. 144, Taleb)

My summary of Level II - How we get fooled:

UNDERESTIMATE                        OVERESTIMATE
Unknown                              Known
Uncertain                            Certain
Unpredictable                        Predictable
Uncontrollable                       Controllable
Unplannable                          Planning & preparedness
Random                               Causal
Ignorance                            Expertise
Effect of a single extreme           Rate of incremental change
Nonlinearities                       Linearities
Power Laws, Fractals, Fat Tails      Averages & Bell Curves
Forward march of time                Backward view of history
Relevant                             Sensational
Luck                                 Skill
The ‘impossible’                     The ‘possible’
Commitment to our own models         Ability to be objective
Subjectivity                         Objectivity
Messy reality                        Neat constructs
Illogical evidence                   Logical reasoning
Complexity                           Simplicity
The idiosyncratic                    The norm
Empirical evidence                   Relevance of stories
Failure                              Success


LEVEL III — Why we don’t seem to learn that we are fooled by uncertainty, unpredictability and randomness

Black Swan ethical problem:

“There is an asymmetry between the rewards of those who prevent and those who cure.” (p. 308, Taleb)

The former are ignored; the latter highly decorated. Would we have rewarded the person who made aircraft manufacturers fit security locks to cockpit doors before 9/11? Hence it pays to be a problem solver.  Similarly we pay forecasters who use complicated mathematics much more than forecasters who say “We don’t know”.  Therefore it pays to forecast.  Both these asymmetries increase the likelihood of negative Black Swans.

Expert problem:

 “Some professionals have no differential abilities from the rest of the population, but against their empirical records, are believed to be experts: clinical psychologists, academic economists, risk ‘experts’, statisticians, political analysts, financial ‘experts’, military analysts, CEOs etc. They dress up their expertise in beautiful language, jargon, mathematics, and often wear expensive suits.” (p. 308, Taleb)

Scandal of prediction:

“The lack of awareness of their own poor predicting record in some forecasting entities (particularly the narrative disciplines).” (p. 310, Taleb)  “What matters is not how often you are right, but how large your cumulative errors are.  And these cumulative errors depend largely on the big surprises, the big opportunities.” (p. 149, Taleb)

Common ways we blame others for our poor predictive abilities:
  • I was just doing what everyone else does. (The “herd”, or maybe the “nerd”, effect.)
  • I was almost right.
  • I was playing a different game.
  • No one could have forecast that.
  • Other than that, it was okay.
  • They didn’t do what they should have.
  • If only I’d known X.
  • But the figures said ...

Epistemic arrogance:

A “measure [of] the difference between what someone actually knows and how much he thinks he knows. An excess will imply arrogance, a deficit humility.” (p. 308, Taleb)

Arrogance is the result of

“a double effect: we overestimate what we know, and we underestimate uncertainty, by compressing the range of possible uncertain states (i.e. by reducing [in our mind] the space of the unknown).”  (p. 140, Taleb)

And I would add to Taleb’s list:

Comfort of certainty:

We’d rather have a story that makes us comfortable than face the discomfort of uncertainty.

Uncertainty quickly becomes existential:

I have observed that, generally, when we consider:
  • what we know in relation to what we don’t know;
  • our ability to predict in the face of what’s unpredictable;
  • how much we can control as opposed to what’s uncontrollable;
  • how much effect we can have compared to a Black Swan event;
we are faced with the nature of our individual existence in relation to the Cosmos ... Oh my God!


LEVEL IV —  And how we can learn from how we are fooled by uncertainty, unpredictability and randomness

Maximize serendipity:

“A strategy of seeking gains by collecting positive accidents from maximising exposure to ‘good Black Swans’.” (p. 307, Taleb)

Taleb calls this an “Apelles-style strategy”. Apelles the Painter was a Greek who, try as he might, could not depict the foam from a horse’s mouth. In irritation he gave up and threw the sponge he used to clean his brush at the picture. Where the sponge hit, it left a beautiful representation of foam.

Despite the billions of dollars spent on cancer research, the single most valuable cancer treatment discovered to date, chemotherapy, was a by-product of the mustard gas used in the First World War.

Taleb recommends living in a city and going to parties to increase the possibility of serendipity. [Other examples: speed dating, holding liquidity, writing articles, newspapers, sowing seeds]

Use a barbell strategy:

“A method that consists of taking both a defensive attitude and an excessively aggressive one at the same time, by protecting assets from all sources of uncertainty while allocating a small portion for high-risk strategies.” (p. 307, Taleb)

“What you should avoid is unnecessary dependence on large-scale predictions — those and only those. Avoid the big subjects that may hurt your future: be fooled in small matters, not large. Know how to rank beliefs not according to their plausibility but by the harm they may cause. Knowing that you cannot predict does not mean that you cannot benefit from unpredictability.  The bottom line: be prepared! Narrow-minded prediction has an analgesic effect. Be aware of the numbing effect of magic numbers. Be prepared for all relevant eventualities.” (p. 203, Taleb)

Taleb suggests that rather than putting your money in ‘medium risk’ investments (see Black Swan Blindness) “you need to put a portion, say 85-90 percent, in extremely safe instruments, like Treasury Bills.  The remaining 10-15 percent you put in extremely speculative bets, preferably venture capital-style portfolios. (Have as many of these small bets as you can conceivably have; avoid being blinded by the vividness of one single Black Swan.)”  (p. 205, Taleb)
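
As a purely hypothetical illustration of that asymmetry (the numbers below are mine, not Taleb's), the barbell caps the worst case at a small, known loss while leaving the exposure to positive Black Swans open; an all-in 'medium risk' portfolio has no such floor.

    # Barbell sketch: 90% near-riskless, 10% speculative, versus an all-in 'medium risk' holding.
    def barbell_return(risky_outcome, safe_share=0.90, safe_return=0.03):
        """Overall return when the speculative slice returns `risky_outcome`."""
        risky_share = 1.0 - safe_share
        return safe_share * safe_return + risky_share * risky_outcome

    print(f"barbell, speculative bets wiped out: {barbell_return(-1.0):+.1%}")   # about -7%
    print(f"barbell, one bet pays off 10x:       {barbell_return(10.0):+.1%}")   # about +103%
    print(f"'medium risk' portfolio in a crash:  {-0.40:+.1%}")                  # no floor at all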

Taleb lists a number of “modest tricks”.  He says to “note the more modest they are the more effective they will be”:

a.      Make a distinction between positive and negative contingencies.
b.      Don’t look for the precise and the local — invest in preparedness, not prediction.
c.      Seize any opportunity or anything that looks like an opportunity.
d.      Beware of precise plans by governments.
e.      Don’t waste your time trying to fight forecasters, economists etc. (pp. 206-210, Taleb)

Accept random causes:

Avoid the silent evidence fallacy: “Whenever your survival is in play, don’t immediately look for causes and effects. The main reason for our survival might simply be inaccessible to us. We think it is smarter to say because than to accept randomness.” (p. 120, Taleb)

By “survival” Taleb means not only your physical survival from, say, disease, but also survival of your company, survival as an employee, survival of your marriage, survival of your success or status, etc.

De-narrate:

“Shut down the television set, minimize time spent reading newspapers, ignore blogs” because the more news you listen to and newspapers you read, the more your view will converge on the general view — and the greater the tendency to underestimate extreme events.

Account for emotions:

“Train your reasoning abilities to control your decisions. Train yourself to spot the difference between the sensational and the empirical.” (p. 133, Taleb) “Most of our mistakes in reasoning come from using [intuition] when we are in fact thinking that we are using [the cogitative]” (p. 82, Taleb).

Taleb’s point is that our emotional responses will unconsciously dominate unless we take that into account when we make decisions in the nonlinear world of Extremistan.

Avoid tunneling:   

“The neglect of sources of uncertainty outside the plan itself.” (p. 156, Taleb)

Question the error rate:

“No matter what anyone tells you, it is a good idea to question the error rate of an expert’s procedure.  Do not question his procedure, only his confidence. A hernia surgeon will rarely know less about hernias than [you]. But their probabilities, on the other hand, will be off — and this is the disturbing point, you may know much more on that score than the expert.” (p. 145, Taleb)

Recognize self-delusion:

“You cannot ignore self-delusion. The problem with experts is that they do not know what they do not know. Lack of knowledge and delusion about the quality of your knowledge come together — the same process that makes you know less also makes you satisfied with your knowledge.” (p. 147, Taleb)

Also see an article by Penny Tompkins and myself on Self-deception, delusion and denial.

Use Stochastic tinkering:

Use a lot of trial and error.  To do that you have to learn to love to lose, to be wrong, to be mistaken — and keep trialling.

Personally, I prefer the term trial-and-feedback because it presupposes the noticing of ‘errors’ and acting on that awareness.  Wikipedia is a living example of stochastic tinkering.

Focus on Consequences:

“The idea that in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you can’t know) is the central idea of uncertainty. Much of my life is based on it.” (p. 211, Taleb)

Foster cognitive diversity:

“The variability in views and methods acts like an engine for tinkering. It works like evolution. By subverting the big structures we also get rid of Platonified one way of doing things — in the end, the bottom-up theory-free empiricist should prevail.” (pp. 224-225, Taleb)

My conclusion?

David Grove was himself a Black Swan ...



The next Developing Group will continue to focus on Level IV — Maximising Serendipity: The art of recognising and fostering potential.

James Lawley

James Lawley is a UKCP-registered psychotherapist, business coach, certified NLP trainer and professional modeller. He is a co-developer of Symbolic Modelling and co-author (with Penny Tompkins) of Metaphors in Mind: Transformation through Symbolic Modelling. For a more detailed biography see about us and his blog.

 