
Expert Political Judgment: How Good Is It? How Can We Know?

The intelligence failures surrounding the invasion of Iraq dramatically illustrate the necessity of developing standards for evaluating expert opinion. This book fills that need. Here, Philip E. Tetlock explores what constitutes good judgment in predicting future events, and looks at why experts are often wrong in their forecasts.


Tetlock first discusses arguments about whether the world is too complex for people to find the tools to understand political phenomena, let alone predict the future. He evaluates predictions from experts in different fields, comparing them to predictions by well-informed laity or those based on simple extrapolation from current trends. He goes on to analyze which styles of thinking are more successful in forecasting. Classifying thinking styles using Isaiah Berlin's prototypes of the fox and the hedgehog, Tetlock contends that the fox--the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events--is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems. He notes a perversely inverse relationship between the best scientific indicators of good judgement and the qualities that the media most prizes in pundits--the single-minded determination required to prevail in ideological combat.


Clearly written and impeccably researched, the book fills a huge void in the literature on evaluating expert opinion. It will appeal across many academic disciplines as well as to corporations seeking to develop standards for judging expert decision-making.

344 pages, Paperback

First published July 5, 2005

About the author

Philip E. Tetlock

9 books · 320 followers

Ratings & Reviews

Community Reviews

5 stars: 227 (34%)
4 stars: 252 (38%)
3 stars: 136 (20%)
2 stars: 30 (4%)
1 star: 13 (1%)
Displaying 1 - 30 of 73 reviews
Adam S. Rust · 49 reviews · 6 followers
December 7, 2012
An ambitious and thought-provoking study on the value and reliability of experts in the field of politics and the economy. Starting in the 1980s, political scientist Philip Tetlock interviewed experts, seeking their predictions on the outcomes of various future events (such as Gorbachev's interest in reform, the ascendancy of Japanese economic power, etc.).

The conclusions drawn from the outcomes of these expert predictions were bleak: experts are frequently wrong and almost consistently underperform mathematical algorithms used to make general predictions about economic and social outcomes. The only bright spot was that some experts seemed to perform consistently better than others. Tetlock labeled the better performers "foxes" in contrast with their underperforming colleagues, the "hedgehogs". "Foxes" were less ideologically committed, more open to changing their conclusions based on new data, and generally more open to admitting when they had called something wrong in the past. Hedgehogs were the precise opposite of that description.

Though full of highly technical statistical charts, the argument is presented clearly and persuasively. It gives equal time to critics of Tetlock's conclusions and provides the reader with ample food for thought. You'll never look at your favorite ideological pundit the same way again.
Daniel Hageman · 340 reviews · 47 followers
January 8, 2021
This book really hit the sweet spot with respect to counterfactual reasoning as applied to probability adjustment and its intimate relationship with political decision making. While some of the taxonomies identified to distinguish between the myriad tendencies people display did seem like an overreach at times, this is definitely a must-read for anyone seriously interested in historical and future societal trajectories (or those like myself who find solace in maintaining baseline levels of epistemic humility when speculating about such issues).
Marks54 · 1,432 reviews · 1,180 followers
July 28, 2013
This book reports on a research project to understand the bases behind expert political judgment. What does it mean to make such judgments, and how do we determine the quality of such judgments -- or the "track record" of those experts making the judgments? This is a hard question to address. Quality judgment is not just about whether some prediction comes true or not. It is not just about simple forecasting. It is not about simple topics, such as whether a make of car will be reliable, but concerns issues that are fraught with differences in values, where considerable risk is on the table, and situations that are highly unusual and unlikely to have many comparable situations to consider in coming up with a prediction. These settings concern expertise, but also local knowledge and historical context. This research is looking at how some of the most difficult judgments get made with a view towards developing principles so that they can be made better. A recent example that did not make it into the study due to timing was how to conclude whether there were weapons of mass destruction in Iraq in 2003 and what to do about it.

There is a lot going on here and it is a complex study. What is interesting to me is that the subjects are not just typical college undergraduates but real experts and specialists - people who are supposedly trained and paid to know how to make these decisions. It is the nature of the participants that also makes this study fascinating.

I don't wish to give anything away. Let me say that I have long been suspicious of experts and doubtful of claims of expertise, and the results of this study are consistent with those doubts. People tend to do poorly at making quality judgments when one tries to pin down just what such quality judgments entail. This is not a matter of political orientation (conservative versus liberal) -- both on average tend to do as well as or worse than what would be predicted by a chimp throwing random darts at a dartboard to make predictions. Most other demographic and background characteristics fare poorly as well. What fares relatively better is one's generic approach to thinking and making decisions - based on Isaiah Berlin's distinction between hedgehogs and foxes. Part of the analysis also includes "updating strategy" - the extent to which people adjust their predictions and decisions on the basis of information they receive from experience. Better updaters tend to make better decisions. (Read up on Bayes rule if you want to follow this.)
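For readers who want to see what such an update looks like mechanically, here is a minimal sketch in Python; the regime-survival scenario and all the numbers are invented for illustration, not taken from Tetlock's data.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) from a prior P(H) and the likelihoods of the
    observed evidence under H and under not-H (two-hypothesis Bayes rule)."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Illustrative only: a forecaster starts 70% confident a regime will survive the year.
prior = 0.70
# A surprising event (say, mass defections) is judged twice as likely if the regime is collapsing.
posterior = bayes_update(prior, p_evidence_given_h=0.2, p_evidence_given_not_h=0.4)
print(round(posterior, 2))  # ~0.54
```

Given a prior and how diagnostic the new evidence is, Bayes' rule says how far the forecast should move; the "better updaters" in the study move roughly that far, while hedgehogs tend to move less.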

The author and his team are very thoughtful and go to great lengths to be fair to all of the different styles and positions that are relevant to the evaluation of political expertise. The book also includes an excellent methodological section to clearly identify what the research was investigating and what it was not addressing.

With the popularity of Thinking, Fast and Slow, there are lots of books around that raise issues about how we think and how our thinking skills fit in with our behavior. I found this book to be one of the more thoughtful ones -- it was published in 2005, before the Kahneman book, but deals with related topics to those of behavioral decision theory. It is a carefully written report on an involved research program, so it is sometimes not easy reading. The author has made sure that this book has benefited from good writing and better editing, so it is more accessible than your typical article.

It is well worth the effort.
168 reviews · 10 followers
June 29, 2019
Amazing book! Nowadays, with so many opinions flying around the internet, it's hard to know: Who should we listen to? What are reliable sources of information? Whose predictions should we take seriously? How much uncertainty is there in this prediction? Do the talking heads on TV know more than the rest of us, or is it just entertainment?

To answer these questions, Phil Tetlock records the predictions of hundreds of political experts for many political events, such as the fall of the Soviet Union. Then Tetlock studies how good the expert predictions are, and what characteristics lead to good and bad forecasts. Early in the book, Tetlock partitions forecasters into two categories: foxes and hedgehogs, where foxes "know many things" and hedgehogs "know one big thing" (this is based on Isaiah Berlin's famous essay). Foxes can be seen as self-critical, open to counter-argument, and prone to hedge their bets. Hedgehogs are more dogmatic, and want to fit everything into a single unifying idea (e.g. capitalism is good / bad).

I think this book is on par with "How to Lie With Statistics", "The Signal and the Noise", etc., in that it provides a great commentary on **how to act** as a statistician / analyst. It's not so much the technical details, but rather, what's the right **mindset** for dealing with uncertainty. In some sense, this book tells me that I should strive to be more fox-like in my decision making process, especially as evidence pours in on the advantages of this type of thinking.

I had two other takeaways from this book:

1. The power of statistical models
2. The importance of integrating psychological insights with data-analysis

On point one, in the beginning of the book, Tetlock compares the forecasting performance of different approaches (foxes, hedgehogs, base-rate predictions, statistical models, etc.). The evidence showed that nothing beats a well-tuned statistical model, which was nice to see, since it indicates that working with a statistical model is a good way to integrate pieces of information.

On point two, it's amazing how consistently people make mistakes in their reasoning, and how predictable these mistakes are. For example, hindsight bias allows experts to believe their old predictions were correct, when they were actually wrong. Experts don't update their beliefs when new evidence comes in; experts are combative against contradictory information, more receptive to confirming information, and so on. There's also an interesting case where the fox makes errors: when foxes consider "too many" alternative scenarios, their forecasts get convoluted. All things considered, it seems clear that finding ways to integrate an understanding of human biases into forecasting projects would lead to tremendous improvements.
Teo 2050 · 840 reviews · 90 followers
April 10, 2020
2017.07.14–2017.07.15

Superforecasting is a popularization of this work. This is difficult to grasp in audio form: e.g. contains factor analysis tables.

Contents

Tetlock PE (2005) (09:48) Expert Political Judgment - How Good Is It? How Can We Know?

Acknowledgments
Preface

1. Quantifying the Unquantifiable
• Here Lurk (the Social Science Equivalent of) Dragons
• Tracking Down an Elusive Construct
• • Getting It Right
• • • 1. Challenging whether the playing fields are level.
• • • 2. Challenging whether forecasters’ “hits” have been purchased at a steep price in “false alarms.”
• • • 3. Challenging the equal weighting of hits and false alarms.
• • • 4. Challenges of scoring subjective probability forecasts.
• • • 5. Challenging reality.
• • Thinking the Right Way
• • Preview of Chapters to Follow

2. The Ego-deflating Challenge of Radical Skepticism
• Radical Skepticism
• • Varieties of Radical Skepticism
• • • • Ontological skeptics
• • • • Path dependency
• • • • Figure 2.1. The varied grounds that skeptics have for suspecting that observers will never be able to predict better than either chance or extrapolation algorithms.
• • • • Complexity Theorists
• • • • Game Theorists
• • • • Probability Theorists
• • • • Figure 2.2. The first panel displays the bewildering array of possible relationships between causal antecedents and possible futures when the observer does not yet know which future will need to be explained. The second panel displays a simpler task. The observer now knows which future materialized and identifies those antecedents “necessary” to render the outcome inevitable. The third panel “recomplexifies” the observer’s task by imagining ways in which once-possible outcomes could have occurred, thereby recapturing past states of uncertainty that hindsight bias makes it difficult to reconstruct. The dotted arrows to faded E’s represent possible pathways between counterfactual worlds and their conjectured antecedents.
• • • Psychological Skeptics
• • • • preference for simplicity
• • • • aversion to ambiguity and dissonance
• • • • need for control
• • • • the unbearable lightness of our understanding of randomness
• Advancing Testable Hypotheses
• • 1. Debunking hypotheses: humans versus chimps and extrapolation algorithms of varying sophistication.
• • 2. The diminishing marginal returns from expertise hypothesis.
• • 3. The fifteen minutes of fame hypothesis.
• • 4. The loquacious overconfidence (or hot air) hypothesis.
• • 5. The seduced by fame, fortune, and power hypothesis.
• • 6. The indefinitely sustainable illusion hypothesis.
• • Methodological Background
• • • 1. The sophistication of the research participants who agreed—admittedly with varying enthusiasm—to play the role of forecasters.
• • • 2. The broad, historically rolling cross section of political, economic, and national security outcomes that we asked forecasters to try to anticipate between 1988 and 2003.
• • • 3. The delicate balancing acts that had to be performed in designing the forecasting exercises.
• • • 4. The transparency and rigor of the rules for assessing the accuracy of the forecasts.
• • • • Figure 2.3. It is possible to be perfectly calibrated but achieve a wide range of discrimination scores: poor (the fence-sitting strategy), good (using a broad range of values correctly), and perfect (using only the most extreme values correctly).
• • • • a. Are some forecasters achieving better (smaller) probability scores by playing it safe and assigning close-to-guessing probabilities?
• • • • b. Did some forecasters do better merely because they were dealt easier tasks?
• • • • c. Are some forecasters getting worse probability scores because they are willing to make many errors of one type to avoid even a few of another?
• • The Evidence
• • • The Debunking Hypotheses: Humanity versus Algorithms of Varying Sophistication
• • • Figure 2.4. The calibration (CI) and discrimination (DI) scores of all subjective probability judgments entered into later data analyses.
• • • Figure 2.5. The calibration and discrimination scores achieved by human forecasters (experts and dilettantes), by their mindless competition (chimp random guessing, restrictive and expansive base-rate extrapolation, and cautious and aggressive case-specific extrapolation algorithms), and by the sophisticated statistical competition.
• • • The Diminishing Marginal Predictive Returns Hypothesis
• • • Figure 2.6. The first panel compares the calibration functions of several types of human forecasters (collapsing over thousands of predictions across fifty-eight countries across fourteen years). The second panel compares the calibration functions of several types of statistical algorithms on the same outcome variables.
• • • Figure 2.7. The impact of the k-value adjustment (procedure) on performance of experts (E), dilettantes (D), chimps (C), and an aggressive case-specific extrapolation algorithm (A) in three different forecasting tasks.
• • • The Fifteen Minutes of Fame Hypothesis
• • • The Hot Air Hypothesis
• • • The Seduction Hypothesis
• • • The Indefinitely Sustainable Illusion Hypothesis
• • • Groping toward Compromise: Skeptical Meliorism

3. Knowing the Limits of One’s Knowledge: Foxes Have Better Calibration and Discrimination Scores than Hedgehogs
• The Quantitative Search for Good Judgment
• • Demographic and Life History Correlates
• • Table 3.1. Individual Difference Predictors of Calibration of Subjective Probability Forecasts
• • Content Correlates
• • • Table 3.2. Variable Loadings in Rotated Factor Matrix from Maximum Likelihood Factor Analysis (Quartimin Rotation) of Belief Systems Items
• • • left versus right
• • • institutionalists versus realists
• • • doomsters versus boomsters
• • Cognitive Style Correlates
• • Figure 3.1. Calibration and discrimination scores as a function of forecasters’ attitudes on the left-right, realism-idealism, and boomster-doomster “content” scales derived from factor analysis.
• • Table 3.3. Variable Loadings in Rotated Factor Matrix from the Maximum Likelihood Analysis of the Style-of-Reasoning Items
• • Figure 3.2. How thoroughly foxes and fox-hog hybrids (first and second quartiles on cognitive-style scale) making short-term or long-term predictions dominated hedgehogs and hedge-fox hybrids (fourth and third quartiles) making short- and long-term predictions on two indicators of forecasting accuracy: calibration and discrimination.
• • Figure 3.3. The calibration functions of four groups of forecasters compared to the ideal of perfect calibration (diagonal).
• • Figure 3.4. Calibration scores of hedgehog and fox moderates and extremists making short- and long-term predictions as either experts or dilettantes.
• • Figure 3.5. The foxes’ advantage in forecasting skill can be traced to two proximal mediators, greater integrative complexity of free-flowing thoughts and a cautious approach to assigning subjective probabilities.
• The Qualitative Search for Good Judgment
• • Foxes Are More Skeptical of the Usefulness of Covering Laws for Explaining the Past or Predicting the Future
• • Foxes Are Warier of Simple Historical Analogies
• • • post-communist Russia (early 1992)
• • • India (mid-1988)
• • • Kazakhstan (early 1992)
• • • Poland (early 1992)
• • • waiting for the last communist “dominoes” to fall (1992)
• • • Saudi Arabia (1992)
• • • analogical perspectives on the root causes of war and peace
• • Foxes are Less Likely to Get Swept Away in Their Own Rhetoric
• • Foxes Are More Worried about Our Judging Those in the Past Too Harshly (and Less Worried about Those in the Future Judging Us Harshly for Failing to see the Obvious)
• • Foxes See More Value in Keeping “Political Passions Under Wraps”
• • Foxes Make More Self-conscious Efforts to Integrate Conflicting Cognitions
• • • integrative resolutions to “when do leaders matter?”
• • • • USSR (1988)
• • • • South Africa (1988)
• • • • Japan (1992–1993)
• • • • Nigeria (1992)
• • • hedge bets on the rationality of leaders
• • • • Persian Gulf War I (1990–1991)
• • • • Macroeconomic Policies in Latin America (1988–1992)
• • • • China (1992)
• Closing Observations

4. Honoring Reputational Bets: Foxes Are Better Bayesians than Hedgehogs
• A Logical-coherence Test
• A Dynamic-process Test: Bayesian Updating
• • Reactions to Winning and Losing Reputational Bets
• • Figure 4.1. The relative willingness of hedgehogs, hybrids (hedge-foxes and fox-hogs), and foxes to change their minds in response to relatively expected or unexpected events, and the actual amounts of belief adjustment compared to the Bayesian-prescribed amounts of belief adjustment.
• • Belief System Defenses
• • • Qualitative Analysis of Arguments
• • • • Challenge whether the Conditions for Hypothesis Testing were Fulfilled.
• • • • The Exogenous-shock Defense.
• • • • The Close-call Counterfactual Defense (“I was almost Right”).
• • • • The “Just-off-on-Timing” Defense.
• • • • The “Politics is Hopelessly Cloudlike” Defense.
• • • • The “I made the Right Mistake” Defense.
• • • • The Low-probability Outcome Just Happened to Happen.
• • • Quantitative Analysis of Belief System Defenses
• • • Hindsight Effects: Artifact and Fact
• • • • Figure 4.2. The relative magnitude of the hindsight bias when experts try to recall: (a) the probabilities that they themselves once assigned to possible futures (own perspective); and (b) the probabilities that they once said intellectual rivals would assign the same possible futures.
• • Linking Process and Correspondence Conceptions of Good Judgment
• • • Figure 4.3. A conceptual framework that builds on figure 3.5.

5. Contemplating Counterfactuals: Foxes Are More Willing than Hedgehogs to Entertain Self-subversive Scenarios
• Judging the Plausibility of Counterfactual Reroutings of History
• • History of the USSR
• • Table 5.1. Correlations between Political Ideology and Counterfactual Beliefs of Area Study Specialists
• • Demise of White-minority Rule in South Africa
• • Rerouting History at Earlier Choice Points
• • • Unmaking the West
• • • The Outbreak of World War I
• • • Table 5.2. Predicting Resistance to Close-call Counterfactuals
• • • The Outcomes of World Wars I and II
• • • Why the Cold War Never Got “Hot”
• Assessing Double Standards in Setting Standards of Evidence and Proof
• Table 5.3. Average Reactions to Dissonant and Consonant Evidence of Low- or High-quality Bearing on Three Controversial Close-call Counterfactuals
• Closing Observations
• Figure 5.1. This figure builds on figures 3.5 and 4.3 by inserting what we have learned about the greater openness of moderates, foxes, and integratively complex thinkers to dissonant historical counterfactuals. This greater willingness to draw belief-destabilizing lessons from the past increases forecasting skill via three hypothesized mediators: the tendencies to hedge subjective probability bets, to resist hindsight bias, and to be better Bayesians.

6. The Hedgehogs Strike Back
• Really Not Such Bad Forecasters
• • the need for value adjustments
• • Figure 6.1. The impact of k-value adjustment on performance of hedgehog and fox experts and dilettantes (HE, HD, FE, FD) and chimps in three different forecasting tasks
• • the need for probability-weighting adjustments
• • Figure 6.2. The gap between hedgehogs and foxes narrows, and even disappears, when we apply value adjustments
• • Figure 6.3. The gap between foxes and hedgehogs narrows, but never closes in the first and second panels and even eventually reverses itself in the third panel, when we apply increasingly extreme values of gamma to the weighted probabilities entered into the probability-scoring function. Extreme values of gamma treat all mistakes in the “maybe zone” (.1 to .9) as increasingly equivalent to each other.
• • the need for difficulty adjustments
• • Table 6.1. How often “Things” Happened (Continuation of Status Quo, Change in the Direction of More of Something, and Change in the Direction of Less of Something).
• • Figure 6.4. The difficulty-adjusted forecasting skill of hedgehogs and foxes making short or long-range forecasts inside or outside their specialties.
• • the need for controversy adjustments
• • the need for fuzzy-set adjustments
• • Figure 6.5. The gap between hedgehogs and foxes narrows, and even disappears, when we apply fuzzy-set adjustments that give increasingly generous credibility weights to belief system defenses.
• • a paradox: why catch-up is far more elusive for the average individual than for the group average
• Really Not Incorrigibly Closed-minded Defense
• Rebutting Accusations of Double Standards
• Rebutting Accusations of Using History to Prop Up One’s Prejudices
• Defending the Hindsight Bias
• We Posed the Wrong Questions
• We Failed to Talk to Properly Qualified and/or Properly Motivated People at the Right Time
• Misunderstanding What Game Is Being Played
• Closing Observations

7. Are We Open-minded Enough to Acknowledge the Limits of Open-mindedness?
• The Power of Imagination
• Debiasing Judgments of Possible Futures
• • Figure 7.1. The set of possible futures of Canada unpacked into increasingly differentiated subsets.
• • Canadian Futures Scenarios
• • Figure 7.2. Effects of scenario-generation exercises on hedgehog and fox, expert and dilettante, forecasters of possible five- and ten-year futures on Canada (1992–1997–2002).
• • Japanese Futures Scenario Experiment
• • Figure 7.3. The set of possible Japanese futures unpacked into increasingly differentiated subsets.
• • Summing Up the Scenario Experiments
• • Figure 7.4. Effects of scenario-generation exercises on hedgehog and fox, expert and dilettante, forecasters of five- to ten-year futures of Japan (1992–1997–2002).
• • Figure 7.5. The performance of hedgehogs and foxes, making predictions inside or outside of their domains of expertise, deteriorates when we replace their original forecasts with best estimates of the forecasts they would have made if they had disciplined their scenario-based thinking with reflective equilibrium exercises that required probabilities to sum to 1.0 or if they had not so disciplined their scenario-based thinking.
• Debiasing How We Think about Possible Pasts
• • Hindsight Bias
• • Figure 7.6. The impact of imagining scenarios in which events could have unfolded differently on hindsight bias in 1997–1998 recollections of predictions made in 1992–1993 for China and North Korea.
• • Sensitizing Observers to Historical Contingency
• • • cuban missile crisis experiment
• • • • Figure 7.7. Unpacking alternative, more violent endings of the Cuban missile crisis.
• • • • Figure 7.8. Inevitability and impossibility curves for the Cuban missile crisis.
• • • unmaking the West experiment
• • • • Figure 7.9. Inevitability and impossibility curves for the Rise of the West.
• • Thoughts on “Debiasing” Thinking about Possible Pasts
• Closing Observations

[Continued in comments due to Goodreads character limit]
Billie Pritchett · 1,107 reviews · 103 followers
October 23, 2015
Philip Tetlock's book Expert Political Judgment wants to know something very simple that is very difficult to find out. Through research, Tetlock wants to know how people can make good predictions about big social, economic, and political issues. For example: Is it possible for an expert to have predicted the collapse of the Soviet Union? Did anyone predict the collapse? What kinds of knowledge would an expert have to have to predict something like that?

After a long and detailed study, he discovered that there are two basic kinds of forecasters of social, economic, and political issues: foxes and hedgehogs. Foxes "'know many little things,' draw from an eclectic array of traditions, and accept ambiguity and contradiction as inevitable features of life." Hedgehogs "'know one big thing,' toil devotedly within one tradition, and reach for formulaic solutions to ill-defined problems."

Here are some basic facts Tetlock discovered:
- What experts think matters far less than how they think.
- [There are] few signs that expertise translates into greater ability to make either "well-calibrated" or "discriminating" forecasts.
- Foxes are better Bayesians than hedgehogs [meaning foxes are more likely to revise their beliefs in light of new evidence]
- Foxes are more willing than hedgehogs to entertain self-subversive scenarios
The ultimate lesson is that it is better to think like a fox: to know a lot of facts, to have an understanding of several possible scenarios and theories, to use whatever tools are necessary to find out what you would like to find out, and to accept some of the messy aspects of human behavior. It might just be that in some respects human behavior is irreducibly complex.

A note of caution to the reader who thinks that this book in some way implies that experts are useless regarding the topics they study: This book challenges experts to make precise, or more precise, judgments about the future of some aspect of human behavior, judgments that are inherently difficult to make, and which are in many respects outside the scope of the experts' expertise, whether they would like to admit it or not. However, if you ask economists, for example, whether, all things being equal, free trade with a foreign country would be a good thing for everybody, they would say yes and they would be right, generally speaking. But that kind of prediction seems so prosaic as not to be worth asking. But of course that is precisely the kind of thing experts like economists would know!

So anyway, that's all I'd say about the book. It is really quite wonderful. I admit to having skipped over some of the highly technical stuff. It's not exactly a general read but you can read it as though it is.
Zhou Fang · 141 reviews
December 16, 2018
After reading Superforecasting, I knew I had to pick up Philip Tetlock's original work. What I appreciated about this book and Tetlock's work in general is the level of rigor he goes into to make his arguments. Additionally, he gives opposing arguments fair treatments in their strongest forms. Here, he makes the case again that "hedgehogs" who derive arguments from knowing "one big thing" are weaker forecasters than "foxes" who know a lot of little things. He gives strong credence to the hedgehogs' arguments including aspects such as value judgment errors, almost-right fuzzy set adjustments, and difficulty of questions. Overall the work is less approachable than Superforecasting though, as it is more tailored towards academics.
Devin Partlow · 326 reviews · 4 followers
February 11, 2014
At first glance you'd think, "Awesome, a book that will help me choose which political experts I should put my faith in." But then you'd have to remember that this big scientific experiment didn't take influence into account. If a prominent figure predicts that something is going to happen, that prediction is going to influence the outcome.

If life could be neatly controlled like simulated lab environments, the results of these social experiments would hold weight, but unfortunately that's not how life works.

2.5 stars for effort though.
Frank · 146 reviews · 3 followers
October 17, 2015
Worst book I have read this year. Basically just a very long academic article. Here I thought I'd read an interesting non-fiction on expert political opinion, and instead I was bored by page after page of methodology, blabber and dense footnotes. Not recommendable to anyone.
Dylan · 119 reviews · 1 follower
June 10, 2020
I place Philip Tetlock right among the most important social science researchers of the past 50 years. I won’t pretend to know the work of a wide enough array of social scientists to make overly hyperbolic claims, but it’s hard to overstate the long term potential of his work. And it’s beyond baffling that there’s so little competition. The empirical study of good judgment should be an entire discipline–instead, we largely have Tetlock (with many covering important adjacent work, but not tackling the question directly), and an endless chorus of those motivated to dislike the underlying premise.

That’s not to say that I think every result in his work is bulletproof, clearly it isn’t. The broad thrust of his work is airtight, the minutiae are hard to gauge. And that’s what’s so frustrating… there should be a dozen Tetlocks pushing their own competing theories on what constitutes good judgment.

The book itself is far from perfect. The writing style is not my favorite (it can be a weird mix of folksy-but-academic that makes basic points less accessible than they should be). “Expert Political Judgment” is naturally paired with “Superforecasting”, but it’s hard to know which to start with (I wish there was some combined version…). I doubt all the methodology holds up. But still, it’s probably the research I just wish would be more universally embraced. The criticisms of the limits of probabilistic forecasting and empirical judgment are perfectly valid, and Tetlock devotes much of the text to giving them a full voice. But we remain impossibly far from these limits, and without giving empirical judgment the respect it deserves, much of our other expertise will go to waste. In oversimplified summary…

Probabilistic forecasting is a skill. It is difficult, and we consistently underestimate its limits (and the reach of irreducible uncertainty). It is not impossible, and those with “the skill” can rack up an impressive track record over time, but that doesn’t necessarily include experts with advanced domain knowledge. Yet, why should this surprise us? If it’s a skill in its own right, when would domain experts ever actually learn it? How often are they incentivized to be good probabilistic forecasters, let alone to practice that as a pursuit? It’s a completely natural result. If something is difficult, and people aren’t rewarded for doing it, why would they learn?

Again, an absolutely essential book. For an extremely brief version of some key results, [AI Impacts](https://aiimpacts.org/evidence-on-goo...) wrote up a good summary (mostly on the later Superforecasting work). Included below are my jotted notes, as I really do want to retain this stuff, and need to jog my memory later.


Preface to the 2017 Edition:
The central findings of EPJ are roughly
1. Experts were broadly overconfident about their future forecasts.
2. They were only marginally superior to random chance (“dart-tossing chimps”), and generally were outperformed by straightforward extrapolation algorithms.
3. Ideological leanings and educational background were not strong predictors of forecasting success, but “hedgehog” versus “fox” cognitive temperament was (described below).
4. Good judgment stems from curiosity, open-mindedness, acceptance of dissonance and doubt, and epistemological humility.
Tetlock pushes back on the narrative that the takeaway was “experts don’t know anything” (although, you can’t entirely blame people, given the provocative way he framed it), instead showing this rather specific deficiency, and diagnosing how they are led astray (usually, the weight of ideology). Experts offer an enormous amount of essential knowledge. Even in probabilistic forecasting, they provide their own creative insights, and know how to frame the right questions. But probabilistic forecasting is a skill, in its own right. And it’s simply not one we can ignore, given how much of our decision making is premised on these short term forecasts. Tetlock summarizes some other criticisms, and mentions how Superforecasting expanded upon this work.

Chapter 1: Introduction
The Preface is a nice addition, because it’s a much better introduction than the first chapter, which is a slog. Once you suffer through some of Tetlock’s unfortunate writing tendencies, you get to his summary of the results of the book, and some key terminology. In particular, the “Radical Skeptics”, who believe that the enterprise is largely doomed because the future is so chaotic and unknowable. He points out that even when we are uncomfortable pointing to the results of “unique” events, we can still apply coherence and consistency checks to different forecasts, and at least show when they can’t be right. They are contrasted with “Meliorists”, who hold optimistic faith in our ability to predict the future through good judgment.

Chapter 2:
Radical Skeptics have a variety of reasons to believe that forecasting is a futile exercise: from path dependency to the complexity (the butterfly effect and so on), to our psychological biases. Tetlock examines this using the results from his regional geopolitical forecasting tournaments, based on recruited domain experts. Their forecasting judgment was poor, only barely beating random chance and running below that of simple extrapolation algorithms. However, the Radical Skeptics are not fully validated, as the research is discovering consistent patterns in good judgment, with significant liabilities bringing the average right back down towards the chimps. The fact that the specific domain of expertise had so little bearing on forecasting ability does not imply that forecasting is foolhardy, because there were cognitive traits that did show consistent ability.
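To make "barely beating chance" concrete, here is a toy sketch in Python of the kind of quadratic probability (Brier) scoring used in the study, where lower is better; the question set, outcomes, and all three forecasters' numbers below are invented for illustration, not drawn from the book.

```python
def brier(forecasts, outcomes):
    """Mean squared difference between probability forecasts and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented outcomes: 1 = "status quo held", 0 = "things changed", over ten questions.
outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0]

# The questions offer three options (status quo / more of X / less of X); a "chimp"
# spreads probability evenly, so it always puts 1/3 on the option scored here.
chimp = [1 / 3] * len(outcomes)
# A restrictive base-rate rule always predicts the (invented) historical rate of "no change".
base_rate = [0.7] * len(outcomes)
# An overconfident expert: bold calls in both directions, about half of them wrong.
expert = [0.9, 0.9, 0.9, 0.9, 0.3, 0.9, 0.8, 0.9, 0.4, 0.9]

for name, f in [("chimp", chimp), ("base-rate rule", base_rate), ("expert", expert)]:
    print(f"{name}: {brier(f, outcomes):.3f}")
# With these made-up numbers: chimp 0.344, expert 0.316, base-rate rule 0.210 --
# the expert edges out the chimp but loses to the simple extrapolation, echoing the pattern above.
```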

Chapter 3:
Ideology had little correlation to good forecasting judgment, but there was a clear divide between “foxes” and “hedgehogs”. “Hedgehogs” know one big thing, and are strong ideologically driven thinkers who favor parsimony. They see coherence in the chaotic world, and are more confident in the wrongness of any other ideology.

“Foxes” know many little things, and constantly revise their judgments in the face of new information. They constantly use “however” and “but” in their arguments, they are tolerant of dissent, they take the “outside” (base rate) rather than “inside” (specific information) view to start, they take note of evidence which challenges their beliefs, they tend to be detached, they think we use hindsight bias to judge the past too harshly, they keep their political identities comparatively hidden, and they are inductive rather than deductive.

Hedgehogs particularly suffered in overconfident long range forecasts. If you have an aggressive, parsimonious ideological model, you can convince yourself that you can see the chain of causality that stretches into the future. This is almost never the case, and the long range future is only knowable through careful understanding of compounding noise.

Chapter 4
An easy way to understand the forecasting dominance of foxes lies in Bayesian updating. Their initial insight didn’t outpace the hedgehogs, but they knew how to constantly adjust their beliefs in the presence of new information. Hedgehogs were more reliant on belief defense mechanisms, like “right prediction but wrong timing”, “useful mistake to make”, “unlucky outcome”, which allowed them to avoid the required rational Bayesian updating.


Chapter 5
Similar patterns appear when evaluating the cognitive methods of foxes and hedgehogs in evaluating historical counterfactuals. Tetlock walks through a number of historical case studies, showing how the different minds see the same evidence in vastly different ways. (Note, his recent research involves studying counterfactual judgment through Civilization 5, the video game… very cool). Hedgehogs make many of the same cognitive errors, being overly drawn to parsimony, and not adjusting in response to new information.

Chapter 6
A long list of objections to this exercise, showing how the results might be adjusted if you take those complaints into account. It’s a reminder that “good judgment” is provisional, and has to be put carefully into context.

Chapter 7
Scenario exercises (where you try to imagine all the other things which could have happened) help check hindsight bias and can help temper some flaws in forecasting techniques. But they can overdo the effect for foxes, who might become too questioning of their own beliefs. Broadly, there is only scant work showing how these skills can be taught (more work was done in Superforecasting, but it’s still an incredibly understudied area).

Chapter 8
First, an amusing (if rather self indulgent) imaginary intellectual debate between skeptics and meliorists along a range of extremes. Then, the policy implications of good judgment. More detail is provided in the preface, but I struggle to see how anyone can seriously believe that probabilistic forecasting could not help us make better institutional and personal decisions. But I’ve heard some skepticism there, so it’s still an argument that needs to be made.
Jurij Fedorov · 385 reviews · 74 followers
August 24, 2020
Review

No, not for me at all even though I love politics and social science.

Now, the audiobook may well be lower quality than the book itself. I really didn't enjoy the wooden yet over-the-top narrator, whose voice I never warmed to. It was like he was reading words without reading the meaning of the sentences, while nearly screaming each word. The book itself doesn't really make for a good audiobook though, so the narrator is not fully to blame. It's dry academic writing and a complicated topic. The two things combined make this a package way too hard to unpack and understand. You have to be fully focused on each sentence to get anything out of this hard text, yet it's not some deep philosophical book with great life lessons, so the work doesn't seem to be rewarded fairly. Rather, it presents complicated topics via simple examples without anything deep to them. So why it's written this way I just do not get at all. But I assume the author may have feared being looked down upon if he wrote it in simple English.

And this is even more confusing, as his book "Superforecasting: The Art and Science of Prediction" was good and had the opposite problems. It was too simple for the topic at hand, yet the writing style never tried to make it more intellectual than it really was. So Superforecasting was a book I could get something from even though it wasn't anything scientific or important. So while Superforecasting isn't an essential read, it's a basically good book, while this book covers a more complicated and less important topic in a much more academic writing style.

I feel really bad for not liking it because I think this book has a ton of good points in it and I agree with the conclusions overall. As a package it's just not for me. It's boring and sleep inducing. That doesn't mean it's a bad book or that I don't recommend it. You just need to have great patience and focus to enjoy it and those things are not my strong suits. I really feel like there is a 4 star book in here for me somehow.

Also, the silly analogies and weird names for things just had me more confused. The academic writing style is already really hard to get into so the various new terms I had to learn were too much. Too much boring work for the amount of lessons I could get out of the book. It's not that I refuse to put in this work if needed. It's just that I don't see why I should when there are hundreds of audiobooks out there with good lessons but that I will actually enjoy. Maybe I am lazy or something? But this is something I do for fun not for work. So it needs to be fun somehow.

Overall I think the issue is that the academic writing style does not include clear and good findings I can fully agree on. It's different with academic papers as I can read them and still check the data and agree with it. Here it's a loose discussion so it needs to have a more discussion focused style. A personal narration that I can clearly think about from all sides.

Conclusion

The audiobook went into one ear and out the other. Most conclusions didn't stick because the critical thinking needed for such a philosophy book wasn't made easy by the academic writing style. The dry academic writing style is pretty much how you make me not learn your lessons. But I still think this is a book many will enjoy and love. I just didn't.

I give it 2 stars because I think it has potential for other readers who don't have anything even close to ADHD. For me it's a 1 star book.
Pandit · 192 reviews · 11 followers
July 21, 2019
One of the classic textbooks! I had it on my radar for a while, but after John Cleese mentioned it in a recent interview, it was time to pick it up for myself. I hear that Tetlock's more recent book is an easier read, but academic though the style is here, it is easy enough to follow. The story presented is quite clear: political experts do not have a good track record in making predictions.
Tetlock though, in this long term study, looks at the underlying character styles of different experts, breaking them down for the most part into Foxes (who know many things, and consider evidence from many angles) and Hedgehogs (whose prickly defence helps them to stick to a single story, with a jarring outcome). Though foxes won out, the research presented suggests that neither group is much better at forecasting than 'dart-throwing chimps'.
"When good case-based stories are circulating for expecting unusual outcomes, people will base their confidence largely on these stories and ignore the cautionary base rates"
I found the 'hedgehog' story-style of reasoning interesting, and how such displayed confidence and case-scenario making attracts media attention much better than the more reasonable foxes. This applies not just to political foretelling, but to most of the media - the current hype about climate change is a good example. Voices of reason warning that climate change is not such an utter disaster as the hedgehogs warn are not being heeded.
"overconfident experts may be more quotable and attract more media attention. On the other, overconfident experts may also be more likely to seek out the attention. The three principles - authoritative-sounding experts, the ratings conscious media, and the attentive public - may thus be locked in a symbiotic triangle."
"hedgehog opinion was in greater demand from the media ... simple decisive statements are easier to package into sound bites."
Also interesting is the analysis on how likely people are to update their beliefs in the face of evidence. Many of the people interviewed, when presented years later with their earlier (wrong) predictions, still insisted that they were 'almost right', that their predictions could still happen, or that only intervening events derailed their forecast. Again, hedgehogs were much more stubborn. I'm thinking of Paul Ehrlich and his predictions that within 10 years, the world faces massive food wars as populations outgrow resources. He's still predicting this 40 years on, despite history proving him utterly wrong on every single count.
In terms of believability of predictions, good stories trump good science or a proven track record. Tetlock presents some verification of this phenomenon with examples in his test group. The more imaginable a prediction is, the more it is believed by both the proclaimer, and the public.
In summary, it is giving nothing away to say that Tetlock shows with methodical research, that 'experts' do not have a good track record in making political predictions.
Hariharan Gopalakrishnan · 100 reviews · 1 follower
December 26, 2018
Rereading after a year (actually read most of the mathematical parts, but just listened to the rest on Audible - it's surprisingly easy to grok in audio form considering the subject matter, although taking notes on your smartphone helps!).

This is a thoughtful work about the effectiveness (or lack thereof) of expert judgement in forecasting scenarios in messy domains such as politics and economics. This is the condensation of 20 years of research using forecasting tournaments by the author. Tl;dr? Be humble, not just because even the best humans do not even come close to the statistical models, and those models themselves aren't close to being omniscient either, but also because even within humans, it's the humbler, more self-critical ones that do better. Also read widely, cutting across domains: it's the foxes (the ones who know many little things) rather than the hedgehogs (ones who know one big thing) who succeed.

It is a dense read, but Tetlock's arguments are so clearly expressed here (the general pattern is: argument for his interpretation of a result -- counter-argument -- counter-counter-argument, and so on, with the author usually having the last word (though surprisingly often not, as Tetlock seems to have taken to heart the lesson about humility in judgement), and then on to the next argument), and all the sophisticated statistical adjustments so clearly explained, that I found this a pleasure to read. The extent to which this book goes on to address its detractors makes me realize what separates good academic writing from the polemics that most popularizations of social science research are. And the research itself is extremely interesting and important, not just for governments (as evidenced by the author's involvement in prediction tournaments funded by US intelligence), but also for normal people, either in forming our own models of reality or in deciding which 'expert' to trust.
Ed · 333 reviews · 33 followers
May 30, 2012
This is a fantastic data-based exploration of just how little political pundits actually know. And in fact, the more media exposure they get and the more single a view of the world they possess, the less accurate their political forecasts. Over 20 years, Philip Tetlock persuaded political experts to make predictions on a wide variety of topics, only to find that most experts were less reliable than a chimp picking options via a dart board. He used Isaiah Berlin's wonderful distinction between the hedgehog that knows one thing and the fox who knows many things to show that the only experts with anything like a respectable ability to sometimes predict future political developments were those with many models of reality, who were willing to think they might be wrong, who did not try to re-write their (written) predictions, and who were rather tentative in the first place. Beware of TV talking heads who pontificate about future political developments: they are no better than chimps. I think every political science student should read this, as should all voters.
Lorenzo Barberis Canonico · 131 reviews · 2 followers
December 23, 2019
Wow, so this is the collection of Tetlock's primary research findings that paved the way for "Superforecasting". I recommend reading the latter first though.

Also, if you've read "Superforecasting" first, don't expect the same writing style for this one because unfortunately academic books are not written in the nicest way. Still, the findings and the methodologies are so impressive: they will make you believe in social science research again.
Jeff Duda · 43 reviews · 1 follower
December 17, 2023
BOTTOM LINE: this book is okay, but you'll likely find greater enjoyment and get more out of reading the follow-up book to this, "Superforecasting."

---

DETAILED REVIEW: This book was chewy - hard to get through. The language was dense and complex - I frequently needed to look words up in the dictionary (maybe I just have a poor vocabulary). While it is good to repeat thesis statements and main points, which the author did, I felt he used too much filler material and narrative to restate the points. In other words, I felt this book could have made its point using substantially less prose.

The other major issue I had with the book concerned the relevance of the topical content -- the political events covered in the forecasting exercise. While I was alive during the late 1980s-early 2000s, I was young (childhood) and not particularly versed in the political issues of the time. Therefore, when the book moved into heavily discussing specific political matters, I lacked reference or context and thus tuned out completely and skipped most of those parts. I didn't feel that rehashing those issues so much was necessary for explaining the results of the hedgehogs vs. foxes forecasting tendencies.

Also, as a PhD-holding meteorologist who has published papers in scientific journals that use the same forecast verification metrics presented in this book, I was a bit annoyed at the particular choices of metric presentation and graphs. Many of the data figures in this book would have been criticized in peer review for scientific publication. It is always strange to me that people *insist* every single quantity be presented as positively oriented; error is a negatively oriented measure, so "converting" it to a positively oriented one by plotting calibration as "1 - calibration_score" actually confused me more than if the calibration term (referred to as "reliability" in atmospheric science) in the Brier score decomposition had simply been plotted directly.

Also, labels!
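For readers unfamiliar with the verification jargon in the review above, here is a rough sketch in Python of the Murphy decomposition of the Brier score; its reliability term is the calibration quantity being discussed (the book reports a positively oriented transformation of it, which is the presentation choice objected to above). The function and the numbers are my own illustration, not code or data from the book.

```python
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """Murphy decomposition of the Brier score for binary outcomes.
    Forecasts are grouped by their stated probability; returns (reliability, resolution,
    uncertainty), where Brier score = reliability - resolution + uncertainty and
    reliability is the (negatively oriented) calibration term."""
    n = len(forecasts)
    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        bins[f].append(o)                       # one bin per distinct stated probability
    o_bar = sum(outcomes) / n                   # overall event frequency
    reliability = sum(len(v) * (f - sum(v) / len(v)) ** 2 for f, v in bins.items()) / n
    resolution = sum(len(v) * (sum(v) / len(v) - o_bar) ** 2 for v in bins.values()) / n
    uncertainty = o_bar * (1 - o_bar)
    return reliability, resolution, uncertainty

# Invented numbers: saying "0.8" for events that happen only half the time is poor
# calibration, and that mismatch is exactly what the reliability term picks up.
forecasts = [0.8, 0.8, 0.8, 0.8, 0.2, 0.2, 0.2, 0.2]
outcomes  = [1,   0,   1,   0,   0,   0,   1,   0]
rel, res, unc = brier_decomposition(forecasts, outcomes)
print(rel, res, unc, rel - res + unc)  # the last value equals the plain Brier score (0.265 here)
```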
197 reviews · 2 followers
March 24, 2021
Eye-opening perspective on how far we should trust expert judgement. The foreword to the newer edition provided helpful perspective and allowed the author to try to unpack his theory, which has been turned into a negative sound-bite. The fact that political experts used this attack method proves his major point: that experts do no better than informed dilettantes, and often do worse in their subject of expertise over the long term.

The experimental method is betting on outcomes with numerical values. Then the "players" would need to own up to their reputational bets. There are some boundaries to when it is applicable, such as time horizons, turbulence of the environment, etc., as there are for any useful tool. This could be used in politics, intelligence analysis, and many other fields. However, the current system and economics prevent using this improvement method in political punditry (which the author discusses better than I can).

The group that performed well were "foxes", who were skeptical and willing to adjust their world views. Even the best "foxes" only did as well as simple statistical analysis and were soundly beaten by tailored algorithms. The "hedgehogs" are more reliant on deterministic models, and performed almost as poorly as pure chance (leading to the sound-bite of "dart-throwing monkeys") individually. They made larger bets on "sure" or "impossible" things, so overall scored lower when they were proven wrong. However, confident "hedgehogs" make for better TV and news hosts in polarized media seeking profit over best political advice.

I would not recommend this as a first foray into the subject. Technical details are explained, but it is very dense (I have taken an undergrad and graduate course in statistics and could barely keep up). I also used several concepts from Thinking Fast and Slow and Fooled by Randomness as building blocks to understand this book.
Lazarus · 159 reviews · 3 followers
February 21, 2022
Because of Tetlock, I have become somewhat informed about the world of political advisement, which this book and his later "Superforecasting" take a deep dive into. I could tell that this was a reflective book, designed to highlight the flaws in the profession as a whole, and then offer a viable solution. The profession is turning prediction itself into a science, but humans are so predictably unpredictable that the science itself is probabilistic.

I think Tetlock's answer to this problem is truth, and growth through truth, but forecasting is not designed to predict the truth, or to predict what's going to happen; in fact, the first 7 chapters are about highlighting how no one knows what the hell is going on. Political pundits and experts are paid to manage the "itching ear". It is a management of ideas based on knowledge of current events and the response to that knowledge. In some ways experts are like lawyers, who reconcile truth with what can be handled. No one is going to have advisors around them who don't have alternatives.

All in all it is a Tetlock brainstorm of the entire profession, of which his knowledge is vast and acumen unquestioned.
Ryan · 1,193 reviews · 170 followers
February 3, 2020
An interesting book about the kind of biases "professional" pundits have in talking about political topics. Very data-driven, and shows that experts are better than the completely uninformed (college students or worse), but that experts are actually also good outside of their areas of expertise, if they rely on decent information sources. There's an even more exciting result -- fairly straightforward computer models can do even better than experts, even within their areas of expertise.

One thing missing is that experts often are better at coming up with indicators -- i.e. "if X happens, then Y will result" -- which is more useful than just saying "I think Y will happen". There are also structured ways to get far better predictions from people (on par with the best machine models) by asking better questions (and using markets, etc.)

Irony: most of the "wrong" predictions cited in the book ended up coming true shortly thereafter ("Russia invades Ukraine", etc.)

Unfortunately, the audiobook narrator is gratingly annoying; I'd read the book instead.
104 reviews
April 8, 2020
This book is one of those where the conclusions are fascinating, but the book itself is not worth reading. The vocabulary is often inaccessible for readers who are not versed in statistics. I was hoping that the book would be a collection of crappy takes by Stephen A. Smith-like people who think too highly of themselves, but the book is more a narrative account of a science experiment with references to hedgehogs, foxes, dilettantes, and chimps instead of real people. It felt like I was reading an excessively long law review article, one which I would normally just read the introduction and conclusion and never touch it again. This book would be perfect for one of those services that break down nonfiction books into fifteen-minute podcasts.
Planar · 40 reviews · 1 follower
July 7, 2017
Very interesting book. Makes a complete and concise case for how political forecasting can be evaluated with quantifiable criteria. Also, such a thesis has obvious relativist limitations (recognised by the author himself), but overall it is a valiant effort to quantify and evaluate different forecasting approaches.
Sadly I chose to listen to this as an audiobook. This was a bad choice since the text is supported by numerous graphs and equations that are difficult to follow in the audio format.
Go for the hard copy.
Esben · 122 reviews · 15 followers
September 5, 2022
Expert Political Judgement is a wonderful book: Dense in information and with beautiful experimental arguments from 20 years of research. It feels like reading one long research paper and I'm all the happier for it.

The basic idea is that generalist experts that accept and integrate conflicting perspectives and do not get swayed by rhetoric or getting credit are clearly better than close-minded experts, the ones we often see in news articles spouting their confident perspectives. Additionally, humans are also generally worse at predicting than baseline statistical models.
Monzenn · 508 reviews · 1 follower
November 22, 2022
Very good book. It does get research-paper-y at times, but even for those moments I get a dose of punditry which is always fun to read. Part experiment, part social commentary, Expert Political Judgment documents an appropriate test of the accuracy of punditry. As a bonus, it even has multiple sections that deal with the test's limitations and weaknesses. The mark of a good paper, and a nice bonus to an already great book.
131 reviews
May 24, 2023
I wrote a terrible review but I am going to give it another chance because the other reviewers loved it. I just decided to skip the preface and have started with chapter one. A work in progress. More to follow.

I did reread it. About 5 to 10 pages in, I realized that I was looking up at least five words per page for their definitions, and I have a good vocabulary. I then realized a few pages later that it would take forever to finish this book if one read it word for word. So I skimmed it for chapters and subchapters that seemed of interest. I got the gist of it, which is enough, but not enough for me to recommend this book; it is far too complicated. I do not understand why the author insisted on using such difficult vocabulary words. The average person could not possibly understand anything in this book.
Tom · 39 reviews
July 3, 2020
I read the Superforecasting book first, which was excellent, 5/5. The issue is that I got so far into forecasting and watching podcasts with Tetlock that I was reading stories and theories I already knew, in less detail.

If you fancy taking on Tetlock's work, do so in the order of the books:
Expert Political Judgment, then Superforecasting
Michael · 476 reviews · 45 followers
September 28, 2023
3.5 stars.

Hard to rate, because this is a write-up of a research project and doesn't include any of the interesting research, just the summaries. You don't really get a feel for how exactly the research was done.

But it seemed quite rigorous, and is probably the best study going round related to forecasting and how people rationalise their errors.
Stefan Bruun · 279 reviews · 59 followers
November 2, 2020
The book includes a lot of very valuable points. Most people would benefit from understanding these. The reason I only give it three stars is that the point could have been conveyed in far fewer pages.
