
Superforecasting: The Art and Science of Prediction

Everyone would benefit from seeing further into the future, whether buying stocks, crafting policy, launching a new product, or simply planning the week’s meals. Unfortunately, people tend to be terrible forecasters. As Wharton professor Philip Tetlock showed in a landmark 2005 study, even experts’ predictions are only slightly better than chance. However, an important and underreported conclusion of that study was that some experts do have real foresight, and Tetlock has spent the past decade trying to figure out why. What makes some people so good? And can this talent be taught?
 
In Superforecasting, Tetlock and coauthor Dan Gardner offer a masterwork on prediction, drawing on decades of research and the results of a massive, government-funded forecasting tournament. The Good Judgment Project involves tens of thousands of ordinary people—including a Brooklyn filmmaker, a retired pipe installer, and a former ballroom dancer—who set out to forecast global events. Some of the volunteers have turned out to be astonishingly good. They’ve beaten other benchmarks, competitors, and prediction markets. They’ve even beaten the collective judgment of intelligence analysts with access to classified information. They are "superforecasters."
 
In this groundbreaking and accessible book, Tetlock and Gardner show us how we can learn from this elite group. Weaving together stories of forecasting successes (the raid on Osama bin Laden’s compound) and failures (the Bay of Pigs) and interviews with a range of high-level decision makers, from David Petraeus to Robert Rubin, they show that good forecasting doesn’t require powerful computers or arcane methods. It involves gathering evidence from a variety of sources, thinking probabilistically, working in teams, keeping score, and being willing to admit error and change course. Superforecasting offers the first demonstrably effective way to improve our ability to predict the future—whether in business, finance, politics, international affairs, or daily life—and is destined to become a modern classic.

352 pages, Hardcover

First published September 29, 2015


About the author

Philip E. Tetlock

18 books · 321 followers

Ratings & Reviews



Community Reviews

5 stars: 7,332 (36%)
4 stars: 8,307 (41%)
3 stars: 3,476 (17%)
2 stars: 724 (3%)
1 star: 264 (1%)
Maggie Stiefvater
Author · 61 books · 170k followers
April 4, 2024
It took me several years to unearth this book from my TBR but it was a good, thought-provoking read that hammered the importance of intellectual humility and challenging our own beliefs again and again.
Yannick Serres
240 reviews · 4 followers
November 9, 2015
During the first hundred pages, I was sure I would give the book a perfect score. It totally caught my attention and made me want more and more. The book made me feel like it had been written for me: someone who doesn't know much about predictions and forecasts, but feels like he could be good at it.

Then, past the halfway point, you get a little bored because it always comes back to the same thing: use numbers to make your predictions within a well-established timeframe, always question your predictions until the time runs out, learn from the past, and see beyond your narrow cone of vision.

This book is very interesting and worth giving a shot. It's a good mix of science and history, but you still feel like you're reading a novel.

I was expecting nothing from this book and had quite a lot of fun reading it. I've been positively surprised and hope you will be too.

I have to thank Philip E. Tetlock and Random House of Canada for this book, which I received through a Goodreads giveaway.
David Rubenstein
822 reviews · 2,664 followers
November 13, 2018
Philip Tetlock is a professor at the University of Pennsylvania. He is a co-leader of the Good Judgment Project, a long-term forecasting study. It is a fascinating project whose purpose is to improve the accuracy of forecasts. You can learn more about the project on the Good Judgment website. In this book you can learn the basics of how to make accurate forecasts in the face of uncertainty and incomplete facts.

An amazing tournament was held, which pitted amateur volunteers in the Good Judgment Project against the best analysts at IARPA (the Intelligence Advanced Research Projects Activity). The amateurs with the best records for accuracy are termed "superforecasters". They performed 30% better than the professional analysts, who had access to classified information. This was not a simple tournament. It was held over a long period of time, enough time to allow a good amount of research, thinking, and discussion among team members. It involved hundreds of questions. These questions were asked in a precise, quantitative way, with definite time frames. And besides giving predictions, players in the tournament estimated their confidence levels in each of their predictions. Their forecasts, along with their estimated confidence levels, went into the final scores.

So, what are the qualities of a good superforecaster? Perhaps the dominant trait is active open-mindedness. They do not hold onto beliefs when evidence is brought against them. They all have an intellectual humility; they realize that reality is very complex. Superforecasters are almost all highly numerate people. They do not use sophisticated mathematical models, but they understand probability and confidence levels. Superforecasters intuitively apply Bayes theorem, without explicitly using the formula quantitatively. They care about their reputations, but their self esteem stakes are less than those of career CIA analysts and reputable pundits. So, when new evidence develops, they are more likely to update their forecasts. Superforecasters update their forecasts often, in small increments of probability.
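The incremental, Bayesian style of updating described above can be sketched in a few lines. This is an illustrative sketch, not anything from the book: the prior and the likelihood ratios below are invented numbers, and superforecasters do this intuitively rather than with a formula.

```python
# Illustrative sketch only: the prior and likelihood ratios are invented.
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Update a probability given evidence, where likelihood_ratio is
    P(evidence | event) / P(evidence | no event)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

p = 0.30  # start from an outside-view base rate
for lr in [1.5, 1.2, 0.8]:  # two mildly supportive clues, one mildly contrary
    p = bayes_update(p, lr)
print(round(p, 3))  # prints 0.382
```

The odds form shows why updates naturally come in small increments: each new piece of evidence simply multiplies the current odds by its likelihood ratio.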

The book discusses the movie Zero Dark Thirty, about the military assault on the compound in Pakistan where Osama bin Laden was hiding. The character based on Leon Panetta railed against all the differing opinions of the intelligence analysts. But the real Leon Panetta understood the differences in opinion, and welcomed them. He understood that analysts do not all think alike; they have diverse perspectives, and this helps to make the "wisdom of the crowd" more accurate overall. It was found that teams score 23% better than individuals.

The book dispels the myth that during World War II, German soldiers unquestioningly followed orders, while Americans took the initiative and improvised. The truth, especially in the early phases of the war, was often exactly the opposite. The Germans followed a philosophy that military orders should tell leaders what to do, but not how to do it. American leaders were given very detailed orders that removed initiative, creativity, and improvisation. The author deliberately chose this example to make us squirm. One should always keep in mind that even an evil, vicious, immoral enemy can be competent. Never underestimate your adversary. This is difficult in practice; even superforecasters can conflate facts and values.

Nowadays, the military has radically changed. The military encourages initiative and improvisation. However, corporations are much more focused on command and control. Their hierarchical structure tends to micro-manage. In fact, some corporations have hired ex-military officers to advise company executives to worry less about status, and instead to empower their employees.

An appendix at the end of the book lists the Ten Commandments for superforecasting. These are useful generalities for successful forecasting. But even here, the authors are intellectually humble; their last commandment is to not always treat the commandments as commandments!

This is a fascinating, engaging book about a subject I had never thought much about. The book is easy reading, filled with lots of anecdotes and interesting examples. The authors rely quite a bit on the wisdom of behavioral economists Daniel Kahneman and Amos Tversky. They have given a lot of thought to the subject of forecasting, and it really shows.
Anton
326 reviews · 92 followers
December 11, 2017
5⭐️ - What a great book!

It will definitely appeal to fans of Thinking, Fast and Slow; Predictably Irrational: The Hidden Forces That Shape Our Decisions; and The Black Swan: The Impact of the Highly Improbable.

Thought-provoking and full of very perceptive observations. But I would particularly like to commend the authors for how well this book is written. This is an example of non-fiction at its best. There is definitely research and background science overview, but each chapter is a proper story as well. Philip E. Tetlock and/or his co-author (not sure who should take the credit) are superb storytellers! It was not only insightful but genuinely enjoyable to read this book.

I usually read several books simultaneously: one or two non-fiction titles and a bunch of fiction stories. But last week 'Superforecasting' monopolised my reading time. And it is particularly telling how well it managed to trample the competition from its fiction 'rivals'.

It goes straight onto my absolute-best non-fiction shelf. I strongly recommend it to anyone curious about the psychology of decision making and our mind's ability to cope with uncertainty.
Michael
544 reviews · 20 followers
January 24, 2016
Philip E. Tetlock feels a bit too polite. Sometimes it seems he is excusing wrong predictions by finding weasel words in them or interpreting them kindly, instead of taking them as the assertions they were intended to be.
Just say it: Thomas Friedman is a bad forecaster.
Instead of reading this book I recommend reading the books he references:
Thinking, Fast and Slow; The Black Swan: The Impact of the Highly Improbable; and The Signal and the Noise: Why So Many Predictions Fail - But Some Don't.
This book feels like a (superficial) summary of the aforementioned books and an attempt to combine them.

"The central lessons of “Superforecasting” can be distilled into a handful of directives. Base predictions on data and logic, and try to eliminate personal bias. Keep track of records so that you know how accurate you (and others) are. Think in terms of probabilities and recognize that everything is uncertain. Unpack a question into its component parts, distinguishing between what is known and unknown, and scrutinizing your assumptions." (New York Times review)
Andy
1,603 reviews · 524 followers
December 18, 2017
This book features some interesting trivia about "Super-forecasters" but when it comes to explaining evidence-based practice, it was Super-disappointing. It starts off well with a discussion of Archie Cochrane and evidence-based medicine (EBM), but then it bizarrely ignores the core concepts of EBM.

-In EBM, you look up what works and then use that info to help people instead of killing them. But when Tetlock talks about social philanthropy he implies that it's evidence-based as long as you rigorously evaluate what you're doing. NO! If your doctor gives you arsenic instead of antibiotics for your bacterial infection, that's not OK even if he does lots of lab tests afterwards to see how you're progressing.

-In EBM, you focus on the best available evidence. There's a difference between what some drug rep told you vs. the conclusions of a randomized clinical trial. But when Tetlock reviews the Iraq War fiasco, he argues there was a really big pile of evidence so it made sense to go to war. He doesn't seem to get that a really big pile of crap is still just crap. Some elements of the pro-war narrative were known to be bogus before the war. Others turned out to be bogus after (Curveball, etc.), so those were not investigated and confirmed as solid evidence beforehand either.

-In EBM, the point of a diagnostic test is to get a predictive value. This number tells you how likely a test result is to be true, based on its track record. Instead, Tetlock praises his forecasters for making up percentages that reflect their subjective degree of certitude. And he calls those "probabilities" but that is very misleading, because in science a probability is something like the chance of drawing a royal flush in poker, i.e. it's an objectively calculated number based on reality.

-In EBM, the big issue is whether the treatment works for the main relevant outcome. So for the Iraq War example, the question for the CIA was whether an invasion would A) spread democracy in the Middle East after preventing an imminent nuclear attack on the USA, or B) not prevent anything (because there was no secret new nukes program) and increase regional chaos as well as global terrorism (think ISIS). This decision tree is absent from the book, and that omission violates Tetlock's own rule about asking the meaningful hard question.
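For readers unfamiliar with the EBM jargon above, a test's predictive value can be computed from its track record plus the base rate. A toy sketch, with invented numbers (the sensitivity, specificity, and prevalence are assumptions chosen for illustration, not figures from the book):

```python
# Toy illustration: 90% sensitivity, 95% specificity, 1% prevalence (all invented).
def positive_predictive_value(sensitivity: float, specificity: float,
                              prevalence: float) -> float:
    """Probability that a positive test result is a true positive."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(0.90, 0.95, 0.01)
print(round(ppv, 3))  # prints 0.154
```

Even a fairly accurate test yields mostly false positives when the base rate is low, which is exactly the gap between subjective certitude and a track-record-based probability that the reviewer is pointing at.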

This book has good content on cognitive biases. But I would recommend going directly to the source on that topic: Thinking, Fast and Slow by Daniel Kahneman.
Elizabeth Theiss Smith
315 reviews · 83 followers
October 14, 2015
When it comes to forecasting, most pundits and professionals do little better than chimps with dartboards, according to Philip Tetlock, who ought to know because he has spent a good deal of his life keeping track. Tetlock has partnered with Dan Gardner, an excellent science journalist, to write this engaging book about the 2 percent of forecasters who manage to consistently outperform their peers.

Oddly, consumers of forecasts generally do not require evidence of accuracy. Few television networks or websites score the accuracy of forecasts. Years ago, as a stockbroker, I gave very little weight to the forecasts of my firm's experts; the stocks they recommended were as likely to go down as they were to go up. Today, as an occasional television pundit, I'm often asked to forecast electoral outcomes, so I was very curious about Tetlock's 2 percent who managed "superforecasting."

"How predictable something is depends on what we are trying to predict, how far into the future, and under what circumstances," according to Tetlock and Gardner. It makes no sense to try to predict the economy ten years from now, for example. But he wanted to understand how the best forecasters manage to maintain accuracy over the course of many predictions. In order to find out, he launched the Good Judgment Project, which involved 2,800 volunteer forecasters who worked on a series of prediction problems over several years. After the first year, he identified the best forecasters and put them on teams to answer questions like whether Arafat was poisoned by polonium, whether WMDs were in Iraq, and whether Osama bin Laden was in Abbottabad. His findings shed light on the kind of evidence-based, probabilistic, logical thought processes that go into the best predictions. A section on groupthink is nicely illustrated by the Bay of Pigs disaster; the ability of JFK's team to learn from their mistakes is demonstrated by the same group's more skillful response to the Cuban missile crisis.

Written in an engaging and accessible style, Superforecasting illustrates every concept with a good story, often featuring national surprises like 9/11 and the lack of WMDs in Iraq with explanations of why forecasters missed what looks obvious in hindsight. Ultimately, this is a book about critical thinking that challenges the reader to bring more rigor to his or her own thought processes. Tetlock and Gardner have made a valuable contribution to a world of internet factoids and snap judgments.
Numidica
421 reviews · 8 followers
April 10, 2023
This book was recommended to me by my son, and I found myself nodding in agreement with almost all of it. Tetlock focuses on what makes a person capable of excellent forecasting, and the short version of his explanation is that one must have an open mind, work hard at digesting a lot of information, be numerate, and be willing to adjust one's views as new facts heave into view. As Lincoln said, one must "adopt new views as quickly as they are shown to be true views".

Tetlock had some interesting / controversial things to say about the German Army in WW2; the German Army (the Wehrmacht) was, one must remember, inherited by the Nazis, not invented by them. The open-minded decision processes ingrained in the Wehrmacht, which pushed decision-making (and generals) down to the front lines, had a lot to do with its early successes. Similar approaches to problem solving are usually successful in forecasting and decision-making. Interestingly, both the US Army and, to a greater degree, the Israelis adopted the German war-fighting approach. Commanders told subordinates what to do and what the larger goal was; they did not tell them how to accomplish it. Having read a good deal about the early battles of the Wehrmacht, I know that what Tetlock is saying is correct. The lesson from these kinds of tactics is: don't fall in love with a plan, and be prepared to improvise, because you almost certainly will have to. Similarly, forecasters must constantly incorporate new information and adjust forecasts as appropriate.

I used to jokingly open staff meetings by saying, "Let's look at the numbers, because, you know, everybody likes the truth". But of course, the reality is that if you are doing poorly as reflected by your scoring - well, in that case, you may not in fact like the truth - but generally you can't get better unless you keep score. The IARPA project that Tetlock describes in detail attempts to dramatically improve forecasting by breaking down big problems into smaller components which can be calculated; this was a trick that Enrico Fermi used with humorous effect to astonish people. Fermi would give his students problems like, estimate the number of piano tuners in the City of Chicago, and then demonstrate how to break down the problem in such a way as to get a reasonable result. I had a wonderful math professor, Frank Giordano, who gave us these kinds of problems, and they were always fascinating to me.
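The Fermi-style decomposition described above can be sketched for the classic piano-tuner question. Every input below is a made-up round-number guess; the technique is the decomposition, not the numbers.

```python
# All inputs are rough guesses; the point is breaking the problem into parts.
chicago_population = 2_500_000
people_per_household = 2.5
piano_ownership_rate = 1 / 20            # guess: 1 in 20 households owns a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 2 * 5 * 50  # 2 a day, 5 days a week, 50 weeks a year

pianos = chicago_population / people_per_household * piano_ownership_rate
tuners_needed = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners_needed))  # prints 100
```

Each factor can be off by quite a bit, but overestimates and underestimates tend to cancel, which is why the final figure usually lands within the right order of magnitude.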

My only complaint is that the author repeated himself occasionally, but, on the other hand, in my case he was preaching to the converted. This was a very worthwhile book which emphasizes the need for testing claims with data. Fundamentally, Tetlock's recommendation is to use the scientific method in areas outside of science. There is no doubt that public discourse could use a strong infusion of facts right now to influence decision-making.
John Kaufmann
674 reviews · 58 followers
June 26, 2018
This book was solid, though perhaps not quite as good as I hoped/expected. It was engaging reading, full of interesting stories and examples. The author doesn't prescribe a particular method; superforecasting, it appears, is more about a toolbox or set of guidelines that must be used and adapted to the particular circumstances. As a result, at times I felt the author's thread was being lost or scattered; upon reflection, however, I realized that this is part of the nature of making predictions. His guidelines are clear and should be helpful, even if they cannot provide a method for correct predictions 100% of the time.

One critique I had was that the author didn't provide any statistical evidence of why the people he identified as superforecasters were good as opposed to lucky. I continued to think some of the examples he gave were based on luck, not necessarily skill - the author distilled a lesson that contributed to the success, but I would have had more confidence that his conclusion represented the reason for the superforecasters' success if he had provided more statistical evidence to support that conclusion. Nonetheless, his conclusions/guidelines appear sound, and I plan on using them.
Atila Iamarino
411 reviews · 4,426 followers
December 30, 2016
A positive surprise. I had bought this book in 2015 and couldn't even remember what motivated me to. I didn't regret it.

The book begins by explaining how and why most experts in political prediction and the like are usually wrong. Often more wrong than random attempts to predict the future (the anecdotal chimpanzee with a dart).

Philip E. Tetlock was part of a long study called The Good Judgment Project, in which participants spent years making predictions and having them evaluated. And this is the main point he raises: rarely is anyone who predicts what will happen evaluated against the predictions they actually made. But that is exactly what this project did. And they found people who could make quite accurate predictions, at least in the short term, since the long term is too chaotic.

Next, he explains how these people manage to make excellent predictions. Which, in short, is something very close to the way science is done: researching a lot, assigning well-weighted probabilities to each event, revising predictions in light of new data, and not letting misconceptions or intuition alone guide one's ideas. It ends up being a great book on critical thinking.

Andrew
656 reviews · 209 followers
June 29, 2016
Superforecasting: The Art and Science of Prediction, by Philip E. Tetlock, is a book about the art and science of statistical prediction and its everyday uses. Except it isn't, really; that is just what they are selling it as. The book starts off really strong, analyzing skepticism, illusions of statistical knowledge, and various types of bias. However, the majority of the book focuses on a tournament run by the US intelligence research agency IARPA, designed to use everyday citizens to make statistical predictions about real-life events.

This book really threw me off after such a strong start. I was expecting a book on forecasting: its design, uses, and various techniques. And in some ways Tetlock delivers. However, I did not expect a book on US foreign policy, or a comparison of the fictional version of CIA director Leon Panetta from Zero Dark Thirty with the real one. It was all a little bit mind-boggling, and not in a statistical way. The book just didn't hold my interest.

Frankly, I could continue to criticize; suffice it to say that a book about a US intelligence program should probably be labeled a bit better than this. The title suggests a cut-and-dried analysis of forecasting, but the book delivers propaganda, US political criticism, and so on, as opposed to interesting information on a form of statistical prediction. I would recommend a pass on this one if you are not interested in the addition of fairly watered-down US political theory. If you are, however, the book may be of interest to you. I was more disappointed than anything, and hope to read a book actually focused on forecasting in the near future.
Pavlo Illiashenko
25 reviews · 18 followers
March 16, 2016
Harry Truman famously said: "Give me a one-handed economist! All my economists say, 'on the one hand... on the other.'"

Philip Tetlock combines three major findings from different areas of research:

1) People don't like experts who are context-specific and cannot provide clear, simple answers about complex phenomena in a probabilistic world. People don't like it when an expert sounds less than 100% confident. They reason that confidence represents skill.

2) Experts who play the publicly acceptable role of hedgehogs (ideologically narrow-minded) and/or express ideas with 100% certainty are wrong about most things most of the time. The general public is fooled by hindsight bias (on the part of experts) and a lack of accountability.

3) We live in a nonlinear, complex, probabilistic world; thus, we need to shape our thinking accordingly. Those who do ("foxes," who, compared to "hedgehogs," can think non-simplistically) become much better experts in their own field and better forecasters in general.

I guess nobody with sufficient IQ or relevant experience will find any new and surprising ideas in this book. However, the story is interesting in itself, and many of Tetlock's arguments and examples can be borrowed for further discussions with real people in real-life settings.

Michal Mironov
147 reviews · 11 followers
September 10, 2019
I usually rank my favorite books on a line between "extremely readable" and "very useful". This one is probably among my top 3 most useful books ever. The other two are Kahneman's "Thinking, Fast and Slow" and Taleb's "Black Swan". You don't have to agree with the author on everything, but you will still get dozens of truly important facts that can fundamentally affect your life. Don't be misled by the title: you really have to read this book even if you don't have the ambition to predict stock prices or revolutions in the Arab world. Whether we like it or not, we are all forecasters, making important life decisions such as changing career paths or choosing a partner based on dubious, personal forecasts. This book will show you how to dramatically improve those forecasts based on data and the experience of the most successful forecasters. You'll be surprised that those experts usually aren't CIA analysts or skilled journalists, but ordinary intelligent people who know how to avoid the most common biases, suppress their egos, and systematically assess things from different angles. We will never be able to make perfect predictions, but at least we can learn from the very best.
Tony
547 reviews · 42 followers
January 17, 2020
I'm giving this a 4 even though I didn't complete it. It's very well written and structured, but I just decided halfway through that the subject wasn't for me.

Some exceptional real-world examples though!
Maru Kun
218 reviews · 514 followers
Shelved as 'interesting-but-dubious'
February 18, 2020
Troubled to find I own, as yet unread, a book recommended by Dominic Cummings.

Now I’ve lost all desire to read it, have to put it on the “interesting-but-dubious” shelf, and wonder whether or not I’m a weirdo who can’t tell the difference between real science and pseudoscience.

The book gets a mention from Cummings in this video: Tory backlash as Boris Johnson's rogue No10 chief Dominic Cummings refuses to condemn ousted 'superforecaster' Andrew Sabisky who 'posted vile Reddit comments defending rape and incest fantasies'
Frank
812 reviews · 42 followers
August 27, 2018
PT's Superforecasting correctly remarks upon the notable failure to track the performance of people who engage in predicting the outcomes of political events. This lack of accountability has led to a situation where punditry amounts to little more than entertainment: extreme positions offered with superficial, one-sided reasoning, aimed mainly at flattering the listeners' visceral prejudices.

One problem is that expressed positions are deliberately vague. This makes it easy for the pundit to later requalify his position to conform with the eventual outcome. For example: a pundit claims quantitative easing will lead to inflation. When consumer inflation doesn't appear, he can claim that 1) it will, given enough time; 2) in fact, there is inflation in stock prices; 3) he never said how much inflation.

Thus, the first task in assessing performance is to require statements with clearly defined, easily measurable criteria. Once this was done, PT began a series of experiments testing which personality characteristics and process variables led to good prediction outcomes, both for individuals and for groups. Key attributes include independence from ideology, an openness to considering a variety of sources and points of view, and a willingness to change one's mind. Native intelligence, numeracy, and practical experience with logical thinking all correlate positively with prediction accuracy, at least up to a point. But moderately intelligent and diligent individuals can often surpass the super-bright, who sometimes show a tendency to be blinded by their own rhetoric. And some "superforecasters" consistently outperform professionals with access to privileged information. The chapter on how to get a group to function well together is especially applicable to business management.

PT wrote his book in a middlebrow style, and anyone already familiar with basic psychology writing, e.g. from D. Kahneman, will often feel annoyed by his long and overly folksy explanations. Indeed, while it has good things to say about applied epistemology, it isn't necessary to read all 200 pages. A good alternative starting point would be to consult Evan Gaensbauer's review at the Less Wrong website: https://www.lesswrong.com/posts/dvYeS....
Civilisation ⇔ Freedom of Speech
965 reviews · 265 followers
June 21, 2021
I first heard of this book on CNN's GPS podcast, but the name "Superforecasting" reminded me of "SuperFreakonomics", which in turn reminded me of dubious smartass hindsights and caused me to ignore the recommendation. Tetlock was cited again by Steven Pinker in his book "Enlightenment Now", and that finally got me to pick it up.
Can you really forecast geopolitical events? Surprisingly, yes.
Do you need a special ability to be a "super-forecaster"? Not really.
What, then, do you need?
The book describes the methods used by super-forecasters and, in doing so, describes a number of systemic biases in our thinking. Also, there are many relevant examples, and except for a couple of complex equations, which can be ignored, the author makes his points really well. This was a fun, fast read that was also satisfying.
To the author's credit, he has finally made me pick up Thinking, Fast and Slow which I already think will be life-changing as far as books and ideas can be.
Paul Phillips
40 reviews
March 17, 2018
Really good and well-thought-out ideas, particularly relevant to anyone who has any sort of forecasting responsibility in their work. I think this is a must-read for economists.
My only quarrel is that the beginning is a lot punchier and the end kind of drags.
JJ Khodadadi
435 reviews · 108 followers
August 6, 2022
This book describes forecasting using examples from the real world around us and explains the methods that lead to more accurate predictions. The point I liked is that a statistical outlook, seeing things through statistics and probability, is very important in forecasting.
Leland Beaumont
Author · 5 books · 31 followers
August 17, 2015
Summarizing 20 years of research on forecasting accuracy conducted from 1984 through 2004, Philip Tetlock concluded “the average expert was roughly as accurate as a dart-throwing chimpanzee.” More worrisome is the inverse correlation between fame and accuracy—the more famous a forecasting expert was, the less accurate he was. This book describes what was learned as Tetlock set out to improve forecasting accuracy with the Good Judgment Project.

Largely in response to colossal US intelligence errors, the Intelligence Advanced Research Projects Activity (IARPA) was created in 2006. The goal was to fund cutting-edge research with the potential to make the intelligence community smarter and more effective. Acting on the recommendations of a research report, IARPA sponsored a massive tournament to see who could invent the best methods of making the sorts of forecasts that intelligence analysts make every day. This tournament provided the experimental basis for rigorously testing the effectiveness of many diverse approaches to forecasting.

And learn they did! Thousands of ordinary citizen volunteers applied, approximately 3,200 were invited to participate, and 2,800 eventually joined the project. “Over four years, nearly five hundred questions about international affairs were asked of thousands of Good Judgment Project’s forecasters, generating well over one million judgments about the future.” Because fuzzy thinking can never be proven wrong, questions and forecasts were specific enough that the correctness of each forecast could be clearly judged. These results were used to compute a Brier score—a quantitative assessment of the accuracy of each forecast— for each forecaster.
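The Brier score mentioned above is, in its simplest form, the mean squared difference between forecast probabilities and what actually happened (0 is a perfect score; lower is better). A minimal sketch with invented forecasts; the tournament described in the book used a variant of this idea:

```python
# Invented forecasts for illustration; outcomes are 1 if the event happened, else 0.
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between stated probabilities and actual outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.9, 0.7, 0.2, 0.6]  # stated probabilities that each event occurs
outcomes = [1, 1, 0, 0]           # what actually happened
print(round(brier_score(forecasts, outcomes), 4))  # prints 0.125
```

Because the penalty is squared, confident wrong forecasts (like the 0.6 on an event that didn't happen) dominate the score, which is what rewards well-calibrated probabilities over bold guesses.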

In the first year 58 forecasters scored extraordinarily well; they outperformed regular forecasters in the tournament by 60%. Remarkably, these amateur superforecasters “performed about 30 percent better than the average for intelligence community analysts who could read intercepts and other secret data.” This is not just luck; the superforecasters as a whole increased their lead over all other forecasters in subsequent years.

Superforecasters share several traits that set them apart, but more importantly they use many techniques that we can all learn. Superforecasters have above average intelligence, are numerically literate, pay attention to emerging world events, and continually learn from their successes and failures. But perhaps more importantly, they approach forecasting problems using a particular philosophic outlook, thinking style, and methods, combined with a growth mindset and grit. The specific skills they apply can be taught and learned by anyone who wants to improve their forecasting accuracy.

This is an important book. Forecasting accuracy matters and the track record has been miserable. Public policy, diplomacy, military action, and financial decisions often depend on forecast accuracy. Getting it wrong, as so often happens, is very costly. The detailed results presented in this book can improve intelligence forecasts, economic forecasts, and other consequential forecasts if we are willing to learn from them.

This is as close to a page-turner as a nonfiction book can get. The book is well written and clearly presented. The many rigorous arguments made throughout are remarkably accessible. Sophisticated quantitative reasoning is conveyed through examples, diagrams, and only a bare minimum of elementary mathematical formulas. Representative evidence from the tournament results supports the clearly argued conclusions. Personal accounts of individual superforecasters add interest and help create an entertaining narrative. An appendix summarizes the “Ten Commandments for Aspiring Superforecasters”. Extensive notes allow further investigation; however, the advance reader edition lacks an index.

Applying the insights presented in this book can help anyone evaluate and improve forecast accuracy. “Evidence-based policy is a movement modeled on evidence-based medicine.” The book ends with simple advice and a call to action: “All we have to do is get serious about keeping score.”
Profile Image for Asif.
126 reviews34 followers
December 6, 2015
Possibly the best book I read in 2015. I couldn't have read it at a better time, as the year nears an end. I could relate to a lot of it, since I work as an equity analyst trying to do the seemingly impossible: forecast stock prices. In particular, the examples of how superforecasters go about their jobs were pretty inspiring. Taking the outside view, building a tree of possible outcomes, and breaking that tree down into branches are techniques I could benefit from.

As an analyst I am certain of only one thing: my estimates for earnings and target prices will be wrong 99% of the time. However, I try to make sure that I am not missing the big picture, and when there is a screaming buy or sell caused by changing fundamentals or market overreaction, I do not want to miss it. My earnings estimates and target prices are actually much less important. What is true, however, is that small bits and pieces of information, including quarterly earnings, do play a role in getting to a high-conviction big-picture story.
Profile Image for Ahmad Abbasi.
28 reviews19 followers
Read
June 28, 2019
Most of the topics covered in the book consist of accounts of the wars in Iraq and Afghanistan and related matters. It also discusses the 2008 financial crisis that gripped the United States and various European countries. In short, the book argues that the average person's predictions carry no weight or credibility, and that to reach expertise and skill in forecasting you must stay informed about world news and events.
Overall, the book raises your general knowledge, but if you are hoping for something extraordinary to happen in it, there is nothing of the sort.
I think "Factfulness" is a far better book than this one for facing the events of today's world.
Profile Image for Mehrsa.
2,235 reviews3,631 followers
December 15, 2017
Really interesting. The book exposes some common misconceptions about forecasting, or really about statistics. I'm often surprised by how people, including me, misinterpret data. The book also shows what it takes to be an excellent forecaster. Basically, it requires the same skills as anything else: pay attention, evaluate yourself, know your blind spots, be humble, practice.
Profile Image for Frank Ruscica.
8 reviews3 followers
June 24, 2015
Just finished reading an advance copy. The signal-to-noise ratio of this book: maximum.
Profile Image for Ivan.
23 reviews3 followers
June 11, 2016
Among the densely packed shelves of the “Smart Thinking” section, the book “Superforecasting” once caught my eye. An intriguing topic, I thought, with many mentions in the elite press: must read! In short, this book is about how well people can predict the outcomes of important global events, how to measure that ability, and what can be learned from those who predict better.

Each of us forecasts regularly: we ponder how big a raise we will get, whether the dollar will fall, and who will win tomorrow's football match. The questions vary widely, from the mundane (“will it rain tomorrow?”) to the vital (“should I have this operation?”). Sometimes we are wrong, sometimes not. Failed predictions quickly slip from memory; few people keep score and try to draw conclusions. Fine, we are mere mortals.

But when something confusing and unexpected happens in the world, we turn to experts. They can readily share their views on when the next crisis will come, who will win the US election, when artificial intelligence will finally conquer the planet, and other such questions. We do not always agree with them, but experts speak confidently and persuasively, and we tend to believe them; that is what makes them experts, after all! Yet even the connoisseurs are sometimes wrong, and they often get away with it: the question was hard, anything could have happened. But does anyone measure the accuracy of specific experts' predictions? Strangely enough, the answer is “almost no one”, even for truly important decisions and professional analysts. No surprise: experts' opinions are generally too hard to interpret unambiguously, because they protect themselves and avoid concrete details and clear statements. You often hear “there is a risk” (how big?), “it is likely that” (how likely?) and other fuzzy phrases. Professionals do not want to stick their necks out, and their public statements leave escape routes. If the prediction does not come true, the wording can be turned to their advantage: “the risk was actually small”, “likely does not mean guaranteed”, and so on. Besides, people often omit the time frame of a prediction: “just wait, it will all happen as I said”.

As a result, we are in a situation similar to the state of medicine before the twentieth century. Doctors treated patients based on their experience and their confidence in their methods. Nobody thought to test quantitatively which remedies worked and which did not, or whether the patient recovered because of the medicine or on his own. Bloodletting, for example, was popular from the days of Galen, who treated Roman emperors, until the end of the nineteenth century. Many learned men never doubted that it worked. The qualitative breakthrough in medicine came precisely with the invention of clinical trials, when people thought to use statistics: split patients randomly into groups, give one group the medicine and the other nothing, and see what happens. Bloodletting for some, leeches for others, nothing for a third group, and measure the results two weeks later. Now that is an approach! But this method, which now seems so reasonable, did not win trust right away. Few doctors wanted to stake their careers on letting the results of their treatments be measured. So it is now: experts look warily and distrustfully at attempts to turn their deep, subtle analyses into soulless probability percentages.

Philip Tetlock, the author of the book, believes it is time to change this: the accuracy of predictions should be measured. That way we can learn who predicts better and which factors matter. Measurement provides the necessary feedback: people can see what does not work and correct their methods and reasoning. Ultimately this could lead to a revolution akin to the one in medicine.

Tetlock practices what he preaches: from 1984 to 2003, for almost twenty years, he ran experiments and collected about 28,000 predictions from 284 experts who agreed to participate anonymously. The results were discouraging: on average, the experts predicted only slightly better than random guessing. Curiously, the more famous the person, the worse the forecasts. These results became widely known after Tetlock published his book “Expert Political Judgment”. People started saying that experts do about as well as a “dart-throwing chimpanzee”, which became a meme in certain circles.

So is that it? Nothing can be predicted, and we should resign ourselves to fate and flip a coin? Why, then, this new book “Superforecasting”? It turns out not all is lost: Tetlock's experiment had other, less noticed results. First, prediction accuracy improves considerably for nearer events (up to two years from the date of the prediction); second, there was a group of people whose results were noticeably better than average, which gave food for thought.

After a while, the opportunity for a new experiment arose. US intelligence had made a very serious mistake by wrongly predicting the presence of weapons of mass destruction in Iraq. The result of this blunder was the invasion of Iraq, an expensive and unpopular campaign that George Bush later called the main mistake of his presidency. Inquiries followed: who was to blame, and what should be done? It turned out the intelligence community had not simply played along with politicians who wanted a war; it genuinely was highly confident that Iraq had WMD, which cast doubt on the reliability of its methods. As a consequence, a new elite agency was established, IARPA, the Intelligence Advanced Research Projects Activity (analogous to the famous DARPA, responsible for creating the Internet), to finally take prediction seriously.

To that end, a forecasting tournament was held in which several research groups from well-known US universities took part. Each day a new question appeared on the site, such as (these are real examples): “Will Serbia be officially granted EU candidacy by 31 December 2011?”, “Will former Ukrainian Prime Minister Yulia Tymoshenko be found guilty on any charges in a Ukrainian court before 1 November 2011?”, “Will Japan officially become a member of the Trans-Pacific Partnership before 1 March 2012?” and so on. The expected answer is a number: the probability of the event. Answers can (and should!) be updated as new information arrives. The researchers' task: recruit many forecasters, figure out how to teach them to predict better, then aggregate the resulting forecasts and produce an answer for each question. Once the answers become known, each university's results are compared with a control group, using so-called Brier scores.

After the first year of the tournament, one research group pulled clearly ahead, beating the control group and its rivals by 30 percent. Not bad, thought the intelligence community. But could it be a coincidence? In the second year, the same group beat the control group by 60 percent and its nearest rivals by 40 percent. According to a Washington Post editor, they even managed to beat intelligence analysts who had access to classified information! The experiment was cut short; the winners were known. Surprise, surprise: it was Philip Tetlock's group, “The Good Judgment Project”. About 2,800 volunteers, very different people: retired programmers, dancers, agronomists, students. They participated mostly out of interest, receiving as a reward a $250 Amazon gift certificate. After the first year, the researchers picked the top 2% of participants (the so-called “superforecasters”) and grouped them into teams, and it was these people who produced the most accurate results in the second year and won the tournament.

The aim of the experiment was not just accurate predictions and tournament victory, but an understanding of what drives forecast quality. That, in fact, is what “Superforecasting” is about. How do superforecasters differ from most ordinary experts, and what allows them to predict better? Picture a superforecaster. Surely a smart and erudite person, well versed in economics and politics, who follows the news regularly and is on good terms with mathematics and programming (to build clever models and run them on a computer). The author then analyzes each of these factors, chapter by chapter, and shows how important each turned out to be in practice.

What about IQ, for instance? Tests show that superforecasters really are smarter than most people, but not at genius level: the average superforecaster's IQ is higher than that of 80% of people. More important than raw intellect turned out to be the ability to look at the facts from different points of view.

The philosopher Isaiah Berlin once wrote an essay dividing people into two types: hedgehogs and foxes. “The fox knows many things, but the hedgehog knows one big thing.” Hedgehogs like to arrange everything into a single system, through which all new facts are assessed. Their gaze is focused: they see the world through the lens of one main idea. They long ago settled on certain opinions in politics, culture, economics and so on; they have worked the question out, found arguments supporting their convictions, and are confident they are right. If new knowledge contradicts the system, it can either be ignored as obvious falsehood or somehow discredited. Foxes are more cautious and pragmatic; they try not to hold fixed views, and in debates they try to consider the arguments from both sides. As a rule they have no sharply pronounced opinions and are more prone to doubt and self-criticism. Their favorite words are “on the one hand, on the other hand”, while the hedgehogs' are “moreover” and “furthermore”.

Guess who is more popular as a TV expert? Hedgehogs, of course. Viewers like simple, accessible explanations without excessive doubt. No need to think for yourself; the expert explained it so clearly! Unfortunately, reality often breaks out of simple schemes, and, as practice shows, it is the foxes who predict far better. Hedgehogs are often fogged by their system and closed to facts that by rights should change their minds. “When the facts change, I change my mind. What do you do, sir?”, as the famous quote goes.

I found this observation about foxes and hedgehogs very entertaining. Now, when I read interviews and articles, foxes and hedgehogs look back at me from everywhere! “If all you have is a hammer, everything looks like a nail.” Recently, Hillary Clinton said during her election campaign that it is useful to look at the world through the eyes of people who support Donald Trump, and even to try to understand them. “The Economist” approved. “A fox!”, I thought.

The general idea is that a store of knowledge in particular fields matters less than the ability to look at a question from different points of view and combine them skillfully. You cannot simultaneously be an expert on the Israeli-Palestinian conflict, cryptocurrencies, Rwandan politics and the US economy, but you can aggregate the available opinions on each of these topics, weigh the pros and cons, and produce a decent forecast.

The book also explains the importance of alternating the so-called “outside” view with the “close-up” view. What does that mean? Suppose you are told about the Renzetti family from New York: father Frank, 42, mother Julie, and five-year-old son Tommy. They live in a small house in Brooklyn; the father works as a bookkeeper, the mother occasionally waits tables, and grandmother Camilla, Frank's mother, lives with them. Question: what is the probability that they have a dog?

You could start studying the details closely: aha, the family seems Italian; Frank probably grew up in a large family himself but can now afford only one child; surely he will want to round out his family with a pet, so the probability is high, say 75 percent. Such stories sound convincing, but that is not what superforecasters do. They simply go and google the statistic: how many households in New York have a dog. That immediately gives a decent estimate, which can then be tuned according to the finer details. Why is it important to apply the outside view first and tune afterwards, not the other way round? Because of the amusing effect called “anchoring”: the first number mentioned (even if it is completely random, and even if people know it is) influences us significantly, so it is important that it be as accurate and unbiased as possible. Anchoring is confirmed by many curious psychological experiments.
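The outside-view-first recipe amounts to: anchor on a base rate for the reference class, then nudge it for case-specific details. A toy sketch of that order of operations (every number below is invented for illustration, not a real dog-ownership statistic):

```python
# Outside view first: anchor on the base rate for the reference class,
# then make small inside-view adjustments for case-specific details.
# All numbers are invented for illustration.
base_rate = 0.20                       # assumed share of households with a dog
adjustments = {
    "lives in a house, not a flat": +0.05,
    "young child in the home":      +0.03,
}
estimate = base_rate + sum(adjustments.values())
print(f"{estimate:.0%}")               # → 28%
```

Starting from the base rate keeps the anchor unbiased; starting from the vivid story ("Italian family, surely a dog, 75%") anchors on a number pulled from thin air.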

The book covers many experiments, many of which will be familiar to readers of Nobel laureate Daniel Kahneman's bestseller “Thinking, Fast and Slow”. These experiments demonstrate the various heuristics our brain applies to get an answer quickly. Sometimes those heuristics lead us readily to a simple but wrong answer.

Another chapter examines the effect of teamwork on the quality of analysis and prediction. On the one hand, team members can share information and criticize each other constructively; on the other, consensus in a team can breed overconfidence and the illusion that “a million lemmings can't be wrong”. Which effect wins out? The “leader's dilemma” is also interesting: can you simultaneously be a rational forecaster, clearly aware of the limits of your vision, and a self-assured leader who carries a company or an army behind him?

I particularly want to note the large number of references to original sources: not a single quotation is left without an explicit citation, even when the source is the author's personal correspondence. After this, it becomes hard to read ordinary popular-science books that simply end with a bibliography instead of sources.

If, having read the book, you want to try your hand at forecasting, the authors have set up an open forecasting tournament at GJOpen.com, where you can have fun and advance science at the same time. It should strongly appeal to geeks who love measuring everything, themselves included. Here you can do it all at once: watch event probabilities and use them to orient yourself on the geopolitical terrain, influence the predictions yourself, see how good your own predictions are, and ideally even improve your methods.

In conclusion: this book will not make you a superforecaster by itself; it only shows what matters for that and what does not. The main work, as always, is up to the reader. But it definitely provides food for curious thought, questions things considered obvious, and stresses the importance of experiments. On the whole, it can change your view of things quite substantially, which for me is the mark of a truly good book.

“Beliefs are hypotheses to be tested, not treasures to be protected.” – Philip Tetlock
Profile Image for Pouri.
37 reviews42 followers
Shelved as 'read_summary'
July 19, 2022
Core Messages
-Being able to realistically estimate the limits of predictability (e.g., the maximum useful time frame for weather forecasts) is crucial. (pp. 13-14)
-Archie Cochrane wanted to accurately determine the health effects of newly introduced cardiac care units in comparison to bed rest at home. He faced ethical accusations when proposing this real-life experiment but was proven right when bed rest turned out to be at least equally effective. This illustrates that people often trust their intuitions without even bothering to question them. (pp. 30-31)
-People make up explanations for why they did what they did. This was confirmed by experiments with split-brain patients. The same happens with stock market analysts when they try to explain price changes. People refrain from just saying "I don't know" or "It was just a random event." (pp. 36-37)
-Check your written analyses for transition words like "however" and "but." Forecasts with these words are usually better than forecasts with self-confirming words like "furthermore" and "moreover." (p. 69)
-Forecasters who know a little about many areas tend to be more successful than domain-specific experts. (p. 69)
-There is a reverse relationship between fame and accuracy. Successful experts and moderators are often the worst forecasters. Experts often have one big idea that they repeatedly use to explain the world. (pp. 71-72)
-Averaging the estimates of different people often leads to better results because people have different pieces of information that are pooled by averaging. Assuming that individual errors are uncorrelated, the wisdom of crowds serves as a consistent estimator. (pp. 73-74) Extremization can help to reflect the diversity of information in a group. (pp. 210-211)
-Epistemic uncertainty arises from a lack of research (including a lack of accessibility to information), aleatory uncertainty is unavoidable uncertainty. (p. 143)
-The dilution effect: irrelevant information dilutes predictions, pulling them toward uncertainty.
-Scope insensitivity: People are willing to donate the same amount of money to a small-scale local project as to a large-scale one (even when the local project is contained within it). People only think about how much they would pay to change the picture in their heads when they imagine something bad; they don't think about the scale of the problem. (p. 234)
-The Brier score is an interesting concept when evaluating prediction accuracy.
-I noticed that "Superforecasting" uses many concepts that consultancies look for in assessment centers.
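The averaging-plus-extremization idea from the notes above can be sketched as follows. The log-odds power form and the exponent a = 2 are illustrative assumptions; the Good Judgment Project tuned its own aggregation algorithm, and this shows only the general shape of the trick:

```python
def average(probs):
    """Wisdom of crowds: the simple mean of independent forecasts."""
    return sum(probs) / len(probs)

def extremize(p, a=2.0):
    """Push an aggregate probability away from 0.5 by raising the
    odds to the power a; a = 1 leaves p unchanged, a > 1 sharpens it."""
    odds = (p / (1 - p)) ** a
    return odds / (1 + odds)

crowd = [0.65, 0.70, 0.60, 0.75]   # four forecasters with diverse information
p = average(crowd)                 # ≈ 0.675
print(round(extremize(p), 3))      # → 0.812
```

The intuition: if several people with different evidence all lean the same way, the group collectively knows more than any individual, so the aggregate deserves to be more confident than the mere average.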

What makes a good forecaster (summary of pp. 191-192, supplemented by learnings that I found elsewhere in the book):
-Philosophy:
--Cautious: Nothing is certain
--Humble: Reality is infinitely complex
--Nondeterministic: What happens is not meant to be and does not have to happen
-Abilities and thinking styles:
--Actively open-minded: Beliefs are hypotheses to be tested, not treasures to be protected, don't identify with your forecasts
--Intelligent and knowledgeable, with a "need for cognition": Intellectually curious, enjoy puzzles and mental challenges
--Reflective: Introspective and self-critical
--Numerate: Comfortable with numbers
-Methods of forecasting:
--Pragmatic
--Analytic: Start with a base probability and adjust starting from that anchor
--Diverse: Consider different views
--Probabilistic
--Exact: Use exact percentages, don't round
--Thoughtful updating when relevant news changes the pool of information
--Good intuitive psychologists: Check thinking for biases
--Feedback: Write down your predictions, search for timely feedback, and make your predictions as unequivocal as possible to allow for evaluations of your predictions
-Work ethic
--Growth mindset: Believe that it's possible to get better
--Grit: Determined to keep at it however long it takes

SOURCE: Jan Spörer's review
Profile Image for RoWoSthlm.
97 reviews19 followers
December 19, 2018
As Hume noted, there is no rational basis for believing that the sun will rise tomorrow. Yet our brain wants to keep things simple: we believe that the future will be like the past. The problem is that the truth of that belief is not self-evident, and there are always numerous ways the future could change. This means our causal reasoning cannot be justified rationally, and thus there is no rational basis for believing the sun will rise tomorrow. Still, virtually nobody would claim there will be no sunrise tomorrow (though at least in Stockholm we haven't seen one in the last three weeks). This is human nature: to believe that certain things will happen. To predict and forecast is in human nature too, and some people claim they are very good at it. The author takes this on, gives an excellent overview and analysis of the state of affairs in the prediction and forecasting world, and introduces some big achievers in this area: superforecasters.

Regarding superforecasters: obviously, they exist. They exist just like Warren Buffett, like lottery winners, like certain personas who become presidents of the US, and so on, i.e. the rare ones who reach extremely improbable heights. Is it pure skill, or the product of skill passed through the randomness filter, a.k.a. luck? Forecasting events that emerge from social formations and human-made systems is virtually impossible. Assuming free will exists, it makes our world extremely complex and complicated. There is a possibility that supercomputers and AI will eventually greatly improve forecasting of natural phenomena like weather, earthquakes and flooding. But then again, humans are in the loop and most likely have the capacity to significantly alter natural processes, which makes the whole human-altered natural system practically unpredictable.

To become a superforecaster, one needs to constantly calibrate and verify the underlying models. As the author points out, this is relatively easy in some areas because the forecasted events occur frequently, as with weather forecasts, where one can check the forecast daily and improve the models. Rare processes are virtually impossible to calibrate precisely because they are rare: there is not enough data to calibrate the prediction model, whether mental or computational. That is why forecasting events residing in the fat tails, the extremes, the Black Swans as Taleb calls them, is epistemically impossible.

I took on this book with the premise that listening to anyone who thinks they can predict the future is a big waste of time. The author maps out the abilities a superforecaster usually possesses to become that good. Among the strengths of a good forecaster are open-mindedness and the ability to update one's beliefs based on new evidence. I think I have those dispositions, and this book actually shifted me a few degrees toward the optimistic sceptic. My curiosity about how groups can do valuable forecasting has increased.

Sometimes it can be very dangerous to base one's activities on predictive grounds. Good reactive capabilities, such as risk management, are essential for coping with whatever the future brings.

Someone said that planning is useless, but indispensable. And many say we should live in the moment. Still, it is hard, and maybe not even wise, to refrain from putting down some ideas on how the future might unfold. Maybe planning is an illusion, but like many illusions it can give some existential comfort.

Very soon we will close the books, raise a glass and try to look into the new year with the hope that it will be at least a better year than the previous one. We will certainly forecast many things we believe will happen next year. No one has put it better about this kind of game than Lin Wells (just change 2010 to 2019): “All of which is to say that I’m not sure what 2010 will look like, but I’m sure that it will be very little like we expect, so we should plan accordingly”.

This was my 100th book this year. I had forecast this to happen, and it did! On this forecast my Brier score is 0 – meaning: perfection! On that note, to my followers and friends:

Warmest thoughts and best wishes for a wonderful holiday and a very happy new year!!!
Profile Image for Allen Adams.
517 reviews30 followers
October 29, 2015
http://www.themaineedge.com/style/fut...

Ever since mankind has grasped the concept of time, we have been trying to predict the future. Whole cottage industries have sprung up around the process of prediction. Knowing what is coming next is a need that borders on the obsessive within our culture.

But is it even possible to predict what has yet to happen?

According to “Superforecasting: The Art and Science of Prediction”, the answer is yes…sort of. Social scientist Philip Tetlock and journalist Dan Gardner have teamed up to offer a treatise on the nature of prognostication. Not only do they discuss the many pitfalls of prediction, but they also offer up some thoughts on that small percentage of the population who, for a variety of reasons, are very, VERY good at it.

Tetlock has spent decades researching the power of prediction. Basically, people are pretty terrible at it. He himself uses the oft-offered analogy of a chimpanzee throwing darts – the implication is that random chance is at least as good at predicting future outcomes as the average forecaster. Even the well-known pundits, the newspaper columnists and talking heads – even they struggle to outperform the proverbial dart-tossing simian.

But over the course of Tetlock’s years of study by way of his ongoing Good Judgment Project, he uncovered an astonishing truth. Yes, most people have no real notion of how to predict the outcome of future events. However, there are some who can outperform the chimp. They can outperform the famous names. They can outperform think tanks and universities and national intelligence agencies and algorithms.

Tetlock calls these people “superforecasters.”

These superforecasters were among the tens of thousands who volunteered to be part of the Good Judgment Project, a massive, government-funded forecasting tournament. These people, folks from all walks of life, filmmakers and retirees and ballroom dancers and you name it, were asked to predict the outcomes of future events. And so they did. The best soon separated themselves from the pack, offering predictions about a vast and varied assemblage of global events with an unmatched degree of accuracy.

In “Superforecasting,” we get a chance to look a little closer at some of these remarkably gifted individuals. Tetlock offers analysis of some past predictions that were successful and others that were failures. We also get insight from prominent figures in the intelligence community and from people ensconced in the public and private sectors alike. And as Tetlock and company dig deeper, it becomes clear that the key to forecasting accuracy, to becoming a superforecaster, isn’t computing power or complex algorithms or secret formulas. Instead, it’s a mindset, a willingness to devote one’s intellectual powers to flexibility of thought. The accuracy of these superforecasters springs from their ability to think probabilistically and to work in teams, as well as their acceptance of the possibility of error and willingness to change their minds.

There’s something inherently fascinating about predicting the future. One might think that Tetlock’s findings are a bit complex, and they undoubtedly are. However, what he and co-author Gardner have done is condense the myriad complications of his decades of research into something digestible. A wealth of information has been distilled into a compelling and fascinating work. It’s not quite pop science, it’s a bit denser than that, but it’s still perfectly comprehensible to the layman. In essence, this book gives us a clear and demonstrable way to improve the way we predict the future.

“Superforecasting” is a fascinating and compelling exploration of something to which many of us may not have given much thought. It’s not all haphazard chance – there are actually ways to improve your ability to predict the future, some of which are laid out right here for you. If nothing else, you’ll never look at a pundit’s prediction the same way again.
Sabin – 354 reviews, 34 followers
July 2, 2018
It sucks when an audiobook is penned by two people but you hear a lot of “I” and “me”. After a little background checking, it seems the “I” and “me” guy is Tetlock, the scientist, while Gardner is along for the ride – presumably because he’s a journalist and because he can write. But maybe I’m wrong.

Anyway, the end result is worth it. It’s a very detailed account of two forecasting tournaments, which aim to find out if people are better than chance at predicting the future. Short answer: for some people yes, but on average no. Which means that, while some are better than average, some people are actually worse at predicting the future than the flip of a coin, or Tetlock’s infamous “dart-throwing monkey”.

Aside from describing the experiments and drawing conclusions from them, which by themselves would have made this book worth every minute spent reading it, the authors also discuss other experiments and connect their findings to other theories proposed by psychologists Daniel Kahneman and Amos Tversky, essayist of “Black Swan” fame Nassim Nicholas Taleb, and a few other theorists with both civilian and military backgrounds.

The authors focus on predictions about international events and especially on their correctness. The tournaments aim to find people who can make very good predictions (a lot better than the monkey) and find out how they go about achieving such good scores. Their conclusions are, as always, common-sense, if you stop to think about it. But their insight also helps avoid the pitfalls we may encounter along the way. While not many of us will be called upon to predict if, upon examination, Yasser Arafat’s body contains traces of Polonium, or if North Korea develops nuclear weapons within a given timeframe, these experiments point out judgement flaws inherent in our human nature and make us aware of our own mistakes.

Unlike Kahneman’s Thinking Fast and Slow, this book’s contents are perhaps less directly relevant to everyone. It seems to apply more to people who are in the business of forecasting, like economists, financial analysts and stock brokers. I say this also because it’s very practical and gives a lot of detail on the methods one can use to achieve better forecasting results. What is actually relevant to everyone is the description of the mindset required for good decision-making, and of how someone should weigh the consequences of important decisions before making them. Of course, going with your gut feelings is one way of making a decision, and, apparently, if you already have experience in that situation, a gut decision is already a lot better than random. But a better way – even if your gut tells you a course of action is good, and you have enough time to analyse the issue – is to check your internal point of view against an external one, which can be quantified, and then adjust your initial estimate accordingly. You get a much better explanation and a lot more examples in the book, and they do, in fact, make sense.
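That inside-view/outside-view adjustment can be sketched in a few lines of code. This is my own illustration of the general idea (anchor on an external base rate, then shift toward your internal estimate); the function name and the weighting are mine, not anything from the book:

```python
def blended_estimate(outside_base_rate, inside_estimate, weight_inside=0.3):
    """Anchor on the outside view (base rate for similar cases),
    then adjust partway toward the inside view (your gut estimate)."""
    return (1 - weight_inside) * outside_base_rate + weight_inside * inside_estimate

# E.g. similar projects succeed 10% of the time, but your gut says 60%:
# the blended forecast lands closer to the base rate than to your gut.
print(blended_estimate(0.1, 0.6))  # 0.25
```

How far to shift toward the inside view is a judgment call; the point is simply that the quantifiable outside view serves as the anchor.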

It is one of the better books I read this year, and I found it pleasant to be reminded of some of the concepts discussed previously by Kahneman. Forecasting is an integral part of our humanity, and this book can also help us understand ourselves a bit better. One more thing: listening to it was a delightful experience. The performance, except for a few German names, was admirable, while the pace and the examples kept everything interesting.
Morgan Blackledge – 695 reviews, 2,266 followers
March 10, 2019
I don’t have a lot to say about this book, other than that it’s good and the author Phil Tetlock is an extremely well regarded social scientist.

My uncharacteristic lack of verbiage is not necessarily a slam on this book. It’s more of a reflection of how dang overloaded I am at present with work and school.

It all has to get done. But in the meantime, my goodreads output has jumped the shark so to speak.

That being said:

Superforecasting documents Tetlock’s work studying individuals and teams with extraordinary (i.e. consistently better-than-chance over the long term) track records for predicting important events.

As it turns out, these individuals are not necessarily experts in the given field they make predictions in. They aren’t necessarily smarter or better at math. They are certainly not psychic or particularly intuitive.

Rather, they are meticulous critical thinkers who understand logic and probability, and use a Bayesian methodology to incrementally adjust their predictions as the data rolls in.

It’s like Google Maps’ estimated time of arrival (ETA). It’s shockingly accurate because it continuously compares its initial estimate (based on previous data) to real progress, and adjusts your ETA up or down based on an update rule that goes something like:

Posterior probability of the hypothesis given the new evidence, P(H|E) = the prior probability P(H), multiplied by the likelihood of that evidence if the hypothesis is true P(E|H), divided by the overall probability of the evidence P(E).

At least I think that’s something like the way it goes.

Please feel free to clarify in the comments.

So. It’s at once really neat, and like so many of the findings in social science, kind of a no duh when you consider what is being said i.e. I can tell you what’s about to happen if I am allowed to continuously change my prediction the closer we get to the event.

It’s not exactly that simple, though. Every datahead can run that same methodology, but not all of them are superforecasters.

So read the book.

But that’s it in a nutshell.