
Judgment Under Uncertainty: Heuristics and Biases

The thirty-five chapters in this book describe various judgmental heuristics and the biases they produce, not only in laboratory experiments but in important social, medical, and political situations as well. Individual chapters discuss the representativeness and availability heuristics, problems in judging covariation and control, overconfidence, multistage inference, social perception, medical diagnosis, risk perception, and methods for correcting and improving judgments under uncertainty. About half of the chapters are edited versions of classic articles; the remaining chapters are newly written for this book. Most review multiple studies or entire subareas of research and application rather than describing single experimental studies. This book will be useful to a wide range of students and researchers, as well as to decision makers seeking to gain insight into their judgments and to improve them.

544 pages, Paperback

First published April 30, 1982


About the author

Daniel Kahneman

34 books · 8,771 followers
From Wikipedia:

Daniel Kahneman (Hebrew: דניאל כהנמן‎; 5 March 1934 – 27 March 2024) was an Israeli-American psychologist and winner of the 2002 Nobel Memorial Prize in Economic Sciences, notable for his work on behavioral finance and hedonic psychology.

With Amos Tversky and others, Kahneman established a cognitive basis for common human errors using heuristics and biases (Kahneman & Tversky, 1973; Kahneman, Slovic & Tversky, 1982), and developed prospect theory (Kahneman & Tversky, 1979). He was awarded the 2002 Nobel Prize in Economics for his work on prospect theory, and was professor emeritus of psychology at Princeton University's Department of Psychology.

http://us.macmillan.com/author/daniel...

Ratings & Reviews


Community Reviews

5 stars: 677 (49%)
4 stars: 378 (27%)
3 stars: 209 (15%)
2 stars: 62 (4%)
1 star: 37 (2%)
Displaying 1 - 30 of 32 reviews
Andrew Hunt
45 reviews · 22 followers
March 24, 2014
I read this book because it and Gödel, Escher, Bach were mentioned in the same breath in Eliezer Yudkowsky's incomparable Harry Potter and the Methods of Rationality. With an endorsement like that, how could I resist at least taking a look?

My development as a scientist and rationalist has been intertwined in some unlikely ways with the Harry Potter phenomenon. I first encountered game theory about eight years ago in a book called The Science of Harry Potter. Expounding the famous Prisoner's Dilemma, the author replaced the traditional prison sentences in the cost-benefit matrix with losses of House points; instead of anonymous prisoners, the actors in the drama were none other than Harry and Draco, improbably compelled to illustrate a lesson from Econ 101.

I was intrigued, but, unaware of the vastness of the field I had glimpsed or of the broader edifice of rationalism, I looked into the subject no further. There were plenty of other interesting things to learn - not to mention that I stopped reading the above-mentioned book when I ran across the section on sleep paralysis and couldn't sleep with the lights off for weeks. (Thankfully, I've only experienced the phenomenon once. Once was enough.)

Fast forward to 2012, the year during which I ran across Yudkowsky's Methods, an improbably brilliant reimagining of the Harry Potter universe in which Harry is required at every turn to make use of the methods he used only in isolated, illustrative cases in The Science of Harry Potter. My dormant interest in game and decision theory was reawakened, and Eliezer Yudkowsky, who for me has ascended into the pantheon, had the book recommendations to get me started on the path. Judgment Under Uncertainty topped the list.

And so it should have. If there's anyone who needs to be aware of cognitive biases, it's everyone. No one is immune, and I can't imagine beginning to understand cognitive science without a basic, if only abstract, grasp of the ways in which, under circumstances of partial ignorance, perception itself can be the source of systematic errors. (Psychologists, statisticians, and decision theorists will recognize "circumstances of partial ignorance" as "essentially all circumstances.")

So reading this book felt, in a way, like coming full circle. Kahneman and his collaborators have made their statement, have written their Critique; as for me, I've been shown yet another road to becoming a consistent scientist - which is to say, a rationalist and humanist as well.

Nihil supernum.
Nick Klagge
761 reviews · 64 followers
November 8, 2014
This book is a collection of academic papers on behavioral economics. It was first published in 1982, so a reader today should approach it as a presentation of the "first wave" of this field of research, which became much better known over the following 30 years.

For me, the book was very hit-and-miss. Some of the papers were very engaging; others, I barely got through without falling asleep. It shouldn't be a surprise that the ones by Kahneman and Tversky are generally among the most interesting and insightful--the guys won a Nobel for a reason. One thing that I think would have helped a great deal is if the editors (K&T + Paul Slovic) had written introductory remarks about each paper, similar to what Axelrod did in "The Complexity of Cooperation." As it is, the papers just come one after the other, with no connecting thread other than a broad organization by category.

One thing I did really appreciate was reading some of K&T's original work on concepts that are by now very well-known, such as the representativeness and availability heuristics. These concepts have been written about a lot in a pop-sci setting, and I think the treatment in these papers is much more nuanced than the typical presentation. My favorite essay in the book was "On the study of statistical intuitions," by K&T. In it, they discuss some deep potential issues with the general experimental design used for the study of this topic. Among these is the existence of generally and implicitly accepted "rules of conversation" that are often violated by experimenters; broadly speaking, that one's interlocutor will be "informative, truthful, relevant, and clear."

For almost anyone interested in the topic, however, I would recommend first reading Kahneman's excellent and accessible book, "Thinking, Fast and Slow."
Takuro Ishikawa
18 reviews · 10 followers
July 17, 2010
This book offers a collection of papers on decision science, the study (and improvement) of human decision making. These papers are particularly useful to all business analytics professionals who want or need to evangelize about the need for analytics.

Altogether, the articles describe when intuitive decision making fails, and why. More importantly, they make a case for analytics and provide ideas on how it can improve decision making.
David
460 reviews
July 6, 2009
Very academic, peer-reviewed treatment of social psychology married to economics. It's hard to cut through much of the nuts and bolts, and I didn't try too hard. I skimmed through the cryptic parts, detailed proofs, equations, etc. But the general concepts are invaluable. It's really intended as a collection of journal articles for post-graduates in social psychology and behavioral economics. Five-star revelations, but two-star presentation from a layman's perspective. Kahneman won the Nobel Prize for his lifetime contribution to this area of study, and I dare say he deserves it.
Chad
388 reviews · 71 followers
January 16, 2018
We usually think of bias in the context of underlying motivations or interests, particularly in the political realm. The underlying premise of this book is that there are much more fundamental biases in human judgments. Humans aren't perfectly logical creatures. Even when we have perfectly good information, and we are free from motivational biases, we still make poor decisions.

I picked up this book after there were a few passing references to it in "Harry Potter and the Methods of Rationality." In this alternate world, Harry Potter is supposed to represent the paragon of Baconian rationalism, and he uses the fact that he has read Kahneman's work as evidence of his rationality.

The book itself isn't for the faint of heart; it is a collection of scientific articles published by psychologists with research interests in the science of judgment. The subject matter itself is interesting. If you're interested in the material, I would recommend reading the introductions and conclusions of each essay, and, if it captures your interest enough, reading further into the experimental sections. I got less thorough in my reading as I got through the book, because I was anxious to be done; I already read enough scientific papers as a graduate student, I don't want to read more!

The material itself is fascinating. I believe the editor, Kahneman, has written another book directed more towards a lay audience, "Thinking, Fast and Slow." I will look into reading that too, but I'm sure a lot of its material is drawn from these scientific studies.

I think the material itself would be good for most readers to be aware of, to recognize the limitations and tendencies of human thought. Here are a few examples:

Humans tend to be uncharitable in making judgments of others; when seeking explanations of others' behavior, we tend to attribute more to characteristics of the individual than to the situation in which they find themselves. For instance, if a student is doing badly in school, we are more likely to think they are lazy than to consider the circumstances going on in their home.

Humans tend to be very bad predictors of outcomes that have multiple steps. For instance, when trying to predict how long a project will take to complete, we very easily underestimate the time required. Why? Each step of the process requires successful completion, and so a single fudge factor doesn't account for all the delay.
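The arithmetic behind this is easy to sketch: even when every individual step is very likely to go well, the chance that all of them do shrinks fast. A quick illustration (the per-step probability is invented):

```python
# Why multistage plans slip: every step must succeed for the plan to hold.
p_step = 0.90   # assumed chance any single step goes as planned
steps = 10

p_all_on_time = p_step ** steps
print(f"P(every step on time) = {p_all_on_time:.2f}")
```

Even with 90% confidence in each step, a ten-step project finishes on schedule only about a third of the time.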

Humans rarely take into account base rate statistics. For instance, if an editor is very confident a manuscript will get published because of the excellent writing, he rarely takes into account the success rates of similar books. The integration of base rate data AND intuitive judgments is referred to as regression, and leads to better estimates.
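One corrective procedure in this spirit can be sketched in a few lines: shrink the intuitive estimate toward the base rate in proportion to how predictive the intuition actually is. The numbers below, especially the validity weight, are invented for illustration:

```python
# Regress an intuitive prediction toward the base rate.
def regress_to_base_rate(intuition, base_rate, validity):
    """Shrink an intuitive estimate toward the base rate; validity in [0, 1]."""
    return base_rate + validity * (intuition - base_rate)

base_rate = 0.10   # historical success rate of similar manuscripts
intuition = 0.80   # the editor's confident gut estimate
validity = 0.30    # assumed correlation between impressions and outcomes

print(round(regress_to_base_rate(intuition, base_rate, validity), 2))
```

The lower the validity of the impression, the closer the final estimate should sit to the base rate.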

Definitely a good read, but I just wasn't in the mood for scientific papers!


Here's a list of the essays contained in the book and a brief description of each:

Judgment under uncertainty: heuristics and biases
More of a summary of the entire book with introductory concepts, including representativeness (e.g. what is the probability that object A belongs to class B?), misconceptions of chance (truly random events don't seem random to humans), and sample size (humans are bad at taking into account the effects of sample size when making decisions).

Belief in the law of small numbers
The believer in the law of small numbers gambles his hypotheses on small sample sizes without realizing that the odds against him are unreasonably high. Bad for scientists who only do 5-6 replicates in a study.

Subjective probability: A judgment of representativeness
Humans evaluate the representativeness of a sample by looking for similarities to the population of interest and the apparent "randomness" of the sample.

On the psychology of prediction
Predictions have three sources of information: (1) prior general knowledge, e.g. base rates, (2) information specific to the case at hand, and (3) information on the reliability of the information you have been given. Humans generally ignore (3) entirely and rely almost exclusively on (2).

Studies of representativeness
When seeking to attribute causes to effects, the lay person has three sources of information: distinctiveness information (how does this situation differ from others?), consistency information (does this happen in repeated experiments?), and consensus information (does everyone respond this way?). Humans generally ignore consensus information.

Judgments of and by representativeness

Popular induction: Information is not necessarily informative

Causal schemas in judgments under uncertainty
It is a psychological commonplace that people strive to achieve a coherent interpretation of the events that surround them, and that the organization of events by schemas of cause-effect relations serves to achieve this goal.

Shortcomings in the attribution process: On the origins
This chapter examines "non-motivational attribution biases", biases that aren't due to self-serving motives. For example, the fundamental attribution error, in which we "infer broad personal dispositions and expect consistency in behavior or outcomes across widely disparate situations and contexts."

Evidential impact of base rates
Even when given base rate data, humans rarely take it into account, using their initial intuitions rather than the hard numbers provided by scientific studies.

Availability: A heuristic for judging frequency and probability
Introduces a new heuristic: availability. Humans estimate probabilities by how easily information is retrieved from memory. For instance, if I asked you to compare the frequency of words that start with r against words that have r in the third position, you would have an easier time recalling words that begin with r, and would likely overestimate their frequency relative to the latter.

Egocentric biases in availability and attribution
This looks at how the ego plays a role in availability. For instance, I tend to focus on my own inputs into a project rather than my teammates', and am likely to overestimate my contribution. This can result in tensions, such as over who gets authorship on a paper.

The availability bias in social perception and interaction

The simulation heuristic
There are two kinds of judgments where availability can play a role: how easily past information is recalled, and how easily new situations are created using the imagination. The latter is called the simulation heuristic. For instance, if I asked you to think up all the ways you could kill someone with a paper clip, you would start to get an idea of availability.

Informal covariation assessment: Data-based versus theory-based judgments
Humans are really bad at evaluating covariation, because they look at only a limited selection of the data. For instance, when evaluating the question "does God answer prayers?", you have to take into account (a) the times you prayed and your prayer was answered, (b) the times you prayed and the prayer wasn't answered, (c) the times you didn't pray and you still got positive results, and (d) the times you didn't pray and you got negative results. Hard to evaluate, right?
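The point is easy to make concrete: judging covariation requires all four cells of the 2x2 table, and an apparent association can vanish once the ignored cells are counted. The counts here are invented:

```python
import math

# 2x2 table: (a) prayed/answered, (b) prayed/not,
#            (c) no prayer/good outcome, (d) no prayer/bad outcome
a, b, c, d = 30, 10, 60, 20

# Phi coefficient: association between praying and a good outcome.
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(round(phi, 3))  # 0.0 -- good outcomes occur 75% of the time either way
```

Someone attending only to cell (a) sees 30 answered prayers and concludes prayer works; the full table shows no association at all.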

The illusion of control
Even in situations where the actor has absolutely no control (e.g. rolling dice), he still behaves as if he has some control over the situation, resulting in all sorts of odd behaviors.

Test results are what you think they are
This was one of my favorites, and is basically what it says. In many instances, psychologists interpret what they want to see. The authors look at the example of how Rorschach blot tests were used to evaluate whether patients were homosexual or not.

Probabilistic reasoning in clinical medicine: Problems and opportunities
Oftentimes, physicians don't have proper training in probability and aren't using diagnostic tests appropriately. False positives and false negatives should be taken seriously, and understanding what diagnostic results mean is vital to the recommendations physicians make.

Learning from experience and suboptimal rules in decision making

Overconfidence in case-study judgments
The more information decision-makers have, the more confident they are in their decisions. But this isn't reflective of the actual accuracy of their predictions.

A progress report on the training of probability assessors

Calibration of probabilities: The state of the art to 1980
How do you tell how good someone is at making predictions? "If a person assesses the probability of a proposition being true as .7 and later finds that the proposition is false, that in itself does not invalidate the assessment. However, if a judge assigns .7 to 10,000 independent propositions, only 25 of which subsequently are found to be true, there is something wrong with these assessments."
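That quoted test can be run mechanically: bucket the claims made at a given probability and compare against the realized frequency. A toy version with invented data:

```python
# 100 claims, each asserted with probability .7; only 25 turn out true.
assessments = [(0.70, True)] * 25 + [(0.70, False)] * 75

hits = sum(outcome for p, outcome in assessments)
print(f"claimed 0.70, observed {hits / len(assessments):.2f}")
```

A well-calibrated judge would see roughly 70 of those 100 claims come true; observing 25 signals something badly wrong with the assessments.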

For those condemned to study the past: Heuristics and biases in hindsight
The idea that we can learn from the past is in some aspects overrated. We tend to focus on salient details rather than the ordinary ones, and string them together in causal diagrams. We also tend to view the past with the foreknowledge of how it will end. In real decisions in the present, we do not have that luxury. "Inevitably we are all captives of our present personal perspective. We know things that those living in the past did not... Historians do 'play new tricks on the dead in every generation.'"

Evaluation of compound probabilities in sequential choice
Humans are really bad at compound probabilities: probabilities built up from sequences of events.

Conservatism in human information processing
Bayes' theorem gives the user updated probabilities based on new information. But the updates prescribed by Bayes' theorem are much larger than the ones humans actually make; humans are much more conservative, probably in part because we are reluctant to commit to probabilities above 90%. That's why odds are often easier to interpret than probabilities.
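In odds form the Bayesian update is just repeated multiplication by a likelihood ratio, which makes the contrast with conservative human updating easy to see. The likelihood ratio here is invented:

```python
# Bayes' theorem in odds form: multiply prior odds by each likelihood ratio.
prior_odds = 1.0          # 50/50 before any evidence
likelihood_ratio = 3.0    # assume each observation favors H by 3:1

posterior_odds = prior_odds
for _ in range(4):        # four independent pieces of evidence
    posterior_odds *= likelihood_ratio

prob = posterior_odds / (1 + posterior_odds)
print(round(prob, 3))
```

Four modest 3:1 observations already push the probability past 0.98 (odds of 81:1), well beyond where most people are willing to go.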

The best-guess hypothesis in multi-stage inference
When making multi-stage inferences, humans tend to use the best-guess hypothesis: make your best guess, and pretend it's actually 100% true when taking action, rather than taking into account other possibilities that still might exist.

Inferences of personal characteristics on the basis of information retrieved from one's memory
When making decisions based on memory, the user should take into account (1) diagnostic value of the information available and (2) the reliability of the information available. Humans tend to ignore the latter.

The robust beauty of improper linear models in decision making
This was an interesting topic. Let's say we're evaluating student applications for graduate school. You could use a regression model that takes into account GRE scores, GPA, etc., or you could use human evaluators. It turns out that even an imperfect model (one with improperly chosen weights) will still do better than the humans. Humans are still useful, though, because their intuition about which factors actually matter is usually pretty good.
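A minimal "improper" model in the spirit of that chapter is just a sum of standardized predictors with equal weights. The applicant numbers here are invented:

```python
# Equal-weight linear model over standardized predictors.
def standardize(xs):
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / sd for x in xs]

gre = [310, 325, 300, 335]
gpa = [3.2, 3.8, 3.0, 3.6]

# "Improper" weights: every standardized predictor counts equally.
scores = [g + p for g, p in zip(standardize(gre), standardize(gpa))]
best = max(range(len(scores)), key=scores.__getitem__)
print(f"top applicant: #{best}")
```

The human contribution is picking which predictors belong on the list; once they're chosen, even crude equal weights tend to beat holistic judgment.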

The vitality of mythical numbers
Another good one: it looks at how humans can be overconfident in quick calculations. The author examines one back-of-the-envelope calculation of how much stolen property in NYC is attributable to heroin addicts. The numbers sound plausible, perhaps to a newspaper reporter, but they are terrible, and can be countered by starting from different data to arrive at different conclusions.

Intuitive prediction: Biases and corrective procedures

Debiasing

Improving inductive inference

Facts versus fears: Understanding perceived risk

On the study of statistical intuitions

Variants of uncertainty
Karl
408 reviews · 67 followers
September 7, 2017
Almost everything is also in Thinking Fast and Slow.

If you do not like TFaS, reading this will give you the same info.
257 reviews · 2 followers
December 26, 2020
While the title of this book may sound intimidating, it provides useful information whether or not the reader has a math degree (which I do but don’t remember much of!). The authors do a nice job of explaining terms and concepts in ways that are easy to understand. For instance, they discuss the following:

-- False consensus or egocentric attribution bias: “people’s tendency to perceive a ‘false consensus’—to see their own behavioral choices and judgments as relatively common and appropriate to existing circumstances while viewing alternative responses as uncommon, deviant, and inappropriate” (p. 140).
-- “… it can clearly be concluded that a psychologist’s increasing feelings of confidence as he works through a case are not a sure sign of increasing accuracy for his conclusions. So-called clinical validation, based on the personal feelings of confidence of the clinician, is not adequate evidence for the validity of clinical judgment in diagnosing or predicting human behavior” (p. 293).
-- “two kinds of ‘goodness’ in probability assessments: normative goodness, which reflects the degree to which assessments express the assessor’s true beliefs and conform to the axioms of probability theory, and substantive goodness, which reflects the amount of knowledge of the topic area contained in the assessments” (p. 306).
-- “Benson has identified four reasons for studying the past: to entertain, to create a group (or national) identity, to reveal the extent of human possibility, and to develop systematic knowledge about our world, knowledge that may eventually improve our ability to predict and control” (p. 335).
-- “… overconfidence was reduced by having respondents list reasons why their preferred answer might be wrong… Without the specific prompting to ‘consider why you might be wrong,’ people seem to be insufficiently critical or even intent on justifying their initial answer” (p. 438).
An Te
386 reviews · 25 followers
July 12, 2020
A compendium of contributions from psychologists on cognitive biases and heuristics. This is a technical book, yet the range covered is vast. If you're an academic who has even heard of the field, give it a read (at least browse the contents page). You may wish to read Kahneman's 'Thinking fast and slow' before approaching this book.
119 reviews · 11 followers
June 5, 2017
If, like me, you're a layperson interested in decision-making, read Thinking, Fast and Slow, where Dr. Kahneman crystallizes much of what is in this book into something far more accessible. The book is full of interesting data, but was obviously written for a much more technical audience.
November 7, 2020
By far the most illuminating book I have read in my life. I recommend it to anyone who wants to see not only the extent we know about the heuristics of our minds but also how judgment theories are tested scientifically.
Demma Be
33 reviews · 8 followers
December 29, 2017
Such a novel research work, love it! Kahneman also put some of these experimental results into his major book, "Thinking, Fast and Slow".
Sabrina Birowo
8 reviews · 2 followers
April 5, 2018
Thank you strategy class for providing me with a thought-provoking list of books (this one included).
Joe Hightower
40 reviews
January 9, 2021
A wealth of information on the subject. Looking forward to a subsequent volume that adds 20 years of additional research.
Carter
597 reviews
October 6, 2021
Some old book I read as a preteen. I used to be fairly interested in AI topics when I was younger.
59 reviews · 2 followers
November 21, 2022
A book every teacher should read. Question everything and read data.
Peter Sandwall
147 reviews · 1 follower
December 30, 2020
Inspired to read some of the source material for "Thinking, Fast and Slow." Insightful gems within, although I believe rereading his popular book may have been a better use of time.
Jeff Cliff
209 reviews · 8 followers
June 9, 2019
While I agree with Sam that there's good stuff in here, little needles in a bed of hay, and think that this academic work could easily have been compressed into a work half its size, it was also written in the '80s, and things *were* different back then. I have some memory of the decade this was written in, and perhaps a greater appreciation for the difficulty of even getting as far as this book does.

Anyway.

This book inspired a couple of blog posts. In it I found:

* A new bias for me: false consensus bias.
* Though they don't really talk about it that way, they dance all around the idea, since they speak so much about subjective probability, randomness and uncertainty. Especially in the last chapter, where they discuss different ways of looking at uncertainty, the concept of 'subjective randomness' came to me as something potentially valuable (random TO YOU). This concept immediately seemed applicable to tying together three things in my head: the likelihood of free will of inanimate objects like point particles, when to defy data, and the likelihood of free will for an observer like you, the reader.
* A bias that I haven't seen anyone else point out, "Shit has Happened even when I was more constipated than this and I had to push much harder", came out while I was thinking of this stuff.
* ...and another one: toplevel bias.

And in general added a good couple of new biases to the list for me to watch out for.

There are lots of warnings about the many ways that availability bias in particular can manifest. For instance, if you're watching a lot of Fox News, with its many discussions of terrorism, you will start to believe in an exaggerated impact of terrorism on your life. Likewise, it also points out that the causality goes the other way: the sources of news are likely to eventually cater to what people want to see (low-risk, high-impact events like terrorism) at the cost of things that are actually likely to kill people, or at the cost of biasing towards negative outcomes. For more on each of these, see Adam Curtis' "The Power of Nightmares", "The Rise and Fall of the TV Journalist", and "Oh Dearism".

Also, it turns out, we will believe random data supports our hypothesis if we're not looking out for it, especially when we're reasoning about uncertain things. This, among other things, is why it turns out to be a good book to read alongside Karl Popper's The Open Society and Its Enemies. Popper is trying to conceive of how to think about the margin between economics and psychological models of behaviour; so, too, does this book, sometimes openly and explicitly, other times implicitly. Reading the two together gives a broader context for the contents of this book, and this book gives a kind of empirical backing to Popper's claims, right up to and including a criticism of Plato's account of Socrates and his method near the end.

There's a lot more here than I'm letting on, but if you want the whole picture you'll have to read this one yourself. I wouldn't classify it as a 'must read': the lessons in it are finite, and the little parts of Ariadne's golden thread it contains can be catalogued, and no doubt have been by serious researchers. But it's one I definitely think I'll be coming back to in the future.
Bria
859 reviews · 71 followers
June 26, 2022
This book was not very useful in helping me become more comfortable in everyday human situations. I already fall for the literal meaning of people's words any time I'm not concentrating very hard on not being a pedant, particularly when someone says they are "99%" or "100% certain" of something. Dwelling on the gap between colloquial meanings of numbers and probabilities and their technical definitions for several weeks did not push me toward being any more tolerable of a human companion. If you are aiming to just be sort of agreeable in casual conversation and maintain a comfortable status quo, this is not the book for you. If you are somewhat perturbed by the recognition of your own failure to intuitively feel the distinction between very small or very large numbers as you do completely inconsequential things like the distinction between different moods and voice inflections in other people, then this book will probably pique that perturbation.
Pi
21 reviews · 7 followers
July 28, 2016
Great technical read for anybody starting in psychology. Although the book is relatively old, it introduces topics studied today in university courses on cognitive psychology and decision making. In hindsight, one is tempted to see some of the presented results and conclusions as more-or-less intuitive and mostly common knowledge by now, but this collection of articles organizes and discusses the available information in an accessible and methodical manner that facilitates deeper understanding. At the time of writing, it basically paved the way towards much of the psychological research on probabilistic reasoning, heuristics and biases in the '80s and '90s, as well as some of the later works by Kahneman and Tversky, including the popular book Thinking, Fast and Slow.
Usman
30 reviews · 5 followers
July 6, 2012

Very insightful book. It shows deviations from mathematical thinking and leans towards descriptive research. It is self contained and accessible. Recommended to people who want a strong grasp on decision sciences.
Dave Peticolas
1,377 reviews · 42 followers
October 8, 2014

A collection of research papers investigating the ways the human mind estimates probability. A lot of intriguing material, but also lots of dry writing. I must confess, I did a lot of skimming.

Sam Jaques
7 reviews · 5 followers
August 14, 2013
Really dry, and I was worried that most of the data is now outdated. Some of it was really interesting, and some might even be useful, but it was a heck of a slog just for those bits.
