- Lautaro Cabrera

Dunning, D. (2012). Judgment and decision making. In Fiske, S. T., & Macrae, C. N. (Eds.), The SAGE handbook of social cognition (pp. 251–272). Los Angeles, Calif.: Sage.
Suppose one were asked whether there are more 7-letter words in the English language that have the form "- - - - - n -" or the form "- - - - i n g". Most people "know" the answer within seconds, and know it without a comprehensive review of the closest Webster's Dictionary. They merely sit back and see if they can generate words with an "- n -" ending or an "-ing" one. For most people, it is the latter that are more easily generated than the former, and so they conclude that "-ing" words are more numerous (Tversky & Kahneman, 1973). But they are necessarily wrong. Stare at the "- n -" form a little longer, and one realizes that every "-ing" word also fits the "- n -" form. Moreover, many words that do not end in "-ing" (present, benzene) fit "- n -", so "- n -" words must be more common than "-ing" ones.
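The subset argument can be checked mechanically: any seven-letter word ending in "ing" necessarily has "n" as its sixth letter. Below is a minimal Python sketch over a small illustrative word list (a real comparison would need a full dictionary):

```python
import re

# A handful of 7-letter words for illustration; not a dictionary census.
words = ["bowling", "present", "benzene", "calling", "jumping", "farming"]

# "- - - - - n -": seven letters with "n" in the sixth position.
n_sixth = {w for w in words if re.fullmatch(r"[a-z]{5}n[a-z]", w)}

# "- - - - i n g": seven letters ending in "ing".
ing = {w for w in words if w.endswith("ing")}

# Every "-ing" word also fits the "- n -" pattern, so the "- n -" set
# can never be smaller than the "-ing" set.
assert ing <= n_sixth
print(sorted(n_sixth - ing))  # words such as 'benzene' and 'present' fit only "- n -"
```

Whatever the word list, the inclusion guarantees the "- n -" count is at least the "-ing" count.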
The quick-and-crude rule of thumb that produces this error was termed the availability heuristic by Kahneman and Tversky (1973): people judge something as more likely or true to the extent that it (or examples of it) can easily be brought to mind. The heuristic might be a good rule of thumb, but it can lead to systematic mistakes in belief. For example, people believe that homicides are more frequent than suicides, which stands to reason given how often the former is in the news relative to the latter, but the truth is actually the opposite. People also overestimate the prevalence of lethal risks such as car accidents, fire, and drowning, in part because these risks are made available in the news, but not less visible risks such as hepatitis, diabetes, and breast cancer (Lichtenstein, Slovic, Fischhoff, Layman, & Combs, 1978).

Availability bias

Students estimated death by accident as being more likely than death by stroke. On the contrary, studies show that stroke causes more deaths than the sum of all accidents.
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237–251.

Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M., & Combs, B. (1978). Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory, 4, 551–578.
People interpret an event as more likely to happen, or more likely to be true, based on how easily they can recall instances of it. Although a useful tool for making quick judgments, it can also lead to mistakes.
The term was first described by Tversky & Kahneman (1973), who conducted a series of ten different studies to demonstrate the different shapes the bias can take.

[Diagram: the ten study areas: Word Construction; Concept Retrieval; Word Frequency; Combinations; Permutations; Extrapolation; Binomial; Fame, Frequency & Recall; Word Pairs Illusory Correlation; Personality Traits Illusory Correlation]

Even when we consciously try to avoid biases and attempt to base our judgments on rational or analytical processes, the availability heuristic creeps in unintentionally.

Jacoby et al. (1989) had participants read a list of names that included slightly famous people and nonfamous people. When reading a different list that included some of the names from the first list plus a set of new ones, participants judged some of the nonfamous names from the first list as famous. This is explained through the availability bias: participants believed that certain names were famous because they were easy to recall.

Schwarz et al. (1991) demonstrated that the availability heuristic's effect on judgment does not depend on the amount of instances of an event we can recall, but on whether remembering those times is easy or hard. Participants in this study were first asked to recall either six or twelve times when they had acted assertively. Later, they were asked to rate how assertive they believed themselves to be, and the answers from both groups were compared. The group who had to remember only six situations was able to complete the task easily, and later evaluated themselves as more assertive. The rest of the participants found it very difficult to recall twelve instances and concluded that if it took so much effort, they mustn't be so assertive.

The accident-versus-stroke statistic can be explained, partially, by the average exposure to accidents through the media. Think of how many news stories people hear in a day about car, train, and plane accidents, compared to the number of headlines that say: "Grandpa dies of stroke."
Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61(2), 195–202.
Representativeness Heuristic

Doctors & nurses use it
People make judgments about specific examples based on comparison with a mental prototype. In short, the prototype serves as the standard against which the specific patient in question is judged. Even though each case presented with salient physical symptoms, participants who received contextual information (e.g., the presence of a job loss) were more likely to dismiss the patient's physical symptoms in favor of a less serious situational explanation. Thus, the nurses in the study tended to overlook potentially more serious diagnoses in favor of the less serious ones made available through the representativeness heuristic.
Brannon, L. A., &
Carson, K. L. (2003).
The representativeness
heuristic: Influence on
nurses’ decision making.
Applied Nursing Research,
16(3), 201–204.
Try the heuristic yourself
Let's say there is a room with a bunch of college students. Thirty percent of them are majoring in Engineering, while seventy percent have opted for a very different career: Theater. One of the students in the room is described as a very strict person who keeps his room tidy, likes to play videogames, and wears glasses. If you were to guess which major this student belongs to, what would you say?

There are two possible routes to the answer: the representativeness heuristic and mathematical logic. If we were to reason in mathematical terms, it is more likely that the student is a theater major, because that option has a .7 probability against engineering's .3. However, our idea of a typical engineering student resembles the description much better than our idea of a typical theater student. Therefore, we ignore the statistics and guess based on the comparison.
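The trade-off between the base rate and the stereotype can be made explicit with Bayes' rule. The likelihood numbers below are invented purely for illustration (the example gives none); a minimal sketch in Python:

```python
# Base rates in the room, from the example above.
p_eng, p_theater = 0.30, 0.70

# Hypothetical likelihoods: how well the "strict, tidy, gamer, glasses"
# description fits each major. These numbers are assumptions, not data.
p_desc_given_eng = 0.60
p_desc_given_theater = 0.10

# Bayes' rule: P(engineering | description).
p_desc = p_desc_given_eng * p_eng + p_desc_given_theater * p_theater
p_eng_given_desc = p_desc_given_eng * p_eng / p_desc

print(round(p_eng_given_desc, 2))  # 0.72 under these assumed likelihoods
```

If the description were equally likely for both majors, the posterior would stay at the .3 base rate, which is why "theater" is the statistically safer guess; the heuristic error is acting as if the likelihood ratio were overwhelming without weighing it against the base rate.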
Try it on your kids
Kids from the third, fifth, and seventh grade participated in a study investigating the use of the representativeness heuristic across different ages. The task consisted of a set of problems asking the child to choose which option was more likely to occur among a typical class (something usual, common, or regular), an atypical class (something out of the ordinary), and an inclusive class containing both the typical and the atypical class. The first problem is familiar: "On the beach in summer, are there more women, more tanned women, or more pale women?"

Logically, the answer is women (the inclusive category), because it includes both pale (atypical) and tanned (typical) women. However, a tanned woman is representative of a woman at the beach during summer, so kids very often chose the tanned women instead.
We tend to believe that an outcome is likely to occur based on a comparison with a mental prototype.

"The probability that an outcome will occur depends on its sheer frequency. Common events happen commonly; rare events only seldomly. Thus, when predicting whether an event will occur, we should consult simply how frequent or probable it is." (Dunning, 2012)

Agnoli, F. (1991). Development of judgmental heuristics and logical reasoning: Training counteracts the representativeness heuristic. Cognitive Development, 6(2), 195–217.

Kahneman, D., & Tversky, A. (1972). Subjective probability: A judgment of representativeness. Cognitive Psychology, 3(3), 430–454.
Kahneman & Tversky (1972) first defined the representativeness heuristic and investigated its effects through a series of experiments. One of the most common errors people made was the misrepresentation of randomness. We often think that random arrangements will give a regular yet not perfect outcome, so we do not interpret results that are skewed one way, or perfectly balanced, as being random. However, mathematics and logic contradict this representation, making the heuristic useless in these cases.
People estimate a series of specific events as more likely to happen than a single, more inclusive description of the same event.
Unpacking
Murder
Rottenstreich, Y., & Tversky, A. (1997). Unpacking, repacking, and anchoring: Advances in support theory. Psychological Review, 104(2), 406–415.
Murder vs. accidental death: estimated probability of murder = 0.20
Murder by stranger or by acquaintance vs. accidental death: estimated probability = 0.25
Kruger, J., & Evans, M. (2004). If you don't want to be late, enumerate: Unpacking reduces the planning fallacy. Journal of Experimental Social Psychology, 40(5).

Biswas, D., Keller, L. R., & Burman, B. (2012). Making probability judgments of future product failures: The role of mental unpacking. Journal of Consumer Psychology, 22(2), 237–248.

Sloman, S., Wisniewski, E., Rottenstreich, Y., Hadjichristidis, C., & Fox, C. R. (2004). Typical versus atypical unpacking and superadditive probability judgment. Journal of Experimental Psychology: Learning, Memory & Cognition, 30(3), 573–582.

Redden, J. P., & Frederick, S. (2011). Unpacking unpacking. Journal of Experimental Psychology: General, 140(2), 159–167.

Estimates of holiday shopping duration (Kruger & Evans, 2004)
            Days    Hours    End Date    Dollars
Packed      5.20    13.22    12/19       $224
Unpacked    7.29    25.92    12/21       $244

Probability judgment of car failure (Biswas, Keller, & Burman, 2012)
Packed:                27.85%
Unpacked, 4 reasons:   37.10%
Unpacked, 12 reasons:  25.24%

Estimates with well-defined categories (Sloman et al., 2004)
Packed:              65%
Unpacked, typical:   63.8%
Unpacked, atypical:  58.9%

Estimates with ambiguous categories (Sloman et al., 2004)
Packed:              60.4%
Unpacked, typical:   65.0%
Unpacked, atypical:  46.4%

Gamble preference (Redden & Frederick, 2011)
Even #:     5.4
2, 4 or 6:  5.0
1, 4 or 6:  4.8
Rottenstreich & Tversky (1997) conducted a study in which college students judged the probable cause of a particular death. When asked to decide whether it was a murder or an accidental death, they estimated a 1 in 5 probability of murder. However, when they were asked to compare the probability of the death being accidental with that of it being either a murder by a stranger or a murder by an acquaintance, the estimated probability of murder rose to 1 in 4. The unpacking of the different instances that "murder" includes produced this effect.
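Normatively, the packed and unpacked questions describe the very same event, so the two estimates should coincide; the gap between them is the unpacking effect. A trivial sketch of that constraint using the reported figures:

```python
# Judged probabilities from Rottenstreich & Tversky (1997), as reported above.
packed = 0.20    # P(murder), asked as a single category
unpacked = 0.25  # P(murder by stranger or by acquaintance)

# Since murder = (murder by stranger) union (murder by acquaintance),
# a coherent judge would give identical answers; the difference is the effect.
unpacking_effect = unpacked - packed
print(round(unpacking_effect, 2))  # 0.05
```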
During the month of November, students were asked to predict several factors regarding their Christmas shopping. One group was asked to list every person they would have to get a gift for (unpacked), while the other did not receive such an instruction (packed). On average, participants in the first group expected their holiday shopping to take 40% more days, 96% more hours, and 20 more dollars than did subjects in the packed condition. They also expected to be done with the task 4 days before Christmas, compared with the 6 days participants in the packed condition expected to have to spare (Kruger & Evans, 2004).
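The percentages quoted above can be recomputed from the figures in the Kruger & Evans table; a quick check in Python:

```python
# Mean estimates from the packed vs. unpacked conditions (Kruger & Evans, 2004).
packed = {"days": 5.20, "hours": 13.22, "dollars": 224}
unpacked = {"days": 7.29, "hours": 25.92, "dollars": 244}

# Percentage increases implied by the two conditions.
pct_more_days = (unpacked["days"] - packed["days"]) / packed["days"] * 100
pct_more_hours = (unpacked["hours"] - packed["hours"]) / packed["hours"] * 100
extra_dollars = unpacked["dollars"] - packed["dollars"]

print(round(pct_more_days), round(pct_more_hours), extra_dollars)  # 40 96 20
```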
Just as the effects of accessibility depended on the degree of difficulty perceived by the subject, the effects of unpacking are also susceptible to such appraisal. When the participants in a study were asked to list the reasons why their car could have starting problems (unpacking), they judged the probability of failure as higher only when the unpacking was made easy by requesting just 4 reasons. When they were asked to list 12 instances, participants assumed that if it was so hard to come up with reasons, then it mustn't be so plausible, so their judgments of probability were lower (Biswas, Keller, & Burman, 2012).
Not every type of unpacking increases the probability judgments of the assessed events. Participants in a study were asked to estimate the probability of events unpacked with either a typical example or an atypical one. For example, some were asked about the probability of New Yorkers vacationing in Hawaii, Jamaica, or any other island; others had Japan and Ireland instead. Notice that "any other island" was present in both cases, so the probabilities should technically be the same. Subjects' estimates were lower in the atypically unpacked condition than in the control or packed condition ("New Yorkers vacationing in any island"). The effects were even larger if the category was ambiguous or fuzzy, such as "mammals that can hold their breath" with a whale as the typical example and a weasel as the atypical instance (Sloman et al., 2004).
There is also another case in which unpacking actually decreases the probability estimates of the event. Participants were asked to choose between a sure amount of money and a proposed bet in a series of six different studies (Redden & Frederick, 2011). When the unpacking resulted in a more complex description of the event, participants estimated that the probability of winning the bet was lower and preferred the sure earnings more often. For example, subjects were offered a gamble on the throw of a die, winning by getting an even number; or 2, 4, or 6; or 1, 4, or 6. Notice that all three descriptions had a 50% chance of winning. However, participants chose to gamble under the simple description (even numbers) more often than under the other options.
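The equivalence of the three bets can be verified by simple enumeration over the six faces of a fair die; a minimal sketch:

```python
from fractions import Fraction

die = range(1, 7)

def win_prob(winning):
    """Probability that one roll of a fair die lands in the winning set."""
    return Fraction(sum(1 for face in die if face in winning), 6)

even = {face for face in die if face % 2 == 0}  # "an even number"
explicit = {2, 4, 6}                            # "2, 4, or 6"
mixed = {1, 4, 6}                               # "1, 4, or 6"

# All three descriptions cover three of six faces: the same 1/2 chance.
assert win_prob(even) == win_prob(explicit) == win_prob(mixed) == Fraction(1, 2)
```

The only thing that differs across the bets is how the winning set is described, which is exactly the unpacking manipulation.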
When striving to
determine whether
some conclusion
is true, people are
biased in their
search for information. They tend
to favor information that confirms
that conclusion
over information
that would disconfirm or contradict
it. For example, if
someone asks me if
people are likely to
get taller over the
next few centuries, I
am likely to grope
around for facts and
theories that suggest
that, yes, people will
get taller. However,
if someone asks me
if people are likely
to get shorter, my
search for information and argument
shifts in the opposite direction.
[Graphic: the stylized word "fiilippiq7"]
One way to describe this confirmation bias is that people look for positive matches between the conclusion they are considering and the information they search for (Wason, 1960). The conclusion can come from many different sources. People seem biased to consider, and then confirm, conclusions that they favor over those they dislike (Hoch, 1985; Pyszczynski & Greenberg, 1987; Taber & Lodge, 2006). People tend more to confirm conclusions that fit their expectations (e.g., the sun will rise in the east tomorrow) than those they consider less plausible (Nickerson, 1998). Even the way a question is posed will suggest a conclusion, and thus the direction in which people will seek out information (Snyder & Swann, 1978). For example, participants who were asked to judge whether they were happy with their social life tended to bring to mind positive social experiences, and ended up being much more bullish on their social life than those asked whether they were unhappy with their social life (Kunda, Fong, Sanitioso, & Reber, 1993).
Confirmation Bias
Confirmation bias can lead to perverse conclusions, with people coming to different decisions based on the way they frame the question in front of them. Suppose that the decision being considered is which parent should be granted custody of a child, with Parent A unremarkable in a remarkable number of ways, but Parent B being an individual with some real strengths and obvious weaknesses as a parent. When participants were asked in one study which parent should be given custody of the child, they tended to go with Parent B. But when asked, instead, which parent should be denied custody, they chose to deny Parent B custody. Apparently, the strengths that suggested good parenting skills under the first frame of the question were ignored under the second frame in favor of the shortcomings that weakened Parent B's case (Shafir, 1993).
The timing of when people encounter information can also influence what gets chosen. Across several studies, Russo and colleagues have discovered that people form tentative conclusions about the options they favor when making a choice. And once one option nudges ahead in favoritism, confirmation bias seals its ultimate selection (Russo, Medvec, & Meloy, 1996; Russo, Meloy, & Medvec, 1998), a tendency observed among professional auditors, for example, deciding which firm should receive an on-site review (Russo, Meloy, & Wilks, 2000). This tendency for one option to nose ahead in the horse race can also lead to perverse decisions. People will choose an inferior option over a superior one if the first piece of information they receive about the two options just happens to favor the inferior choice. Once it is ahead in the horse race, confirmation bias speeds its selection, even though it is not the optimal selection to make.
Consider the word in the graphic above. If you were asked "can you read 'fulirrioz' in this graphic?", you would look for information confirming that theory and find it pretty easily, leading you to answer the question positively. However, if the letter sequence to be confirmed were 'fiilippioz', you would seek information confirming that theory instead and end up with a different decision. The real word behind the mask is 'fiilippiq7'.
Hoch, S. J. (1985). Counterfactual reasoning and accuracy in predicting personal events. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11, 719–731.
Kunda, Z., Fong, G. T., Sanitioso, R., &
Reber, E. (1993). Directional questions direct
self-conceptions. Journal of Experimental
Social Psychology, 29, 63–86.
Nickerson, R. S. (1998). Confirmation bias:
A ubiquitous phenomenon in many guises.
Review of General Psychology, 2, 175–220.
Pyszczynski, T., & Greenberg, J. (1987).
Toward an integration of cognitive and
motivational perspectives on social inference:
A biased hypothesis-testing model. In L.
Berkowitz (Ed.), Advances in experimental
social psychology (Vol. 20, pp. 297–340).
New York: Academic Press.
Russo, J. E., Medvec, V. H., & Meloy, M. G.
(1996). The distortion of information during
decisions. Organizational Behavior and Human Decision Processes, 66, 102–110.
Russo, J. E., Meloy, M. G., & Medvec, V. H.
(1998). Predecisional distortion of product
information. Journal of Marketing Research,
35, 438–452.
Russo, J. E., Meloy, M. G., & Wilks, T. J.
(2000). Predecisional distortion of information by auditors and salespersons. Management Science, 46, 13–27.
Shafir, E. (1993). Choosing versus rejecting: Why some options are both better and worse than others. Memory and Cognition, 21, 546–556.
Snyder, M., & Swann, W. B. (1978).
Hypothesis-testing in social interaction.
Journal of Personality and Social Psychology,
36, 1202–1212.
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50, 755–769.
Wason, P. (1960). On the failure to eliminate
hypotheses in a conceptual task. Quarterly
Journal of Experimental Psychology,
12, 129–140.