This article was originally published in Brain Mapping: An Encyclopedic
Reference, published by Elsevier, and the attached copy is provided by
Elsevier for the author's benefit and for the benefit of the author's institution,
for non-commercial research and educational use including without limitation
use in instruction at your institution, sending it to specific colleagues who you
know, and providing a copy to your institution’s administrator.
All other uses, reproduction and distribution, including without limitation
commercial reprints, selling or licensing copies or access, or posting on open
internet sites, your personal or institution’s website or repository, are
prohibited. For exceptions, permission may be sought for such use through
Elsevier's permissions site at:
http://www.elsevier.com/locate/permissionusematerial
Hsu M., and Preuschoff K. (2015). Uncertainty. In: Arthur W. Toga (Ed.), Brain
Mapping: An Encyclopedic Reference, vol. 3, pp. 391–399. Academic Press:
Elsevier. http://dx.doi.org/10.1016/B978-0-12-397025-1.00260-8
Author's personal copy
Uncertainty
M Hsu, University of California, Berkeley, CA, USA
K Preuschoff, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
© 2015 Elsevier Inc. All rights reserved.
Glossary
Ambiguity Situations where probabilities are based on
meager or conflicting evidence, where important
information is clearly missing.
Functional magnetic resonance imaging Noninvasive
technique of measuring brain activity by detecting
associated changes in blood flow.
Reinforcement learning Class of machine learning
algorithms involving trial-and-error learning, inspired by
animal learning studies in behavioral psychology.
Risk Situations, such as roulette games or lotteries, where
probabilities are known with precision.
Uncertainty A state of having limited knowledge regarding
future outcomes. Here used to encompass both risk and
ambiguity.
Introduction
Uncertainty is pervasive in the world (Kahneman & Tversky,
1979; Savage, 1972; Taylor-Gooby & Zinn, 2006). The purchase of a lottery ticket, for example, entails obtaining a small
but nonzero chance of winning a large jackpot, at the cost of a
certain loss in the form of the price of the ticket. Moreover, the
attractiveness of two lottery tickets with the same cost will
likely center on two factors – the size of the prize and the
likelihood of winning. Most will agree, for example, that a lottery
that pays $1000 with an 80% chance (and $0 otherwise) is worth
more than one paying $500 with a 40% chance (Bossaerts,
Preuschoff, & Hsu, 2008; Rangel, Camerer, & Montague,
2008; Savage, 1972; Tversky & Kahneman, 1992).
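This comparison can be made concrete with the standard expected-value formula; the function below is an illustrative sketch, not code from any of the studies cited:

```python
def expected_value(prize, p):
    # Two-outcome lottery: win `prize` with probability p, $0 otherwise.
    return p * prize

# $1000 at 80% vs. $500 at 40%: the first is worth four times as much
# in expectation.
ev_a = expected_value(1000, 0.80)  # ~800
ev_b = expected_value(500, 0.40)   # ~200
```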
In natural environments, organisms often face a similar
dilemma in choosing between uncertain large gains and safe
smaller rewards (Krebs & Davies, 2009; Rangel et al., 2008;
Savage, 1972; Sugrue, Corrado, & Newsome, 2005; Tversky &
Fox, 1995). As rewards and punishments present in the environment are rarely known for certain, the brain must be able to
sense, respond to, and, if necessary, resolve uncertainty in ways
that allow the organism to obtain rewards and avoid punishments (Bossaerts et al., 2008; Maia & Frank, 2011; Montague,
King-Casas, & Cohen, 2006; Schultz, 2006; Schultz, Dayan, &
Montague, 1997).
We have long known that organisms are sensitive to uncertainty, as documented by a wealth of research studies spanning
many research areas, from behavioral economics (Camerer,
1989; Kahneman & Tversky, 1979; Savage, 1972) to ecology
(Krebs & Davies, 2009; O’Connell & Hofmann, 2011). But it
was not until about two decades ago that our knowledge of the
neural basis of decision-making under uncertainty began to be
transformed by a multidisciplinary effort that combined simple gambling games such as the lotteries described earlier
(Bossaerts et al., 2008; Camerer, 1989; Knutson et al., 2008;
Kuhnen & Knutson, 2005; Rangel et al., 2008) with neuroscientific methods, including functional magnetic resonance
imaging (fMRI) (Kuhnen & Knutson, 2005; Preuschoff,
Bossaerts, & Quartz, 2006), electrophysiological recordings
(Fiorillo, Tobler, & Schultz, 2003; Ogawa et al., 2013), and
lesion studies (Camille et al., 2004; Hsu et al., 2005) in
humans, nonhuman primates (Schultz et al., 1997; Sugrue,
Corrado, & Newsome, 2004), and rodents (Jones et al., 2012;
Kiani & Shadlen, 2009).
The resulting interdisciplinary paradigms are highly suited
for formal mathematical models of uncertainty from economics and control theory. As such, they enable us to quantitatively
characterize neural representations of key constructs hypothesized to underlie uncertainty (Camerer, 1989; Markowitz,
1952; Savage, 1972). On the one hand, the resulting framework provides excellent experimental control with which to
measure and manipulate the uncertainty presented to subjects
and observe their subsequent effects on behavior. On the other
hand, this framework directly links such manipulations and
their behavioral consequences to computations in neural
systems.
We will begin with a discussion of how the brain represents
values in the presence of uncertainty, focusing on the relatively
simple case of uncertainty over outcomes when probabilities
can be assigned. We then address the more complex case of
how the brain processes uncertainty when probabilities themselves are uncertain, so-called ‘ambiguity.’ Next, we deal with
other forms of uncertainty and how the brain learns about
uncertainty over time. We conclude with open questions and
exciting new frontiers in this topic.
Neural Representation of Expected Value and Risk
To choose between any prospects, certain or uncertain, an
organism has to first assess their values. An inaccurate assessment can result in costs in the form of overpayment in monetary terms, wasted effort, or, in extreme cases, danger to one’s
own survival. Let us return to our example from the preceding
text: it seems clear that a lottery that pays $1000 with 80%
chance is worth more, that is, it has higher value than one
paying $500 with 40% chance. But how much more valuable is
the first lottery? The difficulty becomes more obvious if we
swap the probabilities: Would you rather have a lottery that
pays $1000 with 40% chance (and $0 otherwise) or one that
pays $500 with 80% chance? We already know that most
people have a strong preference for one lottery over the other
despite the fact that they have equal expected value. But how
does the brain value these two options?
We have compelling evidence that the brain, in particular
the ventral striatum, responds to the expected value of a lottery
(Knutson et al., 2001; McClure, Berns, & Montague, 2003;
Schultz et al., 1997). We also know that the dopaminergic
system plays a crucial role in evaluating the outcome of uncertain gambles: after revealing the outcome of stochastic rewards
to a monkey, dopaminergic midbrain neurons reflect errors in
predicting these rewards (Daw, 2007; Schultz et al., 1997;
Figure 1). These error signals are remarkably similar to those
used in error-based learning algorithms such as the Rescorla–
Wagner rule or more general reinforcement learning algorithms that also extend to complex multiple stimuli–reward
situations (McClure et al., 2003; Montague, Hyman, & Cohen,
2004; O’Doherty et al., 2003).
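As a sketch of how such error-based learning works, here is a minimal Rescorla–Wagner-style update; the learning rate and initial value are arbitrary illustrative choices, not parameters from the studies cited:

```python
def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Track a reward prediction V via V <- V + alpha * (r - V),
    where (r - V) is the prediction error on each trial."""
    v = v0
    history = []
    for r in rewards:
        delta = r - v           # prediction error
        v = v + alpha * delta   # error-based update
        history.append(v)
    return history

# With a reward of 1 on every trial, the prediction climbs toward 1
# and the prediction errors shrink toward zero.
predictions = rescorla_wagner([1] * 50)
```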
Subsequent studies in humans using functional MRI have
linked the midbrain signal to activity in frontostriatal circuits
that receive abundant dopaminergic input (Abler et al., 2006;
Knutson et al., 2001; O’Doherty, 2004; Seymour et al., 2004).
These results, and subsequent works elucidating the specific
computational mechanisms, are now well established and
form the foundation of much of what we know about the
brain and decision-making in general (Glimcher & Fehr,
2013; Schultz et al., 2008).
Now let us return to the problem of uncertainty. An organism that simply makes decisions based on expected value can
exhibit behavior that is quite pathological. One of the first
demonstrations of this is the so-called Saint Petersburg paradox
proposed by Nicholas Bernoulli, ostensibly based on a popular
gambling game then fashionable in Russia (Bernoulli, 1968).
The game involved betting on how many tosses of a coin will be
needed before it first turns up heads, where the player is paid 2^n
dollars if the coin comes up heads on the nth toss. The expected
value of this gamble is (1/2)(2) + (1/4)(4) + (1/8)(8) + ... = 1 + 1 + 1 + ... = ∞
dollars, but when people are asked how much they would be
Figure 1 Reward and risk signals in dopamine neurons. (a) Phasic reward value signal reflecting reward prediction (left) and more sustained risk
signal during the stimulus–reward interval in a single dopamine neuron. Visual stimuli predicting reward probabilities (i) 0.0, (ii) 0.25, (iii) 0.5, (iv) 0.75,
and (v) 1.0 alternated semirandomly between trials. Both rewarded and unrewarded trials are shown at intermediate probabilities; the longer vertical
marks in the rasters indicate delivery of the juice reward. (b) Population histograms of responses shown in (a). Histograms were constructed from every
trial in 35–44 neurons per stimulus type. Both rewarded and unrewarded trials are included at intermediate probabilities. (i) 0.0, (ii) 0.25, (iii) 0.5,
(iv) 0.75, and (v) 1.0. (c) Median sustained risk-related activation of dopamine neurons as a function of reward probability. Plots show the sustained
activation as inverted U function of reward probability, indicating relationship to risk as opposed to value. Data from different stimulus sets and
animals are shown separately. Adapted from Fiorillo, C. D., Tobler, P. N., & Schultz, W. (2003). Discrete coding of reward probability and uncertainty by
dopamine neurons. Science (New York, NY), 299(5614), 1898–1902.
willing to invest in such a gamble, few would pay more than $25
(Martin, 2004). As such, people’s choice behavior departs significantly from simple expected value maximization (Cox &
Sadiraj, 2008).
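The divergence of the expected value is easy to verify numerically. Truncating the infinite sum at a hypothetical maximum number of tosses (an assumption for illustration; the game itself has no cap) shows that each possible toss adds exactly one dollar:

```python
def st_petersburg_ev(max_tosses):
    # Heads first appears on toss n with probability (1/2)^n and pays 2^n,
    # so every term of the sum contributes exactly $1.
    return sum((0.5 ** n) * (2 ** n) for n in range(1, max_tosses + 1))

print(st_petersburg_ev(10))   # 10.0
print(st_petersburg_ev(100))  # 100.0 -- unbounded as max_tosses grows
```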
There are a number of resolutions to this paradox, but the
central insight is that organisms are sensitive not only to the
expected value of an outcome but also to the spread or, more
formally, its risk (Bernoulli, 1968; Savage, 1972). If responses
of the dopaminergic system underpin calculations of expected
reward, does the brain also respond to the ‘spread’ (or risk) of a
gamble? In our second lottery example earlier, the two lotteries
($1000 at 40% and $500 at 80%) only differ in their risk. As
detailed in the succeeding text, it appears that the brain values
risky gambles by evaluating their expected reward and risk
separately (Figure 1). The separate evaluations are then merged
to generate a total valuation signal, detectable in, for example,
the prefrontal cortex (PFC).
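The claim that the swapped lotteries differ only in risk can be checked directly. For a gamble paying a prize with probability p (and $0 otherwise), the mean is p·prize and the variance is p(1−p)·prize²; this is a textbook calculation, not an analysis from the studies cited:

```python
def mean_and_variance(prize, p):
    # Two-outcome gamble: `prize` with probability p, $0 otherwise.
    mean = p * prize
    variance = p * (1 - p) * prize ** 2
    return mean, variance

m1, v1 = mean_and_variance(1000, 0.4)  # mean ~400, variance ~240,000
m2, v2 = mean_and_variance(500, 0.8)   # mean ~400, variance ~40,000
# Equal expected value; the first lottery is simply riskier.
```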
Early evidence for the explicit representation of spread, measured as the variance of the gamble, came, once again, from
electrophysiological studies of the monkey midbrain dopamine system (Fiorillo et al., 2003). In addition to a fast
response that scales monotonically with expected reward
(Figure 1(a)), the researchers also found a slower response
that appeared to scale with the risk (variance) of the gamble
(Figure 1(b) and 1(c)).
This response was later captured in humans using fMRI in a
study using a simple card task (Preuschoff et al., 2006;
Figure 2(a)). Consistent with the notion of explicit representations of outcome uncertainty, activity in a number of brain
regions (Figure 2(c)), including the striatum, midbrain, and
insula, reflected variance (or its square root, standard deviation), in addition to regions that responded to expected reward
(Bossaerts et al., 2008; Preuschoff et al., 2006; Schultz et al.,
2008; Figure 2(c)). An important innovation in all these studies is that they systematically vary outcome probabilities in
order to decouple the relative contributions of the expected magnitude of reward and the variance of reward.
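The decoupling works because, for a fixed prize, expected reward is linear in the reward probability whereas variance follows an inverted-U quadratic peaking at p = 0.5 (the pattern in Figures 1(c) and 2(b)). A quick sketch:

```python
# Expected reward and risk of a $1 gamble across outcome probabilities.
probs = [0.0, 0.25, 0.5, 0.75, 1.0]
expected = [p * 1 for p in probs]     # linear in p
risk = [p * (1 - p) for p in probs]   # 0.0, 0.1875, 0.25, 0.1875, 0.0
# Risk vanishes at the certain ends (p = 0 and p = 1) and peaks at
# p = 0.5, so varying p separates the two regressors.
```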
By now, the evidence of risk encoding in the human and
nonhuman primate brain is overwhelming. Some regions,
such as the insula, appear to encode risk exclusively (Bossaerts
et al., 2008; Preuschoff et al., 2006), a result that is supported
by recent meta-analyses suggesting such separate encoding of
expected reward, variance, and even higher moments such as
skewness (Wu, Sacchet, & Knutson, 2012; Figure 3). In contrast, the anterior cingulate cortex (ACC) and inferior frontal
gyrus appear to reflect a signal combining expected reward and
risk (Critchley, Mathias, & Dolan, 2001; Paulus et al., 2003;
Weber & Huettel, 2008). In particular, in Figure 4(a), PFC
activation appears to increase with expected reward for all
subjects but to decrease with risk for risk-averse subjects and
increase with risk for risk-seeking subjects (Tobler et al.,
2007; Figure 4(b)).
Neural Representation of Ambiguity
The previous section was concerned with decision-making
under risk, that is, situations in which outcome probabilities
are known, as is the case when playing a lottery. However, in
most real-life situations, probabilities are unknown or only
partially known. The latter is referred to in decision theory as
ambiguity (Camerer & Weber, 1992; Ellsberg, 1961). To illustrate the difference between risk and ambiguity, consider the
so-called Ellsberg paradox with two urns (Camerer & Weber,
1992; Ellsberg, 1961). One urn is composed of 15 red and 15
blue chips (the risky urn). Another urn also has 30 red or blue
chips, but the composition of red and blue chips is completely
unknown (the ambiguous urn). A bet on a color pays a fixed
sum (e.g., $10) if a chip with the chosen color is drawn, and
zero otherwise (Figure 5).
In a typical experiment, subjects are first informed about
the winning color (red or blue) and are then asked to choose
one of the two urns to draw from. Independent of whether the
winning color is red or blue, many people prefer to draw from
the risky urn rather than from the ambiguous urn (Camerer &
Weber, 1992; Hsu et al., 2005). If betting preferences are determined only by probabilities, this pattern is a paradox (Ellsberg,
1961; Raiffa, 1961). In theory, avoiding the ambiguous urn
when the winning color is red implies that its subjective probability is lower than that of the risky urn (pamb(red) <
prisk(red)). Analogously, avoiding the ambiguous urn when
the winning color is blue implies pamb(blue) < prisk(blue). But
these inequalities, together with the fact that the probabilities of red and
blue must add to one for each urn, imply the contradiction
1 = pamb(red) + pamb(blue) < prisk(red) + prisk(blue) = 1. The paradox can be resolved by allowing choices to
depend both on subjective probabilities of events and on the ambiguity of those events. For example, if ambiguous probabilities
are subadditive, then 1 − pamb(red) − pamb(blue) represents
reserved belief and indexes the degree of aversion to ambiguity
(Camerer & Weber, 1992).
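Numerically, the index works as follows; the subjective probabilities below are hypothetical values chosen for illustration, not estimates from the cited experiments:

```python
def ambiguity_aversion_index(p_red, p_blue):
    # Reserved belief 1 - p(red) - p(blue) under subadditive
    # subjective probabilities.
    return 1.0 - p_red - p_blue

# Risky urn: probabilities add to one, so nothing is held in reserve.
risky_index = ambiguity_aversion_index(0.5, 0.5)      # 0.0
# Ambiguous urn: an ambiguity-averse agent acts as if each color had
# probability 0.4, reserving 0.2 of belief.
ambiguous_index = ambiguity_aversion_index(0.4, 0.4)  # ~0.2
```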
There are two natural predictions that fall out of this behavioral evidence of an aversion to ambiguity. First, the subjective
reward of ambiguous choices should be lower than that of
matching risky choices, hence the phenomenon of ambiguity
aversion. Second, the brain should be sensitive to the uncertainty over the probabilities over outcomes. That is, there
should be regions where responses vary systematically depending on the presence of ambiguity.
Using fMRI with gambling tasks involving both risky and
ambiguous gambles such as those in the preceding text, it was
found that striatal activity appeared to distinguish between
risky and ambiguous gambles (Figure 6), such that activity
on risky choices was greater than on ambiguous choices
(Hsu et al., 2009; Figure 6(b)). This is consistent with the
notion that ambiguity-averse individuals would value risky
gambles higher than ambiguous ones. Second, a separate network of regions, in particular the lateral orbitofrontal cortex
(LOFC) and amygdala, appeared to be sensitive to ambiguity
of the gambles, such that their activity is higher under ambiguity than risk (Hsu et al., 2005; Figure 6(a)).
Moreover, examining the time course of the respective activations revealed that whereas striatal responses appear locked
to choices, LOFC and amygdala responses appear locked to the
presentation of the gambles, suggesting that the LOFC and
amygdala may be responsible for sensing uncertainty over
probabilities (Figure 6). This hypothesis is supported by a
follow-up lesion study of neurological subjects with focal
brain lesions to the LOFC and a matched comparison group
where damage did not include prefrontal regions (Hsu et al.,
Figure 2 Risk signals in the human brain. (a) On each trial, two cards were drawn (without replacement within each trial) from a deck of ten,
numbered 1–10. Before seeing either card, subjects first placed a $1 bet on one of two options, ‘second card higher’ or ‘second card lower’ (than the first
card shown). Subjects earned $1 if they guessed correctly and lost $1 if they were wrong. Once the bet was placed, subjects saw card 1,
followed 7 s later by card 2. (b) Expected reward increases linearly in the probability of reward p (dashed line) and is minimal at p = 0 and maximal at
p = 1. Risk, measured as reward variance, is an inverted quadratic function of probability that is minimal at p = 0 and p = 1 and maximal at p = 0.5
(solid line). (c) Sustained BOLD response during 6 s correlated with variance as an inverted U function of reward probability. Adapted from Preuschoff, K.,
Bossaerts, P., & Quartz, S. R. (2006). Neural differentiation of expected reward and risk in human subcortical structures. Neuron, 51(3), 381–390,
with permission from Elsevier.
Figure 3 Meta-analysis of uncertainty signals in the brain. Activation likelihood estimate (ALE) meta-analytic maps for high versus low mean,
variance, and skewness. ALE of mean: bilateral NAcc. ALE of variance: bilateral anterior insula. ALE of skewness: left NAcc. Adapted from Wu, C. C.,
Sacchet, M. D., & Knutson, B. (2012). Toward an affective neuroscience account of financial risk taking. Frontiers in Neuroscience, 6: 159.
Figure 4 Risk attitude signals in the lateral OFC. (a) Risk was positively correlated with activity in the lateral OFC for risk-seeking individuals and
negatively for risk-averse individuals, consistent with a ‘safety’ or ‘fear’ signal. (b) The reverse pattern is true for medial OFC, consistent with a
‘risk-seeking’ or ‘gambling’ signal. Adapted from Tobler, P. N., et al. (2007). Reward value coding distinct from risk attitude-related uncertainty coding in
human reward systems. Journal of Neurophysiology, 97(2), 1621–1632.
2005). Strikingly, a computational model that quantified the
risk and ambiguity attitudes of the participants revealed that the LOFC lesion
patients were abnormally neutral toward ambiguity, consistent with previous findings that OFC lesion patients
appeared insensitive to different degrees of uncertainty
(Bechara et al., 1999, 2000).
Furthermore, this characterization appears to extend to situations where the degree of ambiguity varies (Levy et al.,
2010). The authors cleverly manipulated the magnitude of
the uncertainty about probabilities. For example, if there are
ten chips of either red or blue in the urn, the decision-maker
may know that there are at least two red chips, but the exact
number of red chips can range from 2 to 10 (Figure 7(a)).
Thus, partial ambiguity can be defined as situations where the
decision-maker has some incomplete knowledge of the underlying probabilities, whereas complete ambiguity can be said to
be cases where the decision-maker has no knowledge. Importantly, by extending the range of ambiguity, the LOFC was
found to respond parametrically to the degree of ambiguity,
which functions perhaps as an index of the degree of missing
information in the environment (Huettel et al., 2006; Levy
et al., 2010; Figure 7(b)).
Learning about Uncertainty
Thus far, we have largely restricted ourselves to the discussion
of uncertainty in one-shot or stable encounters where the
organism must choose between prospects of different risk–
reward profiles. However, in natural environments, these
risk–reward profiles may change over time, predictably as
well as unpredictably. An organism should therefore learn
about the uncertainty in its environment and adjust its choice
behavior accordingly. The problem is that not all prediction
Figure 5 Illustration of uncertainty about probability. In the ‘risky’ urn
on the left, the probabilities for drawing a red or blue chip are known. In
the ‘ambiguous’ urn on the right, they are not known. People typically
prefer to bet on the left (risky) urn. Reproduced from Ellsberg, D. (1961).
Risk, ambiguity, and the savage axioms. The Quarterly Journal of
Economics, 75(4), 643–669.
Figure 6 Neural responses to ambiguity. (a) Amygdala and lateral OFC responses were greater under ambiguity than risk. Dashed vertical lines are
mean decision times. (b) In contrast, striatal activity was greater under risk than ambiguity and moreover was correlated with expected reward of
subjects’ choices. Adapted from Hsu, M., Bhatt, M., Adolphs, R., Tranel, D., & Camerer, C. F. (2005). Neural systems responding to degrees of
uncertainty in human decision-making. Science (New York, NY), 310(5754), 1680–1683.
Figure 7 Lateral OFC response modulated by degree of ambiguity. (a) Degree of ambiguity was manipulated by obscuring the relative probability of the
outcome probabilities (either red or blue) using a gray bar. For example, the middle urn consisted of a gamble where there were at least 25% of
red and 25% of blue, leaving 50% unknown. (b–c) Lateral OFC activity responded parametrically to the degree of ambiguity. Adapted from Levy, I., et al.
(2010). Neural representation of subjective value under risk and ambiguity. Journal of Neurophysiology, 103(2), 1036–1047.
errors are equal: a prediction error may arise simply because
the environment itself is stochastic (though stable). Errors in
such environments are uninformative and should not lead to
behavioral changes. A prediction error may also arise because
the environment has changed (is unstable). Errors in such
environments are highly informative and should lead to behavioral changes.
On the one hand, learning about outcome uncertainty, or
risk, allows the decision-maker to assess whether or not to
update its reward predictions after committing an error. Only
when prediction errors appear to deviate from the predicted
risk should organisms update their predictions (Bossaerts et al.,
2008; Yu & Dayan, 2005). Otherwise, the learning rate should
be low. It turns out that as with expected reward signals in
the dopaminergic system, activation correlating with risk in
some regions actually reflects risk prediction errors, that is,
the difference between the squared prediction error
and its expectation (the variance). Specifically, phasic activation in the anterior insula exhibits a strong correlation with risk
prediction errors (Preuschoff & Bossaerts, 2007; Preuschoff
et al., 2008).
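In this scheme, the risk prediction error is the squared reward prediction error minus the currently predicted variance. A minimal sketch with illustrative parameters (not the model specification from the cited studies):

```python
def update(v, var, reward, alpha=0.1):
    """Jointly track expected reward v and predicted risk var (the
    expected squared prediction error)."""
    delta = reward - v             # reward prediction error
    risk_delta = delta ** 2 - var  # risk prediction error
    return v + alpha * delta, var + alpha * risk_delta

# A stochastic but stable source: rewards of 0 or 1, p = 0.5.
v, var = 0.5, 0.0
for r in [1, 0, 1, 1, 0, 0, 1, 0]:
    v, var = update(v, var, r)
# var drifts toward the true variance of 0.25; once it gets there,
# typical squared errors match expectations, risk prediction errors
# shrink, and no behavioral change is warranted.
```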
On the other hand, in situations where the environment
changes rapidly, past prediction errors become obsolete fast,
and hence, prediction should rely more on recent prediction
errors. In other words, the learning rate should increase. The
intuition has rigorous theoretical underpinning (Preuschoff &
Bossaerts, 2007) as changes in the environment are marked by
large deviations from the expected risk, that is, large risk prediction errors. As such, risk prediction errors are highly useful
in guiding organisms to adapt to changes in the environment.
This has been supported by human neuroimaging work showing that the ACC appears central in the integration of risk
prediction errors to adjust behavior in response to the degree
of stability in the environment (Behrens et al., 2007; Payzan-LeNestour & Bossaerts, 2011).
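One way to operationalize this intuition, as a simplified sketch rather than the model used in the cited work, is to let the learning rate rise with the size of positive risk prediction errors:

```python
def adaptive_alpha(risk_delta, base=0.1, top=0.9):
    """Map a risk prediction error onto a learning rate: small errors
    keep learning slow; large positive errors (squared prediction
    error far above the predicted risk) push the rate toward `top`,
    signaling a possible change in the environment."""
    surprise = max(risk_delta, 0.0)
    return base + (top - base) * surprise / (1.0 + surprise)

print(adaptive_alpha(0.0))   # 0.1 -- stable environment, slow learning
print(adaptive_alpha(10.0))  # ~0.83 -- likely change point, fast learning
```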
In cases of ambiguity, learning is arguably even more crucial,
as better assessments of probabilities directly translate into more
accurate forecasts of rewards and punishments that can be
obtained (Figure 8(a)). Studies combining ambiguity with
learning have consistently implicated a wide network of lateral
prefrontal and parietal cortices in the processing of ambiguity,
where activity was greater in ambiguity as compared to risk (Bach
& Dolan, 2012; Bach et al., 2011; Hsu & Zhu, 2014; Huettel et al.,
2006; Figure 8(b)). These findings support the hypothesis that
ambiguity is more than simply a heightened form of risk (Bach,
Seymour, & Dolan, 2009; Bach, et al., 2011; Huettel, et al.,
2006).
This stands in stark contrast with the picture so far where
much of the learning appears to take place via dopaminergic
mechanisms in the striatum and (medial) PFC. However, a
closer inspection of the type of information that is learned in
the case of ambiguity provides a clue as regards to the type of
neural system engaged. Unlike in learning about outcome
uncertainty, learning about probabilities entails learning
about relatively abstract states that may or may not have
reward consequences. This is consistent with the substantial
work in cognitive neuroscience that implicates frontoparietal
regions in responding to environmental surprise and bottom-up attentional processes (Corbetta, Patel, & Shulman, 2008;
Figure 8 Neural processes in learning ambiguity. (a) Participants were
presented ambiguous and risky gambles in the form of two roulette
wheels. Following the selection of one of the gambles, the ambiguity was
resolved and outcome determined by the final position of the wheels.
(b) Ambiguity responses were found in the posterior inferior frontal
sulcus and posterior parietal cortex. Adapted from Huettel, S. A., et al.
(2006). Neural signatures of economic preferences for risk and
ambiguity. Neuron, 49(5), 765–775.
Gottlieb & Balan, 2010). Indeed, in related work in model-based reinforcement learning where learning about states is
explicitly distinguished from learning about rewards, the frontostriatal network appears to be involved in learning about
rewards, whereas responses in the lateral frontoparietal network were associated with learning signals related to states
(Beierholm et al., 2011; Gläscher et al., 2010).
Conclusion
The preceding text has reviewed current evidence about how
the brain processes different forms of uncertainty. At this point,
the picture is one where risk and reward are separately encoded
in the brain and cognitive control regions are involved in the
selection of actions that are associated with different expected
rewards. Following the resolution of the uncertainty, a different set of regions is recruited to detect the resulting outcomes
in the environment that can be used to update either reward
expectations or probability assessments.
Despite this substantial progress, there are a number of
important questions that are as yet largely unexplored. One
concerns the extent to which results from these relatively
stylized laboratory settings can be generalized to the much
more complex types of uncertainty faced in the real world.
Although these simple gambling paradigms have been
undoubtedly successful in probing the underlying computational mechanisms, they likely only capture a small slice of the
variety of uncertainty that people experience.
First, most of the studies discussed earlier have explored different
forms of uncertainty, such as risk and ambiguity, in isolation.
In natural environments, different forms are intermixed. It is
unclear how this affects their respective neural representations
and how the different representations may interact.
Second, a distinct aspect of uncertainty in human societies
is that uncertainty is often communicated verbally. How verbal
descriptions of risk map onto more objective measures of risk
has been a problem of considerable interest since the early days
of decision theory (Hadar & Fox, 2009; Symmonds, Bossaerts,
& Dolan, 2010). These questions were initially motivated by applications
in military contexts, where communication of uncertainty is of
obvious value, but are taking on new urgency in issues such as
climate change. Addressing them therefore requires
moving outside of these rigorous but limiting paradigms
and embracing the complexity associated with language processing, as well as that arising from social and cultural influences.
Another important question is the extent to which pathologies of the implicated neural systems are associated with
behavioral disorders such as pathological gambling and excessive financial, social, and sexual
risk taking. For example, it is well known that adolescents are particularly prone to risky behaviors that carry long-term deleterious
consequences (Tymula et al., 2012), and older adults are particularly vulnerable to poor financial decision-making and
fraud (Denburg et al., 2007; Hsu, Lin, & McNamara, 2008).
See also: INTRODUCTION TO ACQUISITION METHODS:
Obtaining Quantitative Information from fMRI; INTRODUCTION TO
COGNITIVE NEUROSCIENCE: Neuroimaging of Economic
Decision-Making; Neuroimaging Studies of Reinforcement-Learning;
Prediction and Expectation; Reward Processing; Value Representation;
INTRODUCTION TO SYSTEMS: Emotion; Reward.
References
Abler, B., et al. (2006). Prediction error as a linear function of reward probability is
coded in human nucleus accumbens. NeuroImage, 31(2), 790–795.
Bach, D. R., & Dolan, R. J. (2012). Knowing how much you don’t know: A neural
organization of uncertainty estimates. Nature Reviews Neuroscience, 13, 572–586.
Bach, D., Seymour, B., & Dolan, R. (2009). Neural activity associated with the passive
prediction of ambiguity and risk for aversive events. Journal of Neuroscience, 29(6),
1648–1656.
Bach, D. R., et al. (2011). The known unknowns: Neural representation of second-order
uncertainty, and ambiguity. The Journal of Neuroscience, 31(13), 4811–4820.
Bechara, A., Tranel, D., & Damasio, H. (2000). Characterization of the decision-making
deficit of patients with ventromedial prefrontal cortex lesions. Brain, 123(11),
2189–2202.
Bechara, A., et al. (1999). Different contributions of the human amygdala and
ventromedial prefrontal cortex to decision-making. The Journal of Neuroscience,
19(13), 5473–5481.
Behrens, T. E. J., et al. (2007). Learning the value of information in an uncertain world.
Nature Neuroscience, 10(9), 1214–1221.
Beierholm, U. R., et al. (2011). Separate encoding of model-based and model-free
valuations in the human brain. NeuroImage, 58, 955–962.
Bernoulli, D. (1954). Exposition of a new theory on the measurement of risk.
Econometrica, 22(1), 23–36.
Bossaerts, P., Preuschoff, K., & Hsu, M. (2008). The neurobiological foundations of
valuation in human decision-making under uncertainty. In P. W. Glimcher,
E. Fehr, C. Camerer & R. A. Poldrack (Eds.), Neuroeconomics: Decision making
and the brain (p. 14). USA: Academic Press.
Camerer, C. F. (1989). An experimental test of several generalized utility theories.
Journal of Risk and Uncertainty, 2, 61–104.
Camerer, C. F., & Weber, M. (1992). Recent developments in modeling preferences –
Uncertainty and ambiguity. Journal of Risk and Uncertainty, 5(4), 325–370.
Camille, N., et al. (2004). The involvement of the orbitofrontal cortex in the experience of
regret. Science (New York, NY), 304(5674), 1167–1170.
Corbetta, M., Patel, G., & Shulman, G. L. (2008). The reorienting system of the human
brain: From environment to theory of mind. Neuron, 58(3), 306–324.
Cox, J. C., & Sadiraj, V. (2008). Risky decisions in the large and in the small: Theory
and experiment. In J. C. Cox & G. W. Harrison (Eds.), Research in experimental
economics: Vol. 12. Risk aversion in experiments (pp. 9–40). Emerald Group
Publishing Limited.
Critchley, H., Mathias, C., & Dolan, R. J. (2001). Neural activity in the human brain
relating to uncertainty and arousal during anticipation. NeuroImage, 13(6),
S392.
Daw, N. D. (2007). Dopamine: At the intersection of reward and action. Nature
Neuroscience, 10(12), 1505.
Denburg, N., et al. (2007). The orbitofrontal cortex, real-world decision making, and
normal aging. Annals of the New York Academy of Sciences, 1121(1), 480–498.
Ellsberg, D. (1961). Risk, ambiguity, and the savage axioms. The Quarterly Journal of
Economics, 75(4), 643–669.
Fiorillo, C. D., Tobler, P. N., & Schultz, W. (2003). Discrete coding of reward probability
and uncertainty by dopamine neurons. Science (New York, NY), 299(5614),
1898–1902.
Gläscher, J., et al. (2010). States versus rewards: Dissociable neural prediction error
signals underlying model-based and model-free reinforcement learning. Neuron,
66(4), 585–595.
Glimcher, P. W., & Fehr, E. (2013). Neuroeconomics: Decision making and the brain.
Academic Press.
Gottlieb, J., & Balan, P. (2010). Attention as a decision in information space. Trends in
Cognitive Sciences, 14(6), 240–248.
Hadar, L., & Fox, C. (2009). Information asymmetry in decision from description versus
decision from experience. Judgment and Decision Making, 4(4), 317–325.
Hsu, M., Lin, H.-T., & McNamara, P. E. (2008). Neuroeconomics of decision-making in
the aging brain: The example of long-term care. Advances in Health Economics and
Health Services Research, 20, 203–225.
Hsu, M., & Zhu, L. (2014). Commentary: Ambiguous decisions in the human brain. In
P. Crowley & R. Zentall (Eds.), Comparative decision making (pp. 1–3). USA:
Oxford University Press.
Hsu, M., et al. (2005). Neural systems responding to degrees of uncertainty in human
decision-making. Science (New York, NY), 310(5754), 1680–1683.
Hsu, M., et al. (2009). Neural response to reward anticipation under risk is nonlinear in
probabilities. The Journal of Neuroscience, 29(7), 2231–2237.
Huettel, S. A., et al. (2006). Neural signatures of economic preferences for risk and
ambiguity. Neuron, 49(5), 765–775.
Jones, J. L., et al. (2012). Orbitofrontal cortex supports behavior and learning using
inferred but not cached values. Science (New York, NY), 338(6109), 953–956.
Kahneman, D., & Tversky, A. (1979). Prospect theory – Analysis of decision under risk.
Econometrica, 47(2), 263–291.
Kiani, R., & Shadlen, M. (2009). Representation of confidence associated with a
decision by neurons in the parietal cortex. Science (New York, NY), 324(5928),
759–764.
Knutson, B., et al. (2001). Anticipation of increasing monetary reward selectively
recruits nucleus accumbens. The Journal of Neuroscience, 21(16), RC159.
Knutson, B., et al. (2008). Nucleus accumbens activation mediates the influence of
reward cues on financial risk taking. Neuroreport, 19(5), 509–514.
Krebs, J. R., & Davies, N. B. (2009). Behavioural ecology: An evolutionary approach.
Wiley.
Kuhnen, C., & Knutson, B. (2005). The neural basis of financial risk taking. Neuron,
47(5), 763–770.
Levy, I., et al. (2010). Neural representation of subjective value under risk and
ambiguity. Journal of Neurophysiology, 103(2), 1036–1047.
Maia, T. V., & Frank, M. J. (2011). From reinforcement learning models to psychiatric
and neurological disorders. Nature Neuroscience, 14(2), 154.
Markowitz, H. (1952). Portfolio selection. The Journal of Finance, 7(1), 77–91.
McClure, S. M., Berns, G. S., & Montague, P. R. (2003). Temporal prediction errors in a
passive learning task activate human striatum. Neuron, 38(2), 339–346.
Montague, P. R., Hyman, S. E., & Cohen, J. D. (2004). Computational roles for
dopamine in behavioural control. Nature, 431(7010), 760–767.
Montague, P. R., King-Casas, B., & Cohen, J. D. (2006). Imaging valuation models in
human choice. Annual Review of Neuroscience, 29, 417–448.
O’Connell, L. A., & Hofmann, H. A. (2011). Genes, hormones, and circuits: An
integrative approach to study the evolution of social behavior. Frontiers in
Neuroendocrinology, 32(3), 320–335.
O’Doherty, J. P. (2004). Reward representations and reward-related learning in the
human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14(6),
769–776.
O’Doherty, J. P., et al. (2003). Dissociating valence of outcome from behavioral control
in human orbital and ventral prefrontal cortices. The Journal of Neuroscience,
23(21), 7931–7939.
Ogawa, M., et al. (2013). Risk-responsive orbitofrontal neurons track acquired salience.
Neuron, 77(2), 251–258.
Paulus, M., et al. (2003). Increased activation in the right insula during risk-taking
decision making is related to harm avoidance and neuroticism. NeuroImage, 19(4),
1439–1448.
Payzan-LeNestour, E., & Bossaerts, P. (2011). Risk, unexpected uncertainty, and
estimation uncertainty: Bayesian learning in unstable settings. PLoS Computational
Biology, 7(1), e1001048.
Preuschoff, K., & Bossaerts, P. (2007). Adding prediction risk to the theory of reward
learning. Annals of the New York Academy of Sciences, 1104(1), 135–146.
Preuschoff, K., Bossaerts, P., & Quartz, S. R. (2006). Neural differentiation
of expected reward and risk in human subcortical structures. Neuron, 51(3),
381–390.
Preuschoff, K., Quartz, S. R., & Bossaerts, P. (2008). Human insula activation reflects
risk prediction errors as well as risk. The Journal of Neuroscience, 28(11),
2745–2752.
Raiffa, H. (1961). Risk, ambiguity, and the savage axioms – Comment. The Quarterly
Journal of Economics, 75(4), 690–699.
Rangel, A., Camerer, C., & Montague, P. R. (2008). A framework for studying the
neurobiology of value-based decision making. Nature Reviews. Neuroscience, 9(7),
545–556.
Martin, R. (2004). The St. Petersburg paradox. In E. N. Zalta (Ed.), The Stanford
encyclopedia of philosophy (Fall 2004 ed.), http://plato.stanford.edu/archives/
fall2004/entries/paradox-stpetersburg.
Savage, L. J. (1972). The foundations of statistics. New York: Courier Dover
Publications.
Schultz, W. (2006). Behavioral theories and the neurophysiology of reward. Annual
Review of Psychology, 57, 87–115.
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and
reward. Science (New York, NY), 275(5306), 1593–1599.
Schultz, W., et al. (2008). Explicit neural signals reflecting reward uncertainty.
Philosophical Transactions of the Royal Society of London. Series B, Biological
Sciences, 363(1511), 3801–3811.
Seymour, B., et al. (2004). Temporal difference models describe higher-order learning
in humans. Nature, 429(6992), 664–667.
Sugrue, L., Corrado, G., & Newsome, W. (2004). Matching behavior and the
representation of value in the parietal cortex. Science (New York, NY), 304(5678),
1782–1787.
Sugrue, L., Corrado, G., & Newsome, W. (2005). Choosing the greater of two goods:
Neural currencies for valuation and decision making. Nature Reviews. Neuroscience,
6(5), 363–375.
Symmonds, M., Bossaerts, P., & Dolan, R. J. (2010). A behavioral and neural evaluation
of prospective decision-making under risk. The Journal of Neuroscience, 30(43),
14380–14389.
Taylor-Gooby, P., & Zinn, J. O. (2006). Risk in social science. Oxford University Press.
Tobler, P. N., et al. (2007). Reward value coding distinct from risk attitude-related
uncertainty coding in human reward systems. Journal of Neurophysiology, 97(2),
1621–1632.
Tversky, A., & Fox, C. (1995). Weighing risk and uncertainty. Psychological Review,
102(2), 269–283.
Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative
representation of uncertainty. Journal of Risk and Uncertainty, 5, 297–323.
Tymula, A., et al. (2012). Adolescents’ risk-taking behavior is driven by tolerance to
ambiguity. Proceedings of the National Academy of Sciences of the United States of
America, 109(42), 17135–17140.
Weber, B., & Huettel, S. (2008). The neural substrates of probabilistic and intertemporal
decision making. Brain Research, 1234, 104–115.
Wu, C. C., Sacchet, M. D., & Knutson, B. (2012). Toward an affective neuroscience
account of financial risk taking. Frontiers in Neuroscience, 6, 159.
Yu, A. J., & Dayan, P. (2005). Uncertainty, neuromodulation, and attention. Neuron,
46(4), 681–692.